• Internal fragmentation
Occurs when rows are deleted and entries are removed from index pages. The pages are left with unused free space, so the index occupies more pages than it needs to store its entries, which increases the time taken to scan the index.
• External fragmentation
Occurs when the logical order of the index pages does not match the physical order in which the pages are stored in the database. This only affects performance when an ordered scan is performed on all or part of an index.
• Extent fragmentation
Occurs when gaps appear in the physical locations of the extents that hold the index pages. Index pages are stored in groups of eight, called extents. As index entries are deleted, some extents can become empty and contain no index pages, leaving gaps between the extents that the index still uses. Reading an index whose extents are not contiguous is inefficient.
To correct internal or external fragmentation, the indexes on the table can either be rebuilt or defragmented. To
correct extent fragmentation, the indexes must be rebuilt. For more information on this, consult Microsoft SQL Server
documentation.
Defragmenting indexes
The DBCC INDEXDEFRAG command can be used to defragment the indexes on a table. This command rearranges the leaf-level index pages so that their physical order matches their logical order, thus improving index-scanning performance.
DBCC INDEXDEFRAG also compacts the index pages to reduce the number of pages to scan, using the fill factor specified when the index was created to leave space for new entries. Furthermore, DBCC INDEXDEFRAG does not hold any long-term locks on the table and will not block long-running queries or updates.
The DBCC SHOWCONTIG command can be used to analyze whether an index needs defragmenting. This operation is
very resource intensive and should never be performed on a system running at high transaction volumes.
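For example, the following commands report the fragmentation of the indexes on tm_trans and then defragment one of them. This is only a sketch: 0 refers to the current database, and ix_tm_trans_example is a placeholder for the name of an actual index on tm_trans.

    -- Report fragmentation statistics for all indexes on tm_trans
    DBCC SHOWCONTIG ('tm_trans') WITH ALL_INDEXES

    -- Defragment a single index (0 = current database);
    -- ix_tm_trans_example is a placeholder index name
    DBCC INDEXDEFRAG (0, 'tm_trans', 'ix_tm_trans_example')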
Applicability
During online transaction processing, the Realtime server writes every transaction to the database. The tm_trans table stores the transaction entries, and the Transaction Cleaner is responsible for deleting entries that are older than a configured retention period. To do this, the cleaner scans the entire table, so the speed at which entries can be retrieved is affected by the amount of index page fragmentation.
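As an illustration only, the cleaner's work is comparable to a retention delete of the following form. The column name tran_datetime and the 30-day retention period are assumptions, and the actual Transaction Cleaner removes old entries in throttled batches rather than in a single statement.

    -- Illustrative sketch only: tran_datetime and the 30-day retention are assumed,
    -- and the real Transaction Cleaner deletes in throttled batches
    DELETE FROM tm_trans
    WHERE tran_datetime < DATEADD(day, -30, GETDATE())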
The speed of the Transaction Cleaner therefore provides a good metric for measuring whether fragmentation of these indexes has any significant effect on a Realtime server. Note that the Transaction Cleaner process is throttled to minimize its impact on transaction processing; even so, the cleaner can be expected to clean more transactions in a given time when the indexes are not fragmented.
On the test system it has been shown that the logical and extent fragmentation of the indexes on tm_trans increases
daily when the indexes are not defragmented. However, this fragmentation does not appear to have a detrimental
effect on transaction processing or Transaction Cleaner processing.
The statistics in Table 1 show the daily performance of the system between 05:00 and 10:00, when no other database maintenance tasks affect the system. The indexes were defragmented on April 17, before the statistics were gathered. There was no significant improvement in performance after the indexes were defragmented.
Defragmenting all indexes on tm_trans took about 1 hour 10 min whether performed on a daily or weekly basis. While
the indexes were being defragmented, the transaction log grew to 2.3 GB. The PPU increased slightly to 35%, while
the DTPS increased to about 600. While the AEPT increased to 62, the average TPS was not significantly affected.
From Figure 8 it can be seen that, although the average TPS stayed constant, the TPS became less stable.
At 50 TPS, defragmenting the indexes had a much greater effect on the system. The PPU increased slightly from 75% to 78% and the DTPS increased from 545 to 660. The AEPT increased from 115 to 170. With the additional load on the database, the system was not able to cope while transactions were being injected at 50 TPS: it could only process 43 TPS during the 2 hours 30 min it took to defragment the indexes.
Recommended practice
Defragmenting indexes is a fairly intensive database operation that should be scheduled for times when the system is less busy. Running this task weekly has not been shown to have noticeable performance benefits, but over time indexes could become fragmented enough to have a negative effect on transaction processing. Based on this information, it is recommended that indexes be defragmented once a month. Note that this should only be done during a time of relatively low transaction volumes, when the system has sufficient resources to handle the operation.
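One way to schedule this is a monthly SQL Server Agent job, sketched below under stated assumptions. The job name, database name (PostRealtime) and index name (ix_tm_trans_example) are placeholders; the job runs DBCC INDEXDEFRAG against tm_trans at 03:00 on the first day of each month.

    -- Sketch of a monthly SQL Server Agent job; the job, database and index
    -- names are placeholders and must be replaced with actual values
    EXEC msdb.dbo.sp_add_job
        @job_name = N'Monthly tm_trans index defragmentation'

    EXEC msdb.dbo.sp_add_jobstep
        @job_name      = N'Monthly tm_trans index defragmentation',
        @step_name     = N'Defragment tm_trans indexes',
        @subsystem     = N'TSQL',
        @database_name = N'PostRealtime',
        @command       = N'DBCC INDEXDEFRAG (0, ''tm_trans'', ''ix_tm_trans_example'')'

    EXEC msdb.dbo.sp_add_jobschedule
        @job_name          = N'Monthly tm_trans index defragmentation',
        @name              = N'First day of month, 03:00',
        @freq_type         = 16,      -- monthly
        @freq_interval     = 1,       -- day 1 of the month
        @active_start_time = 030000   -- 03:00

    EXEC msdb.dbo.sp_add_jobserver
        @job_name    = N'Monthly tm_trans index defragmentation',
        @server_name = N'(local)'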
Rebuilding indexes
An index on a table can be rebuilt by using the DROP INDEX and CREATE INDEX commands separately, or by using the
DBCC DBREINDEX command. In both cases, there is a shared lock on the table while the index is being created or
rebuilt.
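For example, all indexes on tm_trans can be rebuilt in one operation, or a single index can be dropped and recreated. The index name and column in the DROP INDEX/CREATE INDEX form below are placeholders for an actual index definition on tm_trans.

    -- Rebuild every index on tm_trans in one operation
    DBCC DBREINDEX ('tm_trans')

    -- Or drop and recreate a single index; the index name and column
    -- are placeholders for an actual index definition
    DROP INDEX tm_trans.ix_tm_trans_example
    CREATE INDEX ix_tm_trans_example ON tm_trans (tran_datetime)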
Applicability
As in the case of defragmenting indexes, rebuilding indexes could potentially improve aspects of system performance.
Since the tm_trans table will have a shared lock during the rebuild process, no transactions can be written to the table
in that time. Therefore, if this process takes more than a few seconds, indexes cannot realistically be rebuilt while a
system is processing transactions online.
This is a major disadvantage of rebuilding indexes: the system must effectively be taken offline for the time it takes to rebuild the indexes, which, as shown below with the test system, can take more than an hour.
Rebuilding all indexes on tm_trans took 1 hour 30 min. During this time, no transactions could be processed.
The Transaction Cleaner's speed for the week prior to the rebuild was, on average, 37 TPS. After the rebuild, the cleaner's speed immediately increased to 51 TPS for the next hour, after which it started to degrade again. The average speed for the following week was 47 TPS, and the week after that it dropped to 41 TPS. When the indexes were not rebuilt on a regular basis, the Transaction Cleaner speed stabilized at 37 TPS.
Recommended practice
It is not possible to rebuild an index while the system is processing transactions online. Rebuilding an index during
scheduled downtime can result in performance improvements relative to defragmenting the index, but because this
task cannot be performed online (since it will lock Transaction Manager out of the database and cause additional
problems), rebuilding indexes is not recommended.