
How much fragmentation is bad? How low a scan density is too low?

i have a table with the following index density and fragmentation levels:

Name                           Scan Density  Logical Fragmentation
=============================  ============  =====================
IX_TransactionEntries_Tran...  12.834        48.392
IX_TransactionEntries_Curr...  15.419        41.239
IX_TransactionEntries_Tran...  12.875        48.372
TransactionEntries17           98.081         0.0049325
TransactionEntries5            12.960        48.180
PK_TransactionEntries          12.869        48.376
TransactionEntries18           12.886        48.480
IX_TranasctionEntries_CDR...   12.799        49.157
IX_TransactionEntries_CDR...   12.969        48.103
IX_TransactionEntries_Tra...   13.181        47.127

You can see that i just defragmented TransactionEntries17, which is why its Scan Density is so high and its fragmentation is so low.

But is 12% scan density horribly low? Is 48% fragmentation horribly high?

i'm getting performance issues when deleting rows (which requires index scanning). Is the index fragmentation a GIANT FLASHING RED ALARM on a 70000 page index, or is it a possible but unlikely cause?


From SQL Server BOL:

Scan Density [Best Count: Actual Count]

Is a percentage. It is the ratio Best Count to Actual Count. This value is 100 if everything is contiguous; if this value is less than 100, some fragmentation exists.

Best Count is the ideal number of extent changes if everything is contiguously linked. Actual Count is the actual number of extent changes.

Logical Fragmentation

Percentage of out-of-order pages returned from scanning the leaf pages of an index. This number is not relevant to heaps. An out-of-order page is a page for which the next physical page allocated to the index is not the page pointed to by the next-page pointer in the current leaf page.

But there's no guidance about what level of fragmentation is too high, and should be decreased. Nor is there any guidance about what scan density is too low, and should be increased.
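
For reference, these figures come from DBCC SHOWCONTIG. Here is a minimal sketch of pulling them, plus the SQL Server 2005+ DMV that reports the same logical fragmentation figure (the table name dbo.TransactionEntries is an assumption based on the index names above):

    -- Older approach: prints Scan Density and Logical Fragmentation for every index.
    DBCC SHOWCONTIG ('dbo.TransactionEntries') WITH ALL_INDEXES, TABLERESULTS;

    -- SQL Server 2005+ approach: avg_fragmentation_in_percent corresponds to
    -- the Logical Fragmentation column above.
    SELECT  i.name,
            ps.avg_fragmentation_in_percent,
            ps.page_count
    FROM    sys.dm_db_index_physical_stats(
                DB_ID(), OBJECT_ID('dbo.TransactionEntries'), NULL, NULL, 'LIMITED') AS ps
    JOIN    sys.indexes AS i
            ON  i.object_id = ps.object_id
            AND i.index_id  = ps.index_id;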

Ian Boyd

1 Answer


Like anything else in SQL, it depends.

It will depend on things like contention (which increases wait times, since there are more data pages to "contend" for), how wide your indexes are, and so on.

From my personal experience, I did some testing of the impact of fragmentation on some builds and loads that I run.

For an aggregate-intensive procedure, I did one run against a table that was 40% fragmented, and another against the "same" table with 0% fragmentation.

The 40% fragmented table took 1,200% longer to run the same basic operations.

Your mileage may vary, but it makes a big difference.

Paul Randal, arguably responsible for most of DBCC in SQL Server 2005+, thinks it's a pretty huge deal too.

In short, the only way to know for sure is to do testing. BOL doesn't give guidelines on ranges for density and fragmentation because there are too many variables to make a "general" assessment.

It's like asking "Is my car too damaged?" The answer depends on "too damaged for what?", how far you are driving, how fast, on what kind of roads, what weather, what time of year, and a million other things.
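
If you want a concrete way to run that kind of test here, a rough before-and-after sketch (the table name dbo.TransactionEntries is assumed from the index names in the question):

    -- Measure the cost of the problem DELETE before and after defragmenting.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- ...run the slow DELETE and note logical reads / elapsed time...

    -- Defragment: REBUILD recreates the indexes from scratch,
    -- REORGANIZE is lighter and always online.
    ALTER INDEX ALL ON dbo.TransactionEntries REBUILD;
    -- or, per index:
    -- ALTER INDEX PK_TransactionEntries ON dbo.TransactionEntries REORGANIZE;

    -- ...run the same DELETE again and compare the numbers...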

JNK
  • What if it was "my tires are only 50% inflated". For me that would be an *omg you have to fix that* moment. – Ian Boyd Oct 05 '11 at 19:43
  • @IanBoyd - unless you are driving at 5 mph over sand dunes, in which case deflated tires may be appropriate... – JNK Oct 05 '11 at 19:43
  • i'm moving at 382 mph (382 rows per hour = 7 rows per second). – Ian Boyd Oct 05 '11 at 20:32
  • @IanBoyd - you still need to test :) I'm willing to bet you will see significant improvement by rebuilding the indexes. – JNK Oct 05 '11 at 20:33
  • Which i'd like to do. But after we schedule down-time, and have government auditors come in to oversee the downtime, and it doesn't fix it... – Ian Boyd Oct 06 '11 at 02:05