Recently I ran the example provided by Andrew Hunter in his blog post "The Dangers of the Large Object Heap", compiled against .NET 4, and got the following numbers:

With large blocks: 622Mb allocated
With large blocks, frequent garbage collections: 582Mb allocated
Only small blocks: 1803Mb allocated
With large blocks, large blocks not growing: 630Mb allocated

If the same code is compiled against .NET 2.0, I get almost the same numbers as those mentioned in the article:

With large blocks: 21Mb allocated
With large blocks, frequent garbage collections: 26Mb allocated
Only small blocks: 1811Mb allocated
With large blocks, large blocks not growing: 707Mb allocated

What is the cause of such a dramatic improvement?

The code is compiled for the x86 platform and run on Windows 7.
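
Hunter's actual benchmark is more elaborate; a minimal sketch of the same kind of allocation pattern (my own illustration, not his code) is: long-lived small allocations interleaved with a steadily growing large array, whose abandoned copies leave gaps in the LOH.

```csharp
using System;
using System.Collections.Generic;

class LohFragmentationSketch
{
    static void Main()
    {
        List<byte[]> smallBlocks = new List<byte[]>();
        byte[] large = null;
        try
        {
            for (int i = 0; ; i++)
            {
                // Long-lived small allocations fill the small object heap.
                smallBlocks.Add(new byte[4096]);

                // Periodically replace a steadily growing large array.
                // Anything over ~85,000 bytes lands on the LOH; the abandoned
                // copies leave gaps a fragmented LOH may be unable to reuse.
                if (i % 100 == 0)
                    large = new byte[90000 + 8 * i];
            }
        }
        catch (OutOfMemoryException)
        {
            long mb = smallBlocks.Count * 4096L / (1024 * 1024);
            smallBlocks.Clear(); // release memory so the report can allocate
            GC.KeepAlive(large);
            Console.WriteLine("Out of memory after " + mb + " MB of small blocks");
        }
    }
}
```

If CLR 4 changed its LOH allocation strategy, this is exactly the kind of workload that would show it.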

Eugeniu Torica

3 Answers

Some much-needed work from the CLR team is the reason for the improvements, but apparently there is still room for improvement:

http://mitch-wheat.blogspot.com/2010/11/net-clr-large-object-heap.html

Adam Houldsworth

Something changed, but it is a well-kept secret; I can find nothing about it. I wouldn't put too much stock in it. The code sample was hand-tuned to make the CLR 2 large object heap look as bad as possible. Even a small change in the algorithm, perhaps inspired by the blog post, will have very large effects.

Hans Passant
  • Agree. The question is an attempt to find out what the differences are, because I thought that none of the presented changes could affect the numbers in this way. – Eugeniu Torica Mar 24 '11 at 11:58

I can think of some easy things Microsoft could have done to the memory allocator that would have greatly reduced LOH fragmentation without a major overhaul, such as rounding allocation sizes up to some multiple like 4K. Given that the smallest non-static LOH objects are 85K, that would represent at most a 5% loss of useful space, but would reduce the number of different-sized objects and gaps.

BTW, I'm really unconvinced of the value of forcing all big objects onto the LOH (as opposed to, perhaps, having a means of designating when an object is created whether it should go to the LOH or not). I can understand some value in separating small objects from big ones once they reach generation 2, but there are enough cases where big objects get created and abandoned that forcing them to generation 2 seems counterproductive.
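
To make the rounding arithmetic concrete (my own illustration, not the allocator's actual code): rounding every LOH request up to the next 4K multiple wastes at most 4,095 bytes per object, which at the 85K minimum is under 5%.

```csharp
using System;

class RoundingSketch
{
    // Round a requested allocation size up to the next 4 KB multiple.
    static long RoundUp(long size)
    {
        const long quantum = 4096;
        return ((size + quantum - 1) / quantum) * quantum;
    }

    static void Main()
    {
        long[] requests = { 85000, 100000, 1000000 };
        foreach (long request in requests)
        {
            long rounded = RoundUp(request);
            Console.WriteLine("{0,9} -> {1,9}  ({2:P2} wasted)",
                request, rounded, (double)(rounded - request) / request);
        }
        // Worst case per object is quantum - 1 = 4095 bytes;
        // at the 85K LOH minimum that is 4095/85000, i.e. under 5%.
    }
}
```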

supercat
  • 69,493
  • 7
  • 143
  • 184
  • Arrays of doubles are put in the LOH at a much lower size than 85K; however, rounding up is still a good idea (see the sketch after these comments). – Ian Ringrose Jul 05 '11 at 20:47
  • I'm really puzzled by some of Microsoft's decisions. Apparently the reason arrays of doubles get pushed to the LOH is that LOH objects are aligned on 8-byte boundaries and ordinary heap objects aren't. I would think it would make sense to special-case objects that are no bigger than a pointer so that they get stored directly in the heap descriptor table (in place of the pointer), and then round all heap objects up to the next cache-line size, regardless of whether they contain any doubles. – supercat Jul 06 '11 at 04:07
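
Ian's point about double arrays is easy to verify. A small check (my own sketch): on the 32-bit CLR the question targets, arrays of 1000 or more doubles go to the LOH even though 1000 * 8 = 8,000 bytes is far below 85K, and LOH objects report as generation 2 immediately after allocation via GC.GetGeneration.

```csharp
using System;

class DoubleLohCheck
{
    static void Main()
    {
        double[] small = new double[999];   // 7,992 bytes: small object heap
        double[] large = new double[1000];  // 8,000 bytes: LOH on the 32-bit CLR
        byte[] bytes = new byte[8000];      // same size, but stays in gen 0

        // Freshly allocated LOH objects are reported as generation 2.
        Console.WriteLine("double[999]:  gen {0}", GC.GetGeneration(small));
        Console.WriteLine("double[1000]: gen {0}", GC.GetGeneration(large));
        Console.WriteLine("byte[8000]:   gen {0}", GC.GetGeneration(bytes));
    }
}
```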