
As I understand it there's a 2 GB limit on single instances in .NET. I haven't paid a lot of attention to that since I have mainly worked on 32 bit OSes so far, and on 32 bit it is more or less an artificial limitation anyway. However, I was quite surprised to learn that this limitation also applies to 64 bit .NET.

Since collections such as List<T> use an array to store items, that means a .NET application running on 32 bit can hold twice as many reference type items in a list as the same application running on 64 bit, since references are 4 bytes instead of 8. That is quite surprising, in my opinion.

Does anyone know if this limitation is addressed in CLR 4.0? (I don't have a 4.0 installation at hand at the moment.)
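
To make the surprise concrete, this is roughly the kind of allocation I have in mind (the numbers are purely illustrative):

```csharp
using System;

class LargeArrayDemo
{
    static void Main()
    {
        // 300 million references: roughly 1.2 GB of references on 32 bit
        // (4 bytes each), but roughly 2.4 GB on 64 bit (8 bytes each),
        // which exceeds the 2 GB single-object limit and throws
        // OutOfMemoryException on .NET 2.0-4.0.
        var items = new object[300 * 1000 * 1000];
        Console.WriteLine(items.Length);
    }
}
```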

Brian Rasmussen
  • Update 2012: As of .NET 4.5, on x64 systems, developers can now allocate objects larger (much larger) than 2GB. The 2GB limit is dead. http://www.centerspace.net/blog/nmath/large-matrices-and-vectors/?preview=true&preview_id=221&preview_nonce=7d1678c20c – Paul Jul 27 '12 at 21:52
  • Correct link appears to be http://www.centerspace.net/blog/nmath/large-matrices-and-vectors/ – Brian Rasmussen Jul 27 '12 at 22:13
  • Even the corrected link is dead :( – RBT Aug 16 '16 at 12:13
  • Try this link for info on bypassing the 2GB limit (MSDN) https://msdn.microsoft.com/en-us/library/hh285054(v=vs.110).aspx – John Thoits May 09 '17 at 22:08
  • Corrected link: https://www.centerspace.net/large-matrices-and-vectors – Trevor Misfeldt Nov 26 '19 at 01:12

3 Answers


It's worse than that - your process space, when you're working in .NET in 32 bit, is much smaller than the theoretical limit. In 32 bit .NET apps, my experience is that you'll always tend to start getting out of memory errors somewhere around 1.2-1.4 GB of memory usage (some people say they can get to 1.6... but I've never seen that). Of course, this isn't a problem on 64 bit systems.

That being said, a single 2 GB array of reference types, even on 64 bit systems, is a huge number of objects. Even with 8 byte references, you have the ability to allocate an array of 268,435,456 object references - each of which can refer to a very large object (up to 2 GB, more if they're using nested objects). That's more memory than most applications would ever really require.

One of the members of the CLR team blogged about this, with some options for ways to work around these limitations. On a 64 bit system, doing something like his BigArray<T> would be a viable solution to allocate any number of objects into an array - much more than the 2 GB single object limit. P/Invoke can allow you to allocate larger arrays as well.
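
The general idea behind that approach, roughly sketched (this is my own simplified version, not the blog post's actual code), is to split storage across chunks so that no single CLR object crosses the limit:

```csharp
using System;

// Stores elements in fixed-size chunks so no single CLR object exceeds 2 GB,
// while the logical array is indexed by a long and can hold far more elements
// than any single array could.
public class BigArray<T>
{
    private const int ChunkSize = 1 << 20;   // 1M elements per chunk (arbitrary choice)
    private readonly T[][] chunks;

    public long Length { get; private set; }

    public BigArray(long length)
    {
        Length = length;
        long chunkCount = (length + ChunkSize - 1) / ChunkSize;
        chunks = new T[chunkCount][];
        for (long i = 0; i < chunkCount; i++)
        {
            long remaining = length - i * ChunkSize;
            chunks[i] = new T[(int)Math.Min(ChunkSize, remaining)];
        }
    }

    public T this[long index]
    {
        get { return chunks[index / ChunkSize][(int)(index % ChunkSize)]; }
        set { chunks[index / ChunkSize][(int)(index % ChunkSize)] = value; }
    }
}
```

Each chunk is an ordinary array that stays well under 2 GB, so the logical array can grow as far as available memory allows.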


Edit: I should have mentioned this, as well - I do not believe this behavior has changed at all for .NET 4. The behavior has been unchanged since the beginning of .NET.


Edit: .NET 4.5 now has the option on x64 to explicitly allow objects to be larger than 2 GB by setting gcAllowVeryLargeObjects in the app.config.

Reed Copsey
  • I am aware of all the limitations on 32 bit, but that is not really the point of my question. The surprise was that things are actually worse on 64 bit. I'll take a look at the blog post. Thanks for the link. – Brian Rasmussen Jul 06 '09 at 16:56
  • No problem - I just added that because it's really not worse in 64bit than 32bit. The theoretical limit is 1/2 of the objects, but since you're really limited to a total of 1.4gb of process space, there's no way to make an array of object references even close to half the allowable size. (Each reference requires it to point to something, as well as the size of the reference.... so you really cap out most of the time around 1/4gb worth of references in .NET 32bit). – Reed Copsey Jul 06 '09 at 17:02
  • In reply to Edit: I suspect you're right. As far as I can tell the limitation has always been there and I haven't been able to find anything indicating that the limit has been changed. I can understand why MS may not want to address it, but it is really weird that moving to 64 bit will actually yield less "space" for single collections anyway. – Brian Rasmussen Jul 06 '09 at 17:03
  • Brian: This is just one more (albeit minor, IMO) disadvantage of 64bit vs. 32bit. There are many reasons NOT to move to 64bit - but people seem to forget that, thinking that 64bit is automatically better. – Reed Copsey Jul 06 '09 at 17:07
  • I've read that Mono supports 64 bit arrays in C# (e.g. arrays with more than 2^32 entries) – Alex Black Jul 06 '09 at 18:41
  • Yes, Mono does that. Note that the theoretical capability is there for all CLR implementations (all arrays have `long LongLength` property), but so far only Mono actually used it. – Pavel Minaev Sep 24 '09 at 16:39
  • Even on 64-bit with gcAllowVeryLargeObjects we are limited: "The maximum number of elements in an array is UInt32.MaxValue." – Rick Minerich Jul 10 '12 at 19:12
  • @RickMinerich Yes, though at least custom value types can allow much larger than 2gb arrays now – Reed Copsey Jul 10 '12 at 19:14
  • @RickMinerich Though, I thought you could do it with multiple dimensions - see http://msdn.microsoft.com/en-us/library/hh285054%28v=vs.110%29.aspx - "The maximum index in any single dimension is"... – Reed Copsey Jul 10 '12 at 19:15
  • I believe your every mention of `gb` in regards to size stands for GB i.e. Giga Bytes. Memory addressing scheme is always in terms of bytes but I got confused with your post with regards to size. Can you please clarify or make it consistent in case there is a miss? – RBT Aug 18 '16 at 06:02

.NET Framework 4.5 allows creating arrays larger than 2 GB on 64 bit platforms. This feature is not on by default; it has to be enabled via the config file using the gcAllowVeryLargeObjects element.

http://msdn.microsoft.com/en-us/library/hh285054(v=vs.110).aspx

Alina Popa

This is a big deal in the numerical field. Anyone using numerical class libraries in .NET has their matrices stored as arrays underneath, so that native libraries can be called to do the number-crunching. The 2 GB limit seriously hampers the size of matrices possible in 64-bit .NET. More here.
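
As a back-of-the-envelope illustration (assuming 8-byte doubles and ignoring object header overhead), the limit caps a dense square matrix stored in a single array at roughly 16,000 x 16,000:

```csharp
using System;

class MatrixSizeDemo
{
    static void Main()
    {
        // 16,000 x 16,000 doubles = 256,000,000 elements * 8 bytes ≈ 1.9 GiB:
        // just fits under the 2 GB single-object limit.
        var fits = new double[16000 * 16000];
        Console.WriteLine(fits.Length);

        try
        {
            // 17,000 x 17,000 doubles ≈ 2.15 GiB: over the limit, so this throws
            // OutOfMemoryException unless gcAllowVeryLargeObjects is enabled.
            var tooBig = new double[17000 * 17000];
            Console.WriteLine(tooBig.Length);
        }
        catch (OutOfMemoryException e)
        {
            Console.WriteLine(e.Message);
        }
    }
}
```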

Trevor Misfeldt
  • We've talked with Microsoft about this issue. It's unlikely to be fixed in .NET 4.0 but they seemed very receptive to finding a solution. I think we'll end up with long-indexed arrays but more likely some sort of giant blob object. – Trevor Misfeldt Oct 08 '09 at 02:51
  • How does the performance of a 65536*65536 array of float compare with that of 65536 arrays of 65536 floats? The performance of 256 arrays of 256 floats will be inferior to that of a 256*256 array, since the latter will have better cache locality and the former won't, but if one is accessing rows of a matrix that are sufficiently localized to benefit from cache locality, I would think the processor would be able to cache the object-table references. – supercat Mar 24 '11 at 03:39