
Short background:

I'm developing a system that should run for months and that uses dynamic allocation.

The question:

I've heard that memory fragmentation slows down `new` and `malloc` because they need to "find" a place in one of the "holes" I've left in memory instead of simply "going forward" in the heap.

I've read the following question: What is memory fragmentation?

But none of the answers mentioned anything regarding performance, only failure allocating large memory chunks.

So does memory fragmentation make `new` take more time to allocate memory? If so, by how much? How do I know if `new` is having a "hard time" finding memory on the heap?

I've tried to find out which data structures/algorithms GCC uses to find a "hole" in memory to allocate from, but I couldn't find any decent explanation.

OopsUser
  • Other apps and services running in the same machine may influence memory fragmentation. – Ripi2 Jun 26 '17 at 19:29
  • 6
    If you very much *need* to allocate memory dynamically, then do so by accepting that the underlying system has a nicely optimized algorithm for dealing with fragmentation. Sounds like famous last words, but it's the best thing to do in comparison to the time spent worrying about it. – DeiDei Jun 26 '17 at 19:34
  • 2
    "'m developing a system that should run for months and using dynamic allocations." -- that was your first mistake :) . Seriously, it's probably better to design for restarting an app than to expect it to run without problems for months on end. Then you're also covered for power outages, required system restarts, operating system / other app failures. – Dave S Jun 26 '17 at 19:36
  • Unless you have a specific problem, for which you unambiguously identified that memory allocation was the bottleneck, and that this bottleneck was due to heap fragmentation, it sounds like premature optimization. – spectras Jun 26 '17 at 19:36
  • @rcgldr> .NET repacks the heap (specifically, the small object heap, or SOH). It has nothing to do with pages. Pages are the basic memory allocation unit that a program can ask the system. They do not have to be contiguous, they do not even have to be in physical memory, the MMU will remap addresses transparently. But pages are big (4kB, 2MB, 1GB on x86-64) so we don't usually manage them directly. We use `malloc` or `new` which, behind the scenes, get pages from the system and manage the space to fit many small objects in it. *This* is where fragmentation happens. – spectras Jun 27 '17 at 10:00
  • @rcgldr> Yes, my point was just that fragmentation happens in the virtual address space and is the result of heap allocator strategy. It has nothing to do with pages, which are a lower-level tool used to suppress fragmentation issues at the system level. One actually can allocate pages manually in his program, using an anonymous `mmap()` or `CreateFileMapping()` with an invalid file handle. This will not have any fragmentation issue. – spectras Jun 27 '17 at 14:06
  • The .NET example is good: it shows a different allocator strategy, that takes advantage of the fact that object references in .NET use an opaque handle structure instead of a direct address, which allows it to move objects around to compact the heap, eliminating fragmentation. Also, it splits allocation areas in size buckets, which helps alleviating fragmentation issues in the first place. The ability to move objects around comes at the price of an additional indirection for all reference uses though. Trade-offs… – spectras Jun 27 '17 at 14:09

1 Answer


Memory allocation is platform specific: it depends on the compiler, the runtime library, and the operating system.

I would say: "Yes, `new` takes time to allocate memory. How much time depends on many factors, such as the allocation algorithm, the level of fragmentation, processor speed, optimizations, etc."

The best answer for how much time is taken is to profile and measure: write a simple program that fragments the memory, then measure the time it takes to allocate memory.

There is no direct method for a program to find out how hard the allocator is working to find available memory. You can read a clock, allocate memory, then read the clock again. Another idea is to set a timer.

Note: in many embedded systems, dynamic memory allocation is frowned upon. In critical systems, fragmentation can be the enemy, so fixed-size arrays are used instead. Fixed-size allocations (made at compile time) remove fragmentation as a defect issue.

Edit 1: The Search
Usually, memory allocation requires a function call. The impact of this is that the processor may have to reload its instruction cache or pipeline, consuming extra processing time. There may also be extra instructions for passing parameters such as the requested size. Local variables and compile-time allocations usually don't need a function call for allocation.

Unless the allocation algorithm is linear (think array access), it will require steps to find an available slot. Some memory management algorithms use different strategies based on the requested size. For example, some memory managers may have separate pools for sizes of 64 bits or smaller.

If you think of a memory manager as keeping a linked list of free blocks, it needs to find the first block greater than or equal in size to the request. If the block is larger than the requested size, it may be split, and the leftover memory becomes a new block that is added back to the list.

There is no standard algorithm for memory management; implementations differ based on the needs of the system. Memory managers for platforms with restricted (small) amounts of memory will differ from those for platforms with large amounts. Memory allocation for critical systems may differ from that for non-critical systems. The C++ standard does not mandate the behavior of a memory manager, only some requirements. For example, a memory manager is allowed to allocate from a hard drive or from a network device.

The significance of the impact depends on the memory allocation algorithm. The best path is to measure the performance on your target platform.

Thomas Matthews