
I know that heap objects are slower because of the necessary memory management (allocation, deallocation). What about accessing them? Is there any performance difference when accessing an object on the stack versus the heap?

EDIT: My question is not about allocation, but about accessing them. This includes the memory location of stack vs. heap, cache misses, or any other variable that I am not aware of. A simple toy example:

int stack_array[100];
int* heap_array = new int[100];
...
...
std::cout << stack_array[51]; // Any difference between these two statements
std::cout << heap_array[51]; // Any difference between these two statements
yolgun
  • possible duplicate of [Which is faster: Stack allocation or Heap allocation](http://stackoverflow.com/questions/161053/which-is-faster-stack-allocation-or-heap-allocation) – Paul Roub Jul 01 '15 at 21:13
  • 1
    I am not talking about allocation part. What happens after it, when accessing(reading or writing) them. – yolgun Jul 01 '15 at 21:14
  • 2
    You're aware there is a difference in allocation. Now run few tests (in your **specific CPU architecture**) to see if also performance on access are different. In general I would say "no difference" but in some architectures stack may be "near" and CPU may have faster/shorter instructions to load/store on near addresses (read near as "same segment"). – Adriano Repetti Jul 01 '15 at 21:14
  • @yolgun Are you worrying about cache misses or what? Clarify your question please! – πάντα ῥεῖ Jul 01 '15 at 21:20
  • 2
    I don't understand why this question is down voted. – DoubleTrouble May 01 '16 at 20:50

1 Answer


You probably won't notice any speed difference between stack and heap (dynamic memory) unless the physical memory backing them is different.

Access to an array element is direct regardless of where the array lives: the compiler computes a base address plus an offset. You can confirm this by looking at the assembly language generated by the compiler.
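
For instance, here is a minimal sketch of what you might feed the compiler to check this (the g++ flags are just one possible setup; any compiler's assembly output will do, and `fill` is a made-up helper that keeps the data opaque to the optimizer):

// access.cpp -- compile with: g++ -O2 -S access.cpp, then read access.s
extern void fill(int* p, int n);   // defined elsewhere; prevents constant folding

int read_both(int* heap_array) {
    int stack_array[100];
    fill(stack_array, 100);
    // Each read below is one indexed load: base address + 51 * sizeof(int).
    // Only the base differs (the stack pointer vs. a pointer in a register).
    return stack_array[51] + heap_array[51];
}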

There could be a difference if the OS decides to use virtual memory for your arrays. This means the OS could page chunks of your array out to the hard drive and swap them back in on demand.

In most applications, if there is a physical difference (in terms of speed) between memory types, it will be negligible, on the order of nanoseconds. For more computationally intensive applications (lots of data or a need for speed), it could make a difference.

However, there are other costs that usually dwarf memory access, such as:

  • Disk I/O
  • Waiting for User Input
  • Memory paging
  • Sharing of the CPU with other applications or threads

All of the above have an overhead that is usually orders of magnitude greater than a memory access.

The main reason for using dynamic memory instead of stack-based memory is size. Stack memory is mainly used for passing arguments and storing return addresses; local variables that are not declared static are also placed on the stack. Most programming environments give the stack a smaller area. Larger items can be placed on the heap or declared static (and placed in the same area as globals).
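
As a rough sketch of the size issue (the 1-8 MB stack-limit figures below are common defaults, not guarantees):

#include <vector>

void allocate_large() {
    // int big_stack[10'000'000];          // ~40 MB: likely to overflow a typical
    //                                     // 1-8 MB stack and crash at runtime
    std::vector<int> big_heap(10'000'000); // heap-backed: fine if RAM allows
    static int big_static[10'000'000];     // static storage, same area as globals
    (void)big_heap;                        // silence unused-variable warnings
    (void)big_static;
}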

Worry more about correctness than memory performance. Profile when in doubt.
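
If you do want to measure it yourself, a deliberately naive timing sketch like the one below is a starting point (a real measurement needs repeated runs and a proper benchmarking harness; the names here are made up for illustration):

#include <chrono>
#include <iostream>

int main() {
    constexpr int N = 100;
    int stack_array[N] = {};
    int* heap_array = new int[N]();

    volatile long long sink = 0;   // volatile keeps the loops from being optimized away
    auto time_ms = [&](int* a) {
        auto start = std::chrono::steady_clock::now();
        for (int rep = 0; rep < 1'000'000; ++rep)
            for (int i = 0; i < N; ++i)
                sink += a[i];
        return std::chrono::duration_cast<std::chrono::milliseconds>(
                   std::chrono::steady_clock::now() - start).count();
    };

    std::cout << "stack: " << time_ms(stack_array) << " ms\n";
    std::cout << "heap:  " << time_ms(heap_array)  << " ms\n";
    delete[] heap_array;
}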

Edit 1: Cache misses
A cache miss occurs when the processor looks in its data cache and doesn't find the item; the processor must then fetch the item from external memory (i.e., reload the cache line).

For most applications, cache misses are negligible for performance, each usually costing a small number of nanoseconds. They are not noticeable unless your program is computationally intensive or processes a huge amount of data.

Branch instructions can take up more execution time than a cache miss: a mispredicted conditional branch may force the processor to flush the instruction pipeline and reload the program counter. (Note: some processors can haul executable loop code into the instruction cache, reducing the penalty of branch effects.)
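
As one illustration of removing a data-dependent branch (a generic trick, not tied to any particular processor):

// Branchy: a hard-to-predict `if` here can flush the pipeline when mispredicted.
int clamp_to_zero_branchy(int x) {
    return (x < 0) ? 0 : x;
}

// Branchless: the same result from bitwise arithmetic. Assumes a 32-bit int
// and an arithmetic right shift (true on mainstream platforms).
int clamp_to_zero_branchless(int x) {
    return x & ~(x >> 31);
}

Compilers often perform this kind of transformation themselves (e.g. emitting a conditional move), which is one more reason to profile before hand-optimizing.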

You can organize your data to reduce the number of cache misses; search the web for "data driven" or "data optimizations" (see the traversal sketch below). Also try reducing branches by applying algebra and Boolean algebra, and by factoring invariants out of loops.
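
A classic example of organizing data access for the cache is to iterate memory in the order it is stored. The sketch below assumes a square array large enough to exceed the cache (the size is arbitrary):

constexpr int N = 1024;
int grid[N][N];

// Cache-friendly: row-major traversal matches C++'s memory layout, so each
// fetched cache line is fully used before the next one is loaded.
long long sum_by_rows() {
    long long s = 0;
    for (int r = 0; r < N; ++r)
        for (int c = 0; c < N; ++c)
            s += grid[r][c];
    return s;
}

// Cache-hostile: column-major traversal jumps N * sizeof(int) bytes between
// consecutive reads, touching a new cache line on almost every access.
long long sum_by_cols() {
    long long s = 0;
    for (int c = 0; c < N; ++c)
        for (int r = 0; r < N; ++r)
            s += grid[r][c];
    return s;
}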

Thomas Matthews