
I have a simple question: does using pointers instead of normal variables increase the efficiency of a program, either time-wise or memory-wise? For instance, suppose I use the following program to swap two integers.

#include<iostream>
#include<conio.h>
#include<new>
using namespace std;
int main()
{
   int *a = new int;
   int *b = new int;
   int *c = new int;
   cin >> (*a) >> (*b);
   *c = *a; *a = *b; *b = *c;
   cout << "swapping";
   cout << *a << *b;
   getch();
}
Mohit Sehgal
    If it did, it would be too minimal to care. – chris Nov 10 '12 at 06:18
  • 4
    When you measured it, what were the results? – Benjamin Lindley Nov 10 '12 at 06:19
  • 1
    In this case I would say it actually increases the "runtime", as you have to call extra statements for the allocations, which wouldn't happen in other cases. Also, on a 64-bit machine an `int` is still 32 bits while a pointer is 64 bits, so the program will be larger as well. And use more heap too. – Some programmer dude Nov 10 '12 at 06:22
  • This doesn't seem like something you should be concerned with. Pointer manipulation will always incur a little bit of overhead; review the assembly code if you need verification of this. Measuring the runtime performance, though, will likely not yield any measurable effect until you come up with a more sophisticated test case. – dans3itz Nov 10 '12 at 06:25
  • 1
    @Steve: Lack of research effort. Namely, measuring. Why the upvote? – Benjamin Lindley Nov 10 '12 at 06:29
  • @JoachimPileborg, while a 32-bit int may be common on a lot of platforms, it's by no means mandated. You could just as easily have a 32-bit pointer and a 2048-bit integer. – paxdiablo Nov 10 '12 at 06:29
  • 1
    Even in this simple program you've already leaked memory. – GManNickG Nov 10 '12 at 06:34
  • @JoachimPileborg I am not asking specifically about this case. I just want to know in general. – Mohit Sehgal Nov 10 '12 at 06:46
  • @Mohit_Sehgal And the answer to that is: It depends. – Some programmer dude Nov 10 '12 at 06:48
  • 1
    @Mohit_Sehgal: Uh, by not deleting `a`, `b`, nor `c`. – GManNickG Nov 10 '12 at 07:38

4 Answers


Using pointers to variables instead of plain variables is unlikely to improve performance. Write the code in the clearest way possible and let the compiler optimize it for you. If anything, using pointers is likely to slow things down, because it makes the compiler's analysis harder.

For large objects, it is worth keeping pointers to objects instead of copying those objects around. Maybe that is the kernel of truth from which you are incorrectly extrapolating.
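As a rough illustration of that kernel of truth (the `BigObject` type and its size are invented for this sketch), passing a large object through a pointer or reference avoids copying its whole payload, which is the case where pointers genuinely pay off:

#include <array>
#include <iostream>

// A deliberately large type, made up for this example.
struct BigObject {
    std::array<double, 4096> samples;   // roughly 32 KB of data
};

// Copies the entire payload on every call.
double sumByValue(BigObject obj) {
    double total = 0;
    for (double s : obj.samples) total += s;
    return total;
}

// Passes only an address; nothing is copied.
double sumByConstRef(const BigObject& obj) {
    double total = 0;
    for (double s : obj.samples) total += s;
    return total;
}

int main() {
    BigObject big{};   // zero-initialized
    std::cout << sumByValue(big) << ' ' << sumByConstRef(big) << '\n';
}

For an int, the copy is already as cheap as copying the pointer itself, so there is nothing to save.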

Keith Randall

In the example given above, it is less efficient both time-wise and memory-wise.

  • Dynamic allocation costs more time than local variables (there are function calls and bookkeeping to be done; local variables are created by subtracting from a stack pointer, often just one operation for all local variables).
  • Dynamic allocation costs more memory than local variables (there are supporting data structures in the memory manager).

Pointers can make a program more efficient when they are used to prevent copying large structures; plain `int`s don't fall into this category.
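For comparison, here is a minimal sketch of the same swap written with local variables, so there is no heap allocation at all (`std::swap` from `<utility>` does the same job in one line):

#include <iostream>
#include <utility>
using namespace std;

int main()
{
    int a, b;                    // plain locals on the stack
    cin >> a >> b;
    int c = a; a = b; b = c;     // or simply: swap(a, b);
    cout << "swapping";
    cout << a << b;
}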

Alex
  • Also: dynamic allocation decreases locality of reference; the pointed-to variable is very unlikely to share a cache line with other variables. And since cache is between 10x and 1000x faster than main memory, this matters. Of course, not in a trivial program like this, which still fits in L1 cache. – MSalters Nov 11 '12 at 01:17

The only thing I can think of that might make using a pointer slower than using a local variable is that the generated assembly might involve more complex memory addressing, thus resulting in larger machine op-codes, which in turn would take ever-so-slightly more time to execute.

That time difference would be so negligible though that you shouldn't worry about it.

What you should consider is that allocation on the stack is much faster than allocation on the heap. In other words:

int* a = new int;

is slower than:

int a;

but only because of the allocation (`new int`), not because you are using a pointer.
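To separate the two effects, a small sketch: taking the address of a stack variable gives you a pointer without any heap allocation, so the indirection itself is cheap and it is the allocator call that costs:

#include <iostream>
using namespace std;

int main()
{
    int a = 0;              // stack allocation: essentially free
    int* p = &a;            // a pointer, but no call into the heap allocator
    *p = 42;                // at most one extra indirection
    cout << a << '\n';

    int* q = new int(42);   // heap allocation: the expensive part
    cout << *q << '\n';
    delete q;               // and it has to be freed, unlike a local
}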

kevintodisco
  • No, indirection caused by overuse of pointers can cause huge performance hits. The big reason why `vector` is a top-notch container is that it is contiguous memory and it minimizes indirection compared to other std containers. – Yakk - Adam Nevraumont Nov 10 '12 at 11:36

This code does almost nothing at the level where you wrote it. If you step through it at the instruction level, you will see that each call to `new` executes hundreds of instructions, and the `cout <<` calls execute far more still.

That's what you're measuring.

Mike Dunlavey