14

Possible Duplicate:
When you exit a C application, is the malloc-ed memory automatically freed?

This question came to my mind when I was reading about how compulsory it is to use delete/free with dynamic memory allocation in C++/C. I thought that if the allocated memory persisted beyond the termination of my program, then yes, it would be compulsory; otherwise, why do I have to worry about freeing up the allocated space? Isn't the OS going to free it automatically when the process terminates? How right am I? My question is: can

int *ip = new int(8);

persist beyond the termination of my program?

badmaash
  • 4,357
  • 7
  • 40
  • 60

9 Answers

15

Short answer: No.

Long answer: No. C++ will never persist memory unless you do the work to make it do so. The reason to free memory is this:

If you don't free memory but keep allocating it, you will run out at some point. Once you run out, almost anything can happen. On Linux, maybe the OOM killer is activated and your process is killed. Maybe the OS pages you completely out to disk. Maybe you give a Windows box a blue screen if you use enough memory. It can almost be thought of as undefined behavior. Also, if you leak memory, it's just sitting there, unused and unreleased, and no one can use it until your process terminates.

There's another reason too. When you release memory to the allocator, the allocator might keep it around, just marked as usable. That means that the next time you need memory, it's already sitting there waiting for you, so there are fewer calls into the kernel to ask for more, which improves performance, as those trips into the kernel (and the context switches they cause) are expensive.
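
A minimal sketch of both points (the 1 MiB blocks and the loop counts are arbitrary choices of mine, not anything from this answer): the first loop leaks on purpose, the second pairs every allocation with a release so the allocator can recycle the block.

    #include <cstddef>
    #include <iostream>

    int main() {
        // Leaking on purpose: each iteration grabs 1 MiB and never releases it.
        // Left unbounded, a loop like this eventually fails: operator new throws
        // std::bad_alloc, or the OS steps in first (e.g. the Linux OOM killer).
        std::size_t leaked = 0;
        for (int i = 0; i < 64; ++i) {          // bounded here so the demo stays harmless
            char *block = new char[1024 * 1024];
            block[0] = 0;                       // touch it so a page is really committed
            leaked += 1024 * 1024;
            // no delete[]: nobody can use this memory until the process exits
        }
        std::cout << "leaked " << leaked << " bytes\n";

        // Releasing each block lets the allocator hand the same memory straight
        // back on the next request, usually without another trip to the kernel.
        for (int i = 0; i < 64; ++i) {
            char *block = new char[1024 * 1024];
            block[0] = 0;
            delete[] block;                     // back on the allocator's free list
        }
        return 0;
    }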

EDIT: The C and C++ standards don't even guarantee that the memory will be cleaned up by the OS after termination. Many OSes and compilers may, but there is no guarantee. Despite this, all major desktop and mobile operating systems (with the probable exception of DOS and some very old embedded systems) do clean up a process's memory after it exits.

Linuxios
  • 31,993
  • 12
  • 82
  • 110
  • 1
    This answer is very Linux-centric. I'm not at all sure the *language* guarantees this behavior. – unwind Jul 08 '12 at 13:58
  • 1
    @unwind: No, the language doesn't. I'll edit. I know that it's Linux centric, but Linux is what I know. – Linuxios Jul 08 '12 at 14:06
  • 1
    @unwind: While that's true, it is not much different under other systems (except of course, few systems have such an abomination as an OOM killer in the first place). This may have been different some 20 years ago on a "home computer", but on every mainstream OS nowadays (and every professional OS for the last 40-45 years), when a process terminates, all of its memory pages instantly go "poof". So, although the language doesn't guarantee for it, it's nevertheless happening reliably. Note that I'm not saying it's good to rely on it... – Damon Jul 08 '12 at 14:06
  • 2
    @Damon: Why would you call an OOM kill an abomination? When a system is *truly* out of memory (no more physical, no more swap), then the system has to do *something*, doesn't it? And why is killing the offending process a bad thing? As long as it can be configured so that your mission-critical server process isn't the one to go. – Linuxios Jul 08 '12 at 14:10
  • 1
    @Linuxios: First, the OOM killer does not kill the offending process, it kills a **random** process. Granted, there are some "heuristics" involved nowadays, whatever they may be, so it is not purely random any more, but that's beside the point. Second, the correct way of approaching the problem is to not give away memory that you don't have in the first place. Linux gratuitously overcommits under the assumption that you won't use what you allocate anyway. And when you do, it comes as a big surprise. Allocations that are not backed by VM should just fail, end of story. No OOM happening. – Damon Jul 08 '12 at 14:23
  • 1
    @Damon: Good point, but I don't see Windows or Mac doing anything better. What do they do? And what about the badly written programs that don't handle allocation failures? – Linuxios Jul 08 '12 at 14:24
  • 1
    @Linuxios: Under Windows, when there is not enough swap space, `VirtualAlloc` returns a null pointer. I don't know how MacOS handles it. A badly written program that doesn't handle allocation failure will dereference a null pointer and crash. This will indeed kill the offending process, which in my opinion is much preferable to killing some random process. – Damon Jul 08 '12 at 14:29
  • OOM killer is irrelevant. A properly configured Linux system does not OOM; `malloc` returns 0 or `new` throws an exception when there is no memory available. – R.. GitHub STOP HELPING ICE Jul 08 '12 at 14:43
  • As for your edit, even DOS's `malloc` cleans up when the program exits. That's because program exit on DOS is just resetting the top-of-used-memory pointer back to where it was before the program started (for 16-bit programs; "DOS extended" programs clean up the whole 32-bit environment they created when they exit). – R.. GitHub STOP HELPING ICE Jul 08 '12 at 14:54
  • @R..: Thanks. I just realized that a lot of my data is off; my Linux programming book is from Linux 2.2. And thanks for the info. – Linuxios Jul 08 '12 at 15:45
  • By the way, disabling overcommit on modern Linux is as easy as `echo "2" > /proc/sys/vm/overcommit_memory`. Why the default hasn't been fixed is anybody's guess.... – R.. GitHub STOP HELPING ICE Jul 08 '12 at 15:47
  • I knew Linux had a good, modern option. It's true, but maybe someone can just go into the code and commit that tiny, tiny change. – Linuxios Jul 08 '12 at 15:49
  • My best guess is that they're worried changing the default will break broken software that depends on overcommit, whereas people who want robust systems can just change the setting manually... Not that I agree with this philosophy... Anyway, just disabling overcommit does not prevent your system from "crashing" due to memory exhaustion if you have tons of swap. Everything will get swapped so bad it takes hours or days for anything to respond, so for most practical purposes the system has "crashed". Removing excess swap fixes it but that's also a "policy" change... :-( – R.. GitHub STOP HELPING ICE Jul 08 '12 at 15:58
  • @R..: Agreed. But at least they're not worried about backward compatibility like Microsoft... Why would Windows 7 users need an option to create an MS-DOS boot floppy? Or the ability to run DOS `.COM` executables directly? – Linuxios Jul 08 '12 at 16:06
  • @R..: No kidding on the swap thing. Why do you think I gave my Mac Mini 8GB of RAM? Otherwise, it wouldn't stop swapping. Now, I have 5GB free almost all of the time. It's nice to not wait 3 seconds every time I switch windows. – Linuxios Jul 08 '12 at 16:07
  • 4
    It hasn't been "fixed" because apparently it would break the combo of `fork()`/`exec()` when the parent process uses a lot of memory: http://www.quora.com/What-are-the-disadvantages-of-disabling-memory-overcommit-in-Linux – Izkata Jul 08 '12 at 16:10
  • @Izkata: Fascinating! Thanks for the link. That's a very interesting issue that it brings up with the relationship between calls like `fork()` and memory overcommitment. Because the likelihood that a child is going to write to every page of its parent's memory is really slim, especially if the developer even knows what copy-on-write means. – Linuxios Jul 08 '12 at 16:14
  • @Izkata: That's why such programs should be using `posix_spawn`, not `fork` and `exec`. Or they could use `vfork` if it's available. This is not an excuse for refusal to fix the overcommit default. – R.. GitHub STOP HELPING ICE Jul 08 '12 at 18:58
  • @R..: But `fork()` still has other use cases, not covered by `posix_spawn` or `vfork()`. Take a server that forks for each client, as an example. – Linuxios Jul 08 '12 at 19:08
  • @Linuxios: Then this server needs to avoid allocating ridiculous amounts of memory, or else each client is going to also consume ridiculous amounts of memory. This is fundamental. As far as I know, there's no way to annotate certain parts of memory as "read-only after fork", but if there were, that would solve the problem too. – R.. GitHub STOP HELPING ICE Jul 08 '12 at 19:44
  • @R..: That would be the best solution. Or you can just use `clone()` instead of `fork()`. That would solve a *lot* of problems. `clone()` would let you do that. I'm going to go do some light manpage reading. – Linuxios Jul 08 '12 at 19:50
  • @Linuxios: I don't think `clone` makes that possible, unless you want to leave **all** the memory shared, but then it's just as dangerous as `vfork` and you should really be using threads... – R.. GitHub STOP HELPING ICE Jul 08 '12 at 19:56
  • @R..: I'm going to take this as an interesting challenge. Using `clone` and mutex/semaphore/etc., make a version of `fork` that can accommodate copying some memory and read-only sharing other parts. It will be interesting. – Linuxios Jul 08 '12 at 20:02
  • FWIW, using shared memory will avoid the CoW problem. Not that I particularly agree that all programs using fork are broken. – Per Johansson Jul 08 '12 at 20:03
  • Yes, but I'm having a hard time thinking of what you could possibly use massive amounts of anonymous (non-backed) shared memory for where the server would not modify it after the first client is forked. If you're going to be modifying it, you'll need synchronization and you should probably be using threads correctly rather than ugly hacks... – R.. GitHub STOP HELPING ICE Jul 08 '12 at 20:18
  • @R..: Yes, but Linux pthreads are really just implemented over the `clone()` call with everything shared. – Linuxios Jul 08 '12 at 23:25
  • The difference is that using `clone` yourself to do this will give you **broken** locking primitives, since the thread register for your "threads" will not be set up right and locking primitives will not be able to obtain their tid to store as the owner of locks. Basically `clone` should never be used except in implementing `pthread_create` or for `fork`-like usages (minimal/no sharing); breaking this rule is asking for trouble. – R.. GitHub STOP HELPING ICE Jul 09 '12 at 01:06
  • @R..: I never said I'd use the preexisting thread sync primitives. I like reinventing the wheel for fun. – Linuxios Jul 09 '12 at 01:07
  • The problem is that plenty of internal library functions might be using them: things like the resolver, timezone setting, stdio, etc. – R.. GitHub STOP HELPING ICE Jul 09 '12 at 01:55
  • @R..; I know. I can always try! I'm building a VM anyway, so I can implement things my own way. Still in the conceptual stage, but it's called NVM and it will be on GitHub. – Linuxios Jul 09 '12 at 02:02
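
To make the failure modes mentioned in this comment thread concrete, here is a minimal sketch of my own (whether a given request is actually refused depends on the overcommit policy discussed above) of what a program sees when an allocation cannot be satisfied: plain `new` throws `std::bad_alloc`, while `new (std::nothrow)` and `malloc` report failure with a null pointer.

    #include <cstdlib>
    #include <iostream>
    #include <new>       // std::bad_alloc, std::nothrow

    int main() {
        // A request that no allocator can satisfy (half of the address space).
        const std::size_t huge = static_cast<std::size_t>(-1) / 2;

        // Plain new reports failure by throwing std::bad_alloc.
        try {
            char *p = new char[huge];
            delete[] p;                     // not reached in practice
        } catch (const std::bad_alloc &) {
            std::cerr << "new threw std::bad_alloc\n";
        }

        // new(std::nothrow) and malloc report failure with a null pointer instead.
        char *q = new (std::nothrow) char[huge];
        if (q == nullptr)
            std::cerr << "nothrow new returned nullptr\n";
        delete[] q;                         // deleting a null pointer is a no-op

        void *r = std::malloc(huge);
        if (r == nullptr)
            std::cerr << "malloc returned NULL\n";
        std::free(r);                       // free(NULL) is a no-op as well
        return 0;
    }
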
6

You do not need to release the memory back to the OS before the program exits, because the operating system will reclaim all memory that has been allocated to your process upon the termination of the process. If you allocate an object that you need up to the completion of your process, you don't have to release it.

With that said, it is still a good idea to release the memory anyway: if your program uses dynamic memory a lot, you will almost certainly need to run a memory profiler to check for memory leaks. The profiler will tell you about the blocks that you did not free at the end, and you will need to remember to ignore them. It is a lot better to keep the number of leaks at zero, for the same reason that it is good to eliminate 100% of your compiler's warnings.
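
For instance, with a checker such as valgrind (named here only as one common choice, not something this answer prescribes), the difference between a noisy report and a clean one is simply whether every allocation is paired with a release before exit. A small sketch:

    #include <vector>

    // A hypothetical long-lived table that really is needed until the very end.
    static std::vector<int> *table;

    int main() {
        table = new std::vector<int>(1000, 42);

        // ... the table is used for the entire lifetime of the process ...

        // The OS would reclaim this at exit anyway, but deleting it keeps the
        // leak checker's report empty, so genuine leaks are not drowned out:
        //     valgrind --leak-check=full ./a.out
        delete table;
        return 0;
    }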

Sergey Kalinichenko
  • 675,664
  • 71
  • 998
  • 1,399
  • Not only that, but it can be *crucial* to release memory. You think customers/users will be happy when your background daemon takes 500 MB more of RAM every hour? – Linuxios Jul 08 '12 at 13:24
  • @Linuxios I meant the situations when, once the memory has been allocated, it is genuinely needed by your program, and the only time your program can release that memory is right before the exit. – Sergey Kalinichenko Jul 08 '12 at 13:28
  • Oh! I get it. Sorry... I thought you meant to throw away the pointer and let the OS get it at the end. My bad! +1! – Linuxios Jul 08 '12 at 13:31
4

Any good OS should clean up all resources when the process exits; the 'always free what you allocated' principle is good for two things:

  1. If your program never exits (daemons, servers, ...), continuously leaking memory will waste RAM badly.

  2. You should not defer freeing all memory until your program terminates (like Firefox sometimes does - ever noticed how long it takes to exit?). The point is to minimize the time for which you hold allocated memory: even while your program keeps running, you should free allocated RAM as soon as you are finished with it (see the sketch just below this list).
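
A sketch of that second point, using a made-up request handler (the buffer size and loop count are only illustrative): each unit of work frees its scratch memory as soon as it is finished with it, so a long-running process stays flat instead of growing until exit.

    #include <cstdio>
    #include <cstdlib>

    // Hypothetical per-request work: allocate a scratch buffer, use it, release it.
    static void process_one_request(void) {
        char *scratch = static_cast<char *>(std::malloc(64 * 1024));
        if (scratch == nullptr)
            return;                     // allocation failed; bail out gracefully

        // ... fill and use the scratch buffer here ...

        std::free(scratch);             // freed right away, not hoarded until exit
    }

    int main() {
        // Think daemon/server: because every iteration frees what it allocated,
        // memory use stays constant no matter how long the loop runs.
        for (long i = 0; i < 1000000; ++i)
            process_one_request();
        std::puts("done");
        return 0;
    }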

  • 1
    @SanJacinto: Though any major kernel nowadays will clean up your memory after you, to rescue the system from the pending disaster of running out of memory. NT will, Mach/XNU will, Linux will, BSD will, etc. – Linuxios Jul 08 '12 at 13:22
  • 1
    Ok, how many real time and lightweight OS's have you considered? I understand that this is commonplace behavior, and that's what makes it all the worse. If you perpetuate the thought that "all good" os's do this, then somebody will have a bad surprise someday. – San Jacinto Jul 08 '12 at 13:23
  • 1
    @SanJacinto: I know. But I'm only considering the major kernels that 99% of devices with real processing power use. And most of those are DOS (I don't know), Linux (definitely does), OSX's Mach/XNU (definitely does), Windows NT (does), or other UNIX (most likely does). – Linuxios Jul 08 '12 at 13:30
  • @SanJacinto do you think I suggested not to clean up after ourselves? I didn't. I just told I expect from a decent OS to cleanup if a process exits - **in case the programmer accidentally forgot to do so.** –  Jul 08 '12 at 13:46
  • Well, your guess is wrong - I've worked with plenty of these embedded OSes - but I think they're a bit out of scope here, as I don't think OP was primarily considering special embedded systems, proprietary systems, or anything 'exotic' -- seeing from his profile, I think he only wanted to know about the good ol' PC. Although you're technically right, I feel this conversation has now become more of a pointless argument and I don't want to start trolling. –  Jul 08 '12 at 19:46
4

1) Free the memory you request off the heap. Memory leaks are never a good thing. If it doesn't hurt you now, it likely will down the road.

2) There is no guarantee by C or C++ that your OS will clean up the memory for you. You may be programming some day on a system that, in fact, does not. Or worse, you may be porting code in which you didn't care about memory leaks to this new platform.
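
One idiomatic way to follow point 1 in C++ without hand-writing every delete (my addition, not something this answer prescribes) is to give the memory an owner that releases it automatically:

    #include <memory>
    #include <vector>

    int main() {
        // Manual pairing: every new needs a matching delete on every code path.
        int *ip = new int(8);
        delete ip;

        // RAII alternatives free their memory when they go out of scope, so a
        // forgotten delete, an early return or an exception cannot cause a leak.
        std::unique_ptr<int> up(new int(8));
        std::vector<int> buf(1024);
        return 0;
    }   // up and buf release their memory here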

San Jacinto
  • 8,490
  • 3
  • 37
  • 58
  • 2
    Any OS which didn't clean up this memory would be crap. It would mean that any application crash on this OS would leave these leaked resources around forever. Standard malloc/new creates application memory; there is no reason to believe it'd persist beyond the end of that app. – edA-qa mort-ora-y Jul 08 '12 at 13:55
  • If it doesn't clean up memory I wouldn't call it an OS. At that point it is merely a device abstraction layer. – edA-qa mort-ora-y Jul 08 '12 at 20:57
  • @edA-qa mort-ora-y Ok. It doesn't really matter much what your personal choice of words for such a system is, does it? Oh well. Life goes on. – San Jacinto Jul 09 '12 at 00:20
4

For a historical note: the operating system used by old Amiga computers (“AmigaOS”) did not have full memory management as it is assumed now (except maybe for some later versions released when Amiga was no longer popular).

The CPU did not have a MMU (memory management unit), and as a consequence every process had access to all physical memory. Because of that when two processes wanted to share some information, they could just exchange pointers. This practice was even encouraged by the OS, which used this technique in its message-passing scheme.

However, this made it impossible to track which process owned which part of memory. Because of that, the OS did not free the memory of a finished process (or any other resource, for that matter). Freeing all allocated memory was therefore vital.

liori
  • 36,848
  • 11
  • 71
  • 101
3

If you are very sure that you will never need to free the memory in the lifetime of the program, technically it may be alright to skip free/delete. Operating systems like Linux, Windows etc. will free up the allocated memory when the process ends. But in practice you can almost never make the assumption that the memory you allocate does not need to be freed within the lifetime of the process. Keeping code reusability, maintainability and extensibility in mind, it is a good practice to always free up everything that you allocate at the appropriate place.

soorajmr
  • 450
  • 3
  • 9
  • Exactly. Because imagine if a system is under high load? A program that doesn't free its memory will cause expensive context switches and loss of memory. Only if you're writing special, mission-critical software that will be the only thing running and can't bear the performance hit of using `malloc` *or* `free` (relying on low-level techniques instead) can you think about not freeing. – Linuxios Jul 08 '12 at 13:27
2

This is an interesting question. My original take on your question was whether or not you could access memory after program completion, but after a second read I see you want to know why memory should be freed.

You free dynamically allocated memory because if you don't, the OS and other processes will run out of it and you will have to reboot.

I thought you might want to access that memory after program completion, so my guess is that even if you wrote out the starting address and length of a dynamically allocated memory block -- to the console or a file -- that address might not be valid after program completion.

That is because while your program is running you have a virtual page address, which you might not be able to touch after program completion without kernel privileges, or for some other reason.

octopusgrabbus
  • 9,974
  • 12
  • 59
  • 115
1

It certainly doesn't survive beyond program termination. The idea is to free memory when it is not needed anymore, so that your program doesn't waste memory (it doesn't consume more than it really needs) or, even worse, run out of memory (depending on your allocation pattern).

Razvan
  • 9,372
  • 4
  • 35
  • 49
0

You have to worry about it because imagine that you were allocating a lot of memory in many many places and NOT freeing it. Once memory is allocated it occupies a portion of memory that cannot be allocated anymore. This will result in the amount of available memory getting smaller and smaller each time because you are failing to free it. At some point the memory will be exhausted. Even if the memory is freed at program termination, imagine that your program runs for several weeks at a time, constantly allocating memory but never freeing it. Memory is a finite resource and you need to be responsible when using dynamic allocation.

mathematician1975
  • 20,311
  • 6
  • 52
  • 93