14

I'm working on a C++ application which keeps some user secret keys in RAM. These secret keys are highly sensitive, and I must minimize the risk of any kind of attack against them.
I'm using a character array to store these keys. I've read some material about storing variables in CPU registers or even the CPU cache (e.g. using the C++ register keyword), but it seems there is no guaranteed way to force an application to store some of its variables outside of RAM (that is, in CPU registers or cache).
Can anybody suggest a good way to do this, or any other solution to keep these keys securely in RAM (I'm looking for an OS-independent solution)?

Ehsan Khodarahmi
  • 4,390
  • 9
  • 57
  • 81
  • 2
    Are you asking the right question? Seems to me that if you're worried about the keys being stolen from volatile RAM, your effort would be better expended securing the physical location of the hardware. – Eric Andres May 11 '13 at 18:52
  • If I'm worried about the physical location, it's because I know it is much easier to steal sensitive data from RAM (e.g. using a cold boot attack). But in fact the storage location is not the main problem; if I can guarantee the safety of the data in RAM, then I'll be satisfied! – Ehsan Khodarahmi May 11 '13 at 18:56
  • Will physical access to the RAM be protected in your application? If not, software protection will not help much; some SDRAM keeps information for a minute after shutdown, even at room temperature. – Étienne May 11 '13 at 19:01
  • 8
    @EhsanKhodarahmi you are misguided -- you absolutely **CANNOT** guarantee the safety of *any* unencrypted information, regardless of its location (i.e. RAM, cache, register, whatever) on a general purpose machine running a general purpose O/S. – Nik Bougalis May 11 '13 at 19:02
  • 4
    As long as the attacker has access to the machine where the program that uses the key runs, he will be able to get the key. – typ1232 May 11 '13 at 19:04
  • 2
    If you're worried about some attacker reading the memory, you should not store the key in memory at all and instead focus on using an approach that does not require it. For example, you can use a one-way hash to ensure that the password is not stored in cleartext but you can still see if a user has entered the correct password. – Mats Kindahl May 11 '13 at 19:39
  • @all, the OP is not looking for guarantees, only minimization. Please read the question closer. – Freedom_Ben May 12 '13 at 14:59

8 Answers

17

Your intentions may be noble, but they are also misguided. The short answer is that there's really no way to do what you want on a general purpose system (i.e. commodity processors/motherboard and general-purpose O/S). Even if you could, somehow, force things to be stored on the CPU only, it still would not really help. It would just be a small nuisance to an attacker.

More generally, on the issue of protecting memory, there are O/S-specific solutions to indicate that blocks of memory should not be written out to the pagefile, such as the VirtualLock function on Windows. Those are worth using if you are doing crypto and holding sensitive data in that memory.
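
For illustration, a minimal Windows-only sketch of that approach might look like the following (the helper names and the fixed buffer size are made up for the example; VirtualLock, VirtualUnlock and SecureZeroMemory are the actual Win32 calls):

```cpp
#include <windows.h>
#include <cstddef>

// Lock a buffer so it is not written out to the pagefile.
bool lock_sensitive(void* buf, std::size_t len)
{
    return VirtualLock(buf, len) != 0;
}

// Wipe the buffer, then unlock it. SecureZeroMemory is not elided by the optimizer.
void release_sensitive(void* buf, std::size_t len)
{
    SecureZeroMemory(buf, len);
    VirtualUnlock(buf, len);
}

int main()
{
    unsigned char key[32];
    if (!lock_sensitive(key, sizeof key))
        return 1;
    // ... fill and use the key here ...
    release_sensitive(key, sizeof key);
    return 0;
}
```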

One last thing: what worries me is that you have a fundamental misunderstanding of the register keyword and its security implications; remember, it's a hint, and it won't - indeed, it cannot - force anything to actually be stored in a register or anywhere else.

Now, that, by itself, isn't a big deal, but it is a concern here because it indicates that you do not really have a good grasp on security engineering or risk analysis, which is a big problem if you are designing or implementing a real-world cryptographic solution. Frankly, your post suggests (to me, at least) that you aren't quite ready to architect or implement such a system.

Nik Bougalis
  • 10,222
  • 1
  • 19
  • 37
  • 4
    I would say it makes attacking the system easier if the keys are in a register. If you know where the data is, that's half the battle. The only advantage of RAM over the CPU is that there is a lot of it to search (but again, if you know where the data is, then it is no more difficult to extract it with the correct tools). – Martin York May 11 '13 at 19:56
  • @LokiAstari I completely agree - as I said, the OP is misguided. – Nik Bougalis May 11 '13 at 21:37
  • 10
    The OP is not looking for security guarantees, only minimization. You're being overly harsh, especially by saying he's not qualified. We're all here to learn. – Freedom_Ben May 12 '13 at 15:06
  • 4
    @Freedom_Ben The risk he's worried about is **impossible** to manage using software or language keywords. It's not harsh to point that out, or to tell him that he is asking the wrong question and operating on assumptions that don't hold. – Nik Bougalis May 12 '13 at 17:47
9

You can't eliminate the risk, but you can mitigate it.

Create a single area of static memory that will be the only place that you ever store cleartext keys. And create a single buffer of random data that you will use to xor any keys that are not stored in this one static buffer.

Whenever you read a key into memory, from a keyfile or elsewhere, read it directly into this one static buffer only, XOR it with your random data, copy it out wherever you need it, and immediately clear the buffer with zeroes.

You can compare any two keys by just comparing their masked versions. You can even compare hashes of masked keys.

If you need to operate on the cleartext key - e.g. to generate a hash or to validate the key somehow - load the masked (XOR'ed) key into this one static buffer, XOR it back to cleartext, and use it. Then write zeroes back into that buffer.

The operation of unmasking, operating and remasking should be quick. Don't leave the buffer sitting around unmasked for a long time.

If someone were to try a cold-boot attack - pulling the plug on the hardware and inspecting the memory chips - there would be only one buffer that could possibly hold a cleartext key, and odds are that at the particular instant of the cold-boot attack the buffer would be empty.

When operating on the key, you could even unmask just one word of the key at a time, just before you need it to validate the key, so that a complete cleartext key is never stored in that buffer.
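
A rough sketch of the scheme described above might look like this (all names, the fixed key length, and the mask initialization are assumptions made for illustration; this is a sketch, not a vetted implementation):

```cpp
#include <cstddef>
#include <cstring>
#include <istream>

constexpr std::size_t kKeyLen = 32;        // assumed key length for the sketch

static unsigned char g_mask[kKeyLen];      // random mask; fill from a CSPRNG at startup (omitted)
static unsigned char g_scratch[kKeyLen];   // the ONLY buffer that ever holds a cleartext key

// XOR the scratch buffer against the mask, in place.
static void xor_with_mask()
{
    for (std::size_t i = 0; i < kKeyLen; ++i)
        g_scratch[i] ^= g_mask[i];
}

// Wipe the scratch buffer; the volatile pointer discourages the compiler
// from optimizing the wipe away (memset_s/SecureZeroMemory are alternatives).
static void wipe_scratch()
{
    volatile unsigned char* p = g_scratch;
    for (std::size_t i = 0; i < kKeyLen; ++i)
        p[i] = 0;
}

// Read a key (e.g. from a key file) directly into the scratch buffer,
// mask it, copy the masked form out, and immediately wipe the scratch.
bool import_key(std::istream& in, unsigned char* out_masked)
{
    if (!in.read(reinterpret_cast<char*>(g_scratch), kKeyLen)) {
        wipe_scratch();
        return false;
    }
    xor_with_mask();                       // scratch now holds the masked key
    std::memcpy(out_masked, g_scratch, kKeyLen);
    wipe_scratch();
    return true;
}

// Briefly unmask a key into the scratch buffer, run an operation on the
// cleartext, then wipe immediately.
template <typename Op>
void with_cleartext_key(const unsigned char* masked, Op op)
{
    std::memcpy(g_scratch, masked, kKeyLen);
    xor_with_mask();                       // scratch now holds cleartext, briefly
    op(g_scratch, kKeyLen);
    wipe_scratch();
}
```

The only point of the sketch is that g_scratch is the single place a cleartext key ever appears, and that it is wiped immediately after each use.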

@update: I just wanted to address some criticisms in the comments below:

The phrase "security through obscurity" is commonly misunderstood. In the formal analysis of security algorithms, "obscurity" - that is, methods of hiding data that are not cryptographically secure - does not increase the formal security of a cryptographic algorithm. And that is true in this case. Given that the keys are stored on the user's machine and must be used by the program on that machine, there is nothing that can be done to make the keys on that machine cryptographically secure. No matter what process you use to hide or lock the data, at some point the program must use it, and a determined hacker can put breakpoints in the code and watch when the program uses the data. But no suggestion in this thread can eliminate that risk.

Some people have suggested that the OP find a way to use special hardware with locked memory chips, or some operating system method of locking memory. This is cryptographically no more secure. Ultimately, if an attacker has physical access to the machine, a determined enough hacker could use a logic analyzer on the memory bus and recover any data. Besides, the OP has stated that the target systems don't have such specialized hardware.

But this doesn't mean that there aren't things you can do to mitigate risk. Take the simplest of access keys - the password. If you have physical access to a machine, you can put in a key logger, or get memory dumps of running programs, etc. So formally the password is no more secure than if it were written in plaintext on a sticky note glued to the keyboard. Yet everyone knows keeping a password on a sticky note is a bad idea, and that it is bad practice for programs to echo passwords back to the user in plaintext, because practically speaking these things dramatically lower the bar for an attacker. Yet formally a sticky note with a password is no less secure.

The suggestion I make above has real security advantages. None of the details matter except the XOR masking of the keys, and there are ways of making this process a little better. XOR'ing the keys limits the number of places that the programmer must consider as attack vectors. Once the keys are XOR'ed, you can have different keys all over your program; you can copy them, write them to a file, send them over the network, etc. None of these things will compromise your program unless the attacker has the XOR buffer. So there is a SINGLE BUFFER that you have to worry about. You can then relax about every other buffer in the system (and you can mlock or VirtualLock that one buffer).

Once you clear out that XOR buffer, you permanently and securely eliminate any possibility that an attacker can recover any keys from a memory dump of your program. You limit your exposure both in the number of places and in the amount of time that keys can be recovered. And you put in place a system that lets you work with keys easily, without having to worry, during every operation on an object that contains keys, about easy ways the keys could be recovered.

So you can imagine, for example, a system where keys refcount the XOR buffer, and when no keys are needed any longer, you zero and delete the XOR buffer; all keys become invalidated and inaccessible without you having to track them down, and without having to worry about whether a memory page got swapped out while still holding plaintext keys.

You also don't have to literally keep around a buffer of random data. You could, for example, use a cryptographically secure random number generator and a single random seed to generate the XOR buffer as needed. The only way an attacker can recover the keys is with access to that single generator seed.
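
As a purely structural illustration of that idea (std::mt19937_64 is used here only as a stand-in to show the shape of the approach; it is not cryptographically secure, and a real implementation would substitute a proper CSPRNG or keystream):

```cpp
#include <cstddef>
#include <cstdint>
#include <random>

// Regenerate the mask bytes from a single 64-bit seed whenever they are
// needed, instead of keeping the mask buffer resident in memory.
// NOTE: std::mt19937_64 is NOT cryptographically secure; it only stands in
// for a real CSPRNG/keystream to show the structure of the idea.
void derive_mask(std::uint64_t seed, unsigned char* mask, std::size_t len)
{
    std::mt19937_64 gen(seed);
    for (std::size_t i = 0; i < len; ++i)
        mask[i] = static_cast<unsigned char>(gen());
}
```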

You could also allocate the plaintext buffer on the stack as needed and zero it out when done, so that it is extremely unlikely the stack data ever leaves the on-chip cache. If the complete key is never decoded at once, but rather one word at a time as needed, even access to the stack buffer won't reveal the key.

Rafael Baptista
  • 10,259
  • 4
  • 35
  • 56
  • 3
    Excellent answer. This is the best one by far. +1 because I can only upvote it once. – Freedom_Ben May 12 '13 at 03:44
  • 1
    @Freedom_Ben This answer is flawed (the buffer in question could have been swapped out to disk, for example) and I'm not sure what it really achieves, if anything. In fact, knowing that a particular virtual address will always be the one holding the sensitive data at some point in time makes it *EASIER* to mount an attack: set hardware breakpoints that trigger when that virtual address is modified. I'll say it again and make it really simple: if the attacker has physical access to the machine it is **GAME OVER** and there is **NOTHING** that software can do to prevent that, not even XOR. – Nik Bougalis May 12 '13 at 20:51
  • 2
    This makes absolutely no sense whatsoever. The XOR scheme is idiotic for a number of reasons. One is that all an attacker has to do is find the place where the XOR master data is stored, do the XOR, and that's it. The other is that it's completely vulnerable to a variety of attacks such as known plaintext. It probably won't do any harm provided nobody relies on it and also does the various sane things suggested by others -- but if someone expects this to provide some measure of security and relies on it, very bad things will happen. – David Schwartz May 12 '13 at 21:19
  • 1
    If someone has physical access to the system, AND the program is running, AND the attacker knows the location so they can set breakpoints, then yes, this is vulnerable. However there are many attacks that can be launched remotely to recover contents of memory. @RafaelBaptista suggestion will make it more difficult (not impossible) to recover the keys in this manner. And what if they're swapped to disc? Yes it's possible, but less likely since the keys are used and zeroed ASAP. Having a root password on a system is useless too if the attacker has physical access, but we still do it. – Freedom_Ben May 12 '13 at 21:31
  • 1
    To clarify, this is "security through obscurity." I agree it's dangerous if you let it give you a false sense of security, but it does make it harder to exploit the system. This is like hiding the spare key under the doormat. If the crook finds it then you're hosed, but if the choices are leave the key in the lock, or hide it under the doormat, obviously the latter is preferable. The answer points this out by saying "you can't eliminate the risk, but you can mitigate it." – Freedom_Ben May 12 '13 at 21:33
  • 1
    @Freedom_Ben: The point is, this barely mitigates the risk while there are solutions that actually do mitigate the risk. If you do this and also other things, this won't hurt you. But if you rely on this, it will hurt you very, very badly. There is a too common story of how people who know very, very little about security implement security. It starts out with bad ideas like this one that at least, the thinking goes, can't hurt. In the middle, this is the only thing that's implemented. And in the end, the people who relied on it get screwed. – David Schwartz May 12 '13 at 21:42
  • Some criticisms addressed above. – Rafael Baptista May 13 '13 at 14:06
  • 2
    Some of the criticisms posted here seem to be along the lines of "there is no very good solution so you shouldn't even try". Presumably you don't lock the front door of your house either, because it's trivial to smash a window and get in that way.... – jcoder May 13 '13 at 15:55
  • @jcoder Sorry, but your analogy is flawed. The XOR doesn't amount to locking the door of your house - it amounts to putting a sticker on the door that says "this door is locked." There are appropriate security measures (e.g. locking memory so it cannot be swapped out) that were suggested to the OP. Sorry if we don't endorse snake oil. – Nik Bougalis May 13 '13 at 17:48
  • @NikBougalis Well fair enough. I wouldn't for a moment want anyone to think this offered real security. But sometimes a "beware of the dog" sign and a fake security camera are all you need if you're only guarding something of low value. Obviously if there is a better solution then use that, of course. But anyway yes, I agree. – jcoder May 13 '13 at 19:26
  • @jcoder: The point is, there are better solutions. This is overly-complex and provides very little real security for the required development effort. Most importantly, this is not how real world security professionals solve this real world problem. – David Schwartz May 13 '13 at 20:35
  • The useful idea here is concentration of secrets, so you only have to worry about one small area of memory. However if you are going to take this approach, use a standard authenticated encryption algorithm (with appropriate IVs etc as required) rather than using the N-time pad as described in the answer. – Michael May 13 '13 at 22:02
  • 4
    @david: You say "there are better solutions" - but then you suggest using "mlock". But that offers zero additional cryptographic security. In this case he is worried about a cold boot attack - but if you can boot the system, you can boot whatever you want, so you don't even need root/admin access to scan memory. All the keys will be sitting in memory in plaintext. You're also wrong about this being susceptible to a known plaintext attack - the mask bits can change as often as you like. – Rafael Baptista May 13 '13 at 22:38
  • 2
    @RafaelBaptista: I'm not going to debate this with you in SO comments. It's a truly bad idea and you make yourself look foolish defending it. It *is* vulnerable to a known plaintext attack because if you know the plaintext of a key, you can XOR the plaintext key with the XORed key and recover the mask. If it was a good idea, real programs would do things this way, and they don't. – David Schwartz May 13 '13 at 22:47
  • I fully agree with this answer. While `mlock` & other methods should be implemented where possible, obfuscating values in memory mitigates against untargeted memory searching. (I.e., where a malicious process simply dumps RAM & pattern matches to find strings to send back to the threat actor.) Sure, a malicious process that can access user-memory absolutely compromises all system security, but practically speaking untargeted/unmanned malicious processes can be slowed down or thrown off by obfuscating values. IMO, this is comparable to DRM. Granted, an encryption algo would be preferred to XOR. – Spencer D Apr 26 '19 at 19:32
5

There is no platform-independent solution. All the threats you're addressing are platform specific, and thus so are the solutions. There is no law that requires every CPU to have registers. There is no law that requires CPUs to have caches. The ability of another program to access your program's RAM - in fact, the existence of other programs at all - is a platform detail.

You can create some functions like "allocate secure memory" (that by default calls malloc) and "free secure memory" (that by default calls memset and then free) and then use those. You may need to do other things (like locking the memory to prevent your keys from winding up in swap) on platforms that require them.
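
A minimal sketch of such a pair of functions with the defaults described above (the function names are arbitrary; platform-specific locking such as mlock or VirtualLock would be added inside the allocator where available):

```cpp
#include <cstddef>
#include <cstdlib>
#include <cstring>

// Default "allocate secure memory": plain malloc. On platforms that support
// it, this is also where you would mlock()/VirtualLock() the block.
void* allocate_secure(std::size_t len)
{
    return std::malloc(len);
}

// Default "free secure memory": memset, then free. Note that a plain memset
// just before free may be elided by the optimizer; memset_s, explicit_bzero
// or SecureZeroMemory are better choices where available.
void free_secure(void* p, std::size_t len)
{
    if (p == nullptr)
        return;
    std::memset(p, 0, len);
    std::free(p);
}
```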

David Schwartz
  • 166,415
  • 16
  • 184
  • 259
2

Aside from the very good comments above, you have to consider that even IF you succeed in getting the key stored in registers, that register content will most likely get stored in memory when an interrupt comes in, and/or when another task gets to run on the machine. And of course, someone with physical access to the machine can run a debugger and inspect the registers. The debugger may be an "in-circuit emulator" if the key is important enough that someone will spend a few thousand dollars on such a device - which means no software on the target system at all.

The other question, of course, is how much this matters. Where do the keys originate? Is someone typing them in? If not, and they are stored somewhere else (in the code, on a server, etc.), then they will be in memory at some point, even if you succeed in keeping them out of memory while you actually use them. If someone is typing them in, isn't the real security risk that someone, in one way or another, forces the person(s) who know the keys to reveal them?

Mats Petersson
  • 119,687
  • 13
  • 121
  • 204
2

As others have said, there is no secure way to do this on a general purpose computer. The alternative is to use a Hardware Security Module (HSM).

These provide:

  • greater physical protection for the keys than normal PCs/servers (protecting against direct access to RAM);
  • greater logical protection as they are not general purpose - no other software is running on the machine so no other processes/users have access to the RAM.

You can use the HSM's API to perform the cryptographic operations you need (assuming they are somewhat standard) without ever exposing the unencrypted key outside of the HSM.

Michael
  • 879
  • 4
  • 12
  • I thought about including a link to the IBM cryptoprocessors in my answer, but was in a bit of a rush when I wrote it. Thanks for posting. – Nik Bougalis May 12 '13 at 20:47
1

If your platform supports POSIX, you would want to use mlock to prevent your data from being paged to the swap area. If you're writing code for Windows, you can use VirtualLock instead.
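
For example, a minimal POSIX sketch using mlock might look like this (the fixed key size is arbitrary, and error handling is reduced to a perror; the Windows VirtualLock equivalent is analogous):

```cpp
#include <sys/mman.h>
#include <cstdio>
#include <cstring>

int main()
{
    unsigned char key[32];

    // Prevent the pages containing this buffer from being written to swap.
    if (mlock(key, sizeof key) != 0) {
        std::perror("mlock");
        return 1;
    }

    // ... fill and use the key here ...

    // Wipe before unlocking; a plain memset may be elided by the optimizer,
    // so explicit_bzero/memset_s are preferable where available.
    std::memset(key, 0, sizeof key);
    munlock(key, sizeof key);
    return 0;
}
```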

Keep in mind that there's no absolute way to protect the sensitive data from being leaked if you require the data to be in its unencrypted form at any point in time in RAM (we're talking about plain ol' RAM here, nothing fancy like TrustZone). All you can do (and hope for) is to minimize the amount of time that the data remains unencrypted, so that the adversary has less time to act on it.

JosephH
  • 7,782
  • 4
  • 29
  • 58
0

As the other answers mentioned, you may implement a software solution, but if your program runs on a general-purpose machine and OS and the attacker has access to your machine, it will not protect your sensitive data. If your data is really very sensitive and an attacker can physically access the machine, a general software solution won't be enough.

I once saw some platforms dealing with very sensitive data that had sensors to detect when someone was physically accessing the machine, and that would actively delete the data when that was the case.

You already mentioned the cold boot attack; the problem is that data in ordinary RAM can still be accessed for up to a few minutes after shutdown.

Étienne
  • 4,131
  • 2
  • 27
  • 49
0

If yours is a user-mode application and the memory you are trying to protect is from other user-mode processes, try the CryptProtectMemory API (not for persistent data).
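
For illustration, a hedged sketch of what calling that API might look like (Windows-only; the buffer length must be a multiple of CRYPTPROTECTMEMORY_BLOCK_SIZE, and the key contents here are placeholders):

```cpp
#include <windows.h>
#include <dpapi.h>      // CryptProtectMemory / CryptUnprotectMemory
#pragma comment(lib, "crypt32.lib")

int main()
{
    // Length must be a multiple of CRYPTPROTECTMEMORY_BLOCK_SIZE (16 bytes).
    unsigned char secret[CRYPTPROTECTMEMORY_BLOCK_SIZE * 2] = {};  // placeholder key bytes

    // Encrypt in place; only this process can decrypt it afterwards.
    if (!CryptProtectMemory(secret, sizeof secret,
                            CRYPTPROTECTMEMORY_SAME_PROCESS))
        return 1;

    // ... later, decrypt in place just before use ...
    if (!CryptUnprotectMemory(secret, sizeof secret,
                              CRYPTPROTECTMEMORY_SAME_PROCESS))
        return 1;

    // ... use the key, then wipe it ...
    SecureZeroMemory(secret, sizeof secret);
    return 0;
}
```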

Maxim
  • 13
  • 3