1343 votes

As I continue to build more and more websites and web applications I am often asked to store users' passwords in a way that they can be retrieved if/when the user has an issue (either to email them a forgotten password, walk them through it over the phone, etc.). When I can, I fight bitterly against this practice and I do a lot of ‘extra’ programming to make password resets and administrative assistance possible without storing their actual password.

When I can’t fight it (or can’t win) then I always encode the password in some way so that it, at least, isn’t stored as plaintext in the database—though I am aware that if my DB gets hacked it wouldn't take much for the culprit to crack the passwords, so that makes me uncomfortable.

In a perfect world folks would update passwords frequently and not duplicate them across many different sites—unfortunately I know MANY people that have the same work/home/email/bank password, and have even freely given it to me when they need assistance. I don’t want to be the one responsible for their financial demise if my DB security procedures fail for some reason.

Morally and ethically I feel responsible for protecting what can be, for some users, their livelihood even if they are treating it with much less respect. I am certain that there are many avenues to approach and arguments to be made for salting hashes and different encoding options, but is there a single ‘best practice’ when you have to store them? In almost all cases I am using PHP and MySQL if that makes any difference in the way I should handle the specifics.
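For reference, the non-recoverable, salted hashing I try to push for instead looks roughly like this in modern PHP (a minimal sketch using the built-in password API, assuming PHP 5.5+; $storedHash stands for whatever hash was previously saved in MySQL):

```php
<?php
// Minimal sketch of non-recoverable, salted password storage using PHP's
// built-in password API (password_hash generates and embeds a per-password
// salt automatically).

// On registration or password change: store only the hash in MySQL.
$hash = password_hash($_POST['password'], PASSWORD_DEFAULT);

// On login: compare the submitted password against the stored hash.
// $storedHash is the value previously loaded from the database.
if (password_verify($_POST['password'], $storedHash)) {
    // Authenticated -- the plaintext is never written anywhere.
}
```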

Additional Information for Bounty

I want to clarify that I know this is not something you want to have to do and that in most cases refusal to do so is best. I am, however, not looking for a lecture on the merits of taking this approach; I am looking for the best steps to take if you do take it.

In a note below I made the point that websites geared largely toward the elderly, mentally challenged, or very young can become confusing for people when they are asked to perform a secure password recovery routine. Though we may find it simple and mundane, in those cases some users need the extra assistance of either having a service tech help them into the system or having the password emailed/displayed directly to them.

In such systems the attrition rate from these demographics could hobble the application if users were not given this level of access assistance, so please answer with such a setup in mind.

Thanks to Everyone

This has been a fun question with lots of debate and I have enjoyed it. In the end I selected an answer that both retains password security (I will not have to keep plain text or recoverable passwords) and makes it possible for the user base I specified to log into a system without the major drawbacks I have found in normal password recovery.

As always there were about 5 answers that I would like to have marked as correct for different reasons, but I had to choose the best one--all the rest got a +1. Thanks everyone!

Also, thanks to everyone in the Stack community who voted for this question and/or marked it as a favorite. I take hitting 100 up votes as a compliment and hope that this discussion has helped someone else with the same concern that I had.

Mike W
Shane
  • 155
    I think he knows that it is not good. He's still looking for the best solution under the stated requirements. – stefanw Feb 17 '10 at 22:02
  • 33
    At the end of the day all you will be doing is carefully implementing an avoidable vulnerability. – rook Feb 18 '10 at 09:14
  • True, but the question was for the most responsible way of doing it ;) – stefanw Feb 18 '10 at 09:47
  • 20
    @Michael Brooks - I want you to know that I am absolutely in agreement with CWE-257 and would love to just quote that verbatim each and every time I am asked to make passwords recoverable as plaintext. However, in reality, clients and users are rarely interested in NIST regulations and just want me to do it anyway. 90% of the time I can convince them otherwise but in that 10% of time when I can't I am trying to determine the best course of action--in those cases CWE-257 is ashes in my hand (unfortunately). – Shane Feb 18 '10 at 13:21
  • 5
    @Shane, Perhaps you should ask how you can implement a Buffer Overflow Vulnerability in a Secure way. There is nothing stopping you from implementing a secure system. Use password resets; if they don't know their password then they should make a new one. – rook Feb 18 '10 at 20:28
  • 8
    @Michael Brooks - I am not arguing--AT ALL--that your suggestion is the best way to do it. But certain user bases (for example, the elderly or actual non-computer users) who are targeted by sites I have worked on are confused by what we consider standard password reset routines. In those cases functionality (not security) would call for a password reminder rather than a reset. Those are the times when this has come up for me--clients don't want to throw away valuable users in certain demographics by asking them to perform what is to them a highly technical task. I hope that makes sense. – Shane Feb 18 '10 at 20:34
  • 2
    @Shane/Michael: By default, let the user be able to recover the password in plain text. Then have a user setting that allows the user to make the secure choice of not being able to recover the password in plain text. Advanced users will choose this setting. – Gladwin Burboz Feb 22 '10 at 17:44
  • 4
    Loving this discussion! However, one important point has been glossed over by nearly everyone... My initial reaction was very similar to @Michael Brooks, till I realized, like @stefanw, that the issue here is broken requirements, but these are what they are. But then, it occurred to me that that might not even be the case! The missing point here is the unspoken *value* of the application's assets. Simply speaking, for a low value system, a fully secure authentication mechanism, with all the process involved, would be overkill, and the **wrong** security choice. – AviD Feb 23 '10 at 14:45
  • 4
    (continuing) Obviously, for a bank, the "best practices" are a must, and there is no way to ethically violate CWE-257. But it's easy to think of low value systems where it's just not worth it (but a simple password is still required). It's important to remember, true security expertise is in finding appropriate tradeoffs, NOT in dogmatically spouting the "best practices" that anyone can read online. – AviD Feb 23 '10 at 14:48
  • 81
    @AviD: The "low value" of the system has **absolutely no bearing** on this issue because **people reuse their passwords**. Why can't people understand this simple fact? If you crack the passwords on some "low value" system, you will likely have several valid passwords for other "high value" systems. – Aaronaught Feb 24 '10 at 14:19
  • 20
    Another point has also been glossed over, which I just mentioned in the comment stream to my answer: How do you know that the person asking for these requirements is trustworthy? What if the "usability" excuse is just a façade masking a real intent to steal the passwords at some point in the future? Your naiveté may have just cost customers and shareholders millions. How many times must security experts repeat this before it finally sinks in: **The most common and most serious security threats are always INTERNAL.** – Aaronaught Feb 24 '10 at 14:23
  • 8
    Is it ironic that the most ethical solution here is to *lie* and tell the client passwords cannot be stored securely and be recoverable? Offer a more secure solution at the same time, and maybe you'll have convinced someone that passwords must be irreversibly encrypted. – ehdv Feb 25 '10 at 21:42
  • I responded to the comments here, down in my response, since it was quite lengthy - i think its important to review the analysis and the discussion of issues raised. http://stackoverflow.com/questions/2283937/how-should-i-ethically-approach-user-password-storage-for-later-plaintext-retriev/2319090#2319090 – AviD Mar 01 '10 at 02:37
  • 2
    Apart from the whole security discussion, it really eludes me why you would need such a system, even for specific audiences. If your users already have to rely on support for retrieving the lost password, then why don't you have support reset the password for your users in the first place? Let support generate an OTP, send a mail to the user with a link (or password) which will allow him/her to login and change his/her password back to something sensible and there's no need for anyone to see what the password was... – wimvds Mar 01 '10 at 12:44
  • 4
    You can save them as encrypted md5sums, and setup a $10 Billion CPU network to later brute-force attack the encrypted hash and get the password in plain text when required... Simple.. – user3459110 May 07 '14 at 15:00
  • 3
    A simple solution could be using an OpenID provider (or more than one), such as Google, Facebook etc. That way you can at least say: "Hey, it's their security system, ask them!", while avoiding having to implement password reset functions yourself. – Tuncay Göncüoğlu Jan 13 '15 at 15:58

26 Answers

1037 votes

How about taking another approach or angle at this problem? Ask why the password is required to be in plaintext: if it's so that the user can retrieve the password, then strictly speaking you don't really need to retrieve the password they set (they don't remember what it is anyway), you need to be able to give them a password they can use.

Think about it: if the user needs to retrieve the password, it's because they've forgotten it. In which case a new password is just as good as the old one. But, one of the drawbacks of common password reset mechanisms used today is that the generated passwords produced in a reset operation are generally a bunch of random characters, so they're difficult for the user to simply type in correctly unless they copy-n-paste. That can be a problem for less savvy computer users.

One way around that problem is to provide auto-generated passwords that are more or less natural language text. While natural language strings might not have the entropy that a string of random characters of the same length has, there's nothing that says your auto-generated password needs to have only 8 (or 10 or 12) characters. Get a high-entropy auto-generated passphrase by stringing together several random words (leave a space between them, so they're still recognizable and typeable by anyone who can read). Six random words of varying length are probably easier to type correctly and with confidence than 10 random characters, and they can have a higher entropy as well. For example, a 10-character password drawn randomly from uppercase, lowercase, digits and 10 punctuation symbols (for a total of 72 valid symbols) would have an entropy of 61.7 bits, while a six-word passphrase drawn randomly from a dictionary of 7776 words (as Diceware uses) would have an entropy of about 77.5 bits. See the Diceware FAQ for more info.

  • a passphrase with about 77 bits of entropy: "admit prose flare table acute flair"

  • a password with about 74 bits of entropy: "K:&$R^tt~qkD"

I know I'd prefer typing the phrase, and with copy-n-paste, the phrase is no less easy to use than the password either, so no loss there. Of course if your website (or whatever the protected asset is) doesn't need 77 bits of entropy for an auto-generated passphrase, generate fewer words (which I'm sure your users would appreciate).
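If you want to generate such passphrases, a rough sketch in PHP follows (assumptions: a Diceware-style word list in wordlist.txt with one word per line, and PHP 7+ for random_int; the file name and function name are placeholders, not anything this answer prescribes):

```php
<?php
// Sketch: build a passphrase from N random words in a word-list file.
// random_int() is a cryptographically secure source of randomness.
function generatePassphrase(int $words = 6, string $listFile = 'wordlist.txt'): string
{
    $list = file($listFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    $picked = [];
    for ($i = 0; $i < $words; $i++) {
        $picked[] = $list[random_int(0, count($list) - 1)];
    }
    return implode(' ', $picked);
}

// Entropy is roughly $words * log2(count($list)); with 7776 words and six
// picks that is about 77.5 bits.
echo generatePassphrase(), PHP_EOL;
```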

I understand the arguments that there are password protected assets that really don't have a high level of value, so the breach of a password might not be the end of the world. For example, I probably wouldn't care if 80% of the passwords I use on various websites were breached: all that could happen is someone spamming or posting under my name for a while. That wouldn't be great, but it's not like they'd be breaking into my bank account. However, given the fact that many people use the same password for their web forum sites as they do for their bank accounts (and probably national security databases), I think it would be best to handle even those 'low-value' passwords as non-recoverable.

wallyk
Michael Burr
  • 93
    +1 for passphrases, which presently seem to offer the best balance of password strength and user recall. – Aaronaught Feb 28 '10 at 17:04
  • 3
    (+1 + Bounty) - @Michael Burr, I feel like this solution best fits the question posed as well as the bounty information. Pass phrases will allow the user base I specified the easiest method of logging into the site to reset passwords from an email system. I will use this in combination with phone support using a similar method (probably a single word passphrase) when necessary. Good answer. – Shane Mar 01 '10 at 13:49
  • 197
    Also you can make a full sentence, e.g. "The [adjective] [noun] is [verb]ing [adverb]" - "The green cat is jumping wildly." Have lists for the categories; with 1024 choices for each you have 40 bits of entropy. – Dominik Weber Aug 25 '10 at 19:50
  • 28
    +1 for considering password reuse as a critical issue for avoiding it – lurscher Oct 13 '10 at 15:17
  • 2
    This is one of those situations where I feel dumb, because I didn't think of it. Nice solution and well-explained! – Jon Smock Oct 13 '10 at 15:54
  • How do you feel about password reset links that expire if not used within a reasonable time frame? – jason saldo Oct 13 '10 at 18:29
  • I'm just revamping a legacy system's password retrieval mechanism and instead of a random digit code, I'm going to use a short random phrase. I like it better, even if people are most likely to cut and paste it. – Hans Oct 14 '10 at 00:42
  • 1
    @jms: I think a time-limited password reset link is a nice idea - just make sure it includes a random, non-guessable nonce. It's an auto-generated password that people don't have to type or copy-n-paste. By the way, this page got a lot of attention yesterday - what article mentioned it? – Michael Burr Oct 14 '10 at 15:35
  • @Michael Burr Agreed. re: attention http://twitter.com/#!/spolsky/status/27244187286 – jason saldo Oct 14 '10 at 20:48
  • 57
    "Think about it - if the user needs to retrieve the password, it's because they've forgotten it" -- not necessarily true! I often want to get my password because I'm on the laptop, and I KNOW my machine back home has my password stored, or it's written down somewhere safe, and I don't want to break that by getting issued with a new one. – joachim Dec 10 '10 at 17:08
  • 1
    A time-limited password reset link is actually more user friendly than sending them a new password. @joachim -- have you tried KeePass? (http://keepass.info/) I like to describe it as a DVCS for your passwords -- perfect for your kind of use case. – jammycakes Apr 28 '11 at 12:02
  • 5
    [The highest-scoring question on the new IT Security SE site](http://security.stackexchange.com/questions/6095/xkcd-936-short-complex-password-or-long-dictionary-passphrase) deals with the validity of this entropy calculation. (Well, technically, it deals with the xkcd that @Pieter linked.) – Pops Nov 15 '11 at 19:45
  • About passphrases, should they contain spaces or not ? Thanks in advance. – James P. Jun 29 '13 at 01:27
  • That opening paragraph is the logic I use time and time again to rebuke those who insist they want a system where they can retrieve a password without changing it. "If you can't remember it, why do you care if it's changed?" – crush Jan 16 '14 at 18:35
  • 1
    @jammycakes KeePass has one critical flaw: If you forget your main password to unlock your DB (yes, I did, somehow), you're screwed. – Agi Hammerthief Mar 10 '14 at 18:42
  • 1
    With the suggested solution, make sure to use a big word list, not some hand-made list. Otherwise it might be pretty easy for an attacker to retrieve all the words then bruteforce at light speed. – Gras Double Mar 17 '14 at 02:11
  • 1
    "all that could happen is a someone spamming or posting under my name for a while" - which may well include libel of public persons or false accusations against a company or its products. Let alone commit a copyright infringement in your name. That, in turn, can easily land you a costly lawsuit. In terms of your finances, with some bad luck that can be *just as bad* as breaking directly into your bank account. – O. R. Mapper Apr 26 '14 at 21:12
  • 3
    @AwalGarg: That's not how password entropy works. The entropy calculations in the question already measure entropy through how the password is generated, not how an attacker might attempt to guess it. An attacker who knows the structure of the password still has 77 bits of entropy to go through. – user2357112 supports Monica May 26 '14 at 07:38
  • 1
    Has nobody heard of dictionary attacks anymore? This is why we started using random characters for passwords in the first place, people. – Codefun64 Jun 20 '14 at 00:18
  • 2
    @Codefun64 Even if you knew that all passwords would be exactly 6 dictionary words, that's still equivalent to a 6-character password in a language with tens of thousands of characters (there are hundreds of thousands, possibly even millions of "words", but presumably it would only choose relatively common words, of which there are still tens of thousands to choose from). Have fun dictionary-attacking that. – neminem Oct 22 '14 at 22:41
  • 1
    @neminem Actually, there are a few hundred to maybe a couple thousand commonly used words in passwords that are English words, common sense would tell you this, check out the most common passwords (which are truly ridiculous) at www.cbsnews.com/news/the-25-most-common-passwords-of-2013/ to see what I mean. So, let's say the user picks three words (note: a normal, average person would NOT pick three words, that's too many for them to care about and remember), and they have 2,000 common words they'd normally choose from. That's only eight billion cycles. I can run that on my laptop in < 1 day. – Codefun64 Oct 23 '14 at 23:22
  • 1
    Just made a basic dictionary attack script and ran it locally. Finished in an hour and 32 minutes. The password I gave it to crack was "quiet rabbit balloon". – Codefun64 Oct 24 '14 at 02:56
  • 2
    @Codefun64 But this answer isn't *talking* about words that are commonly used in passwords, nor is it talking about the user picking them and only picking <=3. Quoth, "One way around that problem is to provide auto-generated passwords that are more or less natural language text": that is, *generating* a full grammatical English sentence and giving it to the user to remember (and if they forget it - fine, generating them a new one). Unlikely that too many passwords use the word "acute" (as in the example in the answer), for instance, but it's not archaic or anything. – neminem Oct 24 '14 at 04:32
  • 2
    Just to remark that there's no reason to limit oneself to a 7776 (6⁵) dictionary. It's fairly easy to come up with a dictionary of around 2^15 words and get yourself 75 bits of entropy with only 5 words. – Emmet Mar 22 '16 at 01:20
592 votes

Imagine someone has commissioned a large building to be built - a bar, let's say - and the following conversation takes place:

Architect: For a building of this size and capacity, you will need fire exits here, here, and here.
Client: No, that's too complicated and expensive to maintain, I don't want any side doors or back doors.
Architect: Sir, fire exits are not optional, they are required as per the city's fire code.
Client: I'm not paying you to argue. Just do what I asked.

Does the architect then ask how to ethically build this building without fire exits?

In the building and engineering industry, the conversation is most likely to end like this:

Architect: This building cannot be built without fire exits. You can go to any other licensed professional and he will tell you the same thing. I'm leaving now; call me back when you are ready to cooperate.

Computer programming may not be a licensed profession, but people often seem to wonder why our profession doesn't get the same respect as a civil or mechanical engineer - well, look no further. Those professions, when handed garbage (or outright dangerous) requirements, will simply refuse. They know it is not an excuse to say, "well, I did my best, but he insisted, and I've gotta do what he says." They could lose their license for that excuse.

I don't know whether or not you or your clients are part of any publicly-traded company, but storing passwords in any recoverable form would cause you to fail several different types of security audits. The issue is not how difficult it would be for some "hacker" who got access to your database to recover the passwords. The vast majority of security threats are internal. What you need to protect against is some disgruntled employee walking off with all the passwords and selling them to the highest bidder. Using asymmetrical encryption and storing the private key in a separate database does absolutely nothing to prevent this scenario; there's always going to be someone with access to the private database, and that's a serious security risk.

There is no ethical or responsible way to store passwords in a recoverable form. Period.

Aaronaught
  • 124
    @Aaronaught - I think that is a fair and valid point, but let me twist that on you. You are working on a project for a company as an employee and your boss says 'this is a requirement of our system' (for whatever reason). Do you walk off the job full of righteous indignation? I know that there is an obligation when I am in full control to be responsible--but if a company chooses to risk failure of audits or liability then is it my duty to sacrifice my job to prove a point, or do I seek the BEST and SAFEST way to do what they say? Just playing devil's advocate.. – Shane Feb 18 '10 at 16:12
  • 13
    @Shane: I suppose it depends whether or not you consider yourself a professional. Professionals have an obligation to uphold certain standards whether they are "in control" or not. I am not saying I would storm out of the office and never come back, but my response to that would be, *"No, it is **not** a requirement of your system, not anymore."* Let me twist this back on you - if your employer asked you to start taking registration passwords and use them to attempt to hijack the registered e-mail addresses for spam, would you do it? – Aaronaught Feb 18 '10 at 16:37
  • 5
    Of course not, and if I knew of anyone who was doing so I would take them down. I don't find that to be a 1 to 1 comparison though--there is a difference between defending against possible security breaches and exploiting them myself. Even though I think we are philosophically getting a little far afield from the question, with your statement here I feel like I should suppose that any website asking for my username and password without an SSL certificate is just as evil as someone who has just hacked my email account because they have allowed for the possibility of sniffing my credentials? – Shane Feb 18 '10 at 17:12
  • 44
    I am not a lawyer, but consider this. If your supervisor orders you to do something against the interests of the company, such as by exposing them to an easily avoided liability, is it your job to obey or to politely refuse? Yes, they're your boss, but they have a boss of their own, even if it's the investors. If you *don't* go over their heads, whose head is going to roll when your security hole is exploited? Just something to consider. – Steven Sudit Feb 18 '10 at 17:52
  • 4
    @Shane: My intent wasn't to equate the "evilness" of harvesting e-mail addresses/passwords to that of storing them in recoverable form; I was simply pointing out that as an IT professional, your responsibilities go beyond simply following orders. At some point you have to put your foot down and say "this is unethical behaviour," and it shouldn't need to get as bad as the above example for that to happen. You can't allow people who don't know the first thing about security to be dictating security policies for a *public* service. – Aaronaught Feb 18 '10 at 18:00
  • 8
    @Aaronnaught - Overall I agree with you and I treat my responsibilities professionally, which is why I do fight rather bitterly whenever someone makes a request like that (which I state in my question). However, there have been times when I have had no real choice in the matter short of walking away and that isn't always possible. My real concern is how this could be handled if you were to do it, I really am not arguing for the merits of doing so for there are none. And I have, in the past, been lectured by 'security' people that the risk is necessary--sometimes there really is no winning. – Shane Feb 18 '10 at 18:32
  • 12
    @Shane: I suppose, if it were me, if they put a proverbial gun to my head, I would probably walk away. Maybe it sounds far-fetched or idealistic, but when one action is taken to breach the public trust, others are sure to follow. The CEO or CIO is not the expert on this; I am, and if I say that something can't be done safely/responsibly, that's the end of the discussion, **unless** I'm proven wrong with **facts**. None of the answers to this question provide any guarantee of security against malicious *insiders*, who are universally considered to be the most significant security threat. – Aaronaught Feb 18 '10 at 20:00
  • 3
    @Aaronaught So there is a government agency that performs obligatory checks for recoverable passwords? And if there is, you are not allowed to legally build this system? ... Nope. Additionally, why should a malicious person bother about passwords if he could simply walk out with all the other data? Credit card numbers, anyone? It's a weakness (admittedly a serious one) the customer must understand and evaluate. No more, no less. Period. – sfussenegger Feb 23 '10 at 17:30
  • 5
    @sfussenegger: Extremely weak argument. Recoverable passwords does not imply recoverable credit card numbers, and even if it did, someone who steals credit card numbers is far more likely to get caught and would only be able to do limited damage once the credit unions have been notified. And I never claimed that there was a "government agency" performing these security audits - do you not realize that audits can be forced by investors, a board of directors, or even business partners or customers, depending on the contract? – Aaronaught Feb 23 '10 at 20:47
  • 1
    @Aaronaught It wasn't any weaker than your answer itself. Federal law is a completely different requirement than potential audits the client might face in the future. If the client has this requirement and understands the potential threats (completely!), there's nothing stopping you from implementing it. In the end, we don't even know the full background; this makes it hard to evaluate any potential threats. Nevertheless, I do agree with you. It's a very bad idea and you should insist on avoiding recoverable passwords. But in the end, there are others to decide and others to be responsible. – sfussenegger Feb 24 '10 at 08:17
  • 15
    @sfussenegger: What you are trying to say was in fact **the whole point of my answer**. Engineers, doctors, and other professions aren't bound by "federal laws" either, they are bound by professional ethics and standards. If we want to be treated like professionals then we have to start acting like professionals. We aren't children, we don't have to follow every instruction given by a client who we know for a fact **does not understand** the scope of potential threats, no matter how many times he says "yeah, yeah, whatever, just do it." – Aaronaught Feb 24 '10 at 13:13
  • 68
    Developers are always trying to say how our jobs are so much harder than others because we get garbage requirements that change all the time. Well, this is a perfect example of why; our profession desperately needs a backbone. Our profession desperately needs to be able to say "no, this is not an acceptable requirement, this is not something that I can develop in good faith, you may be my client/employer but I have professional responsibilities toward your customers and the public, and if you want this done then you'll have to look elsewhere." – Aaronaught Feb 24 '10 at 13:17
  • 1
    @Aaronaught As I said, it's impossible to judge whether this is an unacceptable requirement without knowing any background. I've developed software that is used for automated security audits. There I've learned that sometimes certain security restrictions might be relaxed by introducing some others (but never ignore them completely). There's no need to insist on such things as if they were carved in stone. Simply explain to your client what it would take to get his requirements adequately secure. And if he doesn't speak security, he'll certainly speak money - they all do ;) – sfussenegger Feb 24 '10 at 13:49
  • 36
    @sfussenegger: You don't need to know the background. It's unacceptable. You're assuming that the client is 100% trustworthy - what if he's asking for this requirement specifically so he can make off with the passwords later? Security is one of the few items in development that **are** carved in stone. There are some things you just don't do, and storing recoverable passwords is ten of them. – Aaronaught Feb 24 '10 at 14:17
  • 3
    @Aaronaught Just to give an example: in the early days of Twitter, 3rd party services needed a user's password to act on his behalf (The thought itself sends shivers down my spine). At this time, I would have appreciated any developer spending as much time on the issue as @Shane. In reality, lots of well-intentioned services certainly stored the password as plain text. Anyway, the point is that storing recoverable passwords was a requirement for this use cases. The only alternative would have been to absent oneself from this thriving market. – sfussenegger Feb 24 '10 at 19:35
  • 2
    @sfusseneger: That is similar to the many "Password Management" programs out there; secondary passwords can be encrypted with the user's primary password, which is fine, because it is not recoverable to anyone *except the user who knows the primary password*. When every user has a unique encryption key known only to himself, that is effectively non-recoverable and *completely* different from having a single key owned by some administrator/employee. – Aaronaught Feb 24 '10 at 20:05
  • 2
    @Aaronaught this would require that the user enters his password any time a service would like to post something on his behalf. But what if this isn't possible, i.e. some kind of bot that posts update? Simply imagine a service that fetches a blog's RSS feed and automatically posts new entries to twitter. You'll certainly need the user's password in plain text for this API call. So how would you do this? – sfussenegger Feb 25 '10 at 10:18
  • 3
    @sfussenegger, this is exactly what e.g. OAuth is for. – molf Feb 25 '10 at 13:12
  • 5
    @sfussenegger: To be blunt, I wouldn't develop or use any service that purports to use somebody's credentials without their explicit permission, *every* time. That's a massive can of worms, technologically, ethically, legally. Systems that do need to allow this kind of behaviour will have an *impersonation* or *delegation* subsystem, where users or services can log into surrogate accounts that are given specific permissions to act as another user. E-mail (i.e. Microsoft Exchange) is a simple example of this. Delegation exists specifically to *prevent* having shared passwords. – Aaronaught Feb 25 '10 at 14:46
  • 2
    @molf Certainly yes, but I was referring to "the early days of Twitter" where OAuth wasn't an option. There are certainly still some other examples around, but Twitter is still a well-known one. If the service provider does not support OAuth or similar methods (which Twitter didn't at the beginning), what would you do? – sfussenegger Feb 25 '10 at 15:06
  • 9
    @Aaronaught so you would have told a client/boss who wanted an RSS-to-Twitter (early days, as mentioned above) service that you can't do this until an impersonation or delegation subsystem is in place. But what would you say if he asks what others in the field do? That they aren't acting as ethically as you do? Do you believe you would have helped to improve the situation of potential users by sending your client to a "non-ethical" engineer? I could rest easier if I knew that I did my best to get this as safe as possible instead of simply looking away, that's for sure. – sfussenegger Feb 25 '10 at 15:16
  • 1
    @Aaronaught ... but let's stop it here. I enjoyed the discussion with you. You have an admirable enthusiasm for your job, that's for sure. I hope you didn't mind that I've tried to be "a bit" provocative here ;) – sfussenegger Feb 25 '10 at 15:20
  • 4
    This is a false analogy. It equates lessened security with the life-threatening omission of fire exits. It is nonsense to think that all systems require NSA security measures. A closer, albeit flawed, analogy would be mandating that all doors have deadbolts instead of simple knob locks and that installing anything less than deadbolts is unethical. – Thomas Feb 27 '10 at 20:43
  • 28
    @Thomas: Everyone always uses the same excuse for weak password security, that being the "non-criticality" of the data. You, like every other apologist, ignore the fact that passwords are **reused**, meaning that your "closer" analogy is in fact worthless. Change it to requiring someone to give you their alarm code and entire keyring for "safe-keeping" and then having anything less than military-grade security on it; you may not get that person killed, but you could easily end up ruining his life. – Aaronaught Feb 28 '10 at 15:53
  • 4
    @Aaronaught: Every security person uses the same weak excuse for extreme security measures. Security is based on risk assessment. You want to put an alarm code on the cookie jar and are aghast when everyone uses 12345 for their alarm code. The way you store the password means nothing if you don't also enforce strong password requirements and limit password reuse, and on it goes. Claiming that having weak password security on an irrelevant system will ruin someone's life is again extreme. This is about living in the real world: the person paying the checks makes the call, not the developer. – Thomas Feb 28 '10 at 16:11
  • 10
    @Thomas: I don't know what you're trying to say here. First you say that one-way hashes don't preclude password strength requirements. I agree. Then you repeat the same "irrelevant system" argument, which is itself irrelevant because the same password can and will be used for other systems. Finally you fall back on the "person paying the checks" argument, which is a non-starter because you cannot guarantee that this person and those he delegates to are all trustworthy. Your prime responsibility here is to the customers. All of these points have already been discussed. – Aaronaught Feb 28 '10 at 16:25
  • 3
    The "irrelevant system" argument as you put it is THE fundamental issue: risk assessment. You want to treat every system like it's the FBI crime database. Weak passwords and password reuse are the fault of the user; not the system and that does not, alone, justify any security measure with respect to credentials. The error in your arg that I pointed out is that 1-hashes are not enough. To you, to be "ethicial", you would have to have pwd length reqs, pwd complexity reqs, pwd reuse restrictions and so on. All that cannot be justified if the system is irrelevant. – Thomas Feb 28 '10 at 16:55
  • 7
    @Thomas: One-way hashes are *part* of a complete password policy; they are also **the most important part, by a wide margin.** If you equate one-way hashes to the "FBI crime database" then you have no right to be working on any security code, anywhere. You go ahead and rant; I'm done arguing with this baloney. – Aaronaught Feb 28 '10 at 18:05
  • 6
    You have point blank said that anything less than 1-way hashes is not secure. That is extremist nonsense. It is akin to saying that without an alarm system, a home is not secure. I used the FBI db as example of the slippery slope that security nazis take: "we must have 1-way hashes, and min password lengths, and complexity requirements and password expiration and...". Without a reasonable assessment of risk, every justification becomes one based on fear. – Thomas Feb 28 '10 at 20:11
  • 37
    OK, let's do a risk assessment, right here and now. "If you store passwords in a recoverable form, you are creating a non-trivial risk of passwords being stolen. It is also likely that at least some users will use the same passwords for their e-mail and bank accounts. If passwords are stolen and bank accounts are drained, it will probably make headlines, no one will ever do business with you again, and you will likely be sued out of existence." Can we cut the crap now? The fact that you brought the word "Nazis" into this clearly shows that you have no sense of reason. – Aaronaught Feb 28 '10 at 20:22
  • 5
    BTW, thinking that architects and builders and whatnot do always behave ethically is nonsense, there are a lot people who'll work on stolen material or degrade the quality of the cement (by adding more sand) and so on. This is totally unrelated to the issue at hand anyway, but always assuming that other professions have a better position to refuse things is naïve... [Continues] – Vinko Vrsalovic Mar 01 '10 at 07:46
  • 3
    ... For the issue at hand, I think the best way has already been mentioned, which is to sidestep the issue completely by having the tech support person provide a new password for them, making sure they are at least pronounceable passwords. If this is not possible, then the customer has very little faith in the elderly or mentally challenged :) – Vinko Vrsalovic Mar 01 '10 at 07:47
  • 2
    @Vinko Vrsalovic: True enough about other professions behaving unethically. The question did ask how to do this ethically though, and my response was/is essentially that it's unethical no matter how you approach it. If you don't care about professional ethics then all bets are off! – Aaronaught Mar 01 '10 at 14:19
  • 1
    @Aaronaught The issue here is that you have no sense of perspective. Again, the example you have given is extreme. If there were an actual, tangibly documented liability issue with the system you were building and its security, that might be different, but that is not at all clear nor definitely always the case. – Thomas Mar 01 '10 at 20:05
  • 1
    @Aaronaught: "If you store passwords in a recoverable form, you are creating a non-trivial risk of passwords being stolen" A faulty assumption. Yes, you are creating a risk that the passwords can be stolen, but the degree of risk is entirely related to the protection of the decryption key(s). I agree that finding ways of convincing customers to use 1-way hashes and password resets is by far the best solution. However, at the end of the day, if the client says no, even if it costs more, a professional would document their objection and do the best they can to mitigate their client's risk. – Thomas Mar 01 '10 at 20:09
  • 13
    Yes, the degree of risk is related to the protection of the decryption keys, but that risk is still always non-trivial. The keys will always be readable by someone in the organization (otherwise you might as well use one-way hashes), and that means they can be stolen. Businesses have "disaster recovery" plans for a reason - it's not the *likelihood* of something bad happening that matters, it's the *consequences*. And for someone claiming I have no perspective, you've demonstrated a shockingly poor understanding of just how wide the scope of this problem is. Password theft **cannot** happen. – Aaronaught Mar 01 '10 at 20:15
  • 3
    You can mitigate risk by splitting the key(s) amongst multiple people so that no one person has access to the keys. There are solutions to key(s) management. You continue to harp on people using the same password everywhere. Frankly, THAT is a bigger risk, created and born by the user, than any implementation could ever hope to be and to say that every designer of every system, no matter how insignificant, must account for this risk lest they be considered unethical is simply not universally shared. – Thomas Mar 01 '10 at 22:50
  • Btw, the original question was regarding arguments to convince the client to use one-way hashes. A good argument to that end would be the cost of the cumbersome protocols necessary to ensure protection of the decryption key(s) and protection of the passwords. They wouldn't be able to decrypt a password if the key holders are in Tahiti. Using 1-way hashes and password resets would eliminate this problem. That said, I think that finding legal precedence in terms of liability is by far the best ammunition. – Thomas Mar 01 '10 at 22:56
  • 6
    Frankly, the only thing I'm suggesting is that there are solutions. Every risk you suggested can be mitigated but you do not want to hear that. If the risk isn't zero, it isn't acceptable to you. The real world rarely works in such absolutes. Lastly, to say that I've added nothing of substance is worse than the kettle calling the pot black. Your only answer has been: "you can't". The world will not stop spinning on its axis because someone used asymmetrical encryption instead of a 1-way hash for an inconsequential system. – Thomas Mar 01 '10 at 23:55
  • Did we get digged or something? Not that I'm complaining, but what's with all the sudden attention? – Aaronaught Oct 13 '10 at 15:38
  • @Aaronaught spolsky twittered you. – San Jacinto Oct 13 '10 at 16:29
  • @Aaronaught well, not YOU specifically.. but the question. – San Jacinto Oct 13 '10 at 16:36
  • 1
    I work on a number of integration systems that require storing passwords in a way that means the system can automatically get back the plaintext version. Honestly it never feels good doing it and usually we do manage to get a completely separate account setup unique to the integration, but in the cases it can't, you can not simply just say no, especially for features customers want. For these we generally use GPG with keys *not* coded in the application or db. – Ryaner Dec 28 '11 at 10:10
  • 4
    Haha, good one! As an architect, I can say that this is really really true. Not in that dimension, but in fact clients are trying to "correct" you, even if they have absolutely no idea... Strange world. – Sliq Apr 27 '13 at 00:49
  • 1
    @Aaronaught: "There is no ethical or responsible way to store passwords in a recoverable form. Period." Yes, technically there are many ways to store sensitive recoverable information in a way that only the owner can see and/or change. Bitcoin (a virtual currency) for instance is basically a shared p2p public table. But even being public it's safe - only the owner of each bitcoin can see/change their record. So, yes, there is. Period (:P). While I disagree with your answer (I will consider it was just a lack of technical knowledge at the time), I will thank you for sparking a debate about ethics. – Daniel Loureiro Jan 22 '14 at 19:14
  • Quoting the first part of this answer: "For a building of this size and capacity, you will need..." The same goes for IT security. It depends on the size and capacity, i.e. what the system is doing, what it is used for and by whom. What are the risks? A forum site for a flower club is probably fine storing plain-text passwords of only 4-6 characters. If we're talking about an email system or a bank, it's a very different story. So I can't agree with blanket statements that say "you shall do ..." across the board. – Trevor Dec 29 '15 at 14:03
  • 3
    Completely false equivalency. In your example as soon as the architect says that fire exits are a legal requirement and the building would be condemned or demolished without them (even if it got built in the first place, which it probably wouldn't), the client says "Well damn, that sucks. Fine." Until such time as there is a legal requirement for the proper storage and hashing of passwords, this is a crap analogy. – Phillip Copley Feb 19 '16 at 20:31
  • A late comment on this subject: in my experience, whoever can access the database can defeat whatever you do, so there is no way to keep things safe. Consider this example that happened to me! I used md5 hashing to store the clients' passwords and I made a page to allow them to reset their passwords. So far so good; then I discovered one day that I had lost my access to the website. After spending 8 hours reviewing the code, I discovered that my boss had been replacing his record's password field value with those of other records and accessing other accounts!!! There is no way to be safe as long as the DB is accessible by anyone! – Monah Apr 23 '17 at 16:58
206 votes

You could encrypt the password + a salt with a public key. For logins just check whether the stored value equals the value calculated from the user input + salt. If there comes a time when the password needs to be restored to plaintext, you can decrypt manually or semi-automatically with the private key. The private key may be stored elsewhere and may additionally be encrypted symmetrically (which will then require human interaction to decrypt the password).

I think this is actually kind of similar to the way the Windows Recovery Agent works.

  • Passwords are stored encrypted
  • People can login without decrypting to plaintext
  • Passwords can be recovered to plaintext, but only with a private key that can be stored outside the system (in a bank safe, if you want).
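A rough PHP sketch of this idea, assuming PHP 7.2+ with libsodium. One caveat the summary above glosses over: typical public-key encryption (including sealed boxes) is randomized, so you cannot literally re-encrypt the login input and compare ciphertexts; the sketch therefore keeps an ordinary one-way hash for login checks alongside the sealed "escrow" copy, while recovery requires the keypair kept off the server:

```php
<?php
// Sketch only: a recoverable "escrow" copy next to a normal one-way hash.
// The keypair is generated once, offline; only the public key is deployed.

$keypair   = sodium_crypto_box_keypair();            // keep off-system (bank safe, etc.)
$publicKey = sodium_crypto_box_publickey($keypair);  // the only part the app ever sees

// Storing a password:
$password   = 'example passphrase';                          // placeholder
$loginHash  = password_hash($password, PASSWORD_DEFAULT);    // for login checks
$escrowBlob = sodium_crypto_box_seal($password, $publicKey); // recoverable copy

// Logging in never touches the sealed copy:
$ok = password_verify('example passphrase', $loginHash);

// Recovery, done manually and off the web server, with the private keypair:
$recovered = sodium_crypto_box_seal_open($escrowBlob, $keypair);
```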
stefanw
  • 34
    -1 passwords should never be "encrypted" It is a violation of CWE-257 http://cwe.mitre.org/data/definitions/257.html – rook Feb 17 '10 at 21:52
  • 100
    1. The question stated that the password should be recoverable to plaintext, so this is a requirement. 2. I'm using asymmetric encryption here and not symmetric encryption. The key to decrypt is not necessary for daily operations and can be kept in a bank safe. The argumentation in the link is valid, but does not apply to this situation. – stefanw Feb 17 '10 at 21:59
  • 5
    CWE-257 specifically says storing passwords in a "recoverable format". What you have proposed is a recoverable format and therefore is a vulnerability according to NIST. This is why I gave you a -1. Furthermore, this vulnerability can be avoided altogether by forcing users to reset their password. This added complexity is unnecessary and produces a weaker system. – rook Feb 18 '10 at 09:22
  • 57
    True, but could you agree that given the requirements this is the most responsible way of doing it? You can hit me all day with your CWE-257; it's not going to change the interesting problem of safely storing and working with credentials and being able to recover them to their original form if required. – stefanw Feb 18 '10 at 09:53
  • @Michael Brooks - I am commenting on my main question in regard to this comment and those on other posts. – Shane Feb 18 '10 at 13:17
  • 2
    @Michael Brooks: your 1st comment directly contradicts the CWE: "Use strong, non-reversible encryption to protect stored passwords." probably a typo? – devio Feb 18 '10 at 13:52
  • 2
    Requiring a private key doesn't prevent insiders decrypting passwords. Insiders are coincidentally going to be people who already have access to the encrypted passwords. – Steven Sudit Feb 18 '10 at 17:44
  • 10
    Windows Recovery Agent is also a poor example here, as it deals with actual encryption, not password management. An encryption key is **not** the same as a password; the rules and practices surrounding each are *completely* different. Encryption and authentication are not the same. Encryption is for **privacy** - a key is used to protect *data*. Authentication is for **identity**, where the key *is the data* (it is one *factor* in the authentication process). So I repeat, **encryption and authentication are not the same.** You cannot validly apply the principles of one to the other. – Aaronaught Feb 18 '10 at 18:15
  • 2
    @Aaronaught Sure you can't. Nobody wants to store recoverable passwords. But for the sake of the requirement that passwords should be recoverable, it may very well fit the case. @Steven You can enforce a 4 or more eyes policy on the private key (e.g. multiple persons encrypting it sequentially). – stefanw Feb 18 '10 at 19:54
  • Unless you're in the military (in which case you'll have much more strict security anyway), that policy is just not going to happen, and it *still* doesn't protect against malicious insiders (either they can conspire or the sysadmin can install a few keyloggers - in some companies such things are standard). Even if it did happen, it would make the system basically useless for password recovery. I know you're just answering the question that was asked, and I don't have a problem with that; my problem is with the suggestion that this is secure enough to be responsibly used in production. – Aaronaught Feb 18 '10 at 20:05
  • @Devio, Yes, storing passwords in a recoverable format is a vulnerability. Did you read the link? – rook Feb 18 '10 at 20:31
  • 5
    All true. I wouldn't build such a system, I took the question as an abstract challenge with given requirements. On a side note: if keyloggers are involved, everything goes: you can get the password right from the people. – stefanw Feb 18 '10 at 20:31
  • 2
    @Shane: To avoid the issue of someone internally misusing the private key, you could further double-encrypt the password using a symmetric key based on some info that the user has (birth place, pet name, etc.) but that is not stored in the system. To recover the password the user provides that key, with which you decrypt, and then you decrypt again using the private key to get the original password. – Gladwin Burboz Feb 22 '10 at 17:38
  • 2
    In one company I worked for we had an information security team; if a significant security threat was identified with a proposed system, a Director of the company had to sign a form saying they understood the risk and were happy for the system to go ahead... – Jason Roberts Feb 23 '10 at 15:56
  • 16
    +1 Where's the point in obsessively insisting on CWE-257? It's a weakness (CWE), not a vulnerability (CVE). Comparing recoverable passwords with buffer overflows is comparing apples and oranges. Simply make sure the client understands the problem (let him sign something that says so - otherwise he might not remember anything if in question) and go ahead. Additionally, required security measures depend on the value of the system and potential risk of an attack. If a successful attacker could only cancel some newsletter subscriptions, there's no reason to argue about any issues. – sfussenegger Feb 23 '10 at 17:19
  • -1 @sfuseenegger: Please see my other comments elsewhere. A successful attacker who gets hold of your users' passwords can cause much more serious damage than just cancelling some newsletter subscriptions, they can breach their Facebook, Gmail, PayPal, bank accounts etc. – jammycakes Feb 24 '10 at 09:53
  • 1
    @jammycakes That's what I meant by "value of the system". You're already assuming that users provided the passwords themselves. – sfussenegger Feb 24 '10 at 15:30
  • Yes, but systems where the user provides the passwords themselves are exactly what we're talking about here. – jammycakes Feb 25 '10 at 09:06
  • 5
    @jammycakes not explicitly, no - at least I can't find anything saying so. Additionally, there are cases where passwords must be available in plain text. For example, the early days of Twitter where tons of services acted on the behalf of users without any interactions. I just want to make clear that there are valid use-cases and @stefanw did his best to answer the question. I simply can't believe people are down-voting him for giving an acceptable answer. – sfussenegger Feb 25 '10 at 10:52
  • @sfussenegger: not explicitly, but in practice that's what will be happening. As far as Twitter is concerned, that was bad, bad practice in itself on the part of Twitter because it is teaching users to be phished. These days there are better alternatives available such as oAuth -- see http://adactio.com/journal/1357 for a discussion. – jammycakes Feb 25 '10 at 13:42
  • 3
    @jammycakes it certainly was bad practice, no doubt about it. But as a 3rd party developer you had no choice but (ranting and) asking yourself this question: "How should I ethically approach user password storage for later plaintext retrieval?" ;) – sfussenegger Feb 25 '10 at 13:53
  • btw, not asking for passwords in favor of OAuth as suggested by your link is merely an option if there is no such mechanism available - unless you'd argue to abandon this idea. But unfortunately, you'd do this in favor of your competitors rather than your users' education. – sfussenegger Feb 25 '10 at 13:59
  • True, but it's a moot point because OAuth is now available. But in cases such as that, where you had no choice but to ask your users for their passwords to a third party service, my answer would be not to save them on your servers in the first place -- instead, you should discard them as soon as you've got all the data that you need. – jammycakes Feb 25 '10 at 20:31
  • I responded to the comments here, down in my response, since it was quite lengthy - i think its important to review the analysis and the discussion of issues raised. http://stackoverflow.com/questions/2283937/how-should-i-ethically-approach-user-password-storage-for-later-plaintext-retriev/2319090#2319090 – AviD Mar 01 '10 at 02:37
  • Pretty clever answer. – Captain Hypertext Jun 08 '16 at 01:54
133 votes

Don't give up. The weapon you can use to convince your clients is non-repudiability. If you can reconstruct user passwords via any mechanism, you have given their clients a legal means of repudiation, and they can repudiate any transaction that depends on that password, because there is no way the supplier can prove that they didn't reconstruct the password and put the transaction through themselves. If passwords are correctly stored as digests rather than ciphertext, this is impossible, ergo either the end-client executed the transaction himself or breached his duty of care w.r.t. the password. In either case that leaves the liability squarely with him. I've worked on cases where that would amount to hundreds of millions of dollars. Not something you want to get wrong.

user207421
  • 2
    Webserver logs don't count in a court? Or in this case they would be assumed as faked as well? – Vinko Vrsalovic Mar 01 '10 at 08:04
  • 10
    @Vinko Vrsalovic, Web server logs SHOULDNT count in court, in order to do so you need to prove non-repudiation, proof of authenticity, chain of evidence, etc etc. which webserver logs are clearly not. – AviD Mar 01 '10 at 19:34
  • 7
    Exactly. The supplier has to prove that *only* the client could have performed that transaction. A web server log doesn't do that. – user207421 Mar 02 '10 at 00:27
  • Not all passwords are actually needed for "transactions", so to speak. Say the website is for developing a webpage bookmarking list. In this case the limit of liability (which is usually called out in the T&C's when registering to the website) is zero, because there is no financial transaction. If the website has no actions affecting others then at most, data is lost to the hacked user. The company is protected by the T&C's. – Sablefoste Dec 02 '15 at 07:13
  • 1
    @Sablefoste On that website. If the user uses the same password elsewhere you're creating a risk of leaking his private credentials. If you never engage in the practice you can't cause the problem. – user207421 Dec 09 '15 at 07:47
94 votes

You can not ethically store passwords for later plaintext retrieval. It's as simple as that. Even Jon Skeet can not ethically store passwords for later plaintext retrieval. If your users can retrieve passwords in plain text somehow or other, then potentially so too can a hacker who finds a security vulnerability in your code. And that's not just one user's password being compromised, but all of them.

If your clients have a problem with that, tell them that storing passwords recoverably is against the law. Here in the UK at any rate, the Data Protection Act 1998 (in particular, Schedule 1, Part II, Paragraph 9) requires data controllers to use the appropriate technical measures to keep personal data secure, taking into account, among other things, the harm that might be caused if the data were compromised -- which might be considerable for users who share passwords among sites. If they still have trouble grokking the fact that it's a problem, point them to some real-world examples, such as this one.

The simplest way to allow users to recover a login is to e-mail them a one-time link that logs them in automatically and takes them straight to a page where they can choose a new password. Create a prototype and show it in action to them.
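
For illustration, a minimal PHP sketch of that one-time-link flow might look like the following; the `password_resets` table, the `sendMail()` helper and the URL are assumptions for the example, not part of the answer:

```php
<?php
// Hypothetical sketch: generate a single-use login/reset token, store only
// its hash, and e-mail the raw token as a link. Table and helper names are
// invented for illustration.
function sendResetLink(PDO $db, int $userId, string $email): void
{
    $token   = bin2hex(random_bytes(32));          // 256-bit single-use token
    $expires = date('Y-m-d H:i:s', time() + 3600); // valid for one hour

    $stmt = $db->prepare(
        'INSERT INTO password_resets (user_id, token_hash, expires_at)
         VALUES (?, ?, ?)'
    );
    $stmt->execute([$userId, hash('sha256', $token), $expires]);

    // The raw token only ever travels in the e-mail; the landing page logs
    // the user in once and immediately asks them to choose a new password.
    sendMail($email, 'Your sign-in link',
        "Click to sign in and choose a new password:\n" .
        "https://example.com/reset?token=" . $token);
}
```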

Here are a couple of blog posts I wrote on the subject:

Update: we are now starting to see lawsuits and prosecutions against companies that fail to secure their users' passwords properly. Example: LinkedIn slapped with $5 million class action lawsuit; Sony fined £250,000 over PlayStation data hack. If I recall correctly, LinkedIn was actually encrypting its users' passwords, but the encryption it was using was too weak to be effective.

jammycakes
  • 5,276
  • 1
  • 36
  • 48
  • 8
    @jammycakes - This is a good thing to do on a low security system, but if you are storing any data of high value then you have to assume that the person's email is already compromised and that sending them a direct login link compromises your system. +1 for answering my question with a feasible alternative while pointing out a flaw in the logic as a whole. I don't want PayPal sending a direct login link EVER. It may sound paranoid but I always assume my email account is corrupt--it keeps me honest. ;) – Shane Feb 24 '10 at 14:32
  • Absolutely -- I'd expect my bank to at the very least give me a phone call and verify my identity before letting me reset (**not** recover) my password. What I've outlined here is the absolute minimum standard of password security that I would expect from any website, anywhere. – jammycakes Feb 24 '10 at 20:02
  • 1
    Ignoring the bank or paypal who wouldn't have the restriction you set anyway; If you assume their email is compromised, how is any online method possible? If you email a generated password, how is that any more secure? – Peter Coulton Oct 13 '10 at 17:17
  • I'm not talking about obtaining a single individual's password, I'm talking about obtaining multiple passwords from a database. If a system stores passwords recoverably to plain text, a hacker can potentially write a script to extract all the passwords from your database. – jammycakes Oct 13 '10 at 18:44
  • I'm wondering about sending a link/password by email, travelling in plain form through unknown network nodes... – Jakub Apr 26 '12 at 13:43
  • Absolutely agree. If there is a way to get my plain password, then I won't rely on that site. Simple. There is no ethical way to do it; actually it goes against the concept of a password itself. – jhmilan Aug 04 '15 at 10:57
  • LinkedIn was not encrypting passwords, and the encryption was not too weak (because they weren't encrypting, remember). They were **hashing** passwords, but their **hashing** algorithm was too weak to be effective. (SHA1, to be exact.) – Scott Arciszewski Dec 09 '15 at 07:29
  • @ScottArciszewski Is there anything wrong with choosing to hash over encrypting (ignoring choice in algorithm) other than that with encryption you can retrieve the initial data? I've been using bcrypt for my applications – Abdul Jul 26 '16 at 12:47
  • As long as you're using a [password hashing](https://paragonie.com/blog/2016/02/how-safely-store-password-in-2016) algorithm instead of a fast hash, that's fine. Bcrypt is a password-hashing algo, not a fast-hashing algo, so that's great. – Scott Arciszewski Jul 26 '16 at 13:08
  • +1 for "Even Jon Skeet..." because you raise an important point: even experts can't do this in a way that is ironclad, not even if they wanted to. A recoverable password is recoverable, period. Leaving it to the user to choose unique passwords would be the only thing that protects them, and we know how often that happens. And that would only protect their accounts on other sites; not the current one. – trnelson Feb 08 '17 at 15:06
55

After reading this part:

In a note below I made the point that websites geared largely toward the elderly, mentally challenged, or very young can become confusing for people when they are asked to perform a secure password recovery routine. Though we may find it simple and mundane in those cases some users need the extra assistance of either having a service tech help them into the system or having it emailed/displayed directly to them.

In such systems the attrition rate from these demographics could hobble the application if users were not given this level of access assistance, so please answer with such a setup in mind.

I'm left wondering if any of these requirements mandate a retrievable password system. For instance: Aunt Mabel calls up and says "Your internet program isn't working, I don't know my password". "OK" says the customer service drone "let me check a few details and then I'll give you a new password. When you next log in it will ask you if you want to keep that password or change it to something you can remember more easily."

Then the system is set up to know when a password reset has happened and display a "would you like to keep the new password or choose a new one" message.
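
As a rough sketch (assuming PHP with PDO and an invented `password_was_reset` column), the support-side reset described above could look something like this:

```php
<?php
// Rough sketch: support generates a short temporary password, stores only
// its hash, and flags the account so the next login offers "keep it or
// choose a new one". Column names are invented.
function supportResetPassword(PDO $db, int $userId): string
{
    $tempPassword = bin2hex(random_bytes(4)); // 8 characters, easy to read over the phone

    $stmt = $db->prepare(
        'UPDATE users SET password_hash = ?, password_was_reset = 1 WHERE id = ?'
    );
    $stmt->execute([password_hash($tempPassword, PASSWORD_DEFAULT), $userId]);

    return $tempPassword; // read out to Aunt Mabel by the support rep
}
// At the next login, if password_was_reset is set, show the
// "keep this password or choose a new one" page before anything else.
```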

How is this worse for the less PC-literate than being told their old password? And while the customer service person can get up to mischief, the database itself is much more secure in case it is breached.

Comment on what's bad about my suggestion and I'll suggest a solution that actually does what you initially wanted.

Mr. Boy
  • 52,885
  • 84
  • 282
  • 517
  • 4
    @john - I think that is a perfectly viable solution. Prepare to get flamed over internal threats though! You know, if I were to do this with an intermediate password reset (tech sets the password manually as a temporary measure and tells Mabel to type 1234 as her password) then it would probably work well on a system not containing important data. If it were a high security environment though we then have a problem where cust service can set the CEO's password to 1234 and log in directly. There is no perfect solution but this one works in a lot of instances. (+1) – Shane Feb 25 '10 at 18:02
  • 5
    I just noticed this answer. @Shane, I don't understand why you predicted flaming over "internal threats." The ability to change a password is not a notable weakness; the problem is the ability to discover a password that's likely to be used for *other* services - her e-mail, her bank, her online shopping sites that have her CC information stored. That weakness is not manifested here; if Bob resets Mabel's password and tells it to her over the phone, that doesn't give him access to any of her other accounts. I would, however, *force* rather than "suggest" a password reset on login. – Aaronaught Feb 28 '10 at 16:10
  • @Aaronaught - I do see your point, but what I am thinking of are times where even the customer service folks are locked out of certain areas of a system (such as payroll, accounting, etc) and allowing them to directly set a password is a security issue in and of itself. I do see your point though that the type of system I asked this question about differs largely from an internal accounting system. We could probably have an entirely different discussion about proprietary internal systems and password security therein. – Shane Mar 01 '10 at 00:05
  • 1
    @Shane: Then the question makes even less sense. I assumed that you wanted someone to read them a password over the phone. If you wanted to e-mail users their passwords through some automated self-service system then you might as well dispense with the password entirely because it's being "protected" with something much weaker. Maybe you need to be a lot more specific about exactly what kind of usability scenarios you're trying to support. Maybe that analysis will subsequently show you that recoverable passwords are not even necessary. – Aaronaught Mar 01 '10 at 00:14
  • 4
    The code provided by the support person doesn't literally have to be the new password. It can just be a one-time code which unlocks the password-reset function. – kgilpin Sep 18 '13 at 18:46
42

Michael Brooks has been rather vocal about CWE-257 - the fact that whatever method you use, you (the administrator) can still recover the password. So how about these options:

  1. Encrypt the password with someone else's public key - some external authority. That way you can't reconstruct it personally, and the user will have to go to that external authority and ask to have their password recovered.
  2. Encrypt the password using a key generated from a second passphrase. Do this encryption client-side and never transmit it in the clear to the server. Then, to recover, do the decryption client-side again by re-generating the key from their input. Admittedly, this approach is basically using a second password, but you can always tell them to write it down, or use the old security-question approach.

I think option 1 is the better choice, because it enables you to designate someone within the client's company to hold the private key. Make sure they generate the key themselves and store it, with instructions, in a safe, etc. You could even add security by electing to encrypt and supply only certain characters of the password to the internal third party, so they would have to crack the password to guess it. When those characters are supplied to the user, they will probably remember what the rest was!
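
A hedged PHP sketch of option 1 might look like this; the function name, the OAEP padding choice and the error handling are my assumptions rather than anything prescribed in the answer:

```php
<?php
// Sketch of option 1: keep the normal hash for logins, and additionally
// encrypt the password (or selected characters of it) to an external
// authority's RSA public key. Only the offline private-key holder can
// ever recover it.
function escrowPassword(string $password, string $publicKeyPem): string
{
    $publicKey = openssl_pkey_get_public($publicKeyPem);
    if ($publicKey === false) {
        throw new RuntimeException('Invalid public key');
    }

    $encrypted = '';
    if (!openssl_public_encrypt($password, $encrypted, $publicKey, OPENSSL_PKCS1_OAEP_PADDING)) {
        throw new RuntimeException('Encryption failed');
    }

    return base64_encode($encrypted); // store this blob; you cannot decrypt it yourself
}
```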

Phil H
  • 18,593
  • 6
  • 62
  • 99
  • 8
    And, of course, you could use any secret-splitting technique to require multiple someones from your company for decryption. But none of that fulfills the original requirements of being able to mail a user their passwords or have some run-of-the-mill first-level phone supporter walk them through logging in. – Christopher Creutzig Feb 24 '10 at 13:18
27

There's been a lot of discussion of security concerns for the user in response to this question, but I'd like to add a mention of the benefits. So far, I've not seen one legitimate benefit mentioned for having a recoverable password stored on the system. Consider this:

  • Does the user benefit from having their password emailed to them? No. They receive more benefit from a one-time-use password reset link, which would hopefully allow them to choose a password they will remember.
  • Does the user benefit from having their password displayed on screen? No, for the same reason as above; they should choose a new password.
  • Does the user benefit from having a support person speak the password to the user? No; again, if the support person deems the user's request for their password as properly authenticated, it's more to the user's benefit to be given a new password and the opportunity to change it. Plus, phone support is more costly than automated password resets, so the company also doesn't benefit.

It seems the only ones that can benefit from recoverable passwords are those with malicious intent or supporters of poor APIs that require third-party password exchange (please don't use said APIs ever!). Perhaps you can win your argument by truthfully stating to your clients that the company gains no benefits and only liabilities by storing recoverable passwords.

Reading between the lines of these types of requests, you'll see that your clients probably don't understand or actually even care at all about how passwords are managed. What they really want is an authentication system that isn't so hard for their users. So in addition to telling them how they don't actually want recoverable passwords, you should offer them ways to make the authentication process less painful, especially if you don't need the heavy security levels of, say, a bank:

  • Allow the user to use their email address for their user name. I've seen countless cases where the user forgets their user name, but few forget their email address.
  • Offer OpenID and let a third-party pay for the costs of user forgetfulness.
  • Ease off on the password restrictions. I'm sure we've all been incredibly annoyed when some web site doesn't allow your preferred password because of useless requirements like "you can't use special characters" or "your password is too long" or "your password must start with a letter." Also, if ease of use is a larger concern than password strength, you could loosen even the non-stupid requirements by allowing shorter passwords or not requiring a mix of character classes. With loosened restrictions, users will be more likely to use a password they won't forget.
  • Don't expire passwords.
  • Allow the user to reuse an old password.
  • Allow the user to choose their own password reset question.

But if you, for some reason (and please tell us the reason) really, really, really need to be able to have a recoverable password, you could shield the user from potentially compromising their other online accounts by giving them a non-password-based authentication system. Because people are already familiar with username/password systems and they are a well-exercised solution, this would be a last resort, but there's surely plenty of creative alternatives to passwords:

  • Let the user choose a numeric pin, preferably not 4-digit, and preferably only if brute-force attempts are protected against.
  • Have the user choose a question with a short answer that only they know the answer to, will never change, they will always remember, and they don't mind other people finding out.
  • Have the user enter a user name and then draw an easy-to-remember shape with sufficient permutations to protect against guessing (see this nifty photo of how the G1 does this for unlocking the phone).
  • For a children's web site, you could auto-generate a fuzzy creature based on the user name (sort of like an identicon) and ask the user to give the creature a secret name. They can then be prompted to enter the creature's secret name to log in.
Jacob
  • 72,750
  • 22
  • 137
  • 214
  • I responded to the comments here, down in my response, since it was quite lengthy - i think its important to review the analysis and the discussion of issues raised. http://stackoverflow.com/questions/2283937/how-should-i-ethically-approach-user-password-storage-for-later-plaintext-retriev/2319090#2319090 – AviD Mar 01 '10 at 10:52
  • Does the user benefit from having their password displayed on screen? In my opinion - definitely yes! Every time I get an obscure password from an internet provider, I thank Apple that I can make it visible so I don't have to retype it 100 times in pain. I can imagine how a disabled person would feel. – Dmitri Zaitsev Aug 24 '13 at 10:33
  • 1
    Why is displaying an obscure password better than letting you choose a new password you can remember? – Jacob Aug 25 '13 at 13:12
  • @Jacob: More entropy? – Alexander Shcheblikin Mar 08 '14 at 12:28
25

Pursuant to the comment I made on the question:
One important point has been very glossed over by nearly everyone... My initial reaction was very similar to @Michael Brooks, till I realized, like @stefanw, that the issue here is broken requirements, but these are what they are.
But then, it occurred to me that that might not even be the case! The missing point here is the unspoken value of the application's assets. Simply speaking, for a low-value system, a fully secure authentication mechanism, with all the process involved, would be overkill, and the wrong security choice.
Obviously, for a bank, the "best practices" are a must, and there is no way to ethically violate CWE-257. But it's easy to think of low value systems where it's just not worth it (but a simple password is still required).

It's important to remember, true security expertise is in finding appropriate tradeoffs, NOT in dogmatically spouting the "Best Practices" that anyone can read online.

As such, I suggest another solution:
Depending on the value of the system, and ONLY IF the system is appropriately low-value with no "expensive" asset (the identity itself, included), AND there are valid business requirements that make proper process impossible (or sufficiently difficult/expensive), AND the client is made aware of all the caveats...
Then it could be appropriate to simply allow reversible encryption, with no special hoops to jump through.
I am stopping just short of saying not to bother with encryption at all, because it is very simple/cheap to implement (even considering passable key management), and it DOES provide SOME protection (more than the cost of implementing it). Also, it's worth looking at how to provide the user with the original password, whether via email, displaying it on the screen, etc.
Since the assumption here is that the value of the stolen password (even in aggregate) is quite low, any of these solutions can be valid.


Since there is a lively discussion going on, actually SEVERAL lively discussions, in the different posts and separate comment threads, I will add some clarifications, and respond to some of the very good points that have been raised elsewhere here.

To start, I think it's clear to everyone here that allowing the user's original password to be retrieved, is Bad Practice, and generally Not A Good Idea. That is not at all under dispute...
Further, I will emphasize that in many, nay MOST, situations - it's really wrong, even foul, nasty, AND ugly.

However, the crux of the question is around the principle, IS there any situation where it might not be necessary to forbid this, and if so, how to do so in the most correct manner appropriate to the situation.

Now, as @Thomas, @sfussenegger and a few others mentioned, the only proper way to answer that question is to do a thorough risk analysis of any given (or hypothetical) situation, to understand what's at stake, how much it's worth to protect, and what other mitigations are in play to afford that protection.
No, it is NOT a buzzword, this is one of the basic, most important tools for a real-life security professional. Best practices are good up to a point (usually as guidelines for the inexperienced and the hacks); after that point thoughtful risk analysis takes over.

Y'know, it's funny - I always considered myself one of the security fanatics, and somehow I'm on the opposite side of those so-called "Security Experts"... Well, truth is - because I'm a fanatic, and an actual real-life security expert - I do not believe in spouting "Best Practice" dogma (or CWEs) WITHOUT that all-important risk analysis.
"Beware the security zealot who is quick to apply everything in their tool belt without knowing what the actual issue is they are defending against. More security doesn’t necessarily equate to good security."
Risk analysis, and true security fanatics, would point to a smarter, value/risk-based tradeoff, based on risk, potential loss, possible threats, complementary mitigations, etc. Any "Security Expert" who cannot point to sound risk analysis as the basis for their recommendations, or support logical tradeoffs, but would instead prefer to spout dogma and CWEs without even understanding how to perform a risk analysis, is naught but a Security Hack, and their Expertise is not worth the toilet paper they printed it on.

Indeed, that is how we get the ridiculousness that is Airport Security.

But before we talk about the appropriate tradeoffs to make in THIS SITUATION, let's take a look at the apparent risks (apparent, because we don't have all the background information on this situation, we are all hypothesizing - since the question is what hypothetical situation might there be...)
Let's assume a LOW-VALUE system, yet not so trivial that it's public access - the system owner wants to prevent casual impersonation, yet "high" security is not as paramount as ease of use. (Yes, it is a legitimate tradeoff to ACCEPT the risk that any proficient script-kiddie can hack the site... Wait, isn't APT in vogue now...?)
Just for example, let's say I'm arranging a simple site for a large family gathering, allowing everyone to brainstorm on where we want to go on our camping trip this year. I'm less worried about some anonymous hacker, or even Cousin Fred squeezing in repeated suggestions to go back to Lake Wantanamanabikiliki, than I am about Aunt Erma not being able to log on when she needs to. Now, Aunt Erma, being a nuclear physicist, isn't very good at remembering passwords, or even with using computers at all... So I want to remove all possible friction for her. Again, I'm NOT worried about hacks, I just don't want the silly mistake of a wrong login - I want to know who is coming, and what they want.

Anyway.
So what are our main risks here, if we symmetrically encrypt passwords, instead of using a one-way hash?

  • Impersonating users? No, I've already accepted that risk, not interesting.
  • Evil administrator? Well, maybe... But again, I don't care if someone can impersonate another user, INTERNAL or not... and anyway a malicious admin is gonna get your password no matter what - if your admin's gone bad, it's game over anyway.
  • Another issue that's been raised, is the identity is actually shared between several systems. Ah! This is a very interesting risk, that requires a closer look.
    Let me start by asserting that it's not the actual identity that's shared, but rather the proof, or the authentication credential. Okay, since a shared password will effectively allow me entrance to another system (say, my bank account, or gmail), this is effectively the same identity, so it's just semantics... Except that it's not. Identity is managed separately by each system, in this scenario (though there might be third-party id systems, such as OAuth - still, it's separate from the identity in this system - more on this later).
    As such, the core point of risk here, is that the user will willingly input his (same) password into several different systems - and now, I (the admin) or any other hacker of my site will have access to Aunt Erma's passwords for the nuclear missile site.

Hmmm.

Does anything here seem off to you?

It should.

Let's start with the fact that protecting the nuclear missiles system is not my responsibility, I'm just building a frakkin family outing site (for MY family). So whose responsibility IS it? Umm... How about the nuclear missiles system? Duh.
Second, If I wanted to steal someone's password (someone who is known to repeatedly use the same password between secure sites, and not-so-secure ones) - why would I bother hacking your site? Or struggling with your symmetric encryption? Goshdarnitall, I can just put up my own simple website, have users sign up to receive VERY IMPORTANT NEWS about whatever they want... Puffo Presto, I "stole" their passwords.

Yes, user education always does come back to bite us in the hienie, doesn't it?
And there's nothing you can do about that... Even if you WERE to hash their passwords on your site, and do everything else the TSA can think of, you haven't added ONE WHIT of protection to their password if they're going to keep promiscuously sticking their passwords into every site they bump into. Don't EVEN bother trying.

Put another way, You don't own their passwords, so stop trying to act like you do.

So, my Dear Security Experts, as the old lady in the Wendy's ads used to ask: "WHERE's the risk?"

Another few points, in answer to some issues raised above:

  • CWE is not a law, or regulation, or even a standard. It is a collection of common weaknesses, i.e. the inverse of "Best Practices".
  • The issue of shared identity is an actual problem, but misunderstood (or misrepresented) by the naysayers here. It is an issue of sharing the identity in and of itself(!), NOT about cracking the passwords on low-value systems. If you're sharing a password between a low-value and a high-value system, the problem is already there!
  • By the by, the previous point would actually point AGAINST using OAuth and the like for both these low-value systems, and the high-value banking systems.
  • I know it was just an example, but (sadly) the FBI systems are not really the most secured around. Not quite like your cat's blog's servers, but nor do they surpass some of the more secure banks.
  • Split knowledge, or dual control, of encryption keys does NOT happen just in the military; in fact, PCI-DSS now requires this from basically all merchants, so it's not really so far out there anymore (IF the value justifies it).
  • To all those who are complaining that questions like these are what makes the developer profession look so bad: it is answers like those, that make the security profession look even worse. Again, business-focused risk analysis is what is required, otherwise you make yourself useless. In addition to being wrong.
  • I guess this is why it's not a good idea to just take a regular developer and drop more security responsibilities on him, without training to think differently, and to look for the correct tradeoffs. No offense, to those of you here, I'm all for it - but more training is in order.

Whew. What a long post...
But to answer your original question, @Shane:

  • Explain to the customer the proper way to do things.
  • If he still insists, explain some more, insist, argue. Throw a tantrum, if needed.
  • Explain the BUSINESS RISK to him. Details are good, figures are better, a live demo is usually best.
  • IF HE STILL insists, AND presents valid business reasons - it's time for you to do a judgement call:
    Is this site low-to-no-value? Is it really a valid business case? Is it good enough for you? Are there no other risks you can consider that would outweigh valid business reasons? (And of course, the client is NOT a malicious site, but that's obvious.)
    If so, just go right ahead. It's not worth the effort, friction, and lost usage (in this hypothetical situation) to put the necessary process in place. Any other decision (again, in this situation) is a bad tradeoff.

So, bottom line, and an actual answer - encrypt it with a simple symmetrical algorithm, protect the encryption key with strong ACLs and preferably DPAPI or the like, document it and have the client (someone senior enough to make that decision) sign off on it.
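
By way of illustration only, a minimal PHP sketch of that bottom line could look like the following; the use of an authenticated AES-GCM mode and the idea that the 32-byte key comes from somewhere outside the database (environment variable, DPAPI-protected file, etc.) are assumptions layered on top of the answer:

```php
<?php
// Minimal sketch of "simple symmetric encryption with a protected key".
// $key must be 32 random bytes kept outside the web root / database.
function encryptPassword(string $password, string $key): string
{
    $iv     = random_bytes(openssl_cipher_iv_length('aes-256-gcm'));
    $tag    = '';
    $cipher = openssl_encrypt($password, 'aes-256-gcm', $key, OPENSSL_RAW_DATA, $iv, $tag);
    return base64_encode($iv . $tag . $cipher);
}

function decryptPassword(string $blob, string $key): string
{
    $raw    = base64_decode($blob);
    $ivLen  = openssl_cipher_iv_length('aes-256-gcm');
    $iv     = substr($raw, 0, $ivLen);
    $tag    = substr($raw, $ivLen, 16);
    $cipher = substr($raw, $ivLen + 16);
    return openssl_decrypt($cipher, 'aes-256-gcm', $key, OPENSSL_RAW_DATA, $iv, $tag);
}
```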

AviD
  • 12,592
  • 6
  • 59
  • 90
  • 5
    Passwords shared between your low-value site with "no" expensive assets and Facebook/GMail/your bank **are** an expensive asset, even on low-value sites. – jammycakes Feb 23 '10 at 15:56
  • 3
    I think the problem here is that the above-mentioned groups of users tend to use the same password for all applications from many different security levels (from banking to recipe blogs). So the question is, if it is the responsibility of the developer to protect the users even from themselves. I would definitely say, yes! – ercan Feb 23 '10 at 16:04
  • 1
    @jammycakes, @ercan - I agree in principle, but note I said no high value asset, *identity itself included*. Now, its semantics whether a shared password is the identity... And yes, the passwords will be shared between all applications - but, the developer cannot possibly protect the user from giving out his bank password at any site there is - malicious sites included! So, to avoid getting into any further argument about this, I should just emphasize - there should indeed be some form of onscreen instructions to the user NOT to use the same password, informing him of the risks etc... – AviD Feb 23 '10 at 19:25
  • 7
    I'm sorry, but identity itself **is** a high value asset, period. No exceptions, no excuses. No matter how small and inconsequential you think your site is. And a shared password **is** the identity if it lets the hacker into your users' Facebook accounts, Gmail accounts, bank accounts, etc. It's nothing whatsoever to do with semantics, but it's everything to do with consequences. Just ask anyone who was affected by a hacker attack such as this one: http://www.theregister.co.uk/2009/08/24/4chan_pwns_christians/ – jammycakes Feb 23 '10 at 23:41
  • 1
    It's still not clear what the user or hypothetical company actually gains from having a recoverable password over having the ability to reset passwords. So even if it's not our responsibility to help protect our users, why give our clients a potential liability for no added value? If our clients are sued for damages because Emma's other accounts are compromised, they're paying a price whether or not the courts find in her favor. – Jacob Mar 01 '10 at 15:16
  • 2
    @Jacob, in what world would your client be sued because Erma's accounts on other systems have been compromised? Even given gross negligence on the part of your client (which is NOT a given, as I elaborated), and BESIDES the fact that there is NO WAY to prove that the other system was breached because of yours, there is no possible legal standing to claim any form of damages from one system on the other system. It would be thrown out of court with prejudice, and finding the plaintiff in contempt. However, Erma might be at fault for violating numerous Terms of Service... – AviD Mar 01 '10 at 19:40
  • 2
    If the accounts were compromised because somebody at your company recovered Emma's password, and it was proved that it happened, your company could be sued for having lax security practices. And it doesn't matter whether the courts would find in her favor; your company still has to devote resources to addressing the litigation and faces the consequence of losing customers because of the case. Why put the company through that? There's still no reason the company should event want a recoverable password. – Jacob Mar 01 '10 at 22:56
  • 5
    @Jacob, that is so very wrong. Just because the password happens to be the same (which would clearly violate your ToS and their security policy), there is no legal standing to correlate the two. That's even BESIDES the fact that PROVING it would be darn near impossible. As an aside, I will also point out that there is NO LAW that requires a random company to not have lax security practices, unless specific regulations are relevant. And on top of that, the passwords ARE encrypted (!), so the laxness is far from a foregone conclusion. – AviD Mar 02 '10 at 06:28
  • 2
    And by the by, besides the above discussion re business requirements, there ARE legitimate (though not good) TECHNICAL reasons for reversible encrypted passwords. That is part of the reasoning behind PCI-DSS's requirement that user passwords are symmetrically encrypted! – AviD Mar 02 '10 at 06:29
  • 2
    Still haven't heard what those legitimate reasons are, unless some new ones were added to the discussion that I didn't see. And you missed my point that it doesn't even matter whether the law would be on the side of the company. No company would want a reputation for being responsible for their customers' passwords being exposed, even if a court didn't find them at fault. And yes, it can be proven if somebody steals passwords with the right logging and auditing, not that the difficulty of proving of wrongdoing justifies the wrongdoing anyway. – Jacob Mar 02 '10 at 08:16
  • Keeping any information you are given safe is always your problem, not someone else's. – Anigel Jul 12 '16 at 07:14
21

How about a halfway house?

Store the passwords with a strong encryption, and don't enable resets.

Instead of resetting passwords, allow sending a one-time password (that has to be changed as soon as the first logon occurs). Let the user then change to whatever password they want (the previous one, if they choose).

You can "sell" this as a secure mechanism for resetting passwords.
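
A small sketch of the login-time side of that mechanism, assuming PHP/PDO and an invented `must_change_password` column, might be:

```php
<?php
// Illustrative sketch of the forced-change step: the one-time password is
// only good for reaching the change-password page.
function login(PDO $db, string $username, string $password): ?string
{
    $stmt = $db->prepare(
        'SELECT id, password_hash, must_change_password FROM users WHERE username = ?'
    );
    $stmt->execute([$username]);
    $user = $stmt->fetch(PDO::FETCH_ASSOC);

    if (!$user || !password_verify($password, $user['password_hash'])) {
        return null; // bad credentials
    }

    return $user['must_change_password'] ? '/change-password' : '/home';
}
```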

Oded
  • 463,167
  • 92
  • 837
  • 979
  • You know, I have used that in several situations (usually this is my middle ground), but I have had folks tell me that the end user is just not going to get the interaction and that support needs to be able to 'tell them their password' due to circumstances in that business' model. I agree though that when possible this is preferable. – Shane Feb 17 '10 at 20:03
  • You can alway tell your client about the risk of their DB falling into evil hands and them getting the publicity of stolen passwords... There are plenty of examples around. – Oded Feb 17 '10 at 20:05
  • Ha! Yes--that falls into the 'fight bitterly' category. I even warn of potential lawsuits (though I don't have firsthand knowledge of that ever happening I don't see why it couldn't) if passwords are lost due to security breaches. – Shane Feb 17 '10 at 20:12
  • 6
    Ask them to sign off on the "design", with an added clause that they can't sue you if what you warned them about does indeed happen... At least then you cover yourself. – Oded Feb 17 '10 at 20:18
  • If you're going to send one-time passwords, then you could have stored a salted hash in the first place. – Steven Sudit Feb 17 '10 at 20:49
  • 4
    -1 passwords should never be "encrypted" It is a violation of CWE-257 http://cwe.mitre.org/data/definitions/257.html – rook Feb 17 '10 at 21:53
  • 34
    @Michael Brooks: There's no need to downvote and copy-paste the same comment over and over; we're all aware that it's bad practice. Shane stated that he lacks leverage in the matter, though, and so the next best thing(s) are being proposed. – Johannes Gorset Feb 17 '10 at 22:57
13

The only way to allow a user to retrieve their original password, is to encrypt it with the user's own public key. Only that user can then decrypt their password.

So the steps would be:

  1. User registers on your site (over SSL of course) without yet setting a password. Log them in automatically or provide a temporary password.
  2. You offer to store their public PGP key for future password retrieval.
  3. They upload their public PGP key.
  4. You ask them to set a new password.
  5. They submit their password.
  6. You hash the password using the best password hashing algorithm available (e.g. bcrypt). Use this when validating the next log-in.
  7. You encrypt the password with the public key, and store that separately.

Should the user then ask for their password, you respond with the encrypted (not hashed) password. If the user does not wish to be able to retrieve their password in future (they would only be able to reset it to a service-generated one), steps 3 and 7 can be skipped.
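
Condensed into PHP, steps 6 and 7 might look roughly like this; the `$userPublicKey` variable and the storage layout are assumptions, and the encryption call is the same `openssl_public_encrypt()` pattern shown in an earlier sketch, just with the user's own key:

```php
<?php
// Steps 6-7 condensed: bcrypt hash for login checks, plus a copy encrypted
// to the *user's own* public key for later retrieval.
$passwordHash  = password_hash($_POST['password'], PASSWORD_BCRYPT);        // step 6
$encryptedCopy = '';
openssl_public_encrypt($_POST['password'], $encryptedCopy, $userPublicKey); // step 7
// Store $passwordHash for password_verify() at login, and
// base64_encode($encryptedCopy) to hand back on request; only the user's
// private key can decrypt it, and that decryption happens client-side.
```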

Nicholas Shanks
  • 9,417
  • 3
  • 51
  • 73
  • 5
    Most users don't have a PGP key (I still don't have one; after 20 years in the industry, I've never felt the need), and it's not a frictionless process obtaining one. Further, a private key is really just a proxy for an actual password anyway. It's a password for a password, in other words; it's turtles all the way down. – Robert Harvey Jul 21 '13 at 22:04
  • 1
    @RobertHarvey The goal is to allow a user to retrieve their password without allowing the site employees or any hackers to get at it. By requiring that retrieval process happen on the user's own computer, you enforce this. There may well be alternatives to PGP which could achieve the same. Turtles all the way down it may be (perhaps some elephants along the way), but I don't see any other way. For the general population (not likely to be targeted individually) having your passwords on a bit of paper, and being unable to retrieve them from the service, would be more secure than we are currently – Nicholas Shanks Jul 22 '13 at 07:34
  • I like it because it forces everyone to have a PGP public key, which is, strangely, a very ethical thing to do. – Lodewijk Aug 05 '14 at 01:39
  • you could just generate one and give it to the user. – My1 Feb 17 '16 at 10:21
  • @RobertHarvey You may be right that this is "not a frictionless process", but it could be an extra service for power users, which regular users can ignore. As for the argument about a PK being "a password for a password", remember that it can in theory be so for *many* passwords; you might use different passwords for different services, and encrypt them all using the same key. Then the PK will be more valuable than just a single password. Perhaps it is in an abstract way comparable to a password manager(?). It's not immediately clear to me what consequences this may have though... – Kjartan Nov 23 '16 at 13:01
12

I think the real question you should ask yourself is: 'How can I be better at convincing people?'

z-boss
  • 14,861
  • 12
  • 46
  • 79
  • 4
    @sneg - Well, I am a fairly convincing guy, but sometimes it's a boss and sometimes a customer so I don't always have the leverage I need to convince them one way or another. I will practice in the mirror some more though.. ;) – Shane Feb 17 '10 at 21:02
  • To be convincing you don't really need any leverage other than your competence and communication skill. If you know a better way of doing something but people don't listen... Think about it. – z-boss Feb 18 '10 at 22:11
  • 7
    @z-boss - Apparently you haven't worked with/for some of the hard heads that I have had the pleasure of working with. Sometimes it doesn't matter if your tongue is plated in gold and you could reprogram Google Chrome in a day (which arguably might actually make it useful) they still won't budge. – Shane Feb 25 '10 at 18:15
11

I have the same issue. And in the same way, I always assume that someone hacking my system is not a matter of "if" but of "when".

So, when I must build a website that needs to store recoverable confidential information, like a credit card number or a password, what I do is:

  • encrypt with: openssl_encrypt(string $data, string $method, string $password)
  • data arg:
    • the sensitive information (e.g. the user's password)
    • serialize it if necessary, e.g. if the information is an array holding multiple sensitive values
  • password arg: use information that only the user knows, such as:
    • the user's license plate
    • social security number
    • the user's phone number
    • the user's mother's name
    • a random string sent by email and/or by SMS at registration time
  • method arg:
    • choose one cipher method, like "aes-256-cbc"
  • NEVER store the information used as the "password" argument in the database (or anywhere else in the system)

When you need to retrieve this data, just use the "openssl_decrypt()" function and ask the user for the answer. E.g.: "To receive your password, answer the question: what's your cellphone number?"

PS 1: never use data stored in the database as the password. If you need to store the user's cellphone number, then never use that information to encrypt the data. Always use information that only the user knows, or that would be hard for anyone who isn't a relative to know.

PS 2: for credit card information, like "one-click buying", what I do is use the login password. This password is hashed in the database (sha1, md5, etc.), but at login time I store the plaintext password in the session or in a non-persistent (i.e. in-memory) secure cookie. This plain password never sits in the database; it only stays in memory and is destroyed at the end of the session. When the user clicks the "one-click buying" button, the system uses this password. If the user logged in with a service like Facebook, Twitter, etc., then I prompt for the password again at buying time (OK, it's not fully "one click"), or else use some data from the service the user logged in with (like the Facebook id).
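
A hedged sketch of roughly what this answer describes, in PHP: hashing the user's secret into a fixed-length key and adding an explicit IV are my additions to the described approach (a real KDF such as `hash_pbkdf2()` and an authenticated cipher mode would be preferable in practice):

```php
<?php
// AES-256-CBC keyed from a secret only the user knows.
// Never store $userSecret anywhere in the system.
function encryptWithUserSecret(string $data, string $userSecret): string
{
    $key = hash('sha256', $userSecret, true); // 32-byte key derived from the secret
    $iv  = random_bytes(openssl_cipher_iv_length('aes-256-cbc'));
    $ct  = openssl_encrypt($data, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
    return base64_encode($iv . $ct);
}

function decryptWithUserSecret(string $blob, string $userSecret)
{
    $raw   = base64_decode($blob);
    $ivLen = openssl_cipher_iv_length('aes-256-cbc');
    $key   = hash('sha256', $userSecret, true);
    return openssl_decrypt(substr($raw, $ivLen), 'aes-256-cbc', $key,
                           OPENSSL_RAW_DATA, substr($raw, 0, $ivLen));
}
```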

atiquratik
  • 1,154
  • 3
  • 22
  • 30
Daniel Loureiro
  • 2,914
  • 25
  • 35
9

Securing credentials is not a binary operation: secure/not secure. Security is all about risk assessment and is measured on a continuum. Security fanatics hate to think this way, but the ugly truth is that nothing is perfectly secure. Hashed passwords with stringent password requirements, DNA samples, and retina scans are more secure but at a cost of development and user experience. Plaintext passwords are far less secure but are cheaper to implement (but should be avoided). At the end of the day, it comes down to a cost/benefit analysis of a breach. You implement security based on the value of the data being secured and its time-value.

What is the cost of someone's password getting out into the wild? What is the cost of impersonation in the given system? To the FBI computers, the cost could be enormous. To Bob's one-off five-page website, the cost could be negligible. A professional provides options to their customers and, when it comes to security, lays out the advantages and risks of any implementation. This is doubly so if the client requests something that could put them at risk by failing to heed industry standards. If a client specifically requests two-way encryption, I would make sure you document your objections, but that should not stop you from implementing it in the best way you know how. At the end of the day, it is the client's money. Yes, you should push for using one-way hashes, but to say that it is absolutely the only choice and anything else is unethical is utter nonsense.

If you are storing passwords with two-way encryption, security all comes down to key management. Windows provides mechanisms to restrict access to certificates' private keys to administrative accounts and to protect them with passwords. If you are hosting on other platforms, you would need to see what options are available there. As others have suggested, you can use asymmetric encryption.

There is no law of which I'm aware (not even the Data Protection Act in the UK) that specifically states that passwords must be stored using one-way hashes. The only requirement in any of these laws is simply that reasonable steps are taken for security. If access to the database is restricted, even plaintext passwords can qualify legally under such a restriction.

However, this does bring to light one more aspect: legal precedent. If legal precedent suggests that you must use one-way hashes given the industry in which your system is being built, then that is entirely different. That is the ammunition you use to convince your customer. Barring that, the best suggestion is to provide a reasonable risk assessment, document your objections and implement the system in the most secure way you can given the customer's requirements.

Thomas
  • 61,164
  • 11
  • 91
  • 136
  • 10
    Your answer completely ignores that passwords are reused across multiple sites/services, which is central to this topic and the very reason why recoverable passwords are considered a serious weakness. A security professional does not delegate security decisions to the non-technical client; a professional knows that his responsibilities extend beyond the paying client and does not offer options with high risk and **zero** reward. -1 for a rant heavy on rhetoric and extremely light on facts - and for not even really answering the question. – Aaronaught Feb 28 '10 at 16:53
  • 4
    Again, you are completely overlooking risk assessment. To use your argument, you cannot stop at just 1-way hashes. You must ALSO include complexity requirements, password lengths, password reuse restrictions and so on. Arguing that users will use dumb passwords or reuse passwords is not a sufficient business justification if the system is irrelevant and frankly, I did answer the question. The short answer: push for using a std implementation, document your objections if you are overruled and move on. – Thomas Feb 28 '10 at 17:00
  • 2
    Oh, I see, so essentially you're saying that security is "all or nothing", and if you can't have all those other things, then you shouldn't bother with one-way hashes either? I assume you've never actually done a risk assessment because recoverable passwords is normally one of the first things you look for. – Aaronaught Feb 28 '10 at 17:11
  • 7
    And again you repeat the canard that none of this matters for a low-value system! **The value of a user's password has nothing to do with the value of a user's account on your system.** It's a *secret*, and one that you must always keep. I wish I could give another -1 for the comment demonstrating that you **still** don't understand the issues here. – Aaronaught Feb 28 '10 at 17:13
  • 1
    @Aaronaught: ", and if you can't have all those other things, then you shouldn't bother with one-way hashes either?" You are being sophomoric. I'm saying that for YOU to justify YOUR position of protecting against pwd reuse, clearly every system you build must have all those other pwd requirements as well, or your argument is empty. The short answer: push for using a std implementation, document your objections if you are overruled and move on. – Thomas Feb 28 '10 at 17:19
  • 5
    Risk assessment is the core issue. Password reuse is only one potential issue in a much larger set of issues. Why not require fobs? Why not require that the person drive to your office to login? Without a reasonable assessment of the risks, there is no way to answer these questions. In your world, everything is a risk so every system requires FBI level login security. That is simply not how the real world works. – Thomas Feb 28 '10 at 17:23
  • 7
    The only thing that is clear here is that your entire argument is nothing more than a Slippery Slope fallacy and that you are trying to steamroll readers with buzzwords like "risk assessment" in order to cover up that fact. Rest assured whatever system the "FBI" uses is going to be far more secure than a bcrypt hash and a minimum password length. If requiring industry-standard authentication systems makes me a "security fanatic", then I guess I am a fanatic; personally, it pains me to know that there are people out there willing to sacrifice **MY** security for money. That is **unethical**. – Aaronaught Feb 28 '10 at 18:01
  • 4
    ""FBI" uses is going to be far more secure than a bcrypt hash and a minimum password length" Why do you suppose the FBI systems have more stringent security requirements than most systems or more specifically, why is it that most systems do not have these security requirements? Answer that question and you are able to answer why there is a vast difference between the concept of reasonable security measures and "you must do x in all situations." – Thomas Feb 28 '10 at 20:18
  • 3
    @Thomas: you are arguing here as if salting and hashing passwords is an expensive and complex burden. It isn't. It's very simple (a day or so's work for a competent developer) and it has *no* usability impact if implemented correctly. Now, the Data Protection Act. Agreed, it does not *explicitly* mandate salted hashes, but this is strongly implied by its insistence that you take into account (a) available technical measures and (b) cost of implementing them. If you actually *do* a risk analysis you'll see that failing to implement a one-way salted hash on passwords is out and out recklessness. – jammycakes Apr 27 '11 at 12:42
  • 1
    @jammycakes - First, from a legal standpoint, there is nothing in any law that specifically mandates 1-way hashes. Every law has language along the lines of "taking *reasonable* steps" to security which 2-way encryption would definitely satisfy. – Thomas Apr 27 '11 at 14:54
  • 1
    @jammycakes - (Continued) Second, the issue in this case has absolutely nothing to do with the technical difficulties of implementing hashes or encryption or key management. It has to do with the business case of the client specifically mandating recoverable passwords. It is absolutely possible to do this using 2-way encryption without being reckless otherwise no government or military would use 2-way encryption for anything. – Thomas Apr 27 '11 at 14:54
  • 3
    I acknowledged that there's nothing **explicit** in the legislation about salted hashes, but it is **implicit**. Also, there's **no** business case **whatsoever** (other than ignorance) for using reversible encryption with user passwords. Salting and hashing passwords is not difficult, and sending users a one-time link to a password reset page (rather than a new password or an existing password) is not only more secure, it is also more user-friendly since they don't have to copy and paste anything or navigate back to the login form, they are taken straight there with a single click. – jammycakes Apr 27 '11 at 23:04
  • 2
    Also, I don't buy the comparison to the military. If you want two-way reversible encryption on a website, you need to store both your public and private keys where they can be accessed by the web server. Bottom line is this: if your users can request their password in plain text, so too potentially can a hacker if you have a security hole somewhere in the system -- that would totally negate the point of having the two way encryption in the first place. Why implement something that **may** be flaky when you can have a **rock solid** alternative at little or no additional cost? – jammycakes Apr 27 '11 at 23:11
  • 2
    @jammycakes - There is no such thing as "implicit" legal obligation. There is that which will hold up in a court of law and that which won't. To date, the laws regarding pwd mgmt require *reasonable* steps of security of which 2-way encryption would qualify. You might find lawsuits revolving around plaintext passwords but nothing using 2-way. Again, the issue at hand has **absolutely nothing whatsoever** to do with the ease or difficulty of implementation. Nothing. It is all about the client making a specific request for reversible passwords despite the protests of the OP. – Thomas Apr 27 '11 at 23:31
  • 2
    @jammycakes - *Why implement something that may be flaky when you can have a rock solid alternative at little or no additional cost?* Because the client, despite arguments such as this, overruled you. I agree with the accepted answer in that it is *likely* due to ugly automatic passwords generated when reset. No one questions that 1-way hashes are definitely the best practice. I would argue long and hard against anything else and would document my objections. However, at the end of the day, the client is the one paying the bills and it is ultimately their choice barring a legal limitation. – Thomas Apr 27 '11 at 23:33
  • 3
    @jammycakes - *you need to store both your public and private keys where they can be accessed by the web server*. Everyday thousands of bits of people's personal info is transmitted and stored via 2-way. Example: credit cards. When Amazon saves your cc information, they are using 2-way encryption. How can they do this ethically? The short answer is key management. It can be done such that only the web server and trusted individuals have access to the keys and access to the key is logged. Is it simpler and safer than 1-way? Not by a long shot. But clearly it can be done ethically. – Thomas Apr 27 '11 at 23:37
  • 2
    The reason I mentioned the Data Protection Act was because it gives developers a legal argument to counter the client's objections, and questioning its validity in this situation undermines this argument. If you have the kind of client who goes and gets legal counsel on this one, chances are they'll accept a workable alternative anyway if you can show them a prototype. On the other hand, if you have shown them a working prototype alternative, **and** they still overrule you despite being told there are legal implications, chances are that you have a Problem Client on your hands. – jammycakes Apr 28 '11 at 08:34
  • 1
    Another thing. Amazon does not let you recover your password to plain text -- they implement a password reset link, as of course they should. Nor does Amazon let you recover your credit card (except for the last four digits) to plain text either. No doubt they have the public and private keys to encrypt and decrypt full credit cards on completely different servers separated by firewalls making it **impossible** for the public web server to decrypt credit card information. It is unlikely that someone who demanded recoverable passwords would be willing to invest in that kind of infrastructure. – jammycakes Apr 28 '11 at 11:43
  • 1
    @jammycakes - I have suggested multiple times, both in comments and my post, that if there is legal precedent for requiring unrecoverable passwords, that is definitely a weapon. In fact, that would end the conversation. The problem is that no such legal precedent exists against 2-way encryption. The Data Protection Act does not state that you must use unrecoverable passwords. It simply states that you have to take reasonable steps, and the client's lawyers would also see that. Let's be clear: if you have a client insisting on recoverable passwords, they are by definition a problem client. – Thomas Apr 28 '11 at 16:01
  • @jammycakes - My point about Amazon wasn't that an individual could recover their information. Rather, it is secret information likely stored using encryption. At some point, Amazon has to decrypt the CC in order for the customer to use it. Everyday, we have our information, even secrets, stored using encryption. It is possible to do ethically; it's just far better to do using hashes. I cannot imagine not being able to convince a client to use hashes but at the end of the day it's the client's money. If they mandate recoverable pwds, document your objections, do the best you can and move on. – Thomas Apr 28 '11 at 16:10
  • 1
    Yes, I know I can't point to a definite legal **precedent**, but **that is not the point.** My point is that **it is an argument that you can present to the client** in response to them overruling you. If they did end up losing passwords, and if it was shown in court that they had been **advised** about a more secure alternative yet had rejected it, they could easily end up **becoming** that legal precedent. Besides, even if it is not illegal, **it is still unethical,** because you (or rather your client) is compromising security of a valuable asset with no valid justification whatsoever. – jammycakes Apr 29 '11 at 06:48
  • @jammycakes - Sure, you can *suggest* to the client that using encryption *might* open them up to a lawsuit. But if the client knows even a little, they will know this is BS. Even if they lost the db, itself an unlikely event, all that has to be shown is that you took *reasonable* steps, and encryption qualifies. Using the argument that hashes are a more secure solution is unlikely to convince them. They would probably agree but say it is unnecessary given the low value of the system. – Thomas Apr 29 '11 at 16:22
  • 2
    @jammycakes - Using encryption instead of hashes, it is no less ethical than storing any other type of personal or private information using encryption. It is not a compromise of security. Rather, you are simply leaving open another *potential* attack vector. By your logic, it could be argued that allowing passwords less than say five characters is unethical. Encryption does require more trust in the people managing the system than with hashes. I would never recommend encryption for passwords, but the claim that encryption in any form for passwords is *unethical* is just being extremist. – Thomas Apr 29 '11 at 16:28
  • 2
    (1) It's not being extremist, it's being responsible. (2) I don't know if you realise it, but by saying "It is not a compromise of security, rather, you are simply leaving open another potential attack vector" you have just contradicted yourself. (3) Losing the database is not an "unlikely event." Sony just last week lost seventy million Playstation users' login & credit card details despite what they thought of as stringent security measures. We are talking about **real world** situations that have happened over and over and over again. – jammycakes Apr 29 '11 at 21:25
  • 1
    About the Data Protection Act again. The relevant parts are Schedule 1, Part I, Paragraph 7 and Schedule 1, Part II, Paragraphs 9-12. It requires you to use a level of security "appropriate to the harm that might result from ... accidental loss" and "Having regard to the state of technological development and the cost of implementing any measures." I covered it in more detail some time ago in a blog post [here](http://jamesmckay.net/2009/09/if-you-are-saving-passwords-in-clear-text-you-are-probably-breaking-the-law/). – jammycakes Apr 30 '11 at 06:49
  • 1
    @jammycakes - 1. Leaving open a potential attack vector is NOT a compromise (past tense) of security. Get your vocabulary straight. 2. Arguing that it is impossible to implement any form of password security that is moral (equivalent to ethical) is ridiculous, for reasons not the least of which is that secure information is passed using encryption every minute of every day. 3. Claiming that because Sony lost its db, any and all dbs are just as likely to be stolen is extremist nonsense, not to mention statistically faulty. Sony is a high value target. Bob's fancy blog site isn't. – Thomas May 02 '11 at 05:22
  • 1
    @jammycakes - Btw, there is nothing that stops you from encrypting the db, so if it is stolen, you have yet another layer of protection. – Thomas May 02 '11 at 05:44
  • 1
    @jammycakes - DPA, Schedule 1, Part 2, 7th principle: *Having regard to the state of technological development and the cost of implementing any measures, the measures must ensure a level of security appropriate to (a)the harm that might result from such unauthorised or unlawful processing or accidental loss, destruction or damage as are mentioned in the seventh principle,* To win in court, you would have to show that no technology other than hashes fit that bill. To do that, you would have to show that the military, passing secrets using encryption, are not also violating this principle. – Thomas May 02 '11 at 05:46
  • 1
    @jammycakes - In case you missed the cogent parts: **must ensure a level of security appropriate to**. Encryption would absolutely fit that bill. If you encrypt the passwords with reasonable levels of control on the keys, you would have an extraordinarily tough time arguing in court that nothing less than hashes is appropriate. – Thomas May 02 '11 at 05:48
  • 2
    @Thomas: (1) You are obviously understanding my use of the word "compromise" to be a synonym for "breach." I have been using the word "compromise" as a synonym for "trade-off." In other words, you are using a **much less secure alternative than one-way hashes** -- and what makes it unethical is that there is **no valid technical justification whatsoever** for this decision. Certainly if they had been **advised** to use a stronger form of encryption and **deliberately rejected it** they would certainly be on legal thin ice. – jammycakes May 02 '11 at 11:48
  • 1
    @Thomas: (2) The argument that "Bob's fancy blog site is not a high value target" is fallacious for two reasons. First, as far as passwords are concerned there is **no such thing as a low value site** simply because **people re-use passwords.** Secondly, Bob's database is **more** likely to be stolen because hackers **love** sites that think they're too low value to be worth bothering with security. They run bots probing 24/7 for security holes such as SQL injection vulnerabilities, for instance. – jammycakes May 02 '11 at 11:55
  • 1
    @Thomas: (3) "Must ensure a level of security appropriate to" is **qualified** by "having regard to the state of technological development and the cost of implementing any measures." If a client is advised that there is a **more secure** alternative to encrypting what is probably the **most sensitive information they are processing** and that there is **no practical disadvantage to implementing it** then they are quite clearly **disregarding** both the state of technological development and the cost of implementing such measures. Which sounds like a clear cut violation to me. – jammycakes May 02 '11 at 12:06
  • @jammycakes - It is *less* secure, but to claim it is *much less* secure to the point of being unethical, you would need to explain why the military can put people's lives at risk using those same forms of encryption. It is simply extremist to claim that you cannot protect secrets using encryption. Technical justification isn't at issue. It is a business decision by the client. – Thomas May 02 '11 at 15:17
  • @jammycakes - The value of the target affects the probability of a breach because it affects how desirable it is for hackers to attack. Therefore, a potential attack on Sony and a potential attack on a nothing site are simply not on par with each other. Furthermore, the breaches of Sony and others often came through unencrypted backups. A problem easily solved. – Thomas May 02 '11 at 15:20
  • @jammycakes - Remember that we are talking about using encryption. That means the attackers would have to have stolen the database *and* the encryption keys. – Thomas May 02 '11 at 15:22
  • 1
    @jammycakes - *If a client is advised that there is a more secure alternative..* A client is not obligated to use the most secure methods available just as you are not obligated to use dead bolts on every door. The client has made their decision based on reasons that have **absolutely nothing to do with technical implementation** and that is their right. They are obligated to take reasonable measures and encryption qualifies. No court is going to convict someone that used encryption and decent management of the keys and was breached. – Thomas May 02 '11 at 15:26
  • @jammycakes - "people re-use passwords" This is the same canard used by earlier. That is a risk USERS are taking upon themselves. It is akin to using simple to guess passwords or pasting their password into some other non-hashed/non-encrypted column. – Thomas May 02 '11 at 15:36
  • 1
    @Thomas: "People re-use passwords" is **not** a canard. It is a very real problem. Most users are non-technical. They do not understand (or underestimate) the risks. They do not see what else to do about it. Yes, you can tell them to use a password manager such as KeePass, but that is still a faff and most of them won't bother. Whether you like it or not, they are **entrusting you** with data that could cause them a lot of harm if released and abused. By using a less secure option, you are **breaching their trust**. – jammycakes May 02 '11 at 15:53
  • 1
    @Thomas: Sure, you are not obligated to use dead bolts on doors to your own house that are only protecting **your own property.** However, when you are protecting **valuable** assets belonging to **other people** you **are** obligated to provide higher standards of security. – jammycakes May 02 '11 at 15:59
  • 1
    @Thomas: it is **everything** to do with technical implementation. As I have repeatedly stated, the Data Protection Act explicitly says "Having regard to the state of technological development." – jammycakes May 02 '11 at 16:01
  • 1
    @Thomas: the value of the target is only one consideration that hackers will take into account. Another consideration is **ease of penetration.** A "tiny nothing site" that has a security hole allowing users' passwords to be revealed and their **bank accounts** to then be compromised has exposed information that could result in **thousands** if not millions of pounds' worth of damage. – jammycakes May 02 '11 at 16:13
  • 1
    @jammycakes - Pwd reuse - Again, reuse of passwords is the user's problem, as is choosing poor passwords. If you are going to go down this road of protecting users from themselves, you cannot stop with hashes. You must include min pwd lengths, complexity requirements, regular pwd resets and so on. You cannot on the one hand say you want to protect users that reuse pwds and then allow three-letter passwords. – Thomas May 02 '11 at 16:41
  • @jammycakes - Users are *also* entrusting you with a whole host of *other* information too. Can't use hashes for everything. Choosing an **insecure** option is breaching their trust. Choosing a secure option, just not the most secure one possible, is absolutely not breaching their trust. By your argument, I could say that not requiring key fobs is breaching their trust because it is not the most secure option. – Thomas May 02 '11 at 16:43
  • @jammycakes - Even when protecting other people's property, you are not obligated to choose the "most" secure option. For example, if you are storing people's property, you are not obligated to require retinal scans. It is more secure than not requiring them. They are easy to implement. Encryption absolutely qualifies as *having a regard for the technical implementation*, and because it does, it is not an issue in this case. – Thomas May 02 '11 at 16:47
  • @jammycakes - RE: A key element of the equation is that the site in question is NOT storing bank accounts nor any other high-value information, other than *potentially* their password if they are dumb enough to use the same password they use for their bank account, and even that information would be encrypted. A breach would require **both** the data and the encryption key. The assumption is that the encryption key is protected. – Thomas May 02 '11 at 16:50
  • 2
    @Thomas: It is **totally unrealistic** to expect users not to re-use passwords. For one thing, non-technical users simply do not understand the risk. For another thing, avoiding password re-use is difficult. Memorising hundreds of different logins is simply not an option, and password managers such as KeePass, while useful to tech-savvy individuals such as ourselves, are nowhere near as user-friendly as simply remembering the same password across the board. – jammycakes May 02 '11 at 17:12
  • 1
    @Thomas: The difference between salted hashes and the other things you mention -- min password lengths, complexity requirements, regular password resets, key fobs and so on -- is that there is a **significant additional cost** to implement these measures either in terms of infrastructure, or training, or usability. The DPA does specifically mention cost of implementation when taking measures into account. With one-way salted hashes, the additional cost is **to all intents and purposes, zero.** Taking your argument to a conclusion, plain text passwords would be just fine. – jammycakes May 02 '11 at 17:17
  • 1
    @Thomas: There are actually a lot of ways that you can quickly get passwords out of a database with two-way encryption even if you don't have the private key. As I've said, bugs in your UI front end are one such example. – jammycakes May 02 '11 at 17:19
  • @jammycakes - Again, if your argument is protection against pwd reuse, you cannot stop with hashes. You must implement other means of protection if that is your goal or your argument is empty. Min password lengths, salts, complexity requirements and even required password resets are **easy** to implement from a technological perspective and thus arguing that *these* are technologically difficult but hashes are not does not logically follow. – Thomas May 02 '11 at 18:24
  • @jammycakes - It is no more unrealistic to expect users not to reuse passwords than it is to expect them to make smart decisions with respect to passwords. Password managers are built into most browsers, in addition to third-party solutions. Users put themselves at risk when they reuse passwords, no matter the protection given by the site in question. – Thomas May 02 '11 at 18:25
  • @jammycakes - Even a hole in your UI would not necessarily enable the ability to pull the encrypted data from the database in a way that would allow you to decrypt it. You need the encryption key and any salts that might be added and the assumption here is that the keys are managed well. – Thomas May 02 '11 at 18:31
  • 2
    @Thomas: it is **totally unrealistic** to expect your users not to reuse passwords. Not reusing passwords is actually quite an effort, even for a technical user armed with password managers. The big difficulty is synchronising them across multiple devices. Your iPhone. Your PlayStation. Your Internet TV. Your friend's PC. Internet cafes. Locked-down workstations on corporate networks. Not reusing passwords involves discipline, effort and sacrifices. With non-technical users who don't understand the risks and whose eyes glaze over when you talk about these things, it's pretty much a lost cause. – jammycakes May 03 '11 at 08:28
  • 1
    @Thomas: the argument about minimum password lengths, complexity requirements and required password resets is that they are detrimental to the user experience, not that they are difficult to implement. – jammycakes May 03 '11 at 08:38
  • 1
    @Thomas: The other (and main) point about minimum password lengths etc (and yes, you should give the user at least a warning about weak passwords) is that they are a **completely separate issue** to the question of two way encryption versus salted hashes. They have completely different trade-offs and completely different cost/benefit considerations, and as such are not within the scope of this discussion. – jammycakes May 03 '11 at 10:07
  • @jammycakes - Drivers exceed the speed limit everyday. That is a risk the driver takes. Reusing passwords is a risk users take just as using short or easy to guess passwords. We all do it to an extent, but we are the ones bearing that risk. – Thomas May 03 '11 at 15:12
  • @jammycakes - *the argument about minimum password lengths, complexity requirements and required password resets is that they are detrimental to the user experience, not that they are difficult to implement.* EXACTLY! It has nothing to do with the *technical* implementation and therefore arguing that hashes are easier to implement than encryption from a technological perspective is equally irrelevant to the discussion. They are both easy to implement technologically. – Thomas May 03 '11 at 15:15
  • @jammycakes - RE: min pwd lengths, complexity requirements etc. The other thing to note about this argument is that it ties into your argument of protecting against password reuse. You cannot legitimately argue that you can only use hashes to ethically store pwds specifically to protect against pwd reuse and not also concede that if that is the goal you must also implement all those other measures as well. – Thomas May 03 '11 at 15:18
  • @jammycakes - It comes down this. Everyday sites use encryption to store sensitive info. Many laws in many countries require that your personal info is at least encrypted. Pwds are another piece of sensitive information. If it is possible to ethically store other bits of sensitive information using encryption, then it is possible to do so with pwds. Absolutely not recommended. I'd argue up a storm against it. I've never been unable to convince someone to use hashes. But to argue that it cannot be done using encryption *ethically* is silly. – Thomas May 03 '11 at 15:28
  • 1
    @Thomas: Sticking to the speed limit is **easy.** Choosing a secure password is **easy.** It **is** reasonable to expect these of your users. Avoiding password reuse, on the other hand, is **difficult.** The point is that password policies have a **significant** detrimental effect on usability and possibly even accessibility. The effect on usability of one-way hashes, properly implemented, is **negligible.** And this brings us back to the "cost to implement" aspect. The fact that there's no argument about *technical implementation* to answer for does not mean there's *no argument at all.* – jammycakes May 05 '11 at 08:49
  • 1
    @Thomas: Two-way encryption is much less secure than one-way encryption, since there is always the possibility that your encryption keys could be compromised or your front end could have a security flaw. Before you start throwing words like "silly" or "extremist" around, take a look at Simon Willison's presentation "[Web Security Horror Stories](http://bit.ly/bVN5zd)". Code injection, XSS, CSRF, clickjacking, path discovery, error exposure and many other attack vectors can all potentially allow for privilege escalation and installation of a rootkit. – jammycakes May 05 '11 at 09:10
  • 2
    @Thomas: It comes down to this. With most sensitive information, two way encryption is **the best you can do.** With passwords, this is simply not the case. Two-way encryption of passwords involves weakening the protection you are offering to your users with no justifiable benefit, and regardless of whether it does actually CYA from a legal standpoint (which it might not), it is still unethical. – jammycakes May 05 '11 at 09:21
  • @jammycakes - RE: Pwd reuse - You are setting yourself up for heartache if you assume that every site manages your passwords well. If you avoid password reuse, you avoid that headache, and thus that is a **risk taken by the user**. However many people do it, like speeding, you are putting yourself at risk by assuming every site you visit manages your password securely. Reusing your passwords and then complaining about poor handling is pissing in the wind. Avoiding password reuse is **NOT** difficult. There are many password managers that make it easy. – Thomas May 05 '11 at 15:31
  • @jammycakes - RE: Cost to implement - Actually, there is a technical cost to making hashes usable. The password reset process has a cost to implement. If you want to provide users with easy-to-remember password resets, that generator has to be built. To do it right, there are elements that must be built that you wouldn't have to build using encryption. They are worth it IMO, but don't try to convince people that hashes do not have their costs. – Thomas May 05 '11 at 15:32
  • @jammycakes - RE: "Encryption much less secure" - Then you have *a lot* more to worry about than your password. The military and governments should be worried too. If your medical records and credit card number can be stored using encryption ethically, then so can your password. It is *less* secure, but I simply do not buy the argument of *much less* secure. **That is entirely a factor of the implementation.** – Thomas May 05 '11 at 15:33
  • 1
    @jammycakes - Clients are not obligated to use the "best security solution they can." They *should*, but they are not *obligated*. For example, is it ethical to encrypt your medical information using AES-128 instead of AES-256? What about DES-56? What about a Playfair cipher? To the OP's client, it would seem there is a perceived benefit in not having to deal with and build a password reset mechanism. Whether it is justified is questionable, but that isn't our call. – Thomas May 05 '11 at 15:36
  • 1
    @Thomas: Re password reuse: Your assertion that avoiding password reuse is not difficult **completely ignores** my point about syncing your password managers across multiple devices, including iPhones, PlayStations, Internet TVs, Internet cafes, and locked down corporate workstations where using your password manager may not be an option. Besides, I'm not talking about you and me here, I am talking about non-technical people who [don't know the difference between a web browser and a search engine](http://www.youtube.com/watch?v=o4MwTvtyrUQ) let alone appreciate the risks of password reuse. – jammycakes May 05 '11 at 18:25
  • 1
    @Thomas: Re cost to implement: I take it you're talking about usability here? The most user friendly way of handling login recovery is to e-mail the user a one-time link to a page where they can choose a new password -- like what Twitter and Amazon do. It may take a little bit more effort to code up but no more so than coding up two-way encryption **and** getting a solid key management system in place. And it doesn't need passwords to be recovered to plain text. – jammycakes May 05 '11 at 18:35
  • 1
    @Thomas: Let's get it clear what we're discussing about legalities. I'm in no doubt that **no encryption at all** is a violation of the Data Protection Act. Will two-way encryption be enough to CYA legally? I don't know, that would have to be tested in court. But there's a big difference between "legal" and "ethical." Certainly, if you are presented with two approaches, and you choose the less secure one despite being told that the more secure option is just as effective, that is unethical, regardless of the legalities. – jammycakes May 05 '11 at 18:43
  • 1
    @Thomas: re the military, medical records etc: bear in mind that what we're talking about here are **public facing webservers** open to attack from around the world 24/7. In medical, government and military settings, public facing webservers are on untrusted networks, separated from sensitive data by firewalls with strict configuration settings and enterprise-standard protocols surrounding key management, server access etc, with staff subject to rigorous security clearance and all sorts of things. In many medical applications, even developers are not allowed access to live patient data. – jammycakes May 05 '11 at 18:52
  • @jammycakes - RE: Cost to Implement - I agree but the difficulty or ease of implementation is orthogonal to the issue of the client choosing the more difficult path and whether that can be done "ethically". Sure it can. It's a pain to be sure. It is obviously riskier. However, that is different than saying it is *not possible* to do in a way that reduces many of those risks. Clearly it can because lots of other information is stored encrypted. – Thomas May 05 '11 at 19:39
  • @jammycakes - It is not true that all medical, government and military information is not accessible on public networks. Example: your medical insurance information. Much of it can now be accessed online. Again, using encryption *can* be done ethically even if it is more cumbersome and riskier than hashes. It isn't preferred. It isn't recommended, but it can be done. – Thomas May 05 '11 at 19:41
  • 2
    @Thomas: Taking an option that you **know** to be riskier than the alternative, with sensitive data that other people have entrusted to you, when there is no justifiable reason whatsoever for doing so, is unethical, period. – jammycakes May 05 '11 at 20:07
  • @jammycakes - Risk*ier*. That is not the same thing as saying it is *risky*. Not encrypting passwords at all is very risky. Encrypting them is less risky. Hashing has the lowest risk. People are **already** entrusting information to you that is probably "only" encrypted. You don't seem to care about that. Clearly the client did have a justification. You and I do not agree with that justification but that is a far cry from saying it is unethical. Like I said, by your logic using AES-128 instead of AES-256 for encryption is unethical because there is no justification for doing otherwise. – Thomas May 05 '11 at 20:28
  • 1
    @Thomas: Did you even look at Simon Willison's presentation which I linked to earlier? The Internet is a **minefield** for security. Hackers run **botnets** looking for vulnerable servers. A typical website gets probed **daily** for vulnerabilities. A web server that uses two-way encryption but then gets a rootkit installed on it by an attacker, **or its encryption keys stolen by an insider,** might as well be storing everything in plain text. AES-128 versus AES-256 won't make a whit of difference. – jammycakes May 05 '11 at 21:24
  • 2
    @Thomas: And yes I do care about sensitive personal information. But passwords are different altogether. With a compromised database of passwords, an attacker can **trivially** impersonate **a large percentage of your user base.** They can **take over** their e-mail accounts, and from there, they can quite easily take over their entire lives. As I said, passwords are probably **the single most sensitive** piece of information that you are storing on your entire site. – jammycakes May 05 '11 at 21:31
  • 1
    @Thomas: It all boils down to this. If you lose your users' medical details, and you have encrypted them using a suitably strong encryption method, at least you have a defence that you took reasonable measures to protect them. On the other hand, losing your users' passwords in the same way is **inexcusable.** – jammycakes May 05 '11 at 21:38
  • 1
    @jammycakes - *at least you have a defense that you took reasonable measures to protect them* That is EXACTLY the defense that would be used with encrypted passwords: you took reasonable measures to protect them. Your argument is that nothing less than hashes (even if you allow two letter passwords) is reasonable and that simply isn't, well, reasonable. Did you eliminate the risk? No. Did you take steps to *reduce* the risk? Yes. The issue is a matter of degree. – Thomas May 05 '11 at 22:24
  • 1
    @jammycakes - If an attacker puts a rootkit on your webserver and steals the salts, hashing keys and the data, having hashes on weak passwords will not help one bit. Yet users being allowed to use weak passwords, which are presumably reused often, is not a risk borne by the user? – Thomas May 05 '11 at 22:25
  • 1
    @jammycakes - Btw, if you get a rootkit on your webserver, they could probably just use that to sniff the IIS traffic and read the passwords, unencrypted *or* hashed, in real time. If you have direct access to the web server, there is no end of trouble you can cause. – Thomas May 05 '11 at 22:27
  • 1
    @Thomas: (1) A rootkit is only one of many ways an attacker can get their hands on your password database and the keys. **The majority of threats are internal.** (2) If an attacker steals your two-way encrypted database, the **strong passwords as well as the weak ones** will be compromised. Salted hashes can only be cracked by a brute force attack, and if you are using a suitably strong hash algorithm such as bcrypt with a suitably high work function, they'll only be able to get the most **brain-dead** of passwords out of the database. – jammycakes May 05 '11 at 23:23
  • 1
    @Thomas: Taking your argument to a conclusion, it's perfectly OK to use something ridiculously weak such as XOR encryption. Did you eliminate the risk? No. Did you take steps to reduce the risk? Yes. Yes, it's all a matter of degree, but did you take **all the steps you can reasonably be expected to take**? No. – jammycakes May 05 '11 at 23:25
  • 1
    @Thomas: Oh, and by the way, I **do** think that you should use a strong (ie, slow) hash algorithm such as bcrypt to hash your passwords. MD5 and SHA-1 are too fast to be useful. See http://codahale.com/how-to-safely-store-a-password/ – jammycakes May 05 '11 at 23:27
  • @jammycakes - If you are going to try to use the law as an argument, then we have to evaluate how decisions would play out in the courts. If a company used the weakest hash algorithm known with no salting, it is unlikely any court would rule that they had not taken reasonable steps. If a company used a form of encryption trusted by an authoritative source (like a government or military) with some protocols for key protection, it is likely that would be seen as reasonable. If they rolled their own encryption despite protests from the developers, I have no idea how that would play out. – Thomas May 06 '11 at 05:21
  • @jammycakes - This is all getting away from the core issue. It is not whether encryption imparts more risk than hashes. It isn't about best practices, ease of implementation, or best algorithms to use. The core issue is whether it is **POSSIBLE** to store passwords using encryption ethically. Since other equally if not more sensitive information, including secrets, are stored using encryption, by definition it is possible. – Thomas May 06 '11 at 05:22
  • 1
    @Thomas: (1) The whole point of the legal argument is that it strengthens your position as a developer to push back against unreasonable clients, and say to them, "Sorry, I can't do that." (2) as I've said before, there's a difference between "legal" and "ethical" and using an option that you **know** to be **significantly** weaker than a **reasonable** alternative is simply not ethical, regardless of the legalities. – jammycakes May 06 '11 at 13:58
  • @jammycakes - 1. **There is no legal argument**. No law or precedent requires hashes or even the best available solution. 2. Saying you "can't" do it would be interpreted as you "won't" do it. That's your choice but that just means **you** don't know of a means to do it safely (Btw, also a valid argument to use). 3. I simply do not buy the extremist POV that no form of encryption is "ethical". That far more secret and sensitive information is being stored with encryption is overwhelming proof. That some systems require recoverable passwords for pass-through is also proof. – Thomas May 06 '11 at 16:19
  • I hope no developer who agrees with this response implements an application that requires **secure** storage of passwords. I think Aaronaught's architect example is enough to explain why implementing a wrong business case should be avoided at all costs. – Utku Zihnioglu May 18 '11 at 23:43
  • @utku.zih - With that response you aren't addressing the OP. The question is A: ways to convince your client, B: *is it possible* to do it ethically, and C: what to do if overruled. Aaronaught would have us equating legally required building fire-code standards with any and all websites, even those that take some reasonable precautions like asymmetric encryption. – Thomas May 19 '11 at 00:03
  • @utku.zih - Btw, let me add that there isn't one person here, including myself, who would ever *recommend* using anything other than hashes. It simply *shouldn't* be done. However, clients can be dumb and it is their money. As long as reasonable precautions are taken, one can do it with encryption, even though it is not recommended by any stretch of the imagination. – Thomas May 19 '11 at 00:07
  • @Thomas - I may have drifted from the OP's question after reading 85 comments on this answer. I understand what you are saying; however, I personally think the reason there has been such a debate on your answer is your acknowledgement of a situation where most developers would simply complete the customer's request and never talk about it. So yes, it is a valid case; but it is not acceptable if the developer is responsible for recommending or deciding the business case. – Utku Zihnioglu May 19 '11 at 00:35
  • Veeeery long comments discussion, TL;DR - and anyway, after skipping a few pages worth of comments, it seems you guys are right back where you started, so I didn't miss anything. Speaking as a [security professional](http://security.stackexchange.com/users/33/avid), I notice 2 problems with this discussion, mainly because of semantics: 1. "Security" is too broad a topic, it is unqualified and means too many different things to different people. Thus it is pointless to try to discuss whether or not it is "secure". – AviD Sep 25 '11 at 07:30
  • 2. @jammycakes raised the issue of "trade-offs" - however, you seemed to think that was a *bad* thing. In reality, **good** security (sic) is based on rational, risk-based tradeoffs *in the business context*. Thus, your argument was halfway correct, but missed in the final conclusion: yes, storing the passwords in reversible encryption *is* a tradeoff between business requirement and security requirement - BUT that is a *good* thing. Now, whether or not that is the correct tradeoff, that is something that can be discussed - but it *must* be in the context of the business risk. – AviD Sep 25 '11 at 07:33
  • @utku.zih - The whole point of the OP is that the developer *isn't* in charge of deciding the business case or else they'd have chosen hashes. The developer already made their recommendation of hashes and it was dismissed. The fundamental question is whether a sufficiently secure system can be created without hashes and whether that solution is an ethical one. – Thomas Sep 26 '11 at 13:50
  • @AviD - Yes, the discussion is whether the tradeoff of not using hashes in a business context is worth the risk. From the OP, it is not a decision the developer in the OP has the authority to make. At the end of the day, the company paying for the development has the authority to make that decision. The other part of the discussion is that in light of the company choosing against hashes, is it possible to build a solution that will provide "enough" security to be ethical? IMO, it is (albeit more cumbersome to do right) as it is done regularly with other forms of data. – Thomas Sep 26 '11 at 13:55
  • @Thomas agreed (and see my loong answer below). And that's what this argument is really all about, whether "tradeoffs" are a good thing or not. Good risk management isn't just about closing off risk, it's also accepting it judiciously. As I often say, security without risk management is like a speeding car with no steering, no brakes, and no map - you'll get there fast, but you have no control over where "there" is, and wouldn't know even if you got there. – AviD Sep 26 '11 at 20:44
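
To put the hashing side of the comment thread above in concrete terms, here is a minimal sketch of the salted, slow (bcrypt) hashing recommended in the comments, using PHP's built-in password API; the cost factor and the column handling are illustrative assumptions, not anything prescribed in the discussion:

```php
<?php
// Registration: store only a salted bcrypt hash (the salt is embedded in the hash string).
$hash = password_hash($_POST['password'], PASSWORD_BCRYPT, ['cost' => 12]);
// ... INSERT $hash into the users table; the plain-text password is never stored.

// Login: timing-safe verification against the stored hash.
if (password_verify($_POST['password'], $storedHash)) {
    // Credentials OK; optionally re-hash if the work factor has since been raised.
    if (password_needs_rehash($storedHash, PASSWORD_BCRYPT, ['cost' => 12])) {
        // ... UPDATE the row with password_hash($_POST['password'], PASSWORD_BCRYPT, ['cost' => 12]);
    }
}
```
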
8

Make the answer to the user's security question a part of the encryption key, and don't store the security question answer as plain text (hash that instead)
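
A minimal PHP sketch of that idea, assuming OpenSSL and PBKDF2 for key derivation (the cipher choice, iteration count and answer normalisation are illustrative, not part of the original suggestion):

```php
<?php
$normalized = strtolower(trim($securityAnswer));

// Store only a hash of the security answer, for verification.
$answerHash = password_hash($normalized, PASSWORD_BCRYPT);

// Derive the encryption key from the answer (never stored) plus a per-user random salt.
$salt = random_bytes(16);                                     // stored alongside the ciphertext
$key  = hash_pbkdf2('sha256', $normalized, $salt, 100000, 32, true);

// Encrypt the password with the derived key.
$iv         = random_bytes(16);                               // stored alongside the ciphertext
$ciphertext = openssl_encrypt($password, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);

// Recovery: the user re-supplies the answer; check it against $answerHash, re-derive
// $key with the stored $salt, then decrypt.
$recovered = openssl_decrypt($ciphertext, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
```

Note that the whole scheme is only as strong as the guessability of the security answer, which is the weakness raised in the comments below.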

Rob Fonseca-Ensor
  • 15,227
  • 41
  • 56
  • The user may answer the question differently too. Some questions beg longer answers that are easy to rephrase later. – Monoman Oct 13 '10 at 14:34
  • 5
    Security questions are a bad idea. How do you get your mom to change her maiden name once the information is breached? Also see Peter Gutmann's [Engineering Security](http://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf). – jww Dec 30 '13 at 07:32
7

I implement multiple-factor authentication systems for a living, so for me it is natural to think that you can either reset or reconstruct the password while temporarily using one less factor to authenticate the user, for just the reset/recreation workflow. In particular, the use of OTPs (one-time passwords) as some of the additional factors mitigates much of the risk if the time window for the suggested workflow is short. We've implemented software OTP generators for smartphones (which most users already carry with them all day) with great success.

Before complaints of a commercial plug appear: what I'm saying is that we can lower the risks inherent in keeping passwords easily retrievable or resettable when they aren't the only factor used to authenticate a user. I concede that for the password-reuse-among-sites scenario the situation is still not pretty, as the user will insist on having the original password because he/she wants to open up the other sites too, but you can try to deliver the reconstructed password in the safest possible way (https and discreet appearance in the HTML).
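
As an illustration of the reset workflow described above, here is a simplified PHP sketch using a server-issued, short-lived one-time code (a stand-in for the smartphone OTP generators mentioned; the code length, time window and storage names are assumptions):

```php
<?php
// Issue: generate a short-lived one-time code and store only its hash plus an expiry.
$otp    = str_pad((string) random_int(0, 999999), 6, '0', STR_PAD_LEFT);
$record = [
    'user_id'  => $userId,
    'otp_hash' => hash('sha256', $otp),
    'expires'  => time() + 300,          // five-minute window
];
// ... persist $record and deliver $otp over the additional factor (authenticator app, SMS, etc.)

// Verify: check the window, compare in constant time, then invalidate the record either way.
$valid = $record['expires'] > time()
      && hash_equals($record['otp_hash'], hash('sha256', $_POST['otp']));
```
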

Monoman
  • 721
  • 8
  • 12
  • Temporarily removing a factor from a multiple-factor authentication system is indeed a much more secure means to *reset* a password than the infuriating "secret question" systems seen on so many sites. But as for storing passwords in a recoverable format, I'm not sure how it helps, unless you're somehow using the second factor to encrypt or obfuscate the first, and I'm not sure how that's possible with something like a SecurID. Can you explain? – Aaronaught Oct 13 '10 at 20:56
  • @Aaronaught What I've said is, if you are 'required' to have recoverable passwords, the inherent risk is lower if it is not the only authentication factor, and it is also easier for the end user, if that workflow reuses factors that he/she already possesses and has current access to, than trying to remember probably also forgotten 'secret answers' or using time-limited links or temporary passwords, both sent through unsafe channels (unless you are using S-MIME with client certificates, or PGP, both incurring costs, especially on management of correct association and expiration/substitution) – Monoman Oct 18 '10 at 16:45
  • 1
    I suppose all of that's true, but the risk of public compromise is minimal to begin with; the more serious issue with recoverable passwords is internal security and potentially allowing a disgruntled employee to walk off with the e-mail passwords of thousands of customers, or a crooked CEO to sell it to phishers and spammers. Two-factor authentication is great at combating identity theft and password guessing but doesn't really bring much to the table as far as keeping the actual password database safe. – Aaronaught Oct 18 '10 at 17:12
5

Just came across this interesting and heated discussion. What surprised me most, though, was how little attention was paid to the following basic question:

  • Q1. What are the actual reasons the user insists on having access to the plain-text stored password? Why is it of so much value?

The information that users are elderly or young does not really answer that question. But how can a business decision be made without properly understanding the customer's concern?

Now why does it matter? Because if the real cause of the customers' request is a system that is painfully hard to use, then maybe addressing that exact cause would solve the actual problem.

As I don't have this information and cannot speak to those customers, I can only guess: It is about usability, see above.

Another question I have seen asked:

  • Q2. If the user does not remember the password in the first place, why does the old password matter?

And here is a possible answer. If you have a cat called "miaumiau" and used her name as your password but forgot you did, would you prefer to be reminded what it was, or to be sent something like "#zy*RW(ew"?

Another possible reason is that the user considers it hard work to come up with a new password! So having the old password sent back gives the illusion of saving her from that painful work again.

I am just trying to understand the reason. But whatever the reason is, it is the reason, not the cause, that has to be addressed.

As user, I want things simple! I don't want to work hard!

If I log in to a news site to read newspapers, I want to type 1111 as password and be through!!!

I know it is insecure but what do I care about someone getting access to my "account"? Yes, he can read the news too!

Does the site store my "private" information? The news I read today? Then it is the site's problem, not mine! Does the site show private information to the authenticated user? Then don't show it in the first place!

This is just to demonstrate the user's attitude to the problem.

So to summarize, I don't feel the problem is how to "securely" store plain-text passwords (which we know is impossible), but rather how to address the customer's actual concern.

Dmitri Zaitsev
  • 11,773
  • 9
  • 61
  • 103
5

Sorry, but as long as you have some way to decode their password, there's no way it's going to be secure. Fight it bitterly, and if you lose, CYA.

Steven Sudit
  • 18,659
  • 1
  • 44
  • 49
4

Handling lost/forgotten passwords:

Nobody should ever be able to recover passwords.

If users forget their passwords, they must at least know their user names or email addresses. Upon request, generate a GUID in the Users table and send an email to the user's address containing a link with the GUID as a parameter.

The page behind the link verifies that the GUID parameter really exists (probably with some timeout logic), and asks the user for a new password.

If you need to have the hotline help users, add some roles to your grants model and allow the hotline role to temporarily log in as the identified user. Log all such hotline logins. For example, Bugzilla offers such an impersonation feature to admins.
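
A minimal PHP sketch of that flow, substituting a CSPRNG token for the GUID (per the comment below); the column names, wording and URL are placeholders:

```php
<?php
// Request: create a single-use reset token, store only its hash with an expiry, email the link.
$token = bin2hex(random_bytes(32));
// ... UPDATE Users SET reset_token_hash = hash('sha256', $token), reset_expires = time() + 3600
mail($userEmail, 'Password reset',
     'Use this link within one hour: https://example.com/reset.php?token=' . $token);

// reset.php: check the hash and the timeout, then let the user choose a new password
// and clear the token so the link cannot be reused.
$ok = hash_equals($row['reset_token_hash'], hash('sha256', $_GET['token']))
   && $row['reset_expires'] > time();
```
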

devio
  • 35,442
  • 6
  • 73
  • 138
  • GUID is a bad idea: not nearly random enough and easy to brute-force. There are other issues with this; see http://stackoverflow.com/questions/664673/how-to-implement-password-resets/711767#711767 – AviD Feb 23 '10 at 15:04
3

What about emailing the plaintext password upon registration, before getting it encrypted and lost? I've seen a lot of websites do it, and getting that password from the user's email is more secure than leaving it around on your server/comp.

casraf
  • 17,682
  • 7
  • 48
  • 83
  • I wouldn't assume that email is more secure than any other system. Though this does take the legal concern out of my hands there is still the issue of someone losing/deleting their email and now I am back to square one. – Shane Feb 25 '10 at 18:05
  • Provide both a password reset and email the plaintext password. I think that's the most you can do on the subject, without keeping a copy of the password yourself. – casraf Feb 25 '10 at 18:25
  • This is an absolutely horrible idea. It's both ineffective (many users actually delete emails after reading) and worse than what you're trying to protect against (since e-mail is unencrypted by default and passes through untrustworthy networks). Better to suggest that the user make a note of the password themselves; at least when sending oneself an email, the information never goes farther than the email server, rather than across the entire Internet! – Ben Voigt Nov 21 '16 at 00:17
3

If you can't just reject the requirement to store recoverable passwords, how about this as your counter-argument.

We can either properly hash passwords and build a reset mechanism for the users, or we can remove all personally identifiable information from the system. You can use an email address to set up user preferences, but that's about it. Use a cookie to automatically pull preferences on future visits and throw the data away after a reasonable period.

The one option that is often overlooked with password policy is whether a password is really even needed. If the only thing your password policy does is cause customer service calls, maybe you can get rid of it.
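
If you go the cookie route described above, the preference cookie can be signed so that no account or password is needed and tampering is detectable; a minimal PHP sketch, where `$secret` is a hypothetical server-side configuration value:

```php
<?php
// Issue the cookie: sign the encoded preferences with a server-side secret.
$payload = base64_encode(json_encode(['digest' => 'weekly', 'fontsize' => 'large']));
$sig     = hash_hmac('sha256', $payload, $secret);
setcookie('prefs', $payload . '.' . $sig, time() + 60 * 60 * 24 * 90);   // roughly 90 days

// On later visits: re-compute the signature and compare in constant time before trusting it.
list($payload, $sig) = explode('.', $_COOKIE['prefs'], 2);
$prefs = hash_equals(hash_hmac('sha256', $payload, $secret), $sig)
       ? json_decode(base64_decode($payload), true)
       : [];
```
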

anopres
  • 2,336
  • 3
  • 25
  • 29
  • As long as you have an e-mail address associated with a password, and that password is recoverable, you've potentially leaked the password for that e-mail address due to password reuse. That's actually the primary concern here. There's really no other personally-identifiable information that matters. – Aaronaught Oct 13 '10 at 20:49
  • 1
    You completely missed my point. If you don't really need a password, don't collect one. Developers often get stuck in the "that's just the way we do it" mode of thinking. Sometimes it helps to throw out pre-conceived notions. – anopres Oct 14 '10 at 14:24
2

Do the users really need to recover (e.g. be told) what the password they forgot was, or do they simply need to be able to get onto the system? If what they really want is a password to logon, why not have a routine that simply changes the old password (whatever it is) to a new password that the support person can give to the person that lost his password?

I have worked with systems that do exactly this. The support person has no way of knowing what the current password is, but can reset it to a new value. Of course all such resets should be logged somewhere and good practice would be to generate an email to the user telling him that the password has been reset.
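
For illustration, a minimal PHP sketch of such a support-initiated reset, where the agent never learns the old password and everything is logged; the table names, forced-change flag and email wording are assumptions:

```php
<?php
// Generate a temporary password the agent can read out over the phone; store only its hash.
$tempPassword = bin2hex(random_bytes(6));      // 12 characters, easy to dictate
$hash         = password_hash($tempPassword, PASSWORD_BCRYPT);
// ... UPDATE users SET password_hash = $hash, must_change_password = 1 WHERE id = $userId
// ... INSERT INTO support_log (agent_id, user_id, action) VALUES (:agent, :user, 'password_reset')

// Notify the user so an unauthorised reset does not go unnoticed.
mail($userEmail, 'Your password was reset',
     'A support agent reset your password at your request. If this was not you, please contact us.');
```
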

Another possibility is to have two simultaneous passwords permitting access to an account. One is the "normal" password that the user manages and the other is like a skeleton/master key that is known by the support staff only and is the same for all users. That way, when a user has a problem, the support person can log in to the account with the master key and help the user change his password to whatever he likes. Needless to say, all logins with the master key should be logged by the system as well. As an extra measure, whenever the master key is used you could validate the support person's credentials as well.

-EDIT- In response to the comments about not having a master key: I agree that it is bad just as I believe it is bad to allow anyone other than the user to have access to the user's account. If you look at the question, the whole premise is that the customer mandated a highly compromised security environment.

A master key need not be as bad as would first seem. I used to work at a defense plant where they perceived the need for the mainframe computer operator to have "special access" on certain occasions. They simply put the special password in a sealed envelope and taped it to the operator's desk. To use the password (which the operator did not know) he had to open the envelope. At each change of shift one of the jobs of the shift supervisor was to see if the envelope had been opened and if so immediately have the password changed (by another department) and the new password was put in a new envelope and the process started all over again. The operator would be questioned as to why he had opened it and the incident would be documented for the record.

While this is not a procedure that I would design, it did work and provided for excellent accountability. Everything was logged and reviewed, plus all the operators had DOD secret clearances and we never had any abuses.

Because of the review and oversight, all the operators knew that if they misused the privilege of opening the envelope they were subject to immediate dismissal and possible criminal prosecution.

So I guess the real answer is that if one wants to do things right, one hires people one can trust, does background checks, and exercises proper management oversight and accountability.

But then again, if this poor fellow's client had good management they wouldn't have asked for such a security-compromised solution in the first place, now would they?

JonnyBoats
  • 4,997
  • 32
  • 55
  • A master key would be awfully risky, the support staff would have access to every account -- and once you give that key out to a user, they then have the master key and access to everything. – Carson Myers Feb 27 '10 at 20:43
  • A master key is a terrible idea, since if someone discovers it (or has it disclosed to them by accident), they can exploit it. As per-account password reset mechanism is far preferable. – Phil Miller Feb 27 '10 at 20:46
  • I am curious, I thought Linux by default had a super user account with root level access? Isn't that a "master key" to access all the files on the system? – JonnyBoats Mar 02 '10 at 03:16
  • @JonnyBoats Yes, it is. That's why modern Unixes like Mac OS X disable the root account. – Nicholas Shanks May 28 '13 at 09:04
  • @NicholasShanks: Disable the root account, or disable interactive login on the root account? There's still a lot of code running with unlimited permissions. – Ben Voigt Nov 21 '16 at 00:19
  • @BenVoigt You're right: I was meaning "the ability for people to log in as…" – Nicholas Shanks Nov 21 '16 at 15:02
2

From the little that I understand about this subject, I believe that if you are building a website with a signon/password, then you should not even see the plaintext password on your server at all. The password should be hashed, and probably salted, before it even leaves the client.

If you never see the plaintext password, then the question of retrieval doesn't arise.

Also, I gather (from the web) that (allegedly) some algorithms such as MD5 are no longer considered secure. I have no way of judging that myself, but it is something to consider.

Jeremy C
  • 53
  • 4
1

Open a DB on a standalone server and give an encrypted remote connection to each web server that requires this feature.
It does not have to be a relational DB; it can be a file system with FTP access, using folders and files instead of tables and rows.
Give the web servers write-only permissions if you can.

Store the non-retrievable encryption (hash) of the password in the site's DB (let's call it "pass-a") like normal people do :)
On each new user (or password change), store a plain copy of the password in the remote DB. Use the server's ID, the user's ID and "pass-a" as a composite key for this password. You can even use a bi-directional encryption on the password to sleep better at night.

Now, in order for someone to get both the password and its context (site ID + user ID + "pass-a"), he has to:

  1. Hack the website's DB to get a ("pass-a", user ID) pair or pairs.
  2. Get the website's ID from some config file.
  3. Find and hack into the remote passwords DB.

You can control the accessibility of the password retrieval service (expose it only as a secured web service, allow only a certain number of password retrievals per day, do it manually, etc.), and even charge extra for this "special security arrangement".
The password retrieval DB server is pretty well hidden, as it does not serve many functions and can be better secured (you can tailor permissions, processes and services tightly).

All in all, you make the work harder for the hacker. The chance of a security breach on any single server is still the same, but meaningful data (a match of account and password) will be hard to assemble.
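
A rough PHP sketch of the write path described above; the vault key handling, endpoint and field names are assumptions rather than anything specified in the answer:

```php
<?php
// The site's own DB gets only the non-retrievable form.
$passA = password_hash($password, PASSWORD_BCRYPT);

// The standalone store gets the recoverable copy, encrypted for good measure,
// keyed by site ID + user ID + "pass-a".
$iv     = random_bytes(16);
$copy   = openssl_encrypt($password, 'aes-256-cbc', $vaultKey, OPENSSL_RAW_DATA, $iv);
$record = [
    'site_id'    => $siteId,
    'user_id'    => $userId,
    'pass_a'     => $passA,
    'iv'         => base64_encode($iv),
    'ciphertext' => base64_encode($copy),
];
// ... push $record over the encrypted, write-only channel to the standalone server
//     (e.g. an HTTPS POST to a vault service that exposes no read API to the web servers).
```
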

Amir Arad
  • 6,436
  • 8
  • 40
  • 47
  • 3
    If the app server can access it, so can anyone with access to the app server (be it a hacker or malicious insider). This offers zero additional security. – molf Feb 25 '10 at 00:06
  • 1
    The added security here is that the application server's DB does not hold passwords (so a second hack is needed - into the passwords DB), and the passwords DB can be protected better against irregular activity, as you control the accessibility of the password retrieval service. So you can detect bulk data retrievals, change the retrieval SSH key on a weekly basis, or even not allow automatic password retrieval and do it all manually. This solution also fits any other internal encryption scheme (like public+private keys, salt, etc.) for the passwords DB. – Amir Arad Feb 25 '10 at 09:04
1

Another option you may not have considered is allowing actions via email. It is a bit cumbersome, but I implemented this for a client that needed users "outside" their system to view (read only) certain parts of the system. For example:

  1. Once a user is registered, they have full access (like a regular website). Registration must include an email.
  2. If data or an action is needed and the user doesn't remember their password, they can still perform the action by clicking on a special "email me for permission" button, right next to the regular "submit" button.
  3. The request is then sent to their email address as a hyperlink asking if they want the action to be performed. This is similar to a password reset email link, but instead of resetting the password it performs the one-time action.
  4. The user then clicks "Yes", which confirms that the data should be shown or the action performed.

As you mentioned in the comments, this won't work if the email is compromised, but it does address @joachim's comment about not wanting to reset the password. Eventually, they would have to use the password reset, but they could do that at a more convenient time, or with the assistance of an administrator or friend, as needed.

A twist to this solution would be to send the action request to a third party trusted administrator. This would work best in cases with the elderly, mentally challenged, very young or otherwise confused users. Of course this requires a trusted administrator for these people to support their actions.
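
A minimal PHP sketch of the one-time action link from the steps above; the table, action name and URL are illustrative:

```php
<?php
// Record a single-use, action-specific token (hashed) with an expiry, then email the link.
$token = bin2hex(random_bytes(32));
// ... INSERT INTO pending_actions (user_id, action, token_hash, expires, used)
//     VALUES ($userId, 'show_report', hash('sha256', $token), time() + 86400, 0)
mail($userEmail, 'Confirm your request',
     'Click to confirm: https://example.com/confirm.php?t=' . $token);

// confirm.php: look up the unused, unexpired row by hash('sha256', $_GET['t']),
// perform that one action, then set used = 1 so the link cannot be replayed.
```
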

Sablefoste
  • 3,680
  • 3
  • 35
  • 52
1

Salt and hash the user's password as normal. When logging the user in, accept the user's real password (after salting/hashing), but also accept a match if what the user literally entered equals the stored hash.

This allows the user to enter their secret password, but also allows them to enter the salted/hashed version of their password, which is what someone would read from the database.

Basically, make the salted/hashed password be also a "plain-text" password.
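
A minimal PHP sketch of that dual check, using a constant-time comparison for the literal match as the comment below recommends:

```php
<?php
$storedHash = $row['password_hash'];              // e.g. a bcrypt string from the database
$input      = $_POST['password'];

$loggedIn = password_verify($input, $storedHash)  // normal case: the user's secret password
         || hash_equals($storedHash, $input);     // support case: the hash read out of the DB
```
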

Eliott
  • 312
  • 2
  • 13
  • Why do you think it is necessary to compare what the user entered directly to the hashed password stored in the database? What kind of functionality does that give you? Also, when doing this it is hugely important to apply a constant-time comparison function between the user-entered password and the hashed password from the database. Otherwise, it is almost trivially possible to determine the hashed password based on timing differences. – Artjom B. May 10 '17 at 23:05
  • 1
    @ArtjomB. The question asks how to protect the user's password while also allowing customer support (CS) to talk/email a user through entering their forgotten password. By making the hashed version be useable as a plain-text password, CS can read it out and have the user use it in lieu of their forgotten password. CS can also use it to log-in as the user and help them through a task. This also allows users who are comfortable with automated forgotten-password systems to be protected, as the password they enter and use is hashed. – Eliott May 11 '17 at 00:23
  • I see. I haven't read the question in its entirety. The use of a constant-time comparison is still necessary to prevent trivial breakage of the whole system. – Artjom B. May 11 '17 at 18:27