Author Topic: Essays on Proper Storage of Site Passwords  (Read 4506 times)
mouser
« on: June 12, 2012, 01:41:08 AM »

Two interesting essays on how to properly store and handle user passwords for a site. It's not quite as simple as you might think -- just salting and applying a hash function isn't good enough.


Also of related interest:


« Last Edit: June 12, 2012, 04:58:15 AM by mouser »
phitsc
« Reply #1 on: June 12, 2012, 02:21:12 AM »

Very interesting!

Ath
« Reply #2 on: June 12, 2012, 02:31:27 AM »

+1 a very good read!

Mark0
« Reply #3 on: June 12, 2012, 05:36:05 PM »

Nice, thanks!

Renegade
« Reply #4 on: June 12, 2012, 10:21:11 PM »

I remember a Security Now! show a few years ago where they explained exactly how iteration increases entropy, and that the net effect is cumulative rather than a single step. It's very much the same thing as the password-hash stretching they're talking about here. The discussion was in a symmetric-cryptography context (IIRC), but the principles are pretty much the same.

It's kind of funny how these exact same issues come up again and again in security. You'd think people would have learned their lessons by now...  undecided

Anyway, the articles were good and tightly focused on that issue.

TaoPhoenix
« Reply #5 on: June 13, 2012, 01:54:42 AM »

A few point by point comments from the Brian Krebs article:

"Separate password breaches last week at LinkedIn, eHarmony and Last.fm exposed millions of credentials, and once again raised the question of whether any company can get password security right. "

Hmm, I hadn't heard about the eHarmony and Last.fm breaches. And an interesting choice of phrase -- "whether *any* company can get it right" (emphasis mine).

"Ptacek:  The difference between a cryptographic hash and a password storage hash is that a cryptographic hash is designed to be very, very fast. ... Well, that’s the opposite of what you want with a password hash. You want a password hash to be very slow. "

Okay, so I can *almost* see a somewhat smaller, "less important" site like Last.fm making this mistake. (Although even they are too big.) But LinkedIn strikes me as different. In some ways, apart from a couple of annoying software features that lead to address-book invasions, I respect LinkedIn more than any of the "recreational" social networks. LinkedIn serves a fairly high grade of professional -- not that many fast-food workers, etc. -- so I would expect a very demanding clientele. Wouldn't anyone at that level have wanted LinkedIn to just spend $50,000 on a month's worth of consulting to review their overall practices? The security guy Brian Krebs talked to nailed it in twelve seconds. Add another $100,000 for a two-man security programming team for a year. Done (sorta).

Okay, here we go; further down:
"Ptacek: At a certain point, the cost of migrating that is incredibly expensive. And securing Web applications that are as complex as LinkedIn is an incredibly hard problem."

So now we get a new question: maybe someone *did* notice, but it got tabled as a migration-cost issue. That's a whole different notion.

Edit 2:
So okay, in the NY Times article, it seems my off-the-cuff guess wasn't so bad after all:
"Mr. Grossman estimates that the cost of setting up proper password, Web server and application security for a company like LinkedIn would be a one-time cost of “a couple hundred thousand dollars."


« Last Edit: June 13, 2012, 02:09:18 AM by TaoPhoenix »
mouser
« Reply #6 on: June 13, 2012, 04:11:02 AM »

Good observations from Tao.

Let me add a few of my own.

First, I think this whole debacle is just more evidence that there's real value in a core user-management code project that can be reused when building custom sites and that focuses on getting things like this right -- which is exactly the kind of thing I hope to accomplish with my Yumps project.

I'm guessing most modern sites get the password thing mostly right. The most important things are salting and hashing. Using a slow password hash instead of a fast cryptographic hash is important, but not nearly as much as the core concepts of salting + hashing. Only a *really* sophisticated and dedicated attacker is going to be able to use timing info to exploit the "mistake" of using a fast cryptographic hash.

In fact, I think you could argue that you are about a trillion times more likely to be attacked by someone trying to crash your site by hammering it with requests than by someone trying to exploit timing differences in password checks -- and a slow password check might even hurt you there, unless you put an anti-hammering mechanism in place, which is actually a bit of work to get right. Furthermore, a timing attack on passwords is likely to be pretty low on the list of exploits anyone searches for. Before worrying too much about that, I would worry about network traffic interception, forcing https logins, and a bunch of other things. If you are building a site you think will be so attractive that world-class hackers will attempt timing attacks on your user passwords, you might want to reconsider the entire concept of allowing simple password logins, and implement additional checks with things like hardware tokens.
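For a sense of how different "fast" and "slow" actually are here, a minimal timing sketch (Python standard library only; the iteration count is just an illustration, not a tuned recommendation):

Code: [Select]
import hashlib, os, time

password = b"hunter2"
salt = os.urandom(16)

t0 = time.perf_counter()
fast = hashlib.sha256(salt + password).hexdigest()              # one round: microseconds
t1 = time.perf_counter()
slow = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)   # stretched: tens of milliseconds
t2 = time.perf_counter()

print(f"single SHA-256: {t1 - t0:.6f}s   PBKDF2 x 200k: {t2 - t1:.6f}s")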

A meta issue, which Tao touches on above when he talks about migration costs, is building in a mechanism that lets you migrate passwords to new approaches. So don't just store a password hash and salt; store extra info like when the password was last changed and the hashing algorithm/parameters used when it was stored. That way, if you decide to move from 5000 rounds of blowfish to 10000 rounds of sha512 as your hash algorithm, you can identify which algorithm was used to store each user's password, and you won't break people's logins as you migrate them (it looks like some of the modern password-hashing algorithms are being clever and embedding this information in the hashed output to make it easier to keep track of). And have an automated system in place for forcing users to upgrade their passwords, etc.
« Last Edit: June 13, 2012, 04:31:57 AM by mouser »
db90h
« Reply #7 on: June 13, 2012, 05:16:36 AM »

Quote from: mouser on June 13, 2012, 04:11:02 AM
A meta issue, which Tao touches on above when he talks about migration costs, is building in a mechanism that lets you migrate passwords to new approaches. So don't just store a password hash and salt; store extra info like when the password was last changed and the hashing algorithm/parameters used when it was stored. [...]

*EXCLUDING SALTS FOR SIMPLICITY OF DISCUSSION*

As I posted elsewhere, my approach to updating my own personal database was to use double hashing. So, say I had the initial passwords stored as:

SHA1(password)

Now, to update them I could either use metadata like you say, and update when they log in, OR double hash ... That's right, hash the hash Wink. The new algorithm would then become:

SHA2-512(SHA1(password))

...

or to be precise with salting,

SALT^SHA2-512(SALT^SHA1(password))

...

This is an easy way to update existing unbreached databases to new hashing algorithms; it also increases the computational complexity at the same time, and -- as an added benefit -- creates a 'unique' combination of algorithms that can serve to further protect you. Later, some day, if I need to strengthen the hashing again, I can continue to add algorithms, using a third or fourth round of hashing the hash of the original password.
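A minimal sketch of that one-shot upgrade, assuming plain SHA-1 hex digests were stored (Python; salts omitted as above, and the table/function names are purely illustrative):

Code: [Select]
import hashlib

def upgrade_record(old_sha1_hex: str) -> str:
    """Wrap an existing SHA-1 hex digest in SHA-512, without needing the password."""
    return hashlib.sha512(old_sha1_hex.encode("ascii")).hexdigest()

def verify(password: str, upgraded_hex: str) -> bool:
    """Recompute SHA-512(SHA-1(password)) at login and compare."""
    inner = hashlib.sha1(password.encode("utf-8")).hexdigest()
    return hashlib.sha512(inner.encode("ascii")).hexdigest() == upgraded_hex

# One-shot migration over an in-memory table; a real database would do this in a script or SQL.
users = {"alice": hashlib.sha1(b"correct horse").hexdigest()}
users = {name: upgrade_record(h) for name, h in users.items()}
assert verify("correct horse", users["alice"])

The key point is that upgrade_record never needs the original password, so the whole table can be converted in one pass.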
mouser
« Reply #8 on: June 13, 2012, 05:27:00 AM »

It might be more flexible if you instead moved to using a prefix in the stored hash that contains the meta information.

So your original stored hash strings are: SHA1(password)
Your new ones would be:
HASHVERSION_2:SHA2-512(SHA1(password))

The only change is that you would explicitly be storing some metadata, which would make it easier to identify which users had upgraded their passwords, and easier to change schemes again in the future.
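A sketch of how that prefix might drive verification at login (Python; salts again omitted, and the prefix/function names are just illustrative):

Code: [Select]
import hashlib

def hash_v2(password: str) -> str:
    """Store the scheme version alongside the digest, e.g. 'HASHVERSION_2:<hex>'."""
    inner = hashlib.sha1(password.encode("utf-8")).hexdigest()
    return "HASHVERSION_2:" + hashlib.sha512(inner.encode("ascii")).hexdigest()

def verify(password: str, stored: str) -> bool:
    version, _, digest = stored.partition(":")
    inner = hashlib.sha1(password.encode("utf-8")).hexdigest()
    if version == "HASHVERSION_2":
        return hashlib.sha512(inner.encode("ascii")).hexdigest() == digest
    # Anything without a known prefix is treated as a legacy, plain SHA-1 record.
    return inner == stored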
mouser
« Reply #9 on: June 13, 2012, 05:29:19 AM »

I will say that it's a clever and neat idea you have there: running the new, higher-security hash on the OLD_HASH rather than on the plain text, so that you can in fact upgrade the entire database any time you upgrade your hash algorithm, rather than having to wait until each person next logs in.
db90h
« Reply #10 on: June 13, 2012, 05:51:41 AM »

Well, thanks. I dunno if someone has done it before or not, but it seemed the only way to do it without waiting for users to log in. Necessity is the mother of invention and all that. I'm sure others have done the same.

While reading those articles, I noticed that the premise of what I'm doing is very similar to what they suggest with, say, PBKDF2 ... That algorithm apparently iterates the hash in a similar fashion, X times. Now, they are not rehashing the *plaintext representation of the hash*, but rehashing the previous iteration directly; still, I think the result is similar if they increase the iteration count. Of course, they go through far more iterations, making it more secure... except that it's not clear whether they allow multiple algorithms to be used.

Of course, PBKDF2's intention isn't to allow instant updating of a database, but to provide strong initial security.

In my case, I actually did change the database field, just to be certain which password hashes had been updated (since I didn't initially do it all at once, but later did). So, while not storing metadata, I did implicitly give an indication of which accounts had been updated to the new algorithm.
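To make the distinction concrete -- rehashing the raw digest each round versus rehashing its plaintext (hex) representation -- a small sketch (Python; a single algorithm is used here purely for illustration, and real PBKDF2 additionally uses HMAC and XORs block outputs):

Code: [Select]
import hashlib

def iterate_raw(password: bytes, rounds: int) -> bytes:
    """Feed the raw digest of each round into the next round."""
    d = hashlib.sha512(password).digest()
    for _ in range(rounds - 1):
        d = hashlib.sha512(d).digest()
    return d

def iterate_hex(password: bytes, rounds: int) -> str:
    """Feed the hex (plaintext) representation of each digest into the next round,
    as in the hash-the-hash scheme above."""
    d = hashlib.sha512(password).hexdigest()
    for _ in range(rounds - 1):
        d = hashlib.sha512(d.encode("ascii")).hexdigest()
    return d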
« Last Edit: June 13, 2012, 06:19:59 AM by db90h »
TaoPhoenix
« Reply #11 on: June 13, 2012, 06:23:35 AM »


Bruce Schneier posted an article somewhere on his blog about double-layering different algorithms on top of each other. From what I recall, you've added a couple of new corollaries.
db90h
« Reply #12 on: June 13, 2012, 07:01:51 AM »

Quote from: TaoPhoenix on June 13, 2012, 06:23:35 AM
Bruce Schneier posted an article somewhere on his blog about double-layering different algorithms on top of each other. From what I recall, you've added a couple of new corollaries.

I figured SOMEONE had done this before, as it is the ONLY way to INSTANTLY update an entire database. Still, sometimes the simplest things are overlooked. I don't know whether he mentioned it as a method to improve security or to update a database, but still.

After reading this thread, I'm actually going to go ahead and take it a step further and add one or two more algorithms on top... and that's the beautiful thing: it can be extended indefinitely. As long as nobody gets access to the code, they won't know the algorithm either.
mouser
« Reply #13 on: June 13, 2012, 07:08:10 AM »

I'm always a bit wary of layering stuff like this, though... all it takes is one bad hash algorithm that by accident maps all inputs into a small hash space, and you are in trouble.
TaoPhoenix
« Reply #14 on: June 13, 2012, 07:08:56 AM »

If my ailing memory serves, he was definitely remarking on the security side; I think it was in relation to the data -- I don't recall speed considerations.

But heh, I think I also walked into the "security through obscurity" theme a while back -- it's one of those things that protects lower-stakes situations and is somewhat useful, but I learned that you have to assume the algorithm could be discovered.

Re: mouser, you were talking about the damage level of a LinkedIn breach. I think it's worse than it sounds, because it's a dangerous phishing opportunity. I'm grumpy at LinkedIn because they *already* turbo-spam people's address books -- I got two separate ones, and those were from normal accounts. Give attackers an hour logged in and all kinds of fun could happen.
db90h
« Reply #15 on: June 13, 2012, 08:50:10 AM »

Quote from: mouser on June 13, 2012, 07:08:10 AM
I'm always a bit wary of layering stuff like this, though... all it takes is one bad hash algorithm that by accident maps all inputs into a small hash space, and you are in trouble.

If all the algorithms chosen are secure, it should be good... real good. I am not a cryptologist or mathematician, though. I think with each iteration it would grow in strength. Who knows, I may be wrong. Of course, the larger the digest size, the better.

You know what really pisses me off about hackers in general, though? It is *MUCH EASIER* to breach a site than it is to keep one secure. They think they are so smart for exploiting a site, etc... but they have the easier task in almost all cases. Of course, 99% of them are just using exploits discovered by other people, and then think they are so brilliant for doing so.

Just like it is easier to DESTROY than it is to CREATE -- true of everything... same with security.
Renegade
« Reply #16 on: June 13, 2012, 10:14:10 AM »

What I said above applies here to multiple and iterative processes.

The entire question is about entropy. This also goes for compression, though in a different manner.

What you want to do is to maximize entropy when encrypting (or compressing in a sense) data.

By layering on the same algorithm (or another one) you effectively increase the entropy each time you iterate the process.

So, if you want stronger encryption (assuming no exploits against the algorithm), you merely need to run it several times, or use multiple algorithms in succession.

Every time you go through the process, you increase entropy, which basically means stronger encryption.

So yes - SHA512 x 2 is stronger than SHA512 x 1. Or whatever.

IIRC, this is true for symmetric encryption and for hashing.
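A minimal sketch of running algorithms in succession as described (Python 3.6+ for sha3_512; the round count and algorithm choice are arbitrary, and this illustrates the chaining idea only -- it is not a vetted key-derivation function):

Code: [Select]
import hashlib

def chained(data: bytes, rounds: int = 4) -> str:
    """Run two different digests in succession, feeding each output into the next pass."""
    d = data
    for i in range(rounds):
        algo = hashlib.sha512 if i % 2 == 0 else hashlib.sha3_512
        d = algo(d).digest()
    return d.hex()

print(chained(b"example input"))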

db90h
« Reply #17 on: June 13, 2012, 10:36:57 AM »

Quote
By layering on the same algorithm (or another one) you effectively increase the entropy each time you iterate the process.

That is what I thought Wink. So as long as you don't throw a malfunctioning or insecure algorithm into the sequence -- e.g. one that often hashes to 0 or something -- you are good ;p. Myself, I have a policy of using *only* algorithms that produce at least a 512-bit digest. The exception is, of course, the first stage in my chain, which is SHA1, at only 160 bits.

To continue my rant on hackers... part of the problem is how the media treats them -- calling them brilliant, etc. No, it takes brilliance to keep a server secure.

Right now, my #1 problem, and maybe mouser can sympathize, is not having the TIME to dedicate myself to constantly securing and monitoring my server. I have at least 10 different jobs here at my one-man show, and web server admin is *definitely* a job in and of itself.
nudone
« Reply #18 on: June 13, 2012, 11:16:01 AM »

Quote from: db90h on June 13, 2012, 10:36:57 AM
... part of the problem is how the media treats them -- calling them brilliant, etc. No, it takes brilliance to keep a server secure.

Part of the problem is that the media knows scary stories keep people interested (however bogus they tend to be). No one wants to hear good news -- unless it's on the level of a puppy being rescued from a mine shaft.
Stoic Joker
« Reply #19 on: June 13, 2012, 11:28:00 AM »

Quote from: db90h on June 13, 2012, 10:36:57 AM
... part of the problem is how the media treats them -- calling them brilliant, etc. No, it takes brilliance to keep a server secure.

Quote from: nudone on June 13, 2012, 11:16:01 AM
Part of the problem is that the media knows scary stories keep people interested (however bogus they tend to be). No one wants to hear good news -- unless it's on the level of a puppy being rescued from a mine shaft.

 Thmbsup
~ The bubble headed bleach blond comes on at 5
She can tell you about the plane crash with a gleam in her eye
Get the widow on the set, we love dirty laundry ~


-- I'm filling in for 40Hz in the song-lyrics quipping department.  cheesy
db90h
« Reply #20 on: June 13, 2012, 01:50:09 PM »

Quote from: Renegade on June 13, 2012, 10:14:10 AM
The entire question is about entropy. This also goes for compression, though in a different manner.

Indeed, Renegade is right, as always, but I wanted to comment on this when I got a chance, to elaborate on compression, since that is one field where I can claim expertise (being the author of more than one LZ77/LZSS-derivative algorithm). Entropy in compression is indeed different, but similar too. In compression, it represents the minimum theoretical size you can squeeze the data into while keeping it intact (reconstructible on decompression without loss).

In compression though, passing data through more than one compression algorithm does *not* improve entropy. In fact, it may decrease it.

Now, you can pass data through different pre-processing algorithms that re-arrange it and THEN compress it, which improves entropy, but most compressors have these pre-processing algorithms built in. And those are not compression algorithms; they are pre-processing/re-arranging algorithms. For example, with PECompact, making tweaks to x86 code before compression can improve the compression ratio by 20% in many cases, depending on the code (could be more, could be less). LZMA now has this pre-processor (known as BCJ2) built in. There are MANY more that target different types of data. By making these tweaks, you improve the chances of a 'match' in dictionary-based compression (where the compressor matches data it has already seen and emits a backwards reference to it, thereby saving space).

My POINT is to MAKE SURE that nobody misunderstands Renegade's accurate and wise comment as meaning they should pass their data through more than one compression algorithm. I *hate* seeing this: ZIPs inside of RARs, inside of ZIPs, etc... absurd. Don't anybody do that, please Wink.
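A quick way to see that for yourself (Python standard library; the sample data is arbitrary):

Code: [Select]
import os
import zlib

data = (b"the quick brown fox jumps over the lazy dog " * 200) + os.urandom(256)

once = zlib.compress(data, 9)
twice = zlib.compress(once, 9)   # compressing the already-compressed stream

print(len(data), len(once), len(twice))
# Typical result: the second pass is no smaller, and usually slightly larger,
# because the first pass has already squeezed out the redundancy.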
« Last Edit: June 14, 2012, 07:05:21 AM by db90h »
Renegade
« Reply #21 on: June 13, 2012, 10:40:48 PM »

Quote from: db90h on June 13, 2012, 01:50:09 PM
My POINT is to MAKE SURE that nobody misunderstands Renegade's accurate and wise comment as meaning they should pass their data through more than one compression algorithm. I *hate* seeing this: ZIPs inside of RARs, inside of ZIPs, etc... absurd. Don't anybody do that, please Wink.

Oops. Sorry about that. You're quite right: successive compression doesn't guarantee size reduction, and in fact often results in larger files. I didn't clarify that properly and left it open to the wrong impression.

Thanks for the clarification there~! cheesy