
Not surprising at all. It's a standard pattern when programming for performance (which is what 99.9% of programmers are going for 99.9% of the time): once you're sure the answer is wrong, don't bother testing any more of it. And for performance this is correct, because you don't execute unneeded instructions.

The problem, of course, is that in the security realm, breaking out of the loop as soon as the answer is known to be wrong is exactly what creates these side-channel attacks. But since 99.9% of programmers don't normally write code for these environments, when tasked to do so they just do what they're used to doing everywhere else (break early, as soon as you know the checks have failed) without realizing they're leaving an unlocked back door as a result.
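As a concrete sketch (hypothetical function names), the performance habit and the security-safe version differ by only a few lines:

```python
def compare_early_exit(a: bytes, b: bytes) -> bool:
    # the performance habit: stop at the first mismatch
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False  # runtime now reveals how long the matching prefix is
    return True

def compare_fixed_time(a: bytes, b: bytes) -> bool:
    # the security version: always touch every byte, accumulate differences
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

(Caveat: whether the second loop is truly constant-time in an interpreted language depends on the runtime; the structural point is that neither branch nor loop count depends on where the inputs differ.)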



Exactly. If you're doing something like:

    if hashing_function(unknown_pass) == hashed_pass:
you're doing it wrong...

Instead, use the compare function built into your crypto library, which should hopefully be hardened against timing attacks.

In most languages, == and === are not safe against timing attacks.
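In Python, for instance, the stdlib already ships such a function: `hmac.compare_digest`. A sketch of the safe version of the check above (`hashing_function` and the plain SHA-256 stand-in are assumptions for the demo; real systems would use a slow, salted password hash like bcrypt/scrypt):

```python
import hashlib
import hmac

# hypothetical stand-in for the parent's hashing_function
def hashing_function(password: str) -> bytes:
    return hashlib.sha256(password.encode()).digest()

hashed_pass = hashing_function("correct horse battery staple")

def check_password(unknown_pass: str) -> bool:
    # compare_digest examines its full input regardless of where bytes differ
    return hmac.compare_digest(hashing_function(unknown_pass), hashed_pass)
```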


Why would that leak key material? Knowing that you matched a few characters of the hash (based on earlier exit and quicker response time) doesn't tell you that some of the password characters match -- a partially-matching password shouldn't have any relationship to a fully-matching one, right?


Read the article[1] linked in yock's comment for the gory details, but to sum up: instead of having to calculate the entire set of possibilities for a given hash, you only have to iterate through enough guesses to pin down the first byte. Then you do the same thing again for the next byte, and so on. You can't just fix this with some random timing or additional noise; it's a genuinely difficult problem to solve.

It's telling that even the very smart person behind that great write-up made a few mistakes in his initial attempts at a corrected algorithm, mistakes that blew the whole thing open again.

Because it's so subtle, the odds are very good that most current systems are vulnerable to this.

1. https://codahale.com/a-lesson-in-timing-attacks/
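To see why the per-byte leak is enough, here's a self-contained simulation: the target "digest" and the exaggerated per-byte delay are made up for the demo, but the recovery loop works exactly as described above, fixing one byte at a time:

```python
import time

SECRET = bytes.fromhex("a1b2c3")  # made-up 3-byte target for the demo

def leaky_equals(a: bytes, b: bytes) -> bool:
    # early-exit comparison with an exaggerated per-byte cost
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
        time.sleep(0.002)
    return True

def recover(length: int) -> bytes:
    known = b""
    for pos in range(length):
        timings = {}
        for guess in range(256):
            candidate = known + bytes([guess]) + b"\x00" * (length - pos - 1)
            start = time.perf_counter()
            leaky_equals(candidate, SECRET)
            timings[guess] = time.perf_counter() - start
        # the correct byte survives one more loop iteration, so it's slowest
        known += bytes([max(timings, key=timings.get)])
    return known
```

Note the cost to the attacker: 256 guesses per byte instead of 256^n for the whole digest. That linear-vs-exponential gap is the entire attack.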


I understand that much, but, as I said in my comment, that would only work if the partial match were informative about the full match. A partial password match is informative of what the full password is, but a partial hash collision is not informative about the full collision. By design, learning about one hash preimage tells you nothing about other hash preimages.

By contrast, if you were comparing my submitted password to a stored password, I could use your response time to know that I got the first n characters right. Since those n characters are a prefix of the password, I've learned something about the password: I can lock down those characters, and then guess more, etc. That much I understand.

But the case you're talking about is a comparison of hashes. In that case, timing attacks can tell me that my guess's hash's first n characters match the password's hash's first n characters. So now what? How do I adapt future guesses to learn more characters? What have I learned about (a preimage of the hash of) the password?

If partial hash collisions were, in any way, helpful in guessing the full collision, Bitcoin mining algorithms would adaptively change their guesses as they discovered that their nonces only led to a partial match.


To some extent you're right, unless there is a flaw in the hash (fortunately, those are mythical ;)). But leaking the hash itself can still leak useful data (here's a comment explaining how this can be used for an early-exit byte comparison[1]), and for some hashes (such as MD5, SHA, etc.) leaking the salt can significantly reduce the number of hashes that would need to be brute forced. Also, leaking the hash itself can tell an attacker a great deal about the scheme in use, which can either let him accelerate his attacks offline (DES!) or send him away crying (bcrypt/scrypt!) :)

1. https://security.stackexchange.com/questions/111040/should-i...


Depending on the hash used, it could be subject to a length extension attack. Not being susceptible to those was one of the big engineering goals for SHA-3.

https://en.wikipedia.org/wiki/Length_extension_attack
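A toy illustration of why Merkle–Damgård designs (MD5, SHA-1, SHA-2) have this property: the digest is the hash's full internal state, so anyone holding H(secret ‖ msg) can simply resume hashing. Everything below (the compression function, constants, and message strings) is made up and insecure, purely to show the mechanics:

```python
import struct

BLOCK = 16
IV = 0x6A09E667  # arbitrary initial state for the toy hash

def compress(state: int, block: bytes) -> int:
    # made-up, insecure compression function (illustration only)
    for i in range(0, BLOCK, 4):
        (w,) = struct.unpack(">I", block[i:i + 4])
        state = ((state ^ w) * 0x9E3779B1 + 0x12345) & 0xFFFFFFFF
    return state

def md_pad(msg_len: int) -> bytes:
    # 0x80, zeros, then an 8-byte length: the same shape as MD5/SHA padding
    zeros = (-(msg_len + 9)) % BLOCK
    return b"\x80" + b"\x00" * zeros + struct.pack(">Q", msg_len)

def toy_hash(msg: bytes, state: int = IV, total_len=None) -> int:
    data = msg + md_pad(len(msg) if total_len is None else total_len)
    for i in range(0, len(data), BLOCK):
        state = compress(state, data[i:i + BLOCK])
    return state

# The attacker knows the digest of (secret || msg) and the secret's length,
# but not the secret itself.
secret, msg, ext = b"hunter2", b"user=alice", b"&user=admin"
digest = toy_hash(secret + msg)

glue = md_pad(len(secret) + len(msg))        # the padding the server used
forged_msg = msg + glue + ext                # what the attacker submits
forged_digest = toy_hash(ext, state=digest,  # resume from the leaked digest
                         total_len=len(secret) + len(forged_msg))
```

The forged digest matches what the server would compute over `secret || forged_msg`, even though the attacker never saw the secret. A sponge construction like SHA-3 doesn't expose its full state in the digest, which is what closes this hole.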


Damn, I never knew that. What if I added a random delay of a few ms after comparing them?


Nope! If you run the same input enough times, you can collect enough data to subtract the random noise back out; the larger the random range, the more samples you need. The way to properly combat timing attacks is to write constant-time code. Daniel Bernstein has made a career of talking about and implementing this class of crypto.
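You can watch the noise get filtered out in a few lines. Give the leaky comparison some random jitter, then take the median over repeated runs; the per-byte signal survives (secret, delays, and sample counts are all made up and exaggerated for the demo):

```python
import random
import statistics
import time

SECRET = b"secret"

def leaky_check(guess: bytes) -> bool:
    for x, y in zip(guess, SECRET):
        if x != y:
            break
        time.sleep(0.001)                  # per-byte work of an early-exit compare
    time.sleep(random.uniform(0, 0.003))   # the proposed "defensive" random delay
    return guess == SECRET

def median_time(guess: bytes, samples: int = 200) -> float:
    ts = []
    for _ in range(samples):
        start = time.perf_counter()
        leaky_check(guess)
        ts.append(time.perf_counter() - start)
    return statistics.median(ts)  # the jitter washes out; the signal stays
```

A guess whose first byte is right still measures consistently slower than one that is wrong, despite the jitter, which is exactly the "subtract the noise out" point.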


Humor me, I think I figured it out: run an identical "testPass()" function first and time it ('password' == 'password', if you will), then simply wait at least that length of time before acting on the real result. As a bonus, add a random delay after that so adversaries waste time trying to isolate the randomness, as you describe.


If you really want to add some sort of randomness, derive it from the user input. That way the delay depends on what was sent and can't be averaged out. Or use constant-time code (which is hard, for the record!) and not have to deal with any of this.
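One way to implement that suggestion is to key a PRF on the input: the delay is pseudo-random to an observer but identical every time the same input is replayed, so repeated sampling can't separate it from the comparison time. A sketch (the `PEPPER` key and function name are made up; this mitigates, it doesn't replace constant-time code):

```python
import hashlib
import hmac
import time

PEPPER = b"server-side secret"  # hypothetical key, never sent to clients

def input_tied_delay_s(user_input: bytes, max_s: float = 0.005) -> float:
    # deterministic pseudo-random delay derived from the input itself:
    # resending the same input yields the same delay, so averaging over
    # repeats cannot subtract it out the way uniform jitter can be
    digest = hmac.new(PEPPER, user_input, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 * max_s

# usage: sleep for the derived amount after the (still leaky) comparison
time.sleep(input_tied_delay_s(b"some submitted password"))
```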


This was somewhat famously broken in Java until late 2009:

https://codahale.com/a-lesson-in-timing-attacks/

tl;dr: the method for doing "secure" string comparison was vulnerable to timing attacks for much of Java's history.



