Thursday, March 25, 2010

The Re-Keying Conundrum

Eric Rescorla recently posted his view on including a re-keying function in security protocols, and he concluded that on cryptographic grounds re-keying is unnecessary. Re-keying here means refreshing or renegotiating a session key (say an AES key) after a certain amount of traffic has been encrypted under the same key, or a given amount of time has elapsed (say several weeks or months). Rescorla opens his post with the statement that “in my opinion, rekeying is massively overrated, but apparently I've never bothered to comprehensively address the usual arguments”. Re-keying is probably something of a preoccupation for Rescorla at the moment as he is reviewing several IETF proposals that include this functionality, and he is also fresh from completing a proposal for circumventing the well-publicised TLS Renegotiation Attack.

Regularly changing critical parameters is part of the conventional wisdom of security, based on the general principle that reliance on a given security parameter must decrease over time with its use. Extended usage leaks more and more information to an attacker, and security parameters don’t improve with age. But need we be so cautious with the bulk encryption keys of a modern cipher like AES? Do long sessions or large encrypted transfers really give an attacker an advantage that outweighs the additional complexity of including a re-keying sub-protocol?

The Unicity Distance

Let’s begin with the observation that only a small amount of data needs to be encrypted under a given key before the key is uniquely determined. This threshold is called the unicity distance of a cipher, and for symmetric key ciphers it is proportional to the entropy of the key. So, for example, DES and AES each require only a handful of plaintext/ciphertext blocks before the corresponding key is uniquely determined. Therefore, given just a few plaintext and ciphertext pairs, an attacker has sufficient information in principle to mount a brute force attack. In the case of public key systems the unicity distance is in fact zero, since a certificate contains all the information required to determine the corresponding private key.
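To put rough numbers on this, here is a back-of-the-envelope sketch in Python (an illustration I am adding, modelling the cipher as a random permutation): after n known plaintext/ciphertext block pairs, the expected number of wrong keys still consistent with all n pairs is about 2^(k - nb) for a k-bit key and b-bit block.

# Expected number of spurious keys surviving n known plaintext/ciphertext
# block pairs, modelling a k-bit-key, b-bit-block cipher as a random
# permutation: roughly 2^(k - n*b).
def expected_spurious_keys(key_bits, block_bits, n_pairs):
    return 2.0 ** (key_bits - n_pairs * block_bits)

for name, k, b in [("DES", 56, 64), ("AES-128", 128, 128), ("AES-256", 256, 128)]:
    for n in (1, 2, 3):
        print(f"{name}, {n} pair(s): ~{expected_spurious_keys(k, b, n):.3g} spurious keys")

Even for AES-256, two or three pairs pin the key down uniquely, which is the "handful of encryptions" mentioned above.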

More General Attacks

One aspect of the re-keying issue is therefore to determine whether attacks more efficient than brute force become available as the amount of data encrypted under a given key is permitted to increase. A general upper bound is provided by the birthday paradox which, under the assumption that encryption acts as a random mapping, states that ciphertext blocks will start repeating for purely statistical reasons once the amount of encrypted data approaches the square root of the number of possible ciphertext blocks. At this point the attacker begins to learn something about the plaintext (though nothing that directly breaks the key), which is one reason a basic requirement for the new AES cipher was a block size of at least 128 bits. On the other hand, linear and differential cryptanalysis are attacks that directly use large amounts of encrypted data to amplify small statistical biases in the structure of a cipher and so determine certain key bits. In general there is no simple relation between the block size of a cipher and the data requirements of these attacks, since the number of rounds is also important. For DES, linear cryptanalysis was the more efficient of the two attacks, and it still required 2^{44} known plaintext/ciphertext pairs, amounting to about 280 terabytes of data. Even under more favourable circumstances, the data requirements for mounting this type of attack on AES-256 are considerably higher.
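As a quick sanity check on these figures, here is a short Python sketch (my arithmetic, not Rescorla's; it assumes each known-plaintext pair for DES contributes one 8-byte plaintext block and one 8-byte ciphertext block):

def pretty(n_bytes):
    # Convert a byte count to a human-readable decimal unit.
    units = ["B", "KB", "MB", "GB", "TB", "PB", "EB"]
    i = 0
    while n_bytes >= 1000 and i < len(units) - 1:
        n_bytes /= 1000.0
        i += 1
    return f"{n_bytes:.0f} {units[i]}"

# Birthday bound: blocks start repeating after about 2^(b/2) blocks
# of output from a b-bit block cipher.
for b in (64, 128):
    n_blocks = 2 ** (b // 2)
    print(f"{b}-bit block: repeats expected after ~{pretty(n_blocks * b // 8)}")

# Matsui's linear attack on DES: 2^44 known plaintext/ciphertext pairs,
# 16 bytes per pair (8-byte plaintext block + 8-byte ciphertext block).
print("DES linear cryptanalysis:", pretty(2 ** 44 * 16))

A 64-bit block cipher starts repeating after a few tens of gigabytes, within reach of a single long session; a 128-bit block cipher does not reach the same point until hundreds of exabytes.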

Rescorla quotes two IETF submissions where re-keying is justified on the grounds of limiting either the time available to perform cryptanalysis or the amount of ciphertext that an attacker can collect. While such precautions sound prudent, they make little sense with respect to AES with 128- or 256-bit keys, since the corresponding key and ciphertext spaces are so vast. In short, any attacks based on counting, enumerating, or storing encrypted data are not expected to be feasible. The recent so-called Luxembourg attacks on AES, and their variants, are very data intensive, but limiting the traffic encrypted under a given key to (say) 1 petabyte would thwart them.
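If a protocol designer did want to enforce such a traffic cap, the bookkeeping is trivial. Here is a minimal sketch (the class and names are hypothetical, not from any IETF proposal): count the bytes encrypted under the current key and signal when the cap is reached.

# A minimal, hypothetical volume-based re-keying policy: track bytes
# encrypted under the current session key and flag when a cap is hit.
class RekeyPolicy:
    def __init__(self, cap_bytes=10 ** 15):  # ~1 petabyte, as suggested above
        self.cap_bytes = cap_bytes
        self.bytes_used = 0

    def record(self, n_bytes):
        """Account for n_bytes just encrypted; True means a re-key is due."""
        self.bytes_used += n_bytes
        return self.bytes_used >= self.cap_bytes

    def reset(self):
        """Call once the session key has actually been refreshed."""
        self.bytes_used = 0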

But Expect Re-Keying

Even so, cryptographic arguments are most convincing to cryptographers, and cryptographic protocols are implemented within systems that are not engineered to the same level of security as, say, AES, nor can they be. The manager responsible for approving the transfer of sensitive financial data across the internet will probably feel better knowing that all possible precautions are taken to prevent data loss, and that will include re-keying. It is simply viewed as a prudent operational security practice, no matter how convincing the cryptographic arguments to the contrary. Perhaps the most convincing argument for re-keying, if not the only argument, is to limit the consequences of a key compromise. The manager mentioned above will probably be reassured to know that an accidentally exposed key compromises only 4 hours' worth of transmitted data, and none of the data sent before or after that window. Of course you need more than just re-keying to guarantee such a property, but no amount of strong cryptography alone will give it to you either. So we may agree with Eric that re-keying is overrated, but it is certainly rated.
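As an aside on what that property involves: confining the damage from an exposed key to one window requires, at minimum, that old keys cannot be recomputed from the current one. A one-way hash ratchet gives the backward half of this, as in the following sketch (an illustration of the general idea, not something the post or Rescorla prescribes):

import hashlib

# Derive each epoch's key by hashing the previous one. A key captured in
# epoch n cannot be run backwards to recover earlier epochs (preimage
# resistance), though an attacker holding it can still ratchet forward,
# so protecting *future* traffic needs fresh key agreement as well.
def next_epoch_key(current_key: bytes) -> bytes:
    return hashlib.sha256(b"rekey-v1" + current_key).digest()

key = bytes(32)  # stand-in for an initial key from a real key exchange
for epoch in range(3):
    print(f"epoch {epoch}: key = {key.hex()[:16]}...")
    key = next_epoch_key(key)

This is why re-keying alone is not enough: the 4-hour guarantee also depends on how the fresh keys are generated and how promptly the old ones are erased.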

1 comment:

Unknown said...

As far as I can tell, it is completely painless to protect previously encrypted data by periodically rekeying, when implemented efficiently. For instance, the authenticated encryption scheme that I maintain on GitHub, called Calico, currently does not rekey at all, but it looks like adding rekeying to it would not noticeably impact performance or memory usage, nor would it add any extra packet overhead. But the advantage of being able to run the same connection for 30 days without worrying about key compromise is fantastic =)

See the protocol sketch here for more details:

http://catid.mechafetus.com/news/news.php?view=374

And hopefully soon it will be implemented in Calico here:

https://github.com/catid/calico
