Wednesday, March 31, 2010

MoMA acquires the @ Symbol

image

The MoMA in New York has added (“acquired” in museum-speak) the @ symbol to the collection of its Department of Architecture and Design, as reported in an Inside/Out article. The article notes that @ was included in the original ASCII set defined in 1963 as shorthand for the common accounting phrase “at the rate of”. Its use in formatting email addresses came a few years later.

In 1967, American electrical engineer Ray Tomlinson joined the technology company Bolt Beranek and Newman (BBN), where he created the world’s first e-mail system a few years later, in 1971, using a Model KSR 33 Teletype device. BBN had a contract from the Advanced Research Projects Agency of the U.S. Department of Defense to help develop ARPAnet, an early network from which the Internet later emerged. Working with Douglas Engelbart on the whole program, Tomlinson was in particular responsible for developing the sub-program that sends messages between computers on this network. It was the first system able to send mail between users on different hosts connected to the ARPAnet; previously mail could be exchanged only between users of the same computer.

In January 1971, @ was an underused jargon symbol lingering on the keyboard and marred by a very limited register. By October, Tomlinson had rediscovered and appropriated it, imbuing it with new meaning and elevating it to a defining symbol of the computer age. He chose the @ for his first e-mail because of its strong locative sense—an individual, identified by a username, is @ this institution/computer/server, and also because…it was already there, on the keyboard, and nobody ever used it.

The MoMA collection attributes @ to Mr. Tomlinson as a contemporary work of art, and the image added to the gallery is “displayed in ITC American Typewriter Medium, the closest approximation to the character used by a Model 33 Teletype in the early 1970s”.


Sunday, March 28, 2010

A look back at posts in March 2009

Here is a quick review of my posts from this time last year.

Some dramatic news that eventually came to my attention was The Great Laptop Giveaway at US Airports, where a study sponsored by Dell concluded that over 12,000 laptops go missing each week in US airports. More big numbers were reported in The Flap over Twitter, where new statistics for Twitter showed that almost every metric was growing exponentially, and the micro-blogging service was dominated by power laws. I also posted on Twitter and Extremistan, discussing how Twitter provides the basis for extreme behaviour since we are connecting a viral technology with social media. Extremistan is a mythical country introduced in Taleb's Black Swan book to characterize non-Gaussian dynamics.

I had spent some time tracking down a copy of the paper Randomness Tests for Packed Malware, which proposed statistical tests to distinguish packed malware from purely random data. Sticking with statistical arguments, there was also a New Birthday Attack on DJBDNS, a security-focused implementation of DNS.

I had also come across a presentation The Positive Trust Model and Whitelisting discussing the general merits of whitelisting. Whitelisting is gaining credence, and while it may not be rapidly replacing blacklisting technologies, several companies are opting for hybrid solutions as the exponential growth of malware variants is simply outstripping the scan-and-detect (sometimes called the block-and-tackle) paradigm.

There was also a repost of The Wisdom of a Random Crowd of One describing how Google uses link data to create the PageRank algorithm. The post was created in the context of answering the question of how we in IT Security can make better use of data. Finally, I uploaded a few more of my FreeMind maps for my posts to the FreeMind Flash server, which provides an online navigation service. Now you can get all the FreeMind sources here.

Saturday, March 27, 2010

Some Clarification on the RSA Power Attack

I recently posted on some new work by researchers at the University of Michigan which showed how an RSA private key can be recovered by sifting through a collection of faulty signatures. The faults are created by lowering the power to the chip computing the signature, causing bits to be flipped in the signature multiplication chain.

I remarked that the coverage from The Register was a bit unfair in its headline of OpenSSL being broken, which was just the cryptographic library used in the proof-of-concept experiment to test the private key recovery procedure. But as pointed out by Bradley Kuhn, the researchers seemed to stress the weakness of OpenSSL against the attack by mentioning it in both the abstract and the conclusion of their paper.

But there have been some other posts bordering on the hysterical. Techie Buzz, in 1024 Bit Cracked, New Milestone, leads the chorus proclaiming RSA’s demise:

The RSA encryption was believed to be quite safe and this level of a crack was not achieved, until now. The methods used here are pretty low level and have given results in 100 hours. The crack which was assumed to take a lifetime with brute force, has taken a mere four days. This breaks the very backbone of RSA which believes that as long as the private key is safe, it is impossible to break in, unless guessed.

The post celebrates the fault-based factoring as an advance on the 768-bit factoring record achieved late last year. However the factoring of the RSA 768-bit modulus was unaided by any tricks, and the researchers used modern factoring methods in a distributed computation amounting to over 10^{20} operations. The results are simply not comparable since the fault-based attack is so advantageous to the attacker.

A more sober review is presented by Phil Brass, of Accuvant Insight, in his post Recent Encryption Research Demystified, where he describes the publicity of the attack as “headline hyperbole”. You should not be too worried about your online banking service since

to carry out this attack on an online banking server, the attacker would need physical access to the online banking server’s power supply, which means they would need to be inside the bank’s data centre.  Given the “wealth” of other targets available to an attacker standing inside of a bank’s data centre, theft of the online banking web server private SSL key by a difficult and time-consuming voltage regulation attack seems rather unlikely.

And as to the key recovery experiment itself

The really strange thing about this paper is that while the researchers claim to have implemented the attack on a full-size SPARC/Linux/OpenSSL workstation, the actual hardware was a Xilinx Virtex2Pro FPGA, which was emulating a SPARC CPU, and which the researchers claim is representative of an off-the-shelf commercial embedded device. It seems as if they are trying to have it both ways – i.e. it is an attack against a full-size workstation, and by the way it also is an attack against something you might see in a typical embedded system.

He provides a good summary of the attack and offers that a better headline would be “Obscure bug in OpenSSL library poses little risk to consumers.”  A similar reality check for the hoopla is given by Nate Lawson of Root Labs Rdist, who begins by recalling that the brittleness of the RSA signature operation was identified in 1997 (referenced by the Michigan researchers), with the advice to verify all signatures very carefully before returning them. Lawson provides some more details on the error checking steps taken for signature computations in OpenSSL and he concludes that a much better job could have been done. He is also a bit sceptical of the experimental environment but finally concludes “this was a nice attack but nothing earth-shattering”.

Finally, there is another de-hyped review by Luther Martin at Voltage, where he refers to the PBA Attack after the initials of the three Michigan authors. He states that

Devices that are designed to be secure, like HSMs and smart cards, filter the power so that you can't do attacks like the PBA attack, and with devices that aren't designed to be secure, there's always an easier way to recover a key from them than doing something like the PBA attack. This means that we won't be seeing hackers using the PBA attack any time soon, but you'd never think this from seeing the way it was reported by the media.

Fortunately for this incident Voltage products use DSA rather than RSA for signatures, so advanced debunking for customers will not be required.

All in all, most everyone agrees (and is happy to say) that the work is clever and worthwhile to undertake, adding to our operational knowledge of cryptography. The publicity was a bit overdone, and probably too many hours were spent by security professionals explaining the circumstances and implications of the attack.


Friday, March 26, 2010

8 characters long and include at least one capital

A cute password anecdote passed on by Jeff Bardin:

During a company’s recent password audit, it was found that one employee was using the following password:

"MickeyMinniePlutoHueyLouieDeweyDonaldGoofySacramento"

When asked why they had such a long password, the person said they were told that it had to be at least 8 characters long and include at least one capital!

image

Bruce Schneier Post Template

A funny take on the format of some posts from Bruce Schneier

Catchy one-liner ("interesting," with link):

In this part of the blog post, Bruce quotes something from the article he links to in the catchy phrase. It might be the abstract to an academic article, or the key points in a subject he's trying to get across. To get the post looking right, you have to include at least a decent sized paragraph from the quoted source or otherwise it just looks like crap. So I will continue typing another sentence or two, until I have enough text to make this look like a legitimately quoted paragraph. See, now that wasn't so hard after all.

He might offer a short comment about the article here.

Finally, he will let you know that he wrote about the exact same subject link to previous Schneier article on the exact same topic and link to another previous Schneier article on the exact same topic.

I tried it myself about a month ago:

Interesting:

Last week I heard Sun Microsystems Cloud CTO Lew Tucker predict that IT expenses would increasingly track to the cost of electricity. “Lew’s Law” (as described to a room of thought leaders) is a brilliant theorem that weaves a microcosm of IT trends and recent reports into a single and powerful concept.

Lew’s Law is a powerful idea whose time has come, with profound and far reaching impacts, including the automation of the network.

Full story here from Gregory Ness with some additional remarks here.

It was very liberating.

Thursday, March 25, 2010

The Re-Keying Conundrum

Eric Rescorla recently posted his view on including a re-keying function in security protocols, and he concluded that on cryptographic grounds re-keying is unnecessary. Re-keying here means refreshing or renegotiating a session key (say an AES key) after a certain amount of traffic has been encrypted under the same key, or a given amount of time has elapsed (say several weeks or months). Rescorla opens his post with the statement that “in my opinion, rekeying is massively overrated, but apparently I've never bothered to comprehensively address the usual arguments”. Re-keying is probably something of a preoccupation for Rescorla at the moment as he is reviewing several IETF proposals that include this functionality, and he is also fresh from completing a proposal for circumventing the well-publicised TLS Renegotiation Attack.

Regularly changing critical parameters is part of the conventional wisdom of security, based on the general principle that reliance on a given security parameter must decrease over time with its use. Extended usage leaks more and more information to an attacker, and security parameters don’t improve with age. But need we be so cautious with the bulk encryption keys of a modern cipher like AES? Do long sessions or large encrypted transfers really give an attacker an advantage that outweighs the additional complexity of including a re-keying sub-protocol?

The Unicity Distance

Let’s begin with the observation that only a small amount of data needs to be encrypted under a given key before the key is uniquely determined. This threshold measure is called the unicity distance of a cipher, and for symmetric key ciphers it is proportional to the entropy of the key. So for example, DES and AES each only require a handful of encryptions to be produced for the corresponding key to be uniquely determined. Therefore, given just a few plaintext and ciphertext pairs an attacker has sufficient information in principle to mount a brute force attack. In the case of public key systems, the unicity distance is in fact zero since a certificate contains all the information required to determine the corresponding private key.
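
As a rough illustration (using the standard textbook formula and assuming English-like plaintext with a redundancy D of roughly 3.2 bits per character), the unicity distance is

U = H(K) / D

so for DES (56-bit keys) U ≈ 56/3.2 ≈ 18 characters, while even for AES-256 it is only about 256/3.2 ≈ 80 characters, in both cases just a few blocks of ciphertext.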

More General Attacks

One aspect of the re-keying issue is therefore to determine whether attacks more efficient than brute force become available as the amount of data encrypted under a given key is permitted to increase. A general upper bound is provided by the birthday paradox which, under the assumption that encryption acts as a random mapping, states that ciphertext blocks will start repeating for purely statistical reasons once the amount of encrypted data approaches the square root of the number of possible ciphertext blocks. At this point the attacker gains certain advantages (though not directly in key recovery), which is one reason a basic requirement for the new AES cipher was a block size of at least 128 bits. On the other hand, linear and differential cryptanalysis are attacks that directly use large amounts of encrypted data to amplify small statistical biases in the structure of a cipher to determine certain key bits. In general there is no simple relation between the block size of a cipher and the data requirements of these attacks, since the number of rounds is also important. For DES, linear cryptanalysis was the more efficient of the two attacks, and it still required about 2^{44} known plaintext-ciphertext pairs, amounting to roughly 280 terabytes of data. Even under more favourable circumstances, the data requirements for mounting this type of attack on AES-256 are considerably higher.
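
As a back-of-the-envelope illustration of the birthday bound (a rough Python sketch, nothing more), here is the amount of ciphertext after which block collisions become likely for 64-bit and 128-bit block ciphers:

def birthday_bound_bytes(block_bits):
    # collisions among ciphertext blocks are expected after roughly
    # 2^(block_bits/2) blocks of output
    blocks = 2 ** (block_bits // 2)
    return blocks * (block_bits // 8)

print(birthday_bound_bytes(64) / 2**30)    # ~32 GiB for a 64-bit block cipher such as DES
print(birthday_bound_bytes(128) / 2**60)   # ~256 EiB for a 128-bit block cipher such as AES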

Rescorla quotes two IETF submissions where re-keying is justified on the grounds of limiting the time to perform cryptanalysis or the amount of ciphertext that an attacker can collect. While such precautions sound prudent, they make little sense with respect to AES with 128- or 256-bit keys since the corresponding key and ciphertext spaces are so vast. In short, any attacks based on counting, enumerating, or storing encrypted data are not expected to be feasible. The recent so-called Luxembourg Attacks on AES, and their variants, are very data intensive but limiting traffic encrypted under a given key to (say) 1 petabyte would thwart these attacks.

But Expect Re-Keying

Even so, cryptographic arguments are most convincing to cryptographers, and cryptographic protocols are implemented within systems that are not engineered to the same level of security as, say, AES, nor can they be. The manager responsible for approving the transfer of sensitive financial data across the internet will probably feel better knowing all possible precautions are taken to prevent data loss, and that will include re-keying. It is simply viewed as a prudent operational security practice no matter how convincing the cryptographic arguments are to the contrary. Perhaps the most convincing argument for re-keying – if not the only argument – is to limit the consequences of a key compromise. The manager mentioned above will probably be reassured to know that an accidentally exposed key only compromises 4 hours' worth of transmitted data, and none of the data sent before or after that. Of course you need more than just re-keying to guarantee such a property, but no amount of strong cryptography will give it to you either. So we may agree with Eric that re-keying is overrated, but it is certainly rated.
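
For what it's worth, here is a minimal sketch (in Python, and not modelled on any particular IETF proposal) of the kind of key ratchet that gives the property just described: each epoch's key is derived one-way from the previous one and the old key is erased, so a key exposed today reveals nothing about earlier epochs. As noted above, bounding the damage to later traffic as well requires injecting fresh key material, not just ratcheting.

import hmac, hashlib

def next_key(current_key: bytes, epoch: int) -> bytes:
    # one-way derivation: knowing the new key does not reveal the old one
    return hmac.new(current_key, b"rekey" + epoch.to_bytes(8, "big"),
                    hashlib.sha256).digest()

key = hashlib.sha256(b"initial session secret").digest()   # placeholder initial key
for epoch in range(6):    # e.g. one epoch per 4 hours or per N gigabytes of traffic
    key = next_key(key, epoch)
    # ... encrypt this epoch's traffic under key, then securely erase the previous key ...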

Wednesday, March 24, 2010

The Last Days of A5/1

I have been following the long-predicted public demise of A5/1 for a few years now, first in The Long Tail of Vulnerability for A5/1, and more recently in Another crack at open Rainbow Tables for A5/1, where the task of producing public rainbow tables for A5/1 was taken up in a project led by Karsten Nohl last August. Nohl and his team are leveraging previous work done by The Hacker’s Choice (THC) and using the parallel computing architecture of CUDA graphics chips to rapidly compute the rainbow tables. They were asking for people to donate computing power and hoping for some results by last Christmas.

Nohl and colleague Chris Paget gave a project update at the 26th Chaos Communication Congress (CCC), held in Berlin last December, and indeed they completed the computations by year end. The project recovered from a major bug announced last October, which indicated a fundamental coding error in the implementation of A5/1’s three linear feedback shift register (LFSR) components. In any case, the error was corrected, and the required table computations were completed in 3 months using 40 CUDA nodes – considerably less than the initial estimate of 80 nodes, let alone the 100,000 years of effort that would be required on a single-threaded PC. You can read more about the parameters of the rainbow tables on this project page, with some additional background on time-memory trade-off (TMTO) techniques collected here. The tables can be downloaded from a collection of torrents.
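
As rough context for these numbers (a sketch of Hellman's original time-memory trade-off analysis, not the exact parameters of this project's tables): for a search space of N keys, a one-off precomputation of about N operations lets the attacker balance online time T against table memory M along the curve

T * M^2 ≈ N^2

so, roughly speaking, halving the table size quadruples the time for each online key recovery.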

On the project page the official status is given as “A full GSM interceptor to collect GSM data could hypothetically be built from open source components. We have not done so as it may be illegal in some countries”. So it seems that the project may have hit legal complications, in the same manner as previous efforts. The last part of the CCC update describes how to intercept GSM traffic from open source components to be used as input to the table look-up process, but a few important issues remain to be resolved before GSM traffic interception can be automated. And as reported in the Tech Herald, the GSM Alliance believes that the channel hopping features of GSM will make the remaining stage difficult to complete, if not a practical impossibility – and illegal in Britain and the US to boot. Nonetheless, updates indicate that the project is continuing, and A5/1 must surely be entering the last days of its more than 20 year operational life.

Sunday, March 21, 2010

In Search of Encrypted Search

Last December Moxie Marlinspike announced a $34 cloud service for auditing the strength of WPA passwords. The service tries to reconstruct the password from the user-supplied WPA session material and a hand-crafted dictionary of 135 million passwords. Marlinspike's cloud service searches through the optimised dictionary in just 20 minutes, a computation that would otherwise take 5 days or so on a standard desktop. Notice then that Moxie learns the password if it is found in his dictionary. This might be an example of where customers would prefer Moxie to perform an encrypted search on their behalf – the parameters are encrypted and Moxie just learns a simple yes or no as to whether the corresponding password was found.

This is an encrypted search problem that could be solved in principle by the “fully homomorphic” encryption system devised by IBM researcher Craig Gentry in June of last year. There was much fanfare around the breakthrough and googling “ibm homomorphic encryption” returns over 21,000 results today. There is a long and informative article in Forbes under the headline IBM's Blindfolded Calculator, and you can also see a lecture by Gentry on his result here, given at the Fields Institute in Toronto last December.

Homomorphisms in RSA

But what is homomorphic encryption? Well RSA gives us a nice example, and recall that the “textbook” RSA encryption of a message M is given as

E(M) = M^e mod N

It is not hard to verify that

E(M1) * E(M2) = E(M1 * M2)

which in words states that the product of two encryptions is equal to the encryption of their product. And this is the defining property of a homomorphism – given a function (here encryption) and two inputs (here messages) with a binary operation defined on those inputs (here multiplication), we can interchange the order of applying the function and the operation to get the same result. So we say that the RSA function is a homomorphism with respect to multiplication, or more simply, just a multiplicative homomorphism.

So for an unknown message M2, if we are given E(M2) we can multiply E(M2) by E(M1) to produce E(M1*M2) without needing to decrypt M2. In the early days of RSA this homomorphic property was deemed to be undesirable, due to posited scenarios like the following one (which in fact seems more likely today than it did 20 years ago). Let’s say Alice is going to transfer $M to the account of Bob and sends an encrypted message E(M) instructing her bank to perform the transfer. But Bob intercepts the message and multiplies it by E(2), the encryption of 2. The new message becomes E(2)*E(M) = E(2*M) and Bob doubles his money without having to decrypt E(M). The PKCS #1 format has removed the threat of manipulating messages in this way by adding padding and fixed fields to messages, which allows such (multiplicative) manipulation to be detected.
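
The property is easy to check for yourself. Here is a small Python sketch using toy textbook-RSA parameters (no padding, illustrative numbers only):

N, e, d = 3233, 17, 2753                     # toy RSA key (p=61, q=53), not a real one

def E(m): return pow(m, e, N)                # "textbook" RSA encryption
def D(c): return pow(c, d, N)                # decryption

m1, m2 = 65, 2
assert E(m1) * E(m2) % N == E(m1 * m2 % N)   # product of encryptions = encryption of product
assert D(E(2) * E(65) % N) == 130            # Bob doubles Alice's amount without decrypting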

While RSA is multiplicatively homomorphic it is not additively homomorphic, since in general

E(M1) + E(M2) != E(M1 + M2).

Getting back to the encrypted search problem, does RSA being a multiplicative homomorphism help us? Well not really. Say we are searching for a string S in some data D, where we are only given E(D). We could encrypt S to get E(S), and then compute

E(S) * E(D)  =  E(S*D)

but E(S*D) won’t tell us that S is somewhere in D because treating S and D as numbers then multiplying them is not related to searching. So we need to find an encryption scheme that has more homomorphic properties than just multiplication if we are to perform encrypted search. And this is what Mr. Gentry did.

Fully Homomorphic Encryption

This elusive property is what Gentry has created in his “fully homomorphic” encryption system, a system which can support the secure homomorphic evaluation of any function on a given input, such as search over data, and not just simple arithmetic functions like multiplication. The underlying computational problem that furnished this property came neither from RSA nor from discrete logarithm systems, but rather from a geometric construction referred to as an ideal lattice. Lattices are regular grid-like structures within vector spaces that furnish computationally difficult problems; they have been used in several other cryptosystems such as NTRU, and provide an alternative to the “standard” hard problems from number theory.

Using the ideal lattice Gentry was able to show how to compute a “circuit” for problems such as encrypted search that can be evaluated without decrypting the data. Both the encrypted term and data are used as inputs to the circuit, and the trick is to devise a method for securely and correctly evaluating the gates of the circuit. Your computer uses built-in gates and circuits to evaluate computations, such as performing the multiplications and divisions to evaluate an RSA encryption or decryption during an SSL session for example. And as such the computations are small and fast. However for encrypted search the corresponding circuit needs to be built and represented in software, which leads to large and inefficient computations. So this is why the result is a breakthrough but not a practical one at the moment. Gentry has estimated that building a circuit to perform an encrypted Google search with encrypted keywords would multiply the current computing time by around 1 trillion.

The Outlook for Encrypted Search

It remains to be seen whether Gentry’s theoretical breakthrough can be converted into a practical service, in the cloud or otherwise. Gentry has worked with some other researchers from IBM and MIT to remove the requirement of using lattices and to just treat the fully homomorphic encryption as operating over the integers, which provides a simpler conceptual framework. Making the central ideas more accessible will increase the chances of finding more practical solutions. There is a recent review of homomorphic encryption by Craig Stuntz, giving more details and promising code in an upcoming post. Stuntz also points to an article by Gentry in the respected Communications of the ACM which begins

Suppose that you want to delegate the ability to process your data, without giving away access to it. We show that this separation is possible: we describe a "fully homomorphic" encryption scheme that keeps data private, but that allows a worker that does not have the secret decryption key to compute any (still encrypted) result of the data, even when the function of the data is very complex. In short, a third party can perform complicated processing of data without being able to see it. Among other things, this helps make cloud computing compatible with privacy.

Reading further requires a subscription, but it is clear that the work of Gentry and his co-authors is being positioned as a core service for cloud platforms, and I suspect IBM’s platform in particular. However, just at the moment, you will have to trust Moxie with your WPA password.

Friday, March 19, 2010

Faster Rainbow Tables

It was announced last month by Objectif Sécurité that Ophcrack, the open source framework for breaking Windows password hashes, has passed 10 million downloads. Objectif Sécurité is a Swiss security consultancy founded by Philippe Oechslin, the inventor of rainbow tables. One of the products offered by Objectif Sécurité is a collection of larger and more specialized rainbow tables to extend the base table distributed with Ophcrack. The set of available table options is shown below



These professional tables sell for $999; see the site for the definition of the character sets covered. The basic tables included with the Ophcrack distribution are derived from a set of dictionary words. To celebrate the Ophcrack milestone, Oechslin has also given free on-line access to a large rainbow table covering the LAN Manager hashes of passwords built from 52 mixed-case letters, 10 numbers and 33 special characters, up to length 14. He estimates that the time to crack most such Windows passwords is less than 6 seconds. This fast password recovery is approximately 100 times faster than previous methods, and is supported by hosting the 90GB tables on an SSD device. Memory usage is intense for rainbow tables, as described here.

Oechslin has reported to Heise Security that for tougher, better selected passwords, the entire password space can be searched at a rate of 300 billion passwords per second. Please take a look at the password recovery speeds at LockDown.uk to understand what this innovation means. For example, passwords of length 8 consisting of only upper and lower case letters can be cracked in 2 days, and in still only 9 days if digits are also present in the password.

Update March 21st

Matt Weir has made some excellent points in his comment below, and clarified several of the statements about the 300 billion password per second search rate. I corrected the link to Oechslin’s remarks reported in Heise Security, and though not stated explicitly it seems that this tremendously fast password search rate applies only to the now-obsolete LAN Manager hashing scheme. Please read Matt’s comment as an improvement/correction for the final text above. Thank you very much Matt.

Thursday, March 18, 2010

The Fabled 25 Sigma Event

Last week I was reading a document published by my company called Dealing with the Unexpected, which gives some lessons learnt from the recent credit crisis. Early in the paper the authors speak about a 10 sigma event, or an event that we only expect to see once in 10,000 years. This piqued my interest because I remembered an infamous statement made by some financial leader during the onset of the financial crisis to the effect that we are now experiencing repeated 25 sigma events. Already a 10 sigma event is quite unlikely, but a 25 sigma event is just absurd: exponentially less likely again, but exactly how much less? A one in a million year event? One in a billion years?

First, what is sigma? In statistics and probability, the lower case Greek letter sigma is used to denote the standard deviation of a distribution, which, as the name implies, is the accepted unit for measuring how far an outcome can vary from its mean or average. A Wikipedia article gives the following table for the likelihood of sigma deviations under the standard normal distribution

image

So at 6 sigma deviations we are already talking about events that occur once every 1.5 million years. Note that the scale is not linear, and the difference between 2 and 3 sigma events is less than the difference between 3 and 4 sigma events.
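
If you want to reproduce these figures yourself, here is a quick Python sketch (purely illustrative, assuming a standard normal distribution and one observation per day, as the Wikipedia table does):

import math

def two_sided_tail(k):
    # probability of a move more than k standard deviations from the mean
    return math.erfc(k / math.sqrt(2))

for k in (2, 3, 6, 25):
    p = two_sided_tail(k)
    print("%d sigma: about one day in %.3g years" % (k, 1 / (p * 365.25)))

# The 6 sigma figure lands close to the value in the table above; halving the
# probability (the one-sided case used by the UCD researchers below) puts
# 25 sigma at roughly one day in 10^135 years.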

A 25 sigma event must be very unlikely indeed – so unlikely that the references I searched don’t even bother listing this value, including the news articles that carried the original quote. But after searching directly for sigma events I found a wonderful paper from researchers at the business school of the University College Dublin. The researchers are a little more pessimistic (by a factor of 2) in their calculations since they are only concerned with positive deviations away from the mean, as shown in the diagram below for the 2 sigma case.

image

The paper reminds us that the 25 sigma quote came from David Viniar, CFO of Goldman Sachs, who actually said in the Financial Times in August 2007 that

We were seeing things that were 25-standard deviation moves, several days in a row.

Not just one 25 sigma event, but several! So the likelihood of those higher deviations was calculated to be

image

or 1-in-10^{135} years for a 25 sigma event. For comparison, this is much less likely than guessing an AES-256 key in one attempt. The researchers offer the following comparison as to how unlikely such an event is

To give a more down to earth comparison, on February 29 2008, the UK National Lottery was offering a prize of £2.5m for a ticket costing £1. Assuming it to be a fair bet, the probability of winning the lottery on any given attempt is therefore 0.0000004. The probability of winning the lottery n times in a row is therefore 0.0000004^n, and the probability of a 25 sigma event is comparable to the probability of winning the lottery 21 or 22 times in a row.

Either Mr. Viniar was very confused or his models were very confused. Or potentially both. A final quote from the paper

However low the probabilities, and however frequently 25-sigma or similar events actually occur, it is always possible that Goldman’s and other institutions that experienced such losses were just unlucky – albeit to an extent that strains credibility.

But if these institutions are really that unlucky, then perhaps they shouldn’t be in the business of minding other people’s money. Of course, those who are more cynical than us might suggest an alternative explanation – namely, that Goldmans and their Ilk are simply not competent at their job. Heaven forbid!

Yes Heaven forbid!

The ISACA Risk IT Framework rising on Scribd

Under the Quick Links list at the top left of the No Tricks homepage, you can now access my document collection at Scribd, which contains about 100 interesting documents on risk, security and analytical methods.

The most popular document by far is the ISACA Risk IT Framework which, since I published it on Scribd almost a year ago, has received just over 4,000 visits and 1,000 downloads. It was recently selected by the Scribd administrators to be moved to their Rising List page. The document is not too long at 94 pages and develops a solid framework for building and deploying an enterprise IT risk program.

Tuesday, March 9, 2010

Recovering RSA Private Keys using Faulty Signatures

Researchers at the University of Michigan recently devised a method for recovering an RSA private signing key by sifting through a collection of faulty signatures produced with that key. A faulty signature is one where the attacker has caused bits in the signature computation to be flipped from one to zero, or zero to one. Fault patterns that consist of just a single flipped bit permit an attacker to guess and verify a small piece of the private key (called a window), and to eventually recover the entire private key given a sufficient number of such faulty signatures. The researchers ran proof-of-concept experiments to demonstrate that a 1024-bit RSA private key can be recovered from a sample of just under 9,000 faulty signatures created by manipulating the voltage level to the processor computing the signatures.

There are two main ingredients to the attack: a method to recover information about the private key given some faulty signatures, and then a method to actually generate or induce the faulty signatures. We deal with the second point first.

Generating Faults Through Power Fluctuations

The researchers chose to exploit the sensitivity of modern circuitry to changes in environmental conditions (such as temperature or power fluctuations) that inhibit signal propagation and lead to faults in computations. They experimented with varying the voltage on a SPARC-based Leon3 server running Debian, using OpenSSL for signing operations. Using runs of 10,000 multiplications they found that 1.25V was the optimal value for inducing single-fault errors, and that reducing the voltage much further resulted in an exponential increase in faults (totally spurious results).

image

At this optimal voltage level the researchers found that from a sample of 10,000 signatures about 90% would contain at least one fault, and 12% contained exactly one fault. Note that the attacker can distinguish between faulty and non-faulty (correct) signatures by verifying with the public exponent, but cannot distinguish a priori between faulty signatures that contain one fault as opposed to those that contain multiple faults. As explained below, the most computationally intensive part of recovering the private key is performing repeated searches over the pool of faulty signatures looking for a particular single fault pattern.

Guessing and Verifying Exponentiation Windows

The signature of a message M is computed as the exponentiation M^D mod N, where D is the private RSA exponent and N the corresponding modulus. Exponentiation is computationally intensive, and is achieved through repeated (modular) multiplication. The well-known binary method computes M^D mod N by processing D bit-by-bit from left to right (most significant to least significant bit) in a series of squarings and multiplications. The binary method can be improved upon by processing the exponent in groups of bits, say 4 bits at a time. This approach is called the m-ary method, and the groups of exponent bits are called windows (the binary method is simply the case of 1-bit windows).
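
To make the window idea concrete, here is a sketch of left-to-right fixed-window exponentiation in Python with 4-bit windows. This is illustrative code, not the OpenSSL routine that was attacked:

def window_exp(M, D, N, w=4):
    bits = bin(D)[2:]
    bits = bits.zfill(-(-len(bits) // w) * w)        # pad to a multiple of w bits
    table = [pow(M, j, N) for j in range(2 ** w)]    # precompute M^j mod N for each window value
    S = 1
    for i in range(0, len(bits), w):
        for _ in range(w):                           # one squaring per bit of the window
            S = (S * S) % N
        win = int(bits[i:i + w], 2)
        if win != 0:                                 # a single multiplication if the window is non-zero
            S = (S * table[win]) % N
    return S

assert window_exp(65, 2753, 3233) == pow(65, 2753, 3233)   # toy check against Python's pow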

At each step in the signature computation, one squaring is performed for each bit of the window, followed by a single multiplication if the window is non-zero. The attack devised by the researchers works by recovering the bits in the most significant window, then the next most significant window, and so on until the entire private key is recovered. If the current window targeted for recovery is window W, then the attacker requires a faulty signature on M such that a single fault was induced during the processing of the exponent bits represented by W.

The researchers derive an equation that involves the message to be signed M, the faulty signature S, the value of the window W, the position of the fault, and the sign of the fault (add or subtract). By equation here we mean an expression that has a left-hand side and a right-hand side which can be compared for equality. The attacker can then plug all possible values of W, the fault position and the fault sign into this equation and collect solutions. There will either be no solution, one (unique) solution, or multiple solutions. In the first and last cases no information about D can be extracted, since either the fault did not occur during the processing of W or there are multiple faults in S, and another pair (M, S) must be examined. If the solution is unique then the bits of W are determined and the attacker can move on to breaking the next window using the same guess-and-verify approach.

The search for a given pair (M, S) that has a single fault in a position covered by a specified window W is computationally intensive. If S is a faulty signature then the attacker cannot easily determine the number or position of the faults in the signature. The researchers generated 10,000 signatures, of which 8,800 were faulty, and tried to recover the 1024-bit exponent. Using an 81-processor cluster this took 104 hours, with the guess-and-verify step for each window taking 2.5 seconds. The private key was recovered after 650 single-fault signatures had been processed.

image

Conclusion

The Register has headlined the announcement as breaking OpenSSL, but this is unfair. OpenSSL was used in the experiments, but other libraries are likely to be just as vulnerable to voltage manipulation that induces hardware faults. As to practicality, the attacker is required to have physical access to the device housing the private key, and to be able to request or observe 10,000 or so signatures computed under reduced voltage. These seem to be quite generous circumstances for the attacker, and given such proximity the attacker may prefer to try for a Cold Boot Attack instead. In any case, the attack shows the importance of verifying the correctness of signing operations – all the faulty signatures could be detected by merely checking that M = S^E mod N before returning any results. Apparently this is what the OpenSSL developers are doing at the moment.
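
The countermeasure is cheap to sketch. A minimal Python illustration (toy textbook-RSA numbers, no padding) of checking the signature with the public exponent before releasing it:

N, e, d = 3233, 17, 2753             # illustrative toy key only

def sign_checked(M):
    S = pow(M, d, N)                 # possibly faulty if the hardware has been glitched
    if pow(S, e, N) != M:            # re-verify with the public exponent
        raise RuntimeError("fault detected, signature withheld")
    return S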

Monday, March 8, 2010

More Microsoft SDL Giveaways

Recently Microsoft published a simplified version of their SDL methodology, reducing the detail in the hope of making implementations a bit easier. Microsoft has also made available its four core SDL Training classes (introductions to SDL & Threat Modeling, Basics of Secure Design, and Privacy for SDL) as well as the supporting tools. Finally, Adam Shostack has also made available Elevation of Privilege, the Threat Modeling Game, which he thinks is the easiest way to get started threat modeling – just try it!

Sunday, March 7, 2010

Passwords for USB Keypads

Bruce Schneier recently posted about a new USB stick that comes with its own on-board numeric keypad, permitting a password consisting of digits to be entered directly into the USB device to authorize unlocking. Such a stick and keypad would circumvent the recent USB password vulnerability that was derived from a poor implementation of password verification on the desktop.

image

The stick in question from Corsair (shown above) also uses AES-256 encryption to protect the data on the stick. The AES-256 key for the stick is then likely to be derived from the user-supplied password (say using PKCS #5 or RFC 2898), or used to protect a file which contains a full-length 256-bit key. In either case the 256-bit key will be derived from, or protected by, a password which has a much lower entropy.
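
As a hedged sketch of what such password-based derivation looks like (PKCS #5 / RFC 2898, i.e. PBKDF2), with a made-up PIN, salt and iteration count rather than anything the Corsair device actually uses:

import hashlib, os

pin = b"73529418"                                   # an 8-digit PIN typed on the keypad
salt = os.urandom(16)                               # illustrative salt
key = hashlib.pbkdf2_hmac("sha256", pin, salt, 100000, dklen=32)
assert len(key) * 8 == 256   # a 256-bit key, but with only the PIN's entropy behind it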

Bruce points out that a 77-digit password would be needed to produce the same entropy as a 256-bit key (since the logarithm to the base 10 of 2^{256} is about 77). I made the same point in Are AES 256-bit keys too large? where I calculated that a password based on the 94 printable ASCII characters would need to be 40 characters in length to achieve the same entropy as a 256-bit key (since the logarithm to the base 94 of 2^{256} is about 40). Deriving or bootstrapping AES keys from passwords is really an exercise in self-deception, especially when considering 256-bit keys. The discrepancy between the low entropy of passwords and the astronomical keyspace of AES-256 simply cannot be reconciled.
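
The arithmetic behind both figures is just a change of logarithm base:

import math

print(256 / math.log2(10))   # about 77 digits to match 256 bits
print(256 / math.log2(94))   # about 40 printable ASCII characters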

Perhaps the situation would improve if a biometric such as a fingerprint was used to bootstrap a 256-bit key. I did some research about a year ago and posted what I found in On the Entropy of Fingerprints. Some work has been done by IBM researchers who estimate the entropy of fingerprints to be at most 85 bits, or approximately the same as a length 13 password based on the 94 printable ASCII characters. An improvement, but still a long way from 256 bits of entropy.

Monday, March 1, 2010

RSA-512 factoring service: two weeks effort for $5,000

Ron Kane is offering a service to factor 512-bit RSA keys for a cost of about $5,000, taking two weeks or so. He describes the service as follows:

Estimation on calculating the private key is 2 weeks (we have 2 separate clusters running, sometimes its rented to another company) .

We will be calculating on our private cluster / super computer, no groups / companies / other individuals involved, your private key will not be exposed to the internet anywhere!


Your private key will be sold once, only to you. In case someone else asks us to factor the same numbers, we will decline the request.


How we work:

  • You make a down payment
  • We will start calculating the key
  • When the private key is ready, we will sign something with it and send it to you to verify it
  • You pay us the remaining sum we agreed
  • We give you plain numbers, or we can embed the private key in a smartcard (multos or JCOP)

You can contact him here, but if you are a bit more patient you can do it yourself. Some recent sampling results showed that a few percent of web servers are still running 512-bit keys.