
Saturday, March 27, 2010

Some Clarification on the RSA Power Attack

I recently posted on some new work by researchers at the University of Michigan which showed how an RSA private key can be recovered by sifting through a collection of faulty signatures. The faults are created by lowering the voltage supplied to the chip computing the signature, causing bits to be flipped in the signature multiplication chain.

I remarked that the coverage from The Register was a bit unfair in its headline of OpenSSL being broken, since OpenSSL was just the cryptographic library used in the proof-of-concept experiment to test the private key recovery procedure. But as pointed out by Bradley Kuhn, the researchers themselves seemed to stress the weakness of OpenSSL against the attack by mentioning it in both the abstract and the conclusion of their paper.

But there have been some other posts bordering on the hysterical. Techie Buzz, in 1024 Bit Cracked, New Milestone, leads the chorus proclaiming RSA’s demise:

The RSA encryption was believed to be quite safe and this level of a crack was not achieved, until now. The methods used here are pretty low level and have given results in 100 hours. The crack which was assumed to take a lifetime with brute force, has taken a mere four days. This breaks the very backbone of RSA which believes that as long as the private key is safe, it is impossible to break in, unless guessed.

The post celebrates the fault-based factoring as an advance on the 768-bit factoring record achieved late last year. However the factoring of the RSA 768-bit modulus was unaided by any tricks, and the researchers used modern factoring methods in a distributed computation amounting to over 10^{20} operations. The results are simply not comparable since the fault-based attack is so advantageous to the attacker.

A more sober review is presented by Phil Brass, of Accuvant Insight, in his post Recent Encryption Research Demystified, where he describes the publicity of the attack as “headline hyperbole”. You should not be too worried about your online banking service since

to carry out this attack on an online banking server, the attacker would need physical access to the online banking server’s power supply, which means they would need to be inside the bank’s data centre.  Given the “wealth” of other targets available to an attacker standing inside of a bank’s data centre, theft of the online banking web server private SSL key by a difficult and time-consuming voltage regulation attack seems rather unlikely.

And as to the key recovery experiment itself

The really strange thing about this paper is that while the researchers claim to have implemented the attack on a full-size SPARC/Linux/OpenSSL workstation, the actual hardware was a Xilinx Virtex2Pro FPGA, which was emulating a SPARC CPU, and which the researchers claim is representative of an off-the-shelf commercial embedded device. It seems as if they are trying to have it both ways – i.e. it is an attack against a full-size workstation, and by the way it also is an attack against something you might see in a typical embedded system.

He provides a good summary of the attack and suggests that a better headline would have been “Obscure bug in OpenSSL library poses little risk to consumers.” A similar reality check for the hoopla is given by Nate Lawson of Root Labs Rdist, who begins by recalling that the brittleness of the RSA signature operation was identified back in 1997 (a result cited by the Michigan researchers), along with the advice to verify all signatures very carefully before returning them. Lawson provides some more detail on the error-checking steps OpenSSL takes for signature computations, and he concludes that a much better job could have been done. He is also a bit sceptical of the experimental environment, but finally concludes that “this was a nice attack but nothing earth-shattering”.
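Lawson’s point about the 1997-era results can be made concrete. The classic fault attack from that period (a simpler relative of the Michigan attack) targets RSA signatures computed with the Chinese Remainder Theorem: if one half of the CRT computation is faulted while the other is correct, a single gcd factors the modulus. The sketch below uses deliberately tiny, illustrative parameters of my own choosing; real keys are 2048+ bits and a real signer would hash and pad the message first.

```python
# Toy illustration of the classic RSA-CRT fault attack: one faulty
# half-signature leaks a prime factor of N via a single gcd. This is
# exactly why a signer must verify a signature before releasing it.
from math import gcd

# Hypothetical toy RSA key (illustrative only).
p, q = 1009, 1013
N = p * q
e = 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                # private exponent

m = 42                             # "message" (padded hash in practice)

# Correct CRT signature: compute m^d mod p and mod q, then recombine.
dp, dq = d % (p - 1), d % (q - 1)
s_p = pow(m, dp, p)
s_q = pow(m, dq, q)
q_inv = pow(q, -1, p)
s = (s_q + q * (((s_p - s_q) * q_inv) % p)) % N
assert pow(s, e, N) == m           # the correct signature verifies

# Inject a fault into the mod-p half only; the mod-q half stays correct.
s_p_faulty = (s_p + 1) % p
s_faulty = (s_q + q * (((s_p_faulty - s_q) * q_inv) % p)) % N

# The faulty signature is wrong mod p but still right mod q, so
# s_faulty^e - m is divisible by q and not by p: gcd reveals q.
recovered_q = gcd((pow(s_faulty, e, N) - m) % N, N)
assert recovered_q == q            # N is now fully factored
```

The defence is exactly the one recommended in 1997 and discussed by Lawson: recompute `pow(s, e, N)` and compare against the message before the signature ever leaves the device, so a faulty result is suppressed rather than handed to an attacker.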

Finally, there is another de-hyped review by Luther Martin at Voltage, where he refers to the PBA Attack after the initials of the three Michigan authors. He states that

Devices that are designed to be secure, like HSMs and smart cards, filter the power so that you can't do attacks like the PBA attack, and with devices that aren't designed to be secure, there's always an easier way to recover a key from them than doing something like the PBA attack. This means that we won't be seeing hackers using the PBA attack any time soon, but you'd never think this from seeing the way it was reported by the media.

Fortunately for this incident Voltage products use DSA rather than RSA for signatures, so advanced debunking for customers will not be required.

All in all, almost everyone agrees (and is happy to say) that the work is clever and worth undertaking, adding to our operational knowledge of cryptography. The publicity was a bit overdone, though, and probably too many hours were spent by security professionals explaining the circumstances and implications of the attack.


Sunday, November 22, 2009

FUDgeddaboudit

I first came across the term fuhgeddaboudit in writing while reading The Black Swan, where Taleb was answering the question of whether journalists can be relied on to unearth all of the silent evidence on a given topic - fuhgeddaboudit! The term is short for “forget about it”, popularized in US gangster media such as The Sopranos, which Google defines as

  • An issue is not worth the time, energy, mental effort, or emotional resources
  • Whatever the current topic of discussion may be, regardless of who has stated it (even the speaker) is thereby declared null and void and without merit

Both of these sentiments were called forth when I read the recent post from Anton Chuvakin on FUD-based security. Anton was reminding us that FUD is alive and well in IT Security, and in fact it has nowhere to go but up in terms of mindshare, since more sophisticated methods, such as ROSI (return on security investment), have nowhere to go but down.

Even though FUD is a blunt instrument, Anton argues that it is very effective when it comes to getting things done, allowing real issues to be brought to the table and limiting reliance on decision-makers to do the right thing (which they often don’t). He even jokes that FUD is a more pragmatic triad for security than the venerated CIA.

The whole post was ethically stomped on by RThomas (Russell Thomas) of the New School of Information Security blog (NSOIS), who stated in a comment that

FUD is the distorted and irrational exaggeration of fears and uncertainties for the sole purpose of manipulating the decision-maker.

The term "FUD" originated in the 1970s regarding IBM's selling tactics against competitors. The FUD technique was used to destabilize the decision-maker's thinking process regarding potentially viable alternatives. FUD issues raised could not really be answered by the decision-maker or the competitor, and so nagged at the back of the mind. They had the effect of causing the decision-maker to retreat to the safe decision, which was IBM. "Nobody ever got fired for buying IBM" was one famous phrase embodying the effects of FUD …

There are substantial reasons for framing risks in a way that goes beyond simple statement of facts and statistics, namely to deal with the psychology of risk. The ethical security or risk professional will take pains to present scenarios that are feared in a way that the decision-maker can understand and, most important, to see those scenarios in perspective relative to other possibilities and probabilities.

and Russ further drove home his point in an additional post over at the NSOIS, concluding that

Security is always a secondary objective to some other (upside) enterprise objectives. Security investments are always subject to evaluation relative to other investment alternatives, both inside and outside of IT. These are the realities of enterprise performance and leadership. Some security people may stomp their feet in protest, or resort to unethical tactics like FUD, but don’t delude yourself that you are making the world (or the enterprise) a better place.

This is the same sentiment that I addressed in my post The Relegation of Security to NFR Status. NFR stands for non-functional requirement, and it covers things like ensuring there is sufficient network capacity, that the servers are adequately sized for peak loads, that help desk support is created, that back-up and recovery is deployed, that the web interface is friendly, and so on. FUD is not really IT Security’s opportunity to get some skin back in the functional (i.e. business) requirements game, as we will still look like uninvited gate crashers at best, and bullies at worst.

At the recent CSI meeting in Washington, as reported by Dark Reading (with my take in Security Muggles), several CSOs opined that we need better communication with business people on their terms, so that security people can earn a seat at the decision-making table. They want to do more RSVP-ing than crashing.

Wade Baker over on the Verizon blog recently asked how people make security decisions, beginning from the frank assumption that

In most cases, it is impossible to precisely formulate all factors in the decision, so we abandon the “scientific” route and revert to some other method of making it (see below). This is where our predominantly engineering mindset hurts us. Instead, we should realize that organizations have always made decisions using varying amounts of information of varying quality. Our dilemma is not new. Valid and vetted approaches exist for structured decision problems with an abundance of precise data and also for unstructured problems with sparse amounts of “fuzzy” data. These approaches are out there and are eagerly waiting for us to apply them to problems in our domain.

FUD can be seen as a response to this reality, but not a very structured one, and one that ignores the methods and techniques developed in other fields for coping with decisions under uncertainty. Wade also ran a small survey on the approaches that security people use for decision-making, and he received just over 100 responses. You can read his summary of the responses here, and his summary graph is below.

[image: summary graph of survey responses]

Even given the small sample size, it seems that some people are opting away from FUD - far away, in fact. I don’t think IT Security as a profession, or any profession (except maybe politics), has a long-run future based on FUD, since you don’t need much technical skill or experience to pursue this approach, and there are probably plenty of people for hire to carry out such campaigns who are not particularly well-qualified in security.

So, ethical considerations aside, I have never considered FUD a long-term strategy. Its persistence, I imagine, can be attributed largely to regular changes in the ranks of security decision-makers, and to the mind-numbing churn in technology and the IT sector as a whole. The same “new fears” are being presented to new people, as FUD has gone into heavy syndication in the IT Security industry and it’s always showing in re-runs somewhere. Put your time and energy somewhere else.

In short, fuhgeddaboudit!