Sunday, September 13, 2009

Another crack at open Rainbow Tables for A5/1

About a year ago I posted The Long Tail of Vulnerability for A5/1, on the stream cipher used to encrypt GSM traffic. Earlier that year David Hulton, director of applications at the high-performance computing company Pico, and Steve Muller, a researcher at mobile security firm CellCrypt, had announced new results claiming that A5/1 keys can be recovered in 30 minutes for $1,000. I concluded that

A5/1 has operated unchanged for the last 21 years but it has now reached its cryptographic end-of-life, engulfed by the march of Moore's Law. However, the operational end-of-life of A5/1 may still be decades away as there are approximately 2 billion GSM subscribers, commanding about 80% of the global mobile market. This would be a tough product recall indeed. A5/1 is well-positioned to become the NT of the mobile crypto world, and I see the makings of a long tail of GSM vulnerability.

The Hulton & Muller attack was based on pre-computed rainbow tables that enable the direct "lookup" of A5/1 keys when indexed by a small amount of observed traffic (3 to 4 GSM frames). However, these tables never materialized, apparently due to potential legal issues.

But a new project announced in August could deliver these tables within 6 months - or by Christmas if we are lucky. Karsten Nohl, best known for his work on reverse engineering the Mifare chip, announced at the Hacking At Random conference in August a new project to produce A5/1 rainbow tables that would be open and available to the public. Nohl described the project details in his conference paper with the somewhat sinister title of Subverting the security base of GSM. In subsequent interviews Nohl has gone to some lengths to emphasize that his intention is not to destabilize GSM, but rather to highlight the weaknesses of GSM encryption through public demonstration, and hopefully to initiate a migration towards a more secure solution for mobile telephony.

What are these Rainbow Tables?

Several recent articles paint quite a bleak picture for GSM, such as this one from the Tech Herald, which says of Nohl's project that it, "if successful, will allow anyone with some RF equipment, patience, and a $500 USD laptop, the ability to decode GSM-based conversations and data transmissions". We need a little technical background to decode this statement.

Imagine that an attacker is given (or intercepts) a plaintext-ciphertext pair P, C encrypted under an unknown key. In a brute force attack the attacker arranges all possible keys in some order, and then starts searching the keys until the right one is found – that is, the key K that encrypts P to C. In this case the attacker is facing a needle-in-a-haystack search problem since there is only one point in the key space (the target key K) that provides information on when the search can stop.
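The brute-force setting can be sketched in a few lines of Python. The "cipher" below is a stand-in (a keyed hash over a 20-bit key space), purely for illustration - it is not A5/1:

```python
import hashlib

def toy_encrypt(key, plaintext):
    # Stand-in "cipher": hash of key || plaintext, truncated.
    # Purely illustrative -- not A5/1 and not secure.
    data = key.to_bytes(4, "big") + plaintext
    return hashlib.sha256(data).digest()[:4]

def brute_force(p, c, keyspace_bits=20):
    # Try every key in order until one maps P to C -- the lone
    # target key is the only "stopping point" in the key space.
    for k in range(2 ** keyspace_bits):
        if toy_encrypt(k, p) == c:
            return k
    raise ValueError("key not found")

P = b"known plaintext"
K = 123456                       # the unknown key, within 20 bits
C = toy_encrypt(K, P)            # the attacker sees only P and C
found = brute_force(P, C)
assert toy_encrypt(found, P) == C
```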

The idea of rainbow tables is to pre-compute a compressed form of the key space that contains a collection of “checkpoint” keys. When searching the compressed key space, hitting a checkpoint identifies a relatively small collection of keys (say several billion) that is known to contain the target key. So key search with rainbow tables has two steps; first, start the search to find a checkpoint, and then second, search the keys defined by the checkpoint to find the target key.

The advantage over brute force search is that many checkpoints can be created, and the search process will therefore hit one of these checkpoints much faster than waiting to stumble across the lone target key. Of course the disadvantage is that the checkpoints must be pre-computed and stored. So there is a time-memory trade-off (TMTO) to be considered - the search time is reduced with more checkpoints but at a cost of more pre-computation and storage.
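The pre-computation idea can be sketched with Hellman-style chains (true rainbow tables refine this by varying the reduction function at each step of the chain). Only the chain endpoints - the "checkpoints" - are stored; the parameters here are toy values for illustration:

```python
import hashlib

BITS = 20         # toy key space of 2^20 keys
CHAIN_LEN = 64    # keys compressed into each stored chain

def f(k):
    # One-way step: hash the key, then reduce back into the key space.
    h = hashlib.sha256(k.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:4], "big") % (2 ** BITS)

def precompute(n_chains):
    # Store only (endpoint -> start); the endpoints act as checkpoints.
    table = {}
    for start in range(n_chains):
        k = start
        for _ in range(CHAIN_LEN):
            k = f(k)
        table[k] = start
    return table

def lookup(table, target):
    # Step 1: walk forward from the target until a checkpoint is hit.
    # Step 2: replay that chain from its start to find the key that
    # immediately precedes the target.
    k = target
    for _ in range(CHAIN_LEN):
        k = f(k)
        if k in table:
            c = table[k]
            for _ in range(CHAIN_LEN):
                if f(c) == target:
                    return c
                c = f(c)
    return None
```

Chain merges and false alarms (hitting a checkpoint whose chain does not actually contain the target) are exactly what makes choosing good parameters tricky in practice.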

On the Shoulders of Giants

We have glossed over many important ideas in an effort to give the gist of how rainbow tables improve over brute force search. In practice, generating a set of tables that allows key recovery with high probability is quite tricky. Conveniently for Nohl and his team, the appropriate parameters for generating an effective set of tables were worked out previously by The Hacker's Choice (THC), including the special optimisations that give rise to the name rainbow tables.

The THC project failed to produce a set of public A5/1 tables (apparently for legal reasons), and Nohl's project will use the THC base to re-compute the tables in an open and distributed fashion. One innovation is to compute the tables on CUDA, the parallel computing architecture for graphics processors developed by NVIDIA. Nohl estimates that computing the tables on a single-threaded Intel-like processor would require 100,000 years, but only a matter of months on 80 CUDA nodes. So Nohl's project (homepage here) is to find volunteers who will download optimized A5/1 code for the construction of the tables. The total computational effort to produce the tables is estimated at 2^57 operations.
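As a back-of-envelope check on these figures (derived only from the numbers quoted above, not from the project itself):

```python
TOTAL_STEPS = 2 ** 57              # quoted total effort
SECS_PER_YEAR = 3600 * 24 * 365

# Single-CPU rate implied by the 100,000-year estimate:
cpu_rate = TOTAL_STEPS / (100_000 * SECS_PER_YEAR)

# Per-node rate implied by 80 CUDA nodes finishing in ~3 months:
gpu_rate = TOTAL_STEPS / (80 * 0.25 * SECS_PER_YEAR)

print(f"implied CPU rate: ~{cpu_rate:,.0f} A5/1 computations/sec")
print(f"implied per-node CUDA rate: ~{gpu_rate:,.0f} computations/sec")
print(f"implied per-node speedup: ~{gpu_rate / cpu_rate:,.0f}x")
```

The implied per-node speedup of several thousand over a single CPU core is plausible for the simple bit-level operations of A5/1 run massively in parallel.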

Tables in the Cloud

Nohl emphasizes that the tables will be computed through a volunteer distributed computation and then made available to all and sundry, through BitTorrent for example. The point is not to centrally reconstitute the tables - in fact he wants to avoid that. He does not want a central point or legal entity where the project can potentially be shut down, and this also helps to provide anonymity for the volunteers if they seek it. In any case, this sounds a little like the defence that The Pirate Bay unsuccessfully used in their recent prosecution - we don't distribute illegal music, only the index for where you can find it.

Impact and Responses

Public rainbow tables for A5/1 will provide a definite and public demonstration of the shortcomings of GSM encryption - currently relied on by 3 billion subscribers in over 200 countries, representing about 80% of the mobile telephony market. While it has been known for some time that A5/1 is weak, there is nothing like an actual demonstration to drive the point home, as was the case in the breaking of 56-bit DES or more recently with the Cold Boot attack.

Potentially all GSM communication, including voice and SMS, as well as newer security services that rely on GSM as an out-of-band mechanism for authentication and authorization, will be under threat. Nohl's aim, and the main reason for undertaking the project, is to raise awareness of the need to upgrade to more secure encryption. The stronger A5/3 algorithm is being phased in during upgrades to 3G networks, providing 128-bit encryption. However A5/3 may only be used to encrypt data traffic, and voice traffic will remain with A5/1 until a full upgrade is made.

The GSM Association (GSMA) has politely scoffed at the project, claiming that the practical complexities of carrying out actual attacks, in particular intercepting the required GSM traffic, are being underestimated. However I think the project team will rise to the challenge.

Nohl hopes to have proof-of-concept tables by the end of 2009, and you can see some graphs on the main project page showing current computational results. In risk terms the project promises to show that what is perceived as a high-impact/low-probability event is actually medium- to high-probability. We should have more news by Christmas.

You can find additional information on this topic from the FreeMind map I created for this post, rendered into Flash here.

Wednesday, September 2, 2009

New US Digital Border Search directives

The US Department of Homeland Security (DHS) announced last week new directives to enhance and clarify oversight for searches of computers and other electronic media at U.S. ports of entry, justified as "a critical step designed to bolster the Department’s efforts to combat transnational crime and terrorism while protecting privacy and civil liberties". Border searches as an issue flared up last year when a group including the Electronic Frontier Foundation (EFF), the American Civil Liberties Union and the Business Travel Coalition, published an open letter calling on the House Committee on Homeland Security to limit searches to be appropriate and non-invasive. The purpose of the latest policies is
To provide guidance and standard operating procedures for searching, reviewing, retaining, and sharing information contained in computers, disks, drives, tapes, mobile phones and other communication devices, cameras, music and other media players, and any other electronic or digital devices, encountered by U.S. Customs and Border Protection (CBP) at the border
Notice that the scope of devices that can be searched is quite broad. And well it might be, since the policy is designed to detect electronic evidence relating to terrorism, human trafficking, bulk cash smuggling, contraband, and child pornography, as well as other run-of-the-mill crimes. Where practical, these searches will be conducted in the presence of a supervisor and the owner of the electronic device.

Devices should only be retained if there is probable cause to believe that the device contains evidence of a crime that border authorities are authorized to prosecute. Devices may be "detained" for up to 5 days without justification, and this period can be extended in the case of extenuating circumstances with supervisor approval. The policy also makes clear that your device, or a copy of its contents, can be sent to another location for additional assistance, which includes requests for translation, decryption, or general subject matter consulting. Such requests must be approved by a supervisor, transmitted securely, and processed within 15 days.

The DHS reports that between Oct. 1, 2008, and Aug. 11, 2009, border authorities encountered more than 221 million travellers at U.S. ports of entry. Amongst these travellers, approximately 1,000 laptop searches were performed, and fewer than 50 of these were in-depth searches. So using these figures, your chances of being searched are well under one hundredth of one percent of traveller volume. However, I think the chances of being searched would be higher if the figures were restricted to travellers who were actually carrying a laptop or some other visible electronic device.
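The arithmetic, using the DHS figures quoted above:

```python
travellers = 221_000_000    # encounters, Oct 2008 - Aug 2009, per DHS
searches = 1_000            # approximate number of laptop searches

rate = searches / travellers
print(f"about 1 search per {travellers // searches:,} travellers")
print(f"that is {rate:.6%} of traveller volume")
```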

Tuesday, August 25, 2009

Solo Desktop Factorization of an RSA-512 Key

It was recently announced that Benjamin Moody has factored a 512-bit RSA key in 73 days using only public software and his desktop computer. The first RSA-512 factorization in 1999 required the equivalent of 8,400 MIPS years over an elapsed time of about 7 months. The announcement was made all the more intriguing in that it came from a mailing list associated with a user forum for Texas Instruments calculators.

The public key in question is used to verify signed OS binaries for the TI 83 Plus and the factorization means that

… any operating system can be cryptographically signed in a manner identical to that of the original TI-OS. Third party operating systems can thus be loaded on any 83+ calculators without the use of any extra software (that was mentioned in recent news). Complete programming freedom has finally been achieved on the TI-83 Plus!

The original post from Moody was not very informative, but the subsequent Q & A thread drew out more details. Moody used publicly available factoring software on a dual-core Athlon64 at 1900 MHz. Just under 5 gigabytes of disk was required, and about 2.5 gigabytes of RAM for the sieving process. The final computation involved finding the null space of a 5.4 million × 5.4 million matrix.

Most security people would rightly point out that RSA-512 has long been known to provide inadequate security, but this does not mean that 512-bit public keys are not still in use today by companies other than Texas Instruments. By the way, someone is offering an RSA-512 factoring service with a "price indication" of $5,000.
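To see why the factorization hands over signing capability: once the modulus N = pq is known, anyone can derive the private exponent and sign binaries themselves. A toy-scale sketch (small primes for illustration, not the actual TI key):

```python
# Toy-scale primes for illustration -- not the actual TI-83 Plus key.
p, q = 1000003, 1000033
e = 17                          # public verification exponent
n = p * q                       # the "factored" public modulus
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)             # private signing exponent (Python 3.8+)

m = 42                          # stand-in for a hash of an OS binary
sig = pow(m, d, n)              # sign with the recovered private key
assert pow(sig, e, n) == m      # the calculator's check now passes
```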

CORRECTION, Feb 19, 2010: The original post said the factoring was done in 73 hours when the correct value is 73 days, as pointed out in the comment below by Samuel. Thank you for the correction.

Self-Destructing Digital Data with Vanish

University of Washington researchers recently announced a system for permanently deleting data from the internet. The solution, called Vanish, can be used for example to delete all copies of an email cached at intermediate relay sites and ISPs during transmission from sender to receiver. Advertising that Vanish provides self-destructing data conjures up the digital equivalent of a tape that bursts into flames after its message has been played. But data protected by Vanish neither self-destructs nor does the system actively seek out data at third party sites for deletion.

Vanish works by encrypting data with a secret key (say an AES-256 key), splitting the key into secret shares, and then storing the shares at a randomly selected set of nodes in the distributed hash table (DHT) of a public P2P network. In this way Vanish creates an encrypted object for inclusion in email, for example, that the sender can transmit to a receiver or group of receivers.
image
When a receiver opens the encrypted object, Vanish attempts to access a sufficient number of DHT nodes with shares so that the key can be recovered and the data decrypted.

The self-destructing aspect is that the key shares will be deleted as part of the natural node churn in the DHT, quite independent of the actions of both the sender and the receiver. The lifetime of a share is about 8 to 9 hours in the DHT, after which there is a low probability that there will be a sufficient number of shares to reach the recovery threshold.

So the encrypted data does not self-destruct - but rather key recovery information is placed in volatile public storage that is very likely to delete that information after a short delay as part of its normal operation. And with that deletion also disappears logical access to all copies of the unencrypted data.
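The key-splitting step is standard threshold secret sharing (Shamir's scheme). A compact sketch, with the DHT stood in by a plain list of shares:

```python
import random

PRIME = 2 ** 127 - 1            # field modulus (a Mersenne prime)

def split(secret, n, k):
    # Random polynomial of degree k-1 with constant term = secret;
    # each share is a point (x, poly(x)). Any k shares recover it.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)        # stand-in for the AES key
shares = split(key, n=10, k=7)       # e.g. 10 DHT nodes, threshold 7
assert recover(shares[:7]) == key    # any 7 surviving shares suffice
assert recover(shares[3:]) == key
```

Vanish's "self-destruction" then amounts to the number of surviving shares silently dropping below the threshold k as DHT nodes churn away.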

The full paper that describes the Vanish system has extensive results of experimenting with the Vuze P2P network, as well as other local simulations, examining the interplay of various parameters such as the number of shares versus the deletion rate.

Monday, August 24, 2009

Twitter in the Land of Power Laws

This is my first post after a long summer holiday and I am glad to say that my blog has still been receiving a reasonable number of visits in the absence of new content, albeit some visits were barely more than a glance.

Gartner has released a collection of Hype Cycles for various technology niches, as reported by Eric Auchard at Reuters for example. The particular Hype Cycle below is for Emerging technologies (double-click to enlarge).

image

I was immediately surprised to see Quantum Computing registering in the Technology Trigger region, which is used to denote technologies that are 10 years or more out from acceptance. I think this is something of an understatement as I argued here.

Auchard picked up on microblogging (read Twitter) being positioned near the peak of expectations, and therefore about to experience the full G-force of descending into the Trough of Disillusionment. I am not really sure on what basis Gartner is making this prediction, since a recent study released by Sysomos shows that user growth has more than doubled this year - in fact, over 70% of Twitter users joined in 2009.

image

The report shows that almost every aspect of Twitter is governed by a power law. In the case of new users the power law means rapid growth, whereas for other measures it typically means domination by the few. For example, 92% of people follow fewer than 100 other people, but 1% follow more than 1,000 others. Fewer than 1% of people have more than 1,000 followers, and more than 90% have fewer than 100. Interestingly, 21% of people with a registered Twitter account have never tweeted. Most people make only one tweet per day, but just over 1% make at least 10 on average. More generally, Sysomos observed a 75/5 rule in operation, meaning that 75% of activity is accounted for by 5% of the Twitter user base.

image

Is Gartner right then? Perhaps in the words of Neil Postman the majority of Twitter users are just “amusing themselves to death”, while a few Twitter users are really tweeting themselves to death. Where is the business case here? But as I argued in The Sub-Timed Crisis in Web 2.0, we are not heading for collapse:

Unlike our current financial structures, the web is not at threat of collapsing, though many foot soldiers will fall by the way (they see themselves as pioneers, but in fact they are easily replaced). Our informational structures are not hierarchical but relational, and as such, are much more resilient to the removal of individuals. It is not the case that there are eager underlings waiting to replace leaders – the underlings are here and functioning already.

Web 2.0 losses will largely go unnoticed. New users/readers, whose information experience begins today, are being added constantly. They are essentially unaware of what happened last month and will remain that way. Joining requires no generational wait, no corporate ladder to be climbed. Everyone joins and progresses simultaneously. This turnover goes largely unnoticed since leavers are dwarfed by joiners.

I am quite sure that Twitter will weather the Trough of Disillusionment - in fact, it already has, I would say. Gartner is as overly optimistic about Quantum Computing as it is pessimistic about Microblogging.

Saturday, July 4, 2009

How will my loved ones break my password?

Just a few days ago I posted about a new Swiss web service from DataInherit to manage the life cycle of your sensitive data and credentials. Coincidentally Cory Doctorow has an article in the Guardian this week on the same topic, fretting about passwords being carried off with loved ones into the next life. While creating a will with his wife, Doctorow was stumped by how to deal with his data, and specifically the secrets that protect that data. His various hard disks are protected by AES-128 bit encryption and a passphrase that is unlikely to succumb to anything less than quantum leaps in quantum computing. So while Doctorow feels safe against attacks on his data, he wonders about the following scenario:

But what if I were killed or incapacitated before I managed to hand the passphrase over to an executor or solicitor who could use them to unlock all this stuff that will be critical to winding down my affairs – or keeping them going, in the event that I'm incapacitated?

After considering several technical and non-technical approaches, he finally settled on the following solution:

I'd split the passphrase in two, and give half of it to my wife, and the other half to my parents' lawyer in Toronto. The lawyer is out of reach of a British court order, and my wife's half of the passphrase is useless without the lawyer's half (and she's out of reach of a Canadian court order).

Doctorow remarks that the surprising outcome of this process was the realisation that we are missing a well-known service for handling key escrow in an era of military grade encryption being available to home users. He concludes that “you need to figure this stuff out, before you get hit by a bus and doom your digital life to crypto oblivion”. I think that DataInherit will be giving him a call.
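Note that splitting a passphrase literally in half gives each party half the search space. A two-way XOR split makes each share individually useless, which is presumably closer to what Doctorow intends - a minimal sketch:

```python
import os

def split_in_two(secret):
    # XOR split: each share alone reveals nothing about the secret.
    share1 = os.urandom(len(secret))
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2

def combine(share1, share2):
    # Recombining the two shares restores the original secret.
    return bytes(a ^ b for a, b in zip(share1, share2))

passphrase = b"a long and strong disk passphrase"
for_spouse, for_lawyer = split_in_two(passphrase)
assert combine(for_spouse, for_lawyer) == passphrase
```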

Friday, July 3, 2009

Excellent Awareness talk from British Airways

There were several great talks at the recent ENISA conference on raising IT Security Awareness. I would like to mention one here from Robert Hadfield of British Airways called “Silver Bullets, Kangaroos and Speed Cameras”, which is embedded below from Scribd.

Hadfield began by reporting on an experiment in which 100 identical emails, marked as urgent and carrying an executable attachment, were sent to employees. The result was that 84 people opened the email, and 69 also executed the attachment. So, he said, we have a problem with people. To justify a security awareness program he gave the following very wise reasons:

  1. Simple human error, ignorance or omission is most commonly at the root of any security breach
  2. We need to enable employees to acquire security knowledge by using their own reason, intuition and perception. We must seek long-term behavioural change.
  3. Pound for pound, raising awareness will improve security far more effectively than any technical solution can ever hope to achieve.

He also noted that since the average cost of a security breach is about £50,000, awareness programs can pay for themselves if they prevent just one or two such incidents per year. Even so, how do you effect change in a group of 45,000 mostly disinterested employees? Hadfield found great success in meet-the-people workshops and roadshows, which other speakers at the ENISA conference also reported as a very effective awareness mechanism, and which was the main conclusion of an ENISA survey conducted by PwC last year. Hadfield reports that over 200 workshops have been run this year, resulting in over 2,000 people being trained. BA also uses other channels besides workshops, and one of their clever posters is shown below - a reminder to users to lock their desktops when wandering off for a coffee.

image

I am leaving out many clever observations and graphics, so please take a look at the presentation for yourself.

IT Security Awareness presentation from British Airways, June 2009

Wednesday, July 1, 2009

The DataInherit Service – Swiss Secure Internet Escrow

I would like to announce the availability of a new secure internet storage service called DataInherit, co-founded by one of my former Swiss colleagues Tobias Christen. DataInherit is more than secure storage – it is a service for keeping sensitive data and credentials in trusted escrow for defined beneficiaries. This is an implementation of digital inheritance, supporting the ongoing life cycle of digital data. The DataInherit site contains a good explanation of their vision, and you can read more about the DataInherit security architecture on Scribd (document embedded below).

Digital Inheritance

Wednesday, June 24, 2009

The Risk of Degradation to GPS

In April the Government Accountability Office (GAO), the audit and investigative arm of the US Congress, announced the results of its study on sustaining the current GPS service. The main finding was that the GPS service is likely to degrade over the next few years, both in terms of coverage and accuracy, due to a decrease in the number of operational satellites. Using data provided by the US Department of Defense (DoD), the GAO ran simulations to determine the likelihood that GPS can be maintained at its agreed performance level of 24 satellites operating at 95% availability. The graph below (double-click to enlarge) shows a 24-strong GPS constellation dipping below 95% availability in the 2010 fiscal year, and dropping as low as 80% before recovering in 2014. The jittery sawtooth nature of the graph derives from the tussle between the failure of existing satellites and the launching of replacements, with the failure rate dominating for the next few years.

image

Needless to say the GAO findings have been widely discussed, and were further publicised in a recent televised congressional hearing. The US Air Force, which runs the GPS program for the DoD, has had to assure its military peers, various congressmen and an anxious public that the GPS service is in fact not on the brink of failure - a scenario not even considered by the GAO report. Articles in the popular press, such as Worldwide GPS may die in 2010, say US gov from the Register, are not helping matters. So how did the GPS service end up in this predicament? According to the GAO, the culprit is poor risk management in the execution of the GPS modernisation program.

GPS is a critical service, particularly for the military, as it provides information for the calculation of position, velocity and time. As noted in the GAO report, "GPS has become a ubiquitous infrastructure underpinning major sections of the economy, including telecommunications, electrical power distribution, banking and finance, transportation, environmental and natural resources management, agriculture, and emergency services in addition to the array of military operations it services". Specifically, GPS is used to guide bombs and missiles to their targets - and we don't want inaccuracy in those calculations!

There are currently 31 operational satellites, orbiting 12,600 miles (20,200 kilometres) above the Earth, a seemingly safe margin over the required 24. The constellation has grown to this size because the current roster of satellites has performed far beyond its expected operational lifetime. Even so, according to a DoD report issued last October, 20 satellites are past their design life, and 19 are without redundancy in critical hardware components.

The main threat scenario is that a substantial number of satellites will reach their operational end-of-life before they can be replaced, thus reducing the size of the constellation. Or simply put, the satellite failure rate may exceed the refresh rate. This is not really a question of whether GPS will become extinct (all satellites eventually fail), since GPS will become ineffective long before the number of satellites gets anywhere near zero.

What is the impact of a degraded GPS service? The first point is that GPS currently delivers a much better service than committed to, thanks to the additional satellites above the required 24, so the impact of dropping below 24 satellites will be quite noticeable. The accuracy of GPS-guided missiles and bombs will decrease, increasing the risk of collateral damage. This leads to a vicious circle where even more missiles or bombs will be required to take out a given target.

Since the current generation of satellites has lasted so long, and GPS nonetheless remains at risk of dropping below a 24-strong constellation, there must be problems with the rate at which the constellation is being replenished. And according to the GAO report, there have indeed been severe problems in executing the GPS program as planned. The current program has experienced cost increases and schedule delays: the launch of the first new satellite is almost 3 years late, and the cost to complete the new program will be $870 million over the original estimate.

The GAO cites a multitude of reasons for this predicament, including multiple contractor mergers, moves and acquisitions, technology over-reach (a common malady for military projects), the short tenure of program leaders, and general "diffuse leadership" (no one group or person is really in charge).

The GAO strongly recommends an improved risk management process. In a recent post, The Risk Analysis of Risk Analysis, I reviewed an article on when to apply a sophisticated risk methodology called Probabilistic Risk Assessment (PRA). The conclusion was that the difficulty, expense and potential inaccuracy of PRA can only be justified when projects are on a grand scale, and the multi-billion dollar GPS program certainly qualifies. And here the risk equation is not merely about technicalities and project management (hard as they are). There is also an overarching directive from the US government to be the premier global provider of GPS services. Europe, Russia and China are creating their own constellations, but relying on these "foreign" constellations does not seem to be an option.

Various representatives from the DoD have responded to the GAO report, stating that action must and will be taken to improve the current GPS constellation. The service is likely to experience degradation over the next 5 years, but the DoD claims it can be managed and predicted (you can calculate when and where there will be gaps). Let's hope they're right.

Tuesday, June 23, 2009

Spike in ToR Clients from Iran

ToR (The Onion Router) is a well-known public anonymity service that obscures routing information through encryption and packet path randomization. I posted about the basics of ToR last year in Anonymity at the Edge, concerning an incident where a 21-year old Swedish computer security consultant ran afoul of various authorities for his involvement in the exposure of account details harvested from ToR.

As a by-product of the current turmoil in Iran and the censorship of Internet connections, there has been a dramatic increase in the number of ToR clients (connection points into the ToR network) created from Iran. Tim O'Brien at O'Reilly Radar spoke to Andrew Lewman, the Executive Director of the Tor Project, and Lewman stated that
New client connections from within Iran have increased nearly 10x over the past 5 days. Overall, Tor client usage seems to have increased 3x over the past 5 days. There are a lot of rough numbers in these statements, and they are very conservative.
You can find some additional technical details from Lewman's own post on the topic, including this graphic


Lastly, I recently recommended the Compass site for a good collection of technical documents on security, and you can find their description of an attack on ToR here.

Sunday, June 21, 2009

My ENISA Awareness presentation

Last Friday I gave a presentation at an ENISA conference on raising IT Security Awareness. I have just one idea per slide and next to no text beyond the title. You can find the slides below on Scribd.

IT Security Awareness Tips

Sunday, June 14, 2009

Enterprise Password Management Guidelines from NIST

Those industrious people over at NIST have produced another draft publication in the SP-800 series on Guidelines for Enterprise Password Management. At 38 pages it will be a slim addition to your already bulging shelf of NIST reports. The objective of the report is to provide recommendations for password management, including "the process of defining, implementing, and maintaining password policies throughout an enterprise". The report is consistently sensible and, in places, quite sage. Overall the message is that passwords are complex to manage effectively, and you will need to spend considerable time and effort on covering all the bases. My short conclusion is that a CPO - a Chief Password Officer - is required.

Let’s begin with a definition. A password is “a secret (typically a character string) that a claimant uses to authenticate its identity”. The definition includes the shorter PIN variants, and the longer passphrase variants of passwords. Passwords are a factor of authentication, and as is well-known, not a very strong factor when used in isolation. Better management of the full password lifecycle can reduce the risks of security exposures from password failures. This NIST document will help your enterprise get there.

Storage and Transmission

NIST begins by discussing password storage and transmission, since enforcing more stringent password policies on users is counterproductive if those passwords are not adequately protected while in flight and at rest. Web browsers, email clients, and other applications may store user passwords for convenience, but this may not be done in a secure manner. There is an excellent article on Security Focus by Mikhael Felker from 2006 on password storage risks for IE and Firefox. In general, applications that store passwords and automatically enter them on behalf of a user make unattended desktops more attractive to opportunistic data thieves in the workplace, for example. Further, as noted in the recent Data Breach Investigation Report (DBIR) from Verizon, targeted malware is not just extracting passwords from disk locations but directly from RAM and other temporary storage locations. From page 22 of the DBIR, "the transient storage of information within a system's RAM is not typically discussed. Most application vendors do not encrypt data in memory and for years have considered RAM to be safe. With the advent of malware capable of parsing a system's RAM for sensitive information in real-time, however, this has become a soft-spot in the data security armour".

As NIST observes, many passwords and password hashes are transmitted over internal and external networks to provide authentication capabilities between hosts, and the main threat to such transmissions is sniffing. Sniffers today are quite sophisticated, capable of extracting unencrypted usernames and passwords sent by common protocols such as Telnet, FTP, POP and HTTP. NIST states that mitigating against sniffing is relatively easy, beginning with encrypting traffic at the network layer (VPN) or at the transport layer (SSL/TLS). A more advanced mitigation is to use network segregation and fully switched networks to protect passwords transmitted on internal networks. But let’s not forget that passwords can also be captured at source by key loggers and other forms of malware, as noted by NIST and the DBIR.

Guessing and Cracking

NIST then moves on to a discussion of password guessing and cracking. By password guessing NIST means online attacks on a given account, while password cracking is defined as attempting to invert an intercepted password hash in offline mode. Password guessing is further subdivided into brute force attacks and improved dictionary attacks. The main mitigation against guessing attacks is mandating appropriate password length and complexity rules, and reducing the number of possible online guessing attempts. Restricting the number of guesses to a small number like 5 or so is not a winning strategy, however, since strict lockout rules invite denial-of-service attacks against targeted accounts.
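The offline case is easy to make concrete. Below is a minimal sketch of dictionary cracking in Python; the wordlist and the unsalted SHA-256 hash are illustrative stand-ins for a real cracking dictionary and an intercepted password hash:

```python
import hashlib

# An unsalted hash intercepted by the attacker (here: the hash of "letmein")
intercepted = hashlib.sha256(b"letmein").hexdigest()

# A tiny stand-in for a real cracking dictionary
wordlist = ["password", "123456", "letmein", "qwerty"]

def crack(target_hash, candidates):
    """Offline dictionary attack: hash each candidate and compare."""
    for word in candidates:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

print(crack(intercepted, wordlist))  # letmein
```

The point of the sketch is that no interaction with the victim system is required: once the hash is in hand, the attacker's guessing rate is limited only by local computing power.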

The main mitigation against password cracking is to increase the effort of the attacker by using salting and stretching. Salting increases the amount of storage an attacker needs to invert a password hash by pre-computation (for example using rainbow tables), while stretching increases the time to compute each password guess. Stretching is not a standard term as far as I know; it is more commonly referred to as the iteration count, or more simply, as password spin.
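Both mitigations are available off the shelf. Here is a minimal sketch using the PBKDF2 function in Python's standard library, in which the salt and the iteration count (the stretching) are explicit parameters:

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)     # a random per-user salt defeats pre-computed tables
iterations = 100_000      # stretching: each guess now costs 100,000 hashes

# Derive a 32-byte key from the password; store salt + iterations + key
derived = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
print(salt.hex(), derived.hex())
```

With a random 16-byte salt a rainbow table would have to be built per salt value, and with 100,000 iterations every offline guess costs 100,000 hash computations instead of one.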

Complexity

Next is a discussion of password complexity, and the size of the password space under various length and composition rules, as shown in the table below (double click to enlarge). Such computations are common, but security people normally take some pleasure in seeing them recomputed. NIST observes that, in general, the number of possible passwords increases more rapidly with longer lengths than with additional character sets.

image
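The arithmetic behind the table is simple enough to reproduce. A quick sketch of the underlying computation, illustrating NIST's point that extra length beats a larger character set:

```python
def password_space(charset_size, length):
    """Number of possible passwords of exactly the given length."""
    return charset_size ** length

lower_10 = password_space(26, 10)   # lowercase only, 10 characters
full_8 = password_space(94, 8)      # all printable ASCII, 8 characters
lower_12 = password_space(26, 12)   # lowercase only, 12 characters

# Two extra characters of length outweigh a much larger character set
print(lower_12 > full_8 > lower_10)  # True
```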

Of course, the table above does not take into account user bias in password selection. A large fraction of a password space may effectively contain zero passwords that will be selected by a user. And password cracking tools, like the recently upgraded LC6, made their name on that distinction. NIST briefly mentions the issue of password entropy but not in any systematic manner. There is a longer discussion on password entropy in another NIST publication. NIST does suggest several heuristics for strong password selection and more stringent criteria for passwords chosen by administrators.

Get a CPO

The NIST guidelines go on to discuss some strategies for password reset and also provide an overview of existing enterprise password solutions. A useful glossary is presented as well. Overall the issues surrounding password management are complex and involved, and NIST gives good guidance on the main issues. It would appear that large companies which run big heterogeneous IT environments will require the services of a CPO – Chief Password Officer – to keep their password management under control and within tolerable risk limits.

Saturday, June 13, 2009

How to Choose a Good Chart

There is a nice 1-page guide to chart selection on Scribd as shown below. Seriously, I can't emphasize enough what a resource I find Scribd to be.

This fantastic chart, "Choosing a Good Chart", was produced by Andrew Abela (the Scribd copy was uploaded by Mark Druskoff). Here's a link to the original post from 2006 where he debuted his creation; the whole site is worth checking out:
http://extremepresentation.typepad.com/blog/2006/09/choosing_a_good.html

Wednesday, June 10, 2009

Paper Now Available - The Cost of SHA-1 Collisions reduced to 2^{52}

Australian researchers Cameron McDonald, Philip Hawkes and Josef Pieprzyk recently announced at the Eurocrypt 2009 conference a new attack to find collisions in SHA-1 requiring only 2^{52} operations. A preliminary version of the paper is now available here on the eprint service of the IACR.

This new result decreases the cost of a collision attack by a factor of over 2000 as compared to previous methods. The researchers note that “practical collisions are within resources of a well funded organisation”. An article by the Register provides some more background.
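The "factor of over 2000" follows directly if the previous best attack is taken to be the Wang et al. estimate of roughly 2^{63} operations; a quick sanity check:

```python
previous_best = 2 ** 63   # Wang et al. collision attack estimate
new_attack = 2 ** 52      # McDonald, Hawkes and Pieprzyk

print(previous_best // new_attack)  # 2048, i.e. "a factor of over 2000"
```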

Tuesday, June 9, 2009

The Long Tail of Life

In performing risk assessments we are often asked (or required) to make estimations of values. Typically once a risk is identified it needs to be rated for likelihood (how often) and severity (how bad). The ratings may be difficult to make in the absence of data or first-hand experience with the risk. We often therefore rely on “guesstimates”, calibrated against similar estimates by other colleagues or peers.

Here is a question to exercise your powers of estimation.

I recently came across an article by Carl Haub of the US Population Reference Bureau which seeks to answer the following question - How many people have ever lived? Put another way, over all time, how many people have been born? Haub says that he is asked this question frequently, and apparently there is something of an urban legend in population circles which maintains that 75% of all people who had ever lived were alive in the 1970s. This figure sounds plausible to the lay person since we believe most aspects of the 20th century are characterised by exponential growth.

Haub sought to debunk this statement with an informed estimate. He observes that any estimate of the total number of people who have ever been born will depend basically on two factors: (1) the length of time humans are thought to have been on Earth and (2) the average size of the human population at different periods. Haub assumes that people appeared about 50,000 years ago, and from then till now he creates ten epochs (benchmarks) characterised by different birth rates:

image

The period from 50,000 B.C. till 8,000 B.C., the dawn of agriculture, is a long struggle. Life expectancy at birth probably averaged only about 10 years for this period, and therefore most of human history. Infant mortality is thought to have been very high — perhaps 500 infant deaths per 1,000 births, or even higher. By 1 A.D. the population had risen to 300 million, which represents a meagre average growth rate of only 0.0512 percent per year.
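The growth rate is easy to verify. Below is a quick check, taking Haub's benchmark of roughly 5 million people at 8,000 B.C. (a figure from his article, not stated above):

```python
# Compound annual growth rate from 8,000 B.C. to 1 A.D.
p_8000bc = 5_000_000      # Haub's benchmark population at 8,000 B.C.
p_1ad = 300_000_000       # population at 1 A.D.
years = 8000

rate = (p_1ad / p_8000bc) ** (1 / years) - 1
print(f"{rate * 100:.4f}% per year")  # 0.0512% per year
```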

By 1650, world population rose to about 500 million, not a significant increase over the 1 A.D. estimate. Numbers were kept in check by the Black Plague, which may have killed as many as 100 million people. By 1800, however, the world population had passed the 1 billion mark, and it has since increased rapidly to the current 6 or so billion.

The graph below shows another analysis which corroborates the estimates of Haub. The curve exhibits a long tail or power law with a steep increase in population from about the 18th century.

image

Looking at the curve above you may be tempted to believe the urban legend that 75% of all people ever born were in fact alive in the 70s. Haub, however, reports that in 2002 just under 6% of all people ever born were living. Put another way, approximately 106 billion people had been born over all time, of which about 6 billion were then living.
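The two figures are consistent, as a one-line check shows:

```python
ever_born = 106e9     # Haub's estimate of all births over all time
living_2002 = 6e9     # approximate world population in 2002

print(f"{living_2002 / ever_born * 100:.2f}%")  # 5.66%, "just under 6%"
```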

The key to this conclusion is the extremely high birth rate (80 per thousand) required to keep humans from becoming extinct between the period of 50,000 B.C. and 1 A.D. According to Haub there had been about 46 billion births by 1 A.D., yet the population at that time stood at only 300 million. That is, the vast majority of people who have ever been born have also died.

How good was your estimate to the original question?

Interestingly, WolframAlpha returns the correct answer to the question. My own guess was that 30% – 40% of all people ever born are alive today, certainly nowhere near 6%.