Tuesday, August 25, 2009

Solo Desktop Factorization of an RSA-512 Key

It was recently announced that Benjamin Moody has factored a 512-bit RSA key in 73 days using only publicly available software and his desktop computer. The first RSA-512 factorization, in 1999, required the equivalent of 8,400 MIPS years over an elapsed time of about 7 months. The announcement was all the more intriguing in that it came from a mailing list associated with a user forum for Texas Instruments calculators.

The public key in question is used to verify signed OS binaries for the TI-83 Plus, and the factorization means that

… any operating system can be cryptographically signed in a manner identical to that of the original TI-OS. Third party operating systems can thus be loaded on any 83+ calculators without the use of any extra software (that was mentioned in recent news). Complete programming freedom has finally been achieved on the TI-83 Plus!

The original post from Moody was not very informative, but the subsequent Q&A thread drew out more details. Moody used publicly available factoring software on a dual-core Athlon64 at 1900 MHz. The sieving step required just under 5 gigabytes of disk space and about 2.5 gigabytes of RAM. The final computation involved finding the null space of a 5.4 million × 5.4 million matrix.
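As a back-of-the-envelope illustration of why the factorization is game over for the signing scheme, here is a minimal sketch in Python (with toy primes, not TI's actual key, and textbook RSA without padding): once the factors p and q of the modulus are known, the private signing exponent falls out by simple modular arithmetic.

```python
def private_exponent(p, q, e=65537):
    """Derive the RSA private exponent d from the factors of N."""
    phi = (p - 1) * (q - 1)
    return pow(e, -1, phi)  # modular inverse of e mod phi(N) (Python 3.8+)

# Toy primes for illustration only -- a real attack uses the two
# 256-bit factors recovered by the sieving and matrix steps.
p, q, e = 10007, 10009, 65537
n = p * q
d = private_exponent(p, q, e)

h = 1234567                       # stand-in for the hash of an OS binary
signature = pow(h, d, n)          # "sign" with the recovered private key
assert pow(signature, e, n) == h  # verifies under the public key (n, e)
print("forged signature verifies:", pow(signature, e, n) == h)
```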

Most security people would rightly point out that RSA-512 is known to provide little security as a key length; however, this does not mean that 512-bit public keys are not still in use today by companies other than Texas Instruments. By the way, someone is offering an RSA-512 factoring service with a “price indication” of $5,000.

CORRECTION, Feb 19, 2010: The original post said the factoring was done in 73 hours when the correct value is 73 days. Thank you to Samuel for pointing out the mistake in the comment below.

Self-Destructing Digital Data with Vanish

University of Washington researchers recently announced a system for permanently deleting data from the internet. The solution, called Vanish, can be used for example to delete all copies of an email cached at intermediate relay sites and ISPs during transmission from sender to receiver. Advertising that Vanish provides self-destructing data conjures up the digital equivalent of a tape that bursts into flames after its message has been played. But data protected by Vanish does not self-destruct, nor does the system actively seek out data at third-party sites for deletion.

Vanish works by encrypting data with a secret key (say an AES-256 key), splitting the key into secret shares, and then storing the shares at a randomly selected set of nodes in the distributed hash table (DHT) of a public P2P network. In this way Vanish creates an encrypted object, suitable for inclusion in an email for example, that the sender can transmit to a receiver or group of receivers.
When a receiver opens the encrypted object, Vanish attempts to retrieve enough shares from the DHT nodes to recover the key and decrypt the data.
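For the curious, here is a minimal sketch of Shamir-style threshold secret sharing, the kind of splitting Vanish applies to the data key (the field prime, share count, and threshold below are illustrative, not Vanish's defaults):

```python
import random  # for brevity; real key-splitting code needs a CSPRNG (secrets)

PRIME = 2**521 - 1  # a Mersenne prime comfortably larger than a 256-bit key

def split(secret, n_shares, threshold):
    """Split `secret` into n_shares points; any `threshold` of them recover it."""
    # Random polynomial of degree threshold-1 with constant term = secret
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def recover(shares):
    """Lagrange interpolation at x=0 rebuilds the polynomial's constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.getrandbits(256)             # e.g. an AES-256 data key
shares = split(key, n_shares=10, threshold=7)
assert recover(shares[:7]) == key         # any 7 of the 10 shares suffice
```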

The self-destructing aspect is that the key shares will be deleted as part of the natural node churn in the DHT, independently of the actions of both the sender and the receiver. A share's lifetime in the DHT is about 8 to 9 hours, after which there is a low probability that enough shares remain to reach the recovery threshold.
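A quick back-of-the-envelope calculation (my numbers, not the paper's) shows why recovery falls off a cliff: if each share independently survives churn with probability p, then the chance that enough shares remain is a binomial tail, which collapses rapidly as p drops.

```python
from math import comb

def p_recoverable(n, k, p):
    """P(at least k of n shares survive), each surviving with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative: 10 shares with a 7-share recovery threshold
for p in (0.9, 0.5, 0.2, 0.1):
    print(f"per-share survival {p:.1f}: "
          f"P(>= 7 of 10 shares) = {p_recoverable(10, 7, p):.4f}")
```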

So the encrypted data does not self-destruct - rather, key recovery information is placed in volatile public storage that is very likely to delete that information, after a short delay, as part of its normal operation. And when the shares are gone, so is logical access to all copies of the unencrypted data.

The full paper that describes the Vanish system has extensive results from experiments with the Vuze P2P network, as well as other local simulations, examining the interplay of parameters such as the number of shares versus the deletion rate.

Monday, August 24, 2009

Twitter in the Land of Power Laws

This is my first post after a long summer holiday, and I am glad to say that my blog has still been receiving a reasonable number of visits in the absence of new content, though some visits were barely more than a glance.

Gartner has released a collection of Hype Cycles for various technology niches, as reported by Eric Auchard at Reuters, for example. The particular Hype Cycle below is for Emerging Technologies.

[Figure: Gartner Hype Cycle for Emerging Technologies]

I was immediately surprised to see Quantum Computing registering in the Technology Trigger region, which is used to denote technologies that are 10 or more years away from mainstream adoption. I think this is something of an understatement, as I argued here.

Auchard picked up on microblogging (read: Twitter) being positioned near the Peak of Inflated Expectations, and therefore about to experience the full G-force of the descent into the Trough of Disillusionment. I am not really sure from what perspective Gartner is making this prediction, since a recent study released by Sysomos shows that user growth has more than doubled this year – in fact, over 70% of Twitter users joined in 2009.


The report shows that almost every aspect of Twitter is operating under a power law. In the case of new users this shows up as explosive growth, whereas for other measures a power law typically means domination by the few. For example, 92% of people follow fewer than 100 others, yet 1% follow more than 1,000. Fewer than 1% of people have more than 1,000 followers, and more than 90% have fewer than 100. Interestingly, 21% of people with a registered Twitter account have never tweeted. Most people post only one tweet per day, but just over 1% average at least 10. More generally, Sysomos observed a 75/5 rule in operation: 75% of all activity is accounted for by 5% of the Twitter user base.
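As a toy illustration of this kind of concentration (the Pareto exponent below is invented, not fitted to Sysomos's data), sampling per-user activity from a power-law distribution reproduces the flavour of the 75/5 rule:

```python
import random

random.seed(2009)
ALPHA = 1.1  # Pareto shape parameter; purely illustrative

# Sample a notional "tweets per user" figure for 100,000 users,
# sorted from most to least active
activity = sorted((random.paretovariate(ALPHA) for _ in range(100_000)),
                  reverse=True)

top5 = sum(activity[: len(activity) // 20])  # the most active 5% of users
print(f"top 5% of users account for {top5 / sum(activity):.0%} of activity")
```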


Is Gartner right then? Perhaps, in the words of Neil Postman, the majority of Twitter users are just “amusing themselves to death”, while a few Twitter users are really tweeting themselves to death. Where is the business case here? But as I argued in The Sub-Timed Crisis in Web 2.0, we are not heading for collapse:

Unlike our current financial structures, the web is not at threat of collapsing, though many foot soldiers will fall by the way (they see themselves as pioneers, but in fact they are easily replaced). Our informational structures are not hierarchical but relational, and as such, are much more resilient to the removal of individuals. It is not the case that there are eager underlings waiting to replace leaders – the underlings are here and functioning already.

Web 2.0 losses will largely go unnoticed. New users/readers, whose information experience begins today, are being added constantly. They are essentially unaware of what happened last month and will remain that way. Joining involves no generational wait, no corporate ladder to be climbed. Everyone joins and progresses simultaneously. This turnover goes largely unnoticed since leavers are dwarfed by joiners.

I am quite sure that Twitter will weather the Trough of Disillusionment – in fact, I would say it already has. Gartner is as overly optimistic about Quantum Computing as it is pessimistic about microblogging.
