The recent book, The Black Swan: The Impact of the Highly Improbable, by Nassim Nicholas Taleb (NNT), is a runaway bestseller addressing our seemingly inherent inability to predict (let alone plan for) those events that will produce the highest impacts in our lives, professions, countries and the world at large. In particular, he is interested in (obsessed with, in fact) single events that scuttle all our expectations about a given topic, whose origins are without precedent, and whose consequences are extreme. The name of the book is derived from the discovery of black swans in Australia by Europeans in the 17th century, which overturned the apparently self-evident statement that "all swans are white". His more modern examples of Black Swan events include WWI, the Internet and 9/11.
In a recent post I stated that the problems with the random number generator on Debian may well turn out to be a Black Swan event, since the impact of all the weak keys created and distributed over an 18-month period could introduce drastic security vulnerabilities in systems relying on Debian-generated keys.
It is difficult at the moment to say whether the Debian debacle constitutes a true Black Swan since the consequences are still being played out. In the meantime, I have compiled a list of Black Swan events that we have witnessed and endured. Naturally my list is subjective, and some justification is provided below.
- One Time Pad
- Computers, Cryptography and Cryptanalysis
- Public Key Cryptography and RSA
- The Internet Worm
- Basic internet protocol insecurity
- Bruce Schneier
- PKI
- Passwords
- Good Enough Security
The One Time Pad
Shannon proved that the entropy (uncertainty) of the cryptographic key must be at least as large as the entropy of the plaintext to provide unbreakable security. In practice this translated into adding (modulo 2, or XORing) a fully random key stream to the plaintext. This system had been used previously, but what Shannon provided were the entropy arguments to prove that the cipher was unbreakable. His tools furnished the absent proof. In most situations it is not practical to use one time pads, since fully random keys must be pre-distributed. Nonetheless, with a single stroke cryptographers were cast out of Eden. Like Adam and Eve, and their descendants (Alice and Bob?), cryptographers would have to work for a living, labouring over systems that they knew could be broken either directly or over time. Perfection had been demonstrated, but in practice unbreakable security would be unobtainable.
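As a minimal sketch (in Python, with os.urandom standing in for a source of truly random, pre-distributed key material), the one time pad is just an XOR of the plaintext with a key stream that is at least as long as the message and never reused:

```python
import os

def otp_xor(data: bytes, key: bytes) -> bytes:
    # The key must be truly random, at least as long as the data,
    # and used only once -- the conditions behind Shannon's proof.
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = os.urandom(len(message))        # pre-distributed random key material
ciphertext = otp_xor(message, key)
recovered = otp_xor(ciphertext, key)  # XOR with the same key is its own inverse
assert recovered == message
```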
Computers, Cryptography and Cryptanalysis
The introduction of computers at the end of WWII enabled some codes to be broken, or at least provided critical information that led to their compromise. In the next 20 years (the 50's and 60's) there was a revolution away from classical ciphers to programmable ciphers that were designed on new principles that radically departed from classical methods. While the principles of classical ciphers were not wholly dismissed, they were certainly diminished. Ciphers no longer needed to be pen-and-paper based, table driven or reliant on electromechanical devices. Such systems are clever, and at times ingenious, but are limited within a certain framework of design. With computers cryptography was able to shed its puzzle persona. A key development was to increase the block length of ciphers. Plaintext was traditionally encrypted one character at a time (an 8-bit block size in modern parlance), and this greatly limits the type of dependencies that can be created between the plaintext, ciphertext and the key. Block sizes were extended to 6 to 8 characters (48 to 64 bits). Cryptography was not just faster but better - you could have an Enigma with 20 rotors if you wished. The Black Swan was that great swaths of the information intelligence landscape went dark or became clear.
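To give a rough sense of why block length matters, here is an illustrative calculation (my own sketch, not from any cited source) comparing the number of possible block-to-block mappings for 8-bit and 64-bit blocks; the vastly larger space is what allows far richer dependencies between plaintext, ciphertext and key:

```python
import math

def log2_mappings(block_bits: int) -> float:
    # log2 of (2^block_bits)! -- the number of distinct invertible
    # mappings (permutations) over all blocks of that size.
    n = 2 ** block_bits
    return math.lgamma(n + 1) / math.log(2)

print(f"8-bit blocks : about 2^{log2_mappings(8):,.0f} possible mappings")
print(f"64-bit blocks: about 2^{log2_mappings(64):.3g} possible mappings")
```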
Public Key Cryptography and RSA
Public key cryptography (PKC) as a theory was published in 1976, and the RSA algorithm, an uncanny match to the stated requirements of PKC, was invented the next year. The impact here is difficult to describe. It brought cryptography into the public research community, and has attracted the interest of many brilliant researchers who would otherwise have deployed their talents in different disciplines. The development of PKC demonstrated that civilian cryptography could make significant contributions even without extensive training in classical methods or military systems. There is good evidence that military organizations were the first to invent PKC, and perhaps rightly they thought that this technology could be kept a secret for some years to come. The Black Swan was to underestimate civilian capability and to pass up the opportunity to patent PKC technology. PKC also tacked cryptography further away from its traditional roots, and drove it deeper into pure mathematics and computing. PKC designers are not looking to create good ciphers from scratch but rather to take an existing hard problem and harness its difficulty as a basis for security. The point here is to exploit not create, reduce not claim. The skill of the designer is in making connections.
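A toy illustration of harnessing a hard problem (assuming Python 3.8+ and deliberately tiny primes, so not remotely secure): the public modulus n is published, and recovering the private exponent is easy only for someone who can factor n.

```python
# Toy RSA with tiny primes -- real keys use primes of hundreds of digits
# so that factoring n is computationally infeasible.
p, q = 61, 53
n = p * q                  # public modulus; security rests on the hardness of factoring n
phi = (p - 1) * (q - 1)
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (Python 3.8+)

m = 42                     # a message encoded as an integer < n
c = pow(m, e, n)           # encrypt with the public key (e, n)
assert pow(c, d, n) == m   # decrypt with the private key (d, n)
```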
The Internet Worm
The Internet Worm, also known as the Morris Worm, of November 1988 was a true Black Swan for IT Security. It was a defining digital 9/11 moment. According to its author, the purpose of the worm was to gauge the size of the Internet by conducting a door-to-door survey of connected hosts. The worm itself and its propagation mechanism were intended to have minimal impact on CPU and network resources. However several flawed assumptions ensured that the exact opposite would occur - within a few hours the CPU performance of infected hosts was reduced to a level that made them unusable. It is estimated that around 6,000 (mostly UNIX) hosts were infected by the worm, or approximately 10% of the then-connected hosts. The cost of recovery was put at USD 10m - 100m by the US General Accounting Office.
The worm did not destroy files, intercept private mail, reveal passwords, corrupt databases or plant Trojan horses. But it did exhaust CPU on infected machines, producing a denial-of-service attack. While the worm's author may claim that the worm was an experimental type of network diagnostic, he went to great lengths to disguise the code (similar to malware today), to make the worm difficult to stop, and to exploit security weaknesses so that the worm would propagate rapidly and extensively (again similar to malware today). In particular the worm used password cracking, passwords shared across multiple accounts, buffer overflows and weaknesses in well-known services to propagate itself. While these attacks were known, the impact of exploiting these vulnerabilities on an Internet scale was at best only vaguely comprehended.
The worm drove home to the people in and around the cleanup effort just how vulnerable the Internet was to rogue programs. Eugene Spafford commented that the damage could have been much more devastating if the worm had been programmed better and acted maliciously. As it was, an allegedly unintentional act brought 10% of the Internet to its knees - what if someone was really trying? And now the code was being distributed to show them how.
Basic Internet Protocol Insecurity
The Internet was designed for reliable connectivity in the presence of failures (even malicious attacks), based on a decentralised architecture which can adapt to local changes (outages, loss of servers) and recover service. It is also modular in its layered design, so services can be exchanged and replaced (SSL could easily be slotted in, as was HTTP, Skype required a few more changes, VPN at Layer 2, and so on). This plug-and-play property with reliability came at the cost of security. The basic protocols, even higher-level ones such as email (SMTP), operate on the "honour system" - exchanging and processing network information with little or no authentication. Basic protocols are designed to support reliable connectivity, which was the fundamental objective in designing a global packet-switched network. There is little confidence in the basic services, as evidenced by their secure counterparts: HTTP/S, Email/SMIME, DNS/DNSSEC, L2/VPN, TCP/SSL. It is usually easier to introduce a new protocol than to patch existing ones. The Black Swan is that the Internet is now a critical communication infrastructure for which security was not a fundamental design requirement.
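As a small illustration of the honour system (a sketch in Python, assuming a hypothetical relay at localhost:25 that still speaks classic, unauthenticated SMTP), the envelope sender is simply whatever the client asserts:

```python
import smtplib

# Hypothetical relay at localhost:25 behaving like original, unauthenticated SMTP.
msg = (
    "From: president@example.gov\r\n"
    "To: victim@example.com\r\n"
    "Subject: Completely legitimate mail\r\n"
    "\r\n"
    "Nothing in the base protocol verifies who I really am.\r\n"
)
with smtplib.SMTP("localhost", 25) as server:
    # The envelope sender is asserted by the client; basic SMTP does not
    # authenticate this claim in any way.
    server.sendmail("president@example.gov", ["victim@example.com"], msg)
```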
Naive trust models and assumptions have led to endemic problems for many years, with the latest manifestation being the well-publicised DNS debacle. In this case well-known inherent security vulnerabilities had been ignored for several years, attacks have become more feasible with increased bandwidth, and the integrity of DNS itself is now critical. But it took a lot of security theatre on the global stage to get IT people to the point of patching. The US Government has announced that DNSSEC, a secure version of DNS available for over 10 years now, will be deployed in government offices by 2010. Most recently, several researchers claim to have found inherent weaknesses in the basic TCP/IP protocols that lead to denial-of-service attacks.
Bruce Schneier
Bruce Schneier is the best known security authority in the world. His blog has hundreds of thousands of readers, his posts can yield hundreds of comments, and his books are bestsellers. His opinions hold sway over both technical people and executives, as well as all the layers in between. He is the Oprah of security - a public figure and a leading opinion maker. The Black Swan aspect of Mr. Schneier is that he has achieved this status through excellent communication (and yes, cunning publicity as well) rather than technical prowess. Of course he has technical prowess, but that is rather common in security and cryptography. What is uncommon, or even uncanny, is the ability to explain security in terms that can be understood by non-specialists, whether they are programmers, professionals, managers or executives. Bruce has written himself into the modern history books of security. He has shown, once again, that communication is king - the security explanation is mightier than the security deed.
Public Key Infrastructure (PKI)
Since its inception, public key cryptography (PKC) had been fated to unravel the greatest Gordian knot in security: how to establish secure communication between previously unacquainted partners. With PKC we did not require secret key exchange (SKE) to bootstrap secure communication - authenticated key exchange (AKE) was sufficient. The assumption was that setting up an infrastructure for AKE was obviously easier than establishing a similar infrastructure for SKE, if that could be done at all. The rise of the Internet in the mid 90's created an obsession with e-commerce, and PKC was presented with a test bed of a potentially fantastic scale on which to demonstrate its power and utility.
So PKC became PKI in deployment, heralded as the great catalyst and foundational technology of the commercial Internet. But the fallacy was to believe that technology could be used to solve the intangible social issue of trust, or even provide a functioning digital equivalent. The PK part of PKI was excellent - good algorithms, formats and protocols. The problem was with the I - the notion of melding the PK components into a legally-binding trust infrastructure. PKI was successfully discredited and repelled by the legal profession, and hence by business as well. The main conundrum was: who would carry liability for business transacted using certificates? The Black Swan here was that brilliant security mathematics and technology could not convince the market. People were eventually happy with PayPal, a graft of credit card payments onto the Internet, essentially a mapping of an existing trust infrastructure onto a digital medium.
The discrediting of PKI began a long cycle of decline between business and IT Security, which is itself part of a longer decline in the general relationship between business and IT. SOA is the great peace offering from IT, intended to reconcile the two camps. But SOA is even more complicated than PKI, and PKI is contained in the security standards of SOA.
Passwords
Passwords were introduced in the 60's to manage users on timesharing systems. The roots of passwords are largely administrative rather than security-driven. Passwords have proliferated extensively with client-server computing, and now with its generalization, web-based computing. Passwords are badly chosen, written down, prominently displayed, sent unencrypted, phished out of users, infrequently updated, and commonly forgotten. The traditional business case for using passwords has been their ease and low cost of deployment from the viewpoint of system administrators, while the true cost and inconvenience are passed downstream to users (and help desk operators). Provisioning and managing user accounts and passwords is a major cost and compliance concern for most large companies.
Passwords are an example of a cumulative Black Swan - 40 years of compounded reliance on an unscalable 1-factor authentication technology without a plausible migration strategy. Employees commonly require 5 or 6 passwords to perform even the most basic IT business functions, and perhaps double that number if they require access to some special applications. This number itself could easily double once again in their private lives where another gaggle of passwords is required to support personal online transactions and services. In the short term we will require more and longer (stronger) passwords which, as people, we are stupendously ill-equipped to cope with.
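A back-of-the-envelope sketch (in Python, assuming an attacker capable of 10 billion offline guesses per second, a figure chosen purely for illustration) of why "more and longer" quickly outstrips human memory:

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    # Entropy of a password chosen uniformly at random from the alphabet.
    return length * math.log2(alphabet_size)

def crack_time_years(bits: float, guesses_per_second: float = 1e10) -> float:
    # Expected time to search half the keyspace at the assumed guess rate.
    return (2 ** bits / 2) / guesses_per_second / (3600 * 24 * 365)

for alphabet, length, label in [(26, 8, "8 lowercase letters"),
                                (94, 8, "8 printable ASCII chars"),
                                (94, 14, "14 printable ASCII chars")]:
    bits = entropy_bits(alphabet, length)
    print(f"{label:>24}: {bits:5.1f} bits, ~{crack_time_years(bits):.2e} years to crack")
```

The passwords that resist such guessing rates are precisely the long, random ones that people cannot remember in the quantities now demanded of them.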
Good Enough Security
This Black Swan is the most pernicious and potentially crippling for IT Security as an industry and a profession. Security is no longer an end in itself, if it ever was. Just as physics is called the queen of the sciences, security people believed their profession was venerated within the IT sphere. Of course IT Security is important in the sense that if it is not done then great catastrophes will be summoned forth. But the issue is not whether to ignore security; it is how much attention to give it. This is the fundamental asymmetry of security - it is easy to do badly (and be measured) but difficult to do well in a demonstrable fashion. Security has been likened to insurance - having none is a bad idea, but once you decide to get insurance, how much is appropriate and how can you minimize your costs?
We have called this Black Swan "Good Enough Security", but we might equally have called it risk-based security, the transition from risk to assurance, the diminishing returns of security, or knowing your security posture. Managers and other stakeholders want to know that their IT assets are adequately protected, and it is up to the IT Security person to define that level of adequacy and provide assurance that it is reached and maintained. Most security people are woefully ill-equipped to define and deliver such assurance in convincing business or managerial language.
The burden of risk management has fallen heavily on IT Security. Commonly, once the torch of enterprise risk management is kindled in the higher corporate echelons, it is passed down the ranks until it settles with IT Security people, who are left to assume responsibility for the management of IT Risk. And these people are ill-equipped to do so. Ironically, if they persist in their task they are likely to ascertain that security is just one piece of the IT Risk landscape, and perhaps a minor piece at that. For example, availability is much more important to business than security, which is a mere non-functional requirement.