Monday, December 22, 2008

Three posts over at U2

Over at my other blog I have made a few short posts that you might find interesting.


Sunday, December 14, 2008

The spin on passwords for AES

In Are AES-256 keys too Large? I discussed that 256-bit keys will fall short of their implied security when derived from passwords or embedded in protocols with other cryptography. To achieve the equivalent of 256-bit security users would need to select 40 character passwords at random, and protocols would need to employ RSA keys with at least 13,000 bits. At times 256-bit keys are just too big for their own good.
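
For the curious, here is a quick back-of-the-envelope check of these numbers in Python (the 95-character printable ASCII alphabet is my assumption for illustration, not part of any standard):

    import math

    ALPHABET_SIZE = 95                         # printable ASCII characters
    bits_per_char = math.log2(ALPHABET_SIZE)   # about 6.6 bits of entropy per character

    for key_bits in (128, 256):
        chars_needed = math.ceil(key_bits / bits_per_char)
        print(f"AES-{key_bits}: about {chars_needed} random characters")

    # Prints about 20 characters for AES-128 and 39 for AES-256,
    # in line with the 40-character figure quoted above.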

Nonetheless AES-256 is being widely deployed since it conveniently lies at the intersection of good marketing and pragmatic security. In upgrading from AES-128 to AES-256 vendors can legitimately claim that their products use maximum strength cryptography, and key lengths can be doubled (thus squaring the effort for brute force attacks) for a modest 40% performance hit.

Perhaps this reasoning prevailed at Adobe when they recently upgraded their document encryption scheme from AES-128 in v8 to AES-256 in v9. However Adobe later had to announce that v9 in fact offers less security against brute force attacks as compared to v8. What went wrong? They forgot about the spin.

The Spin Factor

A well-known standard for converting passwords (or in fact arbitrary bit strings) into cryptographic keys is the password-based cryptography standard PKCS #5 from RSA. PKCS #5 has been stable at version 2 for almost 10 years now, and has also been published as RFC 2898. PKCS #5 defines a generic PBKDF (Password-Based Key Derivation Function) to produce a cryptographic key from a password or passphrase:

Key = PBKDF( salt, password, iteration count)

PBKDF takes the bit string and salt value, and repeatedly applies a pseudo-random function, such as a hash function, to produce the cryptographic key.

The number of applications of the hash function is called the iteration count. Both the salt and the iteration count have been introduced to thwart exhaustive attacks. The salt increases the space required to store a table of pre-computed password/key pairs, while the iteration count increases the cost of computing a single password guess. PKCS #5 recommends that the iteration count be at least 1,000. Additional operations inserted to slow down performance for security reasons are also called spin.
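
To make the derivation concrete, here is a minimal sketch using the PBKDF2 function from Python's standard library (the salt length, iteration count and key length below are illustrative choices of mine, not values mandated by PKCS #5):

    import hashlib
    import os

    password = b"correct horse battery staple"   # example passphrase
    salt = os.urandom(16)                        # random salt, stored alongside the ciphertext
    iterations = 10_000                          # the spin: PKCS #5 recommends at least 1,000

    # Derive a 256-bit key by repeatedly applying HMAC-SHA-256 to the password and salt.
    key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
    print(key.hex())

Raising the iteration count slows down each password guess by the same factor for both the legitimate user and the attacker, which is exactly the point.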

Too Little Spin

Let's get back to Adobe with these ideas in mind. In Adobe v8, a single password guess required 50 calls to the MD5 hash function as well as 20 calls to RC4. This limited the password guess rate to about 50,000 trials/second on a modern Intel processor. However in v9 the PBKDF computation was replaced by a single call to SHA-256, which, according to ElcomSoft, allows a highly optimised attacker to undertake over 73 million trials/second. Attackers can brute force their way through the password space about 1,500 times faster in v9 than in v8.
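
To put those guess rates in perspective, a rough calculation (the 8-character alphanumeric password space is my assumption, chosen only to make the comparison concrete):

    # Time to exhaust an assumed password space at the reported guess rates.
    space = 36 ** 8                  # 8 characters from a-z, 0-9: about 2.8 * 10^12 candidates
    rate_v8 = 50_000                 # trials/second reported for Adobe v8
    rate_v9 = 73_000_000             # trials/second reported for Adobe v9 (ElcomSoft estimate)

    for label, rate in (("v8", rate_v8), ("v9", rate_v9)):
        seconds = space / rate
        print(f"{label}: about {seconds / 3600:,.0f} hours to search the full space")

    # Roughly 1.8 years of searching at v8 rates collapses to about 11 hours at v9 rates.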

Adobe has said that the intention of the changes from v8 to v9 was to make encrypted documents open faster, which they certainly achieved as compared to v8. But security can only decrease when encryption key lengths are increased yet are still bootstrapped off the same password base. To their credit, Adobe increased the maximum password length from 32 Roman characters in v8 to a massive 127 Unicode characters in v9, so it would be possible to select a password as secure as an AES-256 key. Even so, for higher-assurance applications, Adobe continues to recommend using PKI-based encryption or Adobe LiveCycle Rights Management encryption.

Micro Spin

David LeBlanc, a well-known security professional and author, recently described Microsoft's new agile encryption to be supported in Office 2007 SP2. The Microsoft standard is based on PKCS #5 and allows for a spin of up to 10 million iterations and a salt of length 64,000. LeBlanc states that brute-forcing passwords is a real threat, so it is a necessary evil to artificially inflate the work factor of performing password trials. In other words he's a fan of spin when required.

There was more than a hint of Schadenfreude when LeBlanc stated that their new method, using a spin of 50,000, was over 15,000 times "slower" than Adobe v9. To be fair, LeBlanc also provides a link to another post from July where he reviewed Microsoft's spotty Office encryption history. It's quite unpleasant reading actually, and LeBlanc went as far as saying that

As of Office 2007, we do warn you that the encryption we do on the binary documents is weak. Most of the time, it's so weak that it will only act as a mild deterrent. In some cases, we missed encrypting things entirely (which is actually called out in a KB article some time ago).

My advice is that if you must encrypt a binary document, use a 3rd party tool to do it.

The line break and italics were added by me. The default encryption for binary documents in Office 2007 is 40-bit RC4, and this is only being corrected with the new standard available in Office 2007 SP2. Keep that in mind when using MS encryption before SP2.

Toxic Spin

If password policies are not made more complex when moving from AES-128 to AES-256 then the upgrade brings marketing advantages but no security advantages. Deriving or bootstrapping AES keys from passwords is really an exercise in self-deception, especially when considering 256-bit keys. The discrepancy between the low entropy of passwords and the astronomical keyspace of AES-256 simply cannot be reconciled.

Adding spin to password-based computations is a workaround to the unpleasant fact that human habits and memory are vastly outmoded in today's IT environment. Everything is getting faster, better and cheaper - except us. Passwords remain the most toxic asset on the security balance sheet, but don't expect a bailout any time soon.


Tuesday, December 9, 2008

On the Bottom of Things: reflections on a year of Blogging

The 3,000th visit to the No Tricks blog arrived on December 5th. This was as pleasant as it was unexpected, since just over a thousand people had ventured onto the blog by the end of summer. The thought of reaching even 2,000 visits by Christmas seemed to be relying on personal intervention by the jolly man in red.

[Figure: cumulative number of visits to the blog over the year]

The cumulative number of visits is plotted above. Linear trending indicates an average of just over 10 visits/day, with the true average being just over 12. For the last four months the average has been 16 visits/day, while it's up to 21 visits/day for the last two months. This is probably due to more regular posts being made in the second half of the year, and more exposure through other channels such as Twitter. A rate of 20 visits per day seems quite respectable for my efforts. But indeed, what is the effort?

The Burden of Blogging

I have made 22 posts this year so far, and expect to end up with around 30 by year end. A busy blog month for me means weekly posts, which is a modest rate compared to many casual bloggers. I am surprised how people can do so much blogging, in particular people who I surmise are near or over 40 (Chris Hoff comes to mind). In April the NYT ran an article on the 24 x 7 stress of keeping up appearances on the Web. I have eaten into a considerable amount of weekend family time getting posts out, notably to finish Counting Restricted Password Spaces. This post required an inordinate amount of mundane formatting and calculation. A satisfying post in the end, but ironically I felt that a follow-up post was required to show that the detailed counting methodology presented could be applied more generally. The ingeniously-named More on Counting Restricted Password Spaces was the product of another weekend hiding behind my laptop.

The time burden is not just writing your blog but reading those of others. To gain a sense of what was topical in security and risk blogging, I signed up to about 50 feeds via Google Reader. Such aggregators make it very easy to arrange for a tsunami of information to inundate your browser on a regular basis.

I read quite a few of the posts and scanned many more, creating my own tag cloud for arranging articles, posts and PDFs into categories, with the intention to mine them later. But I have been overwhelmed in the sense that my tag-to-post ratio is quite low. Too much time is spent on front-end processing rather than back-end writing. A blogger will be known by the quality of their output and not by the quality of their cached input.

Of Tools and Text

I have also gone through something of an Odyssey with tools to collect, calibrate, organise, represent and display information. In any survey of tooling through experimentation there is substantial waste and thrashing. Most of my information is stored in Treepad, which basically allows me to forget about the underlying directory structure on my hard disk. It is a great tool but poor in several key areas - which is a familiar pattern for most of the tools I committed to, including OneNote, Wikidpad, ConnectedText and Evernote (sorry but I am not going to link those tools for you - get Hyperwords).

I also often organise ideas and references for a post in FreeMind. This is yet another desktop tool to navigate information into and out of, but for laying out post structure FreeMind is excellent. You can find the mindmap used to write Anonymity on the Edge in an interactive Flash format here.

I regularly commit the cardinal sin of editing posts long after they first appeared - if for no other reason than I am a terrible typist and proof-reader. Ominously, I recently discovered a site that produces LaTeX mathematics graphics for inclusion in blogs. There is a temptation to re-edit quite a few articles that were written using text-based math (yuck!) such as my post on the Birthday Paradox. Time well spent or a distraction from a new post?

On the Bottom of Things

The famous computer scientist Donald Knuth stopped using email on January 1st, 1990. His reasons were simple

Email is a wonderful thing for people whose role in life is to be on top of things. But not for me; my role is to be on the bottom of things. What I do takes long hours of studying and uninterruptible concentration. I try to learn certain areas of computer science exhaustively; then I try to digest that knowledge into a form that is accessible to people who don't have time for such study.

While I am no Knuth (who is or could be?), in a small way I try to echo his conviction in my blog - being on the bottom of things as opposed to participating in the seething and amorphous exchange of information that defines being on top of things. I have to agree with Larry and Lou who say

Larry: Now I’m guessing you are a pretty traditional guy Lou. What’s your take on all this Internet technology?

Lou: Couldn’t support it more Larry. One of our big plays is to convince people that the place to be is “on top of things” rather than “at the bottom of things” – that is, to focus on the fleeting, not the foundational. It’s a win-win situation: people get to find a few cheap holidays and outsmart their doctor on something like the glycemic index, while we get mindshare that nothing is really relevant unless it arrives in your mailbox personally addressed to you as part of a competition.

Larry: Short term memory can be measured in mouse clicks.

Lou: Precisely. History becomes a hobby, not a lesson.

Knuth has opined that probably few people who buy his books actually read them fully. So perhaps I should not be overly disappointed that my long posts are not frequently visited - such as Quantum Computing, Zero Knowledge Proofs, and Anonymity. Being on the bottom of things is more a personal responsibility than a popularity contest.

Pareto Posts

I had two surprise success posts which, together with the main blog page, account for about 50% of all page views. The blue below represents home page visits and the gray all other pages with less than 1% of hits.

[Figure: pie chart of page views by page]

My first success (orange slice, 330 views or 7.27% of total) was a short post on the Entrust v5 PKI which links to a longish PDF explaining the product's architecture and key management functions. Certainly this is an "on the bottom of things" document. Most of the traffic comes from Wikipedia where I posted a link under the PKI topic. The lesson here was to leave links on well-visited sites that are right on your technical topic.

The second success - and the most popular post by far (green slice, 648 views or 13.61% of total) - addressed the question of whether AES-256 keys are too large. Most traffic comes directly from Google searches on AES, AES-256, or key lengths - perennial favourites of many crypto aficionados. Originally I had this material contained in a much longer post (too long) but I came to my senses and created 3 smaller articles (the other two being The Long Tail of Vulnerability for A5/1 and The Cold Boot Attack).

The lesson here is to make searchable titles visible to Google, dismembering longer posts as required. I just sliced out the PageRank details from this post and published it as stand-alone content at my new blog U2.

This liberating exercise has got me thinking about the notion of a Least Bloggable Unit.

Harvard and Bruce

Quite a few years ago now I read a book by the then dean of the Arts and Science Faculty of Harvard. He stated that the 3 goals of their undergraduate program were to

  1. Ensure students could construct a written argument
  2. Ensure students could construct a quantitative argument
  3. Ensure students received exposure to another culture.

For me blogging is an open invitation to hone the skills of points 1 and 2 - mainly the former and the latter where applicable (I have point 3 covered after living away from Australia for almost 20 years). As I mentioned in Some Black Swans for IT Security, Bruce Schneier has mainly conquered the security world through written communication

The Black Swan aspect of Mr. Schneier is that he has achieved this status through excellent communication (and yes cunning publicity as well) rather than technical prowess. Of course he has technical prowess but that is rather common in security and cryptography. What is uncommon, or even uncanny, is the ability to explain security in terms that can be understood by non-specialists whether it be programmers, professionals, managers or executives. Bruce has literally written himself into the modern history books of security. He has shown, once again, that communication is king - the security explanation is mightier than the security deed.

Indeed the security explanation is mightier than the security deed. And blogging is the beginning of that process.

Monday, December 8, 2008

Barring an Act of God

The following is a transcript of a recent “Late Night with Larry” broadcast.

Larry: Thanks for joining us tonight and I’m pleased to welcome this evening’s guest, Louis S. Seefer, the long time head of General Souls. He’s taking some time out of his busy schedule to share his experiences and market predictions with us. It’s a pleasure to have you here Mr. Seefer.

Lou: Please, call me Lou. Most of the time people call me Lou Seefer but let’s drop the formalities tonight.

Larry: Alright, Lou, and once again thanks for taking some time to stop by.

Lou: Not at all Larry, my pleasure to be here. It’s not all presidents, oil cartels, and major tax authorities you know.

Larry: Glad to hear it, but I imagine you have to focus on the market leaders and trend makers most of the time.

Lou: Sure, but we are in a big market and in the end it's penetration that counts. I mean, just last week I had a chance to meet with two neighboring families who were discussing the details of getting a dividing fence between their properties. Things were going pretty badly, I mean they were close to a reasonable agreement, when I was able to bring things back to the main issue.

Larry: The dividing fence?

Lou: No, no - just the division. Whose property was larger, whose dog was relieving himself where, who would have more shade in the afternoon. One instant sale, and one that I am counting on for the end of the month.

Larry: Sounds like you’ve still got it Lou. I mean you’ve been running GS for, well, as long as I can remember. Tell us how you got started.

Lou: Love to Larry. It’s a great story that I never tire of. Well it all started when I had a falling out with the Old Man. Back then he was really running the show, you know, the original God Father. It was his way or the highway. Not that we had many highways back then, but you get the idea.

Larry: Indeed I do.

Lou: Anyway, the Old Man had a sweet deal. Good products, good access to clients, and an army of fanatical staff at his beck and call. He’d practically just have to think something and it would be done. And vision! Taking him on didn’t seem very wise, but I was young and foolish, well mostly foolish really.

But to his credit, the Old Man gave me some leeway, and didn’t stamp me out the first chance he had. He didn’t see any need. He really believed (and still does) that his product line is the only one with any lasting value.

Larry: I guess he’s got a point there Lou.

Lou: No doubt he has Larry, but the customer is king, or at least wants to become a king.

Larry: Wouldn’t have it any other way Lou.

Lou: Now we’re talking. So there I was out on the road, wandering from failure to failure. Anyway, there I was in an orchard one day when an apple fell on my head. Scientific stimulations aside, it tasted delicious and got me thinking. By the end of the month I had signed up my first two clients, literally tempting them away from the Old Man. Their dress sense was not particularly well developed, but dramatically improved following unforeseen changes in their accommodation and food supply. An unexpected surprise was that one of their sons eventually turned out to be a major investor. Thus I learned early on that while I was not in a family business, I was in the business of families.

Larry: Now I’m guessing you are a pretty traditional guy Lou. What’s your take on all this internet technology?

Lou: Couldn’t support it more Larry. One of our big plays is to convince people that the place to be is “on top of things” rather than “at the bottom of things” – that is, to focus on the fleeting, not the foundational. It’s a win-win situation: people get to find a few cheap holidays and outsmart their doctor on something like the glycemic index, while we get mindshare that nothing is really relevant unless it arrives in your mailbox personally addressed to you as part of a competition.

Larry: Short term memory can be measured in mouse clicks.

Lou: Precisely. History becomes a hobby, not a lesson.

Larry: Speaking of history, I guess you’re going to miss Jean-Paul, like we all will.

Lou: What a guy! JP was running one of the Old Man’s biggest and largely successful subsidiaries for over a decade. Sure we locked horns often (ok, mostly my horns), but our customers expect nothing less.

Larry: And what a showman!

Lou: No arguments there Larry. The hats, flowing robes, and the palatial HQ in Rome! Got his own Swiss security team as well.

Larry: Their hats are pretty good too.

Lou: They sure are. Anyway, I repeatedly offered JP the full range of our longevity products, at a significant discount with no tangible reduction in benefits. However he always found our product line to be too costly, preferring his own stuff based on a single book with the nominal advantages of actually working and being widely available for free.

Larry: I guess that Microsoft vs. Linux debate is not really that new.

Lou: 10-4 on that one Larry. Thankfully for us JP’s organization has a spotty PR record (which we claim no small part in establishing), and can’t quite match many of our short term benefits. Also their business plan seems fatally flawed – I mean do they really expect their clients to regularly read a few hundred pages of marketing material and attend weekly meetings?

Larry: And what about the new guy?

Lou: Well, he’s a little green but don’t forget the hats and robes. I mean it’s a tall order for their board of directors to pick two winners in a row.

Larry: Not like the Old Man and his son.

Lou: Well Larry, that was a match made in heaven. The son was a real chip off the old block. He took the existing ideas, made some key modifications and then ran with them. For a few years he worked real miracles in the Middle East market, until the local racket boys nailed him.

Larry: Yes that was a bit unethical, even by local standards. I thought the Old Man might have stepped in to do some damage control.

Lou: Yes we all thought that. I gave the son a few outs myself, laid it all at his feet if he would just sign on the dotted line. He turned me down flat. I managed to nab one of his lieutenants, but he burnt out soon after.

Larry: Even so, the son’s legacy was significant by any measure.

Lou: Sure, his long term market penetration figures are impressive. His local team took the stuff global. But where’s the growth now?

Larry: Certainly nothing like your recent figures!

Lou: Well, we had a great last century - fantastic returns, unbelievable growth and great publicity. It’s one for the history books. We can’t expect compound annual growth like that to be sustained, but it was a momentous run.

Larry: Every industry has cycles, but that boom was unprecedented. Looking back, what were the conditions that made it all possible?

Lou: That’s a good question Larry. In the end it seems quite simple. A lot of our competitors believe in this “one size fits all” approach. Buy the products, go to the meetings and you can look forward to a comfortable retirement.

Larry: It’s a familiar claim.

Lou: Yeah, the Old Man’s crazy about it. His marketing people must be bored to death as they have been banging on about being “the way” for years now. Our strength is that we play to your weaknesses, and to those of the people around you. No absolutes, just what you can fit into your busy day. More management by subjectives than management by objectives – just be yourself, but worse.

Larry: Let’s talk mission statements for a moment Lou. Standard fare these days.

Lou: That they are Larry, but we’ve never had to work too hard in that department. Some of our competitors are so desperate to distinguish themselves in the market that they actually write mission statements for us. Never one to turn down a few centuries of scholarship, we quickly adopted the Seven Deadly Sins (SDS) as our vision of the future (and justification of the past). I mean you can’t buy this type of PR.

Larry: Yes, I’ve heard of that 7-step program.

Lou: Some people think of it that way. We often present SDS as a program, but that’s mainly to give over-achievers a sense of advancement. Typically we are quite content with excellence at one or two of the steps, and passing familiarity with the rest.

Larry: We’re reaching the end of our time. What’s the future hold Lou?

Lou: As always Larry the future is uncertain. In the end, we’re all playing in the Old Man’s patch, and he can turn off the water works any time he wants. Shut the market down and call in a reckoning.

Larry: Why doesn’t he?

Lou: Well the Old Man’s a big believer in the market and consumer choice. He loathes interfering in daily business, which is great for the rest of us.

Larry: So business as usual then Lou?

Lou: Looks that way Larry - barring an act of God, as they say.

Unapologetically Unstructured

I have started a new blog on Wordpress called Unapologetically Unstructured, or U2 for short. I will keep No Tricks for more polished posts and use the new blog for more ad hoc processes (i.e. whatever I want).


Wednesday, December 3, 2008

Not One in a Million, but a Million and One

The summer Olympics took place in August this year, hosted by both Beijing and Hong Kong. Every event has its own group of dedicated followers who are prepared to miss sleep and discuss their sports heroes endlessly - whether it is a freestyle swimmer, a javelin thrower, or a marathon runner. As always, one of the premier events was the 100 meter sprint where athletes compete for the title of the fastest man or woman in the world.

Top athletes will cover the 100 meter distance in less than 10 seconds, meaning that they are travelling at an average speed of more than 10 meters per second. Unless you have some experience in sprinting you may not appreciate this feat. Imagine what would happen if you and a few of your work colleagues were to race an Olympic athlete. Well, it is likely that the results would be quite embarrassing. The Olympian would probably finish between two and five seconds—that's 20 to 50 meters—ahead of the non-Olympian competitors. And if you raced later that day, the next day, the next week, and the next month, the result would always be the same. If we think of sprinting as a field of expertise, then it is simple to distinguish the experts from the non-experts. Expertise is easily demonstrated and recognized in many fields. Piano playing, ballet and cooking are examples. But there is one field where the track record of many so-called experts is quite dismal, and that is in the area of decision making.

We need look no further than information technology (IT) for predictions and decisions that have turned out to be spectacularly wrong. In 1943 the chairman and founder of IBM, Thomas Watson, thought that there was a world market for about 5 computers (they were much bigger back then). About thirty years later, Ken Olsen, the then head of DEC computers, could not see why anyone would want a computer in his home. And more recently, Tim Berners-Lee spent several years trying to convince managers at CERN (the European Center for Nuclear Research) that his HTTP protocol was a good thing (he later went on to invent the World Wide Web). Apparently there is a shortage of Olympian IT decision makers.

In a recent book, The Wisdom of Crowds, author James Surowiecki examines a collection of problems that seem better solved by involving many non-experts than by relying on a few experts. His book opens with an anecdote from Francis Galton, a famous British scientist, as he strolled through a country fair in 1906.

Galton came upon a contest where people were asked to guess the weight of an ox on display. Around 800 people tried their luck, paying a sixpence to guess the ox’s weight in return for the chance of winning a prize. After the contest was over, Galton decided to perform an impromptu experiment—take all the submitted guesses and see how close the average of these answers was to the true weight of the ox. Galton thought surely that the average guessed weight must be far from the true weight since so many people of varied backgrounds and abilities (a general crowd) had submitted guesses.

But to Galton’s surprise, the true weight of the ox was 1,198 pounds and the average of the guesses was 1,197 pounds. Thus a crowd of people at a country fair had collectively determined the weight of the ox to within one pound, or less than half a kilogram.

The book goes on to discuss which types of problems can be effectively solved by crowds, and under what conditions the crowd can be expected to produce a good solution. Risk management is about making decisions today that will protect us from the uncertainty of the future. We are not looking for one in a million (the expert) but rather a million and one (the power of many). The recent subprime debacle has highlighted the shortcomings of quantitative models. Risk scenarios and rankings produced through a consensus process involving many people (a crowd) are likely to produce more meaningful results.

Your involvement is both necessary and critical.

Tuesday, December 2, 2008

One More Risk Profile Graph

I recently posted a collection of risk graphs that I found through Google image search. There was one graph that I wanted to include but could not find again until this morning. It was produced as part of a Dutch study on work-related stress in the police force. The study took the approach of identifying the main risk factors in workers' psychological profiles that impact work-related stress. The risk profile below shows a Tornado graph derived from interviewing several thousand workers in 1999 and then again in 2005.

[Figure: Tornado graph of work-related stress risk factors, 1999 versus 2005]

Actions were taken to reduce the most significant risk factors (rated as unfavourable on the right), which included work satisfaction, intention to leave the job, relations at work, feedback and quantitative job demands (overwork?). On the other hand, some already favourable risk factors were improved further.

The graph is neither colourful nor visually striking (easy to fix) yet I like the representation in terms of risk factors. In fact I now believe that identifying and rating the main contributing risk factors is one of the best approaches to risk analysis. I see risk factors as the basic variables in a risk model that need not be instantiated further. One could attempt to quantify and combine the risk factors above, but in my experience this exercise would prove difficult to justify and likely to be of little additional value beyond identification of the risk factors themselves.

I recently posted on risk factors for identifying malware, based on a patent application for risk-based scanning by Kaspersky. Though many people disagreed that the patent would be useful, again it was the risk factor decomposition that interested me. In many instances of IT risk, the mere process of identifying and rating risk factors will bring the most value.


Wednesday, November 19, 2008

On the DNS Birthday Probability

In a recent post I stated that a common reason why the Birthday Paradox (BP) seems puzzling is because people often confuse a 1-to-N matching problem (matching their birthday to someone else in a group) with the required N-to-N matching problem (matching a birthday amongst any pair of people in a group, including themselves). I called this the Johnny Fallacy after famed American TV host Johnny Carson, who mistook these two events. The post also looked at a recent variant of the Johnny Fallacy applied to DNA matching, and here we consider another variant applied to a cache poisoning attack on the Internet Domain Name System (DNS).

DNS is a fundamental Internet service to resolve symbolic names into IP addresses (for example, DNS tells us that www.google.com is currently at IP address 209.85.129.99). While people are quite good at remembering symbolic names they are lousy at remembering IP addresses, and packets can't be routed using names like www.google.com. So DNS is the bridge between what people can remember and what Internet packets require.

When you attach your laptop to the Internet you are assigned a local (default) DNS server. As you enter new symbolic references to services, your laptop contacts your local DNS server for resolution, and your laptop stores the returned result in a local cache. If your local DNS server can't resolve your request then it passes it on to a "better informed" DNS server for resolution. A resolution that involves a chain of DNS servers is called recursive. To your laptop the final resolution still looks like it was returned by your local DNS server. The trust between your laptop and your local DNS server is implicit - there is no formal authentication for resolution requests. What is returned by the local DNS server is accepted by your laptop as a correct resolution.

In 2002 it was announced that BIND, a common implementation of the DNS protocol, permitted DNS server resolutions to be falsified - an attack called cache poisoning. The diagram below gives an overview of the attack. The goal of the attacker is to poison the cache of a DNS server with false resolutions, and then poison the client cache on the next request to the DNS server.
[Figure: overview of the DNS cache poisoning attack]
So how does the attacker get the DNS server to accept false resolutions? The attacker can make a name request to the targeted (victim) DNS server that requires a recursive resolution, and then also reply to the recursive request with false information! This is entirely possible due to the unauthenticated nature of DNS requests and replies, and a feature of an earlier version of BIND that allows simultaneous requests for the same name. The only piece of information that links a request to its reply is a 16-bit identifier, which takes on at most 65,536 distinct values. This identifier is chosen by the victim DNS server and the attacker must guess it to have a false response accepted. The attacker therefore sends n recursive requests for the same domain and n replies with random identifiers. When n is about 700 the birthday paradox shows that, with high probability, one of the name requests generated by the victim DNS server will match one of the responses generated by the attacker.

There are many excellent articles that explain the DNS/BIND birthday attack; consider for example the ones here and here. Both give details on the success of the birthday attack using the following formula for a collision (match) amongst n uniform samples from t possible values

1 - (1 - 1/t)^(n*(n-1)/2)

But this collision formula is not quite correct. When n > t then a collision occurs with probability one, but the collision formula above will never be one since (1-1/t)^k is never zero for any value of k. What's the glitch?

Well n*(n-1)/2 is the number of possible pairings amongst n objects, each of which could yield a match. But instead of computing the match as an n-to-n problem, the collision formula computes the probability of a match as a 1-to-(n*(n-1)/2) problem. That is, the collision formula assumes that each of the potential matches from pairing the n objects is independent. The collision formula therefore provides an underestimate of the correct probability, which is given as

1 - (1 - 1/t)*(1 - 2/t)* ... *(1 - (n-1)/t)

Now when n = t + 1 the term (1-(n-1)/t) = 0 and the formula yields a collision with probability one. The collision formula is convincing because it looks like the correct function when graphed

[Figure: collision probability as a function of the number of DNS queries]

The graph says that there is a 50% likelihood of a match around 300 samples (or DNS queries in this case), which seems fine since we expect a repeat at slightly more than sqrt(65536) = 256 samples, and the correct value is about 280. As we mentioned above the collision formula underestimates the correct probability, but for increasing n the error as compared to the correct value is decreasing. The yellow part of the graph shows the probability of the conventional spoofing attack where the attacker has a likelihood of n/65536 to match. In a conventional spoofing attack the attacker generates false replies to a single DNS request.
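
For anyone who wants to check the numbers, here is a small Python comparison of the approximate collision formula, the exact product formula, and the conventional spoofing probability for t = 65,536 identifiers (the sample sizes are just illustrative):

    # Compare the three probabilities for a 16-bit DNS identifier space.
    t = 65536

    def collision_approx(n, t):
        # 1 - (1 - 1/t)^(n*(n-1)/2): treats the n*(n-1)/2 pairings as independent
        return 1 - (1 - 1 / t) ** (n * (n - 1) / 2)

    def collision_exact(n, t):
        # 1 - (1 - 1/t)(1 - 2/t)...(1 - (n-1)/t)
        p_no_match = 1.0
        for i in range(1, n):
            p_no_match *= 1 - i / t
        return 1 - p_no_match

    def spoof(n, t):
        # conventional spoofing: n guesses against a single fixed identifier
        return n / t

    for n in (100, 300, 700):
        print(n, round(collision_approx(n, t), 3), round(collision_exact(n, t), 3), round(spoof(n, t), 3))

    # At n = 300 both collision formulas are near 0.5 while spoofing is below 0.005;
    # at n = 700 the collision probability is above 0.97.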

Monday, November 17, 2008

Examples of Risk Profile Graphs

I was looking through the current set of risk presentations on slideshare, and found an interesting one called Creating Risk Profile Graphs, adapted from an article on managing project risks by Mike Griffiths over at the Leading Answers blog. The approach taken is to define and rate project risk factors and then plot their development over time. The graph below shows the profile of 10 technology risks over 4 months charted in Excel as "stacked" area graphs. There is another view on the same risk factors here.

[Figure: risk profile graph of 10 technology risks over 4 months]

The risk factors measured include JDBC driver performance, calling Oracle stored procedures via a web service, and legacy system stability. The project manager rates each risk factor monthly for severity and probability to produce a numerical measure of the risk factor. Plotted over time we see 4 persistent project risks (the steel and light blue, tan and pink plots).
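
As a sketch of how such a profile can be produced (the factor names and ratings below are invented for illustration, and multiplying probability by severity is my assumption rather than necessarily the article's formula):

    # Hypothetical monthly (probability, severity) ratings on a 1-5 scale for two risk factors.
    ratings = {
        "JDBC driver performance": [(4, 3), (4, 3), (3, 3), (2, 2)],
        "Legacy system stability": [(3, 4), (3, 4), (3, 3), (3, 3)],
    }

    for factor, monthly in ratings.items():
        scores = [p * s for p, s in monthly]   # simple score: probability x severity
        print(f"{factor}: {scores}")

    # Charting these per-factor scores over the months as stacked areas gives the risk profile graph.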

I was not sure if this was a good way to create a risk profile, so I did a Google image search to see what else was out there. The results were very interesting, and reflect the varying approaches risk managers use to present their results.

This first risk graph really leapt off the search page results, if for no other reason than the prominent pink silhouette of Dilbert's manager in the middle. I must confess that the meaning of the graph was not apparent, but luckily it's part of an article called Risk Management for Dummies (click and scroll down) where it is explained to be a depiction of the peaks and troughs in the risk exposure pipeline. The graph is tracking the cost of risks over time.

[Figure: risk exposure pipeline graph]

The next profile is a common, if not traditional, format for risks, taken from the integrated risk management guide of the Canadian Fisheries and Oceans department. This is also called a risk rating table, risk map or risk matrix. Likelihood is represented on the horizontal axis and impact (severity) on the vertical axis.

[Figure: 5 x 5 likelihood/impact risk rating table]

In the first two graphs these two dimensions were combined into a single risk measure (vertical) and then plotted over time (horizontal). The graph above has no representation of time, and risks are rated against their current status. The red, yellow and green regions in the table correspond to measures of high, medium and low risk respectively, and the exact choice of the boundaries is flexible but normally fixed across a company. The table is 5 x 5 but it is also common to have 4 x 4, 6 x 4, and so on. Below is a 3 x 3 variant of the risk rating table with the high risk region moved to the upper left corner rather than the traditional upper right corner. Note that there is an additional qualitative region and that the regions do not respect the table format. This one is also about fish and oceans, and also from those Canadians, but part of another guide.

[Figure: 3 x 3 risk rating table variant]

The risk rating or profile table below is quite close to the format used in my company, taken from a recent annual report of a healthcare company. Again there are the two dimensions of likelihood and severity. There are 6 risks in the table represented by current rating and target rating after mitigation actions have been performed (the arrows show the improvement of each risk). Note that the mitigation of risk 6 yields no improvement, a reasonably common occurrence in practice.

[Figure: risk profile table with current and target ratings for 6 risks]

Note also that the red, yellow and green have been replaced by different shades of the single colour blue. The colour red has such a strong association with danger that using it in a risk profile is often counterproductive since all attention is drawn to the red region. In the above graph different shades of blue have been used to represent the relative profiles of risks.

The next risk profile is more graphic than graph, and communicates that concentrating your portfolio in shares is high risk, while short term deposits are low risk. It is taken from an investment article in a newspaper. This graph(ic) is not very sophisticated but it does get its main point across.

[Figure: investment risk graphic ranking asset classes from low to high risk]

The final profile is more quantitative. The graph rates probability against result (impact), which can be positive (upside), neutral or negative (downside). It is from a post on the risk of global warming. In this case the upside seems rather limited while the negative impact tail extends much further. So the profile of this risk is dominated by its downside. The fat tail of the downside is hard to capture in a rating table for example.

[Figure: probability versus impact profile for global warming risk]


Friday, November 7, 2008

Weapons of Math Instruction: The Birthday Paradox

In November 2007, a family from Ohio was blessed with the birth of their third child Kayla, who arrived on October 2nd. Kayla will share the same birthday as her two older siblings, who were also born on October 2nd, in 2003 and 2006 respectively. Bill Notz, a statistics professor at Ohio State University, concluded that the odds of a family having three children born on the same date in different years are less than 8 in a million. That said, the Paez family from San Diego, California, also had their third child born on July 30th of this year. Are these events extraordinary, or just plain ordinary?

For some time I have been wanting to start a series of posts on Weapons of Math Instruction for IT Risk. The name comes from a joke I saw a few years ago about the evil Al-gebra group. I will be aiming for less math and more instruction since the text format of the blog is somewhat formula-challenged. The first post will be on the Birthday Paradox, a perennial favourite, recently recounted to me by a senior manager. Also there was a recent furore over the reported odds of DNA matching, which at heart was a misinterpretation of the Birthday Paradox.

Quadratic Football

The Birthday Paradox (BP) poses the following simple question: how many people need be gathered in a room so that the chance of any two people sharing a birthday is at least 50%? We assume that each person is equally likely to be born on each of the 365 days of the year, and we ignore leap years. Given these assumptions, the surprising answer is 23. The BP is not a true paradox (a logical contradiction) but is rather deeply counterintuitive, since 23 seems too small a number to produce a common birthday. Keep this in mind next time you are watching a football (soccer!) game, which has 23 people on the field (the two teams of 11 plus the referee). If you want to see the mathematics behind the result, Google leaves you spoilt for choice with over 53,000 hits. As is often the case with mathematical topics, the Wikipedia article has an excellent explanation, graphics and references.

The BP is an archetypal problem in probability whose solution is less important than the principles it demonstrates. What is the lesson? If we are trying to obtain a match on an attribute with M distinct values, then just under sqrt(2*M) uniform samples are required for better than 50% success. For birthdays, M = 365 and sqrt(2*365) = 27.1, a little bit higher than the correct value of 23. To generalise, if you were looking for a match amongst 1,000,000 attributes then you would need less than 1500 uniform samples - probably lower than you would have estimated/guessed. Where does the sqrt(2*M) term come from?

Well if we have N objects to compare for a potential match, then the number of possible comparisons is N*(N-1)/2. This number can be approximated as (N^2)/2, which grows in proportion to the square of N, and is therefore called a quadratic function. The source of the surprise in the BP is that we are not thinking quadratically, and therefore underestimate the number of potential matches. When N = sqrt(2*M) then (N^2)/2 yields M possible matches. If we assume that the probability of a match is 1/M, then the average number of matches over the M possibilities is M*(1/M) = 1. A closer analysis shows that there is a 50% chance of a match when N is approximately sqrt(2*M*ln2), where sqrt(2*ln2) = 1.18. You can find a straightforward 1-page derivation of a slightly higher bound here.
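
Since this blog is formula-challenged, here is the standard calculation in a few lines of Python (nothing here beyond the textbook birthday computation):

    import math

    def p_shared_birthday(n, days=365):
        # Probability that at least two of n people share a birthday.
        p_distinct = 1.0
        for i in range(n):
            p_distinct *= (days - i) / days
        return 1 - p_distinct

    print(p_shared_birthday(22))   # about 0.476
    print(p_shared_birthday(23))   # about 0.507 - the 50% mark is crossed at 23 people

    print(math.sqrt(2 * 365 * math.log(2)))   # the sqrt(2*M*ln2) estimate: about 22.5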

Johnny's Fallacy with DNA matching

Another reason why the BP is puzzling is that people often mistake the question to be the following: how many people need to be gathered in a room such that the chance of someone sharing a birthday with me is at least 50%? In this case the person is confusing a 1-to-N matching problem (the person to the group) with the required N-to-N matching problem (the group to itself). Famed US talk show host Johnny Carson made this mistake (as related here, p.79), which we will refer to as Johnny's Fallacy. One night there were about 100 people in his studio audience, and he started to search for someone with the same birthday as him. He was disappointed when no one with his birthday turned up. In 100 people the probability that someone would have the same birthday as Johnny is just under 1/4, while the probability that some pair of people share a birthday is 0.9999, essentially a certainty.
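
The two probabilities for Johnny's audience are easy to check (assuming 100 audience members and uniformly distributed birthdays):

    import math

    days, n = 365, 100

    # 1-to-N: probability that someone in the audience shares Johnny's birthday.
    p_one_to_n = 1 - ((days - 1) / days) ** n

    # N-to-N: probability that some pair in the audience shares a birthday.
    p_n_to_n = 1 - math.prod((days - i) / days for i in range(n))

    print(round(p_one_to_n, 4), p_n_to_n)   # about 0.2399 and 0.9999997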

Recently there was an occurrence of Johnny's Fallacy on the topic of identifying suspects using DNA testing. The episode was reported in the Freakonomics blog under the title Are the FBI's Probabilities About DNA Matches Crazy?, reporting on a piece from the Los Angeles Times, How reliable is DNA in identifying suspects?. In 2001, Kathryn Troyer, a crime lab analyst in Arizona, was running some tests on the state's DNA database and found two felons with remarkably similar genetic profiles. Remarkable in the sense that the men matched at nine of the 13 locations on chromosomes (or loci) commonly used to distinguish people, and that the FBI estimated the odds of such a match to be 1 in 113 billion. This is the 1-in-N probability of a DNA match. Since her initial discovery, Troyer has found that among about 65,000 felons there were 122 pairs that matched at nine of 13 loci. The matches here are the result of the N-to-N probabilities. Johnny's Fallacy here would be to conclude that a search of the Arizona DNA database against a given DNA sample would return 122 matches.

Gradually Troyer's findings spread and raised doubts concerning the veracity of DNA testing. The FBI has declared that the Troyer searches (sounds like something from the X Files) are misleading and meaningless, which is not a particularly helpful assessment of Johnny's Fallacy. David Kaye, an expert on science and the law at Arizona State University, remarked that since people's lives are at stake based on DNA evidence, “It [the Troyer matches] has got to be explained.” Steven Levitt in the Freakonomics blog steps up to the plate to provide some numbers. He assumes that the likelihood of a match at a given locus is 7.5%, yielding odds of about 1 in 400 trillion for a 13-loci match, and about 1 in 13 billion for a 9-loci match. So a DNA database of 65,000 people yields over 2 billion potential matches, and about 100 expected matches on at least 9 loci.
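
A rough reconstruction of Levitt's arithmetic (his 7.5% per-locus match probability is an illustrative assumption, and I count matches at exactly 9 of the 13 loci):

    from math import comb

    p_locus = 0.075                      # assumed per-locus match probability
    p_13 = p_locus ** 13                 # full 13-loci match: about 1 in 400 trillion
    p_9 = comb(13, 9) * p_locus ** 9 * (1 - p_locus) ** 4   # exactly 9 of 13 loci match

    pairs = comb(65_000, 2)              # possible pairs in a 65,000-person database: over 2 billion
    print(f"13-loci match: 1 in {1 / p_13:.2g}")
    print(f"expected 9-of-13 matching pairs: {pairs * p_9:.0f}")

    # Gives on the order of 100 expected matching pairs, in the same ballpark as the 122 pairs Troyer found.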

Levitt's numbers are examples to explain the fallacy, or actually a version of the BP where matching of a single personal trait has been substituted with matching a number of loci. If you google "Kathryn Troyer" there are over 1,000 hits, and her results have generated a storm of controversy. Reading a few of the articles returned by Google shows that it will be some time before people fully understand the apparent paradox in this case.

As I mentioned in the introduction to this post, a senior manager recently told me that he used the BP as an example to get people thinking quantitatively in risk assessment workshops. Numbers come with their own logic - calculations can be surprising.


Wednesday, October 22, 2008

Risk Factors for AV Scanning

If you work in a large company then you are probably familiar with the all-too-regular process of your desktop AV software performing a scheduled full disk scan. This may happen a few times a month, and during the scan (which may last a few hours) you typically experience quite sluggish performance. You may also get the sinking feeling that most files are being needlessly scanned (again). Kaspersky, a security software vendor whose products include AV, gets that feeling as well. Treating all files on your desktop as equally likely to contain malware is wasteful, but without any criteria to discern less-likely from more-likely malware candidates, the current regime remains. Kaspersky observes that users are only willing to wait a few seconds at the outside for AV to perform its checks, and typically AV scans are limited to what can be done in these windows of user-defined expectations.

Kaspersky has decided to make AV scanning more efficient not by making it faster but by doing less, as determined by risk-based criteria. And they were recently issued a US patent for this approach. If you have been looking for a simple example to highlight the difference between traditional security and risk-based security then your search is over.

Be warned that the patent is repetitive and not very clearly written, in keeping with the style of such documents. Patents are lawyers' solution to the legal optimisation problem of being both vague (to support the broadest claims) and specific (giving details in an embodiment that demonstrates the invention). Also towards the end, beginning with the paragraph describing FIG. 2, you can read the convoluted legalese required to define a "computer" and a "network".

Trading Risk against Speed

The purpose of the patent is "balancing relatively quick (but less thorough) anti-malware checks with more thorough, but also more time-consuming, anti-malware checks". The basic approach is to employ different scanning strategies for known files that have been previously scanned, and new files whose status is unknown to the AV software. Known files will receive a quick signature-based scan, while unknown files may be subject to more detailed scans.

When unknown executable files are first launched a risk assessment is performed to determine the appropriate level of scanning. The risk assessment evaluates a collection of risk factors to produce a metric which determines whether to rate the file as having a high, medium or low risk of containing malware. This rating in turn determines the thoroughness of the scan to be performed on the file. Options for a more detailed anti-malware scan can include heuristic analysis, emulating file execution (in an isolated environment, stepping through particular instructions) or the statistical analysis of instruction patterns. Beyond local checks, the AV software may also consult online scanning services for additional information. Employing these more sophisticated scanning methods increases the rate of detection at a cost of additional processing time.

Risk Factors

The patent provides some example risk factors for the purpose of satisfying the embodiment requirement of the invention. While the risk factors are intended only as examples, they are interesting nonetheless.

Online Status

The patent mentions that it can take between 15 minutes and 2 hours to update local AV databases when new malware appears. It is suggested to contact an AV server directly to obtain the latest information available. If the executable is sitting on a blacklist then it is likely to be malware (that's why it's on the list), and if it's on a whitelist then the likelihood of malware being present is low.

File Origin

If the origin of the file is a storage medium such as a CD or DVD then it is less likely to contain malware than if the software was distributed over the internet. Email attachments are always suspicious. For downloaded files, the URL source of the download should be considered to determine if the origin is a suspicious web site or P2P network.

File Compression

Malware is now commonly compressed (packed) to thwart signature-based methods of virus detection. Packed files should be treated as being more suspicious than unpacked (plain) files (see here for a general discussion on how malware authors use packing to hide their payloads).

File Location

The current location and/or path to the file can also be considered, since malware often installs itself in particular directories, especially those that are infrequently used. For example, the Temporary Internet Files folder is higher risk than the My Documents folder.

File Size

Relatively small executable files are more suspicious than large ones, since propagating malware does not want to draw attention to itself by transferring large files. Sending a large number of emails with a relatively small attachment is much more practical. Kaspersky states that files sent out in this manner are on the order of 50-100 kilobytes (which, if packed, reduces to something on the order of 20-50 kilobytes).

Installer File

Malware often propagates by sending out small installer files that, when executed, trigger a process of downloading a much larger malware payload from a web server or a file server on the Internet.

Digital Signature

Files that are digitally signed are less likely to contain malware than unsigned files.

Surprisingly the patent also mentions the possibility of alerting the user with a popup that gives them the option to skip or minimize the scan of the unknown file. The patent states that "as yet a further option, the user can manually choose to run some of the anti-virus scans in the background after the new software has been launched, but not necessarily the entire spectrum of available technologies, which obviously increases the risk that a virus can infect the computer" (italics added).

Risk supporting Business

Kaspersky's idea is to vary malware scanning sophistication based on well-defined risk factors. Presumably they have a sufficiently large data set on these risk factors to facilitate a transformation into hard (numeric) decision criteria. The patent does not describe how the various risk factors will be combined to produce a risk decision, but much tuning will be required.
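
Purely as an illustration of the kind of combination rule that would need tuning (the factors, weights and thresholds below are hypothetical and are not taken from the patent):

    # Hypothetical weighted scoring over the risk factors described above.
    WEIGHTS = {
        "on_blacklist": 10, "downloaded_from_p2p": 4, "email_attachment": 3,
        "packed": 3, "in_temp_folder": 2, "small_executable": 1, "unsigned": 1,
    }

    def scan_level(file_attributes):
        # Sum the weights of the risk factors present, then bucket into a scan level.
        score = sum(w for factor, w in WEIGHTS.items() if file_attributes.get(factor))
        if score >= 8:
            return "high"    # heuristics, emulation, online lookups
        if score >= 4:
            return "medium"  # extended signature and heuristic checks
        return "low"         # quick signature scan only

    print(scan_level({"packed": True, "email_attachment": True, "unsigned": True}))   # medium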

Note that the purpose of the patent is not to reduce the risk of being infected by malware but to provide (security) risk support for the decision to improve user response times by reducing the overall effort for malware scanning. And this is clearly the task of the IT Security Risk function - balancing business and security requirements using risk analysis.

But to be clear, we don't get something for nothing, and the likelihood of being infected by malware will actually increase simply because less scanning will be done and the risk factors will not correlate perfectly with the presence of malware. And this is the next important responsibility of the IT Security Risk function - to make business aware of the residual risk in following the Kaspersky approach and getting acceptance for this risk. Hopefully Kaspersky will provide some data to help here.

I posed the question on LinkedIn whether people would support deploying this type of AV, and the resounding answer was no.


Thursday, October 9, 2008

Some Black Swans in IT Security


The recent book, The Black Swan: The Impact of the Highly Improbable, by Nassim Nicholas Taleb (NNT) is a runaway best seller addressing our seemingly inherent inability to predict (let alone plan for) those events that will produce the highest impacts in our lives, professions, countries and the world at large. In particular, he is interested (obsessed, in fact) with single events that scuttle all our expectations about a given topic, whose origins are without precedent, and whose consequences are extreme. The name of the book is derived from the discovery of black swans in Australia by Europeans in the 17th century, which overturned the apparently self-evident statement that "all swans are white". His more modern examples of Black Swan events include WWI, the Internet and 9/11.

In a recent post I stated that the problems with the random number generator on Debian may well turn out to be a Black Swan event, since the impact of all the weak keys created and distributed over an 18 month period could introduce drastic security vulnerabilities in systems relying on Debian-generated keys.

It is difficult at the moment to state if the Debian debacle constitutes a true Black Swan since the consequences are still being played out. In the meantime, I have compiled a list of Black Swan events that we have witnessed and endured. Naturally my list is subjective and some justification is provided below.

  • One Time Pad
  • Computers, Cryptography and Cryptanalysis
  • Public Key Cryptography and RSA
  • The Internet Worm
  • Basic internet protocol insecurity
  • Bruce Schneier
  • PKI
  • Passwords
  • Good Enough Security

The One Time Pad

Shannon proved that the entropy (uncertainty) of the cryptographic key must be at least as large as the entropy of the plaintext to provide unbreakable security. In practice this translated into adding (modulo 2, or XORing) a fully random key stream to the plaintext. This system had been used previously, but what Shannon provided were the entropy arguments to prove that the cipher was unbreakable. His tools furnished the absent proof. In most situations it is not practical to use one time pads, since fully random keys must be pre-distributed. Nonetheless, with a single stroke cryptographers were cast out of Eden. Like Adam and Eve, and their descendants (Alice and Bob?), cryptographers would have to work for a living, labouring over systems that they know could be broken either directly or over time. Perfection had been demonstrated but in practice unbreakable security would be unobtainable.
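
The mechanics are easy to sketch in a few lines (the impractical part, as noted above, is that the key must be truly random, as long as the message, pre-shared, and never reused):

    import os

    def otp(data: bytes, key: bytes) -> bytes:
        # XOR (addition modulo 2) of the data with the key stream; the same call decrypts.
        assert len(key) >= len(data)
        return bytes(d ^ k for d, k in zip(data, key))

    message = b"ATTACK AT DAWN"
    key = os.urandom(len(message))
    ciphertext = otp(message, key)
    assert otp(ciphertext, key) == message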

Computers, Cryptography and Cryptanalysis

The introduction of computers at the end of WWII enabled some codes to be broken, or at least provided critical information that led to their compromise. In the next 20 years (the 50's and 60's) there was a revolution away from classical ciphers to programmable ciphers that were designed on new principles that radically departed from classical methods. While the principles of classical ciphers were not wholly dismissed they were certainly diminished. Ciphers no longer needed to be pen-and-paper based, table driven or reliant on electromechanical devices. Such systems are clever, and at times ingenious, but are limited within a certain framework of design. With computers cryptography was able to shed its puzzle persona. A key development was to increase the block length of ciphers. Plaintext was traditionally encrypted one character at a time (8-bit block size in modern parlance) and this greatly limits the type of dependencies that can be created between the plaintext, ciphertext and the key. Block sizes extended to 6 to 8 characters (48 to 64 bits). Cryptography was not just faster but better - you could have an Enigma with 20 rotors if you wished. The Black Swan was that great swaths of the information intelligence landscape went dark or became clear.

Public Key Cryptography and RSA

Public key cryptography (PKC) as a theory was published in 1976, and the RSA algorithm, an uncanny match to the stated requirements of PKC, was invented the next year. The impact here is difficult to describe. It brought cryptography into the public research community, and attracted the interest of many brilliant researchers who would otherwise have deployed their talents in different disciplines. The development of PKC demonstrated that civilian cryptography could make significant contributions even without extensive training in classical methods or military systems. There is good evidence that military organizations were the first to invent PKC, and perhaps rightly they thought that this technology could be kept a secret for some years to come. The Black Swan was to underestimate civilian capability and to pass up the opportunity to patent PKC technology. PKC also tacked cryptography further away from its traditional roots, and drove it deeper into pure mathematics and computing. PKC designers are not looking to create good ciphers from scratch but rather to take an existing hard problem and harness its difficulty as a basis for security. The point here is to exploit rather than create, to reduce rather than claim. The skill of the designer is in making connections.
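
As a toy illustration of harnessing a hard problem (and only an illustration - the tiny primes and lack of padding below are assumptions made purely for readability), textbook RSA rests its security on the difficulty of factoring the modulus:

    # Textbook RSA with toy parameters; real keys are thousands of bits
    # and are always used with padding schemes such as OAEP.
    p, q = 61, 53
    n = p * q                 # public modulus; factoring n breaks the scheme
    phi = (p - 1) * (q - 1)
    e = 17                    # public exponent, coprime to phi
    d = pow(e, -1, phi)       # private exponent (modular inverse, Python 3.8+)

    m = 42                    # message encoded as an integer less than n
    c = pow(m, e, n)          # encrypt with the public key (e, n)
    assert pow(c, d, n) == m  # decrypt with the private key (d, n)

The designer has created nothing new here; the presumed difficulty of factoring n is simply borrowed as the basis for security.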

The Internet Worm

The Internet Worm, also known as the Morris Worm, of November 1988 was a true Black Swan for IT Security. It was a defining digital 9/11 moment. According to its author, the purpose of the worm was to gauge the size of the Internet by conducting a door-to-door survey of connected hosts. The worm itself and its propagation mechanism were intended to have minimal impact on CPU and network resources. However several flawed assumptions ensured that the exact opposite would occur - within a few hours the CPU performance of infected hosts was reduced to a level that made them unusable. It is estimated that around 6,000 (mostly UNIX) hosts were infected by the worm, or approximately 10% of the then connected hosts. The cost of recovery was put at USD 10m - 100m by the US General Accounting Office.

The worm did not destroy files, intercept private mail, reveal passwords, corrupt databases or plant Trojan horses. But it did exhaust CPU on infected machines, producing a denial-of-service attack. While the worm's author may claim that it was an experimental type of network diagnostic, he went to great lengths to disguise the code (similar to malware today), make the worm difficult to stop, and relied on security weaknesses to propagate the worm rapidly and extensively (again similar to malware today). In particular the worm used password cracking, passwords shared across multiple accounts, buffer overflows and weaknesses in well-known services to propagate itself. While these attacks were known, the impact of exploiting such vulnerabilities on an Internet scale was at best only vaguely comprehended.

The worm drove home, to the people in and around the cleanup effort, just how vulnerable the Internet was to rogue programs. Eugene Spafford commented that the damage could have been much more devastating if the worm had been programmed better and acted maliciously. As it was, an allegedly unintentional act brought 10% of the Internet to its knees - what if someone was really trying? And now the code was being distributed to show them how.

Basic Internet Protocol Insecurity

The Internet was designed for reliable connectivity in the presence of failures (even malicious attacks), based on a decentralised architecture which can adapt to local changes (outages, loss of servers) and recover service. It is also modular in its layered design, so services can be exchanged and replaced (SSL could easily be slotted in, as was HTTP; Skype required a few more changes; VPN at Layer 2, and so on). This plug-and-play property with reliability came at the cost of security. The basic protocols, even higher level ones such as email (SMTP), operate on the "honour system" - exchanging and processing network information with little or no authentication. Basic protocols are designed to support reliable connectivity, which was the fundamental objective in designing a global packet-switched network. There is little confidence in basic services, as evidenced by their secure counterparts: HTTP/S, Email/SMIME, DNS/DNSSEC, L2/VPN, TCP/SSL. It is usually easier to introduce a new protocol than to patch existing ones. The Black Swan is that the Internet is now a critical communication infrastructure for which security was not a fundamental design requirement.

Naive trust models and assumptions have led to endemic problems for many years, with the latest manifestation being the well-publicised DNS debacle. In this case well-known inherent security vulnerabilities had been ignored for several years, attacks became more feasible as bandwidth increased, and the integrity of DNS itself is now critical. But it took a lot of security theatre on the global stage to get IT people to the point of patching. The US Government has announced that DNSSEC, a secure version of DNS available for over 10 years now, will be deployed in government offices by 2010. Most recently, several researchers claim to have found inherent weaknesses in the basic TCP/IP protocols that lead to denial-of-service attacks.

Bruce Schneier

Bruce Schneier is the best known security authority in the world. His blog has hundreds of thousands of readers, his posts can yield hundreds of comments, and his books are bestsellers. His opinions hold sway over both technical people and executives, as well as all the layers in between. He is the Oprah of security - a public figure and a leading opinion maker. The Black Swan aspect of Mr. Schneier is that he has achieved this status through excellent communication (and yes cunning publicity as well) rather than technical prowess. Of course he has technical prowess but that is rather common in security and cryptography. What is uncommon, or even uncanny, is the ability to explain security in terms that can be understood by non-specialists whether it be programmers, professionals, managers or executives. Bruce has literally written himself into the modern history books of security. He has shown, once again, that communication is king - the security explanation is mightier than the security deed.

Public Key Infrastructure (PKI)

Since its inception, public key cryptography (PKC) had been expected to unravel the greatest Gordian knot in security: how to establish secure communication between previously unacquainted parties. With PKC we did not require secret key exchange (SKE) to bootstrap secure communication - authenticated key exchange (AKE) was sufficient. The assumption was that setting up an infrastructure for AKE was obviously easier than establishing a similar infrastructure for SKE, if the latter could be done at all. The rise of the Internet in the mid 90's created an obsession with e-commerce, and PKC was presented with a test bed of potentially fantastic scale on which to demonstrate its power and utility.

So PKC became PKI in deployment, heralded as the great catalyst and foundational technology of the commercial Internet. But the fallacy was to believe that technology could be used to solve the intangible social issue of trust, or even provide a functioning digital equivalent. The PK part of PKI was excellent - good algorithms, formats and protocols. The problem was with the I - the notion of melding the PK components into a legally-binding trust infrastructure. PKI was successfully discredited and repelled by the legal profession, and hence by business as well. The main conundrum was: who would carry liability for business transacted using certificates? The Black Swan here was that brilliant security mathematics and technology could not convince the market. People were eventually happy with PayPal, a graft of credit card payments onto the Internet, essentially a mapping of an existing trust infrastructure onto a digital medium.

The discrediting of PKI began a long cycle of decline between business and IT Security, which is itself part of a longer decline in the general relationship between business and IT. SOA is the great peace offering from IT, intended to reconcile the two camps. But SOA is even more complicated than PKI, and PKI is contained in the security standards of SOA.

Passwords

Passwords were introduced in the 60's to manage users on timesharing systems. The roots of passwords are largely administrative rather than security-driven. Passwords have proliferated extensively with client-server computing, and now with its generalization, web-based computing. Passwords are badly chosen, written down, prominently displayed, sent unencrypted, phished out of users, infrequently updated, and commonly forgotten. The traditional business case for using passwords has been their ease and low cost of deployment from the viewpoint of system administrators, while the true cost and inconvenience are passed downstream to users (and help desk operators). Provisioning and managing user accounts and passwords is a major cost and compliance concern for most large companies.

Passwords are an example of a cumulative Black Swan - 40 years of compounded reliance on an unscalable 1-factor authentication technology without a plausible migration strategy. Employees commonly require 5 or 6 passwords to perform even the most basic IT business functions, and perhaps double that number if they require access to some special applications. This number itself could easily double once again in their private lives, where another gaggle of passwords is required to support personal online transactions and services. In the short term we will require more and longer (stronger) passwords, which, as people, we are stupendously ill-equipped to cope with.

Good Enough Security

This Black Swan is the most pernicious and potentially crippling for IT Security as an industry and a profession. Security is no longer an end in itself, if it ever was. Just as physics is called the queen of the sciences, security people believed their profession was venerated within the IT sphere. Of course IT Security is important in the sense that if it is not done then great catastrophes will be summoned forth. But the issue is not about ignoring security but actually about how much attention to give it. This is the fundamental asymmetry of security - it is easy to do badly (and be measured) but it is difficult to do it well in a demonstrable fashion. Security has been likened to insurance - having none is a bad idea, but once you decide to get insurance, how much is appropriate and how can you minimize your costs?

We have called this Black Swan "Good Enough Security", but we might equally have called it risk-based security, the transition from risk to assurance, the diminishing returns of security, or knowing your security posture. Managers and other stakeholders want to know that their IT assets are adequately protected, and it is up to the IT Security person to define that level of adequacy and provide assurance that it is reached and maintained. Most security people are woefully ill-equipped to define and deliver such assurance in convincing business or managerial language.

The burden of risk management has fallen heavily on IT Security. Commonly, once the torch of enterprise risk management is kindled in the higher corporate echelons, it is passed down the ranks and settles with IT Security people, who are left to assume responsibility for the management of IT Risk. And these people are ill-equipped to do so. Ironically, if they persist in their task they are likely to ascertain that security is just one piece of the IT Risk landscape, and perhaps a minor piece at that. For example, availability is much more important to business than security, which is a mere non-functional requirement.

Sunday, August 24, 2008

Entropy Bounds for Traffic Confirmation

After quite a delay (since the initial idea) I have finally completed a good draft of some research on traffic confirmation attacks. Traffic analysis is a collection of techniques for inferring communication relationships without having access to the content that is being communicated. Traffic confirmation is a special subcase that attempts to infer communication relationships amongst users whose communications are mediated through an anonymity system.

We assume that the underlying anonymity system is a threshold MIX, with batch size B, supporting N participants, each of whom may act as both a sender and a receiver. Typical MIX system parameters are N = 20,000 participants (all potentially sending and receiving) and a batch size of B = 50 (the MIX accepts 50 messages, randomises their order, then delivers them). The MIX makes it difficult to correlate the 50 senders to the 50 receivers.
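
As a quick illustration of the model (a toy sketch of my own, not code from the paper), a threshold MIX simply buffers submissions until the batch is full, then shuffles and flushes them, so an observer sees who fed the batch and who received from it, but not the pairing:

    import random

    class ThresholdMix:
        def __init__(self, batch_size: int):
            self.batch_size = batch_size
            self.batch = []   # buffered (sender, receiver, payload) triples

        def submit(self, sender, receiver, payload):
            self.batch.append((sender, receiver, payload))
            if len(self.batch) == self.batch_size:
                return self.flush()
            return None       # keep buffering until the batch is full

        def flush(self):
            random.shuffle(self.batch)   # break any link to submission order
            delivered = [(receiver, payload) for _, receiver, payload in self.batch]
            self.batch = []
            return delivered             # sender identities are not delivered

The traffic confirmation attack works around this protection by correlating observations across many batches, as described next.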

The traffic confirmation attack targets a specific user Alice and attempts to infer her M communication partners (who she is sending to) by observing her communication patterns through the MIX. Alice for example might be sending messages to M = 20 people on a regular basis.

The attacker observes when Alice sends a message in a batch, and can also observe who receives a message from that batch. This will include at least one of Alice's communication partners, but which one? The uncertainty that the attacker has with respect to the M communication partners of Alice is called the entropy of her partners.

It is well-known that, given the properties of the MIX and the power of the adversary (what she can see), the entropy of Alice's partners decreases as more of the message batches that Alice participates in are observed (higher entropy means more uncertainty). Eventually the entropy of her partners will go to zero and their identities will be known to the attacker. But how many such observations need to be made before the anonymity of Alice's communication partners is effectively zero?

My paper shows that after the attacker has observed approximately

M * log N

messages, subsequent messages drive the entropy to zero at a geometric rate. That is, the MIX system protects Alice's partners to a point but then essentially collapses from that point forward. So if the attacker is prepared to wait for (and observe) those first M * log N messages from Alice, then the uncertainty of her partners reduces rapidly as Alice sends additional messages. One reason that the attacker has to wait for that first group of messages to be sent by Alice is that she won't be able to uniquely determine Alice's communication partners until Alice has communicated with all of them at least once.

By way of an example, consider a system with N = 20,000 participants, a batch size of B = 50, and Alice having M = 20 communication partners. In this case, after observing 292 initial messages, the attacker can identify Alice's partners with 99% certainty after observing an additional 110 messages (that's 402 messages in total). So well before she has sent 400 messages, Alice should either change her partners or change her MIX system (probably the latter).
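
For a back-of-the-envelope check of the threshold (the logarithm base is not stated above, so base 2 is an assumption here, and the paper's exact constant may differ from this rough figure):

    from math import log2

    N = 20_000        # participants
    M = 20            # Alice's communication partners

    threshold = M * log2(N)
    print(f"M * log2(N) = {threshold:.0f} messages")   # roughly 286

This is in the same ballpark as the 292 initial messages quoted in the worked example above.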

You can find the draft paper here. Comments welcome!