Saturday, February 27, 2010

Month Summary, Feb 2010

A quick summary of this month’s posts

Security

SSL

Risk

Visualization

Other

A look back, Jan – Feb 2009

As the No Tricks blog steadily builds up a body of posts (about 150 now), I can look back a year or so and even surprise myself at what I was posting about. Here are most of the topics I was considering early in 2009.

Opinion and Information

Twitter

Scribd

Visualizations

image

Friday, February 26, 2010

A Short Security Manifesto

From the Falcon's View

Stop talking about traditional "risk management" as some sort of magical rubric or panacea.
Start talking about threat modeling and legal defensibility.

Stop using ad hoc approaches to security architecture and solutions.
Start adopting a holistic, systemic ISMS-like approach.

Stop delegating ownership of security to IT or other non-business leadership.
Start requiring execs and the board to directly own and be responsible for security.

Stop relying on shortcuts to survive audits.
Start demonstrating actual due diligence by adopting a reasonable standard of care.

Stop looking for ROI to "justify" security.
Start thinking of security as a business enabler that facilitates better decisions and helps protect the business during both the good and the bad times.

Thursday, February 25, 2010

USB devices back on duty for the DoD

The US DoD has tentatively rescinded the universal ban on USB devices it issued over a year ago, reintroducing them under controlled conditions and for limited use, as reported by Stars and Stripes. The DoD introduced the draconian ban to prevent malicious software from infecting defence networks. However, it seems that the combat need to transfer data quickly and conveniently has trumped any blanket security veto. The new devices can only be connected to military networks, and used for data transfer only when network resources are unavailable or overloaded. In short, as a method of last resort.

According to Defence News, the drives are designed so that they can be tracked by system administrators, are password-protected, and store information in encrypted form. Additional features include on-board anti-virus software and security rules that prevent copying or forwarding of certain information from the drive or saving unapproved information on the drive.

The move may seem somewhat untimely, since suppliers of secure USB sticks are still reeling from a vulnerability that permits password protection to be bypassed. Wired reported that both hackers and troops will be rejoicing at the announcement.

NodeXL: Network Overview, Discovery and Exploration in Excel

Microsoft Research has released a new Excel 2007 add-in for rendering network visualizations

NodeXL is a powerful and easy-to-use interactive network visualisation and analysis tool that leverages the widely available MS Excel application as the platform for representing generic graph data, performing advanced network analysis and visual exploration of networks. The tool supports multiple social network data providers that import graph data (nodes and edge lists) into the Excel spreadsheet.

The graph visualizations seem stunning for Excel. An example is shown below from Visual Business Intelligence, where the graph depicts shared Board memberships of major US companies.

image

More information on using NodeXL and the external people Microsoft collaborated with to create the tool can be found here at CodePlex.

A dissection of Koobface

There is a very informative analysis of the social network trojan Koobface at abuse.ch. The analysis details the four stages of victim infection, which involve malicious short links, registering false Blogger accounts, and hijacked web sites serving out malicious javascript. At the time of writing (early December last year), there were just over 34,000 malicious blogposts and short URLs, directing victims to over 500 hijacked websites. Ultimately, code is downloaded onto the victim machine to make it part of the Koobface command & control infrastructure.

A CAPTCHA breaking infrastructure is used to register new accounts with Blogger, as shown below.

image 

According to the post, the infrastructure is very sophisticated:

  • The time between grabbing a CAPTCHA and breaking it is less than three minutes (most of the time just a few seconds!)
  • Due to the way Koobface’s infrastructure works, it’s possible to break hundreds of CAPTCHAs per minute!
  • In this way it’s possible to register thousands of fake bit.ly/Blogspot accounts per day

The author wonders if the security industry is placing too much faith in CAPTCHAs.

Tuesday, February 23, 2010

Major Risks in the IT Industry

Researchers from the Actuarial and Insurance department at the University of Wisconsin conducted a study on risk terms in 2007. The study compared notions and definitions of various risk terms across several sectors and industries, including Information Technology (IT). The IT respondents listed the following major risks for their industry:

image

The researchers noted that the IT sector had the largest number of risks. By way of comparison, the major risks for the energy industry looked like this:

image

Notice that IT Failure risk is on the list but very much towards the bottom.

Metrics for Managing Project Risk

I was in a bookstore over the weekend and saw a copy of Identifying and Managing Project Risk by Tom Kendrick from HP. The book was published last year, but I know of his work on project risk metrics from an earlier whitepaper that really showed how to get a handle on measuring and managing project risks.

Here are some examples of predictive risk metrics which serve as a distant early warning system for project difficulties.

Project size/scale risk

  • Project duration (elapsed calendar time)
  • Total effort (sum of all activity effort estimates)
  • Total cost (budget at completion)
  • Size-based deliverable analysis (component counts, number of major deliverables, lines of non-commented code, blocks on system diagrams)
  • Staff size (full-time equivalent and/or total individuals)
  • Number of planned activities
  • Total length (sum of all activity durations if executed sequentially)
  • Logical length (maximum number of activities on a single network path)
  • Logical width (maximum number of parallel paths)

Schedule risk

  • Activity duration estimates compared with worst-case duration estimates
  • Number of critical (or near-critical) paths in project network
  • Logical project complexity (the ratio of activity dependencies to activities)
  • Maximum number of predecessors for any milestone
  • Total number of external predecessor dependencies
  • Project independence (ratio of internal dependencies to all dependencies)
  • Total float (sum of total project activity float)
  • Project density (ratio of total length to total length plus total float)
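
Several of the schedule metrics above can be computed mechanically once the activity network is in machine-readable form. Below is a minimal Python sketch; the five-activity network and its dependency structure are invented purely for illustration:

```python
# Hypothetical activity network: each activity maps to its predecessors.
deps = {
    "A": [],
    "B": ["A"],
    "C": ["A"],
    "D": ["B", "C"],
    "E": ["D"],
}

n_activities = len(deps)
n_dependencies = sum(len(p) for p in deps.values())

# Logical project complexity: ratio of activity dependencies to activities.
complexity = n_dependencies / n_activities

# Logical length: maximum number of activities on a single network path.
def path_length(activity):
    preds = deps[activity]
    return 1 + max((path_length(p) for p in preds), default=0)

logical_length = max(path_length(a) for a in deps)

print(complexity)      # 1.0 for this network
print(logical_length)  # 4 (the path A -> B -> D -> E)
```

The same dependency dictionary also yields logical width and predecessor counts with a few more lines, so a project plan exported from scheduling software can be scored against most of the list above.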

General risk

  • Number of identified risks
  • Quantitative (and qualitative) risk assessments (severity analysis)
  • Adjusted total effort (project appraisal: comparing baseline plan with completed similar projects, adjusting for significant differences)
  • Survey-based risk assessment (summarized risk data collected from project staff, using selected assessment questions)
  • Aggregated overall schedule risk (or aggregated worst-case duration estimates)
  • Aggregated resource risk (or aggregated worst-case cost estimates)

And the last example, the Dilbert Correlation Factor: collect 30 recent Dilbert cartoons and circulate them to staff. Have people mark each one that reminds them of your organization. If the team average is:

  • Under 10: Low organization risk.
  • 10-20: Time for some process improvement.
  • Over 20: Hire a cartoonist and make your fortune…

Sunday, February 21, 2010

Simplified implementation of the Microsoft SDL

Microsoft has announced a new 17-page whitepaper that presents a simplified version of their Security Development Lifecycle (SDL). From the announcement:

One of the common misconceptions about the Microsoft SDL is that you have to be an organization the size of Microsoft in order to be able to implement it. Another misconception is that the SDL is only appropriate for Microsoft languages and Microsoft platforms, and that you need to use some other methodology if you’re writing code with Ruby for OS X. The Simplified SDL white paper helps address these misconceptions by explaining how the SDL can be implemented with limited resources and applied to any platform.

image

Why use SSL?

Here is a short introductory post on the advantages of using SSL, and a nice FAQ as well. Cost and performance are listed as the main disadvantages. However, you may also want to check out How to Render SSL Useless from Ivan Ristic and the additional comments here at Pat’s Daily Grind.

image

When to use Pie Charts

image

from Emergent Chaos.

Saturday, February 20, 2010

Lew's Law: IT expenses converge to the cost of electricity

Interesting:

Last week I heard Sun Microsystems Cloud CTO Lew Tucker predict that IT expenses would increasingly track to the cost of electricity. “Lew’s Law” (as described to a room of thought leaders) is a brilliant theorem that weaves a microcosm of IT trends and recent reports into a single and powerful concept.

Lew’s Law is a powerful idea whose time has come, with profound and far reaching impacts, including the automation of the network.

Full story here from Gregory Ness with some additional remarks here.

image

Friday, February 19, 2010

An Anonymity computation using R

Back in August 2008 I posted on some research I did on traffic confirmation attacks. Traffic analysis is a collection of techniques for inferring communication relationships without having access to the content that is being communicated. Traffic confirmation is a special sub-case that attempts to infer communication relationships amongst users whose communications are mediated through an anonymity system.

The graph below shows two curves representing the observed frequency of recipients from a MIX anonymity system. The attacker is targeting a particular user Alice, and the red curve represents the minimal number of times her communication partners are observed, while the blue curve represents the maximal number of observations for all other recipients. The graph is created from an anonymity system with N = 20,000 users, using a MIX of batch size 50 and assuming Alice has 20 communication partners.

image

The graph shows that after the attacker has observed about 250 messages from Alice, each of her recipients has been observed more often than any of the other recipients. Therefore an attacker can conclude that the 20 most frequently observed recipients are the recipients of Alice, and the anonymity of her partners has been broken. The details are explained in the previous post.

What surprised me was how easily the data for this graph could be programmed in the R programming language. The native vector operations of R make this all very simple, and the code is below.

b <- 50       # MIX batch size
N <- 20000    # total number of users in the anonymity system
m <- 20       # Alice's communication partners are recipients 1..m

N.plot <- 0   # running maxima of non-partner observation counts
m.plot <- 0   # running minima of partner observation counts

# First round: a batch of b background messages plus one message from Alice
s <- c(sample(N, b, replace = TRUE), sample(m, 1))
for (t in 1:500)
{
    # Each round adds another background batch and another message from Alice
    s <- c(s, sample(N, b, replace = TRUE), sample(m, 1))
    counts <- tabulate(s, nbins = N)   # observation count for every recipient
    m.min <- min(counts[1:m])          # least-observed partner of Alice
    N.max <- max(counts[-(1:m)])       # most-observed non-partner
    N.plot <- c(N.plot, N.max)
    m.plot <- c(m.plot, m.min)
}
plot(m.plot, type = "l", col = "red")
lines(N.plot, col = "blue")

FSA Security Controls for protecting Customer Data

In April 2008 the Financial Services Authority published their recommendations for protecting customer data. From the Executive Summary:

This report describes how financial services firms in the UK are addressing the risk that their customer data may be lost or stolen and then used to commit fraud or other financial crime. It sets out the findings of our recent review of industry practice and standards in managing the risk of data loss or theft by employees and third-party suppliers.

At just over 100 pages, the report details controls and best practices in the following areas to protect customer data:

  1. Governance
  2. Training and awareness
  3. Access rights
  4. Passwords and user accounts
  5. Monitoring access to customer data
  6. Data back-up
  7. Access to the internet and email
  8. Key-logging devices
  9. Laptops
  10. Portable media including USB devices and CDs
  11. Physical security
  12. Disposal of customer data
  13. Managing third-party suppliers
  14. Internal Audit and Compliance monitoring

Thursday, February 18, 2010

How to write an Information Security Policy

A nice 9-page guide from the UK Department of Trade and Industry.

image

Six Myths in Assessing Risk

A great 1-page summary with graphics from business advisory firm Corporate Executive Board:

  1. The biggest risk my company faces is financial risk
  2. My company is safe because we review risks and prioritize mitigation efforts annually
  3. We are good at risk-sensing because we have invested in enterprise risk management (ERM) systems
  4. We are well protected because we have a strong quantitative model to measure risk
  5. Our risk assessment is comprehensive because we account for likelihood and impact
  6. We can sense and protect business better because we manage risks at the business unit (BU) level

Friday, February 12, 2010

AON 2010 Political Risk Map

Here on Scribd.

image

Another source of USB Randomness

Back in December I posted about a new USB device with dedicated hardware for producing a continuous stream of high entropy bits based on sampling P-N junctions. Another lower tech randomness source with a USB interface is described at this site, and is shown below

image

The device has a small hourglass, and as the sand falls from the upper to the lower chamber, the pattern of grains is sampled by a light-sensitive detector at a rate of 100 times per second. The site claims that each sample yields about 9 bits of entropy based on statistical tests. The device detects when all the sand has passed to the lower chamber and then rotates the hourglass 180 degrees, so the sampling process can continue. The samples can be accessed through a USB interface.

The device is a prototype and not yet for sale, but costs about $100 to produce. The advantage of the hourglass method over more sophisticated and higher-yielding devices is simplicity and transparency. Perhaps so, and you can read more about the design here, and the entropy of the output here. Finally:

While the hourglass is not precise, accurate, or repeatable as a timekeeper, and has been almost completely supplanted by better devices, it is a good source of random entropy. It is still manufactured in quantity at low cost, and it is clean, compact, durable, and uses little energy. The source of the random entropy can be easily understood, and observed to be functioning correctly without instruments. An off-the-shelf photointerrupter can be employed to electronically observe the random entropy, and an open-source, standardized microcontroller can be used to control the process and interface it with a host computer.
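
Taking the figures above at face value, 9 bits of entropy per sample at 100 samples per second gives a raw rate of about 900 bits per second, so gathering enough raw entropy to seed a 256-bit key takes well under a second. A quick back-of-the-envelope check in Python:

```python
bits_per_sample = 9        # claimed entropy per photodetector sample
samples_per_second = 100   # claimed sampling rate

entropy_rate = bits_per_sample * samples_per_second   # bits per second
seconds_for_256_bits = 256 / entropy_rate

print(entropy_rate)          # 900
print(seconds_for_256_bits)  # about 0.28 seconds
```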

Monday, February 8, 2010

How to Render SSL Useless

This is the title of a recent talk from Ivan Ristic of SSL Labs on common mistakes in the deployment of SSL. The talk expands upon his SSL Threat Model that I posted about a few months ago. The main deployment mistakes Ristic sees for SSL are:
  • Self-signed certificates
  • Own CA certificates
  • Mixing SSL and plain-text
  • Not using secure cookies
  • Using incomplete certificates
  • Not using EV certificates
  • Not using SSL
  • Mixed page content
  • Different sites on 80 and 443
  • Using SSL for “important” bits
  • Inconsistent DNS configuration
This is a great presentation which really gets to the heart of why SSL security has lately been the focus of much attention.

(via SSL Shopper)

Saturday, February 6, 2010

Get your Faraday Bag

Get your Faraday bag here. This is not a gimmick site, as these people are aiming at UK law enforcement, who need to shield mobile phones after seizure. You can view testing results here.

image

Single DES and Double Yolks

It was reported in the Daily Mail this week that a woman bought a carton of half a dozen eggs, which she later found to be all double-yolked, as shown below

image

Since the chances of getting a single double-yolk egg are around 1-in-1000, it appears that we have witnessed an extremely rare event, in fact a practical impossibility. If we assume that the likelihood of each egg being double-yolked is independent, then the picture above is conclusive evidence of a 1-in-10^{18} event manifesting. This is a bit less likely than guessing a DES key at random, at around 1-in-10^{17}. The Daily Mail article goes on to give reasons why this event is not as unlikely as it seems, because at face value the event is so unlikely that we would never expect to witness it over the lifetime of all eggs ever produced.

Apparently the eggs are all likely to come from hens in the same flock and of the same age which reduces the likelihood to “only” 1-in-729 million. And the occurrence becomes even more likely (or less unlikely – take your pick) when we account for eggs of a similar weight being sorted into the same boxes.

A bit more detail is given over at the wonderful Understanding Uncertainty blog. If the 1-in-10^{18} odds were correct, then given the number of eggs consumed in Britain each year, we would be looking at a wait of around 500 years to see the photo above, so the independence assumption is not plausible. Factoring in that the eggs likely come from the same flock (which may have a propensity for double yolks), that eggs of similar weight are packed together, and that some supermarkets can detect and deliberately sell double-yolked eggs, the event seems less impressive. But impressive nonetheless!
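
The arithmetic is easy to verify. Under the independence assumption, six 1-in-1000 eggs give odds of 1-in-10^{18}, rarer than guessing a DES key, while the 1-in-729 million same-flock figure corresponds to a per-egg chance of 1-in-30. A quick check in Python:

```python
import math

p_double = 1 / 1000                # chance a random egg is double-yolked
p_box = p_double ** 6              # six independent double-yolk eggs
assert math.isclose(p_box, 1e-18)  # the 1-in-10^18 figure

p_des_guess = 1 / 2 ** 56          # guessing a DES key: about 1-in-7.2x10^16
print(p_box < p_des_guess)         # True: the egg box is the rarer event

# Same-flock odds of 1-in-729 million imply roughly 1-in-30 per egg:
print(30 ** 6)                     # 729000000
```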

Friday, February 5, 2010

The USB Password Vulnerability

In early January Heise Security reported that a German security firm had discovered a vulnerability in the password authentication process of several USB sticks that are rated as being highly secure. The discovery has been widely reported, and led to various responses from USB vendors Sandisk, Verbatim and Kingston, including patching and recalling their devices from the field. The full list of affected sticks has been reported by Simon Hunt, for example. Steve Ragan of the TechHerald has commented that the whole incident is “quickly becoming the first FUD-based news cycle for 2010”.

What was the vulnerability?

Well, when a user plugs a password-protected USB stick into their desktop, the stick launches a popup application that prompts the user for their password. You would expect the user-supplied password to be transferred to the stick for verification, with the stick granting access if the password is correct.

What the German security company SySS discovered is that the password verification is actually performed in the popup application itself, and an acknowledgement code is sent back to the stick indicating whether the candidate password is correct or not. By sniffing this traffic, SySS determined that the acknowledgement code granting access is static, and in particular does not depend on the password entered by the user. Essentially, the desktop popup verifies the user-supplied password and then returns "yes" or "no" to the stick.

SySS captured the acknowledgement code, and then wrote a proof-of-concept exploit which injects the acknowledgement code into the memory space of the desktop popup, so that the value returned to the stick is always the positive acknowledgement code. Thus, regardless of what password the user enters, the hack ensures that the stick will always grant access.
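
The design flaw is easy to model. The sketch below is a hypothetical reconstruction in Python, not SySS's actual exploit: the function names and the acknowledgement value are invented, but the logic shows why a static code verified on the host defeats the whole scheme:

```python
STATIC_ACK = b"UNLOCK-OK"   # invented value; the real code was vendor-specific

def host_popup(entered_password, stored_password):
    # Flawed design: the password check happens on the host, not the stick.
    if entered_password == stored_password:
        return STATIC_ACK   # the same code every time, for every stick
    return b"DENY"

def stick_grants_access(ack):
    # The stick merely compares the acknowledgement against the static code.
    return ack == STATIC_ACK

# Legitimate use: the correct password unlocks the stick.
assert stick_grants_access(host_popup("secret", "secret"))

# The attack: bypass the popup and replay the captured code directly.
assert stick_grants_access(STATIC_ACK)   # access without knowing any password
```

Had the comparison happened on the stick itself, sniffing the host-to-stick traffic would have revealed nothing replayable.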

What was the impact?

Given the injection code, the password protection can be defeated on sticks susceptible to the attack, which turns out to be a reasonably large class of commercial sticks marketed as highly secure. All things being equal, the risk of a data breach from lost sticks is therefore increased, since their password protection can be bypassed with the right software. And sticks are being lost at an increasing rate: CSO Online recently reported on a UK survey conducted by Credant which revealed that 4,500 memory sticks had been forgotten in people's pockets as they took their clothes to be washed at the local dry cleaners.

The impact is not limited to a single vendor's products. The vulnerability exists across several families of secure USB devices from the major USB vendors because they all rely on a common USB chipset whose security properties have not been properly vetted.

FIPS Certification

The incident is all the more telling in that the vulnerability impacts devices that use AES 256-bit encryption and are rated as secure by the FIPS 140-2 certification process. Users are paying quite a premium over vanilla sticks for the advertised additional assurance that their data are protected by a certified device using strong cryptography, and for some US government agencies such purchases are mandatory. The relative ease with which the password protection was bypassed calls into question the value of the FIPS 140-2 process.

In Computerworld, NIST is quoted as saying "From our initial analysis, it appears that the software authorizing decryption, rather than the cryptographic module certified by NIST, is the source of this vulnerability", and also "Nevertheless, we are actively investigating whether any changes in the NIST certification process should be made in light of this issue".

To be fair, FIPS 140-2 focuses on verification of cryptographic modules and not the supporting software; however, the incident highlights the narrowness of that approach, and the common expectation that certification covers more than just the cryptography. Chris Merrit at Lumension has a good post on the fine print of the FIPS 140-2 certification process, and he concludes:

So, bottom line: while this discovery seems to suggest an area to which NIST might want to bring some clarity and rigor, it does not mean that FIPS 140-2 is fatally flawed. It’s up to you, as the buyer, to understand what (potentially critical) functions occur inside & outside the cryptographic boundary, and how that might impact the security of the device in your case. And since what you’re looking for is what’s not certified, it might be useful to have an expert review the vendor security policy (posted with the certification on the NIST website) to help you understand the nuances.

AES-256 and Passwords

As I explained in Are AES 256-bit keys too Large?, it is very unrealistic to equate password security with the security of AES-256. To achieve the equivalent of 256-bit security, users would need to select 40-character passwords at random, and we are a long way from that. In fact, so far away that we will never get there. So USB devices that protect their data using AES-256 encryption sound impressive, but when access to those devices and the underlying keys is controlled by a password, the setup is a lot less secure than it sounds. The SySS vulnerability now shows that the whole AES-256 encryption process can be bypassed in the presence of weak password handling.
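
The 40-character figure can be checked with a little arithmetic. Assuming passwords drawn uniformly at random from the 95 printable ASCII characters (my assumption; other alphabets give slightly different counts), each character contributes log2(95), about 6.57 bits:

```python
import math

alphabet_size = 95                        # printable ASCII characters (assumed)
bits_per_char = math.log2(alphabet_size)  # about 6.57 bits per random character

chars_needed = math.ceil(256 / bits_per_char)
print(chars_needed)   # 39, in line with the roughly 40 characters cited above
```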

Conclusion?

Is there a useful conclusion from this incident? There is a lot of embarrassment all round, and we have little confidence that a similar issue will not arise in the future. Security is just done poorly in general, and blatant examples are uncovered whenever someone takes the time to look under the hood. Some articles and posts have focussed on verifying passwords in software as the culprit, which is partly true, but the real issue is not software as such but insecure programming of software: the password verification should never have been done on the desktop, and a static acknowledgement code should never have been used to unlock the USB device.

A trusted path should be established between the desktop keyboard and the USB device, and for smart cards this needs to be done with a secure reader. But this is at odds with the plug-and-play semantics of USB sticks where the portability of the ubiquitous USB connector is the selling point.

Wednesday, February 3, 2010

Plugging the Authentication Gap in SSL

The IETF has announced a draft document that specifies changes to the SSL protocol which close the "authentication gap" exploited by the renegotiation attack announced last November. I posted a plain English explanation of the attack here, and you can find a better graphical explanation here by Thierry Zoller (check for updates at his blog). The IETF and several vendors, including Google, Microsoft, and PhoneFactor, have been working on plugging this authentication gap since October. Until now, the best defence against the attack was to disable renegotiation in SSL.

The attack was initially dismissed as a quirk in the protocol - unexpected yet harmless - but it has ultimately resulted in several relatively small yet fundamental changes to SSL that introduce cryptographic state from one run of the handshake protocol to the next. The threat profile of the attack was raised when proof-of-concept code was written to demonstrate how the attack could be applied to steal Twitter passwords. PhoneFactor has described the attack as the most "severe" against the core SSL protocol to date.

The vulnerability turns on the observation that SSL does not distinguish between the initial protocol handshake of a client connecting to a server and any subsequent in-band handshakes for the renegotiation of cryptographic parameters. An attacker can establish an SSL session with a server and then later hijack an initial client SSL connection request to the same server, splicing the client request into the existing SSL session that the attacker has with the server. This is possible since the server interprets the client connection request as a renegotiation by the attacker. Any pending commands that the attacker has established with the server are then injected into the next client SSL-protected web session and executed in the context of the client (using the client's cookies, for example).

The solution proposed by the new IETF draft is for both client and server to cache the validation data that is currently computed and exchanged as the last step in the existing handshake protocol. A new protocol field has been introduced which indicates if the handshake is an initial or subsequent run (renegotiation), and in the latter case, the cached validation data is included in the renegotiation protocol. This prevents two independent sessions from being spliced together since the required validation data will either be absent or mismatched between the client and server.
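
The essence of the fix can be sketched as a toy model. The Python below is a simplification, not the actual TLS wire format: each side caches the verify_data of the previous handshake and rejects any renegotiation that fails to present it, so a spliced-in session is detected:

```python
import hashlib

def verify_data(transcript):
    # Toy stand-in for the Finished-message value at the end of a handshake
    return hashlib.sha256(transcript).digest()[:12]

class Peer:
    def __init__(self):
        self.cached = None   # verify_data from the previous handshake, if any

    def handshake(self, transcript, claimed_previous=None):
        # Initial handshake: no previous value may be claimed.
        # Renegotiation: the claimed value must match our cached verify_data.
        if claimed_previous != self.cached:
            raise ConnectionError("renegotiation binding mismatch")
        self.cached = verify_data(transcript)
        return self.cached

server = Peer()
v1 = server.handshake(b"attacker's session")   # attacker opens a session

# A victim's connection spliced in as a "renegotiation" carries no
# verify_data from the attacker's session, so the server rejects it:
try:
    server.handshake(b"victim's session")
    spliced = True
except ConnectionError:
    spliced = False
print(spliced)   # False: the splice is detected

# An honest renegotiation within the same session presents the cached value:
server.handshake(b"renegotiated parameters", v1)
```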

It now remains to make any final tweaks to the protocol and then deploy the revised protocol in existing SSL and TLS libraries, which is expected to be a slow and tedious undertaking. One of the authors of the new draft, Eric Rescorla, was a co-author of a recent study examining the rate at which weak keys, produced by a 2008 flaw in the random number generator of OpenSSL on Debian, were being replaced. The study tracked a collection of approximately 50,000 public web servers over a period of six months. Initially around 1.5% of the servers (751 to be exact) were using Debian-flawed keys in their certificates, and 30% of the Debian-flawed certificates had still not been re-issued after almost 180 days. Let’s hope for a better rate of patching in this case.

Monday, February 1, 2010

Fast computations on FPGA Clusters

Pico Computing recently announced a single hardware accelerated server that can process 280 billion DES decryptions per second, permitting the full 56-bit key space to be searched in 3 days. This is apparently a new record, and the announcement is a prelude to a presentation at the upcoming Black Hat DC 2010 conference in Virginia.

The effort was led by David Hulton, Pico Computing Staff Engineer, who was previously involved in an abandoned effort to create rainbow tables to break GSM encryption. The latest effort focuses on understanding how FPGAs can be used to perform computations that are power-efficient and scalable. In a companion Pico white paper, the impact of scalable FPGA clusters on password breaking is considered: a dedicated FPGA cluster with 77 nodes can recover a WPA key in 11 seconds, an improvement of almost a factor of a thousand over the time required by a standard dual-core processor.
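
The quoted numbers are easy to sanity-check: at 280 billion DES keys per second, exhausting the 2^56 key space takes about three days, and on average half that to find a particular key. A quick check in Python:

```python
keyspace = 2 ** 56    # number of possible DES keys
rate = 280e9          # keys tested per second (quoted figure)

seconds = keyspace / rate
days = seconds / 86400

print(round(days, 1))       # 3.0 days for a full search
print(round(days / 2, 1))   # 1.5 days on average
```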

image