Monday, June 2, 2008

Goodbye Yellow Brick Road


In 2003 the Computer Research Association sponsored a workshop on the Grand Research Challenges in Information Security & Assurance. The by-invitation-only event brought together 50 scientists, educators, business people, futurists, and others with some vision and understanding of the big challenges (and accompanying advances) that should shape the research agenda in this field over the next few decades. The final report listed four main challenges worthy of sustained resourcing and effort:
  1. Eliminate epidemic-style attacks within the next decade
  2. Develop tools and principles that allow construction of secure large-scale systems
  3. Give end-users security controls they can understand and privacy they can control for the dynamic, pervasive computing environments of the future
  4. Develop quantitative information-systems risk management to be at least as good as quantitative financial risk management within the next decade

In the fourth challenge, security (risk) professionals are being asked to follow the yellow brick road to the emerald city of quantitative financial risk management (QFRM) and the wizards therein. A recent article from a May issue of the Economist examines the state of QFRM in light of the subprime debacle, highlighting the $30 billion write-down at UBS (Used to Be Smart) as the (sub)prime example of flawed risk management. The outlook in the emerald city is professionally gloomy.

One of the main quantitative culprits identified by the Economist is Value-at-Risk, usually written as the contraction VaR (presumably to distinguish it from VAR and Var, long-standing contractions for the variance). VaR is the centrepiece in many QFRM toolboxes, being honoured with inclusion in the Basel II guidelines for calculating reserve capital. But the subprime debacle has highlighted one of the well-known weaknesses of VaR - that it is poor at predicting the low-probability/high-impact events that attend catastrophe.

VaR is essentially a simple concept supported by arbitrarily complex modelling (see this paper from the upcoming WEIS 2008 conference for VaR applied to information security). Let A be an asset on which we may realise a loss over a defined time period T. Given a significance level a, the VaR of A is the loss L that will not be exceeded over the time period T with probability 1 - a. So if A is a stock of interest over the next T = 100 days, and we fix a at 0.01, then the VaR of A is the loss L that will not be exceeded over the next 100 days with probability 0.99 (or 99% of the time). The interpretation here is that the losses from A will be at most L on 99 out of 100 days of trading.
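To make the definition concrete, here is a rough sketch (not from the original post, and using hypothetical simulated data) of historical-simulation VaR: sort the observed losses and read off the (1 - a) quantile. The function name and the data are illustrative assumptions only.

```python
import random

def historical_var(losses, alpha=0.01):
    """Empirical Value-at-Risk: the loss level that the observed
    sample exceeds with probability at most alpha."""
    ordered = sorted(losses)  # ascending: worst losses at the end
    # Index of the (1 - alpha) empirical quantile of the losses.
    k = min(int((1 - alpha) * len(ordered)), len(ordered) - 1)
    return ordered[k]

# Hypothetical daily losses (positive values = money lost).
random.seed(42)
losses = [random.gauss(0, 1) for _ in range(1000)]
var_99 = historical_var(losses, alpha=0.01)
print(f"99% one-day VaR: {var_99:.2f}")
```

With normally distributed losses the 99% figure sits near the 2.33-standard-deviation mark; in practice the "arbitrarily complex modelling" lives in how the loss distribution itself is estimated.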

But what about that other one day of trading not covered? The magnitude of the loss on that rogue day is outside the scope of the VaR model, and need not be proportional to the loss bound predicted by VaR. In fact, VaR is designed to answer questions of the form "Within reason, how bad can things get?", which seems very sensible until we acknowledge that the subprime debacle was not "within reason". As the Economist observes, after a few years of bootstrapping and estimation, VaR models settle down on historical data to predicting a stable future from a stable past, leading to the conceit that risks can be quantified and regulated.
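The rogue-day point can be made numerically. A minimal sketch (my own illustration, with deliberately heavy-tailed hypothetical data) compares VaR to the average loss beyond the VaR threshold - often called expected shortfall - to show how far the tail can outrun the VaR bound:

```python
import random

def var_and_shortfall(losses, alpha=0.01):
    """Return (VaR, expected shortfall): the (1 - alpha)-quantile
    loss, and the mean of the losses at or beyond that quantile."""
    ordered = sorted(losses)
    k = min(int((1 - alpha) * len(ordered)), len(ordered) - 1)
    tail = ordered[k:]  # the worst alpha-fraction of outcomes
    return ordered[k], sum(tail) / len(tail)

# Hypothetical losses: mostly small, plus a handful of catastrophes.
random.seed(1)
losses = [random.expovariate(1.0) for _ in range(990)]
losses += [50, 80, 120, 200, 300, 400, 500, 650, 800, 1000]
var, es = var_and_shortfall(losses, alpha=0.01)
print(f"99% VaR: {var:.1f}, average loss beyond VaR: {es:.1f}")
```

Here the 99% VaR is 50, yet the average loss on the uncovered days is 410 - the model's headline number says nothing about the size of the losses it excludes, which is exactly the weakness the subprime events exposed.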

One Pyrrhic benefit from the subprime debacle is that VaR models can now be recalibrated with catastrophic data sets, and should therefore produce better predictions. The Economist notes a new interest in non-statistical models based on enumerating risk scenarios that describe what could go wrong, and then thinking through the consequences of each scenario crystallising. Scenario generation is typically a people-intensive activity, facilitated through brainstorming and workshops - not the forte of quants. Nonetheless scenario-driven risk analysis (SDRA) has the ability to uncover root causes and dependencies that may be absent or insufficiently weighted in quantitative models. On the other hand, SDRA may fail to generate an exhaustive set of material scenarios, and, more mundanely, poorly facilitated sessions can lead to futile bickering over rating the significance of scenarios.

Regardless of the model used, the Economist notes that risk management is becoming less tractable due to complexities and dependencies. Partitioning risks into those related to credit, markets and liquidity is no longer sufficient since the risk inherent in some of the subprime financial products did not respect these organisational boundaries. In short, we have entanglement. Further, obtaining aggregate risk positions is becoming more difficult since some departments still maintain their risk exposure on desktop Excel models, and over-the-counter dealings that are not formally traded also contribute to uncertain aggregate positions. For shareholders, and even regulators, it is very difficult to unwind and assess the risk exposure of a company.

What conclusions might we draw for IT (security) risk management? Rich Bejtlich has commented on the Economist article, directly comparing the difficulties of risk management in financial environments to those in IT environments. The good news is that we in IT risk should no longer feel compelled to wed our futures to the QFRM yellow brick road, and perhaps we are better served by SDRA. We can also stop beating ourselves up over the point that the weakness of IT risk is the absence of data - the real weakness is poor modelling, and the decisions based on the output of such models. The Computer Research Association grand challenge of 2003 may be just too grand, and in fact unnecessary.

6 comments:

Unknown said...

"Develop quantitative information-systems risk management to be at least as good as quantitative financial risk management within the next decade."

It won't take that long.

:)

Unknown said...

Yes we may get there and find it abondoned :-)

Unknown said...

abandoned!!!

Unknown said...

Interesting blog post!
The results overlap nicely with the research recommendations that I worked on for the European Union.

http://www.securitytaskforce.org/dmdocuments/SecurIST_AB_Recommendations%20Issue_V2_0.pdf

Cheers
Tobias

