By: Derek Brink
Sharpen your number two pencils, everyone, and use the following estimates to build a simple risk model:
- Average number of incidents: 12.5 incidents per month (each incident affects 1 user)
- Average loss of productivity: 3.0 hours per incident
- Average fully loaded cost per user: $72 per hour
Based on this information, what can your risk model tell me about the security risk?
My guess is that your initial answer is something along the lines of “the average business impact is $2,700 per month,” which you obtained by the following calculation:
12.5 incidents/month * 3.0 hours/incident * $72/hour = $2,700/month
But in fact, this tells us almost nothing about the risk—remember that risk is defined by both the likelihood of the incident and the magnitude of the resulting business impact. If we aren’t talking about probabilities and magnitudes, we aren’t talking about risks! (We can’t even say that 50% of the time the business impact will be greater than $2,700, and 50% of the time it will be less—that would be the median, not the mean or average. Even if we could, how useful would that really be to the decision maker?)
Let’s stay with this simplistic example, and say that your subject matter experts actually provided you with the following estimates:
- Number of incidents: between 11 and 14 per month
- Loss of productivity: between 1 and 5 hours per incident
- Fully loaded cost per user: between $24 and $120 per hour
This is much more realistic. As we have discussed in “What Are Security Professionals Afraid Of?,” the values we have to work with are generally not certain. If we knew with certainty what was going to happen and how big an impact it would have, it wouldn’t be a risk!
Based on these estimates, what would your risk model look like now?
For many of us, our first instinct would be to use the average for each of the three ranges to compute an “expected value”, which is of course exactly the result that we got before.
- Expected case = 12.5 * 3.0 * $72 = $2,700/month
- Low case = 11 * 1.0 * $24 = $264/month
- High case = 14 * 5.0 * $120 = $8,400/month
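The three point estimates above are just the products of the range endpoints and midpoints, as a quick sketch makes clear:

```python
# Point estimates built from the expert-provided ranges:
# low combines the bottom of every range, high combines the top,
# and expected uses the midpoint of each range.
low      = 11   * 1.0 * 24     # fewest incidents, shortest, cheapest
expected = 12.5 * 3.0 * 72     # midpoint of every range
high     = 14   * 5.0 * 120    # most incidents, longest, costliest

print(f"Low: ${low:,.0f}  Expected: ${expected:,.0f}  High: ${high:,.0f} per month")
```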
It would be tempting to say that the business impact could be “as low as $264/month or as high as $8,400/month, with an expected value of $2,700/month.” But again, this does not tell us about risk. What is the probability of the low case, or the high case? What is the likelihood that the business impact will be more than $3,000 per month, which happens to be our decision-maker’s appetite for risk?
Further, we would be ignoring the fact that the three variables in our simple risk model actually vary independently of one another—it isn’t logical to assume that months with fewer incidents will also always have shorter incidents and lower hourly costs, or the converse.
Unfortunately, this is the point at which so many security professionals throw up their hands at the difficulty of measuring security risks and either fall back into the trap of techie-talk or gravitate towards qualitative 5×5 “risk maps.”
The solution to this problem is to apply a proven, widely used approach to risk modeling called Monte Carlo simulation. In a nutshell, we can carry out the computations for many (say, a thousand, or ten thousand) scenarios, each of which uses a random value from our estimated ranges. The results of these computations are likewise not a single, static number; the output is also a range and distribution, from which we can readily describe both probabilities and magnitudes—exactly what we are looking for!
Staying with our same simplistic example, we can use those estimates provided by our subject matter experts plus the selection of a logical distribution for each range. Here are my choices:
- Number of incidents: Between 11 and 14 incidents per month—I will use a uniform distribution, meaning that any value between 11 and 14 is equally likely.
- Loss of productivity: Between 1 and 5 hours per incident—I will use a normal distribution (the familiar bell-shaped curve), meaning that the values are most likely to be around the midpoint of the range.
- Fully loaded cost per user: Between $24 and $120 per hour—I will use a triangular distribution, to reflect the fact that the majority of users are at the lower end of the pay scale, while still accommodating the fact that incidents will sometimes happen to the most highly paid individuals.
The following graphic provides a visual representation of the three approaches.
Based on a Monte Carlo simulation with one thousand iterations—performed by using standard functions available in an Excel spreadsheet—we can advise our business decision makers with the following risk-based statements:
- There is a 90% chance that the business impact will be between $500 and $4,500 per month.
- There is an 80% likelihood that the business impact will be greater than $1,000 per month.
- The mean (average) business impact is about $2,100 per month—note how this is significantly lower than the $2,700 figure computed earlier; the difference is in the use of the asymmetrical triangular distribution for one of the variables.
- There is a 20% likelihood that the business impact will be greater than $3,000 per month.
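A simulation along these lines can be sketched in a few lines of Python with NumPy (the article performs it in Excel; this is just an illustrative alternative). The text specifies only the ranges, so the standard deviation chosen for the normal distribution and the mode chosen for the triangular distribution are my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # number of simulated months

# Incidents per month: uniform — any value in [11, 14] equally likely
incidents = rng.uniform(11, 14, n)

# Hours lost per incident: normal, centered on the midpoint (3.0);
# a standard deviation of 1.0 is an assumption, and draws are clipped
# to the expert-estimated range of [1, 5]
hours = np.clip(rng.normal(3.0, 1.0, n), 1, 5)

# Cost per hour: triangular, skewed toward the low end of the pay scale;
# placing the mode at the $24 minimum is an assumption
cost = rng.triangular(24, 24, 120, n)

impact = incidents * hours * cost

p5, p95 = np.percentile(impact, [5, 95])
print(f"Mean monthly impact: ${impact.mean():,.0f}")
print(f"90% interval: ${p5:,.0f} to ${p95:,.0f}")
print(f"P(impact > $3,000/month): {(impact > 3000).mean():.0%}")
```

With these assumptions, the simulated mean lands near the $2,100 figure above, noticeably below the $2,700 point estimate, because the triangular cost distribution pulls most draws toward the low end.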
What to do, of course, depends entirely on each organization’s appetite for risk. But as security professionals, we will have done our jobs, in a way that’s actually useful to the business decision maker.
About the Author:
Derek E. Brink, CISSP, is a Vice President and Research Fellow covering topics in IT Security and IT GRC for Aberdeen Group, a Harte-Hanks Company. He is also adjunct faculty with Brandeis University, Graduate Professional Studies, teaching courses in its Information Security Program. For more blog posts by Derek, please see http://blogs.aberdeen.com/category/it-security/ and http://aberdeen.com/_aberdeen/it-security/ITSA/practice.aspx