I think there’s something fundamentally wrong when your biggest fear at the end of a risk assessment isn’t so much that you’ve got a “critical*” finding, but that you don’t know how to tell management. It’s an interesting phenomenon and I believe most information security people run into it all the time. What compounds it and makes me completely gob-smacked is when the discussion turns to ways that you can downgrade the finding.
And don’t try and pretend you haven’t been privy to these discussions. We’ve all seen it or heard of it happening. “What if we only account for a small population of users? What if we nudge up the value of our controls? What if…” I mean what they’re basically asking is “What if we just change some of these values and downplay what we as a group agree is the risk?”
The good news is, the probability of the risk having been exaggerated in the first place is often quite high *phew* – so perhaps this “base-lining” is useful?
This is one of the reasons why I’m a fan of FAIR, it makes it easy to:
- Reduce the probability of exaggerated risk statements in the first place – or at least make it more difficult for them to make it through to the end; and
- Eliminate the possibility of arriving at a “critical*” finding without putting statements around the frequency of loss events and the probable loss – as opposed to the worst-case loss we bang on about all the time, which leads us to the sky-is-falling situation.
*Nb: “Critical” adj. Whatever-the-hell you want it to mean.
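To make the “probable loss versus worst-case loss” point concrete, here’s a minimal Monte Carlo sketch of the idea behind a FAIR-style estimate: annualised loss as loss event frequency times loss magnitude, each expressed as a range rather than a single scary number. The triangular distributions, parameter names, and percentiles here are my illustrative assumptions, not FAIR canon.

```python
import random

def simulate_annual_loss(lef_min, lef_likely, lef_max,
                         mag_min, mag_likely, mag_max,
                         trials=10_000, seed=42):
    """Sketch of a FAIR-style annualised loss estimate.

    Loss event frequency (lef) and loss magnitude (mag) are each drawn
    from a triangular distribution (min, most likely, max) -- an
    illustrative choice, not mandated by FAIR itself.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        # random.triangular takes (low, high, mode)
        lef = rng.triangular(lef_min, lef_max, lef_likely)
        magnitude = rng.triangular(mag_min, mag_max, mag_likely)
        losses.append(lef * magnitude)
    losses.sort()
    return {
        "median": losses[trials // 2],        # the probable-loss story
        "p90": losses[int(trials * 0.9)],     # a bad-but-plausible year
        "max": losses[-1],                    # the sky-is-falling number
    }

# Hypothetical scenario: 0.1-2 events/year, $10k-$250k per event
result = simulate_annual_loss(0.1, 0.5, 2.0, 10_000, 50_000, 250_000)
```

The gap between `median` and `max` is exactly why reporting only the worst case produces inflated “critical” findings – and why putting ranges on the inputs makes it harder for an exaggerated number to survive to the end of the assessment.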
Image thanks to: http://www.flickr.com/photos/pinksherbet/3484925590/
Tags: business, management, Risk