Risk compensation

Context: Thursday at 4:05 pm, I’ll be keynoting in Hall D at the RSA Security Conference: “Mind over Matter: Managing Risk with Psychology instead of Brute Force”. The keynote covers two core topics; this post is about one of them, and the other is Understanding and Increasing Value.

One of the biggest challenges facing the information security profession is how we work with our business partners to better manage risk. We often make this harder on ourselves by asserting that “we are the custodians of risk” or “we are the conscience of the business”. That posture isn’t productive, and it rarely starts conversations with the business off on the right foot.

In fact, business partners often don’t want to talk to security at all, and may get reluctantly dragged in. When they ask if what they are doing is “safe enough”, they are dragged through the morass of the ISO27002 framework, asked questions about esoteric problems that haven’t affected anyone in a decade, and subjected to lectures on the value of various entropy sources, and on whether N+1 redundancy is sufficient should the APT attack during a natural disaster. And at the end of that, they just want to leave, either with a “yes”, which makes them happy, or a “no”, which they’re going to ignore and hope they never get caught.

A critical part of thinking about risk is the concept of risk compensation, also known as the Peltzman effect. People have a set point for the amount of risk they will tolerate (NB: the risk they are aware of!): anything that pushes their perceived risk above that set point will cause them to decrease risk elsewhere, and anything that pushes it below will let them take on more risk.
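To make that feedback loop concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration: the set point value, the names, and especially the idea that perceived risk can be collapsed into a single number, which is a deliberate cartoon of real behavior.

```python
# Toy sketch of risk compensation (the Peltzman effect): people adjust
# behavior to keep their *perceived* risk near a tolerance set point.
# The value 0.5 is arbitrary; it only matters relative to the inputs.

SET_POINT = 0.5  # level of perceived risk a person will tolerate


def compensate(perceived_risk: float) -> str:
    """Describe how behavior shifts when perceived risk moves off the set point."""
    if perceived_risk > SET_POINT:
        # Perceived risk rose above the set point: shed risk somewhere else.
        return "decrease risk elsewhere"
    if perceived_risk < SET_POINT:
        # Perceived risk fell (say, a new safety control): spend the surplus.
        return "take on more risk"
    return "no change in behavior"


# A new control lowers perceived risk; the classic Peltzman prediction follows.
print(compensate(0.3))  # -> take on more risk
print(compensate(0.7))  # -> decrease risk elsewhere
```

The point of the cartoon is the symmetry: a control that lowers perceived risk doesn’t produce a net safety gain of the same size, because some of it gets spent on new risk-taking.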

At steady state, they believe that the risks arising in the business are being handled by the security machine, and that overall risk isn’t changing. True or not, this is the perception that companies have. If they believe there are fewer risks coming into the business, they’ll want to defund the machine. And if they feel the machine isn’t effective at countering risk (and nothing bad is happening), they’ll conclude there must be fewer risks coming into the system … and defund the machine.

The overreaction many of us in the security community have had is the Chicken Little approach: we make risks sound scarier than they are, or bring up risks that can’t be fixed. Unfortunately, humans have two ways of coping with unmitigated risk. One is to convince ourselves that we’ve always known about this risk, and that it’s okay. Sadly, that’s the healthy response. The worse response is to tell ourselves that the risk isn’t real and, more importantly, that the person who told us about it isn’t credible, so we should ignore the other risks they’ve told us about. Which, conveniently, leaves us feeling relatively risk-free, so let’s go do something more risky!

Our goal is to get people to believe in something approximating the risks they’ve been ignoring. We do that not by letting them outsource risk analysis to the “experts”, but by using those experts to teach them to do the risk analysis themselves. This won’t always improve things right off the bat, but over time it will lead people to change their behaviors.

We hope.

