
Risk compensation

Context: Thursday at 4:05 pm, I’ll be keynoting in Hall D at the RSA Security Conference: “Mind over Matter: Managing Risk with Psychology instead of Brute Force”. Slides are here. There are two core topics covered by the keynote; the other is Understanding and Increasing Value.

One of the biggest challenges facing the information security profession is how we work with our business partners to better manage risk. We often make this harder on ourselves, asserting that “we are the custodians of risk” or “we are the conscience of the business”. This isn’t very productive or helpful, and it generally doesn’t start conversations off well with the business.

In fact, business partners often don’t want to talk to security, and may get reluctantly dragged in. When they ask if what they are doing is “safe enough”, they are dragged through the morass of the ISO27002 framework, asked questions about esoteric problems that haven’t affected anyone in a decade, and subjected to lectures on the value of various entropy sources, and on whether N+1 redundancy is sufficient in the case of the APT attacking during a natural disaster. At the end of that, they just want to leave, either with a “yes”, which makes them happy, or a “no”, which they’re going to ignore and hope they never get caught.

A critical part of thinking about risk is the concept of risk compensation, also known as the Peltzman effect. People have a set point of risk that they will tolerate (NB: that they are aware of!), and anything that increases this risk will cause them to decrease risk elsewhere; anything that decreases this risk will let them take more risk.

At steady state, they believe that the risks that arise in the business are being handled by the security machine, and that overall risk isn’t changing. True or not, this is the perception that companies have. If they believe that there are fewer risks coming into the business, then they’ll want to defund the machine. If they feel the machine isn’t effective at countering risk (and nothing bad is happening), they’ll believe there are fewer risks coming into the system … and defund the machine.

The overreaction to this that many of us in the security community have had is the Chicken Little approach - we make risks sound scarier than they are; or bring up risks that can’t be fixed. Unfortunately, humans have two ways of coping with unmitigated risk. One is to convince ourselves that we’ve always known about this risk, and that’s okay. Sadly, that’s the healthy response. The worse response is to tell ourselves that the risk isn’t real; more importantly, the person who told us about the risk isn’t credible, and we should ignore other risks they’ve told us about. Which, conveniently, leaves us feeling relatively risk-free, so let’s go do something more risky!

Our goal is to make people believe in something approximating the risks they’ve been ignoring. We do that by not letting them outsource risk analysis to the “experts”, but by using those experts to teach them to do the risk analysis themselves. This won’t always improve things right off the bat, but will, over time, cause people to change their behaviors.

We hope.

Understanding and increasing value

Context: Thursday at 4:05 pm, I’ll be keynoting in Hall D at the RSA Security Conference: “Mind over Matter: Managing Risk with Psychology instead of Brute Force”. Slides are here. There are two core topics covered by the keynote; the other is Risk Compensation.

How do we understand how much value we provide to a business? One way is to first understand how much value a business provides - a business spends money (resources), and (hopefully) makes money. The money it makes is its value; the ratio of value to resources is its capability: how well it applies resources. We hope that our capability is greater than 1 - that is, that we create surplus through our activities.

Organizations within a business can apply this same measure, even if the numbers are a bit fuzzier. Since we can’t always measure value, sometimes we measure capability instead as a proxy. Capability is simply our skill at using our resources, times our effort in applying them, times our effectiveness at changing our environment.
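As a rough sketch - the numbers, function names, and the 0-to-1 scoring below are illustrative assumptions, not anything from the keynote - the two ways of estimating capability might look like this:

```python
# Illustrative only: toy numbers for the two capability estimates described above.

def capability_from_value(value_created, resources_spent):
    """Direct measure: value produced per unit of resources consumed."""
    return value_created / resources_spent

def capability_as_proxy(skill, effort, effectiveness):
    """Proxy measure: skill x effort x effectiveness, each scored here from 0 to 1."""
    return skill * effort * effectiveness

# A business that spends 8M in resources and creates 10M in value has a
# capability of 1.25: it generates surplus.
print(capability_from_value(10_000_000, 8_000_000))  # 1.25

# A team that is highly skilled (0.9) but distracted (0.5), and whose work
# mostly sits on a shelf (0.3), has a proxy capability of about 0.14.
print(capability_as_proxy(0.9, 0.5, 0.3))  # 0.135
```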

Skill is simple to understand. There’s an apocryphal story about a maintenance engineer for a company who, after retiring, was called back in because one of the ancient mechanical systems had failed, and no amount of effort could restore it. He came in, made a chalk mark on the side of the system, and told them to hit that spot with a hammer. He presented them with a bill for $30,000. When asked for itemization, he noted:
Chalk: $5
Knowing where to make the mark: $29,995
That’s skill: the ease with which you can accomplish a task.

Effort is about how we approach a task. Do we think it will fail, so we give it insufficient attention? Have we assigned it to someone overburdened, so they are distracted and fail to make progress? Do we give it to someone with true passion, and let it be a priority for them?

Effectiveness is often about the environment we are in: Did a project complete, or did we decide not to finish after investing 80% of the time? Did we have buy in from the business, or will our project collect dust? Did we end up shouting from rooftops, and no one listened? If, as a result of investing resources, there is no change to the business, then the resources were, generally, ineffective.

That last part is hard - we think of ourselves as preventing bad things, so how do we know if we were effective? The answer is simple - we should have enabled our organizations to take more risks! It sounds perverse - but all organizations take risks. We should enable them to understand the risks they are taking, and mitigate some so that they can take others - hopefully ones not related to security, of course.

Measuring capabilities may sound hard, but it’s like solving three-dimensional differential equations in a non-ideal environment: really hard on paper, yet almost anyone can catch a ball. Within an organization, teams are judged on their capabilities, and resources are redirected over time from the less capable to the more capable.

Leveling up Security Awareness

Context: Thursday morning at RSAC, Bob Rudis and I will be presenting “Achievement Unlocked: Designing a Compelling Security Awareness Program” at 10:40 am in Room 123. Slides are here.

Security Awareness has become a controversial topic. Many organizations have fallen back onto rote, annual, computer-based training (CBT), taking a cookie-cutter, one-size-fits-all approach to the problem. Why? Because auditors started checking to see if programs existed -- and their measurement of success was whether or not you’d gotten every employee in the company to certify that they’d received training. And that led to a checklist-based race to the bottom.

The first step in improvement is to separate policy awareness (the annual verification that employees have been “trained”) from security awareness (the steps you take to improve the overall security posture of your employees). If, for instance, you require each of your employees to sit through a one-hour CBT annually, then you’re effectively spending 1 FTE for every ~1600 employees you have just to check that box. That’s a waste of time and money, and your employees know it! Once you’ve demonstrated that you’re willing to waste their time, they’ll treat your CBT with the same respect - by playing games to see how fast they can race through it, for instance. Or by finding all the picayune errors they can, and laughing about how clueless you are.
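If the “1 FTE per ~1600 employees” figure seems surprising, the back-of-the-envelope arithmetic looks like this (the ~1,600 working hours per FTE-year is an assumption; substitute your own number):

```python
# Back-of-the-envelope: what a one-hour annual CBT costs in labor.
# Assumes roughly 1,600 productive working hours per FTE-year; adjust to taste.
employees = 1_600
cbt_hours_per_employee = 1
hours_per_fte_year = 1_600

fte_consumed = employees * cbt_hours_per_employee / hours_per_fte_year
print(fte_consumed)  # 1.0 -- a full person-year, spent checking a box
```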

You can solve this problem by racing to the bottom even faster: if what your auditors need is to see that every employee has checked a box annually, then one option is to give every employee a box to check annually. Create an automated system that reaches out each year to employees, driving them to a webpage that has an overview of the highlights of the security policy, some bullets about why they care, and some links to more information for the enterprising souls. And then give them a box to check that records that they’ve checked the box for the year.
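A minimal sketch of what that automated system might look like, assuming you have an employee directory, an attestation store, and a mailer to plug in (the names here are hypothetical placeholders, not a real API):

```python
# Hypothetical sketch of an annual check-the-box system. The directory,
# attestation store, and mailer are placeholders for whatever you already run.
from datetime import date

POLICY_URL = "https://intranet.example.com/security-policy-highlights"  # placeholder

def nag_unattested(directory, attestations, mailer, year=None):
    """Email everyone who hasn't checked the box this year a link to the
    policy-highlights page."""
    year = year or date.today().year
    for employee in directory.all_employees():
        if not attestations.has_attested(employee.id, year):
            mailer.send(
                to=employee.email,
                subject="Annual security policy acknowledgement",
                body=f"Please review the highlights and check the box: {POLICY_URL}",
            )

def record_attestation(attestations, employee_id, year=None):
    """Called by the web page when the employee checks the box for the year."""
    attestations.record(employee_id, year or date.today().year)
```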

Having done that, you can focus on real security awareness training. Real awareness training is much more targeted. Engage users around specific topics. Social Engineering. Phishing. USB drives. Screensavers. Give them a way to respond: at Akamai, we have a mailing list that everyone with a published phone number is on. When a pretexted call comes in, people can notify the next likely targets about the context of the phone call. Give them incentives: gift cards, or visits from the Penguin of Awesome. Give them pro bono personal security training: teach them about attacks that might target their families, and point them at educational resources for their children. And don’t worry about tracking that every single person has consumed every single resource - that’s a waste of energy. Give them what they need, and they’ll clamor for more.

Standard Infosec Management Guidance is Wrong. Sorry.

Context: Tuesday evening, I’ll be presenting at the RSAC Infragard/ISSA meeting (Room 120 at 6pm) a talk titled “All our Infosec Management Guidance is Wrong. Sorry about that!”. Slides are here.

There’s an apocryphal story about five monkeys, a ladder, a banana, and a hose. Monkeys would go up the ladder to get the banana, get hosed down, and learn not to climb the ladder. New monkeys would be introduced, and “peer training” would teach them not to climb the ladder, until no monkeys who had been hosed down remained, but monkeys would fear the ladder.

Truthiness aside, the kernel of truth that causes this story to spread is a clear one: we pass down myths and legends about what we should do, or how we should do it, but not always *why* we do it. And so, like the monkeys, we become afraid of the ladder, rather than watchful for the researcher with a hose. And we pass these lessons down, or across, and turn them into pithy statements, without considering what they mean now. Advice like “You should get a certification”, “Pick a good password”, or “Just add security to the contract” was once useful, but may end up lost in translation.

In the talk, I discuss pithy quotes from long-dead philosophers, applying policy (or technology!) exclusively to solve problems, Return on Security Investment, Defense in Depth/Breadth/Height, and being “not faster than the bear.”

The value of professional certifications

Context: this afternoon, I’ll be joining a panel at RSAC (PROF-M03; Room 302 at 2:50 pm) titled “Information Security Certifications: Do They Still Provide Industry Value?”

Much ado is made about the relative merits of various certificates, certifying tests, and administering organizations. Before arguing the value of those, we should first assess what intrinsic value a professional certificate might have: understand the various models, and then see which fit the information security industry.

One model is the guild certificate - a certificate of competency, generally issued to a journeyman or master of their craft, which acknowledges their capability at their preferred trade. The building trades are the most common example, but medical professionals, lawyers, and pilots also hold them. As purchasers of services, consumers like to know that the purveyor meets a minimum standard of the craft. Guild certificates are especially preferred where quality of work is important and there is a common set of tasks performed within the profession.

Another model, often a special case of the guild certificate, is the practitioner’s certificate: generally issued directly or indirectly by a governmental organization, it permits an individual to practice on your behalf. Consider the CPA: an individual who is allowed to practice accounting before the government, and you are shielded from (some) liability for errors they make. Building inspectors are another example; practitioner’s certificates let us know that in trusting an individual, we don’t necessarily have to inspect their work. Practitioner’s certificates are especially effective where there is exactly one correct way to solve a problem or accomplish a task.

Yet a third model is the reputational certificate. A reputational certificate identifies a person as a member of a clique. Membership in that clique might imply certain capabilities, but is no guarantee. A college diploma, membership in a professional organization, or employment in a given company are examples of reputational certificates. A reputational certificate represents a transfer of the reputation of existing members to a new member: the first time you meet someone from MIT, you might accord them respect on the assumption that they are as competent as other MIT graduates. But reputation is a two-bladed sword: If you know a lot of incompetent people who joined The Southwest Weasel Security Association, you’ll judge the next person you meet from there as equally incompetent.

So what then, are infosec certifications?

There exist focused, guild certificates, often administered by a vendor: consider the CCIE or MCSE as general examples. But most certifications offered are more reputational: they bear the trappings of a guild certificate, like a common body of knowledge, or coursework, but given the lack of a common craft or single set of solutions in the industry, there is no general purpose guild certificate. Infosec is not unique in this case; sales professionals or product managers also have similar challenges.

And reputational certificates always devolve to the lowest common denominator: the value of the certificate will always devolve to the reputation of the lowest holder of the certificate, not the greatest.

Early RSAC coverage

I’ve done a couple of interviews this week about my upcoming keynote at RSAC. For the English/Spanish speakers, I’ve put up an early draft of the slides, which were the keynote I gave at Security Zone. The talk will have another iteration before RSAC, but you can take an early look, or watch the even earlier version I gave at Hack in the Box last year.