Understanding Risk

Operating or overseeing a business – whether as a director, executive, or manager – requires an understanding of risk, and especially how it impacts your strategy.  But risk is a nebulous concept.  It means something different to everyone, so it helps to level-set not just on a working definition of risk, but on approaches to thinking both about novel risks (those that aren’t yet on your radar) and known risks (those that are on your radar).

What is Risk?
Risk is anything that has a chance of adversely impacting your business.  Risk isn’t intrinsically a bad thing; all entities have a risk appetite that balances the risks they take against the rewards they seek.  Companies have to invite risk to pursue rewards; consider that merely making a profit invites competitors, who will increase your risk.  Risks exist in many broad categories – often, a practitioner in one category thinks of their kind of risk as the only kind that matters – and it’s important to apply risk management thinking across the spectrum of risk.  The most prevalent risk comes from liquidity (having enough cash to operate), which can include credit risk (the money you are owed … doesn’t materialize).  You might have risk that comes from your market (you need specific truths to operate, like a bear market or zero interest rates), your business strategy (your specific market strategy hinges on an underlying axiom, like people renting movies in a store), or compliance regimes (you can be put out of business simply for not following a rule).  You can also face risks from reputation (if people are no longer willing to do business with you) or operations (your security and safety practices).

Within those broad risk areas, we can think of significant amounts of risk as coming from hazards.  Hazards are the subset of risks that aren’t intrinsic to your strategy, and have the potential to be surprisingly disruptive.

Hazards come in many flavors.  Some are procedural: in execution of your strategy, you might make errors.  Some are adversarial or environmental: entities or forces outside your control could harm you through the hazard.  And some are perverse incentives: you might incentivize individuals on your team to do very dangerous things in execution of your strategy.  Each of these requires different forms of oversight to address, especially in places where they might interact.

Procedural Hazards
Many control regimes – from Sarbanes-Oxley to the NIST CSF to a whole host of ISO frameworks – are designed to help companies manage process risk.  Unfortunately, these frameworks alone seem insufficient to control those risks.  Overseeing risk can be challenging, as hundreds of detailed controls across an entire enterprise are potentially relevant, and identifying specific problematic areas isn’t an easy task.  Two important questions might help drive towards identifying hazards.

What is the scope of a control system?  Perhaps a company has a strong control in Identity and Access Management, and can report flawless execution in ensuring that only appropriate staff get access to systems.  But lost in the nuance of reporting is that the relevant control only applies to a subset of the systems in the company.  It’s the most important set of systems, of course, but importance is in the perception of management.  Right next to those important systems might be other systems without good controls, which create hazards for the adjacent controlled systems.  Understanding where controls don’t cover the full scope of a company is an important first step.

How effective is the control system?  Some control systems look shiny from the outside, but on the inside, don’t actually provide meaningful protections.  It’s important to understand whether there is a simple measurement that summarizes the control and is tied to the protections the control provides.  Perhaps the measure is reporting on activity (“We approved 75 products for launch this quarter”) and not on impact (“100% of products had absolutely no reported issues”).  An impact measure might reveal implausibility: a failure rate of 0% is not necessarily an indicator of a strong system.  It’s more likely an indicator of a control system that has no effect.

Combine these two questions as you consider how to report on the effectiveness of an overall control system.  Control reporting should cover both scope (what percentage of the system is controlled?) and effectiveness (how well does the control perform where it applies?).  Risk appetite should be used to establish reasonable ranges for both of these measures, to identify when escalation will be needed to course correct, and how much escalation (telling executive management is likely a different threshold than telling the board).  Identifying those thresholds before you cross them will save a lot of energetic conversation about whether or not something should be escalated.
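To make the two questions concrete, here is a minimal sketch of what scope-and-effectiveness reporting against pre-agreed thresholds might look like.  All names, fields, and threshold values are illustrative assumptions, not prescriptions from any framework.

```python
# Hypothetical sketch: scoring one control against risk-appetite thresholds
# agreed in advance. Field names and numbers are illustrative only.

from dataclasses import dataclass

@dataclass
class ControlReport:
    name: str
    scope_coverage: float   # fraction of in-scope systems the control covers
    failure_rate: float     # impact measure: fraction of controlled items that failed

def escalation_level(report: ControlReport,
                     scope_floor: float = 0.90,
                     failure_ceiling: float = 0.05) -> str:
    """Map a control report onto an escalation path decided before the fact."""
    scope_gap = report.scope_coverage < scope_floor
    failing = report.failure_rate > failure_ceiling
    if scope_gap and failing:
        return "board"        # both thresholds breached: highest escalation
    if scope_gap or failing:
        return "executive"    # one threshold breached
    return "none"             # within appetite

# Flawless execution (2% failures), but covering only 70% of systems.
iam = ControlReport("IAM provisioning", scope_coverage=0.70, failure_rate=0.02)
print(escalation_level(iam))  # prints "executive"
```

The point of the sketch is that neither number alone tells the story: a control can pass its effectiveness measure while its scope gap quietly breaches appetite.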

Environmental and Adversarial Hazards
Some systems have defects that can go badly wrong if exploited in just the wrong way.  Sometimes that exploit needs a malicious actor, a criminal who wants to create harm for your business.  Other times that exploit doesn’t require malice; perhaps an extreme winter storm pushes your system outside its design limits.

These hazards are sometimes challenging to talk about.  They aren’t always easy to find, and rarely yield to a simple checklist.  Sometimes the hazards are tolerable: you aren’t necessarily happy to have them, but you’ll tolerate them for a time.  Sometimes these hazards are so intertwined with your system design and business process that even if you do decide to reduce the hazard, you’ll need to spend years coordinating cross-functional projects to root it out.

Discovery: One way that many companies identify these hazards is to employ experts who just know where to look.  Unfortunately, this approach relies on having a specific kind of unicorn: a deeply technical employee with broad-based knowledge of your entire system, a long memory to track issues, and the communication skills to educate your executive team about the hazards.  A more reliable approach is to embed hazard analysis throughout the design process, capture the hazards into a registry, and have that registry continuously reviewed – perhaps reassessing a few each month – to keep it updated with known hazards.
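The registry-with-rolling-reviews idea can be sketched in a few lines: each review cycle, pull the entries that have gone longest without reassessment.  The field names, example hazards, and batch size here are assumptions for illustration.

```python
# Illustrative hazard registry with rolling reviews: each cycle, reassess
# the entries least recently reviewed. All entries are invented examples.

from dataclasses import dataclass
from datetime import date

@dataclass
class Hazard:
    description: str
    last_reviewed: date

def next_review_batch(registry: list[Hazard], batch_size: int = 3) -> list[Hazard]:
    """Return the stalest hazards, i.e. those least recently reassessed."""
    return sorted(registry, key=lambda h: h.last_reviewed)[:batch_size]

registry = [
    Hazard("Single-region database", date(2023, 1, 10)),
    Hazard("Legacy auth on billing system", date(2022, 6, 2)),
    Hazard("Manual failover runbook", date(2023, 4, 21)),
    Hazard("Unpatched vendor appliance", date(2022, 11, 30)),
]
for hazard in next_review_batch(registry, batch_size=2):
    print(hazard.description)  # the two stalest entries, oldest first
```

A registry like this only earns its keep if the review loop actually runs; the code is trivial by design, because the hard part is the organizational commitment to reassess.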

Mitigation: Some of those hazards you will need to mitigate.  You don’t need to reduce them all to zero (sometimes just taking the edge off by a little bit is sufficient to bring the risk back into your appetite), but once you decide to reduce the impact of the hazard, it’s helpful to identify success criteria.  Think of success criteria as a contract with your future self: “if I do this much work, measurable by this outcome, and the world hasn’t changed to make this more dangerous, then I get to celebrate success.”  It will be tempting along the way to move the goalposts closer, because mitigation projects can take longer than you originally expected.  Inspect that urge.  Did you really misestimate the danger originally, or do you just have fatigue and would like to be done, even if the hazard remains uncontrolled?

Awareness:  Some of your hazards you aren’t going to mitigate.  Perhaps the hazard is too embedded in your way of doing business.  Maybe the hazard is just below the level where it would be urgent to fix.  This is uncomfortable, because you’ll have to acknowledge the presence of these hazards, and it’s a natural reaction to avoid talking about them.  But you must, because the only real way to understand the risk appetite is to actually talk about the hazards that you accept, especially in the context of all of the hazards contained in your registry.  Gaining awareness (likely not comfort, but at least awareness) of which risks are accepted makes assessing new and novel risks an easier task.

Incentivized Hazards
The most pernicious hazards an organization faces are those that it creates for itself, by putting its own employees’ incentives at odds with its long-term best outcomes.  Sometimes this might be through ill-thought-out systemic incentives (consider JPMC’s “London Whale” or the Wells Fargo cross-selling debacle); other times it might be created by specific pressures to achieve results (look at Volkswagen’s DieselGate or Theranos).

Most incentivized hazards create a tension between what ought to be the values and culture of a company (which are often just plaques on a wall, rather than living touchstones) and the short-term needs of the company.  Perverse incentives can emerge from novel solutions to a changing business environment, or arise from impossible business needs.  But detecting them isn’t impossible; it just requires extra care.

Culture:  We shouldn’t expect employees to be the only line of defense against a hazard, but we should expect them to feel uncomfortable with conflicting goals – and to feel comfortable raising that conflict with management.  Organizational values should be viewed like a detour sign: they indicate which paths to avoid.  Perhaps, to avoid a Wells Fargo style incident, a value like “Serve the best interests of our customers” would be helpful to create tension against “cross-sell as many products as possible to our existing customers.”

Changing business environment:  When the environment alters in a significant fashion, novel solutions to the problem create an automatic perverse incentive: the novel solution absolutely cannot be permitted to fail.  The team responsible for the solution is automatically incentivized to hide risks and adverse information, or, at minimum, to downplay them (the JPMC response to Basel III can be viewed in this light).  Look closely at those novel solutions, and inspect them for concealed risks.

Impossible business requirements:  Sometimes an organization needs an outcome so desperately that it can only be achieved by some breakthrough that seems impossible.  Similar to novel business processes, this creates an incentive to ensure that the solution exists, even if it doesn’t!  Consider VW, which needed an innovation in diesel engine technology that was otherwise unheard of.  Much like in a changing business environment, this should be seen as an indicator to dig very deeply into the solution, to understand if it truly works as advertised, or creates new hazards for the business.

Planning for perversion of incentives:  Almost any structural incentive can become perverse – consider that a structural hazard of incentives – but incentives can be instrumented to look for those hazards.  A variant of the pre-mortem is very helpful: assume the incentive has become perverse, and then try to identify how that happened.  Putting in place measurements to detect those outcomes can be helpful (Is this incentive significantly more effective than we anticipated?  Is the business generated by this incentive structurally good?).
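The two parenthetical questions above can be turned directly into monitoring checks.  This sketch assumes two invented measures – a forecast-versus-actual ratio and a “structural quality” rate (say, the fraction of incentivized accounts still active after 90 days) – and the thresholds are placeholders a real program would set from its own risk appetite.

```python
# Hedged sketch of the "too effective to be true" checks: flag an incentive
# whose results far exceed forecast, or whose generated business is of low
# structural quality. All measures and thresholds are illustrative.

def incentive_flags(forecast: float, actual: float, quality_rate: float,
                    overshoot_ratio: float = 1.5,
                    quality_floor: float = 0.8) -> list[str]:
    """quality_rate: assumed fraction of generated business that is
    structurally good (e.g. accounts still active after 90 days)."""
    flags = []
    if forecast > 0 and actual / forecast > overshoot_ratio:
        flags.append("results far exceed forecast")
    if quality_rate < quality_floor:
        flags.append("low structural quality")
    return flags

# A sales incentive forecast to open 200 accounts opens 520, but only
# 60% of them survive 90 days: both checks fire.
print(incentive_flags(forecast=200, actual=520, quality_rate=0.6))
```

Neither flag proves wrongdoing; each is a pre-committed trigger to go look, which is precisely the point of running the pre-mortem before the incentive launches.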

How much is enough?
Ultimately, the question that risk management programs seek to answer is “How much risk reduction is enough to get us back into our risk appetite?”  Or, rephrased, “How do you know that you did the best that you could, given the circumstances?”

The answer to that question isn’t a simple one, but it boils down to an understanding of how comfortable you are with the actions you took, and the decisions you made, given what you could know at the time.  Of course, with perfect foresight, you would perfectly navigate the risk environment, and only make bets that are worthwhile.  But you don’t have perfect foresight, so don’t apply it in hindsight.

Are you paying attention to risk?  Are you willing to look in uncomfortable places for risk?  Are you controlling for the risk you incentivize?  Are you comfortable with where you’ve drawn the line between the hazards you’re mitigating and the ones you aren’t?