Security Subsistence Syndrome

Wendy Nather, of The 451 Group, has recently discussed "Living Below the Security Poverty Line," which looks at what happens when your budget is below the cost to implement what common wisdom says are the "standard" security controls. I think that's just another, albeit crowd-sourced, compliance regime. A more important area to consider is the mindset of professionals who believe they live below the security poverty line:
Security[1] Subsistence Syndrome (SSS) is a mindset in an organization that believes it has no security choices and is underfunded, and so spends minimally to meet perceived[2] statutory and regulatory requirements.

Note that I'm defining this mindset by attitude, not money. I think that's a key distinction - it's possible to have a lot of money and still be in a bad place, just as it's possible to operate a good security program on a shoestring budget. Security subsistence syndrome is about lowered expectations, and an attitude of doing "only what you have to." If an enterprise suffering from security subsistence syndrome can reasonably expect no one to audit its controls, it is unlikely to invest in meeting security requirements. If it can do minimal security work and reasonably expect to pass an "audit[3]", it will do so.

The true danger of believing you live at (or below) the security poverty line isn't that you aren't investing enough; it's that because you are spending time and money on templatized controls without really understanding the benefit they might provide, you aren't generating security value, and you're probably letting down those who rely on you. When you don't suffer from security subsistence syndrome, you start to exercise discretion, implementing controls that might be qualitatively better than the minimum - and sometimes, at lower long-term cost.

Security subsistence syndrome means you tend to react to industry trends, rather than proactively solving problems specific to your business. As an example, within a few years, many workforces will likely be significantly tabletized (and by tablets, I mean iPads). Regulatory requirements around tablets are either non-existent or impossible to satisfy; so in security subsistence syndrome, tablets are either banned, or ignored (or banned, and the ban is then ignored). That strategy waits to react to the existence of tablets and vendor-supplied industry "standards," rather than proactively moving the business into using them safely and sanely.

Security awareness training is an example of a control that can reflect security subsistence syndrome. To satisfy the need for "annual security training," companies will often have a member of the security team stand up in front of employees with a canned presentation, and make them sign a form saying they received the training. The signed pieces of paper go into someone's desk drawer, and that someone hopes an auditor never asks to look at them. Perhaps the business uses an online computer-based training system with a canned presentation, forcing users to click through some links. Both are ineffective controls, and worse, inefficient (at 90 minutes per employee, a 1,500-person company spends over an FTE's worth of time annually just to generate that piece of paper!).
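The "over an FTE" claim checks out on the back of an envelope. A minimal sketch, using the numbers from the text; the 2,000-hour working year is my assumption, not the author's:

```python
# Back-of-the-envelope check: does 90 minutes of canned training per
# employee, across 1,500 employees, really cost more than one FTE-year?
employees = 1500
minutes_per_employee = 90
hours_per_fte_year = 2000  # assumption: ~50 weeks x 40 hours

total_hours = employees * minutes_per_employee / 60
ftes = total_hours / hours_per_fte_year

print(f"{total_hours:.0f} hours ~= {ftes:.2f} FTEs")  # 2250 hours ~= 1.12 FTEs
```

And that counts only the audience's time, not the presenter's or the paper-shuffling.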

Free of the subsistence mindset, companies get creative. Perhaps you put security awareness training on a single, click-through webpage (we do!). That drops the time requirement (communicating to employees that you value their time), and lets you focus on other awareness efforts - small fora, executive education, or targeted social engineering defense training. Likely, you'll spend less time and money on security awareness training, have a more effective program, and be able to demonstrate compliance trivially to an auditor.

Security subsistence syndrome is about your attitude, and the choices you make: at each step, do you take the minimal, rote steps to satisfy your perceived reviewers, or do you instead take strategic steps to improve your security? I'd argue that in many cases, the strategic steps are cheaper than the rote ones, and have a greater effect in the medium term.

[1] Nothing restricts this to security; enterprise IT organizations can likely fall into the same trap.
[2] To the satisfaction of the reasonably expectable auditor, not the perfect auditor.
[3] I'm loosely defining audit here to include any survey of a company's security practices, not just "a PCI audit."

Enterprise InfoSec Lessons from the TSA

The TSA and its security practices are fairly common targets for security commentary. You can find armchair critics in most every bar, living room, and, especially, information security team. But the TSA is a great analogue to the way enterprises tend to practice information security; so maybe we can learn a thing or three from them.

We can begin with the motherhood and apple pie of security: business alignment. The TSA has little to no incentive that lines up with its customers (or with their customers). The TSA's metric is, ostensibly, the number of successful airplane attacks; being perceived to reduce that number is its only true measure. On any day where there is no known breach, it can claim success - just like an enterprise information security team. And it can also be criticized for being irrelevant - just like said enterprise information security team. The business, meanwhile (both airlines and passengers), is worrying about other metrics: being on time, minimizing hassle, and cost. Almost any action the TSA undertakes in pursuit of its goals is going to harm everyone else's goals. This is a recipe for institutional failure: as the TSA (or infosec team) acknowledges that it can never make its constituents happy, it runs the risk of not even trying.

Consider the security checkpoint, the TSA equivalent of the enterprise firewall (if you consider airplanes as VPN tunnels, it's a remarkable parallel). The security checkpoint begins with a weak authentication check: you are required to present a ticket and an ID that matches. Unfortunately, unless you are using a QR-coded smartphone ticket, the only validation of the ticket is that it appears - to a human eyeball - to be a ticket for this date and a gate behind this checkpoint. Tickets are trivially forgeable, and can be easily matched to whatever ID you present. The ID is casually validated, and goes unrecorded. This is akin to a sadly standard enterprise practice: logging minimal data about connections that cross the perimeter, and never comparing those connections to a list of expected traffic.
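The comparison the text says enterprises skip is not hard to do. A minimal sketch of flagging perimeter connections absent from an expected-traffic list; the field names, addresses, and allowlist here are invented for illustration, not taken from any real product:

```python
# Hypothetical expected-traffic list: (destination, port) pairs we
# believe should cross the perimeter.
expected = {
    ("10.0.0.5", 443),  # e.g. a partner API
    ("10.0.0.9", 22),   # e.g. a managed SFTP drop
}

# Hypothetical connection log entries, as a perimeter device might record them.
connections = [
    {"dst": "10.0.0.5", "port": 443},
    {"dst": "10.0.0.7", "port": 3389},  # RDP nobody expected
]

# Anything not on the expected list deserves a human look.
unexpected = [c for c in connections if (c["dst"], c["port"]) not in expected]
for c in unexpected:
    print(f"unexpected connection: {c['dst']}:{c['port']}")
```

The hard part in practice is maintaining the expected list, not writing the comparison - which is exactly why the subsistence mindset skips it.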

In parallel, we find the cameras. Mounted all through the security checkpoint, the cameras are a standard forensic tool - if you know what you are looking for, and when, they'll provide some evidence after the fact. But they aren't very helpful in stopping or identifying attacks in progress. Much like the voluminous logs many of our enterprises collect: useful for forensics, useless for prevention.

Having entered the checkpoint, the TSA is going to split passengers from their bags (and their shoes, belts, jackets, IDs, and, importantly, recording devices). Their possessions are placed onto a conveyor belt, where they undergo inspection via an X-ray machine. This is, historically, the biggest bottleneck for throughput, and a nice parallel to many application-level security tools. Because we have to disassemble the possessions, and then inspect them one at a time (or maybe two, or three, in a high-availability scenario), we slow everything down. And because the technology that looks for problems is highly signature-based, it's prone to significant false negatives. Consider the X-ray machine to be the anti-virus of the TSA.
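The false-negative problem with signature matching is easy to demonstrate. A minimal sketch, with invented signatures and payloads, showing how a one-byte change to a known-bad sample slips past an exact match:

```python
# Hypothetical signature database: exact byte patterns of known-bad content.
signatures = {b"EVIL_PAYLOAD_v1"}

def flagged(payload: bytes) -> bool:
    """Return True if any known signature appears in the payload."""
    return any(sig in payload for sig in signatures)

print(flagged(b"...EVIL_PAYLOAD_v1..."))  # the known sample: detected
print(flagged(b"...EVIL_PAYL0AD_v1..."))  # O swapped for 0: missed entirely
```

The detector is fast and precise on what it knows, and blind to trivial variation - the same trade-off the X-ray operator faces with an unfamiliar shape.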

The passengers are now directed to one of two technologies: the magnetometers, or the full-body imagers. The magnetometers are an old, well-understood technology: they detect efforts to bring metal through, are useless for ceramics or explosives, and are relatively speedy. The imagers, on the other hand, are what every security team desires: the latest and greatest technology; thoroughly unproven in the field, with unknown side effects, and invasive (in a sense, they're like reading people's email: sure, you might find data exfiltration, but you're more likely to violate a person's privacy and learn about who they are dating). The body scanners are slow. Slower, even, than the X-ray machines for personal effects. Slow enough that, at most checkpoints, when under load, passengers are diverted to the magnetometers, either wholesale or piecemeal (this leads to interesting timing attacks to get a passenger shifted into the magnetometer queue). The magnetometer is your old-school intrusion-detection system: good at detecting a known set of attacks, bad at new attacks, but highly optimized at its job. The imagers are that latest technology your preferred vendor just sold you: you don't really know if it works well, you're exporting too much information to the vendor, you're seeing things you shouldn't, and you have to fail around it too often for it to be useful; but at least you can claim you are doing something new.

If a passenger opts out of the imaging process, rather than pass them through the magnetometer, we subject them to a "pat-down". The pat-down is a punitive measure, enacted whenever someone questions the utility of the latest technology. It isn't very effective (if you'd like to smuggle a box cutter past the checkpoint, and don't want to risk the X-ray machine detecting it, taping the razor blade to the bottom of your foot will probably work). But it does tend to discourage opt-out criticism.

Sadly, for all of the TSA's faults, in enterprise security we tend to implement controls based on the same philosophy. Rather than focus on security techniques that enable the business while defending against a complex attacker ecosystem, we build rigid control frameworks, often explicitly designed to be able, on paper, to detect the most recent attack (in implementation these often fail, but we are reassured by having done something).