
December 13, 2011

Security Subsistence Syndrome

Wendy Nather, of The 451 Group, has recently discussed "Living Below the Security Poverty Line," which looks at what happens when your budget is below the cost to implement what common wisdom says are the "standard" security controls. I think that's just another, albeit crowd-sourced, compliance regime. A more important area to consider is the mindset of professionals who believe they live below the security poverty line:

Security[1] Subsistence Syndrome (SSS) is a mindset in an organization that believes it has no security choices and is underfunded, so it spends minimally to meet perceived[2] statutory and regulatory requirements.

Note that I'm defining this mindset in terms of attitude, not money. I think that's a key distinction - it's possible to have a lot of money and still be in a bad place, just as it's possible to operate a good security program on a shoestring budget. Security subsistence syndrome is about lowered expectations, and an attitude of doing "only what you have to." If an enterprise suffering from security subsistence syndrome can reasonably expect that no one will audit its controls, it is unlikely to invest in meeting security requirements. If it can do minimal security work and reasonably expect to pass an "audit[3]", it will do so.

The true danger of believing you live at (or below) the security poverty line isn't that you aren't investing enough; it's that you are spending time and money on templatized controls without really understanding the benefit they might provide. You aren't generating security value, and you're probably letting down those who rely on you. When you shake off security subsistence syndrome, you start to exercise discretion, implementing controls that might be qualitatively better than the minimum - and sometimes cheaper in the long term.

Security subsistence syndrome means you tend to react to industry trends, rather than proactively solving problems specific to your business. As an example, within a few years, many workforces will likely be significantly tabletized (and by tablets, I mean iPads). Regulatory requirements around tablets are either non-existent or impossible to satisfy; so under security subsistence syndrome, tablets are either banned or ignored (or banned, and the ban then ignored). That's a strategy of waiting to react to the existence of tablets and vendor-supplied industry "standards," rather than proactively moving the business toward using them safely and sanely.

Security awareness training is an example of a control that can reflect security subsistence syndrome. To satisfy the need for "annual security training", companies will often have a member of the security team stand up in front of employees with a canned presentation, and make them sign that they received the training. The signed pieces of paper go into someone's desk drawer, and that someone hopes an auditor never asks to look at them. Perhaps the business instead uses an online computer-based training system, with a canned presentation that forces users to click through some links. Both are ineffective controls, and worse, inefficient (90 minutes per employee means that in a 1,500-person company, you're spending more than a full FTE-year just to generate that piece of paper!).
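
To put numbers on that parenthetical, here's a back-of-the-envelope sketch in Python (the 2,000-hour FTE-year is my assumption; the headcount and session length come from the example above):

    EMPLOYEES = 1500
    MINUTES_PER_EMPLOYEE = 90   # the canned annual session
    FTE_HOURS_PER_YEAR = 2000   # rough working hours in a year

    hours = EMPLOYEES * MINUTES_PER_EMPLOYEE / 60
    print(f"{hours:.0f} hours, or {hours / FTE_HOURS_PER_YEAR:.2f} FTE-years")
    # -> 2250 hours, or 1.12 FTE-years, to produce a drawer full of signatures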

Free of the subsistence mindset, companies get creative. Perhaps you put security awareness training on a single, click-through webpage (we do!). That lets you drop the time requirement (communicating to employees that you value their time), and lets you focus on other awareness efforts - small fora, executive education, or targeted social engineering defense training. Likely, you'll spend less time and money on security awareness training, have a more effective program, and be able to demonstrate compliance trivially to an auditor.

Security subsistence syndrome is about your attitude, and the choices you make: at each step, do you take the minimal, rote steps to satisfy whoever you think is watching, or do you instead take strategic steps to improve your security? I'd argue that in many cases, the strategic steps are cheaper than the rote ones, and have a greater effect in the medium term.


[1] Nothing restricts this to security; likely, enterprise IT organizations can fall into the same trap.

[2] To the satisfaction of the reasonably expectable auditor, not the perfect auditor.

[3] I'm loosely defining audit here to include any survey of a company's security practices, not just "a PCI audit."

December 7, 2011

Enterprise InfoSec Lessons from the TSA

The TSA and its security practices are a common target for security commentary. You can find armchair critics in almost every bar, living room, and, especially, information security team. But the TSA is a great analogue to the way enterprises tend to practice information security, so maybe we can learn a thing or three from it.

We can begin with the motherhood and apple pie of security: business alignment. The TSA has little to no incentive that lines up with its customers (or with their customers). The TSA's metric is, ostensibly, the number of successful airplane attacks; being perceived to reduce that number is its only true measure of success. On any day where there is no known breach, it can claim success - just like an enterprise information security team. And it can also be criticized for being irrelevant - just like said enterprise information security team. The business, meanwhile (both airlines and passengers), worries about other metrics: being on time, minimizing hassle, and cost. Almost any action the TSA undertakes in pursuit of its goals is going to harm everyone else's goals. This is a recipe for institutional failure: as the TSA (or infosec team) realizes that it can never make its constituents happy, it runs the risk of not even trying.

Consider the security checkpoint, the TSA equivalent of the enterprise firewall (if you consider airplanes as VPN tunnels, it's a remarkable parallel). The security checkpoint begins with a weak authentication check: you are required to present a ticket and an ID that matches. Unfortunately, unless you are using a QR-coded smartphone ticket, the only validation of the ticket is that it appears - to a human eyeball - to be a ticket for this date and a gate behind this checkpoint. Tickets are trivially forgeable, and can be easily matched to whatever ID you present. The ID is casually validated, and goes unrecorded. This is akin to the sadly standard enterprise practice of logging minimal data about connections that cross the perimeter, and never comparing those connections to a list of expected traffic.
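
For contrast, here is a minimal sketch of the comparison that mostly isn't happening - checking perimeter connection logs against a list of expected traffic. The log format and the allowlist entries are invented for illustration; real firewall logs are messier:

    # Expected traffic: (source, destination, port) tuples we have a reason to see.
    EXPECTED = {
        ("10.1.1.5", "192.0.2.10", 443),   # web tier -> payment gateway
        ("10.1.2.9", "198.51.100.4", 25),  # mail relay -> upstream MTA
    }

    def unexpected_connections(log_lines):
        """Yield perimeter connections that match nothing on the expected list."""
        for line in log_lines:
            src, dst, port = line.split()
            if (src, dst, int(port)) not in EXPECTED:
                yield (src, dst, int(port))

    log = [
        "10.1.1.5 192.0.2.10 443",
        "10.1.3.7 203.0.113.9 6667",  # nothing expected on IRC
    ]
    for conn in unexpected_connections(log):
        print("investigate:", conn)

Even a crude allowlist turns a pile of forensic data into something that can raise a question while the connection still matters.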

In parallel, we find the cameras. Mounted all through the security checkpoint, the cameras are a standard forensic tool - if you know what you are looking for, and when, they'll provide some evidence after the fact. But they aren't very helpful in stopping or identifying attacks in progress. Much like the voluminous logs many of our enterprises collect: useful for forensics, useless for prevention.

Having entered the checkpoint, the TSA is going to split passengers from their bags (and their shoes, belts, jackets, IDs, and, importantly, recording devices). Their possessions are placed onto a conveyor belt, where they undergo inspection via an X-ray machine. This is, historically, the biggest bottleneck for throughput, and a nice parallel to many application-level security tools. Because we have to disassemble the possessions, and then inspect them one at a time (or maybe two, or three, in a high-availability scenario), we slow everything down. And because the technology that looks for problems is heavily signature-based, it's prone to significant false negatives. Consider the X-ray machine to be the anti-virus of the TSA.
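
To see why signature-based inspection guarantees false negatives, consider a toy scanner that only recognizes exact fingerprints of known-bad payloads. Real products use richer signatures than a hash lookup, but the failure mode is the same: anything not already described slips through. The "signature database" here is obviously contrived:

    import hashlib

    # Toy signature database: fingerprints of payloads we already know are bad.
    KNOWN_BAD = {hashlib.sha256(b"exploit-v1").hexdigest()}

    def scan(payload: bytes) -> bool:
        """Return True if the payload matches a known-bad signature."""
        return hashlib.sha256(payload).hexdigest() in KNOWN_BAD

    print(scan(b"exploit-v1"))   # True: the known attack is caught
    print(scan(b"exploit-v1 "))  # False: one added byte slips past the signature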

The passengers now get directed to one of two technologies: the magnetometers, or the full-body imagers. The magnetometers are an old, well-understood technology: they detect efforts to bring metal through, are useless for ceramics or explosives, and are relatively speedy. The imagers, on the other hand, are what every security team desires: the latest and greatest technology - thoroughly unproven in the field, with unknown side effects, and invasive (in a sense, they're like reading people's email: sure, you might find data exfiltration, but you're more likely to violate someone's privacy and learn about who they are dating). The body scanners are slow. Slower, even, than the X-ray machines for personal effects. Slow enough that, at most checkpoints, when under load, passengers are diverted to the magnetometers, either wholesale or piecemeal (this leads to interesting timing attacks to get a passenger shifted into the magnetometer queue). The magnetometer is your old-school intrusion-detection system: good at detecting a known set of attacks, bad at new attacks, but highly optimized at its job. The imagers are that latest technology your preferred vendor just sold you: you don't really know if it works well, you're exporting too much information to the vendor, you're seeing things you shouldn't, and you have to fail around it too often for it to be useful; but at least you can claim you are doing something new.

If a passenger opts out of the imaging process, rather than pass them through the magnetometer, we subject them to a "pat-down". The pat-down is punitive, enacted whenever someone questions the utility of the latest technology. It isn't very effective (if you'd like to smuggle a box cutter into an airport, and don't want to risk the X-ray machine detecting it, taping the razor blade to the bottom of your foot will probably work). But it does tend to discourage opt-out criticism.

Sadly, for all of the TSA's faults, in enterprise security we tend to implement controls based on the same philosophy. Rather than focus on security techniques that enable the business while defending against a complex attacker ecosystem, we build rigid control frameworks, explicitly designed to detect - on paper - the most recent attack (in implementation these often fail, but we are reassured by having done something).

December 15, 2010

Welcome AWS

By now you've certainly seen that Amazon Web Services has achieved PCI compliance as a Level 1 service provider. This is great; the folks over at Amazon deserve congratulations on joining the cabal of compliant clouds. You've probably also seen the PCI pundits pithily pronouncing, "Just because AWS is PCI compliant doesn't mean you're compliant by using them!" What does this mean?

Put simply, AWS, like Akamai's Dynamic Site Accelerator, is a platform over which your e-commerce flows. It's critically important that that platform is compliant if you're moving cardholder data through it (just as it's important that PCI compliance is called out in your contract with AWS, per DSS 12.8). But more important is how your organization, and your applications, handle credit card data - that is the key piece of getting compliant. What AWS's certification buys you is the same thing Akamai's does: the ability to exclude those components from your PCI scope.

(If you want to get compliant, the easiest way, of course, is to tokenize cardholder data before it hits your environment, using a service like EdgeTokenization.)

August 23, 2010

Awareness Training

Implementing a good security awareness program is not hard, if your company cares about security. If they don't, well, you've got a big problem.

It doesn't start with the auditable security program that most standards would have you set up. Quoting PCI-DSS testing procedures:

12.6.1.a Verify that the security awareness program provides multiple methods of communicating awareness and educating employees (for example, posters, letters, memos, web based training, meetings, and promotions).
12.6.1.b Verify that employees attend awareness training upon hire and at least annually.
12.6.2 Verify that the security awareness program requires employees to acknowledge (for example, in writing or electronically) at least annually that they have read and understand the company's information security policy.

For many awareness programs, this is the beginning and the end: an annual opportunity to force everyone in the business to listen to us pontificate on the importance of information security, and to make them read the same slides we've shown them every year. Or, if you've needed to gain cost efficiencies, you've bought a CBT program that is lightly tailored to your business (and, as a side benefit, your employees can race to see how quickly they can click through it).

But at least it's auditor-friendly: you have a record that everyone attended, and you can make them acknowledge receipt of the policy they are about to throw in the recycle bin. You do need an auditor-friendly program; it just shouldn't be all that you do.

I can tell you that, for our baseline, auditor-friendly security awareness program, over 98% of our employee base have reviewed and certified the requisite courseware in the last year; and that of the people who haven't, the vast majority have either started work in the last two weeks (and thus are in a grace period), or are on an extended leave. It's an automated system, which takes them to a single page. At the bottom of the page is the button they need to click to satisfy the annual requirement. No gimmicks, no trapping the user in a maze of clicky links. But on that page is a lot of information: why security is important to us; what additional training is available; links to our security policy (2 pages) and our security program (nearly 80 pages); and an explanation of the annual requirement. And we find that a large majority of our users take the time to read the supplemental training material.

But much more importantly, we weave security awareness into a lot of activities. Listen to our quarterly investor calls, and you'll hear our executives mention the importance of security. Employees go to our all-hands meetings, and hear those same executives talk about security. The four adjectives we've often used to describe the company are "fast, reliable, scalable, and secure". Social engineering attempts get broadcast to a mailing list (very entertaining reading for everyone answering a published telephone number). And that doesn't count all of the organizations that interact with security as part of their routine.

And that's really what security awareness is about: are your employees thinking about security when it's actually relevant? If they are, you've succeeded. If they aren't, no amount of self-enclosed "awareness training" is going to fix it. Except, of course, to let you check the box for your auditors.

May 26, 2010

Credit Card Tokenization

Every merchant that takes credit cards undergoes an annual ritual known as their PCI audit. Why? Because merchants take credit cards from end users, and the Payment Card Industry Data Security Standard (PCI-DSS) requires the annual audit of all environments that store cardholder data. Unfortunately, PCI audits are complicated, expensive, and distracting. But new technology may make them less necessary.

Consider what the credit card really is: a promise of payment; an agreement between an end user and their bank (and, by proxy, the entire card network) that enables the user to promise funds to a merchant. That's also the weakness of the system: the 16 digits of the card (plus a few other data points, like expiration date, CVV, and name) amount to a public currency.

And that currency needs to be stored in a lot of places, for a long period of time. For user convenience and increased revenue, merchants want to support repeat buying without card re-entry, so they need to keep the card (and they need it to support chargebacks). For fraud detection, merchants want to check for card overuse. This creates a threat to the currency, and much of the purpose behind the PCI-DSS: cards at rest are a valuable, and omnipresent, target.

Tokenization

Tokenization is a technology that replaces the credit card (a promise between a user and their bank) with a token (a promise between a merchant and their bank). With tokenization, when presented with a credit card, a merchant immediately turns it over to their payment gateway, and receives a token in exchange. The token is only useful to that merchant when talking to their gateway; the public currency (a credit card usable anywhere) has been converted to private scrip (a token tied to merchant and gateway).

The gateway has to maintain a mapping between tokens and credit cards, so that when a merchant redeems a token, the gateway can convert the token to a credit card charge. But a merchant can use the token in their internal systems for anything they would have used a credit card for in the past: loyalty programs, fraud detection, chargebacks, or convenience.
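
A minimal sketch of that exchange, with a toy in-memory gateway (the class, the token format, and the API here are invented for illustration; real gateways expose this through their payment APIs, and protect the vault far more carefully):

    import secrets

    class ToyGateway:
        """Toy payment gateway: swaps card numbers for merchant-scoped tokens."""

        def __init__(self):
            self._vault = {}  # token -> card number; the gateway guards this mapping

        def tokenize(self, card_number: str) -> str:
            token = "tok_" + secrets.token_hex(8)  # opaque outside this gateway
            self._vault[token] = card_number
            return token

        def charge(self, token: str, amount_cents: int) -> bool:
            card = self._vault.get(token)
            if card is None:
                return False  # unknown token: worthless anywhere else
            # ... here the gateway submits the real card to the card network ...
            return True

    gateway = ToyGateway()
    token = gateway.tokenize("4111111111111111")  # the merchant never stores the card
    # The merchant keeps only the token for loyalty, fraud checks, and chargebacks.
    assert gateway.charge(token, 1999)

The merchant's databases now hold scrip instead of currency: a stolen token is only redeemable through that one gateway, on that one merchant's behalf.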

Tokenization isn't a panacea for a merchant. The merchant still has a point in their infrastructure that takes credit cards, and that point is subject to PCI-DSS. If the merchant has a brick-and-mortar operation, PCI-DSS requirements will still apply there (although some POS terminals do support tokenization). But it can take a merchant a long way toward defending their customers' credit cards, and it represents several steps toward a model where merchants don't need to see a credit card at all.

March 8, 2010

Why is PCI so successful?

While at the RSA Conference, I took an afternoon out to head over to Bsides, where I participated in a panel (video in two parts) looking at the PCI Data Security Standard, and its impact on the industry. The best comment actually came to me in email afterward:

It's worth remembering that, no matter what your opinion of PCI, the simple fact of the panel discussion today says something impressive about the impact the standard has had, and the quality of the industry's response.  Other areas of infosec haven't matured nearly as much, or as quickly.

Far more than any standard to date, PCI has improved the state of security across the board. Some of that is its simplicity - the bulk of the control objectives are clearly written, and easy to implement against. Some is its applicability - it crosses industries like few other standards. Even more, I think, is its narrowness. Rather than trying to improve security everywhere (and failing), it focuses on one narrow piece of data, and aims to protect it everywhere.

This isn't to say that PCI is the best we could have. Far from it. But it's the best we have, and we should learn from its model when building future compliance standards.

October 13, 2009

Compliance, Security, and the relations therein

Last week, Anton Chuvakin shared his latest in the "compliance is not security" discussion:

Blabbing "compliance does not equal security" is a secret rite of passage into the League of High Priests of the Arcane Art of Security today. Still, it is often quietly assumed that a well-managed, mature security program, backed up by astute technology purchases and their solid implementation will render compliance "easy" or at least "easier." One of my colleagues also calls it "compliance as a byproduct." So, I was shocked to find out that not everyone agrees...

I think there are two separate issues that Anton is exploring.

The first is that a well-designed security control should aid in compliance. As one of his commenters notes, a good security program considers the regulatory issues; or, more plainly, a good security control considers the compliance auditor as an adversary. If you do not design controls to be auditable, you are building risk into your system (sidebar: what security risks are worse than failing an audit?).

But the second point is more interesting. Most compliance frameworks are written to target industry-standard architectures and designs. What if you are doing something so different that a given control has no parallel in your environment? Example: you have no password authentication in your environment; what do you do about controls that require certain password settings? What if your auditor insists on viewing inapplicable settings?

Then, you have three options:

  1. Convince your auditor of the inapplicability of the controls.
  2. Create sham controls to satisfy the auditor.
  3. Find another auditor.

May 31, 2006

Disclosure Laws

At a conference recently, one of the panelists asserted that the California Disclosure Law (SB-1386) was the worst information security law in memory. I disagree. I think it is the best regulation around information security - even better than GLBA. Most information security regulations are about controls; that is, they specify how one should protect information assets. Sometimes those are relevant controls - but in practice, every IT environment is a little different, and what works for one isn't going to work for another.

What SB-1386 has done for the industry is create a very clear cost for an information security breach. Now, companies will, hopefully, think about what controls are relevant for their environment. And maybe, just maybe, that will lead to better security.