Main

December 7, 2011

Enterprise InfoSec Lessons from the TSA

The TSA and its security practices are fairly common targets for security commentary. You can find armchair critics in most every bar, living room, and, especially, information security team. But the TSA is a great analogue to the way enterprises tend to practice information security; so maybe we can learn a thing or three from them.

We can begin with the motherhood and apple pie of security: business alignment. TSA has little to no incentive that lines up with its customers (or with their customers). TSA's ostensible metric is the number of successful airplane attacks; its only true metric is being perceived to reduce that number. On any day where there is no known breach, they can claim success - just like an enterprise information security team. And they can also be criticized for being irrelevant - just like said enterprise information security team. The business, meanwhile (both airlines and passengers), is worrying about other metrics: being on time, minimizing hassle, and controlling costs. Almost any action the TSA undertakes in pursuit of its goals is going to have a harmful effect on everyone else's goals. This is a recipe for institutional failure: as the TSA (or infosec team) acknowledges that it can never make its constituents happy, it runs the risk of not even trying.

Consider the security checkpoint, the TSA equivalent to the enterprise firewall (if you consider airplanes as VPN tunnels, it's a remarkable parallel). The security checkpoint begins with a weak authentication check: you are required to present a ticket and an ID that matches. Unfortunately, unless you are using a QR-coded smartphone ticket, the only validation of the ticket is that it appears - to a human eyeball - to be a ticket for this date and a gate behind this checkpoint. Tickets are trivially forgeable, and can be easily matched to whatever ID you present. The ID is casually validated, and goes unrecorded. This is akin to the sadly standard enterprise practice of logging minimal data about connections that cross the perimeter, and never comparing those connections to a list of expected traffic.
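For contrast, here is a minimal sketch of the practice most enterprises skip: recording every perimeter crossing and checking it against a list of expected traffic. The connection fields and the allowlist here are invented for illustration:

    # Hypothetical perimeter audit: record each crossing and compare it
    # against an allowlist of expected traffic. All field names are invented.
    EXPECTED = {("203.0.113.10", 443), ("203.0.113.25", 25)}  # (dest, port)

    def audit_crossings(connections):
        for conn in connections:
            expected = (conn["dest"], conn["port"]) in EXPECTED
            status = "expected" if expected else "UNEXPECTED"
            print(f"{conn['src']} -> {conn['dest']}:{conn['port']} [{status}]")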

In parallel, we find the cameras. Mounted all through the security checkpoint, the cameras are a standard forensic tool - if you know what you are looking for, and when, they'll provide some evidence after the fact. But they aren't very helpful in stopping or identifying attacks in progress. Much like the voluminous logs many of our enterprises deploy. Useful for forensics, useless for prevention.

Having entered the checkpoint, the TSA is going to split passengers from their bags (and their shoes, belts, jackets, ID, and, importantly, recording devices). Their possessions are going to be placed onto a conveyor belt, where they will undergo inspection via an X-ray machine. This is, historically, the biggest bottleneck for throughput, and a nice parallel to many application-level security tools. Because we have to disassemble the possessions, and then inspect them one at a time (or maybe two, or three, in a high-availability scenario), we slow everything down. And because the technology to look for problems is highly signature-based, it's prone to significant false negatives. Consider the X-ray machine to be the anti-virus of the TSA.

The passengers now get directed to one of two technologies: the magnetometers, or the full body imagers. The magnetometers are an old, well-understood technology: they detect efforts to bring metal through, are useless for ceramics or explosives, and are relatively speedy. The imagers, on the other hand, are what every security team desires: the latest and greatest technology, thoroughly unproven in the field, with unknown side effects, and invasive (in a sense, they're like reading people's email: sure, you might find data exfiltration, but you're more likely to violate the person's privacy and learn about who they are dating). The body scanners are slow. Slower, even, than the X-ray machines for personal effects. Slow enough that, at most checkpoints, when under load, passengers are diverted to the magnetometers, either wholesale or piecemeal (this leads to interesting timing attacks to get a passenger shifted into the magnetometer queue). The magnetometer is your old-school intrusion-detection system: good at detecting a known set of attacks, bad at new attacks, but highly optimized at its job. The imagers are that latest technology your preferred vendor just sold you: you don't really know if it works well, you're exporting too much information to the vendor, you're seeing things you shouldn't, and you have to fail around it too often for it to be useful; but at least you can claim you are doing something new.

If a passenger opts out of the imaging process, rather than pass them through the magnetometer, we subject them to a "pat-down". The pat-down is punitive, enacted whenever someone questions the utility of the latest technology. It isn't very effective (if you'd like to smuggle a box cutter past the checkpoint, and don't want to risk the X-ray machine detecting it, taping the razor blade to the bottom of your foot is probably going to work). But it does tend to discourage opt-out criticism.

Sadly, for all of the TSA's faults, in enterprise security we tend to implement controls based on the same philosophy. Rather than focus on security techniques that enable the business while defending against a complex attacker ecosystem, we build rigid control frameworks, explicitly designed to be able, on paper, to detect the most recent attack (in implementation, these often fail, but we are reassured by having done something).

January 27, 2011

Tanstaafl

I was reading Rafal Los over at the HP Following the White Rabbit blog discussing whether anonymous web browsing is even possible:

Can making anonymous surfing still sustain the "free web" concept? - Much of the content you surf today is free, meaning, you don't pay to go to the site and access it. Many of these sites offer feature-rich experiences, and lots of content, information and require lots of work and upkeep. It's no secret that these sites rely on advertising revenue at least partly (which relies on tracking you) to survive ...if this model goes away what happens to these types of sites? Does the idea of free Internet content go away? What would that model evolve to?

This is a great point, although insufficiently generic, and limited to the gratis view of the web - the allegedly free content. While you can consider the web browsing experience to be strictly transactional, it can more readily be contemplated as an instantiation of a world of relationships. For instance, you read a lot of news; but reading a single news article is just one transaction in the relationship between you and news providers.

A purchase from an online retailer? Let's consider a few relations:

The buyer's relationship with:
  • the merchant
  • their credit card / alternative payment system
  • the receiver
The merchant's relationship with:
  • the payment gateway
  • the manufacturer
  • the shipping company
The payment card industry's own web of relationships: payment gateways, acquiring banks, card brands, and card issuers are all entangled with one another.

The web is a world filled with fraud, and fraud lives in the gaps between these relationships (Often, relationships are only used one way: buyer gives credit card to merchant, who gives it to their gateway, who passes it into the banking system. If the buyer simply notified their bank of every transaction, fraud would be hard; the absence of that notification is a gap in the transaction). The more a merchant understands about their customer, the lower their cost can be.
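To make the gap concrete, here is a purely hypothetical reconciliation - not anything the payment system actually runs - in which charges the buyer never reported to their bank stand out immediately:

    # Hypothetical: if buyers notified their bank of every purchase, any
    # charge with no matching notification would be an obvious fraud candidate.
    def flag_unnotified(charges, notifications):
        notified = {(n["merchant"], n["amount"]) for n in notifications}
        return [c for c in charges
                if (c["merchant"], c["amount"]) not in notified]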

Of course, this model is harder to perceive in the gratis environment, but is nonetheless present. First, let's remember:

If you're not paying for something, you're not the customer; you're the product being sold.

Often, the product is simply your eyeballs; but your eyeballs might have more value the more the merchant knows about you. (Consider the low value of the eyeballs of your average fantasy football manager. If the merchant knows from past history that those eyeballs are also attached to a person in the market for a new car, they can sell more valuable ad space.) And here, the more value the merchant can capture, the better services they can provide to you.

A real and fair concern is whether the systemic risk added by the merchant in aggregating information about end users is worth the added reward received by the merchant and the user. Consider the risk of a new startup in the gratis world of location-based services. This startup may create a large database of the locations of its users over time (consider the surveillance possibilities!), which, if breached, might compromise the privacy and safety of those individuals. Yet because that cost is not borne by the startup, they may perceive it as a reasonable risk to take for even a small return.

Gratis services - and even for-pay services - are subsidized by the exploitable value of the data collected. Whether or not the business is fully monetizing that data, it's still a fair question to ask whether the businesses can thrive without that revenue source.

May 28, 2010

NSEC3: Is the glass half full or half empty?

NSEC3, or the "Hashed Authenticated Denial of Existence", is a DNSSEC specification to authenticate the NXDOMAIN response in DNS. To understand how we came to create it, and the secrecy issues around it, we have to understand the problem it was designed to solve. As the industry moves to a rollout of DNSSEC, understanding the security goals of our various Designed Users helps us understand how we might improve on the security in the protocol through our own implementations.

About the Domain Name System (DNS)

DNS is the protocol which converts mostly-readable hostnames, like www.csoandy.com, into IP addresses (like 209.170.117.130). At its heart, a client (your desktop) is asking a server to provide that conversion. There are a lot of possible positive answers, which hopefully result in your computer finding its destination. But there are also some negative answers. The interesting answer here is the NXDOMAIN response, which tells your client that the hostname does not exist.
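A quick sketch of that conversation, using the third-party dnspython library (the hostname and address are the ones from the text; any recursive resolver will do):

    import dns.resolver  # third-party "dnspython" package

    try:
        for rr in dns.resolver.resolve("www.csoandy.com", "A"):
            print(rr.address)        # e.g. 209.170.117.130
    except dns.resolver.NXDOMAIN:
        print("no such hostname")    # the negative answer of interest here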

Secrecy in DNS

DNS requests and replies, by design, have no confidentiality: anyone can see any request and response. Further, there is no client authentication: if an answer is available to one client, it is available to all clients. The contents of a zone file (the list of host names in a domain) are rarely publicized, but a DNS server acts as a public oracle for the zone file; anyone can make continuous requests for hostnames until they reverse engineer the contents of the zone file. With one caveat: the attacker will never know that they are done, as there might exist hostnames that they have not yet tried.
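That oracle behavior is easy to illustrate. Here is a hedged sketch of dictionary-driven zone enumeration with dnspython; the attacker can confirm guesses but never knows the list is complete (the wordlist is, of course, hypothetical):

    import dns.resolver

    def guess_zone(domain, wordlist):
        # Confirm guessed hostnames against the public DNS oracle.
        # Anything not guessed stays unknown: the attacker is never "done".
        found = []
        for label in wordlist:
            name = f"{label}.{domain}"
            try:
                dns.resolver.resolve(name, "A")
                found.append(name)
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                pass
        return found

    # guess_zone("csoandy.com", ["www", "mail", "vpn", "intranet"])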

But that hasn't kept people from putting information with some form of borderline secrecy into a zone file. Naming conventions in zone files might permit someone to easily map an intranet just by looking at the hostnames. Host names might contain the names of individuals. So there is a desire to at least keep zone files from being trivially readable.

DNSSEC and authenticated denials

DNSSEC adds in one bit of security: the response from the server to the client is signed. Since a zone file is (usually) finite, this signing can take place offline: you sign the contents of the zone file whenever you modify them, and then hand out static results. Negative answers are harder: you can't presign them all, and signing is expensive enough that letting an adversary make you do arbitrary signings can lead to DoS attacks. And you have to authenticate denials, or an adversary could poison lookups with long-lived denials.

Along came NSEC. NSEC permitted a denial response to cover an entire range (e.g., there are no hosts between wardialer.csoandy.com and www.csoandy.com). Unfortunately, this made it trivial to gather the contents of a zone: after you get one range, simply ask for the next alphabetical host (wwwa.csoandy.com) and learn what the next actual host is (andys-sekrit-ipad.csoandy.com). From a pre-computation standpoint, NSEC was great - a zone needs only as many NSEC-signed responses as it has other signatures - but from a secrecy standpoint, NSEC destroyed what little obscurity existed in DNS.
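The walk is mechanical enough to sketch in a few lines of dnspython: each NXDOMAIN response carries an NSEC record whose "next" field names the following real host, so you query just past it and repeat. The server address below is a placeholder:

    import dns.message, dns.name, dns.query, dns.rdatatype

    def nsec_next(server, probe):
        # Query a (probably nonexistent) name; the NSEC record in the
        # authority section reveals the next real name in the zone.
        q = dns.message.make_query(dns.name.from_text(probe),
                                   dns.rdatatype.A, want_dnssec=True)
        resp = dns.query.udp(q, server, timeout=5)
        for rrset in resp.authority:
            if rrset.rdtype == dns.rdatatype.NSEC:
                return rrset[0].next  # e.g. andys-sekrit-ipad.csoandy.com
        return None

    # nsec_next("192.0.2.53", "wwwa.csoandy.com")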

NSEC3

NSEC3 is the update to NSEC. Instead of providing a range in which there are no hostnames, a DNS server publishes a hashing function, and a signed range in which there are no valid hashes. This prevents an adversary from easily collecting the contents of the zone (as with NSEC), but does allow them to gather the size of the zone file (by making queries to find all of the unused hash ranges), and then conduct offline guessing at the contents of the zone file (as Dan Bernstein has been doing for a while). Enabling offline guessing makes a significant difference: with traditional DNS, an adversary must send an arbitrarily large number of queries (guesses) to a name server (making them possibly detectable); with NSEC, they must send as many queries as there are records; and with NSEC3, they must send the same number of requests as there are records (with some computation to make the right guesses), and can then conduct all of their guessing offline.
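The offline half of that attack rests on the NSEC3 hash itself, which RFC 5155 defines as iterated, salted SHA-1 over the owner name's wire format. Here is a minimal sketch - enough to hash candidate guesses and compare them against hashes gathered from the zone. The salt and iteration count come from the zone's NSEC3PARAM record; the values in the example are placeholders:

    import base64, hashlib

    def nsec3_hash(name, salt_hex, iterations):
        # RFC 5155: H(x) = SHA-1(x || salt), iterated, over the canonical
        # (lowercased) wire-format owner name; output in base32hex.
        wire = b""
        for label in name.rstrip(".").lower().split("."):
            wire += bytes([len(label)]) + label.encode("ascii")
        wire += b"\x00"  # the root label terminates the name
        salt = bytes.fromhex(salt_hex)
        digest = hashlib.sha1(wire + salt).digest()
        for _ in range(iterations):
            digest = hashlib.sha1(digest + salt).digest()
        # Translate standard base32 into the "extended hex" alphabet NSEC3 uses.
        table = bytes.maketrans(b"ABCDEFGHIJKLMNOPQRSTUVWXYZ234567",
                                b"0123456789ABCDEFGHIJKLMNOPQRSTUV")
        return base64.b32encode(digest).translate(table).decode()

    # nsec3_hash("www.csoandy.com", "aabbccdd", 10) -> a 32-character owner label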

While NSEC3 is an improvement from NSEC, it still represents a small step down in zone file secrecy. This step is necessary from a defensive perspective, but it makes one wonder if this is the best solution: why do we still have the concept of semi-secret public DNS names? If we have a zone file we want to keep secret, we should authenticate requests before answering. But until then, at least we can make it harder for an adversary to determine the contents of a public zone.

"Best" practices in zone secrecy

If you have a zone whose contents you want to keep obscure anyway, you should consider:
  • Limit access to the zone, likely by IP address.
  • Use randomly generated record names, to make offline attacks such as Dan Bernstein's more difficult (see the sketch after this list).
  • Fill your zone with spurious answers, to send adversaries on wild goose chases.
  • Instrument your IDS to detect people trying to walk your zone file, and give them a different answer set than you give legitimate users.
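For the second item, a trivial sketch of what "randomly generated" should mean: labels with enough entropy that no dictionary will ever contain them.

    import secrets

    def random_record_name(domain):
        # A 64-bit random label defeats dictionary-driven offline guessing.
        return f"{secrets.token_hex(8)}.{domain}"

    # random_record_name("csoandy.com") -> e.g. "9f3c51a2b07d4e18.csoandy.com"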

Jason Bau and John Mitchell, both of Stanford, have an even deeper dive into DNSSEC and NSEC3.

May 20, 2006

Pseudonymity

Pseudonymity, for those new to it, is the use of a semi-permanent, but incomplete or false identity. For instance, in many online communities, I'll just go by my first name, with a specific Gmail address so that people can distinguish me from all the different Andys out there. I have different pseudonyms in different spaces; in some of them, people who already know me know my pseudonym, but strangers don't.

This is a pretty common practice. It's better than anonymity for community building, as I've noticed that when people feel truly anonymous, they tend to be less courteous and more inflammatory. But as the Michael Hiltzik furor has pointed out, people can still abuse pseudonymity; how do you know when 50 people are really only one person?

The answer is pretty simple. A pseudonym should be much like a nickname - you can have a different one for each group you hang out with, but it still doesn't let you pretend to be 4 or 5 different people at once.