
NSEC3: Is the glass half full or half empty?

NSEC3, or "Hashed Authenticated Denial of Existence", is a DNSSEC specification for authenticating NXDOMAIN responses in DNS. To understand how we came to create it, and the secrecy issues around it, we have to understand why it was designed. As the industry moves toward a rollout of DNSSEC, understanding the security goals of its various intended users helps us understand how we might improve on the security of the protocol through our own implementations.

About the Domain Name System (DNS)


DNS is the protocol that converts human-readable hostnames, like www.csoandy.com, into IP addresses (like 209.170.117.130). At its heart, a client (your desktop) is asking a server to provide that conversion. There are a lot of possible positive answers, which hopefully result in your computer finding its destination. But there are also some negative answers. The interesting answer here is the NXDOMAIN response, which tells your client that the hostname does not exist.

Secrecy in DNS


DNS requests and replies, by design, have no confidentiality: anyone can see any request and response. Further, there is no client authentication: if an answer is available to one client, it is available to all clients. The contents of a zone file (the list of host names in a domain) are rarely publicized, but a DNS server acts as a public oracle for the zone file; anyone can make continuous requests for hostnames until they reverse engineer the contents of the zone file. With one caveat: the attacker will never know that they are done, as there might exist hostnames that they have not yet tried.

But that hasn't kept people from putting information that has some form of borderline secrecy into a zone file. Naming conventions in zone files might permit someone to easily map an intranet just looking at the hostnames. Host names might contain names of individuals. So there is a desire to at least keep the zone files from being trivially readable.

DNSSEC and authenticated denials


DNSSEC adds in one bit of security: the response from the server to the client is signed. Since a zone file is (usually) finite, this signing can take place offline: you sign the contents of the zone file whenever you modify them, and then hand out static results. Negative answers are harder: you can't presign them all, and signing is expensive enough that letting an adversary make you do arbitrary signings can lead to DoS attacks. And you have to authenticate denials, or an adversary could poison lookups with long-lived denials.

Along came NSEC. NSEC permitted a denial response to cover an entire range (e.g., there are no hosts between wardialer.csoandy.com and www.csoandy.com). Unfortunately, this made it trivial to gather the contents of a zone: after you get one range, simply ask for the next alphabetical host (wwwa.csoandy.com) and learn what the next actual host is (andys-sekrit-ipad.csoandy.com). From a pre-computation standpoint, NSEC was great - there are the same number of NSEC signed responses in a zone as all other signatures - but from a secrecy standpoint, NSEC destroyed what little obscurity existed in DNS.

NSEC3


NSEC3 is the update to NSEC. Instead of providing a range in which there are no hostnames, a DNS server publishes a hashing function, and a signed range in which there are no valid hashes. This prevents an adversary from easily collecting the contents of the zone (as with NSEC), but does allow them to gather the size of the zone file (by making queries to find all of the unused hash ranges), and then conduct offline guessing at the contents of the zone files (as Dan Bernstein has been doing for a while). Enabling offline guessing makes a significant difference: with traditional DNS, an adversary must send an arbitrarily large number of queries (guesses) to a name server (making them possibly detectable); with NSEC, they must send as many queries as there are records; and with NSEC3, they must also send the same number of requests as there are records (with some computation to make the right guesses), and then can conduct all of their guessing offline.
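The offline phase of the attack looks roughly like this sketch. The hash computation (iterated SHA-1 over the wire-format name plus a salt) follows RFC 5155; the salt, iteration count, zone names, and guess list are made up for illustration, and a real attacker would first harvest the hashes by walking the NSEC3 hash ranges.

```python
import hashlib

def wire_format(name: str) -> bytes:
    """DNS wire format: each label prefixed by its length, terminated by 0."""
    out = b""
    for label in name.rstrip(".").lower().split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def nsec3_hash(name: str, salt: bytes, iterations: int) -> bytes:
    """RFC 5155 hash: iterated SHA-1 over the wire-format name plus salt."""
    digest = hashlib.sha1(wire_format(name) + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return digest

# Parameters published by the zone; values here are illustrative.
salt, iterations = bytes.fromhex("aabbccdd"), 10

# Hashes the attacker harvested from NSEC3 responses (normally opaque).
harvested = {nsec3_hash(n, salt, iterations)
             for n in ["www.example.com", "wardialer.example.com"]}

# Offline dictionary attack: no further queries to the name server.
for guess in ["mail.example.com", "www.example.com", "vpn.example.com"]:
    if nsec3_hash(guess, salt, iterations) in harvested:
        print("recovered:", guess)
```

Once the hashes are in hand, guessing speed is limited only by the attacker's own hardware, which is exactly the step down in secrecy the next paragraph describes.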

While NSEC3 is an improvement from NSEC, it still represents a small step down in zone file secrecy. This step is necessary from a defensive perspective, but it makes one wonder if this is the best solution: why do we still have the concept of semi-secret public DNS names? If we have a zone file we want to keep secret, we should authenticate requests before answering. But until then, at least we can make it harder for an adversary to determine the contents of a public zone.

"Best" practices in zone secrecy


If you have a zone whose contents you want to keep obscure anyway, you should consider the following:
  • Limit access to the zone, likely by IP address.
  • Use randomly generated record names, to make offline attacks such as Dan Bernstein's more difficult.
  • Fill your zone with spurious answers, to send adversaries on wild goose chases.
  • Instrument your IDS to detect people trying to walk your zone file, and give them a different answer set than you give to legitimate users.


Jason Bau and John Mitchell, both of Stanford, have an even deeper dive into DNSSEC and NSEC3.

Credit Card Tokenization

Every merchant that takes credit cards undergoes an annual ritual known as their PCI audit. Why? Because merchants take credit cards from end users, and the Payment Card Industry Data Security Standard (PCI-DSS) requires the annual audit of all environments that store cardholder data. Unfortunately, PCI audits are complicated, expensive, and distracting. But new technology may make them less necessary.

Consider what the credit card really is: It's a promise of payment; an agreement between an end user and their bank (and, by proxy, the entire card network) that enables the user to promise funds to a merchant. That's also the weakness of the system: the 16 digits of the card (plus a few other data points, like expiration date, CVV, and name) are equivalent to a public currency.

And that currency needs to be stored in a lot of places, for a long period of time. For user convenience and increased revenue, merchants want to support repeat buying without card reentry, so they need to keep the card (and to support chargebacks). For fraud detection, merchants want to check for card overuse. This leads to a threat to the currency, and much of the purpose behind the PCI-DSS: cards at rest are a valuable and omnipresent target.

Tokenization


Tokenization is a technology that replaces the credit card (a promise between a user and their bank) with a token (a promise between a merchant and their bank). With tokenization, when presented with a credit card, a merchant immediately turns it over to their payment gateway, and receives a token in exchange. The token is only useful to that merchant when talking to their gateway; the public currency (credit card usable anywhere) has been converted to private scrip (token tied to merchant and gateway).

The gateway has to maintain a mapping between tokens and credit cards, so that when a merchant redeems a token, the gateway can convert the token to a credit card charge. But a merchant can use the token in their internal systems for anything they would have used a credit card for in the past: loyalty programs, fraud detection, chargebacks, or convenience.
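The gateway-side mapping can be sketched as follows. This is an illustrative model, not any real payment gateway's API: the class name, method names, and card number are all made up. The key property is in the vault's lookup key, which ties each token to the merchant that created it.

```python
import secrets

class GatewayVault:
    """Toy model of a payment gateway's token vault."""

    def __init__(self) -> None:
        # (merchant_id, token) -> card number; the card never leaves here.
        self._vault: dict[tuple[str, str], str] = {}

    def tokenize(self, merchant_id: str, card_number: str) -> str:
        """Exchange a card for an opaque token tied to this merchant."""
        token = secrets.token_hex(16)
        self._vault[(merchant_id, token)] = card_number
        return token

    def charge(self, merchant_id: str, token: str, amount_cents: int) -> bool:
        """Redeem a token: only valid for the merchant that created it."""
        card = self._vault.get((merchant_id, token))
        if card is None:
            return False  # unknown token, or a token from another merchant
        # ... submit amount_cents against `card` to the card network ...
        return True

gateway = GatewayVault()
token = gateway.tokenize("merchant-a", "4111111111111111")
print(gateway.charge("merchant-a", token, 1999))   # True
print(gateway.charge("merchant-b", token, 1999))   # False: private scrip
```

Note that the merchant's systems only ever see the token, which is exactly what lets them use it freely for loyalty, fraud detection, or chargebacks without dragging those systems into PCI scope.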

Tokenization isn't a panacea for a merchant. The merchant still has a point in their infrastructure that takes credit cards, and that point is subject to PCI-DSS. If the merchant has a brick and mortar operation, PCI-DSS requirements will still apply there (although some POS terminals do support tokenization). But it can take a merchant a long way to defending their customers' credit cards, and represents several steps forward to a model where merchants don't need to see a credit card at all.

Skynet or The Calculor?

Technology-based companies can either be organized around silicon (computers), or carbon (people). Depending on which type of company you are, security practices are, by necessity, different. So why then do we delude ourselves into thinking that there is one perfect set of security practices?

In a silicon-based organization, business processes are predominantly driven by predefined rules, implemented in computing systems (think: call centers). Where humans serve a role, it is often as an advanced interactive voice recognition (IVR) system, or as an expert system to handle cases not yet planned for (and, potentially, to make your customers happier to talk to a human). Security systems in this type of a world can be designed to think adversarially about the employees -- after all, the employees are to be constrained to augment the computing systems which drive the bulk of the business.

In a carbon-based organization, business processes are organized around the human, and are typically more fluid (think: software engineering, marketing). The role of computers in a carbon-based organization is to augment the knowledge workers, not constrain them. The tasks and challenges of a security system are far different, and require greater flexibility -- after all, employees are to be supported when engaging in novel activities designed to grow the business.

Consider a specific problem: data leakage. In a silicon-based organization, this is a (relatively) simple problem. Users can be restricted in exporting data from their systems, consumer technology more advanced than pen and paper can be banned from the facility, and all work must be done in the facility. Layer on just-in-time access control (restricting a user to only access records that they have been assigned to), and you've limited the leakage ... to what a user can remember or write down. And that's still a problem: twenty years ago, I worked in a company that used social security numbers as the unique identifier for its employees. Two decades later, a handful of those numbers are still rattling around in my head, a deferred data leakage problem waiting to happen.

Now compare that simple scenario against a knowledge worker environment. People are very mobile, and need to access company data everywhere. Assignments show up by word of mouth, so users need quick access to sources of data they had not seen before. Users are on multiple platforms, even when performing the same job, so system and application management is a challenge. Interaction with customers and partners happens rapidly, with sensitive data flying across the wires. Trying to prevent data leakage in this world is a Herculean task. Likely, given the impossibility of the task, most security implementations here will reduce business flexibility, without significantly reducing business risk.

But what can the enterprising security manager do? First, understand your business. If it's a silicon-based organization, you're in luck. Most security vendors, consultants, and best practices guides are looking to help you out (and take some of your money while doing so). If, on the other hand, you're in a carbon-based business, you've got a much harder task ahead of you. Most solutions won't help a lot out of the box, and risk acceptance may be so endemic to your organizational mindset that changing it may well feel impossible. You'll need to focus on understanding and communicating risk, and designing novel solutions to problems unique to your business. Sounds hard, but take heart: it's not like the silicon-based security team is going to get it right, either.

Contracting the Common Cloud

After attending CSO Perspectives, Bill Brenner has some observations on contract negotiations with SaaS vendors. While his panel demonstrated a breadth of customer experience, it was, unfortunately, lacking in a critical perspective: that of a cloud provider.

Much of the point of SaaS, or any cloud service, is the economy of scale you get; not just in capacity, but also in features. You're selecting from the same set of features that every other customer is selecting from, and that's what makes it affordable. And that same set of features needs to extend up into the business relationship. As the panel noted, relationships around breach management, data portability, and transport encryption are all important, but if you find yourself arguing for a provider to do something it isn't already, you're likely fighting a Sisyphean battle.

But how did a customer get to that point? Enterprises typically generate their own control frameworks, in theory beginning from a standard (like ISO 27002), but then redacting out the inapplicable (to them), tailoring controls, and adding in new controls to cover incidents they've had in the past. And when they encounter a SaaS provider who isn't talking about security controls, the natural tendency is to convert their own control framework into contractual language. Which leads to the observations of the panel participants: it's like pulling teeth.

A common request I've seen is the customer who wants to attach their own security policy - often a thirty to ninety page document - as a contract addendum, and require the vendor to comply with it, "as may be amended from time to time by Customer". And while communicating desires to your vendors is how they'll decide to improve, no cloud vendor is going to be able to satisfy that contract.

Instead, vendors need to be providing a high-water mark of business and technology capabilities to their customer base. To do this, they should start back from those original control frameworks, and not only apply them to their own infrastructure, but evaluate the vendor-customer interface as well. Once implemented, these controls can then be packaged, both into the baseline (for controls with little or no variable cost), and into for-fee packages. Customers may not want (or want to pay for) all of them, but the vendors need to be ahead of their customers on satisfying needs. Because one-off contract requirements don't scale. But good security practices do.