
The Future of The Internet -- and how to secure it

Once, there was an Internet. And it was a happy place with no security concerns whatsoever, because only a dozen or so people got to use it.

That fairy tale is not the world we live in today, and thanks to high profile problems like Heartbleed and Shellshock, more people recognize it. Unfortunately, some of the design ethos from that fantasyland still impacts us. The web isn’t secure for the uses it sees today—and HTTP was never designed to be. SSL, intended to provide a secure connection layer between systems, has evolved through multiple versions into TLS, each attempting to reduce the vulnerabilities of the prior.
The vulnerabilities and problems of HTTPS, while not numberless, are legion. And each of these vulnerabilities presents an opportunity for an adversary to defeat the goals of Internet users—whether they seek financial security, privacy from government surveillance, or network agnosticism.

What is HTTPS, anyway?

HTTPS isn't a standalone protocol; it is HTTP layered over TLS, two separate protocols isolated from one another. The effects of one protocol's actions on the other are rarely studied as closely as the protocols themselves. That isolation has led to vulnerabilities—using compression in HTTP to improve transfer speed is good, except that the secrecy goals of TLS can be subverted through variable-sized content, as in the BREACH security exploit.

Who do you trust?

TLS certificates are issued by certificate authorities (CAs); these CAs sign the certificates that a website presents to its users to ‘prove’ who they are. You could almost consider them like a driver’s license—issued by some authority. But who are these authorities? They are the dozens of entities—some commercial, some governmental—who are trusted by our browsers. Unlike a driver’s license, any trusted CA can issue a certificate for any website—it’s like having your local library issue an ID card for a Pentagon employee, or one government issue certificates for another government’s website.
Illegitimately gaining a trusted certificate can be achieved through at least three distinct paths:
  • compromise a CA publishing interface, either directly or by compromising a user’s credentials;
  • for Domain Validated certificates, have publication control of the website that the CA can observe (by compromising DNS, the publication interface, or the server directly); or
  • by modifying the browser’s list of trusted certificates. This is a common practice in many enterprises, to enable the enterprise to run a CA for their own websites, or to deploy a web filtering proxy. But these CAs are then able to issue certificates to any website.
Once an adversary has a certificate, they merely need to also become a ‘man in the middle’ (MITM), able to intercept and modify traffic between a client and a server. With this power set, they are able to read and modify all traffic on that connection.
Certificate Transparency (CT) is an initiative to begin monitoring and auditing the CAs to determine whether they have issued rogue certificates, and to provide browsers an interface to collectively validate certificates. This may lead to a reduction in the number of trusted CAs to only those that don’t behave in a rogue fashion. There is another possible mitigation called DANE (DNS-based Authentication of Named Entities), in which information about the valid certificates or authorities for a hostname or domain is published through DNS and signed with DNSSEC, reducing the number of trusted entities who can publish SSL keys.

I can haz TLS?

Until recent versions of TLS incorporated Server Name Indication (SNI), a server was required to first present the certificate that declared for which hosts it was able to conduct an HTTPS session. This meant that no IP address could have more than one certificate. In HTTP, a single IP address can, through virtual hosting, serve many hostnames, as the client signals to the server which hostname it would like a web page from. While the advent of multi-domain certificates has allowed multiple hostnames, it hasn’t provided the freedom to have ‘unlimited’ TLS-secured hostnames. SNI is an extension to TLS that provides this capability, allowing a browser to tell a server which hostname it is connecting to, and therefore which certificate it would like presented.
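As a minimal sketch of what SNI looks like from the client side, here is how Python’s standard ssl module populates the extension; the hostname is just a placeholder:

    import socket
    import ssl

    # server_hostname populates the SNI extension in the ClientHello,
    # letting a server with many virtual hosts select the matching
    # certificate before the handshake completes.
    context = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.getpeercert()["subject"])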
But SNI isn’t supported by all browsers—most notably, Internet Explorer on Windows XP and the stock browser on early versions of Android. The former is on its way out, but the latter is still being deployed on lower-end feature phones, especially in the developing world. And unfortunately, no good strategies are available today for supporting both SNI and non-SNI clients. Until either SNI is fully supported, or IPv6 adoption achieves critical mass, many websites will not be able to offer HTTPS.

TLS is only Transport Layer Security

Often, a client isn’t talking directly to the content provider—there is some other entity in the middle. It might be an enterprise proxy; it might be a network operator gateway; it might be a content delivery network. In these cases, the TLS connection only provides secrecy on the first leg—the client has to hope that the secrecy is preserved across the public Internet. Few of the mid-point entities provide any assertions about how they’ll handle the security of the forward connections prompted by a TLS connection; some even advertise the convenience of having the ‘flexibility’ to downgrade from HTTPS to merely HTTP.
Until HTTP contains a signaling mechanism through which the mid-points can communicate about the TLS choices they’ve made, a client will not know whether a TLS connection is robust (or even exists!) across public links.

TLS isn’t privacy

TLS provides encryption for the information contained inside a request, thus hiding the specific content you’re engaging with. It’s useful for hiding the specific details of similarly shaped data, like social security numbers or credit cards, but very poor at hiding things like activism or research. The design of the system doesn’t conceal the ‘shape’ of your traffic—and the Wikipedia pages for Occupy Central have a different shape than the Wikipedia page for the Large Hadron Collider. It also doesn’t prevent traffic analysis—while the contents of a user-generated video may be secret, the identities of the systems (and hence the users) that uploaded and downloaded it aren’t. Some privacy systems, like Tor, may provide useful protections, but at the cost of performance.

Don’t trust the lock

Altogether, the architecture of TLS and HTTPS doesn’t provide enough safety against all adversaries in all situations. There are some steps underway that will improve safety, but many hazards will still remain, even absent the highly publicized implementation defects. But these steps will increase the cost for adversaries, sometimes in measurable and observable ways.
That lock icon in your browser is useful for securing your commerce and finances, but be cautious about trusting it with your life.

This article originally appeared in The Internet Monitor 2014: Reflections on the Digital World.

Dancing Poodles

SSL is dead, long live TLS

An attack affectionately known as “POODLE” (Padding Oracle On Downgraded Legacy Encryption) should put a stake in the heart of SSL, and move the world forward to TLS. There are two interesting vulnerabilities: POODLE, and the SSL/TLS versioning fallback mechanism. Both of these vulnerabilities are discussed in detail in the initial disclosure; Daniel Franke’s How Poodle Happened provides the history lesson.

POODLE

POODLE is a chosen-plaintext attack similar in effect to BREACH; an adversary who can trigger requests from an end user can extract secrets from the sessions (in this case, encrypted cookie values). This happens because the padding on SSLv3 block ciphers (used to fill out a record to a full block size) is not verifiable - it isn’t covered by the message authentication code. This allows an adversary to alter the final block in ways that slowly leak information, based on whether the alteration survives verification, revealing *which* bytes are interesting. Thomas Pornin independently discovered this, and published at StackExchange.
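To make the flaw concrete, here is a conceptual sketch (Python, not a real SSL stack) contrasting SSLv3’s padding check with TLS’s; SSLv3 inspects only the final length byte, leaving the padding bytes themselves attacker-malleable:

    def sslv3_strip_padding(plaintext: bytes, block_size: int) -> bytes:
        pad_len = plaintext[-1]
        if pad_len >= block_size:              # the only check SSLv3 mandates
            raise ValueError("bad_record_mac")
        # The pad_len bytes before the length byte are never inspected -
        # this is the malleability POODLE exploits.
        return plaintext[:-(pad_len + 1)]

    def tls_strip_padding(plaintext: bytes) -> bytes:
        pad_len = plaintext[-1]
        # TLS requires every padding byte to equal pad_len, removing
        # that malleability.
        if plaintext[-(pad_len + 1):-1] != bytes([pad_len]) * pad_len:
            raise ValueError("bad_record_mac")
        return plaintext[:-(pad_len + 1)]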

On its own, POODLE merely makes certain cipher choices no longer as trustworthy. Unfortunately, these were the last ciphers that were even moderately trustworthy - the other ciphers available in SSLv3 having fallen into untrustworthiness due to insufficient key size (RC2, DES, Export ciphers); cryptanalytic attacks (RC4); or a lack of browser support (RC2, SEED, Camellia). The POODLE attack takes out the remaining two trustworthy options (3DES and AES), and covers SEED and Camellia as well, so we can’t advocate for those either.

One simple answer is for all systems to stop using these cipher suites, effectively deprecating SSLv3. Unfortunately, it isn’t that easy - there are both clients and servers on the Internet that still don’t support the TLS protocols or ciphersuites. To support talking to these legacy systems, an entity may not be able to just disable SSLv3; instead, they’d like to talk SSLv3 with those that only support SSLv3, but ensure that they use the best available TLS version with everyone else. And that’s where the next vulnerability lies.

SSL/TLS Version Selection Fallback

We’ve probably all encountered - either in real life or in fiction - two strangers attempting to find a common language in which to communicate. Each one proposes a language, hoping to get a response, and, if they fail, they move on to the next. Historically, SSL/TLS protocol version selection behaved that way - a client would suggest the best protocol it could; but if it hit an error - even one as simple as dropped packets - it would try again, with the next best version. And then the next best … until it settled on a pretty bad version.

This is a problem if there’s an adversary in the middle, who doesn’t want you picking that “best” language, but would much prefer that you pick something that they can break (and we now know that, since all of the ciphers available in SSLv3 are breakable, merely getting down to SSLv3 is sufficient). All the adversary has to do is block all negotiations until the client and server drop down to SSLv3.

There is a quick fix: merely disable SSLv3. This means that if an adversary succeeds at forcing a downgrade, the connection will fail - the server will think it’s talking to a legacy client, and refuse the connection. But that’s merely a solution for the short-term problem of POODLE, because there are other reasons an adversary might want to trigger a protocol version downgrade today (e.g., to eliminate TLS extensions) or in the future (when TLS1.0 ciphers are all broken). A longer-term fix is the TLS Fallback Signaling Cipher Suite Value (SCSV). This is a new “cipher suite” that a client includes to signal that this connection is a downgrade retry. Servers that support SCSV don’t actually treat it as a cipher to choose from (what cipher suites normally list); instead, if the protocol version offered alongside the SCSV is *worse* than the best protocol version that the server supports, the server treats this connection as one that has been attacked, and fails it. A client only sends the SCSV if it has already been forced to downgrade; it’s a way of signaling “I tried to connect with a better protocol than I think you support; if you did support it, then somebody is messing with us.”
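A sketch of the server-side check, with protocol versions modeled as plain integers and cipher suites as their code points (a deliberate simplification of the real handshake; the code point is the one later standardized in RFC 7507):

    TLS_FALLBACK_SCSV = 0x5600  # {0x56, 0x00} on the wire

    def reject_inappropriate_fallback(client_version: int,
                                      client_cipher_suites: set[int],
                                      server_max_version: int) -> None:
        # The SCSV says "this ClientHello is a downgrade retry."  If the
        # server could have spoken a higher version than the client is
        # now offering, something in the middle forced the downgrade.
        if (TLS_FALLBACK_SCSV in client_cipher_suites
                and client_version < server_max_version):
            raise ConnectionError("inappropriate_fallback: refusing connection")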

So POODLE should put a stake most of the way through SSL’s heart, and SCSV will help us keep it there. Long live TLS.

Crossposted on the Akamai Blog.

Ninja Management

Kacy Catanzaro is the first woman to qualify for the American Ninja Warrior Finals. Her qualifying run in Dallas is not merely an athletic marvel, but also demonstrates a useful set of skills and practices for anyone tackling large and complex tasks.

Consider the structure of the course: a set of challenges, each one generally more difficult than the one before it. And when each challenge is completed, you move on to the next one. Note that the course isn’t a single challenge (although many other competitors approached it as one), and it is a lot like management - especially incident-based management. We often work on an urgent project, complete it, and then move on to the next project. Or worse - the project extends for a long time, and we *don’t* treat it as a series of challenges. Sometimes, we use similar skills, competencies, and people, and wear them out. (Go watch some other competitors, and see how often their runs come to an end because certain overused muscle groups give out.)

Let’s consider Kacy’s approach. After each challenge, she undergoes some subset of the following ritual: celebration, gratitude, recovery, and preparation.

Celebration

It’s important when we finish some task to celebrate. To acknowledge that we just did a hard thing, and defeated it. This gives us mental closure (“I totally beat that!”), as well as builds up our energy level (“I beat this hard thing, the next hard thing can’t be that bad”). It gives us the mindset of winners (“I get things done”) instead of the oppressed (“I have a never-ending set of challenges”). You can see Kacy celebrate after clearing each challenge - even just a little fist-pump acknowledges her success at the previous challenge and gets her ready to face the next one.

Gratitude

Even when we complete a project “on our own,” we often receive a lot of help that can be easy to overlook - people helped train us, took work off our plate, cheered us on. And when we don’t do the work on our own - when many people contributed to an accomplishment - we should express our gratitude. We should remind them that their work is valued, and that when they do work on our behalf, we appreciate it. There is a shortage of gratitude in the world; recipients of gratitude will react strongly and positively to the feedback. You see Kacy thanking the crowd and her boyfriend for their support after many of the challenges.

Recovery

Challenges are *hard*; if they were easy, we’d call them cakewalks. We use (and abuse) the resources at our disposal - our bodies, our coworkers, our families, and our systems. After taking advantage of these things, we need to acknowledge the damage, and take even small steps to repair it. That might be taking a day off, reconnecting with people we’ve ignored, taking care of ongoing maintenance, or merely relaxing for a while. Kacy is focused on finishing, yet she takes the time to let overtaxed muscles rest and recover before asking more of them.

Preparation

After finishing a challenge, we will often face another challenge. It’s not often the same challenge, even if it looks a bit similar. Or it may look wildly different, but be addressable with strikingly similar strategies. Either way, we need to take the time to think through how we’re going to tackle this challenge, and then go execute. Watch how Kacy plans her approach to the next challenge before she tackles it, rather than jumping into it blindly.

Life will present us with many sequences of challenges, some masked as single large challenges, others clearly separated. Taking the time to recharge ourselves and our fellow participants will increase not only our effectiveness at any given task, but also our ability to continue to operate efficiently over time. These four rituals are an easy rubric to apply in almost any situation; like Kacy, we can use them to overcome the obstacles in our path.

The Brittleness of the SSL/TLS Certificate System

Despite the time and inconvenience caused to the industry by Heartbleed, its impact does provide some impetus for examining the underlying certificate hierarchy. (As an historical example, in the wake of CA certificate misissuances, the industry looked at one set of flaws: how any one of the many trusted CAs can issue certificates for any site, even if the owner of that site hasn't requested them to do so; that link is also a quick primer on the certificate hierarchy.)

Three years later, one outcome of the uncertainty around Heartbleed - that any certificate on an OpenSSL server *might* have been compromised - is the mass revocation of thousands of otherwise valid certificates.  But, as Adam Langley has pointed out, the revocation process hasn't really worked well for years, and it isn't about to start working any better now.

Revocation is Hard

The core of the problem is that revocation wasn't designed for an epochal event like this; it's never really had the scalability to deal with more than a small number of actively revoked certificates.  The original revocation model was organized around each CA publishing a certificate revocation list (CRL): the list of all non-expired certs the CA would like to revoke.  In theory, a user's browser should download the CRL before trusting the certificate presented to it, and check that the presented certificate isn't on the CRL.  In practice, most don't, partly because HTTPS isn't really a standalone protocol: it is the HTTP protocol tunneled over the TLS protocol.  The signaling between these two protocols is limited, and so the revocation check must happen inside the TLS startup, making it a performance challenge for the web, as a browser waits for a CA response before it continues communicating with a web server.

CRLs are a problem not only for the browser, which has to pull the entire CRL when it visits a website, but also for the CA, which has to deliver the entire CRL when a user visits one site.  This led to the development of the online certificate status protocol (OCSP).  OCSP allows a browser to ask a CA "Is this specific cert still good?" and get an answer "That certificate is still good (and you may cache this message for 60 minutes)."  Unfortunately, while OCSP is a huge step forward from CRLs, it still leaves in place the need to not only trust *all* of the possible CAs, but also make a real-time call to one during the initial HTTPS connection.  As Adam notes, the closest thing we have in the near term to operationally "revocable" certs might be OCSP-Must-Staple, in which the OCSP response (signed by the CA) is actually sent to the browser from the HTTPS server alongside the server's certificate.
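As an illustration of the OCSP exchange, here is a sketch using the pyca/cryptography and requests packages; the file names and responder URL are placeholders (a real client reads the responder URL from the certificate's Authority Information Access extension):

    import requests
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.serialization import Encoding
    from cryptography.x509 import ocsp

    # Placeholder files: the certificate being checked, and its issuer's cert.
    cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())
    issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

    # "Is this specific cert still good?"
    req = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1()).build()
    resp = requests.post("http://ocsp.example-ca.test",  # placeholder responder URL
                         data=req.public_bytes(Encoding.DER),
                         headers={"Content-Type": "application/ocsp-request"})

    # "That certificate is still good (and you may cache this answer)."
    print(ocsp.load_der_ocsp_response(resp.content).certificate_status)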

One Possible Future

A different option entirely might be to move to DANE (DNS-based Authentication of Named Entities).  In DANE, an enterprise places into its DNS zone file a record which specifies the exact certificate (or set of certificates, or CA which can issue certificates) that is valid for a given hostname.  This record is then signed with DNSSEC, and a client would then only trust that specific certificate for that hostname. (This is similar to, but slightly more scalable than, Google's certificate pinning initiative.)
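For illustration, a sketch of deriving the payload of the common "3 1 1" form of TLSA record (end-entity certificate, SubjectPublicKeyInfo, SHA-256), again assuming the pyca/cryptography package and a placeholder certificate file:

    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())  # placeholder
    spki = cert.public_key().public_bytes(Encoding.DER,
                                          PublicFormat.SubjectPublicKeyInfo)
    digest = hashlib.sha256(spki).hexdigest()

    # Published in the zone (and signed with DNSSEC) as, for example:
    #   _443._tcp.www.example.com. IN TLSA 3 1 1 <digest>
    print(digest)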

DANE puts more trust into the DNSSEC hierarchy, but removes all trust from the CA hierarchy.  That might be the right tradeoff.  Either way, the current system doesn't work and, as Heartbleed has made evident, doesn't meet the web's current or future needs.

(Footnote: I haven't touched on Certificate Transparency or HSTS here; both are somewhat orthogonal to this problem.)

This entry crossposted at blogs.akamai.com.

On Cognition

Context: Keynoting at ShowMeCon.

Here are my integrated slides (in PDF, with each build as its own slide) for my combined talk on how to apply some of the knowledge and research from the world of cognitive science to organizational thinking, especially in the context of the security profession. This reading list still applies, to which I’ve added Traffic, by Tom Vanderbilt.

In brief: We often model other humans as one-dimensional caricatures - because it’s efficient for our brain to do so; because it’s hard to think in a different mode than our own; because we don’t see ourselves as the villain in our own story. To be effective partners in our organizations, we have to understand not only how this affects other people, but how it affects ourselves; and then rewire our behavior to move past these caricatures and have dialogues that change behavior, not just reinforce stereotypes.

Closing the Skills Gap

This morning, I sat in on a panel titled “Closing the Cybersecurity Skills Gap.” Javvad Malik has curated a collection of tweet observations; I thought I’d expand and share a few of my own observations:

The “skills gap” that recruiters see isn’t the right one. Often, we hear about the skills gap from recruiters in the context of “I couldn’t find candidates that met your requirements.” But if the requirements include “15 years of experience securing Windows 7,” we don’t have a skills gap; we have a problem writing job descriptions.

An often missing skill is relating to the business. Our jobs as security professionals often put us at odds with the business. Why? Because we strive to be the “conscience of the business” and stop it from taking certain risks. Our job is to help the business take risks - but to do so more wisely, through actionable knowledge. Since our business partners are the ones making the decisions to take risks, they are the ones who need to understand the risks that impact their decisions.

Think systematically. Many of our training programs focus on providing building block skills; this gives people hammers (so that all problems look like nails). An underdeveloped skill is the ability to think holistically about systems.

Problem solving is a needed skill. Not simply identifying how to solve a problem; actually digging in and solving a problem is a critical skill. It might require building servers, installing applications, designing processes, analyzing data, and a dozen other sundry capabilities.

Communications and translations are key. Every job function has its own jargon - and being able to communicate in the jargon of the business is a critical capability. Having the ability to quickly learn about how current events or new technologies will affect your business, and then provide coherent summaries and advice to the business will be extremely helpful.

Be kind. The security profession has often celebrated being unkind and hurtful to each other and our business partners (think of The Wall of Sheep). Instead, we should be trying to understand them; to be helpful to them; and to understand how we can improve their world.

And some thoughts from prior blogposts: Certification isn’t a marker of mastery. Think about measuring value. Are you applying your skills to just compliance, or solving security problems like awareness training in novel ways?

Cognitive Injection: A reading list

Context: Tuesday, February 25th, I’m presenting “Cognitive Injection: Reprogramming the Situation-Oriented Human OS” at RSAC in Moscone West, Room 3005 at 4 pm (Pacific). My slides are here.

I’ve formed my opinion about how the human brain works with the assistance of some great contributors. Some of them are humans I hang out with, but many of them are authors and researchers; in the interest of helping others come to the same, or better, understanding, here’s a short reading list:
  • Daniel Kahneman; Thinking, Fast and Slow
  • James Reason; Human Error
  • Atul Gawande; The Checklist Manifesto
  • Christopher Chabris and Daniel Simons; The Invisible Gorilla
  • Sam Peltzman; “The Effects of Automobile Safety Regulation”, Journal of Political Economy, 1975. (see also: The Peltzman Effect)
  • Tom Vanderbilt; Traffic

Whither HSMs (in the cloud)

Hardware Security Modules (HSMs) are physical devices attached or embedded in another computer to handle various cryptographic functions. HSMs are supposed to provide both physical and logical protection of the cryptographic material stored on the HSM while handling cryptographic functions for the computer to which they are attached.

As websites move to the cloud, are HSMs the right way to achieve our goals?

Before we talk about goals, it is useful to consider a basic model for talking about them. Our Safety team often uses the following model to consider whether a system is safe:
  • What are the goals we are trying to achieve? (Or, in Leveson's STPA hazard-oriented view, what are the accidents/losses which you wish to prevent?)
  • What are the adversaries we wish to defeat?
  • What are the powers available to those adversaries? What *moves* are available to them?
  • And finally, what controls inhibit adversaries' use of their powers, thus protecting our goals?
Our hazards (or unacceptable losses) are:
  • An adversary can operate a webserver that pretends to be ours;
  • An adversary can decrypt SSL traffic; and
  • An adversary can conduct a man-in-the-middle attack on our SSL website.
In the protection of SSL certificates in the cloud, it would seem that our goals are two-fold:
  • Keep the private key *secret* from third parties; and
  • Prevent unauthorized and undetected use of the key in cryptographic functions. While SSL certificate revocation is a weak control (many browsers do not check for revocation), it is that which generally constrains this goal to both unauthorized *and* undetected; a detected adversary can be dealt with through revocation.
I could argue that the first is a special case of the second, except that I want to distinguish between "cryptographic functions over the valid lifetime of the certificate" and "cryptographic functions after the certificate is supposed to be gone."

As an aside, I could also argue that these goals are insufficient; after all, except for doing man in the middle attacks, *any* SSL certificate signed by any of the many certificate authorities in the browser store would enable an adversary to cause the first of the losses. HSMs don't really help with that problem.

Given that caveat, what are the interesting adversaries? I propose four "interesting" adversaries, mostly defined by their powers:
  • The adversary who has remotely compromised a server;
  • The adversary who has taken physical control of a server which is still online;
  • The adversary who has taken physical control of a server at end of life; and
  • The adversary who has been given administrative access to a system.
The moves available to these adversaries are clear:
  • Copy key material (anyone with administrative access);
  • Change which key material or SSL configuration we'll use (thus downgrading the integrity of legitimate connections);
  • Escalate privileges to administrative access (anyone with physical or remote access); and
  • Make API calls to execute cryptographic functions (anyone with administrative access).
What controls will affect these adversaries?
  • Use of an HSM will inhibit the copying of keying material;
  • Use of revocation will reduce the exposure of copied keying material;
  • System-integrated physical security (systems that evaluate their own cameras and cabinets, for instance) inhibit escalation from physical access to administrative access;
  • Auditing systems inhibits adversary privilege escalation;
  • Encrypting keying material, and only providing decrypted versions to audited, online systems, inhibits adversaries with physical control of systems (a minimal sketch of this control follows this list).
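As a rough sketch of that last control (and not a description of any particular production design), keys can be stored wrapped, unwrapped only in the memory of an audited online system, and logged on every use; this assumes an RSA private key and the pyca/cryptography package, with illustrative names throughout:

    import logging
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.hazmat.primitives.serialization import load_pem_private_key

    logging.basicConfig(filename="key-usage.log", level=logging.INFO)

    class AuditedSigner:
        def __init__(self, wrapped_key_pem: bytes, wrapping_key: bytes):
            # The private key is stored wrapped; it exists in cleartext
            # only in this online process's memory.
            pem = Fernet(wrapping_key).decrypt(wrapped_key_pem)
            self._key = load_pem_private_key(pem, password=None)

        def sign(self, data: bytes, caller: str) -> bytes:
            # Logging every use is what makes misuse *detectable*, so
            # that revocation (a weak control, per above) has something
            # to act on.
            logging.info("sign: caller=%s bytes=%d", caller, len(data))
            return self._key.sign(data, padding.PKCS1v15(), hashes.SHA256())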
What I find interesting is that for systems outside the physical purview of a company, HSMs may have a subtle flaw: since HSMs must provide an API to be of use, *that API remains exposed to an adversary who has taken possession of an HSM*. While this may be a minor issue if an HSM is in a server in a "secure" facility, it becomes significant in distributed data centers. By contrast, a control system which includes tightly coupled local physical security, auditing, and software encryption may strike a different balance: slightly less stringent security against an adversary who can gain administrative access (after all, they can likely copy the keys), in exchange for greater security against adversaries who have physical access.

This isn't to say that this is the only way to assemble a control system to protect SSL keys; merely that a reflexive jump to an HSM-based solution may not actually meet the security goals that many companies might have.

(Full disclosure: I’m the primary inventor of Akamai’s SSL content delivery network, which has incorporated software-based key management for over a decade.)
Crossposted on The Akamai Blog.

Cognitive Injection

Context: I’m giving a talk today at noon at DerbyCon, entitled “Cognitive Injection: Reprogramming the Situation Oriented Human OS”. Slides are here.

It's a trope among security professionals that other humans - mere mundanes - don't 'get' security, and make foolish decisions. But this is an easy out, and a fundamental attribution error. Everyone has different incentives, motivators, and even perceptions of the world. By understanding this -- and how the human wetware has evolved over the last fifty thousand years or so -- we can redesign our security programs to better manipulate people.

Assessment of the BREACH vulnerability

The recently disclosed BREACH vulnerability in HTTPS enables an attack against SSL-enabled websites. A BREACH attack leverages the use of HTTP-level compression to gain knowledge about some secret inside the SSL stream, by analyzing whether an attacker-injected "guess" is efficiently compressed by the dynamic compression dictionary that also contains the secret. This is a type of attack known as an oracle, where an adversary can extract information from an online system by making multiple queries to it.
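A toy illustration of the oracle, assuming a hypothetical page that reflects an attacker's guess alongside a secret; a matching guess shares a compression dictionary with the secret, so response size leaks information:

    import zlib

    SECRET = "csrf_token=1f9a27"   # hypothetical secret embedded in the page

    def compressed_length(reflected_guess: str) -> int:
        # The attacker's reflected guess and the secret share one
        # compression dictionary; a correct guess "compresses away."
        page = f"<input value='{reflected_guess}'> ... {SECRET} ..."
        return len(zlib.compress(page.encode()))

    for guess in ("csrf_token=1", "csrf_token=2"):
        print(guess, compressed_length(guess))
    # The guess sharing the longer prefix with the secret tends to yield
    # the shorter response, leaking the secret one character at a time.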

BREACH is interesting in that it isn't an attack against SSL/TLS per se; rather, it is a way of compromising some of the secrecy goals of TLS by exploiting an application that will echo back user-injected data on a page that also contains some secret (a good examination of a way to use BREACH is covered by Sophos). There are certain ways of using HTTPS which make this attack possible, and others which merely make the attack easier.

Making attacks possible

Impacted applications are those which:
  • Include in the response body data supplied in the request (for instance, by filling in a search box);
  • Include in the response some static secret (token, session ID, account ID); and
  • Use HTTP compression.
For each of these enabling conditions, making it untrue is sufficient to protect a request. Therefore, never echoing user data, having no secrets in a response stream, or disabling compression are all possible fixes. However, making either of the first two conditions false is likely infeasible; secrets like Cross-Site Request Forgery (CSRF) tokens are often required for security goals, and many web experiences rely on displaying user data (hopefully sanitized to prevent application injection attacks). Disabling compression is possibly the only "foolproof" and straightforward means of stopping this attack - although it may be sufficient to disable compression only on responses with dynamic content. Responses which do not change between requests do not contain a user-supplied string, and therefore should be safe to compress.
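One way to express that policy is as a predicate at the front end that compresses only responses it believes are static; this sketch treats uncacheable responses as dynamic (the header names are standard, but the policy itself is an assumption, not a prescribed configuration):

    def should_compress(response_headers: dict[str, str]) -> bool:
        # Treat uncacheable responses as dynamic: they may echo user
        # input next to a secret, so leave them uncompressed.
        cache_control = response_headers.get("Cache-Control", "").lower()
        if "no-store" in cache_control or "private" in cache_control:
            return False
        # Only HTML is at issue here; images etc. are already compressed.
        return response_headers.get("Content-Type", "").startswith("text/html")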

Disabling compression is likely to be expensive - some back-of-the-envelope numbers from Guy Podjarny, Akamai's CTO of Web Experience, suggest a significant performance hit. HTML compresses by a factor of around 6:1, so disabling compression will increase bandwidth usage and latency accordingly. For an average web page, excluding HTML from compression will likely increase the time to start rendering the page by around half a second for landline users, with an even greater impact for mobile users.

Making attacks easier

Applications are more easily attacked if they:
  • Have some predictability around the secrets; either by prepending fixed strings, or having a predictable start or end;
  • Are relatively static over time for a given user; and
  • Use a stream cipher.
This second category of enablers is more challenging to evaluate for solutions. Particularly challenging is the question of how much secrecy each solution gains, and at what cost.

Altering secrets between requests is an interesting challenge - a CSRF token might be split into two dynamically changing values, which “add” together to form the real token (x * y = CSRF token). Splitting the CSRF token differently for each response ensures that an adversary can't pin down the actual token with an oracle attack, as sketched below. This may work for non-human-parseable tokens, but what if the data being attacked is an address, phone number, or bank account number? Splitting them may still be possible (using JavaScript to reassemble in the browser), but the application development cost to identify all secrets, and to implement protections that do not degrade the user experience, seems unachievable.
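Here is a sketch of the splitting idea, using XOR as the combining operation (the token value and function names are hypothetical):

    import os

    def split_token(token: bytes) -> tuple[bytes, bytes]:
        # A fresh random mask per response means neither emitted value is
        # stable across requests, leaving the oracle nothing fixed to
        # converge on.
        mask = os.urandom(len(token))
        masked = bytes(a ^ b for a, b in zip(token, mask))
        return mask, masked

    def join_token(mask: bytes, masked: bytes) -> bytes:
        # The browser (e.g., in JavaScript) recombines the two halves.
        return bytes(a ^ b for a, b in zip(mask, masked))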

Altering a page to be more dynamic, even between identical requests, seems possibly promising, and is certainly easier to implement. However, the secrecy benefit may not be as straightforward to calculate - an adversary may still be able to extract from the random noise some of the information they were using in their oracle. A different way to attack this problem might not be by altering the page, but by throttling the rate at which an adversary can force requests to happen. The attack still may be feasible against a user who is using wireless in a cafe all day, but it requires a much more patient adversary.

Shifting from a stream cipher to a block cipher is a simple change which increases the cost of setting up a BREACH attack (the adversary now has to “pad” attack inputs to hit a block size, rather than getting an exact response size). There is a slight performance hit (most implementations would move from RC4 to AES128 in TLS1.1).

Defensive options

What options are available to web applications?
  • Evaluate your cipher usage, and consider moving to AES128.
  • Evaluate whether supporting compression on dynamic content is a worthwhile performance/secrecy tradeoff.
  • Evaluate applications which can be modified to reduce secrets in response bodies.
  • Evaluate rate-limiting. Rate-limiting requests may defeat some implementations of this attack, and may be useful in slowing down an adversary.
How can Akamai customers use their Akamai services to improve their defenses?
  • You can contact your account team to assist in implementing many of these defenses, and discuss the performance implications.
  • Compression can be turned off by disabling compression for html objects. The performance implications of this change should be well understood before you make it, however (See the bottom of this post for specifics on one way to implement this change, limited only to uncacheable html pages, in Property Manager).
  • Rate-limiting is available to Kona customers.
  • Have your account team modify the cipher selections on your SSL properties.

Areas of exploration

There are some additional areas of interest that bear further research and analysis before they can be easily recommended as both safe *and* useful.
  • Padding response sizes is an interesting area of evaluation. Certainly, adding a random amount of data would at least help make the attack more difficult, as weeding out the random noise increases the number of requests an adversary would need to make. Padding to multiples of a fixed length is also interesting, but is also attackable, as the adversary can increase the size of the response arbitrarily until they force the response to cross an interesting boundary. A promising thought from Akamai's Chief Security Architect Brian Sniffen is to pad the response by a number of bytes based on the hash of the response (see the sketch after this list). This may defeat the attack entirely, but merits further study.
  • An alternative to padding responses is to split them up. Ivan Ristic points us to Paul Querna's proposal to alter how chunked encoding operates, to randomize various response lengths.
  • It may be that all flavors of this attack involve HTTPS responses where the referrer is an HTTP site. Limiting defenses to apply only in this situation may be fruitful - for instance, only disabling HTML compression on an HTTPS site if the referrer begins with "http://". Akamai customers with Property Manager enabled can make this change themselves (Add a rule: Set the Criteria to "Match All": "Request Header", "Referer", "is one of", "http://*" AND "Response Cacheability", "is", "no_store"; set the Behaviors to "Last Mile Acceleration (Gzip Compression)", Compress Response "Never". This requires you to enable wildcard values in settings.).
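Here is a sketch of the hash-based padding idea as I read it; the pad length is a deterministic function of the body, so a small attacker-induced change in content no longer maps to a predictable change in length (the parameter choices are illustrative, not recommendations):

    import hashlib

    def pad_length(body: bytes, max_pad: int = 64) -> int:
        # The pad varies with every change to the body, decoupling
        # content changes from predictable length changes.
        return int.from_bytes(hashlib.sha256(body).digest()[:2], "big") % max_pad

    def pad_response(body: bytes) -> bytes:
        # Appending the pad as an HTML comment keeps the page rendering intact.
        return body + b"<!--" + b"x" * pad_length(body) + b"-->"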

crossposted at blogs.akamai.com.