
September 11, 2012

Take Over, Bos'n!

Eleven years ago, Danny Lewin was murdered.

This is a story from before that -- and how Danny inspired me to change the web.

It starts about twelve years ago. Akamai had just launched EdgeSuite, our new, whole-site content delivery product. Instead of having to change the URLs on your objects to start with a7.g.akamai.net/v/7/13346/2d/, you could just CNAME your whole domain over, and we'd deliver everything - even the HTML. It was revolutionary, and would power our move to application delivery.

But Danny wasn't satisfied with that (Danny was rarely satisfied with anything, actually). I'd just become Akamai's Chief Security Architect - mostly focusing on protecting our own infrastructure - and Danny came to me and said, "What will it take to convince banks to use EdgeSuite?"

I'll be honest, I laughed at him at first. We argued for weeks about how paranoid bank security teams were, and why they'd never let their SSL keys be held by someone else. We debated what security model would scale (we even considered having 24x7 security guards outside our datacenter racks). We talked about the scalability of IP space for SSL. Through all of that, Danny was insistent that, if we built it, the market would accept it - even need it. I didn't really believe him at the time, but it was an exciting challenge. We were designing a distributed, lights-out security model in a world with no good guidance on how to do it. And we did.

But I still didn't believe - not the way Danny did. Then came the phone call. I'd been up until 4 am working an incident, and my phone rings at 9 am. It's Danny. "Andy, I'm here with [large credit card company], and they want to understand how our SSL network works. Can you explain it to them?"

I begged for thirty seconds to switch to a landline (and toss cold water on my face), and off we go. We didn't actually have a pitch, so I was making it up on the fly, standing next to the bed in my basement apartment, without notes. I talked about the security model we'd built - and how putting datacenter security into the rack was the wave of the future. I talked about our access control model, the software audits we were building, and our automated installation system. I talked for forty-five minutes, and when I was done, I was convinced - we had a product that would sell, and sell well (it just took a few years for that latter half to come true).

When I got off the phone, I went to my desk, and turned that improvisational pitch into the core of the security story I still tell to this day. More importantly, I truly believed that our SSL capability would be used by those financial services customers. Like Danny, I was wrong by about a decade - but in the meantime, we enabled e-commerce, e-government, and business-to-business applications to work better.

Danny, thanks for that early morning phone call.

"When you're bossman" he added, "in command and responsible for the rest, you- you sure get to see things different don't you?"

July 9, 2012

HITB Keynote

I recently keynoted at Hack in the Box 2012 Amsterdam. My topic was "Getting ahead of the Security Poverty Line", and the talk is below:

After giving the talk, I think I want to explore more about the set point theory of risk tolerance, and how to social engineer risk perception. Updated versions of this talk will appear at the ISSA conference in October, and at Security Zone in December.

December 13, 2011

Security Subsistence Syndrome

Wendy Nather, of The 451 Group, has recently discussed "Living Below the Security Poverty Line," which looks at what happens when your budget is below the cost to implement what common wisdom says are the "standard" security controls. I think that's just another, albeit crowd-sourced, compliance regime. A more important area to consider is the mindset of professionals who believe they live below the security poverty line:

Security[1] Subsistence Syndrome (SSS) is a mindset in an organization that believes it has no security choices, and is underfunded, so it minimally spends to meet perceived[2] statutory and regulatory requirements.

Note that I'm defining this mindset with attitude, not money. I think that's a key distinction - it's possible to have a lot of money and still be in a bad place, just as it's possible to operate a good security program on a shoestring budget. Security subsistence syndrome is about lowered expectations, and an attitude of doing "only what you have to." If an enterprise suffering from security subsistence syndrome can reasonably expect no one to audit their controls, then they are unlikely to invest in meeting security requirements. If they can do minimal security work and reasonably expect to pass an "audit"[3], they will do so.

The true danger of believing you live at (or below) the security poverty line isn't that you aren't investing enough; it's that you are spending time and money on templatized controls without really understanding the benefit they might provide - so you aren't generating security value, and you're probably letting down those who rely on you. When you don't suffer from security subsistence syndrome, you start to exercise discretion: implementing controls that might be qualitatively better than the minimum - and that sometimes come with lower long-term cost.

Security subsistence syndrome means you tend to be reactive to industry trends, rather than proactively solving problems specific to your business. As an example, within a few years, many workforces will likely be significantly tabletized (and by tablets, I mean iPads). Regulatory requirements around tablets are either non-existent, or impossible to satisfy; so in security subsistence syndrome, tablets are either banned, or ignored (or banned, and the ban is then ignored). That's a strategy that will wait to react to the existence of tablets and vendor-supplied industry "standards," rather than proactively moving the business into using them safely, and sanely.

Security awareness training is an example of a control that can reflect security subsistence syndrome. To satisfy the need for "annual security training," companies will often have a member of the security team stand up in front of employees with a canned presentation, and make them sign that they received the training. The signed pieces of paper go into the desk drawer of someone who hopes an auditor never asks to look at them. Perhaps the business instead uses an online computer-based training system, with a canned presentation that forces users to click through some links. Those are both ineffective controls, and worse, inefficient (90 minutes per employee in a 1500-person company is 2,250 hours -- more than a full FTE-year just to generate those pieces of paper!).

Free of the subsistence mindset, companies get creative. Perhaps you put security awareness training on a single, click-through webpage (we do!). That lets you drop the time requirement (communicating to employees that you value their time), and lets you focus on other awareness efforts - small fora, executive education, or targeted social engineering defense training. Likely, you'll spend less time and money on security awareness training, have a more effective program, and be able to demonstrate compliance trivially to an auditor.

Security subsistence syndrome is about your attitude, and the choices you make: at each step, do you choose to take the minimal, rote steps to satisfy your perceived viewers, or do you instead take strategic steps to improve your security? I'd argue that in many cases, the strategic steps are cheaper than the rote steps, and have a greater effect in the medium term.


[1] Nothing restricts this to security; likely, enterprise IT organizations can fall into the same trap.

[2] To the satisfaction of the reasonably expectable auditor, not the perfect auditor.

[3] I'm loosely defining audit here, to include any survey of a company's security practices; not just "a PCI audit."

September 9, 2011

The Spy Who Wasn't

By now, many of you have seen either an original article when Eliot Doxer was arrested, or a more recent article covering his guilty plea. As the articles (and the original complaint) note, Mr. Doxer, then an Akamai employee, reached out to the Israeli government, offering to sell information. His outreach was passed along to the FBI, who acted out a multi-year cloak-and-dagger scenario in which Mr. Doxer provided information -- he believed, to Israeli intelligence -- that instead went solely to the FBI. Early on, Akamai was alerted to the matter on a confidential basis, and we provided assistance over the years. Obviously, we can't go into detail about that.

What was this information?

Mr. Doxer was an employee in our Finance Department on the collections team, and, in the course of his job, he had routine and appropriate access to a limited amount of Akamai's business confidential information - like who our customers are and what they buy from us. At no time, however, was Mr. Doxer authorized to access the confidential information of our customers - including access to our production networks, our source code, or our customer configurations.

In pleading guilty to one count of foreign economic espionage, Mr. Doxer stipulated that he gave an FBI undercover agent, among other things, copies of contracts between Akamai and some of our customers. The Justice Department has confirmed that the Akamai information was never disclosed to anyone other than a U.S. law enforcement officer.

Lessons Learned

We used this incident as an opportunity to review our controls, to assess whether a deficiency had been exploited, and to identify areas for improvement. We looked both at this specific case and at the general case of insider threats, and we have identified and implemented additional controls to reduce our exposure.

And we've given thanks to the FBI for their outstanding work.

March 24, 2011

How certificates go bad

The security echo chamber has gotten quite loud over the last few days about the Comodo sub-CA bogus certificate issuance. This is a good opportunity to look at what happened, why it isn't as startling as some might think, and the general problems in the SSL CA model.

A primer on certificates and authorities

Certificates are built on top of asymmetric cryptographic systems - systems where you have a keypair that is split into a private half (held closely by the owner) and a public half (distributed widely). Information encrypted with one half is only decryptable with the other half. If you encrypt with the public key, we call it encryption (the information is now secret and can only be read by the private key owner); if you encrypt with the private key, we call it signing (the information can be verified by anyone, but only you could have generated it). There are additional optimization nuances around hashes and message keys, but we'll gloss over those for now.
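
For the curious, those two operations look like this in practice. This is a minimal sketch in Python using the pyca/cryptography package; the messages and key size are arbitrary, not anything from a production system:

    # The two asymmetric operations: encrypt with the public half,
    # sign with the private half. (pip install cryptography)
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # Encryption: anyone can seal a message that only the private-key
    # holder can read.
    ciphertext = public_key.encrypt(b"for the key owner only", oaep)
    assert private_key.decrypt(ciphertext, oaep) == b"for the key owner only"

    # Signing: only the private-key holder can produce the signature,
    # but anyone holding the public half can verify it.
    signature = private_key.sign(b"anyone can check this", pss, hashes.SHA256())
    public_key.verify(signature, b"anyone can check this", pss, hashes.SHA256())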

Anyone can generate asymmetric keypairs; what makes them interesting is when you can tie them to specific owners. The SSL model is based on certificates. A certificate is just someone's public key, some information about that public key, and a signature of the key and information. The signature is what's interesting -- it's generated by another keyholder, whose private key & certificate we call a certificate authority (CA).
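
You can see those three pieces in any live certificate. A quick sketch, again with Python and pyca/cryptography (the hostname is just an example):

    # Fetch a server's certificate and pull out its three pieces:
    # a public key, information about that key, and the CA's signature.
    import ssl
    from cryptography import x509

    pem = ssl.get_server_certificate(("www.csoandy.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode())

    print(cert.subject)           # whose key this claims to be
    print(cert.not_valid_after)   # information about the key's validity
    print(cert.issuer)            # who signed it (the CA)
    print(cert.signature.hex())   # the signature over key and information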

"You've got me. Who's got you?"

How do we trust a CA that has signed a certificate? It might itself be signed by another CA, but at some point, we have to have a root of trust: the CAs that our web browsers and operating systems trust to sign other certificates. You should take a gander at the list (Mozilla ships about 50 organizations as root CAs, Internet Explorer far more). Those roots can directly sign any SSL certificate, or can sign an intermediate CA, which then signs certificates.

The most expensive part of issuing a certificate is verifying that the purchaser is authorized to hold one. Many CAs, including Comodo, have resellers who can instruct the CA to issue certificates; the reseller becomes what is known as the "Registration Authority (RA)." (Full disclosure: Akamai is a reseller of several CAs, including Comodo, although certificates we sign only work with the private keys that we hold on our platform.)

There are two major, fundamental flaws in this architecture.

First, the number of trusted CAs is immense, and each of those CAs can authoritatively sign certificates for any domain. This means that CA Disig (of the Slovak Republic) can issue authoritative certs for www.gc.ca, the Government of Canada's website. (Full disclosure: my wife's mother is from Slovakia, and my father's side of the family is Canadian.) Fundamentally, the list of root CAs in everyone's browser contains authorities based all over the world, including in countries whose governments are known to be hostile to the user's own. A related issue is that most enterprises have their own "private CA" which signs intranet certificates; that CA becomes valid for any domain once a user adds it to their trust chain.

Second, RAs are a very weak point. Not only are they in a race to the bottom (if you can buy an SSL cert for under $50, imagine how little verification of your identity the RA can afford to do), but any one of them, if compromised, can issue certificates good for any domain in the world. And that's what happened in the case of the bogus Comodo certificates.

Kudos to Comodo for good incident response, and explaining clearly what happened. I suspect that's the rarity, not the issuance of bogus certificates.

February 16, 2011

Malware hunting

Today at the RSA Conference, Akamai Principal Security Architect Brian Sniffen is giving a talk titled "Scanning the Ten Petabyte Cloud: Finding the malware that isn't there." In Brian's talk, he discusses the challenges of hunting for malware hooks in stored HTML pages of unspecified provenance, and some tips and tricks for looking for this malicious content.

In conjunction with his talk, Akamai is releasing the core source code for our vscan software. The source code is BSD 3-clause licensed.

We are hopeful that our experiences can be helpful to others looking for malware in their HTML.

December 16, 2010

Architecting for DDoS Defense

DDoS is back in the news again, given the recent post-Cyber Monday DDoS attacks and the Anonymous DDoS attacks targeted at various parties. This seems like a good time to remember the concepts you need in the front of your mind when you're designing to resist DDoS.

DDoS mitigation isn't a point solution; it's much more about understanding how your architecture might fail, and how efficient DDoS attacks can be. Sometimes, simply throwing capacity at the problem is good enough (in many cases, our customers just start with this approach, using our WAA, DSA, and EDNS solutions to provide that instant scalability), but how do you plan for when simple capacity might not be sufficient?

It starts with assuming failure: at some point, your delicate origin infrastructure is going to be overwhelmed. Given that, how can you begin pushing as much functionality as possible out to the edge? Do you have a set of pages that ought to be static, but are currently rendered dynamically? Make them cacheable, or set up a backup cacheable version. Put that version of your site into scalable cloud storage, so that it isn't relying on your infrastructure.

Even for dynamic content, you'd be amazed at the power of short-term caching. A 2-second cache is all but unnoticeable to your users, but can offload significant attack traffic to your edge. Even a zero-second cache can be interesting; this lets your front end cache results, and serve them (stale) if it can't get a response from your application.
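
The serve-stale idea is just a timestamped cache with a fallback path. A minimal Python sketch of the logic (the origin fetch is a stand-in, not a real client):

    import time

    TTL = 2.0      # even a tiny TTL collapses repeated requests into one
    cache = {}     # url -> (fetched_at, body)

    def fetch_from_origin(url):
        ...        # stand-in for the real request to your application

    def get(url):
        now = time.time()
        entry = cache.get(url)
        if entry and now - entry[0] < TTL:
            return entry[1]            # fresh hit: origin never sees it
        try:
            body = fetch_from_origin(url)
            cache[url] = (now, body)
            return body
        except Exception:
            if entry:
                return entry[1]        # origin overwhelmed: serve the stale copy
            raise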

After you think about disaster resilience, you should start planning for the future. How can you authenticate users early and prioritize the requests of your known good users? How much dynamic content assembly can you do without touching a database? Can you store-and-forward user generated content when you're under heavy load?

The important point is to forget much of what we've been taught about business continuity. The holy grail of "Recovery Time Objective" (how long you can be down) shouldn't be the target, since you don't want to go down at all. Instead, you need to design for your Minimum Uninterrupted Service Target - the basic capabilities and services you must always provide to your users and the public. It's harder to design for, but it makes weathering DDoS attacks much more pleasant.

August 30, 2010

A Cloud Balancing Act

Over at F5's Dev Central, Lori MacVittie talks about load balancing and the cloud:

When you decide to deploy an application to the cloud you have a very similar decision: do you extend your existing dwelling, essentially integrating the environment with your own or do you maintain a separate building that is still "on the premises" but not connected in any way except that it's accessible via your shared driveway.

Using a garage expansion as a metaphor for scaling laterally to the cloud is a great one, and captures a lot of the nuances to be dealt with.

I'd like to add a third option to Lori's first two, based on our experience with hundreds of enterprises -- the valet strategy. Rather than simply load balancing between multiple systems, a cloud can sit in front of multiple sites, performing application delivery functions, as well as load balancing betwixt the backend sites.

Many Akamai customers do this today. They may have several data centers of their own, and want to integrate cloud-based services seamlessly into their existing sites (like taking advantage of user prioritization to load balance between an application server and a cloud-based waiting room; or using storage in the cloud to provide failover services). Akamai's cloud accepts the end user request, and either takes care of the user locally, or gathers the necessary resources from among multiple backend systems to service the user. And that's a way to load balance transparently to the user.

August 23, 2010

Awareness Training

Implementing a good security awareness program is not hard, if your company cares about security. If they don't, well, you've got a big problem.

It doesn't start with the auditable security program that most standards would have you set up. Quoting PCI-DSS testing procedures:

12.6.1.a Verify that the security awareness program provides multiple methods of communicating awareness and educating employees (for example, posters, letters, memos, web based training, meetings, and promotions).
12.6.1.b Verify that employees attend awareness training upon hire and at least annually.
12.6.2 Verify that the security awareness program requires employees to acknowledge (for example, in writing or electronically) at least annually that they have read and understand the company's information security policy.

For many awareness programs, this is their beginning and end. An annual opportunity to force everyone in the business to listen to us pontificate on the importance of information security, and make them read the same slides we've shown them every year. Or, if you've needed to gain cost efficiencies, you've bought a CBT program that is lightly tailored for your business (and as a side benefit, your employees can have races to see how quickly they can click through the program).

But at least it's auditor-friendly: you have a record that everyone attended, and you can make them acknowledge receipt of the policy that they are about to throw in the recycle bin. And you do have to have an auditor-friendly program - it just shouldn't be all that you do.

I can tell you that, for our baseline, auditor-friendly security awareness program, over 98% of our employee base have reviewed and certified the requisite courseware in the last year; and that of the people who haven't, the vast majority have either started work in the last two weeks (and thus are in a grace period), or are on an extended leave. It's an automated system, which takes them to a single page. At the bottom of the page is the button they need to click to satisfy the annual requirement. No gimmicks, no trapping the user in a maze of clicky links. But on that page is a lot of information: why security is important to us; what additional training is available; links to our security policy (2 pages) and our security program (nearly 80 pages); and an explanation of the annual requirement. And we find that a large majority of our users take the time to read the supplemental training material.

But much more importantly, we weave security awareness into a lot of activities. Listen to our quarterly investor calls, and you'll hear our executives mention the importance of security. Employees go to our all-hands meetings, and hear those same executives talk about security. The four adjectives we've often used to describe the company are "fast, reliable, scalable, and secure". Social engineering attempts get broadcast to a mailing list (very entertaining reading for everyone answering a published telephone number). And that doesn't count all of the organizations that interact with security as part of their routine.

And that's really what security awareness is about: are your employees thinking about security when it's actually relevant? If they are, you've succeeded. If they aren't, no amount of self-enclosed "awareness training" is going to fix it. Except, of course, to let you check the box for your auditors.

July 19, 2010

Edge Tokenization

Visa released its Credit Card Tokenization Best Practices last week, giving implementors a minimum guide on how to implement tokenization. It's a good read, although if you're planning on building your own tokenizer, I'd strongly recommend reading Adrian Lane's take on the subject, including practices above and beyond Visa's for building good tokenization systems.

But I don't recommend building your own tokenizer unless you're a payment gateway (and if you're going to anyway, please read Adrian's guidance, and design carefully). The big goal of tokenization is to get merchants' commerce systems out of scope for PCI - and if you want to remove your systems from PCI scope, you should never see the credit card number.

That's why I'm really excited about Akamai's Edge Tokenization service. As discussed at Forbes.com, we've been beta testing a service that captures credit card data in a customer website, hands it to our partner gateways, and substitutes the returned token to our customer's systems.

[Figure: Akamai's Edge Tokenization service. A consumer's credit card number is entered into a form on a merchant website; an Akamai server captures the card number and sends it to a payment gateway for processing; the gateway returns a token, which Akamai delivers in the POST body to the merchant. The merchant never sees the credit card.]

We don't do the tokenization ourselves, so that we never have the ability to reverse the tokens. But the capture and replacement all happens inside our Level 1 merchant environment, so our customers get to simply reduce the number of their systems that see credit cards (potentially removing them from scope).

Our Edge Tokenization service is going to be publicly available early this fall, at which point we'll help the industry reduce the number of places that credit cards are even seen.

May 28, 2010

NSEC3: Is the glass half full or half empty?

NSEC3, or the "Hashed Authenticated Denial of Existence", is a DNSSEC specification to authenticate the NXDOMAIN response in DNS. To understand how we came to create it, and the secrecy issues around it, we have to understand why it was designed. As the industry moves to a rollout of DNSSEC, understanding the security goals of our various Designed Users helps us understand how we might improve on the security in the protocol through our own implementations.

About the Domain Name Service (DNS)

DNS is the protocol which converts mostly readable hostnames, like www.csoandy.com, into IP addresses (like 209.170.117.130). At its heart, a client (your desktop) is asking a server to provide that conversion. There are a lot of possible positive answers, which hopefully result in your computer finding its destination. But there are also some negative answers. The interesting answer here is the NXDOMAIN response, which tells your client that the hostname does not exist.
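
A quick illustration of both answer types, using Python and the dnspython package (the nonexistent hostname is made up):

    import dns.resolver

    # A positive answer: the name converts to one or more addresses.
    for r in dns.resolver.resolve("www.csoandy.com", "A"):
        print(r.address)

    # A negative answer: NXDOMAIN says the name does not exist.
    try:
        dns.resolver.resolve("no-such-host.csoandy.com", "A")
    except dns.resolver.NXDOMAIN:
        print("no such hostname")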

Secrecy in DNS

DNS requests and replies, by design, have no confidentiality: anyone can see any request and response. Further, there is no client authentication: if an answer is available to one client, it is available to all clients. The contents of a zone file (the list of host names in a domain) are rarely publicized, but a DNS server acts as a public oracle for the zone file; anyone can make continuous requests for hostnames until they reverse engineer its contents - with one caveat: the attacker will never know that they are done, as there might exist hostnames they have not yet tried.

But that hasn't kept people from putting information that has some form of borderline secrecy into a zone file. Naming conventions in zone files might permit someone to easily map an intranet just looking at the hostnames. Host names might contain names of individuals. So there is a desire to at least keep the zone files from being trivially readable.

DNSSEC and authenticated denials

DNSSEC adds in one bit of security: the response from the server to the client is signed. Since a zone file is (usually) finite, this signing can take place offline: you sign the contents of the zone file whenever you modify them, and then hand out static results. Negative answers are harder: you can't presign them all, and signing is expensive enough that letting an adversary make you do arbitrary signings can lead to DoS attacks. And you have to authenticate denials, or an adversary could poison lookups with long-lived denials.

Along came NSEC. NSEC permitted a denial response to cover an entire range (e.g., there are no hosts between wardialer.csoandy.com and www.csoandy.com). Unfortunately, this made it trivial to gather the contents of a zone: after you get one range, simply ask for the next alphabetical host (wwwa.csoandy.com) and learn what the next actual host is (andys-sekrit-ipad.csoandy.com). From a pre-computation standpoint, NSEC was great - a zone has the same number of NSEC-signed responses as all other signatures - but from a secrecy standpoint, NSEC destroyed what little obscurity existed in DNS.
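
A sketch of that walk with dnspython, against a hypothetical NSEC-signed zone (real walks usually harvest the NSEC records out of NXDOMAIN responses, but the chaining idea is the same):

    import dns.resolver

    zone = "example.com."
    name = zone
    while True:
        # Each NSEC record says "no names exist between me and .next"
        rrset = dns.resolver.resolve(name, "NSEC")
        next_name = rrset[0].next.to_text()
        print(name, "->", next_name)
        if next_name == zone:   # wrapped back to the apex: walk complete
            break
        name = next_name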

NSEC3

NSEC3 is the update to NSEC. Instead of providing a range in which there are no hostnames, a DNS server publishes a hashing function, and a signed range in which there are no valid hashes. This prevents an adversary from easily collecting the contents of the zone (as with NSEC), but does allow them to gather the size of the zone file (by making queries to find all of the unused hash ranges), and then conduct offline guessing at the contents of the zone file (as Dan Bernstein has been doing for a while). Enabling offline guessing makes a significant difference: with traditional DNS, an adversary must send an arbitrarily large number of queries (guesses) to a name server, making them possibly detectable; with NSEC, they must send only as many queries as there are records; and with NSEC3, they must send the same number of requests as there are records (with some computation to make the right guesses), and can then conduct all of their guessing offline.
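
The hash itself is simple: an iterated, salted SHA-1 over the wire-format name (RFC 5155). That simplicity is exactly why offline guessing works - once you've collected the salt, iteration count, and hashed owner names from a zone, you can test candidate names as fast as you can hash. A sketch (the name, salt, and count below are made up):

    import hashlib
    import dns.name   # dnspython, for the canonical wire format

    def nsec3_hash(hostname, salt, iterations):
        data = dns.name.from_text(hostname).canonicalize().to_wire()
        digest = hashlib.sha1(data + salt).digest()
        for _ in range(iterations):
            digest = hashlib.sha1(digest + salt).digest()
        return digest   # zones publish this base32-encoded

    # Compare hashes of guessed names against the zone's NSEC3 owner names.
    guess = nsec3_hash("andys-sekrit-ipad.csoandy.com", bytes.fromhex("abcd"), 10)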

While NSEC3 is an improvement over NSEC, it still represents a small step down in zone file secrecy compared to unsigned DNS. This step is necessary from a defensive perspective, but it makes one wonder whether this is the best solution: why do we still have the concept of semi-secret public DNS names? If we have a zone file we want to keep secret, we should authenticate requests before answering. But until then, at least we can make it harder for an adversary to determine the contents of a public zone.

"Best" practices in zone secrecy

If you have a zone whose contents you want to keep obscure anyway, you should consider:
  • Limit access to the zone, likely by IP address.
  • Use randomly generated record names, to make offline attacks such as Dan Bernstein's more difficult.
  • Fill your zone with spurious answers, to send adversaries on wild goose chases.
  • Instrument your IDS to detect people trying to walk your zone file, and give them a different answer set than you give to legitimate users.

Jason Bau and John Mitchell, both of Stanford, have an even deeper dive into DNSSEC and NSEC3.

May 6, 2010

Contracting the Common Cloud

After attending CSO Perspectives, Bill Brenner has some observations on contract negotiations with SaaS vendors. While his panel demonstrated a breadth of customer experience, it was, unfortunately, lacking in a critical perspective: that of a cloud provider.

Much of the point of SaaS, or any cloud service, is the economy of scale you get: not just in capacity, but also in features. You're selecting from the same set of features that every other customer is selecting from, and that's what makes it affordable. And that same set of features needs to extend up into the business relationship. As the panel noted, agreements around breach management, data portability, and transport encryption are all important, but if you find yourself arguing for a provider to do something it isn't already doing, you're likely fighting a Sisyphean battle.

But how did a customer get to that point? Enterprises typically generate their own control frameworks, in theory beginning from a standard (like ISO 27002), but then redacting out the inapplicable (to them), tailoring controls, and adding in new controls to cover incidents they've had in the past. And when they encounter a SaaS provider who isn't talking about security controls, the natural tendency is to convert their own control framework into contractual language. Which leads to the observations of the panel participants: it's like pulling teeth.

A common request I've seen is the customer who wants to attach their own security policy - often a thirty to ninety page document - as a contract addendum, and require the vendor to comply with it, "as may be amended from time to time by Customer". And while communicating desires to your vendors is how they'll decide to improve, no cloud vendor is going to be able to satisfy that contract.

Instead, vendors need to be providing a high-water mark of business and technology capabilities to their customer base. To do this, they should start back from those original control frameworks, and not only apply them to their own infrastructure, but evaluate the vendor-customer interface as well. Once implemented, these controls can then be packaged, both into the baseline (for controls with little or no variable cost), and into for-fee packages. Customers may not want (or want to pay for) all of them, but the vendors need to be ahead of their customers on satisfying needs. Because one-off contract requirements don't scale. But good security practices do.

February 4, 2010

Why don't websites default to SSL/TLS?

When a client connects on TCP port 80 to a webserver, one of the first things it sends is a line that tells the server what website it wants. This line looks like this:
Host: www.csoandy.com
This tells the webserver which configuration to use, and how to present content to the end-user. This effectively abstracts TCP and IP issues, and lets websites and webservers interact at the discretion of the owner. The designed user of HTTP is the administrator, who may need to host dozens of websites on a smaller number of systems.
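
A toy illustration of that name-based virtual hosting in Python (the site names and port are made up): one listener consults the Host header to pick which site to serve.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    SITES = {
        "www.csoandy.com": b"Andy's site",
        "www.example.com": b"A different site, same server",
    }

    class VHost(BaseHTTPRequestHandler):
        def do_GET(self):
            # The Host header selects the site configuration.
            host = (self.headers.get("Host") or "").split(":")[0]
            self.send_response(200)
            self.end_headers()
            self.wfile.write(SITES.get(host, b"unknown site"))

    HTTPServer(("", 8080), VHost).serve_forever()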

Secure Sockets Layer (SSL) and its successor, Transport Layer Security (TLS), on the other hand, were designed for exactly the opposite user. The designed user of HTTPS is the paranoid security academic, who doesn't even want to tell a server the hostname it is looking for (the fact that you were willing to tell a DNS server is immaterial). In essence, SSL requires that any server IP address have only one website on it. When a client connects to a webserver on port 443, the first thing it expects is for the server to provide a signed certificate that matches the hostname the client has not yet sent. So if you connect to www.csoandy.com via SSL, you'll note you get back a different certificate: one for a248.e.akamai.net. This is expected -- nothing on this hostname requires encryption. Similarly, for Akamai customers delivering whole-site content via Akamai on hostnames CNAMEd to Akamai's edgesuite.net domain, attempts to access those sites via SSL will return a certificate for a248.e.akamai.net. (Aside: customers use our SSL Object Caching service to deliver objects on the a248.e.akamai.net hostname; customers who want SSL on their own hostnames use our WAA and DSA-Secure services.)

The designs of HTTP and HTTPS are diametrically opposed, and the SSL piece of the design creates horrendous scaling problems. The server you're reading this from serves over 150,000 different websites. Those sites are actually load balanced across around 1600 different clusters of servers. For each website to have an SSL certificate on this network, we'd need to consume around 250 million IP addresses - or 5.75% of the IPv4 space. That's a big chunk of the 9% left as of today. Note that there isn't a strong demand to put most sites on SSL; this just elucidates why, even if there were demand, the sheer number of websites today makes this infeasible.

Fortunately, there are paths to a solution.

Wildcard certificates
For servers that only serve hostnames in one domain, a wildcard certificate can help. If, for instance, in addition to www.csoandy.com, I had store.csoandy.com, pictures.csoandy.com, and catalog.csoandy.com, instead of needing four certificates (across those 1800 locations!), I could use a certificate for *.csoandy.com, which would match all four domains, as well as future growth of hostnames.

You're still limited; a wildcard can only match one field, so a certificate for *.csoandy.com wouldn't work on a site named comments.apps.csoandy.com. Also, many security practitioners, operating from principles of least privilege, frown on wildcard certificates, as they can be used even for unanticipated sites in a domain.

Subject Alternate Name (SAN) certificates
A slightly more recent variety of certificate is the SAN certificate. In addition to the hostname listed in the certificate, an additional field lets you specify a list of valid hostnames for that certificate. (If you look closely, the a248.e.akamai.net certificate on this host has a SAN field set, which includes both a248.e.akamai.net and *.akamaihd.net.) This permits a server to have multiple, disparate hostnames on one interface.
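
You can read the SAN list yourself; a short sketch with Python and the pyca/cryptography package:

    import ssl
    from cryptography import x509

    pem = ssl.get_server_certificate(("a248.e.akamai.net", 443))
    cert = x509.load_pem_x509_certificate(pem.encode())
    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
    print(san.value.get_values_for_type(x509.DNSName))
    # e.g. ['a248.e.akamai.net', '*.akamaihd.net']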

On the downside, you still only have one certificate, which is going to get larger and larger the more hostnames you have (which hurts performance). It also ties all of those hostnames into one list, which may present brand and security issues to some enterprises.

Server Name Indication (SNI)
The long-term solution is a TLS feature called Server Name Indication (SNI). This extension calls for the client, as part of the initial handshake, to indicate to the server the name of the site it is looking for. This permits the server to select the appropriate certificate from its set of SSL certificates, and present that.
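
Most modern TLS stacks already expose this. In Python's ssl module, for instance, the server_hostname argument is what goes into the SNI extension (a sketch; the hostname is just an example):

    import socket, ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("www.csoandy.com", 443)) as sock:
        # server_hostname is sent in the ClientHello as SNI, so the
        # server can pick the matching certificate before responding.
        with ctx.wrap_socket(sock, server_hostname="www.csoandy.com") as tls:
            print(tls.getpeercert()["subject"])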

Unfortunately, SNI only provides benefit when everyone supports it. Currently, a handful of systems don't support SNI, most notably Windows XP and IIS. And those two major components are significant: XP accounts for 55-60% of the browser market, and IIS looks to be around 24%. So it'll be a while until SNI is ready for primetime.

January 13, 2010

The Evolution of DDoS

Last week, I spent some time with Bill Brenner (CSO Magazine) discussing DDoS attacks, and some insight into attacks over the last ten years. He put our discussion into a podcast.

January 4, 2010

Interview at ThreatChaos

Over the holidays, I spent some time on the phone with Richard Stiennon, talking a bit about Akamai and DDoS. The interview is up over at ThreatChaos.