
December 16, 2010

Architecting for DDoS Defense

DDoS is back in the news again, given the recent post-Cyber Monday DDoS attacks and the Anonymous DDoS attacks targeted at various parties. This seems like a good time to revisit the concepts that should be front of mind when you're designing to resist DDoS.

DDoS mitigation isn't a point solution; it's much more about understanding how your architecture might fail, and how efficient DDoS attacks can be. Sometimes, simply throwing capacity at the problem is good enough (in many cases, our customers just start with this approach, using our WAA, DSA, and EDNS solutions to provide that instant scalability), but how do you plan for when simple capacity might not be sufficient?

It starts with assuming failure: at some point, your delicate origin infrastructure is going to be overwhelmed. Given that, how can you begin pushing as much functionality as possible out to the edge? Do you have a set of pages that ought to be static, but are currently rendered dynamically? Make them cacheable, or set up a backup cacheable version. Put that version of your site into scalable cloud storage, so that it doesn't rely on your infrastructure.
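That failover idea can be sketched in a few lines. This is a minimal illustration, not any particular product's behavior; `origin_get` and `snapshot_get` are hypothetical stand-ins for your dynamic origin and the pre-rendered copy sitting in cloud storage:

```python
def fetch_with_fallback(path, origin_get, snapshot_get):
    """Try the dynamic origin first; if it's overwhelmed or unreachable,
    serve the static snapshot from cloud storage instead of failing."""
    try:
        return origin_get(path)
    except Exception:
        # Origin is down or drowning in attack traffic: degrade gracefully
        # to the cacheable backup version rather than returning an error.
        return snapshot_get(path)
```

The point of injecting the two fetchers is that the fallback path has no dependency on your origin infrastructure at all.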

Even for dynamic content, you'd be amazed at the power of short-term caching. A two-second cache is all but unnoticeable to your users, but can offload significant attack traffic onto your edge. Even a zero-second cache can be interesting: it lets your front end cache results and serve them (stale) if it can't get a response from your application.
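Here's a rough sketch of that micro-caching behavior, including the serve-stale-on-failure case. The class name and TTL are illustrative assumptions, not a real product's API:

```python
import time

class MicroCache:
    """Tiny per-URL cache: serves entries up to `ttl` seconds old, and keeps
    serving stale entries if the origin fetch fails (the zero-second idea)."""

    def __init__(self, fetch, ttl=2.0):
        self.fetch = fetch      # callable: url -> body (raises on origin failure)
        self.ttl = ttl
        self.store = {}         # url -> (timestamp, body)

    def get(self, url, now=None):
        now = time.monotonic() if now is None else now
        hit = self.store.get(url)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]       # fresh enough: absorb this request at the edge
        try:
            body = self.fetch(url)
        except Exception:
            if hit is not None:
                return hit[1]   # origin is down: serve stale rather than fail
            raise
        self.store[url] = (now, body)
        return body
```

With a two-second TTL, a flood of requests for the same URL collapses into one origin fetch every two seconds; when the origin stops answering, users still get the last good response.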

After you think about disaster resilience, you should start planning for the future. How can you authenticate users early and prioritize the requests of your known good users? How much dynamic content assembly can you do without touching a database? Can you store-and-forward user generated content when you're under heavy load?
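One of those ideas, prioritizing requests from known good users, might look like this in miniature. The session-set membership check is a placeholder assumption; a real deployment would validate a signed cookie or token at the edge:

```python
import heapq
import itertools

class PriorityAdmission:
    """Under heavy load, admit requests from authenticated (known good)
    users before anonymous ones, FIFO within each class."""

    def __init__(self, known_sessions):
        self.known = known_sessions
        self.heap = []
        self.seq = itertools.count()  # tiebreaker: preserves arrival order

    def submit(self, session_id, request):
        prio = 0 if session_id in self.known else 1  # 0 is served first
        heapq.heappush(self.heap, (prio, next(self.seq), request))

    def next_request(self):
        return heapq.heappop(self.heap)[2]
```

The same queue shape works for store-and-forward of user-generated content: accept the write cheaply, queue it, and apply it to the database when load permits.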

The important point is to forget much of what we've been taught about business continuity. The holy grail of "Recovery Time Objective" (how long you can afford to be down) shouldn't be the target, since you don't want to go down at all. Instead, you need to design for your Minimum Uninterrupted Service Target - the basic capabilities and services you must always provide to your users and the public. It's harder to design for, but it makes weathering DDoS attacks much more pleasant.

December 15, 2010

Welcome AWS

By now you've certainly seen that Amazon Web Services has achieved PCI compliance as a Level 1 service provider. This is great; the folks over at Amazon deserve congratulations on joining the cabal of compliant clouds. You've probably also seen the PCI pundits pithily pronouncing, "Just because AWS is PCI compliant, it doesn't mean you're compliant by using them!" What does this mean?

Put simply, AWS, like Akamai's Dynamic Site Accelerator, is a platform over which your e-commerce flows. It's critically important that the platform is compliant if you're moving cardholder data through it (just as it's important that PCI compliance is called out in your contract with AWS, per DSS 12.2). But more importantly, how your organization and your applications handle credit card data is the key to becoming compliant. What AWS's certification buys you is the same thing Akamai's does - the ability to exclude those components from your PCI scope.

(If you want to get compliant, the easiest way, of course, is to tokenize cardholder data before it ever hits your environment, using a service like EdgeTokenization.)
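Conceptually, tokenization swaps the card number (PAN) for an opaque handle before it ever reaches your systems. A toy illustration of the idea follows; this is not EdgeTokenization's actual design, and a real vault lives entirely outside your environment:

```python
import secrets

class TokenVault:
    """Toy tokenizer: exchanges a PAN for a random, meaningless token.
    Your application stores and passes around only the token, so the
    PAN never enters your PCI scope."""

    def __init__(self):
        self._vault = {}  # token -> PAN; in reality, held by the provider

    def tokenize(self, pan):
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = pan
        return token

    def detokenize(self, token):
        # Only the (compliant) tokenization provider can reverse the mapping.
        return self._vault[token]
```

Because the token carries no cardholder data, the systems that touch only tokens can be excluded from your PCI assessment.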