DNS reflection defense

Recently, DDoS attacks have spiked well past 100 Gbps several times. A common move used by adversaries is the DNS reflection attack, a category of Distributed, Reflected Denial of Service (DRDoS) attack. To understand how to defend against it, it helps to understand how it works.

How DNS works

At the heart of the Domain Name System are two categories of name server: the authoritative name server, which is responsible for providing authoritative answers to specific queries (like use5.akam.net, which is one of the authoritative name servers for the csoandy.com domain), and the recursive name server, which is responsible for answering any question asked by a client. Recursive name servers (located in ISPs, corporations, and data centers around the world) query the appropriate authoritative name servers around the Internet, and return an answer to the querying client. An open resolver is a category of resolver that will answer recursive queries from any client, not just those local to it. Because DNS requests are fairly small and lightweight, DNS primarily uses the User Datagram Protocol (UDP), a stateless messaging system. Since a UDP request can be sent in a single packet, its source address is easily forged to any address the true sender desires.
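
To make the request/response flow concrete, here is a minimal sketch (Python, standard library only) that hand-builds a DNS query and sends it over UDP to a recursive resolver; the query ID and the choice of resolver are arbitrary for illustration.

    import socket
    import struct

    def build_query(name, qtype=1):  # qtype 1 = A record
        # Header: ID, flags (0x0100 = recursion desired), QDCOUNT=1, three zero counts.
        header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        # Question: length-prefixed labels, a terminating zero byte, QTYPE, QCLASS (IN).
        labels = b"".join(bytes([len(label)]) + label.encode("ascii") for label in name.split("."))
        return header + labels + b"\x00" + struct.pack("!HH", qtype, 1)

    query = build_query("csoandy.com")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2)
    sock.sendto(query, ("8.8.8.8", 53))        # GoogleDNS, a public recursive resolver
    response, _ = sock.recvfrom(4096)
    print(f"{len(query)}-byte query -> {len(response)}-byte response")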

DNS reflection

A DNS reflection attack takes advantage of three things: the forgeability of UDP source addresses, the availability of open resolvers, and the asymmetry of DNS requests and responses. To conduct an attack, an adversary sends a stream of DNS queries to open resolvers, altering the source address on the requests to be that of their chosen target. The requests are designed to elicit much larger responses (often, using an ANY request, a 64-byte request yields a 512-byte response), so the recursive name servers send roughly 8 times as much traffic to the target as they themselves received. A DNS reflection attack can use authoritative name servers directly, but that requires more preparation and research, since the requests must be tailored to the zones each DNS authority serves.
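
The amplification arithmetic is simple; the sketch below uses the figures from this description (a 64-byte request, a 512-byte response) and a hypothetical amount of attacker bandwidth to show how the multiplication plays out.

    # Figures from the description above; the attacker bandwidth is hypothetical.
    request_bytes = 64        # a small ANY query
    response_bytes = 512      # the classic maximum UDP DNS response
    amplification = response_bytes / request_bytes        # ~8x

    attacker_mbps = 100
    print(f"{attacker_mbps} Mbps of forged queries -> roughly "
          f"{attacker_mbps * amplification:.0f} Mbps arriving at the target")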

Eliminating DNS reflection attacks

An ideal solution would obviously be to eliminate this type of attack entirely, rather than requiring every target to defend itself. Unfortunately, that’s challenging, as it requires significant changes by infrastructure providers across the Internet.

BCP38

No discussion of defending against DRDoS-style attacks is complete without a nod to BCP38. These attacks only work because an adversary, when sending forged packets, has no routers upstream filtering based on the source address. There is rarely a need to permit an ISP’s user to send packets claiming to originate in another ISP’s address space; if BCP38 were adopted and implemented in a widespread fashion, DRDoS would be eliminated as an adversarial capability. That’s sadly unlikely, as BCP38 enters its 14th year; the complexity and edge cases are significant.
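
BCP38 is enforced in routers rather than application code, but the check itself is simple source-address validation; the sketch below illustrates the idea with Python's ipaddress module and made-up customer prefixes.

    import ipaddress

    # Hypothetical prefixes assigned to this ISP's customers.
    CUSTOMER_PREFIXES = [
        ipaddress.ip_network("198.51.100.0/24"),
        ipaddress.ip_network("203.0.113.0/24"),
    ]

    def forward_packet(source_ip: str) -> bool:
        """BCP38-style ingress filtering: only forward packets whose source
        address belongs to a prefix actually assigned to this network."""
        addr = ipaddress.ip_address(source_ip)
        return any(addr in prefix for prefix in CUSTOMER_PREFIXES)

    print(forward_packet("198.51.100.7"))   # True: legitimate customer source
    print(forward_packet("192.0.2.10"))     # False: forged source address, drop it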

The open resolvers

While a few enterprises have made providing an open resolver into a business (OpenDNS, GoogleDNS), many open resolvers are either historical accidents or the result of misconfiguration. Even MIT has turned off open recursion on its high-profile name servers.
Where turning off open recursion isn’t possible, recursive name servers should implement rate limiting, especially on infrequent request types, to reduce the traffic multiplication that adversaries can extract from them.
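
As a rough illustration of that rate limiting, here is a sketch of a per-client token bucket that gives infrequent query types such as ANY a much smaller budget; the budgets are invented for the example.

    import time
    from collections import defaultdict

    # Invented budgets: queries allowed per client per second, by query type.
    RATE_LIMITS = {"ANY": 1, "default": 20}

    class TokenBucket:
        def __init__(self, rate):
            self.rate = rate                  # refill rate and bucket capacity
            self.tokens = float(rate)
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    buckets = defaultdict(dict)               # client IP -> query type -> bucket

    def allow_query(client_ip, qtype):
        rate = RATE_LIMITS.get(qtype, RATE_LIMITS["default"])
        bucket = buckets[client_ip].setdefault(qtype, TokenBucket(rate))
        return bucket.allow()

    # Queries for which allow_query() returns False can be dropped or truncated.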

Self-defense

Until ISPs and resolver operators implement controls to limit how large attacks can become, attack targets must defend themselves. Sometimes attacks are aimed at infrastructure (like routers and name servers), but most often they target high-profile websites operated by financial services firms, government agencies, retail companies, or whoever has caught the eye of the attacker this week.
An operator of a high-profile web property can take steps to defend their front door. The first step, of course, is to find that front door and understand what infrastructure it relies on; only then can they evaluate their defenses.

Capacity

The first line of defense is always capacity. Without enough bandwidth at the front of your defenses, nothing else matters. Capacity needs to be measured both in raw bandwidth and in packets per second, because hardware often has much lower throughput as packet sizes shrink. Unfortunately, robust capacity now means 300+ gigabits per second, well beyond the resources of the average datacenter. However, attacks in the 3-10 gigabit per second range are still common, and well within the reach of existing datacenter defenses.
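
The distinction between bandwidth and packet rate is easy to quantify; the sketch below converts a link rate into packets per second for a few packet sizes (framing overhead is ignored for simplicity).

    def packets_per_second(link_gbps, packet_bytes):
        """Packets per second needed to fill a link, ignoring framing overhead."""
        return link_gbps * 1e9 / 8 / packet_bytes

    for size in (64, 512, 1500):
        print(f"10 Gbps of {size}-byte packets is about "
              f"{packets_per_second(10, size):,.0f} packets per second")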

Filtering

For systems that aren’t DNS servers themselves, filtering out DNS traffic as far upstream as possible is a good solution; at a minimum, filter at the border firewall. One caveat: web servers often need to make DNS queries themselves, so ensure that they have a path to do so. In general, the principle of “filter out the unexpected” is a good filtering strategy.
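
For a non-DNS web server the policy boils down to: drop unsolicited DNS traffic, but let answers from the server's own resolvers through. The sketch below expresses that decision in Python with placeholder resolver addresses; in practice it belongs in a border firewall or upstream ACL, ideally as a stateful rule that matches replies to outbound queries.

    # Placeholder: the recursive resolvers this web server is configured to use.
    OUR_RESOLVERS = {"192.0.2.53", "198.51.100.53"}

    def allow_inbound_udp(src_ip: str, src_port: int, dst_port: int) -> bool:
        """'Filter out the unexpected': the only port-53 traffic a web server
        should see is answers coming back from its own resolvers."""
        if src_port == 53 or dst_port == 53:
            return src_port == 53 and src_ip in OUR_RESOLVERS
        return True   # non-DNS traffic is governed by other rules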

DNS server protection

Since DNS servers have to process incoming requests (an authoritative name server has to respond to all of the recursive resolvers around the Internet, for instance), merely filtering DNS traffic upstream isn’t an option. What is perceived as a network problem by non-DNS servers becomes an application problem for the DNS server. Defenses are no longer simple “block this” strategies; rather, they can take advantage of application-layer tools.

Redundancy

The total number of authoritative DNS server IP addresses for a given domain is limited (13 can fit into a 512-byte DNS response packet; in practice, 8 is a reasonable number), yet many systems use nowhere near the limit. Servers should be diversified, located in multiple networks and geographies, ensuring that attacks against two name servers aren’t traveling across the same links.
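
One quick sanity check on diversification is whether a zone's name server addresses share networks; the sketch below flags name servers in the same /24, using a hypothetical address list (a real check would also compare ASNs and geography).

    import ipaddress
    from collections import Counter

    # Hypothetical name server addresses for a zone.
    NS_ADDRESSES = ["192.0.2.10", "192.0.2.11", "198.51.100.20", "203.0.113.30"]

    networks = Counter(ipaddress.ip_network(f"{ip}/24", strict=False) for ip in NS_ADDRESSES)
    for net, count in networks.items():
        if count > 1:
            print(f"warning: {count} name servers share {net}; one attack path could reach them all")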

Anycast

Since requests come in via UDP, anycasting (the practice of having servers respond on the same IP address from multiple locations on the Internet) is quite practical. Done at small scale (two to five locations), this can provide significant increases in capacity, as well as resilience to localized physical outages. However, DNS also lends itself to architectures with hundreds of name server locations sprinkled throughout the Internet, each localized to only provide service to a small region of the Internet (possibly even a single network). Adversaries outside these localities have no ability to target the sprinkled name servers, which continue to provide high-quality service to nearby end users.

Segregation

Based on Akamai’s experience running popular authoritative name servers, 95% of all DNS traffic originates from under a million popular name server IP addresses (reaching 99% requires just under 2 million IP addresses). Given that the total IPv4 address space is around 4.3 billion addresses, name servers can be segregated: a smaller pool of servers to handle the “unpopular” resolvers, and a larger pool to handle the popular ones. Attacks that reflect off unpopular open resolvers thus don’t consume the application resources providing quality of service to the popular name servers.
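
The segregation itself can be a simple membership check against the known-popular resolver list; the sketch below routes queries to separate pools, with the list and pool names as placeholders.

    # Placeholder: resolvers observed to send the bulk of legitimate traffic
    # (the figures above suggest roughly a million entries in practice).
    POPULAR_RESOLVERS = {"192.0.2.53", "198.51.100.53"}

    POPULAR_POOL = "authoritative-pool-a"     # larger pool, serves known resolvers
    UNPOPULAR_POOL = "authoritative-pool-b"   # smaller pool absorbs unknown and reflected traffic

    def choose_pool(resolver_ip: str) -> str:
        return POPULAR_POOL if resolver_ip in POPULAR_RESOLVERS else UNPOPULAR_POOL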

Response handling

Authoritative name servers should primarily see requests, not responses. Therefore, they should be able to isolate, process, and discard response packets quickly, minimizing the impact on resources engaged in replying to requests. The same isolation can apply to less frequent types of request, so that when a server is under attack, it can devote its resources to requests that are more likely to provide value.
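
Telling a request from a response takes only the QR bit in the DNS header, so response packets can be discarded before any further parsing; a minimal sketch:

    def is_dns_response(packet: bytes) -> bool:
        """The QR bit (high bit of the third byte of the DNS header) is 1 for
        responses; an authoritative server expects queries, so responses can be
        dropped before any expensive processing."""
        return len(packet) >= 3 and bool(packet[2] & 0x80)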

Rate limiting

Traffic from any requesting name server should be monitored to see if it exceeds reasonable thresholds and, if so, aggressively managed. If a name server typically sends a few requests per minute, an authoritative server can decline to answer most requests from one that suddenly sends dozens per second (these thresholds can and should be dynamic). This works because of the built-in fault tolerance of DNS: if a requesting name server doesn’t see a quick response, it will send another request, often to a different authoritative name server (deprioritizing the failed name server for future requests).
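
A dynamic threshold can be expressed as a multiple of each requesting name server's typical rate; the sketch below counts queries in a sliding window and declines to answer sources far above their baseline (the constants and the fixed baseline are invented; a real implementation would learn the baseline from long-term observation).

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 1.0
    BURST_MULTIPLIER = 10                      # invented: allowed multiple of the typical rate
    typical_qps = defaultdict(lambda: 0.1)     # invented default: a few queries per minute

    recent = defaultdict(deque)                # resolver IP -> timestamps of recent queries

    def should_answer(resolver_ip: str) -> bool:
        now = time.monotonic()
        window = recent[resolver_ip]
        window.append(now)
        while window and window[0] < now - WINDOW_SECONDS:
            window.popleft()
        current_qps = len(window) / WINDOW_SECONDS
        # Declining to answer is safe because of DNS's built-in fault tolerance:
        # the requester retries, usually against a different authoritative server.
        return current_qps <= BURST_MULTIPLIER * typical_qps[resolver_ip]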

As attacks grow past the current few hundred gigabits per second toward terabit-per-second attacks, robust architectures will be increasingly necessary to maintain a presence on the Internet in the face of adversarial action.

Originally posted at a previous iteration of the Akamai blog.

