Deploy360 23 July 2013

David Freedman: Why I’m Practicing Anti-Spoofing

David Freedman, ClaraNet

Many years ago, a number of security and “hacking” sites carried some form of “IP Spoofing Tool”. I always thought this was a strange thing since, unless you were going to impersonate somebody locally on your subnet, routing would carry the TCP responses away from you (to the “real” host) and you’d spend most of your time “blind”, guessing the next TCP sequence numbers to use.

With the advent of TCP hardening and the move away from source-IP-based access control, it was only recently that some of the more nefarious use cases for spoofing emerged: the reflection attacks, and with them the damage caused to online businesses and their supporting network operators.

When I started working for an ISP and had my first experience of Internet routing, I was amazed by the fragility of the thing: a system based almost entirely on co-operation, trust, beer and, of course, a smattering of money.

An early project my employer gave me was to analyse some “Netflow” telemetry from our Internet network. Netflow gave us important statistics about our users’ traffic consumption, and information on which to base capacity planning and billing exercises.
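For readers who haven’t worked with it, a minimal sketch of what turning on classic NetFlow export looks like on a Cisco IOS router (the interface names and collector address here are purely illustrative, not our actual configuration):

    ! Account for flows arriving on the interface and export version 5
    ! records to a collector for analysis, capacity planning and billing.
    interface GigabitEthernet0/0
     ip flow ingress
    !
    ip flow-export source Loopback0
    ip flow-export version 5
    ip flow-export destination 192.0.2.50 2055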

Through analysis of this Netflow, I uncovered a puzzling fact. There were IP packets, supposedly sourced from “private” Internet addresses, leaving our network and heading toward the Internet; since we didn’t route these addresses globally, there would have been no way for the traffic to return.

Of course, back then, this was more than just a curiosity. “Why was this happening?” I asked myself. “What was the significance of this?”

Eventually I concluded the fault was our own: we hadn’t taken any steps to prevent such a situation from occurring, and through (some) misconfiguration, the network had found a way.

I remedied the situation by proposing an ACL: a simple packet filter between us and the Internet. The ACL would permit only our own address space (as the source of a packet) and nothing else. This solved the problem, kept everybody happy and let me get back to my work. I was to hear nothing further until the summer came.
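As a rough sketch (with documentation prefixes standing in for our real address space, and not the configuration we actually deployed), such an egress filter on a Cisco-style border router might look something like this:

    ! Permit only our own address space as the source of outbound packets;
    ! 192.0.2.0/24 and 198.51.100.0/24 stand in for the real prefixes.
    ip access-list extended EGRESS-ANTISPOOF
     permit ip 192.0.2.0 0.0.0.255 any
     permit ip 198.51.100.0 0.0.0.255 any
     deny   ip any any log
    !
    interface GigabitEthernet0/0
     description Transit link to the Internet
     ip access-group EGRESS-ANTISPOOF out

Logging the denied packets is optional, but it is useful for spotting where spoofed or misconfigured traffic is still being generated inside the network.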

I remember the summer well; we were growing as a company and in the process purchased another ISP. The time eventually came to connect our networks together, with us providing them Internet access (acting as their ‘upstream’). It was at this point that we were faced with changing the ACL to include our new set of source addresses. This proved more complicated, as the company not only had independent customers of its own, but those customers also had their own customers! The ACL began to require more frequent modification.

In addition to this, a customer had called us: he’d installed a new firewall, which was logging all manner of alerts. He was receiving packets sourced from private addresses. This was our fault, he said; we had to do something about it. Of course, he, like all of our customers behind the big ACL, didn’t benefit from its protection.

We’d heard about a document, “BCP 38”, that had recently been published. BCP 38 suggested that the key was “ingress filtering” (i.e. filtering traffic from customers as it enters the network), as opposed to what we had been doing (just “egress filtering”, toward the Internet).

We knew we needed a form of ingress ACL at the customer edge, but not what form this would take (or how much effort it would take us to get there).
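For illustration only, an ingress filter of that kind might look like the sketch below: applied inbound on the customer-facing interface, it admits only packets sourced from the prefixes assigned to (or routed towards) that customer. The prefix and interface names are invented for the example.

    ! Accept only the customer's assigned prefix as a source address.
    ip access-list extended CUSTOMER-A-IN
     permit ip 203.0.113.0 0.0.0.255 any
     deny   ip any any
    !
    interface Serial1/0
     description Customer A access circuit
     ip access-group CUSTOMER-A-IN in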

Back then, a number of customers were hooked to the network through dialup network access servers (NAS). We devised a scheme to have the NAS units apply a filter we dynamically built for each customer, when they connected, based on our provisioning database; we were all pleased with ourselves for being so clever.
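There is more than one way to build such a scheme; as a sketch only (the customer record and prefix below are invented, and the mechanism we actually used differed in detail), a RADIUS profile can push a per-session inbound filter alongside the address assignment, for example with Cisco AV-pairs:

    # FreeRADIUS users-file style entry generated from a provisioning
    # database: assign the customer's address and install an ingress
    # filter permitting only their provisioned prefix as a source.
    customer0421  Cleartext-Password := "example"
            Framed-IP-Address = 198.51.100.33,
            Cisco-AVPair += "ip:inacl#10=permit ip 198.51.100.32 0.0.0.31 any",
            Cisco-AVPair += "ip:inacl#20=deny ip any any"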

Later on, these NAS units (and this technology) would be replaced; the new units (like the rest of the network we were replacing) could do this themselves. “uRPF” was here.

The premise of uRPF was simple: build a dynamic ingress filter for the port based on the routing (well, on the forwarding computed from the routing). The administrative ease with which you could employ uRPF amazed us all, and we spent some time rolling it out, employing our BCP 38 (now BCP 84) protection with it.
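On Cisco IOS, for instance, enabling strict uRPF on a customer-facing port is a single interface command (shown as a sketch; exact syntax and platform support vary):

    interface GigabitEthernet0/1
     description Single-homed customer access
     ! Strict uRPF: drop packets whose source address would not be routed
     ! back out of the interface they arrived on.
     ip verify unicast source reachable-via rx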

One day, I had a call. uRPF had reached the ports of a multihomed customer, and he was unhappy. It had caused him an outage and had to be removed.

The reason for this became painfully clear; his routing policy was not the same as ours. We preferred to send traffic down his primary link, and he preferred to send it back to us from his secondary. Of course, the computed forwarding did not agree with the actual forwarding, and uRPF was useless here. We had to restore his old ACLs. Thankfully he forgave us and stayed on for many years.

BCP 84 introduced three flavours of the RPF algorithm. Our installed flavour (“strict”) could break multihomed customers. Another flavour (“loose”) prevented only invalid addressing from being sourced (and didn’t prevent a customer from stealing another customer’s address). Finally there was “feasible”, a variant of strict which looked to solve the multihomed problem. To this day, technology shortcomings limit the deployment of “feasible” in networks.
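In Cisco IOS terms, for example, loose mode differs from the strict configuration sketched above by a single keyword (feasible-path support depends on the platform and isn’t shown here):

    ! Loose uRPF: the source merely has to exist somewhere in the routing
    ! table, so it stops unrouted (e.g. private) sources but not a customer
    ! using another customer's addresses.
    interface GigabitEthernet0/2
     ip verify unicast source reachable-via any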

Today, we use a combination of strict uRPF and ACLs: strict uRPF is employed for single-homed customers, with ACLs instead where they are multihomed.

I continue to believe in the benefit of identifying, and authorising the transit of, packets from the correct source addresses. It’s important to note that these packets originate not just from network access circuits, but also from systems such as hosted or virtualised (“cloud”) servers.

I remember many evenings spent after conferences discussing it with my peers in the industry. I’d ask them why they were not implementing such filtering; the answers would always be the same: “my router doesn’t support uRPF”, “uRPF breaks my customers” or, the most depressing of all, “I don’t have time”.

RPF is of course a convenience; it isn’t mandatory (or even required) in order to implement filtering. I used ACLs for many years (and still do today) where RPF isn’t available or appropriate; many others do the same.

Reflection attacks today are effective mainly because service providers are ignoring (or otherwise not employing) filtering recommendations; this is, I feel, to the detriment of us all.

 

