Software-defined networking (SDN) is clearly one of the hot items of the tech field at the moment. VMware's purchase of Nicira precipitated a sea change, leading to today's plethora of SDN vendors and array of competing technologies. It reminds me of the early noughties: the introduction of virtualization, competing hypervisor technology stacks, and Unix/Linux Zones*, followed by the scramble of the incumbents, who claimed performance penalties for virtualized operating systems and platforms and spread FUD about support status and onerous licensing models.

The similarities are many. We have not yet reached the onerous-licensing stage, but the first two (performance claims and support FUD) are loud and proud. So what exactly is all this buzz about? This is the first in a series of posts explaining SDN in lay terms. This post outlines some of the history of the technology; later posts will discuss the key players in this space, how their technology is consumed, and which to use in which situations.

The History of the World, Part One

In the beginning, there were only closed and proprietary networks, and the networking gods were happy. Cisco made a lot of money. Brocade, Foundry, and others made enough to feed the goat and provide "competition" for Cisco. But there were those who looked upon this and saw that it was not good. So, in 2006, a Stanford PhD student named Martin Casado developed something called Ethane. By 2011, that idea had led to the first versions of OpenFlow. Also in 2011, the Open Networking Foundation was formed to drive standardization in the nascent industry. Its current membership listing reads like a Who's Who of networking.

OpenFlow is considered by many to be one of the first software-defined networking (SDN) standards. It defined the communication protocol that lets an SDN/OpenFlow controller interact with the forwarding plane of network devices, and it defined how to make adjustments to the network so that it can better adapt to changing business requirements.

But What Exactly Is OpenFlow, and How Does It Do Its Magic?

OpenFlow splits the data plane from the control plane, and it is the control plane that is the important thing here. This abstraction separates the decisions about where traffic should be forwarded from the machinery that actually moves it.

So, now we have a data plane and a control plane. But what are these? Let's look at an analogy. Below is a map of the Vienna Underground. (I know, why am I using the Austrian underground? I am British, so I was going to use a map of the London Underground, but I did not want to pay for the privilege, as the London Tube map is copyrighted.)

The Vienna Underground (U-Bahn-Netz), our SDN analogy

Back to our analogy. You have total control of this environment, but before you can collect passengers or even set trains on the track, you need a plan, a timetable. You need to understand which paths the trains will take to move passengers optimally from one location to another. This decision-making process is analogous to the control plane. The SDN controller (the control plane) is all about learning routes. These can be static routes, which we train the driver to follow, or dynamic routes (think of a taxi, which travels on demand by the best available route).

In our analogy, the data plane equates to the trains, the network ports are the stations, and your payload is the passengers. Your passengers enter the stations and board the trains. Then, using information gathered from the control plane (the route details), they travel across the network, changing trains at junction stations (routers) to reach their final destination.

The data plane (the trains) does not make any decisions about how to forward traffic; it receives those decisions from the control plane (the timetable). When a junction station (router) is out of order, you inform your passengers of the change and give them recommended routes that bypass the blockage.

Now, in a traditional network, the control plane is typically software installed locally, in the firmware of each router or switch, that feeds flows (i.e., rule sets that identify some kind of traffic and decide what to do with it) into the data plane.
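To make "flow" a little more concrete, here is a minimal sketch in Python of a match-action flow table. The FlowRule class, its field names, and the sample packets are illustrative inventions of mine, not the OpenFlow wire format; the point is simply that the control plane writes the rules and the data plane only looks them up.

from dataclasses import dataclass

# Illustrative only: these field names are not the OpenFlow wire format.
@dataclass
class FlowRule:
    in_port: int       # match: the port the packet arrived on
    dst_mac: str       # match: destination MAC address ("*" matches any)
    action: str        # decision: e.g. "output:2" or "drop"
    priority: int = 0  # when several rules match, the highest priority wins

# The control plane's job: decide the rules and install them.
flow_table = [
    FlowRule(in_port=1, dst_mac="aa:bb:cc:dd:ee:ff", action="output:2", priority=100),
    FlowRule(in_port=1, dst_mac="*", action="drop"),
]

# The data plane's job: no decisions, just look up the best match and act.
def forward(packet: dict) -> str:
    matches = [r for r in flow_table
               if r.in_port == packet["in_port"]
               and r.dst_mac in ("*", packet["dst_mac"])]
    if not matches:
        # A real OpenFlow switch sends a "table miss" up to the controller.
        return "send-to-controller"
    return max(matches, key=lambda r: r.priority).action

print(forward({"in_port": 1, "dst_mac": "aa:bb:cc:dd:ee:ff"}))  # output:2
print(forward({"in_port": 1, "dst_mac": "11:22:33:44:55:66"}))  # drop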

But OpenFlow uses a software controller. This is the "brains" of the network. It does not have to be local to the devices it operates, although some vendors do still load it onto their hardware. It relays information to the switches and routers "logically below" it (via southbound APIs) and to the applications and business logic "logically above" it (via northbound APIs). It is these APIs that make OpenFlow and its successors so powerful. They are the building blocks of the policy-driven network that will be at the heart of the software-defined datacenter world we are entering.
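To give a flavour of the southbound side, here is a minimal sketch using Ryu, an open-source OpenFlow controller framework written in Python (my choice of example framework, not one this post prescribes). On each switch that connects, it installs a lowest-priority "table-miss" rule telling the data plane to send any unmatched packets up to the controller:

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class MinimalController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        # A switch has connected over the southbound (OpenFlow) channel.
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything; priority 0 makes this the table-miss rule.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        # The FlowMod message is the controller telling the data plane
        # what to do: the timetable being handed to the trains.
        datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=0,
                                            match=match, instructions=inst))

You would run this with ryu-manager; a fuller application would also expose a northbound API so that business logic could install its own policies through the controller.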

* Have I not told you that nothing is new in this world? Read "Docker: What Is It? Hint: It's Not a Harbour Worker."