“SDN” is the current buzzword. Well, to be fair, “SDDC” (software-defined data center) is, but SDN is still a cool kid on the block.
However, who outside of Silicon Valley and the Fortune 500 companies truly knows the details of a software-defined network?
Let’s start with definitions of both software-defined networking (SDN) and network functions virtualization (NFV):
Now, I could not find a decent definition of NFV online, so here is my version:
NFV: The decoupling of network functions, such as firewalls, routers, and load balancers, from dedicated hardware devices, so that those functions can run as software on commodity hardware.
Although these two technologies are often bundled together under the SDN label, they are more like distant kissing cousins: related, but most definitely distinct.
SDN, or software-defined networking, at its base concerns itself with the flow of traffic from source to destination and back again, abstracting the traffic from the underlying physical network devices. It is an answer for large, dynamic environments, making the network manageable and adaptable. Some say it is also cost-effective, but that is a question for later.
As I have already alluded to, it decouples the control plane from the forwarding plane. This decoupling allows the control function to be centrally managed and programmed.
Another benefit is the abstraction of the underlying infrastructure. This allows the purchase of cheaper commodity switches to replace expensive managed devices from established vendors.
The foundation for the majority of SDN products is the OpenFlow protocol (more about this in a later post).
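Before that later post, here is a minimal sketch of what "centrally programmed" looks like in practice, using the open-source Ryu OpenFlow controller framework. Ryu is just one of several OpenFlow controllers, so treat this as illustrative rather than canonical: when a switch connects, the controller installs a lowest-priority "table-miss" rule that punts any unmatched packet up to the controller for a decision.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class MinimalController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        # Fires once per switch when it completes its OpenFlow handshake.
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything, and send unmatched packets to the controller:
        # the classic table-miss entry at the lowest priority (0).
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```

The point is not this particular rule, but the shape of the interaction: forwarding behaviour is pushed down to the switch from one central piece of software, instead of being configured box by box.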
According to the Open Networking Foundation, for a product to be defined as SDN, it requires the following features. It must:
- Be directly programmable: Network control is directly programmable because it is decoupled from forwarding functions.
- Be agile: Abstracting control from forwarding lets administrators dynamically adjust network-wide traffic flows to meet changing needs.
- Have centralized management: Network intelligence is (logically) centralized in software-based SDN controllers that maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
- Be programmatically configurable: SDN lets network managers configure, manage, secure, and optimize network resources very quickly via dynamic, automated SDN programs, which they can write themselves because the programs do not depend on proprietary software (a sketch follows this list).
- Follow open standards and be vendor-neutral: When implemented through open standards, SDN simplifies network design and operation because instructions are provided by SDN controllers instead of multiple, vendor-specific devices and protocols.
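To illustrate the "programmatically configurable" point above: most SDN controllers expose a northbound REST API, so a network manager can script policy changes instead of logging into devices. The endpoint and JSON layout below are hypothetical stand-ins of my own invention; real controllers such as OpenDaylight or ONOS expose similar, but differently shaped, interfaces.

```python
import requests

# Hypothetical controller address and flow schema, for illustration only.
CONTROLLER = "http://sdn-controller.example.com:8181"

flow_rule = {
    "name": "block-telnet",
    "priority": 100,
    "match": {"ip-protocol": "tcp", "tcp-dst-port": 23},
    "action": "drop",
}

# Push the rule to a (hypothetical) per-switch flows endpoint.
resp = requests.post(f"{CONTROLLER}/api/flows/switch-1",
                     json=flow_rule, timeout=5)
resp.raise_for_status()
print("Flow installed:", resp.status_code)
```

A script like this can be run against the whole estate in seconds, which is exactly the agility the ONF list is describing.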
The primary goal of NFV, or network functions virtualization, is to decouple network functions from the underlying hardware. A good example of NFV is a virtualized load balancer or firewall. It is the flexibility and abstraction that matter in NFV. For example, if a firewall needed more bandwidth, the systems administrators could simply deploy another firewall to take up the slack and front the pair with a virtual load balancer. Alternatively, they could move the firewall to a different host with more network capacity to handle the flow.
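Here is a hedged sketch of that scale-out scenario. The helpers (get_throughput_mbps, deploy_vnf, register_backend) are hypothetical placeholders for whatever your orchestrator or virtual infrastructure manager actually provides; the logic is what matters, not the names.

```python
# Assumed per-instance throughput ceiling for the virtual firewall image.
FIREWALL_CAPACITY_MBPS = 900


def get_throughput_mbps(instance_id: str) -> float:
    """Placeholder: query monitoring for an instance's current throughput."""
    raise NotImplementedError


def deploy_vnf(image: str) -> str:
    """Placeholder: boot a new virtual firewall from a template, return its ID."""
    raise NotImplementedError


def register_backend(lb_id: str, instance_id: str) -> None:
    """Placeholder: add the new firewall behind the virtual load balancer."""
    raise NotImplementedError


def scale_if_saturated(firewalls: list[str], lb_id: str) -> None:
    # If every running firewall is near its capacity, deploy one more
    # and put it into the load balancer's pool to take up the slack.
    if all(get_throughput_mbps(fw) > 0.8 * FIREWALL_CAPACITY_MBPS
           for fw in firewalls):
        new_fw = deploy_vnf(image="vfw-template")
        register_backend(lb_id, new_fw)
        firewalls.append(new_fw)
```

With dedicated hardware appliances, the equivalent fix is a purchase order and a maintenance window; with VNFs it is a loop like this one.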
It can be reasonably argued that NFV is but a subset of SDN that fits in at the forwarding layer, as shown in the image above. I concur with this position. However, not all SDN is NFV, and that is a critical point I will investigate in a later post.
Next, we will investigate the computing trends that are driving this move to software-defined networking away from the paradigms that have served us well for more than twenty-five years.
With the rise of virtualization and cloud computing in the modern data center, the vast majority of environments have become far more agile and responsive to user needs. This is the new norm. With the rise of the DevOps movement and continuous-deployment strategies, the static architecture of conventional networks (I will not say "legacy") is under increasing strain, even though SDN is still at a very nascent stage. This trend will only accelerate.
According to the Open Networking Foundation, the key trends driving this need are:
- Changing traffic patterns: Applications increasingly use geographically distributed databases and services across both public and private clouds. These applications require extremely flexible traffic management and access to bandwidth on demand.
- The “consumerization of IT”: The bring your own device (BYOD) trend and its kissing cousin, what I term “PWT computing” (any-Place, any-Where, any-Time), require networks that are both flexible and secure.
- The rise of cloud services: Users expect on-demand access to applications, infrastructure, and other IT resources, without long delays between request and utilization.
- “Big data” means more bandwidth: According to DOMO, the amount of data being created doubled between 2012 and 2014, and the growth is only accelerating as the internet of things (IoT) comes online. Handling these massive new data sets requires massive parallel processing capacity, which continually drives demand for additional capacity and any-to-any connectivity.
Therefore, in trying to meet the demands posed by these new and evolving computing trends, network designers found themselves constrained by the limitations of current networks when designing at scale.
The biggest issue with conventional networking paradigms is that they are what I call “DBF” (design, build, forget) paradigms. They are complex, and that complexity lends itself to stasis. Adding or moving devices and implementing network-wide policies are complex, time-consuming, and primarily manual endeavours that risk service disruption and discourage network change. Change becomes scary, and is therefore avoided.

Again, because these networks are DBF, any assumed growth is designed in at the beginning, which leads to an inability to scale. The usual method of provisioning scalability, link oversubscription, is not effective with the dynamic traffic patterns of virtualized networks. This problem is even more pronounced in service provider networks running large-scale parallel processing algorithms and their associated data sets across an entire computing pool: think Google, Facebook, and other massive internet-based companies such as Yahoo or Amazon.

Finally, companies are coming to see vendor dependence as a negative rather than a positive. Too many times they have been held back from implementing a solution or service by long vendor equipment product cycles and a lack of a standard, or even by competing standards. They are starting to demand open interfaces and standards that limit network vendors’ ability to lock in their customers, and that allow operators to easily tailor the network to their individual environments.
The next post in this series will start to look at the vendors in this space, examining their strengths, weaknesses, and obvious gaps, and where they would fit into a new, modern software-defined data center.