In virtual and cloud environments, network traffic often flows into a virtualization host, then back out to be forwarded to another device, usually a security appliance, before it re-enters the virtual environment. I call this a “sadly defined network,” not a software-defined one. Many of my colleagues claim that this is not true. They say that an SDN keeps east-west traffic within the hypervisor and that only north-south traffic needs to leave it. I disagree. This happens whenever bad design is implemented across virtual and physical security. “Ah!” some will say, “this is solved by micro-segmentation,” but that is not always true, either.
To identify the problem, we need to start by looking at our network and security architectures. The goal should be to keep as much traffic inside the virtual or cloud environment as possible, heading out only when absolutely necessary. I often see this goal implemented so badly that I don’t understand why some systems run at all; they look like they should crater under their own weight. The worst offenders use hair-pinning to implement security features. Hair-pinning will at least double, and often triple, bandwidth requirements, and it occurs even when the components involved are east-west rather than north-south. Security decisions should be made closer to the virtual machines. This is where NSX’s distributed micro-segmentation firewall comes into play. However, most people want edge-like functionality such as network address translation, and that does not work with micro-segmentation. So once more we are faced with either placing an edge on every host, sending traffic north-south, or sending it east-west. In any case, we are potentially moving lots of data outside our virtualization hosts to somewhere else, or even between hosts on the same hardware.
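To make the trade-off concrete, here is a minimal sketch, in Python, of the difference between deciding at a central edge and deciding at the host. The rule set, the tier names, and the decide_at_host / decide_at_edge functions are all hypothetical; NSX, for example, evaluates its distributed firewall rules in the hypervisor, not in anything like this code. The point is only where the decision happens and how many wire crossings each choice implies.

```python
# Hypothetical model: where is the allow/deny decision made, and how many
# times does the packet cross a physical wire to get that decision?

RULES = [
    # (source_tier, dest_tier, action) -- illustrative only
    ("web", "app", "allow"),
    ("app", "db", "allow"),
    ("web", "db", "deny"),
]

def lookup(src_tier, dst_tier):
    """Return the first matching action, default deny."""
    for s, d, action in RULES:
        if s == src_tier and d == dst_tier:
            return action
    return "deny"

def decide_at_host(src_tier, dst_tier, same_host=True):
    """Distributed firewall: the rule is evaluated where the VM runs.
    East-west traffic between VMs on the same host never leaves it."""
    wire_crossings = 0 if same_host else 1
    return lookup(src_tier, dst_tier), wire_crossings

def decide_at_edge(src_tier, dst_tier):
    """Edge/physical firewall: the packet goes out to the appliance and back,
    so even an allowed east-west flow costs extra crossings."""
    wire_crossings = 2  # out to the edge device, then back in
    return lookup(src_tier, dst_tier), wire_crossings

print(decide_at_host("web", "app"))   # ('allow', 0)
print(decide_at_edge("web", "app"))   # ('allow', 2)
```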
Figure 1 shows a typical physical firewall configuration, with paths in, out, and then back in to implement this method of security. Even for good (passed) traffic, there is a 3x bandwidth cost: data goes in, then out, then back in via the top-of-rack switch.
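As a back-of-the-envelope check on that 3x figure, assume each allowed flow traverses a top-of-rack link once on the way out of the host, once into the physical firewall, and once on the way back in. The flow rate below is a made-up example; only the multiplier matters.

```python
# Rough arithmetic for the hair-pin cost shown in Figure 1.
# flow_gbps is an arbitrary example value; the multiplier is the point.

flow_gbps = 2.0        # application traffic that needs inspection (assumed)
traversals = 3         # in, out to the physical firewall, back in via the ToR

link_load_gbps = flow_gbps * traversals
print(f"{flow_gbps} Gbps of useful traffic -> {link_load_gbps} Gbps on the ToR links")
# 2.0 Gbps of useful traffic -> 6.0 Gbps on the ToR links
```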


GigaMon might help us here, but it is less than a full solution. GigaMon has a routing technology that very quickly redirects a series of requests to wherever they need to go. It is ideal for monitoring your environment, as you can easily pass network traffic to a network monitoring layer. Figure 3 is a sample implementation. Once there is more than monitoring involved, traffic is often depicted as using the green path, but it really uses the orange path. All we have done here is add another need for bandwidth: a fourth path for our data. What GigaMon does is very useful in its own right, but it is not quite what we want. We need to reduce bandwidth requirements along all our wires.
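The sketch below extends the same accounting to a monitoring layer. It is not GigaMon’s actual configuration language or API, just a hypothetical list of loaded paths showing that steering a copy of the traffic to a monitoring tool adds yet another path on top of the hair-pin.

```python
# Hypothetical path accounting: each entry is (path, traversals of a physical link).
# The names and numbers are illustrative, not a real GigaMon flow map.

paths = {
    "host -> ToR (egress)":              1,
    "ToR -> physical firewall":          1,
    "firewall -> ToR -> host (return)":  1,
    "ToR -> monitoring layer (copy)":    1,   # the 'fourth path'
}

flow_gbps = 2.0  # same assumed flow as before
total = flow_gbps * sum(paths.values())
for path, n in paths.items():
    print(f"{path}: {flow_gbps * n} Gbps")
print(f"total wire load: {total} Gbps for {flow_gbps} Gbps of useful traffic")
```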

Here is where micro-segmentation really comes to the fore, as do tools like Illumio and CloudPassage. The workload itself handles the heavy firewall work, while other controls pass the packets on to yet another security device to capture, identify, and so on.
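A workload-level approach might look something like the sketch below: label-based policy rendered into ordinary host firewall rules, so the allow/deny decision is made on the workload itself and only the traffic you explicitly want inspected ever heads to another security device. The labels, the policy format, and the rendering are hypothetical; Illumio and CloudPassage each have their own policy models and agents.

```python
# Hypothetical label-based policy rendered into host firewall (iptables) rules.
# This is a sketch of the idea, not Illumio's or CloudPassage's actual format.

POLICY = [
    # (source_label, dest_label, port, proto)
    ("role=web", "role=app", 8443, "tcp"),
    ("role=app", "role=db", 5432, "tcp"),
]

LABELS = {          # which IPs currently carry which label (assumed inventory)
    "role=web": ["10.0.1.10", "10.0.1.11"],
    "role=app": ["10.0.2.10"],
    "role=db":  ["10.0.3.10"],
}

def render_rules(my_label):
    """Emit the iptables commands a workload with `my_label` would apply locally."""
    rules = []
    for src, dst, port, proto in POLICY:
        if dst == my_label:
            for ip in LABELS[src]:
                rules.append(
                    f"iptables -A INPUT -p {proto} -s {ip} "
                    f"--dport {port} -j ACCEPT"
                )
    rules.append("iptables -A INPUT -j DROP")  # default deny
    return rules

for rule in render_rules("role=db"):
    print(rule)
```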


Final Thoughts
No matter how you look at SDN, once you introduce load balancing, security devices, and monitoring devices, you need to rethink your software-defined network, or you will end up with a sadly defined network: one that could triple, quadruple, or even 10x your network bandwidth requirements. Item #1 to understand is that your traffic ultimately goes only where you have wires running. No wire, no traffic. Item #2 to understand is that adding security is not just about adding in a new device; it is about determining the impact of that new security device on your network, your bandwidth, and your systems.
As system engineers, we need to consider the impact of a device on the entire network architecture, and not just throw in security devices simply because we can. Architecture needs to be considered, and actual network testing is an essential part of building and maintaining an SDN.