Recently, I had the opportunity to talk about the Shortest Path Bridging (SPB) protocol with Avaya while at Interop, one of many conversations I had with networking companies. While SPB is a very interesting protocol, my questions were about how deep into the virtual environment it extends. SPB and other networking protocols are considered by some to be network virtualization, yet I could not see this within the realm of the virtual network, and hence confusion reigned: depending on who is talking to whom, the same words can mean very different things. What I still find amazing is that most people think networking ends at the physical NIC within the virtualization host, and that what is inside matters less than what is outside.
In the discussion with Avaya, I found that SPB would be very useful within the virtual environment: a large virtual environment carries quite a bit of east-west traffic, and depending on cabling, that traffic may never flow through any top-of-rack or core switches. For example, SPB could benefit an enclosure full of blades both inside the enclosure and outside it. A hypervisor really is just a software version of a very large blade enclosure, with densities that extend well past the capabilities of even an HP Moonshot server. However, we all speak similar but distinctly different languages. Take the following diagram from my Secure Hybrid Cloud Reference Architecture (figure 1): most virtualization folks talk in terms of physical switches, virtual switches, physical NICs, and virtual NICs. The networking people I spoke to at Interop did not follow that distinction at first, but we could follow each other once we drew a diagram and added in our own bits.
However, figure 1, while I believe easy to follow, is not where much of the network virtualization work is being done today. VMware NSX is an exception, but most protocols are being developed within the physical realm of the full virtual and physical network stack. Take SPB and how it works: once you hit the pSwitch, it needs to bridge to VLANs within the virtual switch constructs inside the hypervisor. In other words, an SPB-enabled pSwitch is the end point before the virtual switch constructs (figure 2), even if you use Open vSwitch (OVS) virtual switch constructs today. Why? The answer: SPB is not quite in OVS yet.
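To make that hand-off concrete, here is a minimal sketch, assuming an OVS-backed hypervisor, of how the 802.1Q VLANs an SPB-enabled pSwitch delivers on the uplink would be mapped to per-VM access ports on the virtual switch. The bridge, uplink, and vnet names (br-virt, eth1, vnet0, vnet1) and the VLAN IDs are illustrative, not taken from any particular deployment.

```python
#!/usr/bin/env python3
"""Sketch: mapping VLANs handed off by an SPB-enabled pSwitch onto an
Open vSwitch bridge inside the hypervisor. Names and VLAN IDs are examples."""
import subprocess

def ovs(*args):
    """Run an ovs-vsctl command and fail loudly if it errors."""
    subprocess.run(["ovs-vsctl", *args], check=True)

# One bridge backs the hypervisor's virtual switch construct.
ovs("--may-exist", "add-br", "br-virt")

# The physical NIC is the uplink to the SPB-enabled top-of-rack switch;
# it carries the VLANs that the pSwitch bridges out of the SPB fabric
# as plain 802.1Q tags.
ovs("--may-exist", "add-port", "br-virt", "eth1")

# Each VM interface (vnetX) is attached as an access port on the VLAN the
# pSwitch bridges the SPB service to; VLANs 100 and 200 are just examples.
ovs("--may-exist", "add-port", "br-virt", "vnet0", "tag=100")
ovs("--may-exist", "add-port", "br-virt", "vnet1", "tag=200")
```

Note that everything SPB-specific still happens on the pSwitch side of the wire; the virtual switch only ever sees ordinary VLAN tags.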
Nor is OVS ubiquitous across hypervisors. You can add it to KVM, Xen, and Hyper-V today, but not to vSphere ESXi unless you have early versions of VMware NSX or its previous incarnation, Nicira. Yet at least one networking person stated that OVS was available in all hypervisors. While that may be true once VMware NSX ships, each hypervisor still requires OVS to be installed, and installed correctly, for it to work. Even if OVS did ship today in all hypervisors, could you use SPB or other network virtualization or software-defined networking within all hypervisors without a bunch of reworking, rethinking, and reconsideration of the network? Not likely, as many of the new protocols do not yet extend down into the virtual switches. While OVS supports the requirements for SPB today, SPB is not actually inside OVS yet. That will change.
But this leads to another consideration: network virtualization vs. virtual networks. Network virtualization is often confused with software-defined networking, which it is not. Any form of network can be software defined; SDN is just an automation layer that uses software to programmatically control how packets flow around a widespread network, in essence controlling switch forwarding tables so that packets are delivered along some shortest, predetermined, or even optimal path. SDN does this by employing tunneling protocols to ensure delivery between the end points of a tunnel, whether those are physical switches, virtual switches, virtual machines, or even physical machines. These tunnels are really just the original packets encapsulated within another packet structure, such as UDP or TCP. Yet all of these packets run over the same wires at layer 1, are encapsulated within Ethernet frames at layer 2, and get routed appropriately by the switches involved based on the various layers of packet headers.
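To illustrate what that encapsulation actually looks like, here is a minimal Python sketch of a VXLAN-style tunnel end point: it wraps an inner Ethernet frame in the eight-byte VXLAN header (RFC 7348) and ships it in a UDP datagram. The VNI, inner frame, and remote VTEP address below are made-up example values, and a real virtual switch does this in its datapath rather than in a script.

```python
#!/usr/bin/env python3
"""Sketch of VXLAN-style encapsulation: the original (inner) Ethernet frame
is prepended with a VXLAN header and carried inside a UDP datagram."""
import socket
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN port

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner Ethernet frame."""
    flags = 0x08                                    # "I" flag: a valid VNI follows
    header = struct.pack("!B3xI", flags, vni << 8)  # 24-bit VNI in the upper bytes
    return header + inner_frame

# A fake inner Ethernet frame: dst MAC, src MAC, EtherType (IPv4), payload.
inner = bytes.fromhex("ffffffffffff" "005056000001" "0800") + b"hello east-west"

packet = vxlan_encapsulate(inner, vni=5000)

# The outer IP/UDP headers are added by the host's stack; the remote VTEP
# address here is purely illustrative.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("192.0.2.10", VXLAN_UDP_PORT))
```

Every tunneled frame pays for that extra header and the outer IP/UDP wrapper, which is exactly the overhead the next question is about.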
The question with SPB is why even use complex encapsulation protocols when Ethernet itself can do the work of ensuring delivery between one end point and another. This takes advantage of the fact that most networks (virtual or physical) already have built-in bridging capabilities; otherwise they would not be able to handle current VLAN technologies. However, you can still run tunneled protocols over SPB, and this will be necessary until SPB can be placed within virtual switches. This is the Achilles' heel of many new protocols: they do not yet exist within the virtual network, and as such we need new ways to bridge east-west traffic between virtualization hosts. Avaya solves this problem by using the hypervisor SDKs to automatically detect the creation of new workloads on old or new VLANs, then configuring its physical switches to pass the appropriate VLANs into the virtual environment. In essence, because SPB does not extend into the virtual environment, Avaya is acting as an SPB-to-VLAN gateway for now.
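I do not have Avaya's code, but the general pattern reads something like the following sketch: poll the hypervisor for the VLANs in use by workloads and extend any new ones to the physical switch so the SPB fabric can bridge them. The two helpers, list_vm_vlans and map_vlan_to_spb_service, are hypothetical stand-ins for the hypervisor SDK and the switch's management API, not Avaya's actual interfaces.

```python
#!/usr/bin/env python3
"""Sketch of an SPB-to-VLAN gateway loop: watch the hypervisor for new
workloads and push the VLANs they use out to the physical switch."""
import time

def list_vm_vlans(hypervisor) -> set[int]:
    """Hypothetical stand-in for a hypervisor SDK call (vSphere, libvirt, etc.)
    that returns the VLAN IDs currently used by VM port groups."""
    return hypervisor.get("vlans", set())  # dummy inventory for the sketch

def map_vlan_to_spb_service(switch, vlan_id: int) -> None:
    """Hypothetical stand-in for the switch's management API: tell the
    SPB-enabled pSwitch to bridge this VLAN toward the hypervisor uplink."""
    print(f"[{switch}] extending VLAN {vlan_id} into the SPB fabric")

def gateway_loop(hypervisor, switch, poll_seconds: int = 30) -> None:
    """Whenever a workload appears on a VLAN the fabric does not yet carry,
    extend that VLAN into the virtual environment."""
    known: set[int] = set()
    while True:
        current = list_vm_vlans(hypervisor)
        for vlan_id in sorted(current - known):
            map_vlan_to_spb_service(switch, vlan_id)
        known = current
        time.sleep(poll_seconds)
```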
Concluding Thoughts
Some network engineers seem to ignore the virtual network as uninteresting and not really their area. Yet a well-designed network must include all components. The end goal of any network design is to deliver packets to the end points, which are not the virtualization hosts but the virtual machines they contain. We can no longer divorce ourselves from the virtual components of any network, or treat them as a black box where magic happens. Network virtualization technologies such as VMware NSX, as well as software-defined networking, require us to extend from the target to the source, whether those are physical or virtual machines. Avaya's approach is to use SDN to aid in the adoption of a new protocol until that protocol can be extended down into the virtual environment.
Perhaps we need new approaches like this to handle the plethora of network virtualization protocols that exist only in the physical world today. How do you bridge these gaps today? How do you bridge between hypervisor types? Do you employ network virtualization as well as software-defined networking?