I will admit, I was surprised recently to discover that VMware has announced the end of life for third-party virtual switch (vSwitch) support. Third-party vSwitches have been part of the vSphere ecosystem for many years now, and that relationship with other vendors seems to be coming to a close.

It may help to look back on where vSwitches came from, and why third-party ones in particular arose. The concept of a virtual machine is now pretty well understood: taking many servers and allowing them to reside within a single piece of hardware, making better use of that hardware. For physical computers to communicate, they connect to switches (or, way back, hubs) through a network card. In a virtual environment, the same is true, but where CPU and memory are self-contained and can be passed to the host with no other interference, the network is subtly different. CPU, memory, and storage follow a direct “write down, read up” path; network traffic for a single stream can go up, down, and around and around. Something has to decide whether network traffic stays within the host computer and connects to another virtual machine, or is passed to the physical network interface and on to the outside world. The simplest option would be to push all traffic to the outside world, but this would cause a lot of “hairpinning,” where traffic goes out and immediately comes back into the host. A network switch is therefore required as part of the software that makes up a hypervisor.
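To make that decision concrete, here is a minimal sketch in Python of the forwarding choice every virtual switch has to make. The names (Port, VirtualSwitch, and so on) are invented for illustration; this is a conceptual model, not VMware’s actual implementation:

```python
# A minimal conceptual model of a virtual switch's forwarding decision.
# All names are invented for illustration; this is not VMware's code.

class Port:
    """An endpoint: a VM's virtual NIC or the host's physical uplink."""
    def __init__(self, name):
        self.name = name

    def receive(self, frame):
        print(f"{self.name}: received frame for {frame['dst_mac']}")


class VirtualSwitch:
    def __init__(self, uplink):
        self.uplink = uplink    # the physical NIC
        self.local = {}         # destination MAC -> local VM port

    def connect_vm(self, mac, vnic):
        """Register a VM's vNIC so frames for its MAC stay on the host."""
        self.local[mac] = vnic

    def forward(self, frame):
        """Deliver VM-to-VM traffic in software; send the rest to the uplink.

        Handling local traffic here is what avoids hairpinning: frames
        between two VMs on the same host never touch the physical network.
        """
        dst = frame["dst_mac"]
        if dst in self.local:
            self.local[dst].receive(frame)   # stays inside the host
        else:
            self.uplink.receive(frame)       # out to the physical network


vswitch = VirtualSwitch(uplink=Port("physical-uplink"))
vswitch.connect_vm("aa:bb:cc:00:00:01", Port("vm1-vnic"))
vswitch.forward({"dst_mac": "aa:bb:cc:00:00:01"})  # delivered locally
vswitch.forward({"dst_mac": "ff:ee:dd:00:00:09"})  # leaves via the uplink
```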

The first virtual switch was a very simple affair: not quite a hub, but not quite a fully fledged switch either. This was fine for its intended purpose. It would pass traffic between the VMs inside a host and out to the outside world, and it could be segmented, with each segment associated with a VLAN. The limitations, though, were obvious. Each vSwitch was independent of the others, even in a cluster. Not only that, but many of the systems and tools that network administrators were accustomed to using were not available: tools like traffic policing, SPAN ports, and CDP/LLDP. This caused antagonism between the many sysadmins implementing virtualization and the network admins into whose networks they were now adding extra, unmonitorable switching. Virtualization was seen as a black hole.
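As a rough illustration of that segmentation, here is a small extension of the earlier sketch (reusing the Port class above; the names are again invented, and this is only a conceptual model): each VM port carries a VLAN ID, local delivery happens only within the same VLAN, and anything leaving the host is tagged so the physical network can keep the segments apart.

```python
# Conceptual sketch of vSwitch segmentation: VM ports mapped to VLANs.
# Reuses the Port class above; names are illustrative, not VMware's API.

class SegmentedVSwitch:
    def __init__(self, uplink):
        self.uplink = uplink
        self.local = {}    # destination MAC -> (vlan_id, vnic)

    def connect_vm(self, mac, vlan_id, vnic):
        self.local[mac] = (vlan_id, vnic)

    def forward(self, frame):
        entry = self.local.get(frame["dst_mac"])
        if entry and entry[0] == frame["vlan"]:
            entry[1].receive(frame)    # same host, same VLAN: deliver locally
        else:
            # 802.1Q-style tagging, so the physical network keeps segments apart
            self.uplink.receive({**frame, "tag": frame["vlan"]})
```

Note what is missing from even a correct model this simple: no mirroring of frames to a SPAN port, no CDP/LLDP announcements to neighbouring switches, and no state shared between hosts. Each instance is an island, which is exactly the visibility gap the network admins objected to.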

These problems were ultimately fixed in vSphere with the VMware Distributed Switch. VMware saw, though, that to get the network admins on its side, it needed to integrate fully with the existing network stack. This was done initially by working with Cisco to create the Nexus 1000V switch, which plugged into the Nexus ecosystem and could be managed through the same tools and mechanisms as the physical switches that made up the rest of the network. VMware very sensibly let Cisco build the switch and opened up APIs to integrate the Nexus 1000V into vSphere. This link was maintained until vSphere 6.5, and the same APIs allowed others, such as IBM, to create their own virtual switches and integrate them alongside.

When the virtual switching ecosystem was fairly small and contained, when the vSwitch was an add-on and an interface to the network, this worked well. However, with the acquisition of Nicira, VMware began to move into network functions virtualization (NFV) with NSX. NFV and microsegmentation require much deeper hooks into the virtual machines: the switch is far more deeply integrated into the hypervisor, and it becomes an integral part of it rather than just a gatekeeper at the host’s edge. From the first, NSX-v has worked only with the VMware vSphere Distributed Switch; the third-party vSwitches, and even the original vSwitch, are completely incompatible. This is no doubt due partly to the difficulty of supporting multiple systems internally, but mostly to the fact that NSX requires such deep integration with the hypervisor that it precludes third parties.
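To see why microsegmentation needs those deeper hooks, consider the sketch below. It is again only a conceptual analogy (invented names and rule format, in the spirit of a distributed firewall rather than NSX’s actual design): policy is enforced at each VM’s virtual NIC, inside the hypervisor, so even two VMs on the same host and the same VLAN are policed. A bolt-on switch that only mediates traffic at the host boundary never sees that conversation.

```python
# Conceptual sketch of microsegmentation: per-VM policy enforced at the
# vNIC, inside the hypervisor. Names and rule format are invented.

RULES = [
    # (src_vm, dst_vm, dst_port, action), evaluated before the vSwitch
    ("web-01", "db-01", 5432, "allow"),
    ("web-01", "web-02", None, "deny"),   # block lateral movement between peers
]

def vnic_filter(src_vm, dst_vm, dst_port):
    """Return True if the frame may proceed from src_vm's vNIC.

    Because this runs per vNIC, traffic between VMs on the same host,
    which never reaches a physical switch, is still subject to policy.
    """
    for rule_src, rule_dst, rule_port, action in RULES:
        if rule_src == src_vm and rule_dst == dst_vm:
            if rule_port is None or rule_port == dst_port:
                return action == "allow"
    return False  # default deny

print(vnic_filter("web-01", "db-01", 5432))  # True: explicitly allowed
print(vnic_filter("web-01", "web-02", 22))   # False: denied at the source vNIC
```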

Over the past few years, VMware’s focus has moved from the hypervisor to the SDDC as a whole. vSAN, VxRail and other hyperconverged solutions, and integration with AWS have all taken the focus away from low-level considerations such as which vSwitch to use. Most customers now look at the whole stack, the SDDC as a whole, rather than at individual components. It should come as no shock, then, that third-party vSwitch support has been dropped; the decision was somewhat inevitable. It still surprised me, though: after many years of being available, the APIs are reaching end of life, and systems have been put in place for customers to move away.