This post on Reddit appears to intimate that VMware is closing its virtual switch API to all parties, including its long-standing networking partner, Cisco. When I first read the post, I thought the move was a retrograde step by VMware and another veiled dig at its ecosystem. The post links to an official blog post on the VMware site stating that moving forward, VMware “will have a single virtual switch strategy that focuses on two sets of native virtual switch offerings – VMware vSphere® Standard Switch and vSphere Distributed Switch™ for VMware vSphere, and the Open virtual switch (OVS).”

From a numbers perspective, this makes sense: about 99% of VMware's customers are already using only the native Standard Switch and/or Distributed Switch. It also aligns with what industry research has been indicating. We at TVP Strategy applaud any attempt at reducing complexity, but we find it remarkably opaque that the announcement was tucked away in a blog post. To be fair, this has been expected for a while. The Cisco Nexus 1000V Virtual Switch is supported on vSphere 6, but Cisco's Application Virtual Switch is not. IBM's and HP's switches have seen no real development since vSphere 5; although they are not officially at end of life, they are in practice. The interesting part of this quiet announcement is that VMware will also provide Open vSwitch (OVS) as a viable alternative to its proprietary switches. It is important to remember that Nicira developers contributed heavily to OVS before Nicira's acquisition by VMware.

At first glance, this appears to be a protectionist move, and one that could actually damage VMware. It is estimated that VMware has approximately 600,000 to 1,000,000 unique customers globally running vSphere. Even after accounting for the 99% of those customers already running a VMware-only networking stack, that still leaves roughly 6,000 to 10,000 customers who will now have to plan for a very intrusive and complex migration. These customers will have to move from their third-party switch, most likely the Cisco Nexus 1000V or the Application Virtual Switch (which means they are very likely still running a vSphere 5 variant), to a VMware-centric Standard Switch or Distributed Switch deployment, or bite the bullet and buy NSX. Nice strategy from a one-dimensional viewpoint.
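
To get a sense of the scoping work involved, a reasonable first step is simply to inventory which distributed switches in an environment are third-party. Below is a minimal sketch, assuming the pyVmomi Python SDK and placeholder vCenter credentials (neither is part of the announcement), rather than any supported migration tool:

```python
# Minimal sketch: inventory distributed virtual switches via the vSphere API (pyVmomi)
# and flag any whose vendor string is not VMware (e.g. a Cisco Nexus 1000V).
# Hostname and credentials are placeholders; pyVmomi must be installed separately.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def list_third_party_switches(host, user, pwd):
    ctx = ssl._create_unverified_context()  # lab convenience only; verify certificates in production
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.DistributedVirtualSwitch], True)
        for dvs in view.view:
            info = dvs.config.productInfo  # vendor, product name, and version of the switch
            if "vmware" not in (info.vendor or "").lower():
                hosts = len(dvs.summary.hostMember or [])
                print(f"{dvs.name}: {info.vendor} {info.name} {info.version}, {hosts} hosts attached")
    finally:
        Disconnect(si)

if __name__ == "__main__":
    list_third_party_switches("vcenter.example.com", "administrator@vsphere.local", "changeme")
```

Anything that shows up with a Cisco, IBM, or HP vendor string is a candidate for the migration planning described above.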

However, herein lies the rub. Those customers are most likely the very same ones that VMware counts among its high-value assets. They originally chose the Nexus 1000V over VMware's native functionality because of the greater functionality and tighter integration it offered with their already considerable investment in Cisco technology, or as a sop to a networking team that did not want to cede networking to the server team.

Now, a large majority of networking teams (I am not saying all) are very conservative in their outlook. They know what they know and are generally wary of technologies that do not carry the badge of their vendor of choice. These teams will be faced with one of five choices: move to NSX, move to VMware's native switch stack, move to the SDN offering from their vendor of choice, migrate hypervisors in order to keep their virtual switch of choice, or keep the status quo and not upgrade their vSphere and networking stack. Every one of these choices is fraught with pitfalls.

Ignoring the marketing speak, the plain fact is that NSX is excellent for a virtual environment but falls down on physical integration. True, there are third-party vendors that can extend the NSX environment into the physical world, such as Palo Alto Networks and Extreme Networks (formerly Brocade), but this adds complexity to an already complicated deployment. Cisco ACI, conversely, is excellent for physical integration but falls down on virtualization integration. What will most likely ensue is a tribal war, with the network team wanting to go down the Cisco ACI route and the VMware/server team wanting to go down the NSX route. The fact is, neither is correct in a heterogeneous environment: a blend of the two will result in a much better solution. Of course, that is not a cheap answer.

Educating both the networking teams and the server teams that run the VMware stack is necessary to create a more harmonious atmosphere in the IT department, but that is a story for another time.

The most interesting part of this announcement is the statement about native Open vSwitch (OVS) support, which opens up all sorts of interesting integration scenarios. OVS is the prime networking tool used in OpenStack deployments and Red Hat virtualization stacks, and it is also supported on Hyper-V and Azure. People are decrying the closure of the networking API, but I applaud the openness of integrating OVS into the stack and making it a first-class citizen on vSphere. This is a brave move on VMware's part.
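
To illustrate why that matters, the sketch below shows the kind of uniform, scriptable provisioning OVS already offers on KVM and Hyper-V hosts, driving the standard ovs-vsctl tooling from Python. The bridge name, VXLAN peer, and tunnel key are illustrative assumptions, and whether vSphere will expose exactly this tooling depends on how VMware ships its OVS support:

```python
# Minimal sketch: provision an OVS integration bridge and a VXLAN tunnel to a peer host
# by shelling out to the standard ovs-vsctl CLI. All names and addresses are illustrative.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def provision_overlay(bridge="br-int", tunnel="vxlan0", peer_ip="192.0.2.10", vni="5001"):
    # Create (or reuse) the bridge, then attach a VXLAN tunnel port keyed to the given VNI.
    run(["ovs-vsctl", "--may-exist", "add-br", bridge])
    run(["ovs-vsctl", "--may-exist", "add-port", bridge, tunnel,
         "--", "set", "interface", tunnel, "type=vxlan",
         f"options:remote_ip={peer_ip}", f"options:key={vni}"])

if __name__ == "__main__":
    provision_overlay()
```

If the same commands and automation work unchanged on a vSphere host, the operational story for mixed OpenStack, Hyper-V, and vSphere estates becomes considerably simpler.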