During VMware’s online launch event, the company announced the latest release of its flagship product, vSphere 6.0. This release has a lot of great features and enhancements. In this article, I zero in on one specific enhancement: the evolution of vMotion technology into vDistance technology.

Prior to this release, vMotion was limited to migrations within a single vCenter Server instance. With the release of vSphere 6.0, vMotion can now migrate virtual machines between different vCenter instances within a data center, as well as to remote vCenter instances. This enhancement breaks down the barriers between data centers, presents them as a single entity, and opens up a new world of possibilities.

If you think this sounds great, I completely agree. As great as it sounds, however, there are a few potentially thorny requirements to meet before you can use this enhancement. First, and most obvious, the source and destination vCenter Servers both have to run vSphere 6 or later. Both vCenter Servers need to be part of the same SSO domain when using the user interface; that limitation is lifted when using the vSphere 6+ APIs to move virtual machines between SSO domains. Each vMotion migration will consume 250 Mbps of bandwidth for its duration, and there must be a Layer 3 network connecting the vCenter instances.
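To make that API path concrete, here is a minimal sketch of a cross-vCenter migration using pyVmomi's RelocateVM_Task with a ServiceLocator, which is the piece that enables a move between SSO domains. All hostnames, credentials, object names, and the thumbprint are placeholders, and find_obj is just a simple inventory search written for this sketch.

```python
# Minimal sketch of a cross-vCenter vMotion via the vSphere 6 API (pyVmomi).
# Every hostname, credential, and object name below is a placeholder.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

def find_obj(si, vimtype, name):
    """Linear search of the inventory for a managed object by name."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

ctx = ssl._create_unverified_context()  # lab only; verify certs in production
src = SmartConnect(host="vcenter-src.example.com",
                   user="administrator@vsphere.local", pwd="...", sslContext=ctx)
dst = SmartConnect(host="vcenter-dst.example.com",
                   user="administrator@vsphere.local", pwd="...", sslContext=ctx)

vm = find_obj(src, vim.VirtualMachine, "my-vm")

# The ServiceLocator tells the source vCenter where to hand the VM off;
# it is what makes migration between SSO domains possible via the API.
locator = vim.ServiceLocator(
    url="https://vcenter-dst.example.com",
    instanceUuid=dst.content.about.instanceUuid,
    sslThumbprint="AA:BB:...",  # destination vCenter's SSL thumbprint
    credential=vim.ServiceLocator.NamePassword(
        username="administrator@vsphere.local", password="..."))

spec = vim.vm.RelocateSpec(
    service=locator,
    pool=find_obj(dst, vim.ResourcePool, "Resources"),
    folder=find_obj(dst, vim.Folder, "vm"),
    datastore=find_obj(dst, vim.Datastore, "datastore1"),
    host=find_obj(dst, vim.HostSystem, "esxi-dst.example.com"))

task = vm.RelocateVM_Task(spec)  # monitor task.info.state until 'success'
```

The ServiceLocator carries the destination vCenter's identity and credentials, so the source vCenter can authenticate the handoff on your behalf even when the two sit in different SSO domains.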

This vMotion enhancement addresses another “pain point”: migrating between different vSphere Distributed Switches (VDS). If you’ve ever tried to move a virtual machine from cluster to cluster, each with its own VDS, then you know exactly what I am talking about. This enhanced feature allows migrations from vSphere Standard Switch (VSS) to VSS, from VSS to VDS, and from VDS to VDS; during the migration, vMotion transfers the VDS port metadata. Migration from VDS back to VSS is not supported in this release, so there is still a little room for improvement.
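In API terms, the switch change rides along in the relocate spec as a device edit. The sketch below (again pyVmomi, with placeholder names) shows one assumed way to rebind a VM's first vNIC to a distributed portgroup as part of the migration; it reuses the spec and the find_obj helper from the previous sketch.

```python
# Hedged sketch (pyVmomi, placeholder names): retarget a VM's NIC from a
# standard-switch network to a distributed portgroup during relocation.
from pyVmomi import vim

def nic_to_dvs_change(vm, dvs_portgroup):
    """Build a deviceChange entry that rebinds the first vNIC to a VDS portgroup."""
    nic = next(d for d in vm.config.hardware.device
               if isinstance(d, vim.vm.device.VirtualEthernetCard))
    nic.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo(
        port=vim.dvs.PortConnection(
            portgroupKey=dvs_portgroup.key,
            switchUuid=dvs_portgroup.config.distributedVirtualSwitch.uuid))
    return vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic)

# Used inside the relocate spec from the previous sketch ("dvpg-app" is a placeholder):
# spec.deviceChange = [nic_to_dvs_change(
#     vm, find_obj(dst, vim.dvs.DistributedVirtualPortgroup, "dvpg-app"))]
```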

Now for the real fine print: the distance migrations. To work over distance, vMotion requires a Layer 2 virtual machine network connection between the sites and a round-trip latency of 100 ms or less (the supported maximum). I have heard rumors of vMotion still working at latencies of 120 ms to 150 ms, but I imagine those are carefully managed environments, so stay safe and supported and keep your round-trip latency at 100 ms or less. Once you’ve got a baseline to work from, then you can consider pushing the envelope. Keep in mind that vMotion traffic can now cross a routed vMotion network. Remember, each migration will consume 250 Mbps of bandwidth for its duration. An added security-related perk of this upgrade is that vMotion traffic can be secured and encrypted at the transport level.
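Before trusting a long-distance migration, it is worth measuring the round trip yourself. The snippet below is a crude sanity check, not a vSphere API: it times TCP connects to a placeholder host at the remote site as a rough proxy for round-trip latency.

```python
# Rough RTT sanity check (placeholder host/port). A TCP connect costs roughly
# one round trip, so its timing is a crude proxy for network latency.
import socket, statistics, time

def tcp_rtt_ms(host, port=443, samples=5):
    """Median of several timed TCP connects, in milliseconds."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        rtts.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(rtts)

rtt = tcp_rtt_ms("vmotion-gw.remote-site.example.com")  # placeholder endpoint
print("median RTT %.1f ms:" % rtt,
      "within the 100 ms support limit" if rtt <= 100 else "outside support")
```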

While not a new feature with vSphere 6, VMware has brought back a capability that makes me ask, “What the heck took so long?” They’ve re-instituted Network File Copy (NFC), which lets you designate a specific vSphere network to carry the copy traffic when replicating to different vCenter Server instances or when cold-migrating a powered-off virtual machine. This copy network can be a Layer 2 network or even a routed Layer 3 network.
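If you want that copy traffic on its own network, the host side comes down to tagging a vmkernel adapter for provisioning traffic. Here is a hedged sketch assuming pyVmomi's VirtualNicManager and the "vSphereProvisioning" traffic type introduced with vSphere 6; the host and adapter names are placeholders.

```python
# Hedged sketch (pyVmomi, placeholder names): tag a vmkernel adapter to carry
# provisioning/NFC traffic so copies use a dedicated network.
from pyVmomi import vim

def tag_provisioning_vmk(host, vmk="vmk2"):
    """Mark a vmkernel NIC as the carrier for vSphere provisioning traffic."""
    nic_mgr = host.configManager.virtualNicManager
    nic_mgr.SelectVnicForNicType("vSphereProvisioning", vmk)

# Reusing the connection and helper from the first sketch:
# host = find_obj(src, vim.HostSystem, "esxi-src.example.com")
# tag_provisioning_vmk(host, "vmk2")  # "vmk2" is a placeholder adapter name
```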

I feel that the vMotion enhancements in this release will separate VMware from the pack, and they may increase the adoption of hybrid cloud solutions. I expect the marketing messages to emphasize how well vSphere ties into vCloud Air. Either way, it is great to see vMotion evolve into vDistance.

One reply on “vMotion Evolves into vDistance”

  1. To put some real-world context around this.

    According to AT&T,* a 100 ms round-trip time will get you from San Francisco to Tokyo (95 ms) or from Washington, DC to Frankfurt (91 ms); of course, you still need to add a little transit time through your data center switches. Most importantly, it shows that with long-distance vMotion it is possible to move your running server instance beyond the reach of practically any natural disaster short of a particularly large asteroid strike, in which case we probably have bigger things to worry about.

    It’s worth noting, though, that for some people this long-distance virtual machine shuffling is nothing new. In response to my tweets on VMware’s announcement, Xen guru Simon Crosby responded that black-hole factory CERN was doing this from Switzerland to the USA back in 2005 with Xen.

    * http://ipnetwork.bgtmo.ip.att.net/pws/global_network_avgs.html