The last mile of virtualization has multiple dimensions, depending on where you are going with it. Ask what it will take to get to 100% virtualized, whether within your data center or within a cloud (hybrid, public, or private), and the answer you will most often get is "it depends." So, what will it take to get to 100% virtualized?
That depends entirely on what your users and business require. It is possible to be 100% virtualized if you have the time and money to do so. In most cases, this last mile may cost a bit extra up front but will benefit you in the long run. If all of your systems can be moved to a standard platform that you already support, that is a huge win.
Virtualizing the last mile must encompass several things:
- Decreasing the long-term total cost of ownership (TCO)
- Moving to readily supported platforms and hardware (such as x86 computing systems)
- Avoiding any loss of functionality
In essence, the goal is a lower cost with no loss of functionality using industry-standard hardware. There are four specific cases I would like to discuss:
Use Case 1: High-Performance Graphics
Given that graphics processors can now be assigned directly to a virtual machine (using any GPU) or shared among virtual machines (using NVIDIA virtual GPUs), this last mile has been solved for nearly all hypervisors. The holdouts for virtual GPUs are KVM and Hyper-V; Xen and vSphere already have the functionality to share GPUs across VMs. This introduces an interesting need to balance GPU workloads just as you would balance CPU workloads. The tools for this are still being developed, but we can see the start of them today.
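For example, on a KVM host managed through libvirt, direct assignment of a whole GPU to a guest is a short operation. The sketch below is only an illustration of that technique; the domain name and PCI address are placeholders you would replace with your own.

```python
# Sketch: attach a GPU to an existing KVM guest via PCI passthrough (libvirt).
# The domain name and PCI address below are placeholders for illustration.
import libvirt

GPU_HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open('qemu:///system')            # connect to the local hypervisor
dom = conn.lookupByName('graphics-workstation')  # placeholder VM name

# Persist the device in the guest definition; it takes effect on the next boot.
dom.attachDeviceFlags(GPU_HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

conn.close()
```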
However, to use GPUs, you need to pick server hardware that can house them: these cards are generally not actively cooled and draw more power than traditional compute components, so the chassis must provide both the airflow and the wattage. The 3U and 4U sled form factors are popular for GPU systems and are available from Dell, HP, and Supermicro, among others. NVIDIA GRID cards are at the forefront of this movement.
Just like CPU consolidation, GPU consolidation has many benefits, even for architectural firms and others that require a bit more oomph for their graphics processing.
Use Case 2: High-Performance Computing
High-performance computing (HPC) can also be virtualized using several different techniques. VMware has introduced real-time extensions to vSphere that allow better control of the clock, which real-time applications need: a predictable clock within a VM is crucial to them. Yet not all HPC applications are real-time. Some are time sensitive only in that there is a certain amount of time to run a query or algorithm and return a result. From a real-time perspective, however, that “certain amount of time” is huge: it falls in the microsecond range, not the nanosecond range. Modern CPUs can support these times handily.
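To make that point concrete, one quick way to gauge clock predictability inside a guest is to measure how far timer wake-ups drift from what was requested. The sketch below is generic Python, not a vSphere feature; the 1 ms interval and sample count are arbitrary choices.

```python
# Sketch: measure timer jitter inside a VM to gauge clock predictability.
# Requests a 1 ms sleep repeatedly and reports how far wake-ups overshoot.
import statistics
import time

REQUESTED_SLEEP_S = 0.001   # 1 millisecond
SAMPLES = 1000

overshoots_us = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    time.sleep(REQUESTED_SLEEP_S)
    elapsed = time.perf_counter() - start
    overshoots_us.append((elapsed - REQUESTED_SLEEP_S) * 1_000_000)

print(f"median overshoot: {statistics.median(overshoots_us):.1f} us")
print(f"worst overshoot:  {max(overshoots_us):.1f} us")
```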
There are two types of HPC: CPU-based and GPU-based. Both require incredible speeds for inter-process communication. CPU-based HPC makes use of real-time extensions, CPU pinning to VMs, and other techniques. GPU-based HPC instead makes use of GPUs pinned to VMs, as you cannot currently run CUDA code through a virtual GPU (I expect that to change sooner rather than later). GPU-based HPC targets mathematically compute-intensive algorithms, while CPU-based HPC can involve mathematical, string-based, or mixed algorithms.
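As an illustration of the CPU-pinning technique, the following sketch pins each vCPU of a running guest to a dedicated host core on a KVM host via libvirt. The VM name and the vCPU-to-core mapping are placeholders, and a vSphere environment would use its own scheduling affinity settings instead.

```python
# Sketch: pin each vCPU of a running guest to a dedicated physical core (libvirt/KVM).
# The VM name and core numbers are placeholders for illustration.
import libvirt

VCPU_TO_PCPU = {0: 2, 1: 3, 2: 4, 3: 5}   # vCPU -> host core

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('hpc-node-01')    # placeholder VM name

host_cpus = conn.getInfo()[2]             # number of active CPUs on the host
for vcpu, pcpu in VCPU_TO_PCPU.items():
    # cpumap is a tuple of booleans, one per host CPU; True means allowed.
    cpumap = tuple(i == pcpu for i in range(host_cpus))
    dom.pinVcpu(vcpu, cpumap)             # applies to the running guest

conn.close()
```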
HPC within a virtual environment is possible today, but it requires specialized hardware to host GPUs, hosts with many CPUs, or better communication between hosts.
Use Case 3: Antiquated Hardware
Until recently, this was the toughest use case to solve, but there is now a solution for those who have PDP-11s, VAXen, Alphas, and older SPARC models that they just cannot get rid of yet. Stromasys has developed an application that runs within a guest operating system (or on a physical one) to interpret and dynamically translate instructions from those antiquated systems onto an industry-standard x86 platform. Most of these systems are tied to some piece of hardware, such as an electron microscope or another hard-to-replace item. By working with the vendors, it is possible to develop new interconnects that talk to the virtual machines; Stromasys does just this for its customers.
While not truly virtual-in-virtual, Stromasys looks surprisingly like an early Type 2 hypervisor. It does not yet require Intel VT, so it will run anywhere. The key is to move these antiquated, hard-to-repair systems onto something anyone can repair.
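To give a rough feel for what instruction interpretation involves, here is a toy sketch that decodes a made-up legacy instruction set and executes it with native host operations. It is purely illustrative and has nothing to do with Stromasys's actual translator or any real PDP-11, VAX, Alpha, or SPARC instruction set.

```python
# Toy sketch of instruction interpretation: a made-up "legacy" instruction set
# is decoded and executed with native host operations. Purely illustrative.

def run(program):
    registers = [0] * 4   # four general-purpose registers
    pc = 0                # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":      # LOAD reg, immediate
            registers[args[0]] = args[1]
        elif op == "ADD":     # ADD dst, src
            registers[args[0]] += registers[args[1]]
        elif op == "PRINT":   # PRINT reg
            print(registers[args[0]])
        elif op == "HALT":
            break
        pc += 1
    return registers

# Example "legacy" program: compute 2 + 3 and print it on the host.
run([("LOAD", 0, 2), ("LOAD", 1, 3), ("ADD", 0, 1), ("PRINT", 0), ("HALT",)])
```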
Will other systems, such as the AS/400, eventually be supported? I imagine so; it most likely depends on customer demand.
Use Case 4: Unsupported External Devices
Sometimes virtualization seems impossible because of older external devices such as time clocks or electron microscopes. However, there are companies working to solve these problems by creating adapters that map older interconnects to newer ones, and there are newer replacement devices that do the same job outright. What varies is the up-front cost of those new devices.
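One common pattern for such mapping is serial-over-IP: expose a legacy RS-232 device on the network so a VM can reach it. The sketch below assumes the pyserial package, and the device path and TCP port are placeholders; a production bridge would need error handling and reconnection logic.

```python
# Sketch: expose a legacy RS-232 device to a VM over TCP (serial-over-IP).
# Requires the pyserial package; device path and port are placeholders.
import socket
import serial  # pip install pyserial

DEVICE = "/dev/ttyUSB0"   # placeholder serial device
PORT = 5331               # placeholder TCP port

link = serial.Serial(DEVICE, baudrate=9600, timeout=0.1)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", PORT))
server.listen(1)

client, _ = server.accept()       # wait for the VM-side client to connect
client.settimeout(0.1)
while True:
    data = link.read(1024)        # bytes from the legacy device
    if data:
        client.sendall(data)
    try:
        outbound = client.recv(1024)   # bytes from the VM toward the device
        if outbound:
            link.write(outbound)
    except socket.timeout:
        pass
```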
Other use cases? If you have other last-mile use cases I have missed, please let me know; I am sure we can find a workable solution. However, sometimes the up-front costs are just too much. As one customer who approached me about virtualizing an HPC environment told me, it was the up-front costs that were prohibitive, not the ongoing costs. So whether or not you get to 100% virtualized these days depends on costs and politics. It is achievable, but at what cost?
Costs are going down, so I expect this will also become a non-issue moving forward. The real issue will end up being politics.
Then the next question becomes: which of these use cases can also be placed into a hybrid cloud? That is a discussion for another time.