During the Virtual Thoughts podcast on 6/29/2010, the analysts discussed various hardware aspects of virtualization, trying to determine whether the hypervisor is set to move into the hardware, and if so, how much of it, whose hypervisor, and whether such a move is part of any business model.
Virtual Thoughts is a monthly podcast that looks at the entire scope of virtualization to discuss new trends and thoughts within the virtualization and cloud communities.
This week's podcast started with a discussion of TPM/TXT and the boost it gives to virtualization security. Since TPM/TXT is based in the hardware and provides a measured launch of an operating system, the next logical discussion was whether the hypervisor itself would be placed into the hardware. Much functionality already is, in one way or another:
- Intel VT and AMD RVI both push many of the long, repetitive virtualization instruction streams down into the hardware, so that the hypervisor needs to issue only one instruction to carry out hundreds of repetitive instructions in areas such as VM-to-VM context switching and similar tasks such as zeroing memory.
- VMware VAAI does the same thing for the storage subsystem.
- Cisco Palo adapters and HP Virtual Connect seem to be moving the virtual network into hardware as well, in effect providing adapters that have a set number of physical ports on one side but can present hundreds of virtual NIC ports (physical virtual NICs, or pvNIC ports) on the other. Combined with VMDirectPath and other hypervisor-bypass technologies, this capability could allow a VM to go directly to the pvNIC ports, which would also increase network performance. To get to this point we are still waiting on Multi-Root IO Virtualization.
- TPM/TXT pushes some aspects of security policy into the hardware as well; a quick way to check a host for these hardware features from software is sketched just after this list.
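Since all of these capabilities surface as hardware features that software can probe, here is a minimal sketch, assuming a Linux host where the kernel exposes CPU feature flags in /proc/cpuinfo (vmx for Intel VT-x, svm for AMD-V, and on some kernel versions ept/npt for nested paging) and where a loaded TPM driver registers a device under /sys/class/tpm. It only illustrates how these assists become visible to the operating system, not how any particular hypervisor uses them.

```python
# Minimal sketch (assumptions noted above): probe a Linux host for the hardware
# assists discussed in this list -- CPU virtualization extensions and a TPM.
import os

def cpu_flags():
    """Return the set of CPU feature flags reported in /proc/cpuinfo."""
    with open("/proc/cpuinfo") as cpuinfo:
        for line in cpuinfo:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("CPU virtualization assist (VT-x/AMD-V):", bool(flags & {"vmx", "svm"}))
# Nested paging (EPT/RVI) flags appear in this list only on some kernel versions.
print("Nested paging hint (ept/npt):", bool(flags & {"ept", "npt"}))
tpm_path = "/sys/class/tpm"
print("TPM device present:", os.path.isdir(tpm_path) and bool(os.listdir(tpm_path)))
```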
So could the hypervisor actually be pushed into hardware, with the instruction pipeline dedicated to a given VM? In other words, the atomic unit becomes a VM, not an application. Would we end up with a one-to-one mapping between cores and VMs instead of the current overcommit capability? Does this actually make a difference as Intel pushes more and more cores into a given socket? If the hypervisor does move into hardware, this is a 3-5 year project that may have started a year or so ago.
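To make the overcommit question concrete, here is a back-of-the-envelope comparison using purely hypothetical numbers (the host size, VM count, and vCPU counts below are made up for illustration): today a hypervisor time-slices many vCPUs onto each core, while a strict one-to-one core-to-VM mapping would cap consolidation at the core count.

```python
# Hypothetical numbers only: compare today's vCPU overcommit with a strict
# one-to-one mapping between cores and VMs.
physical_cores = 2 * 6      # e.g., a two-socket, six-core host
vms = 30                    # VMs consolidated onto that host today
vcpus_per_vm = 2

overcommit_ratio = (vms * vcpus_per_vm) / physical_cores
print(f"vCPU:pCPU ratio with overcommit: {overcommit_ratio:.1f}:1")

# With one core per VM, the same host tops out at one VM per core,
# no matter how idle those VMs are.
print(f"VMs per host with a 1:1 core mapping: {physical_cores}")
```

How much that trade-off matters shrinks as core counts per socket climb, which is exactly the question raised above.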
That discussion led to the question of whether we are resource bound in any way at the moment within a virtualization host:
- Cores, cores, and more cores… CPU does not seem to be much of an issue.
- 10G networking with Pass-thru modes… Network should not be an issue.
- TB memory configurations within some systems… Memory does not seem to be an issue.
- Better memory overcommit technologies… VMware is leading the way with this.
- 10G Ethernet, 12G FC, converged network adapters… Storage does not seem to be much of an issue.
Virtualization pushed these technologies to grow and provide more performance, but even so, at the moment virtualization hosts are more IO bound than anything else, due more to implementations and designs than to the existing technologies. In other words, not everyone is using high-speed storage or networking, but they do make use of multiple cores and larger memory configurations.
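A rough illustration of that point, with entirely hypothetical per-VM traffic figures: the same consolidation that fits comfortably on a 10G design saturates an older 1G design, so the bottleneck is the implementation choice rather than the available technology.

```python
# Hypothetical figures only: the same VM load on 1GbE versus 10GbE uplinks.
vms = 40
avg_mbps_per_vm = 60        # illustrative combined network/IP-storage demand per VM

demand_gbps = vms * avg_mbps_per_vm / 1000
for uplinks, speed_gbps in ((2, 1), (2, 10)):
    capacity_gbps = uplinks * speed_gbps
    status = "IO bound" if demand_gbps > capacity_gbps else "headroom"
    print(f"{uplinks} x {speed_gbps}GbE: {demand_gbps:.1f} Gb/s demand "
          f"against {capacity_gbps} Gb/s capacity -> {status}")
```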
What other technologies are now in hardware?
In some ways EMC VPLEX puts SVMotion, disaster avoidance, and business continuity into hardware, but it still requires a vMotion/vTeleport to make use of it, and that has not yet been pushed into hardware.
There are lots of functions of a full virtual environment that are not in hardware such as:
- Fault Tolerance
- High Availability
- Dynamic Resource Load Balancing
- LiveMigration/vMotion
- etc.
There is new functionality being made available for virtualization every day, which led to a discussion of what drives virtualization into hardware.
We mainly see the answer to this question as performance, not a business driver. VMware is rebranding itself as a management product company, not necessarily a hypervisor company. The big push is IT as a Service, which means the basic hypervisor is a commodity, and commodities are ripe for movement into the hardware. But which hypervisor would be moved into the hardware?
- VMware’s hypervisor is the most advanced.
- Xen has a large open source following.
- KVM has an even larger open source following.
- Hyper-V has Microsoft to push it along.
But it boils down to what the chipmakers will do. Could we be heading toward an environment that allows you to download the hypervisor of your choice? Are we heading toward the least common denominator of hypervisors, or should the chipmakers go for the most advanced one and allow functionality to be dialed back? The differences between all these hypervisors are stark: the KVM, Xen/Hyper-V, and VMware hypervisors all have distinct designs, and I am not so sure they are interchangeable. So will one chipmaker pick up KVM/Hyper-V, another Xen, and a third VMware?
Who wins in the end?
- The chipmakers.
- VCE may also win, as they are pushing more hardware than hypervisor, and much of the push to move functionality into hardware is within this group.
It could also mean that many companies need to decide whether they want to sell hardware ‘stacks’ or just tools that work within the stacks. Citrix and Microsoft are well known for selling tools that work within many stacks.
Only time will tell what will happen in a few years, but we can already see a glimmer of what is to come as the years roll by: for performance reasons, more functionality will be pushed into hardware. How much is still the question.