Windows 2012 Hyper-V is the hypervisor for the cloud: is VMware's vSphere a dead man walking?
In Part I, I shared a chunk of what I learned from Aidan Finn's enlightening and entertaining session delivered at the E2E Virtualisation Conference in Hamburg, tastefully titled "Windows Server 2012 Hyper-V & vSphere 5.1 – Death Match". There we looked at pricing, scalability and performance, as well as storage, in questioning how bold this statement was.
Pure license-cost wise, it is more straightforward to run Microsoft Hyper-V than to add another licensed hypervisor: note that Hyper-V does have a free offering (although this version doesn't cover the virtual Windows Server instance licenses). We showed that, scalability-wise, Hyper-V can better the common competition. Storage-wise, Hyper-V, as should be expected of the newest offering, supports the newest technology: 4K sector sizes, and the largest virtual disk support. Still, if you needed more than 2TB of storage on a rival platform, you could always join multiple 2TB virtual disks together, or bypass the limit by mapping a LUN directly to the VM.
Still, besides pricing simplicity, performance improvements, and updated storage, what has Microsoft done for the latest version of Hyper-V? In Part II, let's question further Aidan's premise that Hyper-V kills vSphere.
Resource Management
Resource management is key for the modern datacentre: it allows for optimal VM consolidation. You need to collect and understand historical data for your virtual machine usage to plan for growth, and to provide a consistent performance level so you can create and adhere to SLAs.
| Capability | Hyper-V (2012) | XenServer (6.0) | vSphere Hypervisor | vSphere (5.1 Ent+) |
|---|---|---|---|---|
| Dynamic Memory | Yes | Yes¹ | Yes | Yes |
| Resource Metering | Yes | Yes² | Yes⁴ | Yes |
| Network Quality of Service | Yes | Yes | Yes⁵ | Yes⁵ |
| Data Center Bridging (DCB) | Yes | Undoc³ | Yes | Yes |
1. Memory Optimization is a feature found only in XenServer 6.0 Advanced and higher.
2. XenServer collects processor use, memory usage, and network I/O rates for the entire host system, as well as for each individual virtual machine. The Free edition is limited to 24 hours of historical data; archived data is available from Advanced Edition up to Platinum.
3. A number of Converged Network Adapters are supported within the XenServer 6.0 HCL, but no official documentation can be found for DCB and XenServer 6.0.
4. Without vCenter, Resource Metering in VMware vSphere Hypervisor is only available on an individual host-by-host basis.
5. While there are QoS features in all vSphere editions, Network I/O Control and Storage I/O Control are only available in the Enterprise Plus edition of vSphere 5.1.
Dynamic Memory
Sure, "Dynamic Memory" has been in Hyper-V since 2008 R2 SP1. In Windows Server 2012, administrators can adjust Dynamic Memory settings on virtual machines while they are running. Dynamic Memory also now has a readily configurable minimum memory setting, allowing Hyper-V to reclaim unused memory from the virtual machines. Dynamic Memory in 2008 R2 SP1 could put you in a state where VMs would not reliably restart; in Windows Server 2012 you have Smart Paging, which uses disk resource when there isn't enough physical memory available and you are restarting a VM. There is a more complete description here.
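To make that concrete, here is a minimal sketch of the new settings in the Hyper-V PowerShell module. VM names, sizes, and paths are illustrative, not a recommendation:

```powershell
# Enable Dynamic Memory with independent startup, minimum, and maximum
# values - the configurable minimum is what lets Hyper-V reclaim unused
# memory back from the VM.
Set-VMMemory -VMName "FileServer01" `
    -DynamicMemoryEnabled $true `
    -StartupBytes 1GB `
    -MinimumBytes 512MB `
    -MaximumBytes 8GB

# Point the Smart Paging file at a local disk; it is only used as a
# temporary memory resource while a VM restarts under memory pressure.
Set-VM -VMName "FileServer01" -SmartPagingFilePath "D:\SmartPaging"
```

In 2012 the minimum and maximum can be adjusted while the VM is running, which was not possible with 2008 R2 SP1 Dynamic Memory.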
This is not VMware's memory overcommit. Which is better: Microsoft's answer or VMware's?
VMware would say with their over-commitment components, you can allow the active memory of a system to perform as close to 100% as possible. I think the end results for both are comparable: you increase VM density.
Perhaps the most significant difference between VMware's and Microsoft's approaches is that Microsoft's Dynamic Memory allocates memory in real time, whereas VMware pre-allocates memory and then uses its memory management techniques to reclaim what is unused. With VMware you can oversubscribe the host's physical memory; when RAM is oversubscribed, there is a higher probability of paging, and with it an impact on performance.
Does 2012 Hyper-V's memory management outclass vSphere's? You can have pistols at dawn in your own time: but in comparison to 2008 R2, Dynamic Memory has been enhanced and offers the benefit of increased density, especially in hosted desktop/VDI environments.
Still, Resource Metering, Quality of Service, and DCB are all new for Windows 2012 Hyper-V. Does this bring Hyper-V in line with vSphere 5.1 in key capabilities for larger organisations and hosting companies?
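Resource Metering, for instance, is exposed through three cmdlets. A minimal sketch, with an illustrative VM name:

```powershell
# Start collecting per-VM resource data (CPU, memory, disk, network).
Enable-VMResourceMetering -VMName "Tenant-Web01"

# Later: report averages and totals accumulated since metering began -
# the raw material for chargeback or SLA reporting.
Measure-VM -VMName "Tenant-Web01"

# Reset the counters, e.g. at the start of a new billing period.
Reset-VMResourceMetering -VMName "Tenant-Web01"
```

Usefully, the metering data moves with the VM when it is migrated between hosts.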
Quality of Service
QoS is not just from VM to network (egress) but also from network into VM (ingress). What does QoS in Windows 2012 Hyper-V get you?
- Bandwidth management: Windows Server 2012 adds a "Minimum Bandwidth" option alongside the existing "Maximum Bandwidth" setting. Minimum Bandwidth provides a specified level of service to a workload when network congestion occurs, while still permitting that workload higher bandwidth utilisation when there is no congestion.
- Classification and tagging: before bandwidth for a workload can be managed, the workload must be classified or filtered so that the QoS Packet Scheduler or a DCB-capable NIC can act on it. Windows has sophisticated traffic classification: it can be based on 5-tuples, user type, or URI. Windows Server 2012 simplifies the management task with built-in filters you can invoke from Windows PowerShell to classify some of the most common workloads.
- Priority flow control: the goal of this mechanism, as you may well know, is to ensure zero loss under congestion in Data Center Bridging (DCB) networks. Windows Server 2012 allows you to enable Priority-based Flow Control (PFC) as long as it is supported by your NIC.
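A quick sketch of the first and third of those in PowerShell. VM names are illustrative, and the minimum-bandwidth weight assumes the virtual switch was created with `-MinimumBandwidthMode Weight`:

```powershell
# Guarantee one VM a relative share of switch bandwidth under congestion,
# and hard-cap another (MaximumBandwidth is in bits per second).
Set-VMNetworkAdapter -VMName "SQL01" -MinimumBandwidthWeight 50
Set-VMNetworkAdapter -VMName "Backup01" -MaximumBandwidth 500000000

# On a DCB-capable NIC, enable PFC for priority 3 - commonly used for
# lossless storage traffic such as FCoE.
Enable-NetQosFlowControl -Priority 3
```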
Security and Multi-tenancy
Security and isolation capabilities are fundamental if you are to host services for multiple organisations. We’ve already discussed the Extensible Switch in Hyper-V where we stated that with this feature, Microsoft opened a new front in the war with VMware.
| Capability | Hyper-V (2012) | XenServer (6.0) | vSphere Hypervisor | vSphere (5.0 Ent+) |
|---|---|---|---|---|
| Extensible Switch | Yes | Yes | No | Yes |
| Confirmed Partner Extensions | 4 | Undoc¹ | No | 3 |
| Private Virtual LAN (PVLAN) | Yes | No | No | Yes² |
| ARP/ND Spoofing Protection | Yes | No | No | vCNS App/Partner³ |
| DHCP Snooping Protection | Yes | No | No | vCNS App/Partner³ |
| Virtual Port ACLs | Yes | Yes | No | vCNS App/Partner³ |
| Trunk Mode to Virtual Machines | Yes | No | No | Yes |
| Port Monitoring | Yes | Yes | Per Port Group | Yes⁴ |
| Port Mirroring | Yes | Yes | Per Port Group | Yes⁴ |
| Dynamic Virtual Machine Queue | Yes | VMq⁵ | NetQueue⁵ | NetQueue⁵ |
| IPsec Task Offload | Yes | No | No | No |
| SR-IOV | Yes | Yes⁶ | Yes⁷ | Yes⁷ |
| Storage Encryption | Yes | No | No | No |
1. No XenServer documentation can be located that discusses Partner Extensions to the XenServer Open vSwitch.
2. Requires the vSphere Distributed Switch.
3. ARP Spoofing Protection, DHCP Snooping Protection and Virtual Port ACLs require either vShield App or a Partner solution, all of which are additional purchases on top of vSphere 5.0 Enterprise Plus.
4. Port Monitoring and Mirroring at a granular level require the vSphere Distributed Switch, which is available in the Enterprise Plus edition.
5. Dynamic Virtual Machine Queue (DVMQ) is not supported by either XenServer or vSphere, which both support regular VMq (known as NetQueue on vSphere).
6. Whilst XenServer 6.0 provides SR-IOV support, the release notes state: "If your VM has an SR-IOV VF, functions that require VM mobility are not possible. For example, Live Migration, Workload Balancing, Rolling Pool Upgrade, High Availability and Disaster Recovery, cannot be used. This is because the VM is directly tied to the physical SR-IOV enabled NIC VF. In addition, VM network traffic sent via an SR-IOV VF bypasses the vSwitch, so it is not possible to create Access Control Lists (ACL) or view Quality of Service (QoS)." (http://support.citrix.com/article/CTX131381)
7. vSphere supports both SR-IOV and DirectPath I/O. DirectPath I/O and SR-IOV have similar functionality, but you use them to accomplish different things. As with XenServer, enabling these features disables other functions.
Dynamic Virtual Machine Queue (DVMQ)
Virtual Machine Queue (VMq) was available in Windows 2008 R2: combined with VMq-capable network hardware, it gives more efficient network packet delivery, reducing host overhead. However, VMq in Windows could get messy, as each queue's processing was statically bound to a core. In Windows 2012, DVMQ dynamically (the clue is in the name) spans processing of VMqs across more than one core, allowing a better match of network load to processor use, which results in increased network performance.
vSphere has NetQueue, which essentially does the same job as VMq. But VMware's own documentation, 'Performance Best Practices for VMware vSphere 5.0', notes that "On some 10 Gigabit Ethernet hardware network adapters, ESXi supports NetQueue". In Windows 2012 Hyper-V, DVMQ is supported on both 10 Gigabit and 1 Gigabit Ethernet adapters. But then some might say: why use NetQueue at all? Why not use VMDirectPath I/O?
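Checking and enabling VMQ on a host is straightforward; the adapter name below is illustrative:

```powershell
# List physical adapters, whether they support VMQ, and whether it is on.
Get-NetAdapterVmq

# Enable VMQ on a capable adapter; on a 2012 Hyper-V host the queue
# processing is then spread dynamically across cores (DVMQ).
Enable-NetAdapterVmq -Name "Ethernet 2"
```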
What is SR-IOV?
SR-IOV (Single Root I/O Virtualization) is a specification that allows a single Peripheral Component Interconnect Express (PCIe) physical device under a single root port to appear as multiple separate physical devices to the hypervisor or the guest operating system. The goal: boost VM performance and reduce the burden on host CPU cycles. As you can see in the chart, while SR-IOV is supported in both XenServer and vSphere, you lose features. In vSphere, the list of unavailable features includes handy things like vMotion, vCNS App, and snapshots.
One of the most significant benefits of Windows 2012 Hyper-V is that live migration remains available when SR-IOV is enabled. Not only that, live migration in Hyper-V doesn't require that both hosts have the same hardware: you can live migrate a VM from a host with SR-IOV to a host without SR-IOV.
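The setup is two steps, sketched below with illustrative names. Note that SR-IOV has to be chosen when the virtual switch is created; it cannot be switched on later:

```powershell
# Create an SR-IOV-capable external switch on a supporting NIC.
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "Ethernet 3" -EnableIov $true

# Assign the VM's network adapter an SR-IOV virtual function.
# IovWeight 0 disables it; 1-100 enables it. During a live migration
# Hyper-V transparently fails traffic over to the synthetic path.
Set-VMNetworkAdapter -VMName "LowLatency01" -IovWeight 100
```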
For further information on SR-IOV in Hyper-V, read the incredibly informative series of blogs by John Howard – Everything you wanted to know about SR-IOV in Hyper-V.
Some may counter: "What happens if you have an SR-IOV device that isn't a network card? Can you live migrate then?" That's an entertaining question, though I expect for many the interest will end at NIC support. Still, there are some nice notes on SR-IOV for Windows drivers, including an explanation of the architecture for NICs; maybe we'll cover this in more depth in another post if there's interest.
Flexible Infrastructure
| Capability | Hyper-V (2012) | XenServer (6.0) | vSphere Hypervisor | vSphere (5.1 Ent+) |
|---|---|---|---|---|
| Live Migration | Yes | Yes | No | Yes |
| 1GbE Simultaneous Live Migrations | Unlimited¹ | Undoc² | N/A | 4³ |
| 10GbE Simultaneous Live Migrations | Unlimited¹ | Undoc² | N/A | 8³ |
| Live Storage Migration | Yes | No | No | Yes |
| Shared Nothing Live Migration | Yes | No | No | Yes |
| Network Virtualization | Yes | No | No | Yes |
1. Within the technical capabilities of the networking hardware.
2. No XenServer documentation can be found that details the number of simultaneous live migrations over either 1GbE or 10GbE.
3. This value can be overridden.
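On the Hyper-V side, both the migration caps and a shared-nothing move are a handful of cmdlets. Host and VM names, limits, and paths below are illustrative:

```powershell
# Enable live migration on the host and raise the simultaneous-migration
# caps ("unlimited" in the table really means whatever the network can
# sustain; the out-of-box default is deliberately conservative).
Enable-VMMigration
Set-VMHost -MaximumVirtualMachineMigrations 10 -MaximumStorageMigrations 4

# Shared-nothing live migration: move a running VM and its storage to
# another host with no cluster and no shared storage between them.
Move-VM -Name "Tenant-Web01" -DestinationHost "HV02" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\Tenant-Web01"
```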
Hyper-V 2012: the Hypervisor for your Private Cloud?
From a flexible infrastructure and resource management point of view, there are a number of new checkboxes in Windows 2012 Hyper-V. But those checkboxes belie an extensive set of technology changes under the hood. The undoubted whizziness of live migration surviving SR-IOV aside, Hyper-V now has an incredibly similar set of functions and features to what is the embedded virtualisation solution in many server rooms and data centres.
And this isn't the only "next installment": we're going to need a bigger boat to understand Microsoft's claim that Hyper-V is the hypervisor for the cloud. In the next installment we'll expand the feature comparison and take a look at Security and Multi-tenancy, Live Migration and High Availability, and backups, as we continue trying to answer whether Microsoft can win the hearts and minds of an organisation's cloud administrators and architects.
Does Hyper-V kill VMware? There are definitely some serious kidney punches going on.