When server virtualization started to gain its foothold, one of the key reasons for going virtual was the ROI from consolidating many servers onto one physical box. It makes sense that the same logic applies to other aspects of the data center, and we are now really seeing that consolidation within the I/O area. This is the point where virtual I/O will really start to take off. After all, haven’t we all seen this nightmare at some point in our careers?
There seem to be three styles of I/O Virtualization (IOV) taking hold within the virtual environment. At VMworld, the IOV companies were out and talking to people about their wares, products, and approaches to IOV. These three methods are:
- Converged Network Adapters used within Cisco UCS, HP Matrix, etc.
- Attached IOV top-of-rack devices such as the Xsigo device
- PCIe Extenders
Each of these provides unique benefits to your virtual environment, but which should you use? First, we need to know what each of these approaches brings to the table.
What is IOV?
I/O Virtualization is designed to reduce your overall cabling requirements for a given rack of hosts, blades, and other devices. In essence, you are changing your cabling load per virtualization host from some number of networking cables greater than two down to no more than 2-4 cables, depending on redundancy requirements. Each of these solutions presents to each host fibre channel ports, Ethernet ports, or both. Some of them present other devices as well.
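To make the cabling reduction concrete, here is a minimal back-of-the-envelope sketch in Python; all per-host cable counts are illustrative assumptions, not vendor figures.

```python
# Hypothetical cabling comparison for a rack of virtualization hosts.
# All per-host cable counts below are illustrative assumptions, not vendor figures.

HOSTS_PER_RACK = 16

# Traditional build: separate 1GbE uplinks for VM traffic, management,
# and vMotion, plus redundant fibre channel HBA ports.
traditional_per_host = {
    "vm_network": 4,
    "management": 2,
    "vmotion": 2,
    "fibre_channel": 2,
}

# Consolidated IOV build: 2-4 converged links per host depending on
# redundancy requirements; assume the minimum redundant pair here.
iov_per_host = {"converged_links": 2}

traditional_total = HOSTS_PER_RACK * sum(traditional_per_host.values())
iov_total = HOSTS_PER_RACK * sum(iov_per_host.values())

print(f"Traditional cabling: {traditional_total} cables per rack")
print(f"IOV cabling:         {iov_total} cables per rack")
print(f"Reduction:           {traditional_total - iov_total} cables "
      f"({100 * (1 - iov_total / traditional_total):.0f}% fewer)")
```

With these assumed numbers, a 16-host rack drops from 160 cables to 32.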
CNAs à la Cisco UCS, HP Matrix, etc.
Converged Network Adapters (CNAs) are one way to achieve IOV: a single device acts as both a network adapter and a fibre channel adapter (or at least speaks fibre channel over Ethernet). These are usually 10G devices that split the load on the link between storage and network requirements. In addition, they require that the switching hardware in use also understand the converged network traffic. If you are using fibre channel and traditional networking, the switch is the place where the split between the fibre channel and standard network fabrics takes place. It may not be the first switch in the switching fabric, but it will be soon afterwards. Within a Cisco UCS deployment, the CNA talks to the top-of-rack device, which in turn talks to the Nexus 7000, which does the split between standard fibre channel and Ethernet networks and cables.
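As a rough sketch of what convergence looks like from the host’s point of view, the following Python model shows a single 10G CNA port being enumerated as both an Ethernet NIC and an FCoE storage adapter, with the two personalities sharing the same physical link. The device names (vmnic2, vmhba3) and the bandwidth shares are assumptions for illustration only.

```python
# Minimal sketch of how a converged network adapter (CNA) looks to a host:
# one 10G physical port is enumerated as both an Ethernet NIC and an FCoE
# storage adapter, and the two personalities share the same physical link.
# Device names and bandwidth shares are illustrative assumptions only.

from dataclasses import dataclass
from typing import List

@dataclass
class PersonalityFunction:
    kind: str          # "ethernet" or "fcoe"
    os_device: str     # what the hypervisor enumerates, e.g. a vmnic or vmhba
    share_gbps: float  # assumed share of the converged link

@dataclass
class CnaPort:
    link_gbps: float
    functions: List[PersonalityFunction]

    def converged_load(self) -> float:
        # Storage and network traffic split one physical link, so the
        # combined share cannot exceed the line rate.
        total = sum(f.share_gbps for f in self.functions)
        assert total <= self.link_gbps, "oversubscribed converged link"
        return total

port = CnaPort(
    link_gbps=10.0,
    functions=[
        PersonalityFunction("ethernet", "vmnic2", share_gbps=6.0),
        PersonalityFunction("fcoe", "vmhba3", share_gbps=4.0),
    ],
)

print(f"Converged traffic on one port: {port.converged_load()} of {port.link_gbps} Gbps")
for f in port.functions:
    print(f"  {f.kind:8s} -> {f.os_device} ({f.share_gbps} Gbps share)")
```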
Attached IOV top-of-rack devices
Very similar to, but slightly different from, the CNA approach is the IOV top-of-rack device that connects to the host chassis via Ethernet or some other means such as InfiniBand. Unlike CNAs, which present fibre channel and Ethernet to hosts from a card within the host, these devices present fibre channel and Ethernet to hosts across the I/O extender cable: InfiniBand or Ethernet. Xsigo now has a top-of-rack IOV solution that uses standard Ethernet ports on a host instead of requiring InfiniBand. Granted, for very high-performance I/O, InfiniBand or 10G Ethernet ports are still required.
PCIe Extenders
The last class of devices is PCIe extenders. These generally include a simple PCIe bus extender card within the host chassis that connects via a proprietary cable to another chassis containing slots for your existing PCIe network and fabric cards. Those cards are then virtualized within the extender chassis and presented back to multiple hosts. On the other side of the extender chassis are usually Ethernet ports, so these devices act not only as PCIe extenders but as switches as well: a self-contained unit. Devices from Virtensys can virtualize some well-known fibre channel and Ethernet cards so that one ‘well-known PCIe card’ can be used by up to 4 or 8 hosts. Aprius, on the other hand, does roughly the same thing but is not as limited in the types of hardware it supports.
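The sharing model of an extender chassis can be sketched roughly as follows; the class, method, and card names are hypothetical, and the cap simply mirrors the ‘up to 4 or 8 hosts’ figure above.

```python
# Rough sketch of the PCIe-extender sharing model: the extender chassis holds
# a physical PCIe card and presents virtual instances of it back to multiple
# hosts over the extender link. Class, method, and card names are hypothetical;
# the cap mirrors the "up to 4 or 8 hosts" sharing described above.

class ExtenderChassis:
    def __init__(self, physical_card: str, max_hosts: int = 8):
        self.physical_card = physical_card
        self.max_hosts = max_hosts
        self.assignments = {}  # host name -> virtual adapter name

    def present_to(self, host: str) -> str:
        if len(self.assignments) >= self.max_hosts:
            raise RuntimeError(
                f"{self.physical_card} is already shared by {self.max_hosts} hosts")
        # Each host sees what looks like its own copy of the well-known card,
        # so the stock vendor driver can load unmodified.
        virtual_adapter = f"{self.physical_card}-vf{len(self.assignments)}"
        self.assignments[host] = virtual_adapter
        return virtual_adapter

chassis = ExtenderChassis("intel-10gbe-nic", max_hosts=8)
for host in ("esx01", "esx02", "esx03", "esx04"):
    print(f"{host} sees {chassis.present_to(host)}")
```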
So which to use?
Each of these solutions is quite similar in that you have a device within each host chassis that talks to an external box. Either the device within the host chassis presents multiple types of devices (CNAs), or the devices are presented from the external box to the host. In either case, you have reduced your overall cabling, but at the expense of either specialized per-host devices or proprietary cabling.
So what are the benefits?
- Reduced cabling requirements
- Possible re-use of existing Ethernet and fabric PCIe cards across more than one host chassis (Aprius/Virtensys)
- Possible use of existing hardware to connect to the top-of-rack device (Xsigo)
- Possibility of presenting GPUs, MPEG encoders, and other devices to VMs (Aprius and possibly Virtensys)
Which to use will depend on your needs and how much you wish to spend. None of these solutions are inexpensive.
Conclusion
IOV has grown not only to address storage issues, but also to allow re-use of well-known Ethernet and fabric I/O devices. Now, with the latest Xsigo device, you can ditch proprietary and expensive interconnects in favor of the standard network ports found on every host.
Given the growth of VDI and the possible requirement to virtualize even the most graphics-intensive desktop workloads, the IOV market should allow the virtualization of MPEG encoders/decoders and GPUs, as well as any other type of PCIe card a desktop or server could warrant. By making use of VMDirectPath and other pass-through technologies, it may be possible to use IOV to give a desktop direct access to required devices that are not virtualized by the hypervisor. With this type of technology, perhaps it is possible to virtualize even the most demanding desktop workloads.
Very good overview of various I/O virtualization solutions. A few clarifications about the Virtensys solution:
The simple PCIe extender card that is inside the server connects to the Virtensys I/O virtualization system using an industry-standard PCIe cable – not a proprietary cable – that is also available from third-party cable vendors. The Virtensys I/O virtualization systems virtualize standard off-the-shelf Ethernet, FC and SAS/SATA RAID I/O adapters (from third-party vendors such as Intel, QLogic and LSI) and create multiple virtual images of the I/O adapters that are presented to each one of the hosts. The virtual adapters represent exactly the physical adapters and thus don’t require that any change be made inside the server, the OS or the applications. In fact, the original device drivers from Intel, QLogic and LSI work on the server unmodified. Further, Virtensys doesn’t require any additional “Virtensys-specific” device driver or software stack layer to be installed inside the servers. The Virtensys solution works seamlessly with traditional or virtualized servers and enables all the physical hosts and virtual machines to share and access any of the I/O adapters at the same time while delivering the full connectivity bandwidth to each server.
Hello Bob,
Thank you for the clarification. Just to add one more thing to this:
The Virtensys chassis virtualizes VERY specific adapters from Intel, QLogic, and LSI but none from other vendors such as Emulex and Broadcom at this time. While the supported cards are off-the-shelf, they are limited by the models supported by the Virtensys chassis.
Best regards,
Edward L. Haletky