I/O Virtualization Shines at VMworld 2009

It started with virtual memory, then virtual machines (CPUs), then virtual storage, and now I/O virtualization (IOV), where the I/O path from the server to the peripheral is itself virtualized. Traditionally, I/O devices connect to the server through some sort of interface or adapter, e.g., a NIC (Network Interface Card) or an HBA (Host Bus Adapter), which sits inside the physical server.

I/O virtualization moves the adapters out of the server and into a switching box. This allows the adapters to be shared across many physical servers, which drives up adapter utilization, often less than 10-15% in a non-virtualized world. Fewer adapters mean less power and cooling. Adapters also take up considerable space inside servers, and moving them out allows 1U servers to be used in place of 2U ones.
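
To put rough numbers on the consolidation argument, here is a back-of-the-envelope sketch. The server count, per-adapter utilization, and target utilization below are illustrative assumptions, not vendor data:

```python
import math

# Back-of-the-envelope IOV consolidation arithmetic.
# All figures below are illustrative assumptions, not vendor data.
servers = 20            # physical servers sharing the switching box
util_dedicated = 0.10   # typical utilization of a dedicated adapter (~10%)
target_util = 0.60      # utilization a shared, pooled adapter can be driven to

dedicated_adapters = servers                 # today: one adapter per server
offered_load = servers * util_dedicated      # total load, in adapter-equivalents
shared_adapters = math.ceil(offered_load / target_util)

print(f"dedicated: {dedicated_adapters} adapters")  # -> 20
print(f"shared:    {shared_adapters} adapters")     # -> 4, roughly a 5:1 reduction
```

Even with these conservative assumptions, pooling trims the adapter count by roughly 5:1, with corresponding savings in power, cooling, and rack space.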

Moreover, depending on the vendor, the application server only needs to be touched once, with the installation of one or more high-bandwidth IOV adapters. Changes to the network technologies, say from 4Gb Fibre Channel to 8Gb Fibre Channel, happen only at the switching box. What's more, different I/O protocols can be carried by a single IOV adapter, further reducing complexity, disruption, power, and cost.

On the standards front, in June 2008 the PCI Special Interest Group (PCI-SIG) announced it had completed a suite of I/O virtualization specifications that, in conjunction with system virtualization technologies, allow multiple operating systems running simultaneously on a single computer to natively share PCI Express devices. These specifications are grouped into three areas:

  • Address Translation Services (ATS) provides a set of transactions for PCI Express components to exchange and use translated addresses in support of native IOV.
  • Single Root IOV (SR-IOV) provides native IOV in existing PCI Express topologies where there is a single root complex (a brief sketch of enabling SR-IOV follows this list).
  • Multi-Root IOV (MR-IOV) builds on the SR-IOV specification to provide native IOV in new topologies (such as blade servers) where multiple root complexes share a PCI Express hierarchy.
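
To make SR-IOV concrete, here is a minimal sketch of how a host can expose and enable virtual functions (VFs), assuming the sysfs interface that later Linux kernels provide for SR-IOV-capable adapters; the PCI address and VF count below are hypothetical examples:

```python
from pathlib import Path

# Physical function (PF) of an SR-IOV-capable adapter.
# The PCI address is a hypothetical example; requires root to modify.
PF = Path("/sys/bus/pci/devices/0000:03:00.0")

# How many VFs the device advertises.
max_vfs = int((PF / "sriov_totalvfs").read_text())
print(f"device supports up to {max_vfs} virtual functions")

# Ask the driver to instantiate 4 VFs.
(PF / "sriov_numvfs").write_text("4")

# Each new VF appears as a symlink (virtfn0, virtfn1, ...) under the PF,
# pointing at its own PCIe device entry.
for vf in sorted(PF.glob("virtfn*")):
    print(vf.name, "->", vf.resolve().name)
```

Each VF then shows up as its own PCIe device that a hypervisor can assign directly to a guest, which is what lets multiple operating systems natively share one physical adapter.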

This suite of standards has been leveraged by many of the IOV vendors at VMworld:

Xsigo – uses InfiniBand Host Channel Adapters and the company's software on the physical servers to present virtual adapters (vNICs and vHBAs) to VMs. It transports all I/O traffic to its I/O Director, where it simultaneously switches TCP/IP/Ethernet, Fibre Channel, and InfiniBand to real I/O devices. Most notably, VMware embraced Xsigo at VMworld: every server in its impressive 32-rack data center and every server in its booth used Xsigo virtual I/O.

NextIO – The first company to develop IOV solutions using PCI Express, NextIO uses an inexpensive card and cable to simply extend a server's PCIe bus to NextIO's switching box rather than using an IOV adapter. NextIO is focused on several key vertical solutions and has partnered with several leading industry companies, including IBM, NVIDIA, Dell, and Marvell. At VMworld, it demonstrated a solid-state disk inside its switching box to help with high-performance applications.

VirtenSys – Like NextIO, VirtenSys also extends the PCIe bus with an inexpensive card and cable. It announced availability of its IOV switches only in August 2009, yet it won the Best Technology Award at VMworld. It bills itself as the first vendor to market to consolidate, virtualize, and share all the major types of server networking and storage connectivity, including Ethernet, Fibre Channel over Ethernet (FCoE), SAS/SATA, and Fibre Channel, without requiring any changes to the servers, networks, or I/O adapters. It also claims to be the most scalable, and it has some Xyratex heritage.

Aprius – Aprius did not have a booth at VMworld, but its President and CEO, Varun Nagaraj, was there extolling the virtues of its technology, which is still under development. When complete, Aprius Gateway Systems will use an extended PCIe bus to virtualize and provide a unified access layer to a common pool of PCI Express-based I/O adapters, including Ethernet, Converged Enhanced Ethernet (CEE), iSCSI, and FCoE.

QLogic – showed, among many other things, its dual-port 10-Gigabit PCIe Gen 2.0 Intelligent Ethernet Adapter, which supports SR-IOV and complete TCP/IP offload. Here, QLogic is leveraging its recent acquisition of NetXen.

Mellanox – This InfiniBand supplier has also developed a server-edge gateway product, BridgeX, which supports either a 40Gbit/s InfiniBand link or a 10GigE link to servers. Xsigo utilizes Mellanox technology.

IOV players who were not at VMworld include:

Neterion – enables I/O virtualization of network traffic with its 10-Gigabit Ethernet controllers.

Solarflare Communications – its Solarstorm 10-Gigabit Ethernet server controller chips support a large number of vNICs, virtual PCIe functions, hypervisor cut-through, and SR-IOV.

All the IOV vendors' marketing pitches suggest there is a battle to be fought at the “top of the rack”, where Cisco switches typically reside. They hope to win this battle with IOV products that deliver more function and cost less than Cisco's. Certainly, the advent of technologies like FCoE demonstrates that users want to preserve infrastructure investments while evolving their data centers rather than revolutionizing them. IOV lets users move in baby steps. As such, it has a future, but it is not yet clear just how long that future will be.