Participate in any virtual desktop design session and you will notice that the discussion almost always moves immediately to how many IOPS per virtual desktop session should be expected. More often than not, the leader of these conversations will answer “it depends”, a statement that does not give most end users a warm and fuzzy feeling because it usually comes with a pretty heavy storage price tag. Unfortunately, many factors affect overall performance. Within the virtual desktop session, the number and type of applications you run, the layers of security configuration and policy that are applied, and how you handle user personalization all have an impact on IOPS. Many of these challenges can be addressed by applying good standard virtual desktop practices, which are often different from the way physical desktops are traditionally architected.
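
A back-of-envelope calculation shows why “it depends” is the honest answer. The sketch below uses purely illustrative assumptions (per-user IOPS, write mix, RAID penalty); real figures come from assessing your own workloads.

```python
# Back-of-envelope VDI IOPS sizing -- every figure here is an illustrative
# assumption, not vendor guidance; substitute numbers from your own assessment.

def backend_iops(users, iops_per_user, write_ratio, write_penalty=2):
    """Estimate the IOPS the array must actually serve.

    Writes are weighted by the RAID write penalty: 2 for RAID 10,
    4 for RAID 5, 6 for RAID 6.
    """
    frontend = users * iops_per_user
    reads = frontend * (1 - write_ratio)
    writes = frontend * write_ratio * write_penalty
    return frontend, reads + writes

# 500 desktops at an assumed 10 steady-state IOPS each, with the
# write-heavy 70/30 mix often seen in VDI, on RAID 10:
frontend, backend = backend_iops(500, 10, 0.70)
print(f"front-end: {frontend:,.0f} IOPS, back-end: {backend:,.0f} IOPS")
# -> front-end: 5,000 IOPS, back-end: 8,500 IOPS
```

Change any one input, the application load, the write mix, or the RAID level, and the answer moves substantially, which is exactly why no single per-desktop number survives contact with a real deployment.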

The next part of the conversation leads to the storage itself. Does the user leverage their existing storage infrastructure? Do they invest in additional storage capacity, move to Solid State Disk (SSD) arrays, or even use local storage? Each is a technically valid option, but none necessarily addresses the conditions that plague storage IOPS performance: boot storms and system swap, and, for environments with mostly persistent desktops, the increased capacity required to store virtual desktop images. For these, a storage optimization solution could be the answer.
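
The capacity side of that problem is straightforward arithmetic. A minimal sketch, with every number an assumption, showing why persistent desktops drive capacity linearly with user count:

```python
# Rough capacity sketch for persistent desktops -- illustrative numbers only.

users = 500
image_gb = 40                      # assumed persistent desktop image size
raw_tb = users * image_gb / 1024   # each user keeps a full writable image
print(f"raw capacity: {raw_tb:.1f} TB")          # -> raw capacity: 19.5 TB

# Largely identical Windows images de-duplicate heavily; at an assumed
# 10:1 ratio the same desktops fit in a fraction of the space:
print(f"at 10:1 de-dupe: {raw_tb / 10:.1f} TB")  # -> at 10:1 de-dupe: 2.0 TB
```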

Reducing the IO

Virtual desktop vendors publish reference architectures with lab-tested IOPS results, but the published numbers rarely hold up under real-life workloads, and many of these designs add a tier of storage infrastructure that increases the cost and complexity of managing the virtual desktop environment. The Greenbytes IO Offload Engine tackles the IO performance issue without changes to the existing storage infrastructure. The SSD appliance-based solution leverages a patented inline primary storage de-dupe technology to reduce boot storm and system swap IO by up to 20x, and has seen total VDI image storage reductions of up to 80:1. It does this by integrating with the underlying hypervisor and offloading the golden and replica disk images and temporary vDisks to its appliance.
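
Greenbytes does not publish its implementation, but the general technique behind inline primary storage de-duplication is easy to sketch: hash each block before it is written, and store only blocks whose hash has not been seen. A toy, hypothetical illustration of why hundreds of cloned desktop images collapse to very little physical storage:

```python
import hashlib

class InlineDedupStore:
    """Toy content-addressed block store illustrating inline de-duplication.

    Not Greenbytes' implementation -- just the general technique: identical
    blocks (e.g. across hundreds of cloned desktop images) are stored once.
    """

    def __init__(self):
        self.blocks = {}    # hash -> block data (unique blocks only)
        self.logical = 0    # bytes written by clients
        self.physical = 0   # bytes actually stored

    def write(self, block: bytes) -> str:
        digest = hashlib.sha256(block).hexdigest()
        self.logical += len(block)
        if digest not in self.blocks:   # first time this block has been seen
            self.blocks[digest] = block
            self.physical += len(block)
        return digest                   # the caller keeps the reference

    def read(self, digest: str) -> bytes:
        return self.blocks[digest]

# 100 "desktop images" sharing the same 4 KB OS block dedupe to one copy:
store = InlineDedupStore()
refs = [store.write(b"\x00" * 4096) for _ in range(100)]
print(store.logical // 1024, "KB logical,", store.physical // 1024, "KB physical")
# -> 400 KB logical, 4 KB physical
```

Because the hash check happens before the write reaches disk, a boot storm of near-identical reads and writes is largely absorbed by the index and the SSD tier rather than hammering the backing array.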

The IO Offload Engine’s architecture is data center ready, designed with full HA components: redundant power, network, and storage controllers, and hot-swappable drive bays, attached via either iSCSI or Fibre Channel. The appliance runs ZFS on illumos, the open-source derivative of OpenSolaris. Greenbytes stores only the virtual desktop images and the running system swap files, and pairs them with fast local storage, such as flash, to hold its hash indexes. All other user data remains on more traditional storage devices. A single appliance currently supports up to 4,500 concurrent users, and appliances scale linearly.
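
The pairing with flash matters because de-duplication keeps an index entry per unique block, and every write triggers a lookup against that index, which is only fast if it sits in RAM or on low-latency media. A rough sizing sketch using ZFS’s commonly cited rule of thumb of roughly 320 bytes of in-core dedup table per unique block (an assumption; the actual figure varies by pool):

```python
# Rough ZFS dedup-table (DDT) sizing -- the ~320 bytes/entry figure is a
# widely cited ZFS rule of thumb, not a Greenbytes specification.

unique_data_tb = 2.0    # assumed unique data remaining after de-dupe
block_kb = 128          # assumed average record size
entry_bytes = 320       # assumed in-core DDT bytes per unique block

unique_blocks = unique_data_tb * 1024**3 / block_kb   # TB -> KB, / 128 KB
ddt_gb = unique_blocks * entry_bytes / 1024**3
print(f"~{unique_blocks / 1e6:.0f}M unique blocks -> DDT ~{ddt_gb:.1f} GB")
# -> ~17M unique blocks -> DDT ~5.0 GB
```

Keeping an index of that size on flash rather than spinning disk is what keeps inline de-duplication from throttling write latency.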

Since its inception in 2007, Greenbytes’ initial target has been telecommunications and Desktop-as-a-Service providers, where capacity is key, but it is now focusing on delivering its solution to end-user markets. Storage optimization software and SSD arrays provide additional benefit to virtual desktop environments over traditional NAS and SAN solutions. Greenbytes’ approach, with its added inline de-duplication, can deliver a reduction in storage investment and an increase in end-user virtual desktop performance.

Greenbytes can be found at http://www.getgreenbytes.com

One reply on “Greenbytes Addresses VDI IO Without Changing Your Storage”

  1. Joe – This is a great topic. Visibility into IOPS comes up a lot in my conversations with customers and prospective customers. I can give them alerting and reporting on IOPS and storage metrics, as well as the ability to take automated fix actions. If they ever want to take a longer-term remediation action, Greenbytes might be a good place for them to direct their attention.
