I’ve written recently about a number of different products that help enterprises use flash as a cache to accelerate their traditional storage workloads. One product that is helping to push the whole market forward, if only by raising awareness of the options in this space, is VMware’s own vSphere Flash Read Cache.
VMware shipped the vSphere Flash Read Cache with vSphere 5.5. Unlike many of its competitors, the vSphere Flash Read Cache has a name that says exactly what it does: cache reads, regardless of the type of datastore underneath (block or file). It uses as many as eight SSD or flash devices installed locally on an ESXi server to build a Virtual Flash File System, or VFFS. Virtualization administrators can then reserve portions of the VFFS for use by individual VMDK files, up to 400 GB per virtual disk. This is done at the hypervisor level, requiring no agents in the guest OS at all, which makes it an option for every guest OS supported by VMware vSphere.
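To make the per-VMDK nature of this concrete, here is a minimal sketch of what reserving Flash Read Cache space for a single virtual disk might look like with pyVmomi, VMware’s Python SDK. The type and property names (VFlashCacheConfigInfo, vFlashCacheConfigInfo, reservationInMB) are from the vSphere 5.5 API as I recall them, and the VM and disk lookup is simplified, so treat it as illustrative rather than production code.

```python
# Sketch: set a per-VMDK Flash Read Cache reservation with pyVmomi.
# Assumes 'vm' is a vim.VirtualMachine object from an authenticated session;
# verify type/property names against the vSphere 5.5 API reference.
from pyVmomi import vim

def set_vfrc_reservation(vm, disk_label, reservation_mb, block_size_kb=8):
    """Reserve Flash Read Cache space for one virtual disk on a VM."""
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualDisk) and \
                device.deviceInfo.label == disk_label:
            cache_cfg = vim.vm.device.VirtualDisk.VFlashCacheConfigInfo()
            cache_cfg.reservationInMB = reservation_mb   # e.g. 10240 for 10 GB
            cache_cfg.blockSizeInKB = block_size_kb
            device.vFlashCacheConfigInfo = cache_cfg

            dev_spec = vim.vm.device.VirtualDeviceSpec()
            dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
            dev_spec.device = device

            spec = vim.vm.ConfigSpec(deviceChange=[dev_spec])
            return vm.ReconfigVM_Task(spec=spec)
    raise ValueError("disk %s not found on %s" % (disk_label, vm.name))

# Usage (vm obtained elsewhere): set_vfrc_reservation(vm, "Hard disk 1", 10240)
```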
Because the vSphere Flash Read Cache is just a read cache, there is no emphasis on clustering the caches, as PernixData does with its FVP product to protect writes. Live migrations of virtual machines now have the option of preserving the cache, which means it must be copied to the target host and lengthens the migration time, or of discarding the cache contents and forcing the target host’s cache to rewarm itself while performance suffers. Neither of these choices is great. With a total possible VFFS size of 32 TB per host, evacuating a host (such as for maintenance mode) with 32 TB of cache in use takes around 7.5 hours over a single 10 Gbps Ethernet link. As a result, VMware’s Distributed Resource Scheduler, or DRS, largely ignores VMs that are configured to use vSphere Flash Read Cache unless there is a serious imbalance in the cluster. That’s actually a nice consideration; many of VMware’s caching competitors can’t say they have that feature, mostly because they cannot influence what DRS considers.
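The math behind that evacuation estimate is easy to check. The quick calculation below assumes the cache contents move at the full line rate of a single 10 Gbps link, which is generous; any protocol or copy overhead only makes the window longer.

```python
# Back-of-the-envelope check on evacuating a full 32 TB VFFS over 10 GbE.
# Assumes the copy runs at line rate, which a real migration won't sustain.
cache_bytes = 32 * 2**40                 # 32 TB of cache in use
link_bytes_per_sec = 10 * 10**9 / 8      # 10 Gbps is roughly 1.25 GB/s
hours = cache_bytes / link_bytes_per_sec / 3600
print("Evacuation takes roughly %.1f hours at line rate" % hours)  # ~7.8 hours
```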
Most other competitors in the flash caching space configure their products on a per-host basis, with some form of autoconfiguration governing usage of the cache pool and the option to tweak settings on a per-VM basis if needed. That isn’t true of the vSphere Flash Read Cache, where the settings are statically configured on a per-VMDK basis. Because of this new configuration option, VMware had to increment the virtual hardware version, going from version 9 in vSphere 5.1 to version 10 in vSphere 5.5.
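If you are curious which of your VMs would need that hardware version upgrade, a short inventory pass does the job. The sketch below uses pyVmomi and assumes an already-authenticated ServiceInstance named si; it simply lists VMs whose virtual hardware version is still below vmx-10.

```python
# Sketch: list VMs whose virtual hardware is older than version 10 (vmx-10),
# since they need an (offline) upgrade before Flash Read Cache can be used.
# Assumes 'si' is an authenticated pyVmomi ServiceInstance.
from pyVmomi import vim

def vms_below_hw10(si, required="vmx-10"):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        # config.version is a zero-padded string such as "vmx-09" or "vmx-10",
        # so a plain string comparison is good enough here.
        return sorted(vm.name for vm in view.view
                      if vm.config and vm.config.version < required)
    finally:
        view.DestroyView()
```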
Both of these design decisions add up to big problems, though. First, it’s another tunable feature that some administrator has to babysit, and the administrator must guess at the correct settings. This is ridiculous; it is exactly the sort of activity that humans are bad at and computers are good at. Let the machines do it algorithmically.
Second, because the vSphere Flash Read Cache is a virtual hardware feature, it limits where a VM that is configured with the cache can run. A vSphere Flash Read Cache–enabled VM cannot be started on a host that has no flash without modifications to its hardware configuration. That has serious negative implications for DR sites, replication, and even VM template cloning.
Third, because the per-VMDK configuration is treated as a reservation against the VFFS, it is possible that you won’t be able to start a VM if space is tight in the VFFS. Again, this has serious implications for DR scenarios. Fixing an overcommitment situation like this requires altering the settings on numerous other VMs, too. By the way, the act of reconfiguring the vSphere Flash Read Cache on those VMs will cause the cache to be rebuilt and rewarmed, which in turn causes performance issues. Oops. At the very least, you won’t want to adjust all the VMs on a host simultaneously, to avoid I/O storms on your storage.
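A toy example shows how quickly those reservations can collide with a smaller flash pool at a DR site. The numbers here are made up purely for illustration; the point is only that the reservations are hard claims against the VFFS, not hints.

```python
# Illustration only: per-VMDK Flash Read Cache reservations are hard claims
# against the host's VFFS, so the total can exceed a smaller DR host's flash.
vffs_capacity_gb = 800                  # flash the DR host actually has
reservations_gb = {                     # hypothetical per-VMDK reservations
    "sql01.vmdk": 400,
    "sql02.vmdk": 300,
    "exchange01.vmdk": 200,
}

committed = sum(reservations_gb.values())
if committed > vffs_capacity_gb:
    shortfall = committed - vffs_capacity_gb
    print("Overcommitted by %d GB; some VMs will not power on until "
          "other VMs' reservations are reduced." % shortfall)
```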
For a company that claims to reduce IT OpEx, VMware has taken multiple steps in the wrong direction with the vSphere Flash Read Cache. vSphere Flash Read Cache is licensed as part of the vSphere Enterprise Plus license level. If you (1) are already at that level, (2) can put flash in all your hosts at your primary site and your DR site and keep it sized properly at both, (3) don’t mind taking outages to upgrade all your VMs to hardware version 10, and (4) are comfortable with ongoing scripting and automation to modify all your VMs’ VMDK properties, this might be a product to look at. Otherwise, the SanDisks, PernixDatas, Infinios, and Proximal Datas of the world have other options that trade some initial capital for storage performance with little to no additional operational expense.