2013 is the year of caching. The VMworld conference was full of news about startups using expensive-yet-fast technologies like flash, SSD, and RAM to make up for deficiencies in storage performance. One of those startups was PernixData, announcing their FVP caching product.
PernixData FVP installs as a kernel module (VIB) on VMware vSphere hosts. That approach makes FVP fairly unique; few of its competitors have chosen to embed a caching layer directly in the hypervisor. As I wrote in my post about Virtual Storage Appliance considerations, using virtual machines running within a virtualized environment to provide core services to that same environment is tricky. First, the dependency graph for your environment starts to have loops in it. Second, virtualization is all about overcommitment; what happens when that overcommitment causes contention? A war between your workloads and their storage sounds like a bad day to me. PernixData skirts all of these issues by having its caching platform sit in the storage stack of the hypervisor. When there is contention at the VM level, it doesn't affect the cache's ability to deliver reliable, consistent performance. Sitting at that layer also limits what they need to worry about for compatibility and testing: since they don't touch hardware or guest OSes directly, they can simply rely on the VMware HCL.
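To make that dependency loop concrete, here's a toy illustration of the VSA bootstrapping problem (entirely my own example, nothing VMware-specific):

```python
# The environment's dependency graph gains a cycle once storage is
# served from a VM running inside that same environment.
deps = {
    "workload-vm": ["datastore"],  # every VM needs its datastore
    "datastore": ["vsa-vm"],       # the datastore is exported by the VSA
    "vsa-vm": ["datastore"],       # ...and the VSA is itself a VM on storage
}

def find_cycle(node, path=()):
    """Depth-first walk that returns the first dependency cycle found."""
    if node in path:
        return path + (node,)
    for dep in deps.get(node, []):
        cycle = find_cycle(dep, path + (node,))
        if cycle:
            return cycle
    return None

print(find_cycle("workload-vm"))
# -> ('workload-vm', 'datastore', 'vsa-vm', 'datastore')
```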
FVP relies on solid-state drives installed locally in each of your servers. That seems obvious, but blade servers often have only two drive bays, and if those bays are currently occupied by traditional disks, you might need to do some design work. Some of PernixData's competition, like Infinio, caches solely in RAM, which carries its own tradeoff. Intel's E5-2600 architecture, coupled with the current pricing sweet spot for 16 GB DIMMs, means that RAM continues to be the limiting factor in many environments. Giving some of that RAM up for a cache might be hard to do.
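Some rough numbers to illustrate the squeeze; every figure here is my own assumption, not anyone's official sizing guidance:

```python
# Rough RAM budget for a hypothetical dual-socket E5-2600 host.
dimm_slots, dimm_size_gb = 16, 16            # 16 GB DIMMs at the price sweet spot
host_ram_gb = dimm_slots * dimm_size_gb      # 256 GB per host

vms_per_host, avg_vm_ram_gb = 40, 6
committed_gb = vms_per_host * avg_vm_ram_gb  # 240 GB already promised to VMs

ram_cache_gb = 32                            # what a RAM-only cache might claim
print(host_ram_gb - committed_gb - ram_cache_gb)  # -16: now you're overcommitted
```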
Another feature that makes PernixData FVP unique is its ability to do write caching. Write caching is the holy grail here, and something most other caching vendors choose not to attempt. Usually that choice is explained away as customer demand, as if no customer would ever want such a thing. Indeed, it is a choice that shouldn't be taken lightly. Doing write caching on the hosts means the data on the array isn't necessarily consistent, so array-based snapshots aren't going to be much use. It also introduces reliability questions: what happens if your host dies while writes are still in flight?
PernixData FVP uses the network to synchronously copy writes to one or two other hosts, destaging each write to the backend storage once it's safely copied elsewhere. At first glance this sounds pretty ridiculous: why go through all the trouble of writing it elsewhere when you could just write it to the backend and be done with it? Because in many environments, writing across a network link to a dedicated SSD is still much faster than writing to the storage array. Aggregating writes also lets FVP destage more efficiently, making better decisions about how and when to write data. FVP lets you configure caching policies per VM, so you can ease your way into the idea of write caching or preserve compatibility with array-based features. As for array-based snapshots, they have been waning in usefulness as per-VM options come to market and technologies like NFS-based appliances, and the elusive vVols, herald the end of the traditional datastore. Besides, I don't like vendors telling me what I want and don't want. I do want write caching.
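To make the write path concrete, here's a minimal sketch of the idea in Python. The Store class and every name in it are my own stand-ins for the local SSD, the peer hosts, and the array; this illustrates the general technique, not PernixData's actual implementation:

```python
import queue
import threading

class Store(dict):
    """Stand-in for a local SSD, a peer host's SSD, or the backend array."""
    def put(self, block, data):
        self[block] = data

class WriteBackCache:
    """Toy write-back cache: ack after local + peer copies, destage later."""
    def __init__(self, local_ssd, peers, array):
        self.local_ssd = local_ssd
        self.peers = peers            # one or two replica hosts
        self.array = array            # the slow backend datastore
        self.destage_q = queue.Queue()
        threading.Thread(target=self._destage, daemon=True).start()

    def write(self, block, data):
        # Commit locally and to every peer synchronously; if this host
        # dies now, a surviving replica still holds the write.
        self.local_ssd.put(block, data)
        for peer in self.peers:
            peer.put(block, data)     # in reality, over a network link
        # Acknowledge to the VM right away; the array write is deferred.
        self.destage_q.put((block, data))
        return "ack"

    def _destage(self):
        # Background drain to the array. Batching at this stage is what
        # lets a cache make smarter decisions about how to write data.
        while True:
            block, data = self.destage_q.get()
            self.array.put(block, data)

cache = WriteBackCache(Store(), [Store(), Store()], Store())
cache.write(0, b"hello")  # "ack" returns before the array ever sees the write
```

A per-VM write-through policy, in these terms, would simply skip the queue and write to the array inline before acknowledging.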
Speaking of NFS, PernixData FVP is a block-only product. It will only help you if you're running Fibre Channel, FCoE, or iSCSI to your VMFS datastores. If NFS is your protocol of choice, you'll want to look elsewhere, perhaps to the NFS-only Infinio; the quick inventory check below will tell you where you stand. But if you need a storage performance boost that's easy to implement and supports nearly everything on the VMware HCL, FVP is a great option.
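For the curious, here's one way to see which of your datastores are block-backed, using the open-source pyVmomi bindings. The vCenter hostname and credentials are placeholders, and this is just a convenience sketch, not anything FVP ships:

```python
# List each datastore's type to see which are VMFS (block) vs. NFS.
# Hostname and credentials below are placeholders for your own vCenter.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)

for ds in view.view:
    kind = ds.summary.type  # e.g. "VMFS" or "NFS"
    verdict = "FVP candidate" if kind == "VMFS" else "look elsewhere"
    print(f"{ds.name}: {kind} -> {verdict}")

view.Destroy()
Disconnect(si)
```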