As the SSD invasion of the data center continues unabated, we are seeing more host-side caching solutions emerge. These solutions purport to be easier and far less expensive to implement than array-side SSD and flash, and they promise decent performance gains. The Proximal Data AutoCache is one of these products.
The Proximal Data AutoCache is a VMware vSphere hypervisor-level caching product, like PernixData’s FVP. It installs at the hypervisor level as a VIB, meaning it doesn’t require guest agents, is likely compatible with Auto Deploy, and, unlike VMware’s vFlash Read Cache, doesn’t require changes to virtual machine hardware, either. It also isn’t a virtual appliance, so it avoids issues like dependency loops, resource contention, and more complicated networking setups. Unlike its competitors, AutoCache can cache any storage mechanism in place on a VMware host, whether that’s traditional block storage via Fibre Channel or iSCSI, or NFS-based file storage. Proximal Data is fairly agnostic about the flash device as well, supporting PCI Express, SAS, and SATA-based flash. I say “fairly” because they admit some devices are better than others, and they maintain compatibility lists with those recommendations. The hardware & protocol neutrality is very nice, though, and helps speed adoption because you can likely use SSD you already have against the technologies you’re already running. Of course, if you don’t have SSD you’ll have to add some to each host, and blade systems can be tricky with their limited drive bays. It’s always some sort of tradeoff.
AutoCache is a read-only cache, and when I spoke with them they expounded on the problems write-back caching creates for array-level volume consistency. While I agree there are potentially serious issues there, I also think there are real benefits to write-back caching as a buffer for “bursty” write I/O, given a proper system design. They advertise some decent features in their read caching mechanisms, though, such as the ability to detect seriously cache-unfriendly behavior. Backups are a great example: a backup’s read process touches every block on disk once, potentially evicting all the useful data in the cache in favor of data that will never be read again. A properly designed read cache will detect the offending behavior and work to protect itself, often by just discontinuing caching for that particular VM until the behavior stops. Most large array vendors do similar things with their array-level caches, and it is nice to see the same sort of logic present in a host-based product.
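To make that concrete, here’s a minimal sketch of how a read cache might spot scan-like I/O and get out of the way. This is purely my illustration, assuming a simple LRU cache and a per-VM window of recent reads; the names and thresholds are made up, not Proximal Data’s.

```python
from collections import OrderedDict

SCAN_WINDOW = 64            # recent reads to examine per VM
SEQUENTIAL_THRESHOLD = 0.9  # fraction of sequential reads that flags a scan

class ReadCache:
    def __init__(self, capacity_blocks):
        self.lru = OrderedDict()   # block id -> data, coldest entry first
        self.capacity = capacity_blocks
        self.history = {}          # vm id -> recent block ids
        self.suspended = set()     # VMs currently bypassing the cache

    def _looks_like_scan(self, vm, block):
        recent = self.history.setdefault(vm, [])
        recent.append(block)
        if len(recent) > SCAN_WINDOW:
            recent.pop(0)
        if len(recent) < SCAN_WINDOW:
            return False
        runs = sum(1 for a, b in zip(recent, recent[1:]) if b == a + 1)
        return runs / (len(recent) - 1) >= SEQUENTIAL_THRESHOLD

    def read(self, vm, block, fetch_from_datastore):
        if block in self.lru:                 # hit: refresh recency and serve
            self.lru.move_to_end(block)
            return self.lru[block]
        data = fetch_from_datastore(block)    # miss: go to the array
        if self._looks_like_scan(vm, block):
            self.suspended.add(vm)            # backup-style scan: stop caching
        elif vm in self.suspended:
            self.suspended.discard(vm)        # behavior stopped: resume caching
        else:
            self.lru[block] = data
            if len(self.lru) > self.capacity:
                self.lru.popitem(last=False)  # evict the coldest block
        return data
```

The important design point is that a suspended VM still gets its reads served, just without polluting the cache; once its access pattern stops looking sequential, caching quietly resumes.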
Because AutoCache doesn’t perform write-back caching, it is a much less complicated product than others. For example, it doesn’t need the complicated clustering that PernixData does to protect write operations. The caches do talk to each other, though, to pre-warm a target host’s cache when a vMotion is in progress. A cold cache is a big problem for systems that are oversubscribed and relying on cache to make ends meet, performance-wise. Like PernixData, the product supports all VMware vSphere functionality, like vMotion & DRS, HA, Fault Tolerance, etc. The management interface integrates with the legacy (and thoroughly deprecated) Windows vSphere client, providing the ability to control caching per VM as well as to gather & export statistics from hosts, datastores, and virtual machines. It adopts the permission model of vCenter, so users can be granted granular rights to view and alter settings. A nice touch.
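The pre-warming idea is worth a sketch of its own, because the clever part is that no cached data has to cross the network: both hosts already see the same shared datastore, so the source only needs to ship metadata about which blocks are hot. Again, this is my hypothetical illustration (building on the ReadCache sketch above), not Proximal Data’s implementation.

```python
def hot_block_ids(source_cache, limit=10_000):
    """Hottest blocks first: walk the source LRU from most recently used."""
    return list(reversed(source_cache.lru))[:limit]

def prewarm(dest_cache, block_ids, fetch_from_datastore):
    """Populate the destination's cache from shared storage before cutover."""
    for block in block_ids:
        if block not in dest_cache.lru:
            dest_cache.lru[block] = fetch_from_datastore(block)
            if len(dest_cache.lru) > dest_cache.capacity:
                dest_cache.lru.popitem(last=False)   # evict coldest
```

During the migration, the source would send the output of hot_block_ids() to the destination, which runs prewarm() while the vMotion copies memory, so the VM lands on a warm cache instead of a cold one.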
Overall the solution appears to be a solid one, and they have a free trial available via their web site (you do have to fill out a form and talk to a salesperson, though). Their easy-to-use, agent-free, support-everything approach makes this a product worth looking at if you’re in the market for a storage caching solution.