In the world of virtualization storage it seems all we talk about lately is flash and SSD, and there's a good reason for that. Traditionally, storage capacity and storage performance were directly linked. Sure, you could choose different disk capacities, but in general you had to add capacity in order to add performance, because each disk, each "spindle," could only support a certain number of I/Os per second, or IOPS. This was governed by the mechanical nature of the drives themselves, which had to wait for the actuator arm to move to a different place on the disk, wait for the arm to settle after the move, wait for the desired sector to rotate underneath the read/write head, and so on. There's only so much of that a drive can do in a second, and to do more of it you needed to add more drives. Of course that has drawbacks: increased power draw, more parts and therefore more chances of failure, and increased licensing costs, since many storage vendors charge based on capacity.
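The per-spindle ceiling is easy to see with some back-of-the-envelope math. Here's a quick sketch, using illustrative numbers (not specs for any particular drive): average seek time plus average rotational latency gives you a per-I/O service time, and the inverse of that is your random IOPS.

```python
# Rough, back-of-the-envelope IOPS estimate for a single spinning disk.
# The seek times and RPM values below are illustrative assumptions.

def spindle_iops(avg_seek_ms, rpm):
    """Estimate random IOPS from average seek time and rotational latency."""
    # On average, the desired sector is half a revolution away.
    rotational_latency_ms = (60_000 / rpm) / 2
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000 / service_time_ms

# A typical 7,200 RPM SATA drive with ~8 ms average seek:
print(round(spindle_iops(8, 7200)))     # on the order of 80 IOPS
# A 15,000 RPM SAS drive with ~3.5 ms average seek:
print(round(spindle_iops(3.5, 15000)))  # on the order of 180 IOPS
```

So if your workload needs, say, 8,000 random IOPS, you're looking at dozens of spindles regardless of how much capacity you actually need, which is exactly the coupling flash breaks.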
Flash memory takes most of what we know about the physics of storage and throws it away. Because there are no moving parts, the act of seeking on a solid state disk is a completely logical one. There are no heads, no sectors, no rotation speeds. It’s all the speed of light and however fast the controller can go. As such, flash memory can do enormous numbers of IOPS, and if implemented well, it decouples storage performance from storage capacity. You save power, you save data center space, you save money in licensing fees, and your workloads run faster.
In early 2012 SanDisk, well-known manufacturer of flash memory products, acquired FlashSoft. Like a number of other companies in the virtualization storage space, FlashSoft has several different products designed to use SSD and flash memory to cache storage I/O. They have specific products for Microsoft Windows Server and Red Hat Enterprise Linux (and clones). They also have a product for VMware environments, SanDisk FlashSoft 3.1 for VMware vSphere. Their VMware solution is a hypervisor-level one, meaning that it doesn’t require guest agents installed in the virtual machines themselves. It also doesn’t require changes to virtual machine hardware, unlike VMware’s vFlash Read Cache, and it doesn’t add dependency loops and resource contention issues like virtual appliance-based solutions. It plugs in, turns on, and makes things faster.
SanDisk's FlashSoft Windows and Linux products can do both read and write caching when run on single nodes. Introduce a cluster, though, and you are stuck with read caching only. If you've been reading my other posts in this series on flash caching, you'll know I'm a big fan of write caching, a problem that only PernixData FVP has managed to crack without enormous amounts of additional complexity. Write caching is difficult. If you botch a read from cache you can always go get the original data from the source, but if you botch a write you now have corrupt data. Not good. Write caching also means that the data on the array may not be consistent, so array-based snapshots and array-based replication are no longer options. And write caching is really only good for "bursty" workloads anyhow. If you need to write a lot of data every 10 minutes, a write cache might be a good choice because it helps you spread the I/O out. If you need to write a lot of data constantly, a write cache isn't going to help, because eventually you will fill the cache and need to destage that data. When that happens you're once again at the mercy of the write speed of your array. Good caching software can mitigate some of these issues, but sometimes you just need raw performance.
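The bursty-versus-sustained distinction comes down to simple arithmetic: if sustained ingest exceeds the rate at which the array can destage, the cache fills at the difference between the two, and once it's full you run at array speed. A minimal sketch, with hypothetical numbers:

```python
# Sketch: why a write cache absorbs bursts but can't fix a sustained
# write workload. All rates and sizes are hypothetical assumptions.

def seconds_until_cache_fills(cache_gb, ingest_mb_s, destage_mb_s):
    """Time until a write cache fills when ingest outpaces destaging.
    Returns None if the array keeps up and the cache never fills."""
    net_fill_mb_s = ingest_mb_s - destage_mb_s
    if net_fill_mb_s <= 0:
        return None  # destaging keeps pace; bursts are absorbed indefinitely
    return cache_gb * 1024 / net_fill_mb_s

# Bursty case: 500 MB/s ingest, array destages 600 MB/s -> never fills.
print(seconds_until_cache_fills(400, 500, 600))  # None
# Sustained case: 500 MB/s ingest, array destages only 200 MB/s ->
# a 400 GB cache fills in about 23 minutes, then you're at array speed.
print(seconds_until_cache_fills(400, 500, 200))
```

In other words, the cache buys you time, not throughput; sustained throughput is still bounded by the array.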
It's notable that SanDisk FlashSoft 3.1 for VMware vSphere can work with whatever SSD or flash you might have, though given that SanDisk is a flash vendor, you can expect a certain response if you ask them for a recommendation. It does require SSD; unlike products such as the Infinio Accelerator, it cannot use RAM alone to do its work. That often isn't a huge deal, but if you are using blades it might drive some design decisions, since blade servers have a limited number of disk bays that you might already be using. You can have up to 2 TB of cache per host, though I'd expect that at some point you hit diminishing returns.
Their management interface, like that of many vendors, is a plugin for the classic vCenter Windows client, and we can expect changes there as the VMware vSphere Web Client takes hold, along with the vCenter Server Appliance, where there's no usable local operating system to install vCenter plugins on. The management interface has an increasing emphasis on statistics, since much of the cache's operation is automated. They support all the native VMware features, like HA, vMotion, Storage vMotion, and Fault Tolerance, which is something even VMware cannot say about all of its own products. Overall, FlashSoft for VMware vSphere is a great product, and with competitive pricing, a hardware-agnostic approach, and the backing of a company with a distinguished flash pedigree, it's a real contender in the storage acceleration market.
You might have mixed up the FVP features with FlashSoft, because the latter supports NFS storage as well as block storage. That’s probably one of the reasons why NetApp has validated FlashSoft for Data ONTAP.
Thanks Joeri for pointing out that FlashSoft does indeed support NFS storage as well as block storage.