Dell Fluid Cache for SAN


Back in mid-2011, Dell acquired RNA Networks, a small startup out of Portland, Oregon. At the time of the purchase, RNA had a product, MVX, that offered three different ways to pool memory across multiple servers to accelerate workloads; one of them pooled system RAM as a storage cache to speed disk access. In the spring of 2013, we saw some of these features emerge again as Dell’s Fluid Cache for DAS (direct-attach storage) morphed to use the incredible speed of PCIe-based SSDs instead of RAM. Now, in late 2013 at Dell World, we finally get what many of us have been waiting for: the announcement of Dell Fluid Cache for SAN.

As readers here have learned recently in our ongoing series on caching, SSDs and flash memory can be used in many different ways and places in the enterprise IT stack. In its simplest and perhaps most decadent form, flash can be used as primary storage. Per gigabyte, though, flash is far more expensive than traditional rotational media. Given that most data on disk is “cold” and will not benefit from flash’s insane I/O capabilities, this approach is fairly wasteful. As such, flash memory has been incorporated primarily as a caching layer, or sometimes as a block-level tier. As a cache, it is most often a read cache, because that approach is easy and safe to implement: just cache “hot” blocks, and if something happens to the cache, you lose performance but not data.
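To make that concrete, here is a minimal read-through cache sketch in Python (purely illustrative; the class and names are mine, not Dell’s). The structural point is that every cached block also lives on the backing store, so losing the cache costs only speed, never data.

```python
# Minimal read-through block cache (illustrative sketch, not any vendor's code).
class ReadCache:
    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store          # dict-like: block_id -> bytes
        self.capacity = capacity_blocks
        self.cache = {}                       # stands in for the flash device

    def read(self, block_id):
        if block_id in self.cache:            # cache hit: fast path
            return self.cache[block_id]
        data = self.backing[block_id]         # cache miss: read the slow disk
        if len(self.cache) >= self.capacity:  # naive eviction: drop any block
            self.cache.pop(next(iter(self.cache)))
        self.cache[block_id] = data           # populate for future reads
        return data
```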

Write caches, then, are the holy grail of flash memory caching. They come with problems, though. First, you need a lot of redundancy, because losing data is a nightmare for anybody, and that means mirroring writes across multiple caches on multiple servers. Second, you have to honor the order of writes so the data on disk looks the way it was intended to look. Last, you have to detect and throttle workloads that insist on writing data faster than the cache can be flushed. PernixData’s FVP achieves all of this for VMware environments, using local SSDs and the vMotion network to do read and write caching with redundancy. But even PernixData’s product has a giant flaw: the data on your array is no longer consistent, meaning that array-level operations like snapshots, cloning, and replication will not yield a usable copy of your data. This approach also interferes with array-based tiering, since the array no longer has good information about which blocks are hot. These are serious problems for enterprises that aren’t ready to redesign the way they do their storage.
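Here is a sketch of those three obligations in Python, with hypothetical stand-ins for the peer cache and the array (none of this is PernixData’s or Dell’s actual code): acknowledge a write only after it is mirrored, flush dirty blocks in arrival order, and throttle when dirty data outruns the flusher.

```python
from collections import deque

class Peer:
    """Stand-in for a redundant cache on another server."""
    def __init__(self):
        self.copies = {}
    def mirror(self, block_id, data):
        self.copies[block_id] = data
    def discard(self, block_id):
        self.copies.pop(block_id, None)

class Array:
    """Stand-in for the backing SAN array."""
    def __init__(self):
        self.blocks = {}
    def write(self, block_id, data):
        self.blocks[block_id] = data

class WriteBackCache:
    def __init__(self, peer, array, max_dirty=1024):
        self.peer = peer
        self.array = array
        self.dirty = deque()        # FIFO preserves write ordering
        self.max_dirty = max_dirty  # throttling threshold

    def write(self, block_id, data):
        while len(self.dirty) >= self.max_dirty:
            self.flush_one()                  # throttle: drain before accepting more
        self.peer.mirror(block_id, data)      # redundancy before acknowledging
        self.dirty.append((block_id, data))   # acknowledged, in arrival order
        return "ack"

    def flush_one(self):
        block_id, data = self.dirty.popleft() # oldest write goes to disk first
        self.array.write(block_id, data)
        self.peer.discard(block_id)           # mirror copy no longer needed
```

A real implementation also has to survive a server failing mid-flush, which is exactly where that mirrored copy earns its keep.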

That’s where Fluid Cache for SAN comes in, at least for Compellent customers (for now). This solution uses local PCI Express flash devices in the servers to do read and write caching, and a dedicated 10 or 40 Gbps low-latency network between the hosts to mirror writes and access caches on adjacent servers. The big twist is that the Compellent SC8000 array is connected to that network, too, and participates in the caching. Because the array is aware of the cache, operations like replication, snapshots, and clones remain valid, consistent, and coherent, and Compellent’s main claim to fame, automated tiering, continues to work correctly. The dedicated cache network also mitigates other write-caching problems, providing enough bandwidth to flush the cache faster than writes arrive and avoiding complicated performance issues and the need for throttling.
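A toy example of why that array awareness matters (hypothetical names, not Dell’s API): a snapshot taken by an array that knows nothing about server-side caches misses whatever is still dirty in them, while a coordinated snapshot drains the dirty blocks first.

```python
# Illustrative only: "array_blocks" is the array's on-disk state, and each
# entry in "host_dirty_maps" is one server's unflushed write-back data.

def snapshot_unaware(array_blocks):
    # Array-only snapshot: dirty blocks still held in host caches are missing,
    # so the copy may not be a usable image of what applications wrote.
    return dict(array_blocks)

def snapshot_coordinated(array_blocks, host_dirty_maps):
    # Cache-aware snapshot: drain every host's dirty blocks to the array
    # first, then copy. The result is consistent with the application's view.
    for dirty in host_dirty_maps:
        array_blocks.update(dirty)
        dirty.clear()
    return dict(array_blocks)
```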

The net effect is SAN storage as it has traditionally worked, except much, much faster.

How much faster? At Dell World 2013, a demonstration during the keynote showed it reaching more than five million I/Os per second. Of course, that’s a completely useless synthetic benchmark, 100% read I/O using 512-byte blocks, built to show up competitors that benchmark with equally useless I/O. That benchmark also avoids virtualized environments, using natively installed OSes with native OS drivers. In a virtualized environment, such as VMware vSphere, the cache is implemented both as a hypervisor storage filter driver and as a virtual appliance on each host. The use of a virtual appliance adds latency, potential for resource competition with the workloads themselves, and overall complexity.
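Some quick math shows why tiny-block benchmarks flatter the headline figure: five million 512-byte reads per second is an enormous IOPS number but only about 20 Gbps of bandwidth.

```python
# Back-of-the-envelope throughput for the keynote benchmark.
iops = 5_000_000
block_bytes = 512
throughput_gbps = iops * block_bytes * 8 / 1e9  # bytes/s converted to gigabits/s
print(f"{throughput_gbps:.1f} Gbps")            # prints 20.5 Gbps
```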

Complexity is the biggest issue I have with this solution. There are some stringent guidelines for using Fluid Cache, starting with the server configurations. PCI Express flash isn’t an option that can be retrofitted into Dell’s 12G PowerEdge server lineup (you have to order the option from the factory), though Dell has certified the Micron P420M SSD card as an alternative. You also need Compellent SC8000 controllers set up in a particular way, a completely parallel set of low-latency network switches (like the more expensive, cut-through S6000s), and dedicated server-side NICs for the cache traffic. The solution is licensed per server node, but “expensive” is in the eye of the beholder: compared to the cost of adding flash to an array directly, or of re-architecting an enterprise to use VM-level replication, snapshots, and so on, this might be a relatively inexpensive way to go. Overall, if you are a Compellent customer, or looking to become one, this appears to be a solid way to procure an immense amount of I/O capacity for your environment.

Dell Fluid Cache for SAN is expected to ship sometime in the first half of 2014.