By Greg Schulz, Server and StorageIO @storageio
Part I of this series laid out the basics of NAND flash Solid State Devices (SSD), with Part II discussing endurance and performance. Part III looked at SSD options for virtual servers, VDI (virtual desktop infrastructure) and storage for physical server environments, along with usage and configuration criteria. Let us build on those and continue looking at which SSD to use for different environments.
For write endurance, today that means single level cell (SLC) flash combined with a robust flash translation layer (FTL) implementation combining software, firmware and hardware for write-intensive scenarios requiring a long duty cycle. Granted, SLC is more expensive per GByte while multi-level cell (MLC) provides more capacity at a given density; however, look at the total value. For example, if SLC has a higher price per GB and your requirement is to use that technology for as long as possible, you will get more program/erase (P/E) cycles with SLC than with MLC. If endurance is a priority, then, compare on a cost per P/E cycle basis: what do the P/E cycles you actually need cost? That might mean using an SLC device longer, or using a lower-cost MLC device and planning to replace it sooner. However, as noted in previous posts, look beyond the chips or MLC and SLC dies, also considering how they are combined with the FTL, software and firmware as part of a solution.
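To make that comparison concrete, here is a minimal sketch of the cost per P/E cycle arithmetic; the prices, capacities and endurance ratings are illustrative assumptions rather than figures for any particular product.

```python
# Rough cost-per-P/E comparison for SSD media; all figures below are
# illustrative assumptions, not vendor specifications.

def cost_per_gb_written(price_usd, capacity_gb, pe_cycles):
    """Approximate cost per GB of write endurance.

    price_usd: purchase price of the device
    capacity_gb: usable capacity in GB
    pe_cycles: rated program/erase cycles for the media
    """
    total_write_capacity_gb = capacity_gb * pe_cycles
    return price_usd / total_write_capacity_gb

# Hypothetical SLC and MLC devices of equal capacity, for comparison only.
slc = cost_per_gb_written(price_usd=2000.0, capacity_gb=200, pe_cycles=100_000)
mlc = cost_per_gb_written(price_usd=400.0, capacity_gb=200, pe_cycles=5_000)

print(f"SLC: ${slc:.6f} per GB written")   # lower, despite the higher price tag
print(f"MLC: ${mlc:.6f} per GB written")
# Even at 5x the purchase price, the SLC device costs less per GB written
# if the workload will actually consume those P/E cycles before the device
# is retired for other reasons.
```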
In the future, if not already today, there are robust MLC-based solutions in or entering the market that have improved FTLs, hardware, software and firmware to reduce P/E wear and tear on the media, improving endurance and overall reliability. Some solutions are, or will be, labeled eMLC or Enterprise MLC as a hybrid approach. IMHO, these play a role similar to that of enterprise high-capacity SATA HDDs (not to be confused with eSATA, or external SATA) versus desktop SATA HDDs. Both enterprise and desktop SATA HDDs have high capacity; however, their duty cycles and characteristics differ, as do their costs.
While the focus on media has been mainly around NAND flash for SSD, let us not forget the role of DDR DRAM. DRAM can be found on PCIe flash SSD cards as a read/write buffer or cache to help with wear leveling and other management tasks. DRAM is used in physical servers and controllers for device drivers and their buffers or data structures. DRAM is also found in SSD appliances, used for software, buffers, data structures and other tasks, as is also the case in storage systems. Even in Hybrid Hard Disk Drives (HHDD) like the Seagate Momentus XT, which I have in my laptops and some servers and which include SLC flash, there is also DRAM working in a complementary manner.
Moving on from the media, what type of SSD to use, when and where, will vary with several factors; however, they can be simplified down to the following questions:
- Are you looking for an extension of physical server DRAM?
- Are you looking for a cache (read or read/write) to complement or enhance underlying internal dedicated, or external shared storage?
- Are you looking for a dedicated internal storage and IO medium to replace HDDs?
- What IO, performance, storage, server, application, database or other problem are you looking to address?
- Do you need high availability with the ability to fail over or move VMs from one PM (physical machine) to another?
- How large is the data footprint of the application that needs more performance?
- What are your growth plans or requirements?
- Do you need to support more IOPS and transactions, or more bandwidth and throughput (see the sketch after this list)?
- What is the size of the IOs or transactions being performed, for comparison purposes?
- What are the read/write characteristics of your applications?
- How many PMs or physical servers need to get some benefit from having SSD?
- How will you move data or applications around to derive benefit from the SSD?
The above are just a few questions that should drive the conversation about what type of SSD solution you need for a given scenario.
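Since several of these questions hinge on IOPS versus bandwidth and IO size, here is a minimal sketch of the arithmetic that ties them together; the workload figures are made-up assumptions for illustration.

```python
# Relate IOPS, IO size and bandwidth so different workloads can be compared
# on a consistent basis. The workload figures are assumptions.

def bandwidth_mbps(iops, io_size_kb):
    """Approximate bandwidth in MB/s for a given IOPS rate and IO size."""
    return iops * io_size_kb / 1024.0

# Hypothetical workloads: a transactional database doing small random IOs
# and a backup or streaming workload doing large sequential IOs.
oltp = bandwidth_mbps(iops=20_000, io_size_kb=8)
streaming = bandwidth_mbps(iops=500, io_size_kb=1024)

print(f"OLTP-style:      {oltp:.0f} MB/s from 20,000 x 8KB IOPS")
print(f"Streaming-style: {streaming:.0f} MB/s from 500 x 1MB IOPS")
# A device with impressive sequential bandwidth can still fall short on
# small random IOPS (and vice versa), so size for the IO profile you have.
```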
In some cases, it will be a PCIe card with SLC flash configured as a read or read/write cache to complement and enhance your underlying iSCSI, SAS, FC or FCoE shared block storage, such as EMC VFCache or offerings from LSI and Micron, among others.
In other cases, it will be a PCIe card configured as an internal dedicated flash SSD target in a server, using application- or server-based replication to another server equipped with an SSD card for HA (for example, FusionIO, LSI, Micron, TMS and others).
Other scenarios will involve PCIe cards, like those mentioned above among others, installed in either a traditional Cisco, Dell, HP, IBM, SuperMicro or Oracle server configured as an appliance for hosting VMs or other applications, or simply deployed as a traditional physical machine. Besides using PCIe cards, those same servers or appliances can also be configured with SAS or SATA 2.5” or 3.5” SSDs installed in drive slots as individual drives using software-based RAID, or attached to a PCIe RAID card.
Yet another option is to place SAS or SATA 2.5” or 3.5” SSDs (MLC or SLC) into the drive slots of storage systems and enclosures for use as a target, or, as EMC has done, as an optional cache on the CLARiiON and VNX product lines. Some storage systems and appliances combine different SSD technologies, including a mix of drive form factor SSDs as targets managed with tiering tools and software, along with PCIe cache cards or special form factor SSD DIMMs. For example, Oracle uses various SSD packaging approaches in their 7000 series, as do others including NetApp. NetApp, as an example, supports a PCIe flash card as a Performance Accelerator Module (PAM) along with drive form factor SSDs. The above, and solutions from other storage system and appliance vendors, can also be complemented by host or server side caching.
Which type of approach and which specific product to use will come down to the problem being solved or opportunity being presented, preferences for packaging approaches or vendors, physical server PCIe expansion slot space, budget and comfort level, among other factors.
If you are not sure which SSD approach, packaging, or product is best for your environment, drop a note or comment here and I will provide perspective and suggestions in addition to those I am guessing vendors and VARs will offer.
Here are some related SSD links to learn more in addition to the previous posts in this series.
I think it will be interesting when consumer-level hardware supports some of the enterprise features we enjoy now, such as putting a pair of SSDs into a small NAS enclosure to handle IO in front of SATA drives. That aside, I did want to point out that not all flash technology works the same, as with the NetApp example you cited (PAM aka Flash Cache) being a read-only cache.
Chris, good point that not all SSD is the same, even when it comes to a particular form of media such as flash, or SLC vs. MLC vs. eMLC, let alone packaging. Thus, not all PCIe SLC flash cards are the same: some are targets, some are cache, and some are read-only while others are read/write (write-through or write-back). As you point out, NetApp has a read cache with PAM as well as supporting SSD drive form factor devices as read/write targets, along with a data cache/promote/demote strategy to use the different tiers/mediums (as do others).
So as you point out, not all packaging or forms of SSD are the same, and they can be used for different situations, some in conjunction with others.
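For readers not familiar with that cache distinction, here is a minimal sketch of a toy block cache (not any vendor's implementation) contrasting write-through and write-back behavior.

```python
# Toy illustration of write-through vs. write-back caching; a conceptual
# sketch only, not how any particular SSD cache product is implemented.

class BlockCache:
    def __init__(self, write_back=False):
        self.write_back = write_back
        self.cache = {}      # block id -> data held on fast media (e.g. flash)
        self.dirty = set()   # blocks changed in cache but not yet destaged
        self.backend = {}    # slower backing store (e.g. HDD-based array)

    def read(self, block):
        # Serve from cache when possible, otherwise promote from the backend.
        if block not in self.cache:
            self.cache[block] = self.backend.get(block)
        return self.cache[block]

    def write(self, block, data):
        self.cache[block] = data
        if self.write_back:
            # Write-back: acknowledge once the cache has the data, destage later.
            self.dirty.add(block)
        else:
            # Write-through: the backend is updated before acknowledging.
            self.backend[block] = data

    def flush(self):
        # Destage dirty blocks; a write-back cache needs protection (battery,
        # capacitors, mirroring) so this data survives a power failure.
        for block in self.dirty:
            self.backend[block] = self.cache[block]
        self.dirty.clear()
```

A read-only cache such as PAM simply skips the write path and only ever populates itself on reads.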
Here is a link to the first of a two part post series I did on SSD and storage systems:
Why SSD based arrays and storage appliances can be a good idea (Part I)
http://storageioblog.com/?p=2823
Btw, what’s to stop you from putting a couple of SSDs into a consumer/SOHO/SMB NAS such as an Iomega IX4 or Synology, among others? I know what is keeping me from doing it right now, which is price ;)…
Otoh, some of the NAS devices do have requirements for intelligent power management of drives, either being able to slow down, spin down, or turn off read/write channels, which precludes using some types of devices. For example, I was exploring using some HHDDs (e.g. Seagate Momentus XTs that combine SLC flash with a HDD); however, some of the NAS devices want to be able to power those down.
Hope all is well, cheers gs
I have put a pair of Intel SSDs and SATA drives in my Synology DS411, but it’s just not the same as with, say, a Tintri appliance. My NAS is two distinct volumes (small and fast, or big and slow).
If consumer grade SSD hits 1TB in the $200 range, then I suppose it won’t matter. 🙂
Chris, I like my data too much to put it on a $200 1TB MLC consumer flash SSD ;)…
So are you saying you use Tintri for a general purpose NAS file server?
I can see some scenarios where Tintri can make sense for hosting VMs and their data in lieu of a traditional server with SSDs or HDDs; however, as a general purpose NAS server similar to a Synology, wow, I would like to have your budget ;)…
Now on the other hand, get yourself a 1U or 2U whitebox server that accepts a PCIe RAID card with a BBU-protected read/write cache (e.g. DRAM), attach some SSDs, HDDs or HHDDs to it, and take it for a spin…
We will be building a new server with Windows Server 2012 and SQL 2012. Our plan is to build the base server with WS2012 and then virtualize 4 servers. They will be for 1) SQL 2012, 2) Active Directory, 3 & 4) Exchange 2010 (and later 2013) in both a hub and edge configuration. Aside from e-mail, we have a significant dependence on SQL, with 2 primary DBs running about 5GB each. We are relatively small with close to 20 users, but may continue to grow. Still, our DB has about 9 years' worth of data that we routinely process through and run reports from. Finally, we need approximately 200-300GB of storage space for our document management service and file storage. Add to that a recent change in business operations involving e-mail: we are anticipating an increase in e-mail activity of at least 100%, coupled with an increase in attachments across all e-mail averaging 75MB of data per day, if not more.
The server we are looking at is the Dell T620. We will do dual processors (8 cores each) and are planning to use 96GB of memory (1600MHz RDIMMs) at this time. Originally, I was adamant about using SSD, but the cost of enterprise-grade SSD was problematic. So I began looking at some of the high-end consumer grade/low-end enterprise drives (Intel 520), which brought the cost closer to something like the Savvio 15K SAS HDD by Seagate.
So with this in mind, I would welcome any input you could provide on whether we can (or should) use the SSDs in a RAID 6 configuration for our base server, which will be running the 4 virtualized servers. As noted above, we would likely go for something like the Intel 520s if we did SSDs, as the price is comparable to the SAS drives I mentioned above. Alternatively, should we NOT consider SSDs for the primary RAID, and if so, is there a more prudent, effective strategy/practice for using SSDs in this setup with a virtualized environment? I noted one reference above to a pair of SSDs in a RAID 0 sitting in front of the SATA (or for us, SAS) drives. Candidly, because we are going to virtualize the 4 servers we are not sure what is the best thing given some of the newer, exciting technologies. Also, FWIW, we can always increase our server memory if need be; an additional 96GB would cost us about $800, so that is another option if it is a better move with the virtualization.
Finally, we are anticipating a sizeable
Thanks in advance for any help you can provide!
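As a rough way to frame the capacity side of the question above, here is a minimal sketch of the arithmetic using the numbers given; the VM OS, Exchange and drive figures are labeled guesses, not recommendations for specific products.

```python
# Back-of-the-envelope capacity check for the scenario described above.
# Inputs either come from the question or are clearly labeled guesses.

def raid6_usable_gb(drive_count, drive_size_gb):
    """RAID 6 reserves two drives' worth of capacity for parity."""
    return (drive_count - 2) * drive_size_gb

# Figures stated in the question.
sql_dbs_gb = 2 * 5                       # two primary databases, ~5GB each
documents_gb = 300                       # document management + file storage (upper end)
email_growth_gb_per_year = 0.075 * 250   # ~75MB/day of attachments, ~250 business days

# Guesses for everything else (VM OS volumes, Exchange stores, logs, headroom).
vm_os_and_overhead_gb = 4 * 60
exchange_stores_gb = 150

needed_now_gb = sql_dbs_gb + documents_gb + vm_os_and_overhead_gb + exchange_stores_gb
print(f"Rough current need: {needed_now_gb} GB "
      f"(plus ~{email_growth_gb_per_year:.0f} GB/year of e-mail attachments)")

# A hypothetical RAID 6 set of six 480GB drives, for comparison only.
print(f"Six 480GB drives in RAID 6: {raid6_usable_gb(6, 480)} GB usable")
```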
Hello,
There are several ways to go here. First, I would use TWO servers, not just one; you want some level of redundancy. Second, I would not necessarily use SSD for the boot volumes of those virtualization hosts; there is no real need to do so. Third, if your DBs are critical I would use in-memory DBs backed by storage (but that is not a trivial combination). Fourth, you may have a security issue in that your ‘Edge Mail Server’ and your ‘Hub Mail Server’ share the same host and data; most companies, from a security perspective, will have the Edge server segregated from their internal servers and separated by firewalls (virtual or otherwise), etc. Fifth, you want SSD for your virtual machine storage ONLY if the IOPS require it.
I have a similar situation, and I went the route of a NAS/SAN device that CAN be upgraded to SSD if necessary. This will allow you to get cheap storage for your environment and migrate as SSD becomes price-competitive in the future. If you need SSD now, install some SSD and ensure only those things that need it will use it; tiered storage, in effect.
A separate NAS/SAN device provides quite a bit of capability, but if you cannot go that way then virtual storage appliances within your ‘boxes’ work well for this. Just be sure the ‘data’ disks are not used as your boot volumes.
Best regards,
Edward L. Haletky
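To put a rough number on the fifth point above ("SSD only if the IOPS require it"), here is a minimal sketch of that check; the per-drive IOPS figures, read/write mix and RAID write penalties are ballpark assumptions, and actual demand should be measured (for example with perfmon) before buying.

```python
# Rough "do the IOPS require SSD?" check. Per-drive IOPS, the read/write mix
# and the required-IOPS figure are ballpark assumptions, not measurements.

def raid_effective_iops(drives, iops_per_drive, write_fraction, write_penalty):
    """Approximate host-visible IOPS for a RAID set.

    write_penalty: backend IOs generated per host write (commonly cited as
    ~2 for RAID 10, ~4 for RAID 5 and ~6 for RAID 6).
    """
    raw = drives * iops_per_drive
    return raw / ((1 - write_fraction) + write_fraction * write_penalty)

required_iops = 1_500    # assumed peak for SQL + Exchange + AD + files, ~20 users
write_fraction = 0.3     # assumed 30% writes

hdd_15k = raid_effective_iops(drives=6, iops_per_drive=180,
                              write_fraction=write_fraction, write_penalty=6)
ssd = raid_effective_iops(drives=6, iops_per_drive=20_000,
                          write_fraction=write_fraction, write_penalty=6)

print(f"Estimated requirement:   {required_iops} IOPS")
print(f"Six 15K HDDs in RAID 6: ~{hdd_15k:.0f} IOPS")
print(f"Six SSDs in RAID 6:     ~{ssd:.0f} IOPS")
# If measured demand fits comfortably within the HDD number, SSD for the whole
# VM store is optional; if not, put only the hot data (e.g. SQL) on SSD or use
# it as a cache/tier, per the tiered storage suggestion above.
```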