Getting SASy, the other shared storage option

By Greg Schulz, Server and StorageIO @storageio

Serial Attached SCSI (SAS) is best known as an interface for connecting hard disk drives (HDDs) to servers and storage systems; however, it is also widely used for attaching storage systems to physical as well as virtual servers. An important storage requirement for virtual machine (VM) environments with more than one physical machine (PM) server is shared storage. SAS has become a viable interconnect for block access alongside other Storage Area Network (SAN) interfaces, including Fibre Channel (FC), Fibre Channel over Ethernet (FCoE) and iSCSI.

Storage options for servers include shared external DAS, networked SAN (iSCSI, FC and now SAS) and network attached storage (NAS, such as NFS and Windows CIFS file sharing). In some cases, storage is moving offsite, using public or private clouds and managed service provider (MSP) capabilities. It is also important to keep in mind that DAS does not have to mean dedicated internal storage; it can also mean external shared, directly accessible storage using SAS, iSCSI or Fibre Channel in a point-to-point topology.

SAS provides a cost-effective solution to meet performance, availability, capacity, energy (PACE) and economic requirements while enabling more data to be processed, moved, stored and shared in a given footprint density. Shared direct-attached and switched SAS storage solutions are being deployed in diverse environments in place of, or adjacent to, traditional enterprise protocols such as Fibre Channel (FC) and 10Gb Ethernet (10GbE) iSCSI SANs. In addition, shared and switched SAS storage solutions are being deployed for high-performance external storage in price-sensitive environments that previously relied on either dedicated direct attached storage (DAS) or 1GbE iSCSI-based solutions.

Fibre Channel has evolved to be a popular option for both server-to-storage system and storage system-to-HDD attachment. iSCSI (Internet SCSI) is another popular server-to-storage SAN connectivity option, in which the SCSI command set is mapped onto the TCP/IP protocol running over Ethernet networks; FCoE similarly carries Fibre Channel traffic over Ethernet. Commonly deployed server and storage I/O access scenarios include dedicated internal direct attached storage (DAS), dedicated external DAS, shared external DAS, shared external networked (SAN or NAS) storage and cloud-accessible storage. DAS is also called point-to-point, a topology in which a server attaches directly to a storage system's adapter ports using iSCSI, Fibre Channel or SAS without a switch.
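
As a rough illustration of how the same SCSI command set rides on these different transports, the short Python sketch below tabulates the encapsulation each interface uses. The mappings themselves (SSP for SAS, FCP for Fibre Channel, iSCSI PDUs over TCP/IP, FC frames carried in Ethernet for FCoE) are standard; the code and its names are just an illustrative summary, not any vendor's API.

    # Sketch: how the SCSI command set is carried by common block transports.
    # The dictionary below is illustrative only; names are hypothetical.
    TRANSPORTS = {
        "SAS":   {"carries_scsi_via": "SSP (Serial SCSI Protocol)",
                  "network": "point-to-point or switched SAS"},
        "FC":    {"carries_scsi_via": "FCP (Fibre Channel Protocol)",
                  "network": "switched Fibre Channel fabric or point-to-point"},
        "iSCSI": {"carries_scsi_via": "iSCSI PDUs over TCP/IP",
                  "network": "standard Ethernet (1GbE or 10GbE)"},
        "FCoE":  {"carries_scsi_via": "FCP inside FC frames encapsulated in Ethernet",
                  "network": "lossless (DCB) 10GbE Ethernet"},
    }

    def describe(transport: str) -> str:
        """Return a one-line summary of how a transport carries SCSI traffic."""
        t = TRANSPORTS[transport]
        return f"{transport}: SCSI via {t['carries_scsi_via']} over {t['network']}"

    if __name__ == "__main__":
        for name in TRANSPORTS:
            print(describe(name))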

The value proposition, or benefit, of Fibre Channel has been the ability to scale performance, availability, capacity and connectivity over longer distances (up to 10km natively with long-range optics), with speeds currently at 8Gb/s and 16Gb/s on the radar. Refer to chapters 5 and 6 of Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier) to learn more about optical as well as metropolitan and wide area storage networking.

A challenge of Fibre Channel has been its cost and complexity, which larger environments absorb as part of scaling but which remains a barrier for smaller environments. The benefit of iSCSI (SCSI mapped onto TCP/IP) has been the low cost of using built-in 1GbE network interface cards/chips (NICs) and standard Ethernet switches combined with iSCSI initiator software. In addition to the low cost of 1GbE-based iSCSI, other benefits include ease of use and scalability. A challenge of iSCSI is lower performance compared to faster, dedicated I/O connectivity; and when a shared Ethernet network is used, increased storage traffic can impact the performance of other applications. iSCSI can operate over 10GbE networks; however, that approach requires expensive adapter cards, new cabling, optical transceivers and switch ports, which increase the cost of a shared storage solution requiring high performance.

The decision about which type of server and storage I/O interface and topology to use is often based on cost, familiarity with available technologies and their capabilities and, in some cases, personal or organizational preferences. In the past there was a gap in terms of connectivity, or the number of servers that could be attached to a typical shared SAS or DAS storage system. This has changed with the increase in native 6Gb/s ports and the use of SAS switches to increase the fan-out (storage to servers) or fan-in (servers to storage) number of attached servers. In the past, supporting connectivity for multiple high-performance servers meant using 10GbE iSCSI or Fibre Channel; with the advent of more native SAS ports on storage systems along with switches, there are new options for system designers, architects and storage administrators. Each of the different SAN connectivity approaches can be used for many different things; however, doing so can also stretch a technology beyond its design, economic or QoS (Quality of Service) comfort zone.
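
As a back-of-the-envelope illustration of fan-in, the Python sketch below estimates how many servers can attach to a shared SAS storage system directly versus through a SAS switch. The port counts are hypothetical examples, not any specific product; adjust them for a real configuration.

    # Sketch: estimate server fan-in for direct-attached vs. switched SAS.
    # Port counts are hypothetical examples, not any specific product.

    def max_servers_direct(storage_host_ports: int, ports_per_server: int = 2) -> int:
        """Servers that can attach point-to-point, assuming dual paths per server."""
        return storage_host_ports // ports_per_server

    def max_servers_switched(switch_ports: int, storage_host_ports: int,
                             ports_per_server: int = 2) -> int:
        """Servers that can attach through a SAS switch; some switch ports
        are consumed by the uplinks to the storage system itself."""
        free_ports = switch_ports - storage_host_ports
        return max(free_ports // ports_per_server, 0)

    if __name__ == "__main__":
        # Example: a storage system with 8 SAS host ports and a 36-port switch.
        print("Direct-attached:", max_servers_direct(8))     # -> 4 servers
        print("Switched:", max_servers_switched(36, 8))      # -> 14 servers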

The availability of 6Gb/s shared and switched SAS storage solutions gives system designers, architects and IT administrators an option for significantly boosting performance over 1GbE-based iSCSI without the complexity or cost of more expensive 8Gb/s Fibre Channel, 10GbE iSCSI or FCoE. Beyond the current 6Gb/s generation, the SAS roadmap includes a next-generation 12Gb/s speed with backward compatibility to protect investment in current 6Gb/s and earlier 3Gb/s solutions. 6Gb/s shared and switched SAS connectivity is a viable option for entry-level, SMB, workgroup and departmental environments, as well as for cloud, virtual and other storage networking environments.
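
To put those speed claims in perspective, here is a small worked comparison in Python. It assumes 8b/10b line encoding on SAS links (so usable throughput is roughly 80% of the raw line rate), treats Ethernet figures as nominal data rates, and ignores protocol, TCP/IP and controller overhead, so the numbers are rough upper bounds rather than benchmark results. The x4 wide-port line reflects standard SAS lane aggregation.

    # Sketch: rough usable bandwidth of common storage interconnects.
    # Assumes 8b/10b encoding on SAS links (usable = 80% of the line rate);
    # Ethernet figures use the nominal data rate. Protocol, TCP/IP and
    # controller overhead are ignored, so these are upper bounds only.

    def sas_lane_mbps(line_rate_gbps: float) -> float:
        """Usable MB/s of one SAS lane at a given line rate (8b/10b encoded)."""
        return line_rate_gbps * 0.8 * 1000 / 8

    def ethernet_mbps(data_rate_gbps: float) -> float:
        """Nominal MB/s of an Ethernet link at a given data rate."""
        return data_rate_gbps * 1000 / 8

    if __name__ == "__main__":
        print("1GbE iSCSI        :", ethernet_mbps(1), "MB/s")      # 125.0
        print("10GbE iSCSI/FCoE  :", ethernet_mbps(10), "MB/s")     # 1250.0
        print("6Gb/s SAS, 1 lane :", sas_lane_mbps(6), "MB/s")      # 600.0
        print("6Gb/s SAS, x4 port:", 4 * sas_lane_mbps(6), "MB/s")  # 2400.0
        print("12Gb/s SAS, 1 lane:", sas_lane_mbps(12), "MB/s")     # 1200.0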
