The data center is changing, and once again the question of hardware format comes to mind. It is an open secret that I am not a fan of the blade format. Yes, it offers flexibility, but that flexibility comes at a cost: namely, a loss of density.

True, the blade format has been manna from heaven for virtualization. The hardware standardization that the format brought about has enabled a massive move forward in computing. But isn’t it now time to revisit the paradigm? “Why?” you may ask, thinking “hardware is not important; this is the age of the software-defined data center.” Let’s leave that statement pinned on the wall until later.

Before we delve into the question, let’s look at a few definitions for “blade,” “SLED,” and “rack.”

  • Blade: Shared everything (power, network outputs, storage outputs)
  • SLED: Shared something (shared power; discrete network and storage outputs)
  • Rack: Shared nothing (discrete power, network, and storage)

These are simplistic definitions, I know, but adequate ones for my proposition in this article.

With the rise of the hyperconverged paradigm, it may be time to revisit the form factor of the base compute unit. But first, a history lesson (I say “history,” but I am in fact only talking about the last twenty years or so):

Several form-factor iterations have taken place since the mid-1990s, when the original 19-inch rack machine format took over from the mainframes, minis, and tower x86 servers of the previous compute paradigm. At the time, x86 compute was a bastion of singularity: everything was in one box (storage, memory, CPU, and network), just as it was with its mainframe and minicomputer cousins, and the application logic was self-contained as well. This was not the age of distributed computing. The format worked well until the rise of the storage array.

In the late 1990s and early noughties, the storage array allowed the (at the time) very expensive spinning rust to be pooled and shared across multiple machines. This pooling of storage drove the first major change in machine format: the rise of the 1U and 2U servers in their bid to take over the world. These pizza-box servers allowed a massive increase in compute density in the server room (yes, it was called a server room then). The shift was driven in part by the rise of three-tier architecture, which split the client, application, and data layers, and in part by the build-out that has since been labelled the “dotcom bubble.”

Also in the late 1990s, a new compute paradigm was being developed by a tiny company named RLX Technologies: what would become blade computing, driven by the desire for even greater compute density. A standard 42U rack can hold a maximum of forty-two 1U pizza-box servers, assuming an end-of-row network layer, or fewer if a top-of-rack network and storage access layer eats into the rack. Blade servers increased a rack’s compute density by effectively turning the problem on its side. When RLX Technologies first introduced the blade format in 2000, 3U of a 42U rack could suddenly support twelve dual-CPU servers, boosting density to 168 servers per rack, a fourfold increase. Even through the post-millennium crash, people flocked to the format in droves. Further driven by the rapid adoption of virtualization in the data center, the blade format is currently the undisputed champion of the world.
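A quick back-of-the-envelope check of those numbers (a sketch only; it assumes a full 42U rack with the network layer living outside the rack):

    # Rack density, mid-1990s pizza boxes vs. the first RLX-style blades
    RACK_UNITS = 42

    pizza_boxes = RACK_UNITS // 1        # 42 x 1U servers per rack
    chassis = RACK_UNITS // 3            # 14 x 3U blade chassis per rack
    blades = chassis * 12                # 12 blades per chassis -> 168 servers

    print(pizza_boxes, blades, blades / pizza_boxes)   # 42 168 4.0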

You could now argue that we are on the cusp of a new compute paradigm change. The term “hyperconvergence” has seeped into the lexicon. This is a compute format in which IO, both network and storage, is local. It has led to the introduction of a new format for servers: a sort of halfway house between the traditional rackmount server and the blade format. Dell has termed this compute format a “SLED.” Here, the IO is decoupled from the chassis and moved back into the discrete compute appliance. In addition, storage is starting to drift back from the arrays to the local device, driven in part by the adoption of virtual storage appliances. These take local storage from multiple machines, pool it together, and present it back as an iSCSI or NFS target. You get all the benefits of an array at much-reduced cost and complexity. Layering a flash/RAM accelerator such as Atlantis USX on top can accelerate the IO to the point that it actually outperforms a low- to mid-level array.
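Conceptually, a virtual storage appliance works something like the sketch below. This is a deliberately simplified model; the class and function names are my own invention, not Atlantis USX’s (or any other vendor’s) actual API:

    # Hypothetical sketch of what a virtual storage appliance does:
    # every host donates its local disks to a shared pool, and the
    # pool is presented back to the cluster as a single target.

    class Host:
        def __init__(self, name, local_disks_gb):
            self.name = name
            self.local_disks_gb = local_disks_gb   # sizes of local drives

    def pool_capacity_gb(hosts):
        """Aggregate every host's local disks into one capacity figure."""
        return sum(sum(host.local_disks_gb) for host in hosts)

    cluster = [
        Host("node1", [800, 800]),
        Host("node2", [800, 800]),
        Host("node3", [800, 800]),
    ]

    # A real VSA would export this pool as an iSCSI or NFS target;
    # here we simply report what would be presented.
    print(f"Pooled capacity: {pool_capacity_gb(cluster)} GB")   # 4800 GB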

Companies like Nutanix and SimpliVity are driving this change. But is this the new dominant format? The jury is still out. The SLED format does transcend the major limitation of the blade format, namely expandability. Further, as blade technology has improved, density has actually dropped: where we once had twelve blades in a 3U chassis, we now have sixteen in an 11U chassis. That said, the reduction in density matters less than it once would have, because processor and memory technology have improved so much that virtual machine density per host is at numbers believed impossible not long ago, and a single virtual machine can now be configured to rival almost any physical machine.
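Put as servers per rack unit, the drop is stark (using the chassis figures quoted above):

    # Servers per rack unit, early blades vs. a current chassis
    early_density = 12 / 3      # 4.0 servers per U
    current_density = 16 / 11   # ~1.45 servers per U
    print(early_density, round(current_density, 2))   # 4.0 1.45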

This is what is driving the move to SLEDs: the virtual machines on each host now generate so much storage and network IO that they are starting to saturate the shared IO capacity of a blade chassis.

Personally, I do not think that everybody is about to throw out their investment in blades. Walk into any data center in the world and you will still see rack-mounted machines. The SLED format will certainly start to appear in the wild in an increasing number of data centers, but racks and blades still have their place for the foreseeable future.