A few days ago, Stevie Chambers tweeted about the evolution from mainframe to container: “Why is it a surprise that VMs will decline as things miniaturise? Mainframes → Intel → VMs → Containers, etc. Normal, I’d say.” By “Intel” here, I’m going to take Stevie to mean “rackmount servers.” I’m also going to assume that by “decline” he meant decline in importance or focus, rather than decline in raw numbers of units sold. It would be easy to argue that fewer rackmount servers have been sold in the last few years than would have been the case without virtualization, thanks to the consolidation of workloads onto fewer, more powerful boxes. It is equally arguable that virtualization has opened up options that simply would not exist without it, and so has driven more sales. Either way, Intel’s profits seem to be doing OK.

What struck me most about this tweet, though, was that from an App point of view it is accurate, but from an infrastructure point of view it is anything but. Increasingly, the servers we use resemble the old mainframes, both in how they look and in how App developers treat them. Mainframes were large, very powerful boxes shared by many applications, usually through time-sharing of the mainframe’s resources, accessed via “green screen” terminals. These machines were very powerful for their time and contained all of the CPU, memory, storage, and networking resources that were available. As computing became cheaper and, crucially, smaller in physical footprint, it became possible to dedicate a single computer to a single job. This sidestepped the central constraint of time-sharing: with only one machine to go around, jobs had to queue for their slice of it.

This move to more distributed computing lasted for a good long time, until CPU capacity grew to the point where virtualization became practical. Virtualization made it possible to eliminate the biggest issue with rackmount computers: waste. Estimates put the utilisation of a typical non-virtualized rackmount server in the early 2000s at somewhere between 5% and 30%. That kind of waste carries a huge cost, not just in capital outlay for capacity that is never used, but in the power and cooling needed to keep it all running.
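
To put a rough number on that waste, here is a back-of-the-envelope sketch; every figure in it (fleet size, utilisation levels, power draw) is an illustrative assumption, not a measurement:

```python
import math

# Back-of-the-envelope consolidation estimate. Every figure below is
# an illustrative assumption, not a measurement.
physical_servers = 100      # pre-virtualization fleet size
avg_utilisation = 0.10      # ~5-30% was typical; assume 10%
target_utilisation = 0.70   # a common post-consolidation target
watts_per_server = 400      # assumed draw per box, busy or idle

# The useful work stays the same; only the number of boxes doing it changes.
useful_capacity = physical_servers * avg_utilisation
hosts_needed = math.ceil(useful_capacity / target_utilisation)

saved = physical_servers - hosts_needed
print(f"{hosts_needed} hosts instead of {physical_servers}")
print(f"~{saved * watts_per_server / 1000:.1f} kW of power (plus its cooling) freed up")
```

Under those assumptions, 100 lightly loaded servers collapse onto 15 well-utilised hosts, which is exactly the consolidation story that made virtualization so compelling.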

The move to virtualization was an obvious one. It brought its own issues, though. To get the full benefit of virtualization, shared storage is required. Without the ability to live migrate VMs, the risk of putting many VMs on one piece of kit becomes too large to be worth it; with live migration (which requires shared storage), that risk is massively reduced. So, in moving to virtualization, we distributed the workload another step: we split the storage out, for the most part, onto SANs. The biggest limitation in doing so was the latency it added to the storage subsystem, followed closely by the limitation of bandwidth. Fibre Channel, Ethernet, and InfiniBand are all inherently higher latency and lower bandwidth than an internal PCI-style bus, not least because traffic to them has to cross that same bus in the first place.
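
To make the latency point concrete, here is a toy model comparing a local-bus read with the same read made across a fabric. The numbers are rough order-of-magnitude assumptions, not benchmarks, and the shape of the model is mine rather than anything vendor-specific:

```python
# Toy latency model: local bus vs. networked (SAN) storage access.
# All latencies are rough order-of-magnitude assumptions, not benchmarks.
LOCAL_BUS_US = 10        # assumed internal PCI-style access time
FABRIC_HOP_US = 50       # assumed per-hop fabric latency (FC/Ethernet/IB)
ARRAY_SERVICE_US = 200   # assumed service time inside the storage array

def san_read_us(hops: int = 2) -> int:
    """A SAN read still crosses the local bus to reach the HBA or NIC,
    then traverses the fabric out and back, then waits on the array."""
    return LOCAL_BUS_US + 2 * hops * FABRIC_HOP_US + ARRAY_SERVICE_US

print(f"local read: ~{LOCAL_BUS_US} us")
print(f"SAN read  : ~{san_read_us()} us")  # ~410 us with two hops each way
```

The structure is the point: the networked path contains the local path as its first step, so it can never be faster, only slower.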

Jump forward a few years, and we have ubiquitous 10 GbE and very, very fast flash storage. These two combine to overcome many of the issues with putting storage on the network. Just as importantly, we now have very large drives, meaning a tremendous amount of storage fits inside a standard rackmount server. The final piece of the puzzle is file system development that allows pools of storage internal to a server to be presented as shared external storage. This convergence of the SAN with the servers that provide the compute capacity is what we call “hyperconverged.” In reality, it is a modern mainframe. A hyperconverged cluster looks to the App developer just as that mainframe did: a pool of resources to be consumed. This time, however, it doesn’t have the time-sharing limitation, it doesn’t have the huge initial outlay, and it is orders of magnitude more scalable.
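
To illustrate what “a pool of resources” means in practice, here is a minimal sketch of the aggregation idea. The names and structure are hypothetical; a real hyperconverged stack also handles replication, rebalancing, and failure domains, none of which appear here:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """The resources one rackmount server contributes to the cluster."""
    cpu_cores: int
    ram_gb: int
    storage_tb: float

def cluster_pool(nodes: list[Node]) -> Node:
    # Aggregate each node's local resources into the single pool the
    # App developer sees, mainframe-style.
    return Node(
        cpu_cores=sum(n.cpu_cores for n in nodes),
        ram_gb=sum(n.ram_gb for n in nodes),
        storage_tb=sum(n.storage_tb for n in nodes),
    )

# Four identical nodes appear as one large resource pool.
nodes = [Node(cpu_cores=32, ram_gb=256, storage_tb=20.0) for _ in range(4)]
print(cluster_pool(nodes))  # Node(cpu_cores=128, ram_gb=1024, storage_tb=80.0)
```

Scaling out is then just appending another Node to the list, which is where the “orders of magnitude more scalable” claim comes from.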

Between vendors such as VMware, which provide pure-software hyperconverged systems that can run on any supported commodity hardware, and those such as VCE (with VxRail) and SimpliVity, which provide complete solutions, hyperconverged systems are very attractive. Some models are designed to provide huge amounts of storage relative to compute; others offer a more even balance of the two. The age of the mainframe has returned, and we have come full circle.