In IT there is not simply more than one way to skin a cat; there are whole philosophies describing the methods. Most of these philosophies are grounded in preference and history, and in how, where, and when we learned, rather than in what is necessarily the best way. Much of the how and when is tied to cycles that seem to permeate the industry: ideas surface, are used, are improved, and eventually resurface.

For most of my career, I’ve straddled the Windows and Linux worlds, with their vastly different approaches to similar problems. Open source vs. closed source is the big one. A much more subtle one is the “one application to rule them all” philosophy of Windows systems vs. the “many applications that each do one thing well” (strung together with duct tape and string) philosophy of Linux.

It’s worth remembering that the ancestors of these two operating systems, Unix and MS-DOS, predate the mainstream GUI: everything was command-line oriented. However, where Unix followed a philosophy of small applications strung together with pipes, Windows adopted the idea of larger applications with key bindings and shortcuts to make usage easier. This was the first divergence of many. Like many other people, I was introduced to computers with Windows 3.1 and, soon afterward, Windows 95: the true dawn of the age of the GUI. Most functions in Windows could be done only through the GUI, or at least were a lot harder from DOS. Linux came about in this era but stayed very close to the Unix ideas, and its GUI (the X Window System) was clunky at the time. It worked, but most things were easier from the command line.
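That “small applications strung together with pipes” idea is easier to show than to describe. A minimal sketch, assuming an invented log file and contents (nothing here comes from a real system): four single-purpose tools combine to answer a question none of them could answer alone.

```shell
# Create a tiny sample log so the pipeline is self-contained
# (filename and contents are invented for illustration).
printf '10.0.0.1 GET /\n10.0.0.2 GET /\n10.0.0.1 GET /about\n' > /tmp/access.log

# Each tool does exactly one job; the pipe is the glue:
#   cut      extracts the client IP (first space-separated field)
#   sort     groups identical IPs together
#   uniq -c  collapses each group and prefixes a count
#   sort -rn ranks the result, busiest IP first
cut -d' ' -f1 /tmp/access.log | sort | uniq -c | sort -rn
```

Swap out any single stage and the rest keep working unchanged; that composability is the heart of the Unix philosophy.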

This idea, that some things were easier from the command line, persisted in Linux; in Windows it was lost. The other place it survived was in devices whose primary purpose was something other than “computing”: routers, switches, SAN devices, tape libraries. For many years these devices offered command-line interfaces rather than graphical ones, and where a router did have a GUI, it was often a cobbled-together afterthought. Many of us became “command-line jockeys” through using these types of devices, even in pure Windows environments.

So a generation grew up thinking that the GUI was the best way, and that Telnet existed solely to access switches (and if you could get a geeky network engineer to handle that, even better!). The craft of writing scripts and chaining small commands was lost to many sysadmins. Then virtualization entered the scene. VMware took off, and of course it was accessed through a GUI, and quite a good one: intuitive and simple, as all the best GUIs are. However, with virtualization came sprawl. The number of “servers” that needed to be administered grew massively, and coupled with hardware improvements, that meant thousands of virtual machines could run in a single rack. Many sysadmins turned to scripting to tame the repetition.
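Scripting away that repetition often starts as nothing grander than a loop over an inventory file. A hedged sketch, in which the host names, file path, and the check itself are all invented for illustration:

```shell
# Invented inventory file: one VM hostname per line.
printf 'web01\nweb02\ndb01\n' > /tmp/vm-list.txt

# The same task runs once per VM, instead of clicking through
# a GUI for each one. In real use the echo would be a remote
# command, e.g. ssh "$host" df -h /var
while read -r host; do
  echo "checking disk space on $host"
done < /tmp/vm-list.txt
```

Add a tenth VM, or a thousandth, and the script does not change; only the inventory file grows.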

Even Microsoft, the bastion of the GUI, saw the evidence and moved to a “PowerShell first” development model, best seen in Exchange from 2010 onward. Under this model, all functionality was implemented as PowerShell cmdlets, and the GUI simply called those, which made scripting easier than ever. Indeed, there were Exchange 2010 deployment models in which the GUI wasn’t even installed, let alone used. A new generation of sysadmins was seeing the light, and scripting grew into orchestration.

The rise of Apache and Linux servers for web development went hand in hand with this. Linux has a deep history of shell tools and scripting languages, from ash to zsh, Perl, Ruby, and Python. With orchestration tools, mass deployment became possible, and to go along with it, desired state configuration systems such as Puppet and Chef were created (with Salt and Ansible later). It was now possible to define and deploy many servers not only from the command line but programmatically, without ever actually logging into a single one. This approach pairs with cloud platforms to create “elastic” clusters that shrink and grow on demand with minimal input from the user: true automation.

Microsoft has seen this trend too, not only building desired state configuration (DSC) into PowerShell but also opening up APIs into Azure and Office 365.

Where does that leave the modern sysadmin? I know and work with many for whom the GUI is still the only way: if they can’t point and click, they can’t do it. I can’t see them being around for much longer, though. We have come full circle to not requiring a GUI on most servers at all, to running our estate as pure code. I see this as a good thing. It puts version control, standards, and documentation within much easier reach, and it lets the business metrics that management wants be generated automatically. Even so, I still struggle to wean myself off the pretty graphics.