The WannaCry “ransomworm” highlighted a number of issues in modern enterprise IT. Dealing with legacy software and support, maintaining good patching processes, effectively mitigating discovered vulnerabilities, and handling responsible disclosure: all of these points have been raised and discussed in the aftermath of the outbreak.
A particularly interesting, and quite controversial, point was the suggestion that the vendor should be made legally responsible for maintaining out-of-date software. The argument originated in an opinion piece in The New York Times and was repeated in a few other journalistic outlets. In the wake of the article, former GCHQ chief Sir David Omand assigned the “moral responsibility” to the vendor. Microsoft created the Windows OS, the thinking goes; now that it has exhibited a dangerous vulnerability, surely it should be up to Microsoft to issue a fix, regardless of the support timescales involved.
The hit that the NHS took probably made this much more of an emotional point. Expensive systems such as MRI machines, purchased for not-inconsiderable sums, are in many cases wedded to software that can run only on the operating system supplied at the time (often Windows XP). Although the option still exists to pay for extended support on legacy systems, the fragmented nature of the NHS meant that many trusts no longer had it in place. Medical systems raise the spectre of damage to health as a result of an attack, and it is only natural that Microsoft has been in the firing line, because it is its code that was exploited.
Many have compared this to the car industry. If cars no longer under warranty exhibited a dangerous fault, they would be recalled by the manufacturer for fixes. This is the sort of analogy that generates breathless headlines and creates a push for change. But is it fair to apply it here?
There’s no easy answer, but here is a list of points that are relevant to this discussion:
- Microsoft provides longer-term support than many other companies do for their products. XP was supported for the better part of thirteen years, and there was plenty of advance warning about the end of support. Both Google and Apple have much shorter support lifecycles.
- Microsoft didn’t just “drop” XP support on a whim: it moved the product through a long cycle of mainstream and then extended support before finally withdrawing it.
- Microsoft offered users the option to pay for extended support. In the case of the NHS, central government paid for extended support for one year only; after that, the decision on whether to keep paying for it passed to each individual trust.
- Providing service “in perpetuity” would have a huge impact on the price of software in general and on the availability of new versions. Granted, this might not be a bad thing (see Windows 10), but it’s not something that the entire ecosystem would be pleased to see. Providing ongoing support and patches is not a cost-free process.
- Microsoft’s support policy is based on the date of first release, not the date of sale, whereas a car’s warranty clock starts when the car is sold; this is probably why the car analogy doesn’t fit so well.
- The Windows software is “proprietary”: what you’re actually getting is a limited license, not real property, so the same rules don’t apply.
Would open-source software be the answer? That would mean that whoever submitted the code, or someone with the skills to maintain it, would then have a commitment to support it for the given period of time. In practice, that would probably lead companies to take out support contracts with vendors, which, after a period of time, they might decide are no longer economically feasible, and then we’re back in exactly the same boat we started in.
Focusing on specialized equipment such as MRI scanners is probably a trifle disingenuous. Without hard statistics, it’s impossible to know how many vulnerable legacy systems were actually connected to specific devices like these, or whether some of the affected trusts simply still had Windows XP machines on every desk.
The current aggressive refresh cycle for hardware and software may not be sustainable, particularly in verticals such as healthcare, and OS stability should be a greater concern for enterprises. There is also a tendency to “over-engineer” solutions and put a full, general-purpose desktop OS onto devices that do not require it. Does an MRI scanner, to continue the example, really need to be driven by a machine that can also run Excel or a browser?
Should some of the blame fall on the procurement sections of the enterprises involved? Should any procurement be forced to take into account planned or unexpected obsolescence of the software involved, with an appropriate contingency plan attached?
I could go on and on: there is a whole host of relevant talking points. The debate will continue, but the temptation to simply legislate our way out of these situations should probably be resisted. Imposing laws that make the vendor responsible would likely have huge knock-on effects that change the software industry as a whole.
And a final point: is Microsoft itself somewhat to blame because of its long-term strategy? Since the 1990s it has tried to lock software vendors into its platform. Did it, in doing so, encourage developers to produce nightmare pieces of code that would prove unsupportable in the future? And if so, should it be forced to change in other ways besides simply providing extended support?
I can’t see legislation being a serious way to bring about the changes needed to guard against this type of event. The industry as a whole needs to give feedback to vendors and encourage them to adapt their practices to minimize the potential for future disruption. Changes are needed, but when they are enforced by governments they tend to be short-term, badly thought out, or both. The IT industry as a whole needs to deal with cybersecurity, make it the default posture, and put its own house in order.