The October 12, 2017, virtualization and cloud security podcast moved up the stack: in it, we discussed security within the guest operating system. This approach applies equally to clouds, virtualization, and physical systems. Unlike Software as a Service (SaaS), every other form of infrastructure or platform involves an operating system you can control, and that is the crux of the reason for going up the stack: the operating system is something you control, regardless of location. We are not talking about adding new features or functionality; rather, we are talking about using what is already there to mitigate attacks, such as ransomware, that move laterally within the operating system.
Part of any anti-ransomware solution is prevention: finding a way to keep ransomware from ever running. To do that, we need tools that block programs that are not allowed to run, or that keep running programs from accessing data they should not touch. Companies like Bromium do this by restricting all access to the operating system via a hardware sandbox. Others use a software sandbox. Still others build tools that tie into the underlying kernel to prevent the use of executables and other resources. This last approach is often called mandatory access control, or whitelisting.
Whitelisting tools are present in most enterprise operating systems. On Linux systems, the tool is SELinux; on Microsoft Windows, you can use Group Policy to restrict which applications may run via a whitelist. In either case, once this is set up, the security team can restrict what can run. However, for this to be effective, application vendors need to participate.
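To make the Linux side concrete, here is a minimal sketch, not from the podcast, of the first sanity check any whitelisting effort needs: confirming that SELinux is actually enforcing. It assumes only that the standard `getenforce` tool is on the PATH.

```python
#!/usr/bin/env python3
"""Quick check: is SELinux actually enforcing before we trust a whitelist?"""
import subprocess
import sys


def selinux_mode() -> str:
    """Return 'Enforcing', 'Permissive', 'Disabled', or 'Unavailable'."""
    try:
        out = subprocess.run(
            ["getenforce"], capture_output=True, text=True, check=True
        )
        return out.stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        return "Unavailable"


if __name__ == "__main__":
    mode = selinux_mode()
    print(f"SELinux mode: {mode}")
    if mode != "Enforcing":
        # A whitelist policy that is not enforced is just documentation.
        sys.exit("SELinux is not enforcing; application restrictions will not apply.")
```

On Windows, the equivalent first step is confirming that the relevant Group Policy objects are actually applied to the workload, not merely defined.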
As we discussed in the podcast, many companies just do not have the expertise to set up these systems; if they did, the systems would be in use. Some of this is historical: the policy tools were rather horrible in the beginning, missing even the most basic things. They have since matured, and there is now a wealth of examples on setting up whitelisting. Whitelisting is ideal for systems that run only one or two applications or that serve a single purpose. It is even possible to take this approach for desktops; in that case, you would also have to control what can be installed.
This gets us back to the vendors. Vendors who produce applications need to ensure that those applications work properly with the built-in security measures of the operating system. If, for example, you develop on a Linux box and your product does not ship with an SELinux configuration, that configuration needs to be written and tested. Security really is everyone’s business.
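As a rough illustration of what shipping that configuration could look like, the sketch below shows an installer step that loads a vendor-supplied, precompiled SELinux policy module. The module name and path are hypothetical; the only assumptions are that `semodule` (from policycoreutils) is available and the installer runs with root privileges.

```python
#!/usr/bin/env python3
"""Installer step: load a vendor-supplied SELinux policy module."""
import subprocess
import sys
from pathlib import Path

# Hypothetical location of the compiled policy package shipped with the product.
POLICY_PACKAGE = Path("/opt/myapp/selinux/myapp.pp")


def install_policy_module(pp_file: Path) -> None:
    """Install (or replace) the compiled policy package with semodule."""
    if not pp_file.is_file():
        sys.exit(f"Policy package not found: {pp_file}")
    # 'semodule -i' installs the module; re-running it replaces an existing copy,
    # so the same step works for both fresh installs and upgrades.
    subprocess.run(["semodule", "-i", str(pp_file)], check=True)


if __name__ == "__main__":
    install_policy_module(POLICY_PACKAGE)
    print("SELinux policy module installed; file labeling is handled separately.")
```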
All the scripts I use to install products on my Linux systems work with SELinux, as it is enforcing on those systems. The scripts make the necessary changes to ensure that SELinux works properly with the application being installed, and they can even ensure that after an installation or upgrade of a product, the SELinux bits are updated as well. Why?
Upgrades to software and operating systems, as well as installs of new software, change the underlying files and configurations. That creates a need to post-process: to reset the security context to the desired state, so that there are no issues moving forward and what you set up continues to work from then on. Windows has similar requirements.
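In the spirit of those install scripts, here is a sketch of that post-processing step: registering persistent file-context rules and then relabeling the application tree so the on-disk security contexts match the desired state after an install or upgrade. The application path and SELinux type are hypothetical, and it assumes `semanage` (from the policycoreutils Python utilities) and `restorecon` are available and the script runs as root.

```python
#!/usr/bin/env python3
"""Post-install/upgrade step: register file contexts and relabel the app tree."""
import subprocess

APP_ROOT = "/opt/myapp"                 # hypothetical install location
CONTEXT_TYPE = "httpd_sys_content_t"    # hypothetical type; use what your policy defines


def register_file_context(path_regex: str, setype: str) -> None:
    """Persistently map a path pattern to an SELinux type (tolerate re-runs)."""
    result = subprocess.run(
        ["semanage", "fcontext", "-a", "-t", setype, path_regex],
        capture_output=True, text=True,
    )
    if result.returncode != 0 and "already defined" not in result.stderr:
        raise RuntimeError(result.stderr.strip())


def relabel(path: str) -> None:
    """Reset on-disk labels to match the registered contexts (run after every upgrade)."""
    subprocess.run(["restorecon", "-R", path], check=True)


if __name__ == "__main__":
    register_file_context(f"{APP_ROOT}(/.*)?", CONTEXT_TYPE)
    relabel(APP_ROOT)
    print(f"Relabeled {APP_ROOT}; security contexts are back to the desired state.")
```

Because the file-context rule is registered persistently, re-running the relabel step after any upgrade brings changed files back into line without further thought.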
Anti-ransomware starts with prevention, and the best prevention available today is to whitelist which specific applications are allowed to run and to access critical resources.
Where are you with using GPOs or SELinux to whitelist applications and set up an overarching security context for your workloads?