In my last article, EUC Use Cases: Secure Hybrid Cloud, we looked at how the user could be getting to our data. By doing this, we can place security at the union of data and the user, wherever the data resides and however the user gets there. Yet, we cannot forget where the data is presented. To present data, it must be copied from its repository to some other device. In the case of virtual desktops, that data is copied as graphical constructs derived from it; for file servers, the data is presented in its raw form. So, to secure everything from end to end, what do we really need?
This truly depends on the classification of the data. If it is public data, per our use cases, not much security is actually required. We may, for example, require just an audit log for compliance reasons but nothing much outside that. For private data, we may need much more than an audit log. Data should be encrypted unless it is accessed from a blessed device, in a blessed location, by a blessed application originally requested by a blessed human being or other system. In essence, we need to know who, what, where, when, and how the data will be accessed in order to make an intelligent decision on whether or not to allow access to the data.
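To make that decision concrete, consider a minimal sketch of a policy check that weighs the who, what, where, when, and how of a request against the data's classification. This is an illustration only; the classifications, trust labels, and helper names are assumptions for the example, not any particular product's API.

```python
from dataclasses import dataclass

# Hypothetical trust labels for illustration only.
TRUSTED = "trusted"
UNKNOWN = "unknown"

@dataclass
class AccessRequest:
    who: str    # authenticated identity (user or other system)
    what: str   # application making the request
    where: str  # device/location trust: "trusted" or "unknown"
    when: str   # e.g. "business_hours" or "off_hours"
    how: str    # access method: "virtual_desktop", "file_share", ...

def decide(request: AccessRequest, classification: str) -> str:
    """Return an access decision for the given request and data classification."""
    if classification == "public":
        # Public data: allow, but keep an audit trail for compliance.
        return "allow_with_audit"
    # Private data: every question must have a blessed answer;
    # otherwise the data stays encrypted (effectively denied).
    blessed = (
        request.who != "" and
        request.what in {"approved_app"} and
        request.where == TRUSTED and
        request.when == "business_hours" and
        request.how in {"virtual_desktop", "file_share"}
    )
    return "allow_with_audit" if blessed else "deny_keep_encrypted"

# Example: a known user on an unknown device asking for private data.
print(decide(AccessRequest("alice", "approved_app", UNKNOWN,
                           "business_hours", "virtual_desktop"), "private"))
```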
So, are there any tools that can assist us here today?
EUC Use Cases: Current On-Premises Tools
There are a few, but none of them are as authoritative as I would like. We just do not know who is really accessing our data. This has been the crux of the secure hybrid cloud and software-defined data center. When you add in end user computing devices, there is almost no way to prove the identity of the user. If we cannot prove the identity of the user and our security is user-centric, then we fail at the beginning. However, other avenues for proving identity are being researched. To be honest, there has never been a true way to prove identity at the keyboard of any computer except through external means. The current best means is to use multifactor authentication:
- Something you know: passphrase, password, etc. (we all know about these)
- Something you have: identity card, tokens, USB key with certificate, etc. (CAC, RSA SecurID, GoldKey, etc.)
- Something you are: fingerprints, eye scans, photos, DNA, rhythm of device use, keyboard usage, etc. (BehavioSec and others).
So far, each of these has a tool behind it. We know about passwords and passphrases, and everyone has at least one. We have so many, in fact, that we store them in password lockers on our devices, in the cloud, or elsewhere, with a master password to unlock the password store. These password stores also need multiple factors of authentication. Those factors do not need to be present all the time but should be presented when you access more-secure data from less-secure locations, devices, applications, or the like.
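To illustrate presenting factors only when they are needed, the sketch below counts which factor categories a session has satisfied and decides whether step-up authentication is required. The factor names and thresholds are illustrative assumptions, not a reference to any specific product.

```python
# Hypothetical step-up check: require more factor categories as the
# context (location/device) becomes less trusted. Illustrative only.

FACTOR_CATEGORIES = {"know", "have", "are"}  # password, token, biometric/behavior

def factors_required(data_sensitivity: str, context_trust: str) -> int:
    # Assumed policy: secure data from an untrusted context needs all three.
    if data_sensitivity == "public":
        return 1
    return 2 if context_trust == "trusted" else 3

def needs_step_up(presented: set, data_sensitivity: str, context_trust: str) -> bool:
    """True when the session must present additional authentication factors."""
    satisfied = presented & FACTOR_CATEGORIES
    return len(satisfied) < factors_required(data_sensitivity, context_trust)

# A user with only a password, on an untrusted device, touching private data:
print(needs_step_up({"know"}, "private", "untrusted"))  # True -> ask for a token or biometric
```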
Now we move further into our secure hybrid cloud, and we pass through a firewall: perhaps it is a next-generation firewall. A next-generation firewall attempts to associate a user, a location, a device, and permissions with each other. If you have the right user from the proper location and device, then you can gain access to the set of ports that an application can use. All in all, this is exactly what we need in some cases. However, a next-generation firewall that does user detection hooks into the authentication of the user via a directory server. That directory server is not the authoritative source for connections made or broken within a virtual desktop environment, a file server, or the like. That authentication may not happen immediately, and the delay could be enough to disrupt a low-latency application: one that needs to respond in perhaps 300 ms or less. Therefore, we need authoritative sources for connectivity.
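In spirit, a next-generation firewall rule ties those attributes together roughly as sketched below. The rule table, group names, and ports are invented for the example; real products express this in their own policy language, and, as noted above, their view of which user is connected depends on the directory server.

```python
# Illustrative next-generation-firewall-style policy: map a user group,
# network zone, and device posture onto the ports an application may use.
# Rule contents are assumptions for the example, not a vendor's schema.

RULES = [
    # (user_group, location_zone, device_posture) -> allowed TCP ports
    (("finance", "corp_lan", "managed"), {443, 1433}),   # finance app + its database
    (("engineering", "vpn", "managed"), {443, 22}),      # web + ssh
]

def allowed_ports(user_group: str, zone: str, posture: str) -> set:
    for (group, loc, dev), ports in RULES:
        if (group, loc, dev) == (user_group, zone, posture):
            return ports
    return set()  # default deny

print(allowed_ports("finance", "corp_lan", "managed"))   # {443, 1433}
print(allowed_ports("finance", "coffee_shop", "byod"))   # set() -> blocked
```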
Once we move further into our application suite, there is a set of tools that can help us: a set based on where a virtual desktop resides, how it was booted, the application in use, the user who is logged in, and the file to be accessed. We can write a policy that marries all of those together into a policy chain that is followed to allow or deny access to the data in question. This is achieved using CloudLink’s SecureVSA, SecureVM, and SecureFILE tools working in concert. SecureVSA provides a place to store data that should be encrypted at rest, such as backups and data volumes. SecureVM controls encryption of the boot volume of a VM using in-VM tools that exist for Windows and Linux. SecureFILE wraps the data in a security context that ties the others together with the application in question.
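The policy chain idea can be sketched as a series of links that must all hold before the data is released to the application. This is an illustration of the concept only; it is not CloudLink’s actual policy syntax or API, and every check name here is an assumption.

```python
# Conceptual policy chain: every link must pass before data is released.
# These checks are stand-ins for illustration, not CloudLink's API.

def vm_boot_verified(vm: dict) -> bool:
    return vm.get("boot_volume_encrypted") and vm.get("key_released_by_policy")

def application_allowed(app: str) -> bool:
    return app in {"payroll_app"}                # assumed allow-list

def user_authorized(user: str, file_owner_group: str) -> bool:
    return user in {"alice", "bob"} and file_owner_group == "finance"

def evaluate_chain(vm: dict, app: str, user: str, file_meta: dict) -> bool:
    """Allow access only if every link in the chain holds."""
    links = [
        vm_boot_verified(vm),
        application_allowed(app),
        user_authorized(user, file_meta["owner_group"]),
        file_meta["classification"] != "restricted",
    ]
    return all(links)   # one broken link denies access

vm = {"boot_volume_encrypted": True, "key_released_by_policy": True}
file_meta = {"owner_group": "finance", "classification": "private"}
print(evaluate_chain(vm, "payroll_app", "alice", file_meta))  # True
```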
However, that does not necessarily defeat malware or viruses that could attach to a system to exfiltrate data after it has been decrypted by a targeted application. For that, you will need tools like Bromium, Symantec Data Center Security, and other sandboxing technologies. These tools prevent access to the network, storage, and other elements of an operating system based on policy, application, user, and, in some cases, the actual data.
For on-premises situations, or ones in which we can gain control of the actual virtual machine (such as DaaS setups), there are tools we can use to meet some of our user-centric and data-centric requirements.
EUC Use Cases: Current Off-Premises Tools
In off-premises situations, we first need to gain visibility into our applications in order to determine where and when to elevate privileges and control access to our data. For this, we need tools like Adallom, Elastica, Managed Methods, and Skyfence. These tools present gateways through which our traffic must run in order for us to determine where our users are going, where our data resides, and how that data is accessed. These tools either present a gateway device that can live anywhere necessary (on-premises, in an IaaS cloud, etc.), or they hook into the data stream via cloud-based single sign-on mechanisms that force all traffic through a transparent proxy.
Once we know how an application is being used, we can request further factors of authentication in order to grant access to more functionality or critical data. Knowing how users access our data is crucial to user-centric security. Knowing where our data resides is crucial to data-centric security. One set of tools can marry these concepts together and present a union of user-centric and data-centric information by which security decisions can be made.
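A cloud access gateway of this kind essentially sits in the traffic path, records who is talking to which application with what data, and then decides whether to allow, require step-up authentication, or block. The sketch below is a simplified stand-in for that proxy logic; the application list and decisions are assumptions, not any vendor's behavior.

```python
# Simplified stand-in for a cloud access gateway / transparent proxy.
# Application names and policy here are illustrative assumptions.

SANCTIONED_APPS = {"crm.example.com": "crm", "files.example.com": "file_sharing"}

def inspect(user: str, destination_host: str, uploading: bool, stepped_up: bool) -> str:
    app = SANCTIONED_APPS.get(destination_host)
    if app is None:
        return "block"                      # unsanctioned destination
    # Record the event either way, so we know where our data is going.
    print(f"audit: {user} -> {app} ({destination_host}), upload={uploading}")
    if uploading and not stepped_up:
        return "require_step_up"            # data leaving: ask for another factor
    return "allow"

print(inspect("alice", "files.example.com", uploading=True, stepped_up=False))
```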
In Conclusion
There are some very simple steps to take to move along our user-centric and data-centric paths to security. We want to place security at the union of these approaches. But first, we need to do several things:
- Classify our data (classification)
- Determine where the data has gone in the meantime (discovery)
- Pick tools that will provide additional security both off- and on-premises
- Prove the identity of our users as they access different levels of classification
- Find authoritative sources for connections
- Keep in mind how hackers access our data and exfiltrate our data (research)
- Automate our security based on user actions
Most of where we need to go is based on doing quite a bit of legwork to discover where our data has ended up, classifying our data, researching attack paths, and finding authoritative connection sources. With virtual desktops, for example, we know that the connection broker is authoritative for connections and for which users are connected, so we should tie into that source of information to make security decisions every time a user connects or disconnects. Our firewall may miss the disconnect and thus leave open ports or access that should otherwise be denied. We need to respond quickly and in an automated fashion to our users’ requirement to access the data they need to do their jobs. We also need to automatically require additional authentication when users move between classification levels.
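As a sketch of tying into that authoritative source, the loop below reacts to hypothetical connection-broker events, opening access when a user connects to a desktop and, crucially, revoking it on disconnect rather than trusting the firewall to notice. The event format and the firewall calls are assumptions for illustration.

```python
# Illustrative event loop: treat the connection broker as the authoritative
# source for connect/disconnect and adjust access accordingly.
# Event shape and the firewall functions are assumed for this sketch.

def open_access(user: str, desktop: str) -> None:
    print(f"firewall: allow {user} -> {desktop}")

def revoke_access(user: str, desktop: str) -> None:
    print(f"firewall: deny  {user} -> {desktop}")

def handle_broker_event(event: dict) -> None:
    """React to broker connect/disconnect so stale access is never left open."""
    user, desktop = event["user"], event["desktop"]
    if event["type"] == "connect":
        open_access(user, desktop)
    elif event["type"] == "disconnect":
        revoke_access(user, desktop)   # do not rely on the firewall seeing this

for event in [
    {"type": "connect", "user": "alice", "desktop": "vdi-042"},
    {"type": "disconnect", "user": "alice", "desktop": "vdi-042"},
]:
    handle_broker_event(event)
```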
Where do you place your security now? Does it respond to the user? Does it use authoritative sources for connection requests and users?
Some tools are there now; others are not. We still need to unify our policies and security measures to follow one set of rules.