Data protection and patch management of virtual desktops, while not a sexy topic, is one that should happen on a regular basis within any organization implementing, or working to implement, virtual desktops. Recently, we have been testing virtual desktop software, and there is a huge difference between patching and protecting data across a small number of instances and across thousands of instances. There are scale considerations, ease-of-use concerns for file-level and system recovery, and issues with patching virtual desktops (not to mention other security issues).

The typical design for a virtual desktop implementation is depicted in Figure 1, where the EUC device connects through a firewall to a security server (perhaps just a load balancer, as is the case for XenDesktop), then through another firewall to a connection broker. The connection broker brokers the direct connection to the desktop and steps out of the way until the user logs off.

Figure 1: Typical Virtual Desktop Implementation
Furthermore, in larger virtual desktop implementations, the user's profile and any data the user accesses live on remote file shares. Smaller implementations may have remote profiles or remote data, but not usually both.
Given this typical virtual desktop implementation, how does one architect a data protection and patching solution that will:

  • be transparent to the user
  • maintain a pool of available desktops at all times
  • maintain data integrity (never lose any data while we reconstitute VMs, etc.)
  • allow individual users to perform file level recovery as needed from anywhere
  • maintain data classification controls (such as RBAC for data, etc.)

So what is required to make this happen?

Patching

Patching is at once the easiest and hardest part of a virtual desktop deployment. Depending on the solution used, patching could be as simple as updating a master image (VMware Horizon View with linked clones) or may require reconstituting the virtual desktops from a master image (XenDesktop, and VMware Horizon View with or without linked clones, depending on per-VM application installations). This is one of the major reasons it is very important to offload user profiles, per-user applications, and any user data to locations outside the actual virtual desktops. If this does not happen, then patching requires you either to patch each individual virtual desktop separately or to wipe the slate clean when a virtual desktop is reconstituted.
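As a minimal illustration, a pre-rebuild check could confirm that nothing user-specific still lives on the desktop's local disk before it is wiped. The share paths and helper names in this sketch are hypothetical:

    # Sketch: verify user state has been offloaded before a desktop is rebuilt.
    # All share paths and the REQUIRED_REMOTE mapping are hypothetical examples.
    from pathlib import PureWindowsPath

    # Locations that must live on remote shares (UNC paths), not on the VM's disk.
    REQUIRED_REMOTE = {
        "profile":      r"\\filer01\profiles$\%USERNAME%",
        "user_data":    r"\\filer01\home$\%USERNAME%",
        "app_settings": r"\\filer01\appdata$\%USERNAME%",
    }

    def is_remote(path: str) -> bool:
        """True if the path is a UNC share rather than a local drive."""
        return str(PureWindowsPath(path)).startswith("\\\\")

    def validate_offload(locations: dict) -> list:
        """Return the names of any locations still pointing at local storage."""
        return [name for name, path in locations.items() if not is_remote(path)]

    if __name__ == "__main__":
        leftovers = validate_offload(REQUIRED_REMOTE)
        if leftovers:
            # Rebuilding the desktop now would destroy this user state.
            raise SystemExit(f"Refusing to rebuild: local state found for {leftovers}")
        print("All user state is on remote shares; safe to reconstitute desktops.")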
Yet there is one other consideration: patching a pool of desktops often implies complete removal of the old virtual machines and recreation from the updated master image. If you have to remove desktops to patch them, how do you account for those users who need access 24/7? This must be part of any architecture. One solution is to have multiple pools available, so you can migrate users between pools and perform patching without having to tell users they cannot log in. In that case you end up with something like Figure 2, though this implies you would need either to split users across pools or to keep a high-availability pool to meet patch requirements.

Figure 2: Adding in Pools of Desktops
Unfortunately, neither of the tools I have worked with, VMware View nor XenDesktop, yet has automatic mechanisms to manage patching using pools, so it remains a manual process. That makes this a perfect opportunity for a bit of scripting to provide more automation for virtual desktop patching, which would bring even more aspects of the software-defined datacenter (SDDC) to virtual desktops.
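A sketch of what such scripting might look like follows. The Broker class and its methods are hypothetical stand-ins for a connection broker's management API; real automation would drive the View or XenDesktop SDKs instead:

    # Sketch of the pool-by-pool patching loop described above. The Broker
    # class is a hypothetical stand-in for a broker management API.
    import time

    class Broker:
        """Stub of a connection broker management API (hypothetical)."""
        def disable_logins(self, pool): ...              # drain: no new sessions
        def active_sessions(self, pool): return 0        # sessions still running
        def rebuild_from_master(self, pool, image): ...  # recreate desktops
        def enable_logins(self, pool): ...

    def patch_pool(broker: Broker, pool: str, patched_image: str,
                   poll_seconds: int = 300) -> None:
        """Drain one pool, rebuild it from the patched master image, re-enable it.
        Run this pool by pool so a spare (HA) pool always remains available."""
        broker.disable_logins(pool)            # new logins land on other pools
        while broker.active_sessions(pool):    # wait for users to log off
            time.sleep(poll_seconds)
        broker.rebuild_from_master(pool, patched_image)
        broker.enable_logins(pool)

    # Usage: patch pools one at a time; the HA pool absorbs logins meanwhile.
    broker = Broker()
    for pool in ["pool-a", "pool-b"]:
        patch_pool(broker, pool, "desktop-master-patched")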

Data Protection

The main goal of data protection is to recover files, either one at a time or en masse. However, when we look at virtual desktops, we have multiple places from which to recover files. We can recover the following:

  • Applications
  • Per User Configurations
  • Data files

We fully understand how to recover data files and applications, but the per-user configuration files associated with an application often live either in odd locations on the filesystem or in the user's profile. Which is it? There is no standard, which is an issue. That means we need to understand how per-user configuration is stored for each application and either ensure that location is within the user's profile or create a specialized data location for it. Otherwise, when we go to recover data, we end up with missing data.
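One way to make that per-application knowledge explicit is a manifest of configuration locations that the backup process can consume. The applications and paths below are purely illustrative:

    # Sketch: a per-application manifest recording where each app keeps its
    # per-user configuration, so backup and restore can find it. Application
    # names and paths are illustrative, not a standard.
    PER_USER_CONFIG = {
        "ExampleEditor": [
            r"%APPDATA%\ExampleEditor\settings.xml",
            r"%USERPROFILE%\.exampleeditor",        # the "odd location" case
        ],
        "ExampleMailClient": [
            r"%LOCALAPPDATA%\ExampleMail\profile.db",
        ],
    }

    def expand_for_user(paths, env):
        """Expand %VAR% placeholders against one user's environment."""
        expanded = []
        for path in paths:
            for var, value in env.items():
                path = path.replace(f"%{var}%", value)
            expanded.append(path)
        return expanded

    # Usage: enumerate everything that must travel with this user's backup.
    user_env = {
        "APPDATA":      r"\\filer01\profiles$\jdoe\AppData\Roaming",
        "LOCALAPPDATA": r"\\filer01\profiles$\jdoe\AppData\Local",
        "USERPROFILE":  r"\\filer01\profiles$\jdoe",
    }
    for app, paths in PER_USER_CONFIG.items():
        for path in expand_for_user(paths, user_env):
            print(f"protect [{app}] {path}")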
We need data protection that is knowledgeable not only about the application but about the user as well. Unlike server backups, where applications rule, the user rules within virtual desktops, so our data protection must also be user specific. Data protection tools from Veeam and Symantec understand servers quite well and are crucial to any environment, yet we also need per-user file-level restore and user-centric backups. We have several items that can be backed up separately as if they were servers (see the sketch after this list), but we then run into user-centric concerns. Those are:

  • The Master Image(s)
  • The Security Server(s)
  • The Connection Broker
  • The Profile(s) Location
  • The Data Locations
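As a minimal sketch, those components could be driven through a simple job list, handing each one to an image-level or file-level backup as appropriate. The job names, VM names, and backup wrappers are hypothetical:

    # Sketch: the components listed above can be protected as ordinary server
    # backups. backup_vm and backup_share are hypothetical wrappers around
    # whatever tool (Veeam, Symantec, etc.) actually does the work.
    SERVER_STYLE_JOBS = [
        ("master image",      "vm",    "gold-desktop-master"),
        ("security server",   "vm",    "security-server-01"),
        ("connection broker", "vm",    "connection-broker-01"),
        ("profiles",          "share", r"\\filer01\profiles$"),
        ("user data",         "share", r"\\filer01\home$"),
    ]

    def backup_vm(name: str) -> None:
        print(f"image-level backup of VM: {name}")      # placeholder

    def backup_share(path: str) -> None:
        print(f"file-level backup of share: {path}")    # placeholder

    for label, kind, target in SERVER_STYLE_JOBS:
        print(f"scheduling {label}")
        (backup_vm if kind == "vm" else backup_share)(target)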

However, even if we protect this information with Veeam or other tools, the file-level recovery aspect of data protection must understand the existing RBAC for users, not just for administrators. There needs to be an IT-as-a-Service capability that allows file-level restores by the user on a regular basis. Veeam is one of the few vendors with mechanisms like this in place for users, but I feel these need to be built into the desktop and into the existing role-based access controls for files and data.
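A sketch of what a user-facing, RBAC-aware restore check could look like follows; the ACL and group lookups are hypothetical stand-ins for querying the real share permissions and directory:

    # Sketch: a self-service restore that honors the same RBAC the live file
    # share enforces. acl_for and user_groups are hypothetical stand-ins.
    def acl_for(path: str) -> set:
        """Hypothetical: users/groups allowed to read this path today."""
        return {"jdoe", "grp-finance"}

    def user_groups(user: str) -> set:
        """Hypothetical: resolve a user's directory group memberships."""
        return {"grp-finance"} if user == "jdoe" else set()

    def can_restore(user: str, path: str) -> bool:
        """Allow a restore only if RBAC lets the user read the file today."""
        allowed = acl_for(path)
        return user in allowed or bool(user_groups(user) & allowed)

    def self_service_restore(user: str, path: str, version: str) -> None:
        if not can_restore(user, path):
            raise PermissionError(f"{user} may not restore {path}")
        print(f"restoring {path} @ {version} for {user}")  # hand to backup tool

    self_service_restore("jdoe", r"\\filer01\home$\jdoe\budget.xlsx", "2013-06-01")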
This is another area where automation associated with the software-defined datacenter could come into play. While legacy storage understands file-level recovery via web pages, does it recover everything related to a user within an application? If I recover file X, does this also restore the metadata and configuration files about file X? Do we know how the files are related? This is what I mean by being user centric: the backup tools need to understand this type of interdependency.
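To make that interdependency concrete, a backup tool could carry a dependency map so that restoring file X automatically pulls in its related configuration and metadata. Everything in this sketch is illustrative:

    # Sketch: user-centric restore means bringing back file X *and* the
    # metadata and configuration files that make it usable. The dependency
    # map below is illustrative and would be built from per-app knowledge.
    RELATED_FILES = {
        r"\\filer01\home$\{user}\projectX.doc": [
            r"\\filer01\profiles$\{user}\AppData\Roaming\ExampleEditor\projectX.settings",
            r"\\filer01\home$\{user}\.projectX.meta",
        ],
    }

    def restore_set(primary: str, user: str) -> list:
        """Return everything that must be restored together with `primary`."""
        template = primary.replace(user, "{user}")
        companions = [c.format(user=user) for c in RELATED_FILES.get(template, [])]
        return [primary] + companions

    for path in restore_set(r"\\filer01\home$\jdoe\projectX.doc", "jdoe"):
        print("restore", path)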

Closing Thoughts

When you consider patching and data protection for virtual desktops, there is more here than initially meets the eye. We need to consider all aspects of the virtual desktop application and operating system life cycle. Software-defined datacenter concepts must be embraced by virtual desktops, not only by servers.
How do you patch and protect the data within your virtual desktops?

One reply on “Virtual Desktop Patching and Data Protection”

  1. Patching is a valid point – just because you’ve got VDI doesn’t mean you can stop managing. But I think your patching options are messy. Disregarding the concept of a master image being confined to linked clones with VMware (XenDesktop has that functionality, and you can incorporate solutions such as UniDesk), I personally believe linked clones are a feature to improve storage options and performance rather than to deal with patching.
    But then you don’t need layers to patch. There is a range of products that can and have dealt with patch management (LANDesk, Microsoft, Symantec, Novell, to name a few vendors). Layering makes the process faster in a VDI environment, for sure – but the reason the automation you’d like to aspire to doesn’t exist is that people tend to get annoyed when they are asked to log off and log back on again. While it is possible to redirect a server workload in a software-defined datacentre, it’s far harder to ask Mavis to stop typing while you reboot her PC if that application is running in the PC environment. Desktop changes have a far more immediate impact: there has long been a push for VDI environments to be layered – that’s not the case today.
    Data protection – again, vital – but there are many tools here. Microsoft has had user file version control on SMB shares since Windows 2003; AppSense and RES Software offer user state recovery; and there is an ever-increasing use of SharePoint as a document store rather than directory shares. And here is a problem with automation in a user-centric environment – we are a number of steps away from the computer preempting the user dropping the ball.
    I was at the London VMUG today and saw an excellent presentation on the benefits of a software-defined datacentre using automation to preempt and predict change or failure to maintain a best-running/optimal state. Brilliant for servers.
    I think a difficulty this piece projects is, again, attempting to equate server virtualisation with desktop virtualisation – they share a surname, but they are very distant cousins.
