One of the most common complaints to arise from users in the aftermath of a desktop virtualization deployment (whether it is done via pure virtual desktop infrastructure [VDI] or some form of server-based computing [SBC] solution) is that performance doesn’t measure up to their expectations. A negative image of the new platform develops and often spreads throughout an organization. Why is this? Are we failing to manage our users’ expectations properly, or is the perceived poor performance symptomatic of inadequate planning and bad implementation?

The truth is that in many virtualized desktop implementations, the negative image arises from a number of causes. Of the many contributory factors, a few of the more common ones are discussed here.

Management of Expectations for Desktop Virtualization

Users have a habit, when dealing with technology, of expecting that “newer = better.” This is entirely understandable. In their experience, new technology either runs faster than the old (think of an upgraded home PC with a faster CPU and more RAM) or provides a better experience (think of PlayStation 3 games compared to the old PlayStation 2).

Perversely, the drivers for a desktop virtualization project are often not centred on improved performance. Consolidation of infrastructure, centralization of management, reduction of costs, simplified desktop provisioning, streamlining of applications—these are the main needs driving desktop virtualization plans, and none of them concerns speed or responsiveness. If these drivers aren’t communicated to the business, users tend to see a project in progress and assume it will be an improvement over the existing infrastructure. Attach the word “upgrade” to the project—as many organizations do when moving from Windows XP/Server 2003 to newer versions—and you can understand why users will expect a performance increase of some sort.

This is further complicated when existing users have their own local, dedicated physical PCs, invariably coupled with huge local profiles and local storage. Not only does this give a virtualized environment a rather tough act to follow, but it also means that users will now be a lot more sensitive to loss of network connectivity than they previously were.

It’s therefore vitally important to communicate to business users that a virtualized, thin-client infrastructure (particularly SBC) is not going to run at twice the speed of a dedicated local PC with its associated local resources, unless you’ve specifically invested in technology that can offer that level of improvement.

Planning

Besides expectation issues, poor planning can also contribute to a substandard desktop virtualization solution. Especially when moving from thick to thin client, the virtualized model often introduces a much heavier reliance on network and storage than existed previously. Many virtualization technologies rely on communication with databases and web services to operate. Add to this the intricacies and oddities of a typical business’s application sets, and you can see that planning such a project is vital and cannot be undertaken lightly.

The planning process should also include application analysis and discovery, particularly if there is a need to decide on application placement. Applications—particularly legacy ones—may need to be delivered in different ways, whether installed locally, packaged, or streamed.
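As a starting point for that discovery exercise, a surprising amount can be learned simply by inventorying what is actually installed on existing endpoints. The sketch below is a minimal, illustrative example, assuming Python is available on the endpoint and that applications register themselves under the standard Windows Uninstall registry keys; dedicated discovery tooling goes much further, but even this level of data helps inform placement decisions.

```python
# Minimal application-discovery sketch for a Windows endpoint: list the
# display names recorded under the standard Uninstall registry keys.
# Assumes Python on the endpoint; 32-bit apps on 64-bit Windows appear
# under the Wow6432Node variant, which is scanned as well.
import winreg

UNINSTALL_KEYS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def installed_apps():
    apps = set()
    for key_path in UNINSTALL_KEYS:
        try:
            root = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path)
        except FileNotFoundError:
            continue  # key absent (e.g., on 32-bit Windows)
        with root:
            subkey_count = winreg.QueryInfoKey(root)[0]
            for i in range(subkey_count):
                with winreg.OpenKey(root, winreg.EnumKey(root, i)) as sub:
                    try:
                        name, _ = winreg.QueryValueEx(sub, "DisplayName")
                        apps.add(name)
                    except FileNotFoundError:
                        pass  # entry without a display name
    return sorted(apps)

if __name__ == "__main__":
    for app in installed_apps():
        print(app)
```

Run across a representative sample of machines, output like this quickly reveals which applications are widespread enough to bake into a base image and which are candidates for packaging or streaming.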

Baselining

Another task overlooked during many desktop virtualization projects is adequate baselining. Post go-live, users will often complain that various parts of “the system” are “slow.” But just what qualifies as “slow”? Some users may see a logon time of two minutes as acceptable, whereas others may think forty seconds is far too long. Fifteen years ago, the only PC many users had was the one provided at work; today, users may have vastly quicker machines at home than those provided in the workplace, and their expectations of performance are shaped accordingly.

It is therefore vitally important to establish accepted baselines for measuring the performance of a virtualized system. The most common factors users look at to gauge the performance of their desktops are logon time, application launch time, and application responsiveness. Baselining your chosen metrics on the existing environment, and then again on the new one, shows you where users are likely to react negatively or positively. It’s also important to agree with the business on what the expectations for each area are. For instance, many people work on the assumption that a logon time of thirty seconds or less is acceptable; however, depending on what users are accustomed to or the particular departments in which they work, that may not be the case.
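Capturing those numbers doesn’t require elaborate tooling. The sketch below shows the basic idea, using a command-line-launchable stand-in for the application under test: sample repeatedly, record summary statistics, and keep the result so the old and new environments can be compared like for like. The command, run count, and output file are illustrative assumptions; real baselining of logon or window-ready times would use purpose-built tooling or event-log data.

```python
# Minimal baselining sketch: time repeated launches of a short-lived
# command and persist summary statistics for later comparison between
# the existing and the new environment. The command here (a bare Python
# interpreter start-up) is a stand-in for the real application.
import json
import statistics
import subprocess
import sys
import time

COMMAND = [sys.executable, "-c", "pass"]  # hypothetical app under test
RUNS = 20

samples = []
for _ in range(RUNS):
    start = time.perf_counter()
    subprocess.run(COMMAND, check=True)
    samples.append(time.perf_counter() - start)

samples.sort()
baseline = {
    "command": " ".join(COMMAND),
    "runs": RUNS,
    "median_s": round(statistics.median(samples), 3),
    "p95_s": round(samples[int(0.95 * (RUNS - 1))], 3),
}
with open("launch_baseline.json", "w") as fh:
    json.dump(baseline, fh, indent=2)
print(baseline)
```

The point is less the tooling than the discipline: a number agreed with the business before go-live turns “it feels slow” into a measurable comparison.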

Testing

Surely inadequate testing shouldn’t still be a factor these days? Sadly, it sometimes is. Organizations fail to test properly far more often than they should, usually because of pressure to deliver the project. Too often, testing is left to IT staff rather than “real” users, which is unsatisfactory because IT staff won’t behave in the unexpected ways that users do. Sometimes testing is done at a purely automated level, using technologies like Login VSI and LoadRunner, which, while perfectly adequate for stress testing and load planning, don’t capture the tendency of users to perform tasks in odd or even erroneous ways.
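That gap can be narrowed cheaply with a simple “monkey test” that throws randomized, frequently invalid, inputs at an action under test. The sketch below illustrates the pattern; the open_document function is a hypothetical stand-in for whatever a real harness would drive (a published application, a remoting session, and so on). It complements, rather than replaces, scripted load tests and real-user UAT.

```python
# Minimal "monkey test" sketch: feed randomized, often invalid, inputs
# into an action the way real users might. open_document() is a
# hypothetical stand-in for the real action under test.
import random
import string

def open_document(path: str) -> None:
    # Hypothetical action under test; a real harness would drive the
    # actual application or remoting session here.
    if not path.lower().endswith(".docx"):
        raise ValueError(f"unsupported file: {path!r}")

def random_path() -> str:
    # The sort of input users actually produce: stray spaces, odd
    # characters, wrong or missing extensions, double extensions.
    name = "".join(random.choices(string.ascii_letters + " .$_", k=12))
    ext = random.choice([".docx", ".doc", "", ".docx ", ".exe", ".docx.tmp"])
    return name + ext

unhandled = 0
for _ in range(200):
    path = random_path()
    try:
        open_document(path)
    except ValueError:
        pass  # rejected cleanly: the behaviour we want
    except Exception as exc:
        unhandled += 1
        print(f"unhandled failure for {path!r}: {exc}")
print(f"{unhandled} unhandled failures out of 200 odd inputs")
```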

Once the solution is designed and in the process of being built, a solid user acceptance testing (UAT) phase is needed to feed back new requirements that may alter the overall plan. Many businesses rush through this phase too quickly and fail to adequately identify potentially serious problems.

Monitoring

Again, monitoring should be considered a standard part of any deployment. Most solutions build monitoring in—but the monitoring itself is sometimes configured incorrectly, or there is no solid process in place for dealing with the alerts and events it raises. This should not be taken lightly—a good monitoring system can isolate and prevent problems that would otherwise contribute to users’ poor perception of the virtualized infrastructure.
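As a trivial illustration of closing that loop, the sketch below reuses the hypothetical launch_baseline.json file from the baselining example and flags live samples that drift too far past the agreed figure. The tolerance factor and the alert action are assumptions; in practice the alert would feed a ticketing or on-call process rather than a print statement.

```python
# Minimal drift-alert sketch: compare live measurements against the
# baseline recorded earlier and flag anything over an agreed tolerance.
# The 1.5x tolerance and the print "alert" are illustrative assumptions.
import json

TOLERANCE = 1.5  # alert when 50% slower than the agreed baseline p95

with open("launch_baseline.json") as fh:  # produced by the earlier sketch
    baseline = json.load(fh)

def check_sample(seconds: float) -> bool:
    """Return True (and emit an alert) if a live sample breaches the limit."""
    limit = baseline["p95_s"] * TOLERANCE
    if seconds > limit:
        print(f"ALERT: launch took {seconds:.2f}s; "
              f"limit is {limit:.2f}s (baseline p95 x {TOLERANCE})")
        return True
    return False

# Example: a live sample fed in by a monitoring agent.
check_sample(2.4)
```

However the thresholds are set, the essential part is the process behind them: an alert nobody owns is no better than no alert at all.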

Overcomplexity

The final point that tends to come up often is overcomplication of the solution. Rather than sticking with a few complementary technologies to deliver the virtualized project, some businesses end up using myriad pieces of software in an attempt to accommodate every demanding user. If application and user virtualization are also in play, the solution can quickly become unwieldy and difficult to support. In these cases, the service provided to the user suffers as IT departments struggle to identify the root cause of problems, and again, the perception of the entire virtualized infrastructure turns negative.

Summary

A desktop virtualization solution doesn’t necessarily mean a trade-off in performance; technology such as Atlantis ILIO, for instance, can make a virtual desktop perform better than a physical one. But ensuring that the project is properly planned, baselined, tested, and monitored can make a world of difference in getting users to warm to a new solution rather than rejecting it.
