In the past, virtualization architects and administrators were told the best way forward was to buy as much fast memory as they could afford and to standardize on one set of boxes with as many CPUs as they dared use. With vRAM Pool licensing, this type of open-ended RAM architecture will change, as I now have to consider vRAM pools when I architect new cloud and virtual environments. So let's look at this first from the perspective of existing virtual environments and then move on to new virtual and cloud environments. How much of a change will this be to how I architect things today, and how much of a change is there for my existing virtual environments? Is it a better decision to stay at vSphere 4? Or to switch hypervisors entirely?
VDI Environments
Yesterday, in vSphere 5 – Did VMware Misjudge its Licensing Changes?, Simon Bramfitt requested a VDI-only version of vSphere, and VMware has responded with vSphere Desktop, which removes the vRAM entitlement barrier for VDI. I see this as progress and a sign that VMware is listening. Unfortunately, it applies only to new purchases: you cannot convert existing vSphere licenses into vSphere Desktop licenses.
Existing Virtual Environments
I have already started to ask my consulting customers to run Luc Dekens' Query vRAM script. I have also asked my contacts at SolarWinds whether it was possible to get this information from their vMAN product (formerly Hyper9), so that there is another way to do this math (visually, for those without PowerShell skills). They produced a dashboard that does the math for me (see Figure 1).
SolarWinds makes this analysis easy by doing the math for you and reporting how many of each license you will need based on the current license methods. You can get this dashboard from us here.
For existing customers, both these tools are very useful. While one provides a graph, Luc's script provides the raw values: if your Configured value is greater than your vRAMEntitled value, you need more licenses (see Figure 2).
For existing customers, this is the first set of tools I will run to determine whether the upgrade to vSphere 5 will go smoothly or whether new licenses will be needed at added expense. So the first step of any analysis of whether to go to vSphere 5 is to measure, across your existing systems, the total vRAM allocated to the VMs in your pool and the total vRAM entitlement granted by your licensed CPUs. Alan Renouf also has a License Validator script that displays the results as HTML that looks good to a manager. For a manager view, I suggest Alan's script or SolarWinds vMAN.
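If you would rather see where the numbers come from, the math itself is simple. Below is a minimal PowerCLI sketch of the same calculation; it is not Luc's or Alan's script, the vCenter name is a hypothetical placeholder, and the per-socket entitlement (48 GB here, as announced for Enterprise Plus) should be adjusted to your license level.

```powershell
# A sketch only: total configured vRAM for powered-on VMs and compare it
# to the pool's entitlement (sockets x per-socket entitlement).
Connect-VIServer -Server vcenter.example.com   # hypothetical vCenter name

$entitlementGB = 48   # assumed per-socket entitlement; set to your license level

# Pool entitlement: CPU sockets across all hosts times the per-socket entitlement
$sockets = 0
foreach ($h in Get-VMHost | Get-View) {
    $sockets += $h.Hardware.CpuInfo.NumCpuPackages
}
$entitledGB = $sockets * $entitlementGB

# Configured vRAM: memory assigned to powered-on VMs
$configuredGB = (Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" } |
    Measure-Object -Property MemoryMB -Sum).Sum / 1KB

"Configured vRAM: {0:N0} GB / Entitled vRAM: {1:N0} GB" -f $configuredGB, $entitledGB
if ($configuredGB -gt $entitledGB) { "You will need more licenses." }
```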
Running one of these tools will be very important for existing virtual environments. If the results are not in your favor, you have a few choices moving forward:
- Do not upgrade to vSphere 5; stay at vSphere 4.1. Yes, you can still renew your Service and Support (SnS) for vSphere 4 even after vSphere 5 is released. vSphere 4 is not at end of life and most likely will not be for many years. Also remember, all licenses you purchase have no time limits; only SnS has a time limit. If you take this route, I would download your vSphere 4.1 ISO images now and save them for future use.
- Wait and see how VMware will respond to the flak about the licenses on the VMware Communities forum thread vSphere 5 Licensing. This thread, while quite emotional, contains some very interesting comments and represents the extremes of thought regarding the licensing changes.
- Create a mixed-hypervisor datacenter, pairing your existing vSphere licenses with any of Citrix XenServer, Microsoft Hyper-V, Red Hat KVM, or LynuxSecure.
- Completely switch your virtual environment and cloud hypervisor, ditching your currently purchased VMware products.
- Plan to upgrade to vSphere 5 when you need to add more resources to your environment.
Even with all these choices, I would start by running one of the tools and finding all your vRAM pools. So one of the key questions is: how do I find all my vRAM pools?
Where are my vRAM Pools?
VMware knows exactly how many licenses of each entitlement level it has sold and to whom. Unfortunately, on the customer side the administrators may not know this, and as such they see only what is in their realm. For an SMB/SME this may be just one group, but within a large enterprise there could be many different organizations holding vSphere licenses, all managed by different vCenter servers. So we have two cases to think about.
- All vSphere Licenses in one vCenter Server
- vSphere Licenses split across many vCenter Servers
In the first case, I run the tools described previously, get my numbers, and then add in my growth percentage. In the second case, I may have to do things quite differently. If vRAM is measured per vCenter instance, then I may want to go through my enterprise, ensure there is really only one true vCenter in use, and then assign permissions out as needed. There are several ways for this task to take place:
- Remove all but one vCenter and add to that vCenter multiple datacenters (more work)
- Federate my vCenters together (more cost)
The question then arises as to whether federated vCenters can be used to maintain one vRAM pool; that is a question VMware has still to answer. So we have found one place to find possibly more vRAM for the pool: looking at what licenses have already been sold and perhaps combining them into one vCenter. Given how large enterprises work, I just do not see this happening, as everyone wants to protect their realm (politics over technology), but the cost increases may force it to happen. The list of places to look for vSphere entitlements that add to your vRAM pool starts with the following; a sketch for totaling vRAM across them appears after the list:
- Multiple vCenters (non-federated, if federated vCenters count as one vRAM pool), including Dev/Test/QA environments that may be separate entities
- Hot-Sites
- SRM Target locations
- Rightsize VMs to use less memory, using capacity management tools such as vKernel and VMTurbo
The last option is often perilous due to politics and the possibility of swapping if memory runs too low. This is a balancing act that will take some time to figure out. Any such rightsizing needs to look not only at the now but at the trend over time, to catch not only the low-utilization periods but the high-utilization periods as well.
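Once you know where the pools live, totaling them is mechanical. Here is a rough PowerCLI sketch that sums configured vRAM across several non-federated vCenters; the vCenter names are hypothetical placeholders for wherever your Dev/Test/QA, hot-site, and SRM target vCenters actually live.

```powershell
# A rough sketch: total configured vRAM across multiple, non-federated vCenters.
$vCenters = "vc-prod.example.com", "vc-dev.example.com", "vc-dr.example.com"  # hypothetical

$totalGB = 0
foreach ($vc in $vCenters) {
    $conn = Connect-VIServer -Server $vc
    $sumMB = (Get-VM -Server $conn | Where-Object { $_.PowerState -eq "PoweredOn" } |
        Measure-Object -Property MemoryMB -Sum).Sum
    if ($sumMB) { $totalGB += $sumMB / 1KB }
    Disconnect-VIServer -Server $conn -Confirm:$false
}
"Total configured vRAM across all vCenters: {0:N0} GB" -f $totalGB
```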
Many of these approaches to finding more vRAM for your pool will involve politics, so it is best to involve your security, application, and operations teams in such discussions. Even so, there will be some companies that are in a serious bind under the new licensing scheme. These high-density sites will definitely need to discuss this with VMware after running the tools to determine how they are impacted.
The other issue I find with existing environments is that they may be overprovisioned on licenses and resources, such that they have lower resource utilization per node than could actually be supported. For those under their vRAM entitlements, this new licensing model will encourage higher densities or scale-up before buying more licenses; for those over their entitlements, it may force lower per-node VM densities and a more scale-out approach.
New Architecture
Pricing by vRAM pool will also affect how we architect and design our systems, given the possible cost increases. As a virtualization architect I predominantly design VMware environments, but other hypervisors come into play periodically, and this change will increase how often they do. Architects should know about all the virtualization possibilities and whether an application can be virtualized in each environment. VMware currently has the lead on features, but with this change it is not just features that will sell VMware to architects: it is price as well.
The end result of such a pricing model will be a rethink of scale-out vs. scale-up for many VMware customers. A typical architecture starts with the following thoughts (a quick sizing sketch follows the list):
- Requirements based on applications to virtualize
- % of Memory to Utilize, dependent upon # of hosts in a cluster, ±10%
- % of Growth of Applications over 2-5 years
- % of Network to Utilize, dependent on applications
- % of Storage to Utilize, dependent upon connections to storage device
- % of CPU to Utilize, not to exceed 80% of any one node
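To make those percentages concrete, here is a back-of-the-envelope sizing sketch. All the input numbers are illustrative assumptions, not recommendations; the point is that growth and the utilization cap are applied to the pool of resources, not to a single node.

```powershell
# Illustrative inputs only; substitute your own measurements.
$workloadGB = 400     # memory the applications need today
$growthRate = 0.20    # expected yearly growth of the applications
$years      = 3       # planning horizon
$nodeMemGB  = 192     # physical RAM per node
$memTarget  = 0.80    # never plan past 80% of any one resource

$futureGB = $workloadGB * [math]::Pow(1 + $growthRate, $years)   # ~691 GB
$nodes    = [math]::Ceiling($futureGB / ($nodeMemGB * $memTarget))
"Plan for {0:N0} GB of workload memory across at least {1} nodes" -f $futureGB, $nodes
```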
I think there is a mindset change that needs to take place. We need to stop thinking about independent nodes and think instead about the entire cluster/virtual environment/cloud. Even though we currently cannot span VMs across nodes, we can pool our resources, and this is what VMware intends: that we pool all our resources. If VMware somehow figured out how to get a VM to span nodes, we would no longer need to worry about per-node memory requirements. We may want to turn this around and look to create one massive compute node using technology such as that provided by ScaleMP (www.scalemp.com). With ScaleMP, multiple physical nodes present as one large node, on top of which you can run vSphere, so your workloads could span the underlying hardware. vRAM licensing could be an issue when using tools like ScaleMP, but such tools would remove the boundaries imposed by nodes.
So the next phase of thought is about the applications and how many resources they require:
- Off-the-box memory requirement
- Off-the-box CPU requirement
- Off-the-box storage requirement
As virtualization architects and administrators, we know the off-the-box numbers are horrendously inaccurate and usually far too high, but many people believe them. So at this point we either arbitrarily lower the values based on experience or run some experiments that try to duplicate real-world utilization. This allows us to learn the real values for memory, CPU, and storage. Overallocated, underutilized VMs will now cost more to run than before, and the performance gains from rightsizing such VMs are important in their own right. From the beginning, there is a requirement to architect with restraint when it comes to resources, as every resource you use will also impact every other VM on a host and perhaps within a cluster.
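As one way to run that experiment, the sketch below pulls each VM's observed memory usage from vCenter and sets it against the configured value. It assumes your vCenter statistics level retains 30 days of mem.usage.average samples, and it is a starting point, not a substitute for the capacity management tools mentioned earlier.

```powershell
# Compare configured memory to the 30-day observed peak for each powered-on VM.
# mem.usage.average is reported as a percentage of configured memory.
foreach ($vm in Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" }) {
    $stats = Get-Stat -Entity $vm -Stat mem.usage.average -Start (Get-Date).AddDays(-30)
    if ($stats) {
        $peakPct = ($stats | Measure-Object -Property Value -Maximum).Maximum
        "{0}: configured {1:N1} GB, 30-day peak {2:N1} GB" -f `
            $vm.Name, ($vm.MemoryMB / 1KB), ($vm.MemoryMB * $peakPct / 100 / 1KB)
    }
}
```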
Now we may actually think about hardware. I find my discussions start with the quantity of resources to use, and from there I decide how many nodes to provide within the virtual environment. This is dependent on two values: the need to never exceed 80% of allocated resources, and the need to maintain a highly available cluster. HA events happen, and the cluster should still run. So, simply put: if I have only two nodes, then no more than 50% of a node can be in use at any given time; with three nodes it is 66%; with four nodes it is 75%; and for any number of nodes greater than four I plan on using no more than 80% of any one resource. This architecture goal includes using TPS and other memory-saving techniques so that I can put more workloads on a single node, up to my 80% limit, if I have more than four nodes. vRAM pools are about pooling all your vRAM entitlements together, which means only the cluster is important, not any one node.
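That headroom rule is simple enough to write down: with N nodes, plan to survive the loss of one (so N-1 of N nodes carry the load) and never pass the 80% ceiling. A one-function sketch:

```powershell
# Maximum planned utilization per node: survive one host failure and
# never plan past 80% of any one resource.
function Get-MaxNodeUtilization ($nodes) {
    [math]::Min(($nodes - 1) / $nodes, 0.80)
}

# Prints 50%, 67%, 75%, 80%, 80% for 2 through 6 nodes (67% is the
# article's two-thirds for three nodes, allowing for rounding).
2..6 | ForEach-Object { "{0} nodes: {1:P0}" -f $_, (Get-MaxNodeUtilization $_) }
```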
However, if I need lots of memory on a per-node basis to handle an application such as a very large database, I need to ensure that memory is present in every node in the cluster where that VM may end up via an HA event. So minimally I need two such nodes with the proper affinity rules set. To afford such dense nodes, I perhaps need to think about quad-socket nodes, or about a larger number of dual-socket nodes, which always leads us back to the question of scale-out vs. scale-up. Once more, I need to shift my thinking to the cluster and not to a conglomerate of nodes. Yes, I do need to consider node size, but overall I am thinking about all pooled resources.
Next, I think about the growth of the environment over the next 2-5 years, and plan any upgrades as part of the architecture. Say we can now handle 50% of the five-year plan's total workload; in that case, I may allocate only a small portion of the planned workload per node and perhaps start with less memory, then increase it during a hardware purchase cycle within the next year or so. Now I may have to rethink this: instead of purchasing RAM in cycles, purchase new nodes with less RAM as part of that cycle, since I may actually need the CPU (and its vRAM entitlement) these new nodes provide.
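Growth also tells you when the pool itself runs out. A back-of-the-envelope projection, with purely illustrative numbers: if configured vRAM grows steadily, the years of runway left in the pool are log(entitled/configured) / log(1 + growth).

```powershell
# Illustrative numbers only: when does configured vRAM outgrow the pool?
$configuredGB = 600     # today's configured vRAM
$entitledGB   = 768     # the pool's current entitlement
$yearlyGrowth = 0.25    # expected yearly growth

$years = [math]::Log($entitledGB / $configuredGB) / [math]::Log(1 + $yearlyGrowth)
"At {0:P0} yearly growth, the pool is exhausted in roughly {1:N1} years" -f $yearlyGrowth, $years
```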
A nifty feature to have, and one I think we might be able to approach today if VMware FT ever worked with more than one vCPU, is a way to span loads across nodes instead of running them only on one node. This would make for an interesting discussion based solely on resources, independent of the actual nodes. FT also raises questions about configured vRAM for a VM: is this one VM across two nodes, or more than one VM? In addition, how are templates handled? Are they counted as configured vRAM or not?
With the announced vSphere Desktop and its unlimited vRAM entitlement, the need to be concerned about vRAM entitlements goes away for new VDI installations. This implies that the architecture and design of VDI environments will not change under the new vRAM licensing.
Conclusion
There is quite a bit to understand about this new license model, and I am still thinking my way through it, not only from the point of view of existing virtual environments but also for the architecture, design, and planning of new environments. For existing environments, I need to understand what is part of my vRAM pool; determining what I currently have is very important. For new environments, I need to think more about memory allocation across nodes, or rather from within the single pool. Does this change how I architect things? Absolutely, except in the case of VDI. Do I have to think about what comprises my pool of resources? Yes, I do.
It is also obvious that capacity management tools, such as those from SolarWinds, vKernel, VMTurbo, VMware, Veeam, Quest Software, etc., will need to change to determine whether an environment is over its vRAM entitlement, or whether it will go over entitlement in some number of days. SolarWinds vMAN has a start, as we stated at the beginning of this article, but it needs more. I would also like tools that work with the VMware store to list all the available licenses within an organization.
Some final questions for VMware:
- Is there an official tool to run yet?
- Can VMware provide me a list of all licenses bought or available to my organization? This will help with possibly joining resources into one vCenter.
- Is the pool considered what is in one vCenter or does it federate?
- When will we be able to span VMs across nodes? If you want to license by pooled resource, I need to architect based on the cluster and not any one node.
- Are FT-based machines counted as 2x configured memory, when I configured the memory for only one VM, not two?
- Are Templates considered part of this calculation?
- Will VMware be willing to increase per CPU vRAM Entitlement by minimally 16GB per socket for all license levels?
There has been a lot of emotional response to yet another license change by VMware, and VMware needs to handle this carefully. I see that they are responding, as with vSphere Desktop, but more is required.