In my past upgrade sagas, I had upgraded vCenter and then fixed a few niggling problems (or attempted to). Now it is time to actually upgrade the vSphere hosts themselves. I previously staged the vSphere 5.5 ISO into VMware Update Manager (VUM). Since all nodes need to be rebooted to do the upgrades, it is also a good time to update firmware. Otherwise, the upgrade is pretty straightforward. Hopefully, it will fix my remaining issues.
vSphere 5.5 Upgrade
To upgrade vSphere to 5.5, I did the following for each node:
Step 1: Enter Maintenance Mode (and evacuate all VMs, not just the running ones)
This step keeps my VMs running with zero downtime, which is a major feature of using virtualization for me. Without the ability to vMotion and migrate workloads, I would have to shut down mission-critical components. If your VMs are on local drives, you will have to migrate them by hand to other drives. This is where shared storage comes into play.
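If you would rather script this step than click through the vSphere Client, a pyVmomi sketch like the one below is one way to do it. This is a minimal sketch, assuming pyVmomi is installed; the vCenter address, host name, and credentials are placeholders of my own invention:

```python
# Minimal pyVmomi sketch: put a host into maintenance mode, evacuating
# powered-off and suspended VMs as well. A DRS cluster in fully automated
# mode handles the running VMs; otherwise the task waits for you.
# vCenter address, host name, and credentials are placeholders.
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password')  # newer Python may require an sslContext
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esx01.example.com')

# timeout=0 means no timeout; evacuatePoweredOffVms also moves the VMs
# that are not running, which is what "evacuate all VMs" above refers to.
task = host.EnterMaintenanceMode_Task(timeout=0, evacuatePoweredOffVms=True)
while task.info.state not in (vim.TaskInfo.State.success,
                              vim.TaskInfo.State.error):
    time.sleep(5)
print('Maintenance mode result:', task.info.state)
```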
Now, if you are like me, you will have some failed vMotions due to mounted CD-ROMs. This is another reason I make entering maintenance mode my first step. I remove CD-ROMs and migrate the running VMs by hand. Then, any shut-down VMs will automatically migrate.
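Disconnecting the offending CD-ROM drives can be scripted as well. Below is a rough sketch, continuing from the connection in the previous sketch; pointing the drive at a (disconnected) client device is one common way to clear a mounted ISO:

```python
# Sketch: disconnect any connected CD-ROM on the VMs of a host so vMotion
# can proceed. 'host' is the vim.HostSystem from the previous sketch.
from pyVmomi import vim

for vm in host.vm:
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualCdrom) \
                and dev.connectable.connected:
            change = vim.vm.device.VirtualDeviceSpec()
            change.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
            change.device = dev
            # Replace the ISO backing with an (empty) client device backing
            change.device.backing = \
                vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo(
                    deviceName='', exclusive=False)
            change.device.connectable.connected = False
            change.device.connectable.startConnected = False
            vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```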
There are also times when your VMs are linked to local storage, such as a virtual storage appliance. To finish placing the node in maintenance mode, these types of VMs need to be migrated off any local storage devices using Storage vMotion. The virtual storage appliances themselves can then be shut down, but they will not migrate, as VSA VMs are attached to local resources. This is also where redundant virtual storage appliances are useful, as redundancy alleviates the need to migrate the non-VSA VMs first.
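The Storage vMotion itself can also be driven from pyVmomi with a relocate task. A rough sketch, again with placeholder names and reusing the earlier connection:

```python
# Sketch: Storage vMotion a VM off local storage onto a shared datastore.
# 'content' comes from the earlier connection; all names are placeholders.
from pyVmomi import vim

ds_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
target_ds = next(d for d in ds_view.view if d.name == 'shared-datastore-01')

vm_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in vm_view.view if v.name == 'important-vm')

# Relocate only the storage; host placement is left unchanged.
vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target_ds))
```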
Step 2: Upgrade Firmware as Needed
This is an optional step, but since we are rebooting nodes, it is a good time to install any firmware updates. For HP hardware, I tried using HP SUM remotely but decided to do an offline upgrade (which also uses HP SUM) instead, as it gives me a bit more control over what gets updated. You should go with what you are used to. However, for larger environments, some form of scripting is the way to go.
Step 3: Using VUM, Remediate the Upgrade
Since we staged the install ISO into VUM, we now simply need to remediate the upgrade. You need to select upgrades (rather than patches or extensions) for this to work properly. Then, you can select the image associated with the upgrade.
Step 4: Using VUM, Remediate Extensions
Now that the upgrade is done, it is time to update your extensions, which could include hardware-specific items (in my case, HP offline bundles), Nexus 1000V upgrades, and additional vendor extensions to vSphere, among others. Step 4 and Step 5 are interchangeable and depend on how you like to do upgrades.
Step 5: Using VUM, Remediate Patches
After the extensions are upgraded, remediate any patches. VUM will not do upgrades, patches, and extension updates all at the same time; you need to do them separately.
Step 6: Reboot the Node
I always reboot a node once more, just to be sure. After all those upgrades, perhaps something did not start properly. In some cases, the CIM server may not have started (as was the case after adding the HP SMX Providers).
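A quick way to check that the CIM server came back after the reboot is to issue a trivial WBEM query against the host; if sfcbd is down, the connection simply fails. A minimal pywbem sketch, with host and credentials as placeholders:

```python
# Sketch: verify the ESXi CIM server is answering after a reboot.
# Host name and credentials are placeholders. Recent pywbem releases
# verify server certificates by default; you may need to relax that
# for a host's self-signed certificate.
import pywbem

conn = pywbem.WBEMConnection('https://esx01.example.com:5989',
                             ('root', 'password'),
                             default_namespace='root/cimv2')
# Any cheap enumeration will do; we only care that the server answers.
try:
    names = conn.EnumerateInstanceNames('CIM_ComputerSystem')
    print('CIM server is up; %d instances found' % len(names))
except pywbem.Error as err:
    print('CIM server did not answer:', err)
```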
Step 7: Exit Maintenance Mode and Power On Any Local VMs
Now, we are ready to place everything back in the cluster for use by virtual machines. I generally power on any necessary VMs, such as VSAs.
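This step, too, can be scripted; a short sketch continuing from the earlier pyVmomi connection, with the VSA name as a placeholder:

```python
# Sketch: take the host out of maintenance mode, then power on any VMs
# that must run locally (for example, a VSA). 'host', 'vim', and 'time'
# come from the earlier sketches; the VM name is a placeholder.
task = host.ExitMaintenanceMode_Task(timeout=0)
while task.info.state not in (vim.TaskInfo.State.success,
                              vim.TaskInfo.State.error):
    time.sleep(5)

for vm in host.vm:
    if vm.name == 'vsa-esx01' and vm.runtime.powerState != 'poweredOn':
        vm.PowerOnVM_Task()
```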
Repeat these steps for each node in the cluster; a rolling upgrade is the way to go. For clusters using Auto Deploy, there are other steps to go through.
Now that my cluster is at 5.5, it is time to consider building a new Host Profile to ensure hosts stay compliant. This caused a few issues for me, as I kept getting a range error, but before I touch on host profiles, we need to upgrade some of our management tools. Hardware management is just as important as VMware vCenter upgrades. While we did this once before, that was before the nodes were upgraded to vSphere 5.5, and this time I want full integration for better overall management.
HP Insight Control Upgrade
I also had to ensure that all my management tools worked with vSphere 5.5, so the next task was HP Insight Control, since we not only upgraded vSphere but also the firmware on each blade. I upgraded HP Insight Control to the newest available version, v7.3. The steps I took were:
Step 0: Make a Snapshot of the HP Insight Control Virtual Machine
This is one of the more important steps, because it allows you to recover from major upgrade disasters faster than if you just use a backup.
Step 1: Upgrade HP Insight Control to v7.3
This is a simple upgrade that I’ve done many times. However, I first had to clean up enough space to allow the upgrade to happen: I had 8.5 GB of items that I could delete from my Microsoft Windows 2008 R2 system. That is quite a few upgrade packages and other items.
Step 2: Subscribe to WBEM Events
Within HP Insight Control, you can use Options -> Events -> Subscribe to WBEM Events to subscribe to these events. Doing so is required for HP Insight Control for vCenter as well as for the HP Matrix Operating Environment. However, this could fail for several reasons:
- The HP extensions for HP management are not installed: To fix, simply install the HP management VIBs. I install four, which you can get directly from HP.
- The node needs to be rebooted, as the HP SMX Provider for ESXi’s CIM was not started: To fix, simply exit maintenance mode and reboot the node. The reboot is required, as sometimes just restarting CIM services via the vSphere Client does not work.
- There are problems with the HP Extension VIBs, usually indicated by a java.lang.Error when trying to subscribe: To fix, remove all HP VIBs, reboot, and then reinstall the VIBs using your favorite method. To remove them, I used ESXCLI, but I used VUM to reinstall them after a reboot.
- There are partial WBEM subscriptions within the ESXi system: To fix, call HP Support and ask for the Python script DeleteInstanceWBEMSub.py. To use this script, you must have pywbem installed on the system; it is available as a package in the CentOS and RHEL 6 releases of Linux, and Windows versions of Python and pywbem are also available. HP Support may claim the script is not available for your version of HP SIM, HP Insight Control, or HP Matrix Operating Environment. However, it is, it works, and it fixes the problem. After you delete the WBEM subscriptions, you can re-identify the host within HP Insight Control and then resubscribe. The idea behind such a cleanup is sketched just after this list.
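I cannot redistribute HP’s script, but the idea behind it is straightforward: enumerate the indication subscriptions the host still holds and delete the stale ones. The pywbem sketch below illustrates that concept only; it is not HP’s script, the root/interop namespace is what ESXi’s sfcb normally uses for subscriptions, and the host and credentials are placeholders. For the supported procedure, use DeleteInstanceWBEMSub.py from HP.

```python
# Sketch of the idea behind a WBEM subscription cleanup: enumerate the
# host's CIM indication subscriptions and delete them. Illustrative only;
# use HP's DeleteInstanceWBEMSub.py for the supported procedure.
import pywbem

conn = pywbem.WBEMConnection('https://esx01.example.com:5989',
                             ('root', 'password'),
                             default_namespace='root/interop')
for path in conn.EnumerateInstanceNames('CIM_IndicationSubscription'):
    print('Deleting subscription:', path)
    conn.DeleteInstance(path)
```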
Unfortunately, there are times when a re-identify does not work. When that happens, after using the Python script, I remove the host (and its iLO) from HP Insight Control, rediscover it, and re-delete the partial subscriptions; then I am able to subscribe to WBEM events. There could also be a combination of errors, such as problems with the HP Extension VIBs followed by a need to clean up partial subscriptions.
This may seem a bit nitpicky, but if your management tools do not work properly, specifically those for your hardware, you end up with many alarms and issues that could easily be solved by looking at, for example, HP Insight Control for vCenter. There are also HP Insight Controls for KVM, Hyper-V, and what looks to be a full series of hypervisor and cloud infrastructures. In addition, if WBEM is not properly subscribed to in HP Insight Control, it could affect how data is seen inside VMware vCenter, as vCenter also uses WBEM to gather hardware health information.
There you have it: I am now updated to vSphere 5.5. Next, I will try to fix my Host Profiles issues.
Edward,
Hi. I’m starting to do testing for our 4.1 to 5.5 upgrade and am having problems with WBEM subscriptions. We have a VMware image for vSphere 5.5, and we then use ESXCLI to install the HP offline bundles. I have the Windows version of Python 2.7 with the DeleteInstanceWBEMSub.py script. Not knowing Python, I can’t seem to get the correct syntax to run the script to clear the bad WBEM subscription. Can you please provide that? HP won’t support me because we’re still on HP SIM 6.3, which is now unsupported.
Thanks!
We are using a c7000 enclosure with HP BL490 G7 blades as ESXi hosts. We also use the Nexus 1000V as our vDS, and everything works fine on version 5.1. I’ve updated my vCenter to version 5.5b and some blades to ESXi 5.5 with a custom-created image that includes the HP offline bundle and the right VEM module for the Nexus connection. Now I’m struggling with a strange issue: while trying to migrate my HP BL490 blade servers with ESXi 5.5 and the installed VEM, the NICs don’t migrate, and vCenter initiates a rollback to the vSS. I have installed the current firmware for the Virtual Connect modules and blades, too. Do you have any idea what the problem could be? Thanks in advance
Can you send me the script DeleteInstanceWBEMSub.py for ESXi 4.1 U2? I have a problem with subscribing to WBEM events.
Thanks!
Hello Ryad,
I suggest getting this directly from HP, as they are the official source for all things HP SIM related. One thing you should note, however, is that you need to be very clear about what you are asking for and why.
Best regards,
Edward L. Haletky