I recently received a pair of Gen8 blades for my enclosure, and it is time to change out my Gen6 and Gen7 blades for Gen8. As with every upgrade, a fair amount of planning must occur before starting. I consider it a hardware upgrade, and while it should be straightforward, one cannot simply swap the blades. So much for the easy way.
What follows are the steps necessary to get Gen8 (or Gen9) blades into an existing enclosure.
Blade 1:
- Read all readmes associated with your blades. You will find that there are minimum requirements for Virtual Connect and Onboard Administrator (OA) firmware.
- Ensure you have the proper offline support pack ISO (what used to be the firmware CD) from HP. This now requires a proper entitlement (either a care pack or a warranty). Other firmware for your enclosure is still free, but your blade BIOS is not.
- Upgrade your Onboard Administrator firmware. This is the first step; without it, the blade will not even be recognized. (A CLI sketch follows this list.)
- Now, look at the blade. If you are using mezzanine slots, such as for an FC HBA, ensure the FC HBA is in the slot previously used by the older blade.
- Ensure you have new disk trays. Gen6 and Gen8 use different disk tray formats. While the drives are the same, the disk trays are not. You can order the disk trays separately.
- Ensure your Onboard Administrator and Virtual Connect firmware and your management tool versions match the requirements of the blade. (Hence, my OA upgrade done first.)
- Remove the old blade.
- Take the drives out of the old blade, and swap their disk trays for the ones needed by the Gen8 blades. Four screws per disk, two disks, done.
- Insert the disks in the same order in which you pulled them out. (Left on Gen6 to top of Gen8; right on Gen6 to bottom of Gen8.)
- Insert the new blade into the appropriate slot.
- Let the iLO come on. Once registered, it will boot the blade. Mine did not boot due to a drive issue (covered later). However, this allowed me to shut down the blade and configure the iLO appropriately. I accessed the iLO through the OA, which did not require a password, as I use SSO. The only changes I made were to fix the IP and add any necessary users.
- Log in via the iLO, and start the remote console.
- Mount the HP Service Pack for ProLiant ISO image via the virtual media. I used version 2014.09.0.
- Boot the blade. Press F11 when appropriate, and do a one-time boot from CD-ROM.
- Upgrade the firmware using the HP Service Pack. I used the automatic update.
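For reference, the OA firmware upgrade from the earlier step can also be driven from the OA's SSH CLI instead of the web interface. A minimal sketch, assuming the new firmware image is reachable over HTTP (the URL and filename here are placeholders):

update image http://10.0.0.50/firmware/hpoa.bin

The OA downloads the image, flashes itself, and restarts. On a redundant pair, run SHOW OA STATUS afterwards to confirm both modules report the new version.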
Now the blade itself is prepared. However, it is not yet ready to boot. I had a red screen from the blade because the boot volume was not found. Oops. Well, not really. When going from an HP-210i to a 220i, the array disappeared, and my system would not boot properly. The fix is as follows:
- Boot the blade once more, but this time hit F5 when appropriate to enter the HP Array Configuration Utility (ACU) to manage your disks.
- Select the two drives, and create a new array (you do not lose data).
- Ensure you pick the same RAID level as you had before (in my case, mirrored). This should be automatically selected, but please double check everything.
- Voilà! The boot volume is now ready. Exit out of the ACU, and reboot the blade.
At this point, my blade booted properly into ESXi, but the local datastore was not available. The solution was to mount the datastore with a different identifier, just as you would mount a mirror volume or a storage snapshot volume. I then rebooted my ESXi host, which allowed me to rename the volume back to its original name. This is required because one of my HP StoreVirtual nodes sits on that disk, and I need it back to restore redundancy.
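For those who want the exact commands, the resignaturing can be done with esxcli from the ESXi shell. A minimal sketch, with a placeholder label standing in for your own datastore:

esxcli storage vmfs snapshot list
esxcli storage vmfs snapshot resignature -l MyLocalDatastore

The list command shows any VMFS volumes detected as snapshots or copies of another volume; the resignature command writes a new UUID to the volume, after which it mounts under a snap-xxxxxxxx-label name that you can rename back, as I did after the reboot.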
However, the VM could not be added back into vCenter, because the location of the VMFS volume was now incorrect, since we gave the datastore a new identifier (UUID). I tried many options but was unable to fix the problem by editing the .vmx file by hand. Instead, I just recreated the VM. Since it is part of my HP StoreVirtual, if something goes wrong, it is easier to just recreate one of the VSAs in use.
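If you would rather try re-registering the VM before resorting to recreating it, the ESXi shell route looks like this. This is a sketch only, with placeholder paths and VM IDs, and no guarantee it fares better than my .vmx edits did:

vim-cmd vmsvc/getallvms
vim-cmd vmsvc/unregister <vmid>
vim-cmd solo/registervm /vmfs/volumes/<resignatured-datastore>/<vm>/<vm>.vmx

The first command lists the registered VMs with their IDs and (now stale) datastore paths, the second removes the stale entry, and the third registers the VM again using the volume's new identity.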
The vSphere host is working fine, as is my StoreVirtual VSA after a forty-eight-hour restripe of the storage array. Now it is time to ensure my host profiles are up to date. I find I often have to resync my host profiles after any upgrade of vSphere and hardware. However, the first time I attempted to check compliance, I received the following error:
Failed to execute command to configure or query coredump partition.
The fix for this was already documented by a fellow vExpert at Virtual Potholes. However, being the security-conscious person that I am, I implemented the fix using the vCLI from the vSphere Management Assistant (vMA) with the following commands (the vCLI on Windows or Linux works as well):
esxcli -s vCenterIP -h vSphereHost system coredump partition get
esxcli -s vCenterIP -h vSphereHost system coredump partition set -u
esxcli -s vCenterIP -h vSphereHost system coredump partition set -e true -s
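In order, these show the currently configured coredump partition, unconfigure the stale one, and re-enable a coredump partition with the smart flag (-s), which lets the host pick the best accessible partition on its own.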
Now I was able to check the host profile, and I found it wanting. Anything related to the older partition was incorrect, and there were some other issues as well. It was time to fix what I could by applying the profile, which was now possible since the cause of the error was gone. However, due to the disk issues, it was easier to just recreate the profile once all the security items were fixed in the old profile (TSM enabled, etc.). So now I have a Gen8-specific host profile with a modified storage profile, per this post.
Blades 2 and above:
- Now, look at the blade. If you are using mezzanine slots, such as for an FC HBA, ensure the FC HBA is in the slot previously used by the older blade.
- Ensure you have new disk trays. Gen6 and Gen8 use different disk tray formats. While the drives are the same, the disk trays are not. You can order the disk trays separately.
- Remove the old blade.
- Take the drives out of the old blade, and swap their disk trays for the ones needed by the Gen8 blades. Four screws per disk, two disks, done.
- Insert the disks in the same order in which you pulled them out. (Right on Gen6 to top of Gen8; left on Gen6 to bottom of Gen8.) Yes, I switched the order from my first blade, just to see if that would fix the weird array issue I had (and it did, so order is important).
- Insert the new blade into the appropriate slot.
- Let the iLO come on. Once registered, it will boot the blade. I accessed the iLO through the OA, which did not require a password, as I use SSO. The only changes I made were to fix the IP and add any necessary users.
- Log in via the iLO, and start the remote console.
- Mount the HP Service Pack for ProLiant ISO image via the virtual media. I used version 2014.09.0.
- Boot the blade. Press F11 when appropriate, and do a one-time boot from CD-ROM.
- Upgrade the firmware using the HP Service Pack. I used the automatic update.
- Apply the host profile, fixing anything necessary; in my case, the Advanced Option ScratchConfig was incorrectly set, and VMware KB 1033696 had the fix (see the sketch after this list).
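As for the ScratchConfig fix from that last step, the KB boils down to pointing the host at a persistent scratch directory. A sketch from the ESXi shell, with placeholder datastore and host names (a reboot is required for it to take effect):

mkdir /vmfs/volumes/<local-datastore>/.locker-<hostname>
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/<local-datastore>/.locker-<hostname>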
Now that my systems are all Gen8s and my StoreVirtual VSA is once more synced at Network RAID-10, it is time to upgrade the hosts using a rolling-upgrade approach via VMware Update Manager.
The key to making this work is to ensure your StoreVirtual VSA is redundant before you switch out any other blades that contain StoreVirtual components. If you do that prematurely, you will have data loss. Second, find the correct order in which to place your disks so that you will not have to recreate the array. Granted, it took properly powering off my StoreVirtual VSA a few times to get everything correct. You cannot power these off from within vCenter but must use the HP StoreVirtual Centralized Management Console; otherwise, the Network RAID-10 device metadata on each VSA will get corrupted, forcing you to reinstall that VSA and rebuild the RAID, which can take up to forty-eight hours for 3 TB.