It has once more come time to upgrade my hosts with security patches, and I have run into one more, but easily worked around, issue: some VMs do not migrate or power off when the host enters maintenance mode. These VMs include the vShield per-host appliances and, surprisingly, vCops. So what is the solution to this ongoing problem? From VMware, there is not one yet. I hope the next release of vSphere fixes this, but it may require changes to vShield and vCops to make that happen. And it is not always just the vShield and vCops VMs: I also run Xangati to monitor my environment, and the Xangati Flow Summarizer (XFS) is also per host, and it too did not shut down correctly when the host entered maintenance mode.
I can understand this for vShield and per-host VMs like XFS, but not for vCops. The reason turns out to be that vCops has a mounted CD-ROM, and entering maintenance mode will not automatically migrate VMs with connected CD-ROMs. Oddly, this CD-ROM cannot even be removed.
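It is easy enough to spot these VMs ahead of time. Below is a minimal sketch, using the pyVmomi Python SDK, that lists every powered-on VM with a connected CD-ROM, i.e. the ones maintenance mode will refuse to evacuate automatically. The vCenter address and credentials are placeholders for whatever fits your environment.

```python
# Sketch: find powered-on VMs with a connected CD-ROM (pyVmomi).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only: skip cert checks
si = SmartConnect(host='vcenter.example.com',     # placeholder vCenter
                  user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
        continue
    for dev in vm.config.hardware.device:
        # A connected CD-ROM (ISO or host device) pins the VM to its host
        if isinstance(dev, vim.vm.device.VirtualCdrom) and \
           dev.connectable and dev.connectable.connected:
            print(f"{vm.name}: CD-ROM connected ({type(dev.backing).__name__})")

view.Destroy()
Disconnect(si)
```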
So how do I update my hosts?
- Scan each host using vCenter Update Manager (VUM)
- Stage all pending updates using VUM
- Enter Maintenance Mode on each host, which migrates off all the VMs that can be migrated
- Shut down any remaining VMs by hand (or, in the case of vCops, vMotion it by hand); see the sketch after this list
- Remediate all pending updates using VUM
- Exit Maintenance Mode
- Power on the per-host VMs
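The manual steps (entering maintenance mode, shutting down the stragglers, and powering them back on afterwards) can be scripted. The sketch below, again pyVmomi and using the same session as above, is one way to do it; the per-host appliance name patterns and host name are placeholders for my environment, and the VUM scan, stage, and remediate steps stay in the vSphere Client.

```python
# Sketch: wrap maintenance mode around the per-host appliances (pyVmomi).
from pyVim.task import WaitForTask
from pyVmomi import vim

PER_HOST_PATTERNS = ('vShield', 'Xangati', 'XFS')   # appliances pinned to a host

def find_host(content, name):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == name)
    view.Destroy()
    return host

def prep_host(host):
    """Shut down the per-host appliances, then enter maintenance mode."""
    stragglers = []
    for vm in host.vm:
        if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
            continue
        if any(p in vm.name for p in PER_HOST_PATTERNS):
            stragglers.append(vm)
            # Graceful guest shutdown; needs VMware Tools running in the VM.
            vm.ShutdownGuest()
    # timeout=0 means no timeout; DRS migrates everything else off and the
    # task completes once the shut-down appliances have powered off.
    WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))
    return stragglers

def restore_host(host, stragglers):
    """After remediation: leave maintenance mode, power the appliances back on."""
    WaitForTask(host.ExitMaintenanceMode_Task(timeout=0))
    for vm in stragglers:
        WaitForTask(vm.PowerOnVM_Task())

# Usage, wrapped around the VUM remediation:
#   host = find_host(content, 'esx01.example.com')
#   left_over = prep_host(host)       # remediate with VUM while in maintenance mode
#   restore_host(host, left_over)     # once remediation completes
```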
That seems to be a few extra steps. Perhaps there needs to be better integration between vSphere ESXi maintenance mode and per-host VMs: maybe a special group these VMs can be placed in, or maintenance mode could look at the per-VM DRS Automation Level to determine what to do. Or is this a host isolation issue, where we need to change how those VMs behave when the host is isolated? To me it is not host isolation, though, just entering maintenance mode. So should there be another action, or set of actions, to control this situation, since HA is not disabled until AFTER all VMs are migrated or shut down?
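If maintenance mode ever did consult the per-VM DRS settings, the data is already there in the cluster configuration. A small sketch (same pyVmomi session, cluster name is a placeholder) that dumps the per-VM DRS automation overrides:

```python
# Sketch: show the cluster default and per-VM DRS automation overrides.
from pyVmomi import vim

def dump_drs_overrides(content, cluster_name):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == cluster_name)
    view.Destroy()

    print("Cluster default:", cluster.configurationEx.drsConfig.defaultVmBehavior)
    for override in cluster.configurationEx.drsVmConfig:
        # behavior is fullyAutomated, partiallyAutomated, or manual;
        # enabled=False means DRS is disabled for that VM entirely.
        print(override.key.name, override.behavior, "enabled:", override.enabled)

# dump_drs_overrides(content, 'Cluster01')
```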