In my environment I use VCSA as it is generally far easier to manage. However, to do so, you still need one or more Microsoft Windows helper VMs. These VMs run packages like SRM, vSphere Update Manager, HPE OneView, and other tools that integrate with VMware vCenter. I recently wanted to do some automation with a few PowerCLI scripts, but my version of PowerCLI was out of date. To bring it up to date, I either needed to reconfigure my W2K8R2 helper VM or create a new helper VM based on W2K12R2. I chose the latter, as it was time to upgrade anyway.
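As a quick sanity check before (and after) rebuilding, the installed PowerCLI version can be read from PowerCLI itself. This is a minimal sketch, assuming a PowerCLI 5.x/6.x-era install where Get-PowerCLIVersion and the VMware snapins are present:

# Run in a PowerCLI prompt on the helper VM to see the installed version
Get-PowerCLIVersion

# List the registered VMware snapins and their versions
Get-PSSnapin -Registered | Where-Object { $_.Name -like 'VMware*' } |
    Select-Object Name, Version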
The seemingly straightforward installation took a few unexpected turns. The upgrade proceeded as follows:
- Install the vSphere .NET Client
- Install VMware PowerCLI
- Install HPE OneView
- Install vSphere Update Manager
- Install vSphere Authentication Proxy
- Install VMware SRM
- Configure the VMware Syslog Service
- Install Jenkins Client
Now granted, I could put everything on its own server, yet I prefer just ONE Windows-based helper VM. Steps 1-3 went seamlessly. The problems started with step 4.
vSphere Update Manager (VUM) Issues
Prerequisite: There is an undocumented prerequisite for the SQL Server Native Client v11 when running on W2K12.
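Before launching the VUM installer it is worth confirming that the client is actually present. A minimal check, assuming the client registers under the usual Windows uninstall keys (the DisplayName filter below is an assumption; adjust it for your install):

# Check the Windows uninstall registry keys for a SQL Server Native Client install
$paths = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
         'HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
Get-ItemProperty $paths -ErrorAction SilentlyContinue |
    Where-Object { $_.DisplayName -like '*Native Client*' } |
    Select-Object DisplayName, DisplayVersion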
Problem: When I installed VUM using my previous database, the vCenter clients (all that can talk to VUM) would suddenly disconnect from Update Manager and no work could get done.
Investigation: I looked at log files, configuration files, and several KB articles but could not find the culprit.
Solution: Re-install VUM using a brand new database. The issue seemed to be related to the machine name change, which was somehow embedded within the VUM database. Since the database really just holds patch data and a quick scan can get that data back, removing it was not an issue. The solution worked, and I was then able to add back in PernixData FVP's Host Extension, the HPE Custom ISO, and a pointer to the HPE VIBs depot (http://vibsdepot.hpe.com/hpe/apr2016/index.xml) for easy download and application of HPE host extensions and patches.
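Because the new database starts empty, the scan data has to be rebuilt. A sketch of kicking that off from PowerCLI, assuming the Update Manager cmdlets (the VMware.VumAutomation snapin) are loaded and a vCenter connection is already open; the cluster and baseline names are placeholders:

# Re-attach a patch baseline and rescan so the fresh VUM database repopulates
$cluster  = Get-Cluster 'Prod-Cluster'                    # placeholder cluster name
$baseline = Get-Baseline -Name 'Critical Host Patches*'   # placeholder baseline name

Attach-Baseline -Baseline $baseline -Entity $cluster
Scan-Inventory -Entity $cluster
Get-Compliance -Entity $cluster -Baseline $baseline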
vSphere Authentication Proxy Issues
Problem: The vSphere Authentication Proxy installation kept failing with a 29106 Unknown error.
Investigation: The KB articles I read pointed either to non-ASCII characters within the path, to not being a Domain Admin, or to not installing as an Administrator. They were NOT clear on what they meant by Administrator, as most documents suggest a system administrator.
Solution: What they really meant was a vCenter Administrator. Once I changed the role for the user I use for vSphere Authentication from Read-Only to Administrator, the installation worked just fine. Afterwards I could change the user back to the Read-Only role. This is a tool I do not use quite yet.
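The same role flip can be scripted so the account only holds Administrator for the length of the install. A sketch assuming PowerCLI is already connected to vCenter; the account name is a placeholder:

# Temporarily grant the service account Administrator at the vCenter root
$account = 'MYDOMAIN\svc-authproxy'           # placeholder account
$root    = Get-Folder -NoRecursion            # root folder of the connected vCenter
New-VIPermission -Entity $root -Principal $account -Role (Get-VIRole Admin) -Propagate $true

# ... run the vSphere Authentication Proxy installer here ...

# Drop the account back to Read-Only once the install succeeds
Get-VIPermission -Entity $root -Principal $account |
    Set-VIPermission -Role (Get-VIRole ReadOnly)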
VMware SRM
Problems: There are a myriad of issues with getting SRM enabled:
- Credentials to install must be the SSO Administrator user
- vSphere Replication must be updated
Investigation: The SRM version you use must match the vSphere Replication version (at least the release version, not the build number). A 6.0.0.1 version of SRM will not install if vSphere Replication is at 6.1.0, for example. However, there is a more serious issue: to perform the VRA upgrade there are two methods (log in directly to port 5480 on the appliance and perform a per-appliance upgrade, or use VUM). The VUM method failed with a “Discover virtual appliance” error.
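Before running either installer, the versions vCenter has registered can be compared up front from PowerCLI through the ExtensionManager view. A sketch, assuming the commonly used extension keys for SRM (com.vmware.vcDr) and vSphere Replication (com.vmware.vcHms); verify the keys in your own environment:

# List the registered SRM and vSphere Replication extensions and their versions
$si     = Get-View ServiceInstance
$extMgr = Get-View $si.Content.ExtensionManager
$extMgr.ExtensionList |
    Where-Object { $_.Key -match 'vcDr|vcHms' } |
    Select-Object Key, Version, @{N='Label';E={$_.Description.Label}}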
VMware Syslog Service
Problem: The VMware Syslog Service raised a VMware Common Logging Service Health Alarm for two distinctly different reasons (a quick way to list the triggered alarms from PowerCLI is sketched after this list):
- Syslog server localhost:514 unreachable
- Available storage for logs /storage/log reached critical threshold
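Before chasing either cause, it helps to confirm which alarms are actually active. A minimal PowerCLI sketch, assuming a connection to the VCSA's vCenter; it reads the triggered alarm states from the inventory root, which include alarms raised by descendant objects:

# List every triggered alarm visible from the inventory root, with its status
$root = Get-Folder -NoRecursion
$root.ExtensionData.TriggeredAlarmState | ForEach-Object {
    $alarm = Get-View $_.Alarm
    '{0} : {1}' -f $alarm.Info.Name, $_.OverallStatus
}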
Solution A: The solution to the first alarm is to restart the VMware Syslog Service using the vSphere Web Client. Navigate to Administration -> Deployment -> System Configuration -> Services -> VMware Syslog Service, then restart the service.
Solution B: The solution to the second alarm is either to add more disk space to the VCSA for /storage/log (VMDK #5), or to shorten the length of time SSO logs are kept. I chose the latter and followed KB #2143565. I went one step further: since I had no other issues that required the older log files, I removed all the old files using the following before exiting the VCSA shell:
cd /storage/log/vmware
rm -rf */*.gz */*.bz2 */*.[1-9] */*.[1-9]? */*.zip */*/*.zip */*/*/*.gz
rm -rf */*/*/localhost_access_log.2015-*
rm -rf */*/*/localhost_access_log.2016-0[0-5]-* */localhost_access_log.2015*
Actually, I did quite a bit more than the above, but those commands get rid of the worst culprits. I went through with ‘ls -R | more’, found all the instances of older log files, and removed any that were not from the day of removal.
Now my VCSA Helper VM is updated.