I recently rebooted my vCenter server for maintenance reasons. However, when I went to access vCenter, I could not: the vCenter service would not start. The vpxd.log file contained:
Datastore with invalid folder
Well, that was a first for me. But what did it mean? Was it the non-existent iSCSI Server it had issues accessing? Or was it the Iomega StorCenter IX2 acting as an NFS share? Since I did not have a working vCenter Server, I went to each host and removed those datastores, as they were not actually in use.
This solution did not work, so it was time to look elsewhere. I tried to increase the verbosity of the log files for vCenter by adding the following to the <log></log> section of the vpxd.cfg file:
<trace><db><verbose>true</verbose></db></trace>
I also set the log level to 'trivia', but that only increased the log file output without yielding any really useful data. My next step was to reinstall vCenter while keeping the same database. This also had no effect. So in essence it was neither a vCenter issue nor an issue easily cleaned up by removing datastores from the hosts. So what was it?
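For reference, here is roughly what the relevant section of vpxd.cfg looked like with both changes in place. This is a sketch: the surrounding elements are illustrative, and the exact layout of the file varies by vCenter version, so merge these entries into your existing configuration rather than replacing it.

```xml
<config>
  <log>
    <!-- 'trivia' is the most verbose vpxd logging level -->
    <level>trivia</level>
    <!-- database tracing, as described above -->
    <trace>
      <db>
        <verbose>true</verbose>
      </db>
    </trace>
  </log>
</config>
```

Restart the vCenter Server service after editing vpxd.cfg for the changes to take effect.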
I then started investigating the vCenter database, not wanting to lose my existing performance data by recreating it (which would also have fixed the issue). My theory was that the IX2 was the cause of the issue, as it had non-vSphere-created files on it. So I ran the following SQL command against my vCenter database:
select * from dbo.VPX_DATASTORE;
This allowed me to find the IDs associated with the IX2 and the iSCSI Server; their ID numbers were 661 and 435. The next commands to run were the following, once for each of the IDs. Note that this is NEVER recommended and should only be done after first contacting your VMware support specialist.
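If the full table dump is hard to read, the listing can be narrowed by name. This is only a sketch: I am assuming a NAME column and guessing at the search strings, so adjust both to match what the select above actually returns.

```sql
-- Narrow the datastore listing to the two suspect entries.
-- The NAME column and the search strings are assumptions; verify
-- them against the output of: select * from dbo.VPX_DATASTORE;
select ID, NAME
from dbo.VPX_DATASTORE
where NAME like '%ix2%'
   or NAME like '%iscsi%';
```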
delete from dbo.VPX_DATASTORE where ID=435;
delete from dbo.VPX_DS_ASSIGNMENT where DS_ID=435;
delete from dbo.VPX_DATASTORE_INFO where ID=435;
delete from dbo.VPXV_DATASTORE where ID=435;
delete from dbo.VPXV_ENTITY where ID=435;
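Because hand-editing the vCenter database is so dangerous, it is worth wrapping the deletes in a transaction so they can be undone if the row counts look wrong. This is a sketch of that pattern, and it assumes you have taken a full database backup first:

```sql
-- Back up the database before doing any of this.
BEGIN TRANSACTION;

delete from dbo.VPX_DATASTORE      where ID=435;
delete from dbo.VPX_DS_ASSIGNMENT  where DS_ID=435;
delete from dbo.VPX_DATASTORE_INFO where ID=435;
delete from dbo.VPXV_DATASTORE     where ID=435;
delete from dbo.VPXV_ENTITY        where ID=435;

-- Re-run the select statements to confirm only the stale rows
-- are gone, then keep or undo the changes:
COMMIT TRANSACTION;    -- or: ROLLBACK TRANSACTION;
```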
Now I was able to boot vCenter properly. Finding that the datastore ID was also in VPXV_ENTITY took the most time; I had to review quite a few tables until I hit on this one. I kept getting an error about /vpx/group/#s24 not being able to access the IX2 NFS server, and group #24 was within VPXV_ENTITY.
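To save others the table-by-table hunt, the database's own metadata can list every table or view that carries a likely datastore-ID column. This uses the standard SQL Server INFORMATION_SCHEMA views; the column names I filter on (ID and DS_ID) are just the ones I encountered, so widen the list if your schema differs:

```sql
-- Find every table/view in the vCenter database with a column that
-- might reference a datastore ID, so no stale reference is missed.
select TABLE_NAME, COLUMN_NAME
from INFORMATION_SCHEMA.COLUMNS
where COLUMN_NAME in ('ID', 'DS_ID')
order by TABLE_NAME, COLUMN_NAME;
```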
After this fix, I had to add the IX2 back in as an NFS share. So I think the real culprit was not the IX2 but the iSCSI Server that was no longer available. Why it was unavailable was not the real issue; but if a datastore cannot be reached, why would vCenter fail to start instead of pulling that information from the ESX/ESXi host?
As an aside, a later issue forced me to reinstall my vCenter database as well, which would also have fixed this problem.