I’ve noticed something odd with a number of clusters that I administer that have NFS datastores. One of the datastores will be duplicated, and end with (1). Typically, only one host has this duplicated datastore. At first I thought it happened because the datastore was mounted twice, once by IP address, and again by hostname/FQDN.
So I painstakingly wrote this PowerCLI script that looks for all datastores with a parenthesis in their names. 😉
[powershell]
# Find any datastore whose name contains a parenthesis, e.g. the "(1)" duplicates
Get-Datastore * | Where-Object { $_.Name -like "*(*" }
[/powershell]
That returned this list to me.
[powershell]
Name                     FreeSpaceMB   CapacityMB
----                     -----------   ----------
ESXi_CLUST_01_OS1 (1)         150422       163840
ESXi_CLUST_03_OS3 (1)          81806        81920
ESXi_CLUST_12_OS1 (1)         139271       163840
ESXi_CLUST_15_OS2 (1)         157634       163840
ESXi_CLUST_14_OS2 (1)         142356       163840
[/powershell]
From there, I went to the Datastores view in the vSphere Client (CTRL+SHIFT+D) and found the offending datastore. On its Hosts tab, I saw that only one host was mapped to it. In addition, the duplicate datastore showed a different Location on its Summary tab: the duplicate listed the NFS server's FQDN, while the other NFS datastores in the cluster all show the IP address in the Location field.
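If you'd rather check this from PowerCLI than click through the client, here's a rough sketch (assuming the usual RemoteHost and RemotePath properties on NFS datastore objects) that lists each NFS datastore, the server it was mounted from, and the hosts that see it:
[powershell]
# Rough sketch: list each NFS datastore, the server it was mounted from,
# and the hosts that see it, to spot an IP vs. FQDN mismatch.
Get-Datastore | Where-Object { $_.Type -eq "NFS" } |
    Select-Object Name, RemoteHost, RemotePath,
        @{Name="Hosts"; Expression={ (Get-VMHost -Datastore $_ | ForEach-Object { $_.Name }) -join ", " }}
[/powershell]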
The duplicated datastore is particularly interesting for the following reasons:
- All of the affected clusters were configured by a PowerCLI script after initial install and connection to vCenter.
- I know for a fact that all of the NFS datastores were specified by FQDN during scripted setup (because I wrote the script, and did a large number of the deployments).
- By using the host->UUID->datastore script that I wrote, I've confirmed that the original datastore and the duplicate have the same UUID (see the sketch after this list). This is significant, in my mind, because VMware KB1005930 specifically states that the NFS datastore hash, and the resulting UUID, are based on the NFS information you've provided, which means I'd expect a datastore mounted by IP and one mounted by FQDN to have different hashes.
- By enabling Remote Tech Support mode, I was able to SSH into the host. Running ls -al /vmfs/volumes showed only the NFS datastores that were supposed to be there; there was no (1) datastore. Running esxcfg-nas -l didn't show the (1) datastore, either.
- Attempting to Unmount the datastore from vCenter throws an error: "The object has already been deleted or has not been completely created." This error is thrown whether the host is in Maintenance Mode or not.
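Here's a minimal sketch of the UUID comparison referenced in the list above. It pulls each datastore's UUID out of its ExtensionData.Info.Url (which looks like ds:///vmfs/volumes/&lt;uuid&gt;/) and groups by UUID so duplicates stand out:
[powershell]
# Minimal sketch: extract each datastore's UUID from its URL and
# group by UUID so any names sharing a UUID (the duplicates) stand out.
Get-Datastore |
    Select-Object Name, @{Name="UUID"; Expression={ $_.ExtensionData.Info.Url.TrimEnd("/").Split("/")[-1] }} |
    Group-Object UUID |
    Where-Object { $_.Count -gt 1 } |
    ForEach-Object { $_.Group }
[/powershell]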
By now, I think you'd agree that this error/issue is perplexing and frustrating. Fortunately, there's an easy (but potentially time-consuming) way to fix it; a rough PowerCLI sketch of these steps follows the list.
- Go back to the Datastores view and find the duplicate datastore. Click the Virtual Machines tab on the right side and perform an svMotion/cold migrate to another datastore that all the hosts in the cluster can see.
- After all VMs on that datastore have been moved, click the Hosts tab, double-click the host, go to its Virtual Machines tab, and vMotion all of the VMs off the host with the duplicate datastore.
- Put the host into Maintenance Mode and reboot. After rebooting, the (1) datastore should be gone.
- If you’re not running DRS, be sure to re-balance the VMs as necessary across the cluster.
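For reference, here's a rough PowerCLI sketch of those steps. The datastore and host names are placeholders (not from my environment), and the Move-VM calls assume you're licensed for vMotion/Storage vMotion:
[powershell]
# Rough sketch of the steps above; "ESXi_CLUST_01_OS1 (1)" and
# "ESXi_CLUST_01_OS2" are placeholder names.
$dupDs    = Get-Datastore -Name "ESXi_CLUST_01_OS1 (1)"
$targetDs = Get-Datastore -Name "ESXi_CLUST_01_OS2"
$vmhost   = Get-VMHost -Datastore $dupDs | Select-Object -First 1

# 1. svMotion/cold migrate every VM off the duplicate datastore
Get-VM -Datastore $dupDs | Move-VM -Datastore $targetDs

# 2. vMotion the remaining VMs off the affected host to another host in the cluster
$otherHost = Get-Cluster -VMHost $vmhost | Get-VMHost |
    Where-Object { $_.Name -ne $vmhost.Name } | Select-Object -First 1
Get-VM -Location $vmhost | Move-VM -Destination $otherHost

# 3. Maintenance mode and reboot; the (1) datastore should be gone afterwards
Set-VMHost -VMHost $vmhost -State Maintenance
Restart-VMHost -VMHost $vmhost -Confirm:$false
[/powershell]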
Muhammad Yousuff says
Hi Damian,
Don't you think restarting and rebooting are paths not so easily taken by an engineer? What is the point of rebooting without even understanding the root cause?
I really appreciate the notes by Kurt.
Andrew says
I have a script with which I'm trying to automate the unmount/remount of NFS exports to hosts on a managed appliance. I can unmount fine, but when I remount with the same name as before and a different IP, I also get this (1) issue. Adding a /sbin/services.sh restart does NOT solve the issue for me.
Network services restart, but the (1) remains.
Chad says
I rebooted the ESXi host and it did not remove the (1) from the datastore name in VC.
Damian Karlson says
Chad – is the datastore empty? All swap files and such cleared off?
Julian Wood says
Damian, these kind of issues can really plague your virtual environment.
Often the issue is a difference between how the ESX hosts see the storage and how vCenter sees it, with vCenter getting confused about the UUIDs of the NFS mounts.
The ESX hosts look OK but vCenter sees the NFS as (1) datastores.
The problem can also cause catastrophic issues for vCenter. I have seen the service crash when trying to power on a VM on a (1) datastore and sometimes you cannot SvMotion a VM elsewhere.
Sometimes the simplest solution is to just restart the management agents on the ESX hosts, which refreshes vCenter's view of the host's storage, and the (1) datastore disappears.
Damian Karlson says
Thanks for commenting, Julian. I don’t believe I tried restarting the management agents – probably because I’d first assumed that it was an actual case of mismatched UUIDs. I’ll be sure to try that in the future, in case it comes up again.
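If it does, a rough PowerCLI sketch like the one below should bounce just the vpxa (vCenter agent) service on a single host; services.sh from the console restarts the full management stack, including hostd:
[powershell]
# Sketch: restart only the vpxa (vCenter agent) service on one host.
$vmhost = Get-VMHost -Name "esx01.example.local"   # placeholder host name
Get-VMHostService -VMHost $vmhost |
    Where-Object { $_.Key -eq "vpxa" } |
    Restart-VMHostService -Confirm:$false
[/powershell]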
Tom says
I restarted the management agents, and this solved the described problem for me.
/sbin/services.sh restart
Kurt says
hi,
Does restarting the management services interrupt any of the virtual servers on the ESX host? I mean, can these services be restarted without the normal users noticing anything?
Thanks,
Kurt
Damian Karlson says
Kurt – yes, you can bounce the services with no impact to the VMs.
Kurt says
Hi,
I found out that I had mapped the NFS datastore to a different IP address (an NFS alias) than on the other ESX hosts. Removing it and adding it back with the correct IP address immediately solved it for me.
Best regards,
Kurt
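For anyone scripting Kurt's fix, a minimal PowerCLI sketch might look like this; the host name, datastore name, export path, and IP below are all placeholders:
[powershell]
# Sketch: remove a mis-mounted NFS datastore from one host and re-add it
# with the correct address. All names, paths, and IPs are placeholders.
$vmhost = Get-VMHost -Name "esx01.example.local"
Remove-Datastore -Datastore (Get-Datastore -Name "NFS_DS01" -VMHost $vmhost) -VMHost $vmhost -Confirm:$false
New-Datastore -Nfs -VMHost $vmhost -Name "NFS_DS01" -NfsHost "192.168.1.10" -Path "/vol/nfs_ds01"
[/powershell]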