zfs destroy: dataset is busy #4715
Comments
Right, that's a good guess. Under Linux the NFS kernel server holds a reference on the filesystem which keeps it busy. That reference only gets dropped after exportfs is run to unshare the filesystem. Unfortunately, since these references are taken in the kernel and aren't associated with any user process, they don't show up in lsof or fuser.
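In practice that means unsharing before destroying; a minimal sketch of that order of operations (the dataset name zpool1/user/example is just a placeholder):

# unshare first so the kernel NFS server drops its reference
zfs unshare zpool1/user/example
# or, for exports added outside of sharenfs:
#   exportfs -u 10.0.0.0/8:/zpool1/user/example
# confirm it is no longer exported
exportfs -v | grep /zpool1/user/example
# the destroy should then no longer report "dataset is busy"
zfs destroy zpool1/user/example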
It'd be nice to prove it was a leak between zfs and nfsd, but that would be pretty hard to show, let alone track down.
keywords matched in #3186
This specifically will be resolved shortly by #7207. As for a leaked NFS handle we did resolve some related issues so this might have been fixed. @lundman since this is an oldie any objections to simply closing it out for now? |
Not to bump a closed ticket, but I ran into this oddly enough yesterday. It was the same scenario with LVM detecting a volume on a zvol, thus locking it from zfs. Perhaps as a side issue, not only was the zvol locked from removal, but zfs completely hung the pool. No userland utilities listed any open files on it.
ZFS: Loaded module v0.6.5.5-1
Linux 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
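For anyone else hitting the LVM-on-zvol variant, releasing device-mapper's hold usually looks roughly like this (pool, zvol and VG names are made up; in my case the pool hung regardless):

# see whether LVM has picked up the zvol (zvols show up as /dev/zd*)
pvs
dmsetup ls
# deactivate the volume group living on the zvol so device-mapper drops its hold
vgchange -an guestvg
# then retry
zfs destroy tank/vm-disk
# longer term, a global_filter in /etc/lvm/lvm.conf (e.g. r|/dev/zd.*|) keeps LVM from scanning zvols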
There are a couple of issues with similar problems, but I think they are old enough that we should already have those fixes. I suspect this has something to do with how the NFS server is set up.
There are many datasets on this system, one such example would be
and each filesystem has sharenfs set
sharenfs rw=@10.0.0.0/8,no_root_squash,no_all_squash inherited from zpool1/user
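For context, the property is set once on the parent and inherited; roughly (the child dataset name is just an example):

# set sharenfs on the parent; all children inherit it
zfs set sharenfs='rw=@10.0.0.0/8,no_root_squash,no_all_squash' zpool1/user
# the SOURCE column confirms where a child picked it up from
zfs get sharenfs zpool1/user/someuser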
The clients, however, mount this using mirror-mounts: they mount the whole /zpool1/user and it crosses filesystem boundaries automatically. Surprisingly, this all works.
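On the client side that amounts to a single NFSv4 mount of the top-level path (server name, options and mount point are illustrative); descending into a child dataset's directory triggers a mirror mount automatically:

mount -t nfs -o vers=4 nfsserver:/zpool1/user /zpool1/user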
Now then, when it comes to removing a dataset and unprovisioning it, we run into trouble.
So umount -l just confuses zfs a bit, that's ok. It is still busy somehow.
I have checked with lsof and fuser that no process has it open.
I have checked with exportfs -v that it is not exported (any more).
I have checked /proc/*/mounts - it is not listed.
I have checked zpool history -i and looked for any zfs holds, but there are no holds and no aborted sends (that I can see).
This happens for all datasets that have been provisioned, used, and then cleaned up, so I am concerned there is a reference held by nfs somehow. Is there any way to show what is holding the datasets busy?
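For reference, the exact commands boil down to something like this (zpool1/user/example stands in for the real dataset name):

lsof /zpool1/user/example                    # no output
fuser -m /zpool1/user/example                # no output
exportfs -v | grep /zpool1/user/example      # not exported any more
grep /zpool1/user/example /proc/*/mounts     # not listed anywhere
zfs list -H -r -t snapshot -o name zpool1/user/example | xargs -r -n1 zfs holds   # no holds
zpool history -i zpool1 | tail -n 50         # no aborted sends that I can see
zfs destroy zpool1/user/example              # cannot destroy: dataset is busy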
Out of 274 remove requests this month, 6 have ended up "busy".