zfs destroy fails to destroy snapshots #1064

Closed
jvsalo opened this issue Oct 21, 2012 · 5 comments

jvsalo commented Oct 21, 2012

At least on my 3.5.x kernels, 'zfs destroy' fails to destroy a snapshot the first time it is executed, if that snapshot has been visited recently. On my 3.2.x kernels this does not happen:

  • Reproduces on my laptop: ZFS rootfs, Debian testing, kernel 3.5.3 (x86_64), rc11 built by hand
  • Reproduces on server 1: ZFS for data only, Debian testing, kernel 3.5.4 (x86_64), rc11 Ubuntu PPA
  • Does not reproduce on server 2: ZFS for data only, Debian squeeze, kernel 3.2.2 (x86_64), rc11 Ubuntu PPA
  • Does not reproduce on server 3: ZFS for data only, Debian testing, kernel 3.2.0-3-amd64 (x86_64), rc11 Ubuntu PPA

I'm using legacy mount points. Please find a reproducer below:

#!/bin/sh
zfs create rpool/testfs
mount -t zfs rpool/testfs /mnt
zfs snapshot rpool/testfs@testsnap
cd /mnt/.zfs/snapshot/testsnap/
cd -
zfs destroy -v rpool/testfs@testsnap
zfs list -t all -r rpool/testfs
zfs destroy -v rpool/testfs@testsnap
zfs list -t all -r rpool/testfs
umount /mnt
zfs destroy -v -r rpool/testfs

Result:

root@thinkpad:/tmp# sh reprod.sh 
/tmp
will destroy rpool/testfs@testsnap
will reclaim 0
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool/testfs             30K  64.2G    30K  legacy
rpool/testfs@testsnap      0      -    30K  -
will destroy rpool/testfs@testsnap
will reclaim 0
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool/testfs             30K  64.2G    30K  legacy
rpool/testfs@testsnap      0      -    30K  -
umount: /mnt: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
will destroy rpool/testfs@testsnap
cannot destroy 'rpool/testfs@testsnap': dataset is busy

/proc/mounts shows that the snapshot mount is still there:

rpool/testfs@testsnap /mnt/.zfs/snapshot/testsnap zfs ro,relatime,xattr 0 0

To destroy the snapshot now, it must be manually unmounted, after which 'zfs destroy' works as expected.
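
For reference, a minimal sketch of that manual workaround, using the same paths as the reproducer above:

# unmount the snapshot automount left behind by the failed destroy
umount /mnt/.zfs/snapshot/testsnap
# the destroy now succeeds
zfs destroy -v rpool/testfs@testsnap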

However, if some time passes between the first cd (into the snapshot dir) and the first destroy, the first destroy does remove the mount (not visible in the log below, but it does) yet still fails to finish the job; the second destroy then actually destroys the snapshot:

#!/bin/sh
zfs create rpool/testfs
mount -t zfs rpool/testfs /mnt
zfs snapshot rpool/testfs@testsnap
cd /mnt/.zfs/snapshot/testsnap/
cd -
sleep 1
zfs destroy -v rpool/testfs@testsnap
zfs list -t all -r rpool/testfs
zfs destroy -v rpool/testfs@testsnap
zfs list -t all -r rpool/testfs
umount /mnt
zfs destroy -v -r rpool/testfs

Result:

root@thinkpad:/tmp# sh reprod.sh 
/tmp
will destroy rpool/testfs@testsnap
will reclaim 0
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool/testfs             30K  64.2G    30K  legacy
rpool/testfs@testsnap      0      -    30K  -
will destroy rpool/testfs@testsnap
will reclaim 0
NAME           USED  AVAIL  REFER  MOUNTPOINT
rpool/testfs    30K  64.2G    30K  legacy
will destroy rpool/testfs

I also checked with 'lsof' that my system has no open file handles to the snapshot dir after I cd out of it in the above tests.
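
A sketch of that kind of check, with the paths from the reproducer above (fuser(1) would confirm the same thing):

# look for processes with files open under the snapshot directory
lsof +D /mnt/.zfs/snapshot/testsnap
# or list users of the whole mount point
fuser -vm /mnt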

This bug might be related to #1007; however, the author of #1007 says they are not visiting the snapshot directory and doesn't seem to have an easy reproducer.

ghost commented Oct 22, 2012

Interesting: uname gives this

uname -a
Linux multi-os-host 3.0.0-26-generic #43-Ubuntu SMP Tue Sep 25 17:19:22 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

and the reproducer gives this:

+ zfs create -o mountpoint=legacy -o snapdir=visible RAID/testfs
+ mount -t zfs RAID/testfs /mnt/tmp/mntx
+ zfs snapshot RAID/testfs@testsnap
+ cd /mnt/tmp/mntx/.zfs/snapshot/testsnap/
+ cd -
/mnt/tmp
+ zfs destroy -v RAID/testfs@testsnap
will destroy RAID/testfs@testsnap
will reclaim 0
+ zfs list -t all -r RAID/testfs
NAME                   USED  AVAIL  REFER  MOUNTPOINT
RAID/testfs            181K   133G   181K  legacy
RAID/testfs@testsnap      0      -   181K  -
+ zfs destroy -v RAID/testfs@testsnap
will destroy RAID/testfs@testsnap
will reclaim 0
+ zfs list -t all -r RAID/testfs
NAME                   USED  AVAIL  REFER  MOUNTPOINT
RAID/testfs            181K   133G   181K  legacy
RAID/testfs@testsnap      0      -   181K  -
+ umount /mnt/tmp/mntx
umount: /mnt/tmp/mntx: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
+ zfs destroy -v -r RAID/testfs
will destroy RAID/testfs@testsnap
cannot destroy 'RAID/testfs@testsnap': dataset is busy

maxximino (Contributor) commented

Duplicate of issue #1210?

behlendorf (Contributor) commented

It sure looks like it. Between @nedbass's fix and the async destroy changes we should now reliably get an error. Can someone please verify this is true for master?

maxximino (Contributor) commented

With 7973e46:

+ zfs create mypool/testfs -o mountpoint=legacy
+ mount -t zfs mypool/testfs /mnt
+ zfs snapshot mypool/testfs@testsnap
+ cd /mnt/.zfs/snapshot/testsnap/
+ cd -
/tmp
+ zfs destroy -v mypool/testfs@testsnap
will destroy mypool/testfs@testsnap
will reclaim 0
+ zfs list -t all -r mypool/testfs
NAME            USED  AVAIL  REFER  MOUNTPOINT
mypool/testfs    30K   614G    30K  legacy
+ zfs destroy -v mypool/testfs@testsnap
could not find any snapshots to destroy; check snapshot names.
+ zfs list -t all -r mypool/testfs
NAME            USED  AVAIL  REFER  MOUNTPOINT
mypool/testfs    30K   614G    30K  legacy
+ umount /mnt
+ zfs destroy -v -r mypool/testfs
will destroy mypool/testfs

Everything looks correct now.

behlendorf (Contributor) commented

@maxximino Thanks for verifying this; indeed, everything looks good. I'm closing this issue as a duplicate of #1210.
