Very slow zfs destroy #11933
You seem to have deleted the entire form of requested information for bug reports; please don't do that, and provide the requested information so people can try to do something useful with your report.
What are the things you are
@ahrens - https://docs.docker.com/storage/storagedriver/zfs-driver/
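For context, the zfs storage driver maps each image layer to a dataset and each container's writable layer to a clone of a snapshot of its parent layer, so containers accumulate clones quickly. A rough sketch of the equivalent manual commands (dataset names here are made up; the real driver uses generated IDs under whatever pool it is pointed at):

```sh
# hypothetical illustration of what the docker zfs driver does per layer
zfs create tank/docker/imagelayer                  # image layer dataset
zfs snapshot tank/docker/imagelayer@base           # read-only snapshot of that layer
zfs clone tank/docker/imagelayer@base tank/docker/container-rw   # container's writable layer
# removing the container later means destroying that clone
zfs destroy tank/docker/container-rw
```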
Interesting, so it seems like the
@ahrens no, it is making progress, but very slowly. I think that is because the system has accumulated many clones and snapshots:
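If anyone else wants to gauge how many snapshots and clones have piled up, something like this should do it (pool name is a placeholder):

```sh
# count snapshots across the whole pool
zfs list -H -t snapshot -r tank | wc -l
# count datasets that are clones (i.e. have a non-empty origin property)
zfs list -H -o name,origin -r tank | awk '$2 != "-"' | wc -l
```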
Huh, so they're a bunch of clones. Do you have feature@livelist enabled on the pool? (edit: I would assume that, like most things with ZFS, it won't retroactively make things better for existing clones when you turn it on, though, just improve the behavior for clones made after you do.)
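For reference, checking and enabling it would look roughly like this (pool name is a placeholder; as noted, enabling it should only help for clones created afterwards):

```sh
zpool get feature@livelist tank          # shows disabled / enabled / active
zpool set feature@livelist=enabled tank  # enabling a pool feature cannot be undone
```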
But that was enabled recently. |
The following commands did not have any effect:
The z_livelist_dest thread is blocked (dmesg):
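In case it helps others debug this, one way to grab the kernel stack of the blocked worker (needs root; assumes /proc/&lt;pid&gt;/stack is available on your kernel):

```sh
pid=$(pgrep z_livelist_dest | head -n1)   # livelist destroy taskq thread
cat /proc/"$pid"/stack                    # where it is stuck in the kernel
echo w > /proc/sysrq-trigger              # or dump all blocked tasks to dmesg
```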
Maybe I have a damaged snapshot/clone; I can't destroy specific clones, the destroy operation just hangs:
I ran
and
Update: I changed the storage (moved to all-flash storage connected over 16G FC) - it is just as slow.
I suppose the issue is that every time
Curious if you are using dedup?
No.
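For anyone else following along, a quick way to confirm dedup isn't involved (pool name is a placeholder):

```sh
zpool get dedupratio tank            # 1.00x means no deduplicated data
zfs get -r -t filesystem dedup tank  # look for any dataset with dedup=on
```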
Maybe related: #11480
I'm seeing the exact same behaviour. It's convinced me that ZFS is completely unusable when
I believe the "livelist" feature in 2.0.X should improve your performance markedly, though 20.04 is shipping 0.8.X, so you'd have to go out of your way to try it. (Assuming it's a lot of clones that you're destroying, which seems safe, from my understanding of what containerd does.) (It also, I assume, wouldn't help for existing clones, only ones made after enabling the feature, because of how it works, but that's just an assumption on my part, I haven't looked to see if it has any idea of how to backfill that information if activated later.)
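For anyone unsure which release they are actually running, `zfs version` (available since 0.8) prints both the userland and kernel module versions, so you can tell whether you are still on the distro's 0.8.x packages:

```sh
zfs version
# prints something like:
#   zfs-0.8.3-1ubuntu12
#   zfs-kmod-0.8.3-1ubuntu12
```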
#14603 might help here.
This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.
. |
This triggers for me if I recursively delete a dataset. One of the snapshots is causing a panic, see #15030 (comment)
After a shutdown with multiple containers (~1000), Docker runs its typical operations on start:
This is forcing me to abandon ZFS as the storage backend for Docker...
System information
zfs destroy just waits for something...
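In case it helps with diagnosis, a couple of ways to see whether the destroy is actually progressing rather than truly hung (pool name is a placeholder):

```sh
zpool get freeing tank             # space still queued for asynchronous destroy; should shrink over time
zpool events -f                    # follow pool events while the destroy runs
tail /proc/spl/kstat/zfs/dbgmsg    # recent internal debug messages (may need zfs_dbgmsg_enable=1)
```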