zfs mount fails silently on encrypted dataset #12418
additional data:

more digging:
This happens to me too, with kernel 5.13.13-arch1-1, zfs-2.1.0-1, and zfs-kmod-2.1.0-1. What's weird is that with the dkms version, the encrypted dataset WILL mount on creation with `canmount=noauto` (via `zfs mount pool/dataset`), but won't do so after a reboot. The prebuilt module version, however, won't mount the dataset at all. EDIT: also present in zfs-2.1.99-448_gb9ec4a15e / zfs-kmod-2.1.99-1 (commit r7099).
Are you by any chance using ZFS root like me? I've just installed a test system on a flash drive with ext4 and there is no issue. If I boot from the ZFS root (which is unencrypted, by the way), the issue is there, so there could be a connection. Also, I don't have nested datasets with different encryption keys, just one encrypted dataset with children sharing the same key.
Similar issue, to the point where I believe it is the same one:

Result: not a ZFS root; the pool is mounted at
Ah, good, it's not just me; I was quite confused by this. I'm also running ZFS on root. Versions: zfs-2.1.1-1, zfs-kmod-2.1.1-1 with kernel 5.14.18-1-MANJARO. I saw the same thing as @mtippmann: setting the mountpoint works until reboot. I noticed this behaviour while experimenting with shavee (https://ashu.io/projects/shavee/), a nice little Rust tool that lets you use a YubiKey to generate an encryption key.
On the machine I tried mounting encrypted datasets anywhere after rebooting, checking
I'm having the same issue on kernel 5.15.25-1-lts. I don't have / on the encrypted dataset; it's supposed to be mounted in a directory in my home folder.
I ended up using this script as a workaround, after creating two empty folders:

```bash
#!/bin/bash
set -x
sudo zfs set canmount=on pool-raidz/bloatmode
sudo zfs set mountpoint=/home/bloatmode/zfs1 pool-raidz/bloatmode
sudo zfs mount -l pool-raidz/bloatmode
sleep 0.1
# If the dataset actually got mounted, the test file is visible at the first mountpoint.
if [ -f "/home/bloatmode/zfs1/testfile.txt" ]
then
    ln -snf /home/bloatmode/zfs1 /home/bloatmode/zfs
else
    # Otherwise switch to the second mountpoint and try again.
    sudo zfs set mountpoint=/home/bloatmode/zfs2 pool-raidz/bloatmode
    sudo zfs mount -l pool-raidz/bloatmode
    ln -snf /home/bloatmode/zfs2 /home/bloatmode/zfs
fi
sudo zfs set canmount=noauto pool-raidz/bloatmode
```

Notice the

Update: after some days of using the above script, I noticed that sometimes, after a few hours, the dataset is silently unmounted, but it can be remounted using the script. I also removed the last line from the script and used a systemd service instead, which sets `canmount=noauto`.
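A minimal sketch of what such a oneshot service could look like; the unit name, file path, and zfs binary location are assumptions, not from the original comment:

```ini
# /etc/systemd/system/bloatmode-canmount.service (hypothetical name and path)
[Unit]
Description=Reset canmount=noauto on pool-raidz/bloatmode after mounting
After=zfs-mount.service

[Service]
Type=oneshot
# Adjust the zfs path for your distribution (/usr/sbin/zfs on some systems).
ExecStart=/usr/bin/zfs set canmount=noauto pool-raidz/bloatmode

[Install]
WantedBy=multi-user.target
```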
A cool new symptom appears! My encrypted datasets are mounted at the last mountpoint they used, rather than the currently set one.
Hi, this seems to be a systemd problem when running root ZFS: systemd creates a .mount unit and, because of it, executes an immediate unmount of the newly mounted ZFS dataset. You should see the instant systemd unmounts in /var/log/daemon.log like this (or use journalctl):

So after the first mount, just run `systemctl daemon-reload` and it will work as it should from then on. (I also added the datasets to fstab, with noauto, just to be sure.) Regards
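One way to watch for those unit-driven unmounts; the mountpoint /mnt/foo here is a hypothetical example, not from the thread:

```sh
# Derive the systemd unit name from the mountpoint (/mnt/foo -> mnt-foo.mount)
# and show its log for the current boot.
journalctl -b -u "$(systemd-escape --path --suffix=mount /mnt/foo)"
```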
@SvenMb thank you! I can confirm that it works fine after masking the mount unit; got this in the logs:

This also explains why
Maybe systemd is not at fault here - could this be a bug in zfs-mount-generator? |
After having this problem and seeing here that it is somewhat related to systemd, the easiest workaround I've found is doing

instead of

Mind that for the systemd command, the mountpoint has to be used instead of the dataset name, if they differ.
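A plausible shape for the two commands, with /mnt/foo and tank/enc/p as hypothetical stand-ins for the real mountpoint and dataset (the originals were lost from the comment):

```sh
# Go through systemd's mountpoint-derived unit...
systemctl start mnt-foo.mount
# ...instead of mounting through the zfs CLI directly.
zfs mount tank/enc/p
```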
Actually, I've just noticed that @mtippmann's way is more convenient. I masked the unit:
and can use
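For reference, masking a generated mount unit looks roughly like this, again assuming a hypothetical mountpoint of /mnt/foo:

```sh
# The mask symlink in /etc/systemd/system overrides the generator's output.
unit="$(systemd-escape --path --suffix=mount /mnt/foo)"   # -> mnt-foo.mount
sudo systemctl mask "$unit"
sudo systemctl daemon-reload
```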
I experienced this yesterday. This started happening after I enabled

In sum: created file

An alternative solution to masking the generated
On Debian 12.2 I am currently facing exactly the same issue as described... I have a ZFS native encrypted dataset with

I am willing to help debug this, if I get clear commands to run and report :)

ADD:
If the dataset is encrypted and zfs-mount-generator is enabled, the generated mount unit gets a strong dependency on the generated service that loads the encryption key:
So it appears that the actual fix for this problem is to load encryption keys not by running `zfs load-key` directly, but by starting the generated key-load unit.

Does that make sense?
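A sketch of what that would look like for the issue's dataset tank/enc/p, assuming the generator names its key-load units zfs-load-key-&lt;escaped dataset&gt;.service (check /run/systemd/generator for the actual name on your system):

```sh
# Let systemd run the generated key-load service instead of calling
# `zfs load-key` yourself, so the mount unit's dependency is satisfied.
sudo systemctl start "zfs-load-key-$(systemd-escape 'tank/enc/p').service"
```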
@tinuzz Thanks, I was battling the same issue. It doesn't make a lot of sense from a user's point of view.
Would make a lot more sense :-) |
Well, there's always
I can confirm that
When this was occurring, I did not see any failed systemd services either, and
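Illustrative commands for those checks (generic, not quoted from the comment):

```sh
systemctl --failed                  # look for failed units
systemctl list-units --type=mount   # systemd's view of active mounts
grep zfs /proc/mounts               # the kernel's view of mounted filesystems
```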
System information
Describe the problem you're observing
Trying to mount an encrypted dataset with a different passphrase below other encrypted datasets. This used to work fine in the past, but since some unknown date `zfs mount` fails silently; the output of `zfs mount -vvv` is also empty. It works after changing the mountpoint with `zfs set mountpoint=/mnt/foo tank/enc/p`, but after a reboot the mount fails.

- `tank/enc`: encryption root
- `tank/enc/p`: dataset with `canmount=noauto` and a different passphrase that I'm not able to mount

Describe how to reproduce the problem
I could only test with zfs git and kernel 5.13 / 5.10
Create an encrypted dataset and a child dataset below it, change the passphrase for the child dataset, and try to mount it after rebooting. Doing a `zfs set mountpoint=somethingnew tank/enc/p` allows mounting the dataset; `zfs unmount` / `zfs mount` work fine after that until rebooting. After a reboot without changing the mountpoint, mounting fails silently again.
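A minimal sketch of those reproduction steps; pool name and passphrases are placeholders:

```sh
# Encryption root with its own passphrase
zfs create -o encryption=on -o keyformat=passphrase tank/enc
# Child dataset; give it a different passphrase and keep it from auto-mounting
zfs create tank/enc/p
zfs change-key -o keyformat=passphrase tank/enc/p
zfs set canmount=noauto tank/enc/p
# ... reboot ...
zfs load-key tank/enc/p   # succeeds
zfs mount tank/enc/p      # returns without error, but nothing is mounted
```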
Include any warning/errors/backtraces from the system logs

`zfs list`:

`zpool get all`:

`zfs get all` for the dataset that fails to mount (`zfs load-key` works fine):

`zdb`:

`strace` (the `mount()` syscall returns 0, but the mount does not appear in `/proc/mounts`):