zfs send -p | zfs receive -F destroys all other snapshots #5341
Comments
Sure, but at this point I don't know yet whether this is the expected behavior (in which case I can submit a PR for the man pages) or a bug, in which case there's probably not much I could do.
@kpande
@kpande I was referring to the
It's not the other way around. Anyway, other ZFS implementations seem to behave in the same way.
@kpande for me
Yes. On the receiving side I use I certainly didn't expect that adding
Silly me. I just set Still, if you are of the opinion that what I reported above is the expected behavior (that is,
According to its name and the documentation, -p is meant to attach the properties of the dataset to the snapshot data, so after receiving, the target will have its properties set identical to the source. Apart from that, the behaviour shouldn't differ from the same operation without the -p on the sending side.

Having zfs send -p destroy unrelated stuff on the receiving side when received with -F (for the rollback to the last snapshot on the target) isn't expected, at least by me. What is expected is that -F performs the equivalent of zfs rollback dataset@(incremental start snapshot) on the target and then receives the stream. So if the issue is that snapshots after the common incremental start point were discarded on the target, then it works as expected (it needs to roll back to the common point to be able to accept the data, since it can't rewrite existing stuff). This wouldn't be related to -p at all.

But there is IMHO no sane reason why it should modify the snapshot chain on the target prior to the common snapshot used as the start of the incremental (the one specified with -i/-I). I would see this as a bug. The same goes if it destroys child datasets on the target that do not exist at the source, since -p doesn't (and shouldn't) imply that the operation is performed on children. Unless -R is specified, the send/recv should be limited to the one dataset specified, nothing else.

@tobia could you please post the exact commands executed, the list of snapshots of the dataset in question (source side), and more details about what was destroyed on the target? You should be able to get the list of the destroyed snapshots in zpool history on the target pool.
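The expected rollback-then-receive behaviour could be spelled out manually; a minimal sketch, with hypothetical dataset and snapshot names:

```shell
# Sketch of the expected -F semantics for an incremental receive:
# roll the target back to the common snapshot, then apply the stream.
# tank/src, tank/dst, @common, and @new are hypothetical names.
zfs rollback -r tank/dst@common      # discard target state after @common
zfs send -i @common tank/src@new | zfs receive tank/dst
```

Note that zfs rollback -r also destroys snapshots more recent than @common, which is why losing target snapshots *after* the incremental start point is expected; touching snapshots *before* it is not.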
@GregorKopka My source systems take snapshots of all their ZFS filesystems every 15 minutes, with After that, they send them over to the backup server, with This worked fine, but then I thought about transferring the properties along with the snapshots, so I added For this reason I quickly decided against using I looked at
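A minimal sketch of this kind of backup pipeline (host, pool, dataset, and snapshot names are hypothetical, not the ones actually used):

```shell
# Hypothetical sketch of the described backup flow.
SNAP="tank/data@$(date +%Y-%m-%d_%H:%M)"   # e.g. from a 15-minute cron job
zfs snapshot "$SNAP"

# Incremental send of everything since the last snapshot the backup
# server already has, including intermediate snapshots (-I); adding -p
# to this command is what triggered the behaviour reported in this issue.
zfs send -p -I tank/data@last-sent "$SNAP" | \
    ssh backupserver zfs receive -F backuppool/data
```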
This looks like a bug to me: there should be no reason for recv -F (one receiving an incremental) to destroy snapshots prior to the common one (specified with -I on the source). Reproducer:
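A reconstruction of the reproducer, based on the verified run later in the thread (pool name hypothetical):

```shell
# Reconstructed reproducer (pool name hypothetical). The last step is
# where the unexpected destruction was reported.
POOL=tank
zfs create -p $POOL/src
zfs snapshot $POOL/src@a
zfs snapshot $POOL/src@b
zfs snapshot $POOL/src@c
zfs snapshot $POOL/src@d

zfs send $POOL/src@a         | zfs recv -F $POOL/dst  # full stream
zfs send -I a $POOL/src@b    | zfs recv -F $POOL/dst  # incremental
zfs destroy $POOL/src@a
zfs send -I b $POOL/src@c    | zfs recv -F $POOL/dst  # dst@a survives
zfs destroy $POOL/src@b
zfs send -p -I c $POOL/src@d | zfs recv -F $POOL/dst  # reported: dst@a and dst@b destroyed
zfs list -r -t all $POOL
```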
Output on my system:
I would maybe expect that from send -R, but not from send -p. This should be fixed in the code.
Better, IMHO, would be if the directive to destroy snapshots (and, if so, which ones) were anchored completely at recv (instead of depending on send), with fine granularity for the user to decide what he actually wants for incrementals (using
Additionally, flags to explicitly allow recv to:
would be helpful, to give better control over the outcome. Especially in light of backup scenarios where the backup host calls into source systems with a zfs send: should the source be compromised, it could modify the send invocation to deliver a specially crafted dataset that overloads parts of the target system (by setting overlay, mountpoint, canmount), compromising the destination system (a /root/.ssh filesystem with attacker-controlled keys, or overloading /etc/cron.* directories). Back to topic: IMHO -p generating a replication stream (or something that recv treats as one) is a bug.
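As a side note, newer OpenZFS releases (0.8 and later) let the receiver override or exclude properties carried in the stream, which addresses part of this concern; a hedged sketch, with a hypothetical target dataset:

```shell
# Receive without mounting (-u), force canmount=off on the received
# dataset (-o), and ignore mountpoint/overlay from the stream (-x),
# so a compromised sender cannot overlay paths on the backup host.
# backuppool/clients/host1 is a hypothetical target dataset.
zfs receive -u -o canmount=off -x mountpoint -x overlay \
    backuppool/clients/host1
```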
Yes, that was exactly my point.
I currently rely on
@behlendorf this was closed (without noting by whom or when) and marked as documentation. But this issue describes an obvious bug in the code; it shouldn't be closed unless there is a merged pull request that fixes it. Please reopen and tag accordingly.
This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions. |
Why is there a bot closing issues that are not solved?
The following commit may have fixed this issue a while ago:
FWIW, in OpenZFS 2.1.5 on Ubuntu 22.04 this problem no longer seems to occur. Running @GregorKopka's reproducer, but getting different results:
$ DATASET="sandpool/sandbox5341"
$ sudo zfs create $DATASET/src -p
$ sudo zfs allow -u sandperson create,snapshot,send,destroy,mount,receive,hold $DATASET
$ zfs snapshot $DATASET/src@a
$ zfs snapshot $DATASET/src@b
$ zfs snapshot $DATASET/src@c
$ zfs snapshot $DATASET/src@d
$
$ zfs list $DATASET -r -tall
NAME USED AVAIL REFER MOUNTPOINT
sandpool/sandbox5341 48K 23.0M 24K /sandpool/sandbox5341
sandpool/sandbox5341/src 24K 23.0M 24K /sandpool/sandbox5341/src
sandpool/sandbox5341/src@a 0B - 24K -
sandpool/sandbox5341/src@b 0B - 24K -
sandpool/sandbox5341/src@c 0B - 24K -
sandpool/sandbox5341/src@d 0B - 24K -
$ zfs send $DATASET/src@a | zfs recv -F $DATASET/dst
$ zfs list $DATASET -r -tall
NAME USED AVAIL REFER MOUNTPOINT
sandpool/sandbox5341 72K 23.0M 24K /sandpool/sandbox5341
sandpool/sandbox5341/dst 24K 23.0M 24K /sandpool/sandbox5341/dst
sandpool/sandbox5341/dst@a 0B - 24K -
sandpool/sandbox5341/src 24K 23.0M 24K /sandpool/sandbox5341/src
sandpool/sandbox5341/src@a 0B - 24K -
sandpool/sandbox5341/src@b 0B - 24K -
sandpool/sandbox5341/src@c 0B - 24K -
sandpool/sandbox5341/src@d 0B - 24K -
$ zfs send -I a $DATASET/src@b | zfs recv -F $DATASET/dst
$ zfs list $DATASET -r -tall
NAME USED AVAIL REFER MOUNTPOINT
sandpool/sandbox5341 72K 23.0M 24K /sandpool/sandbox5341
sandpool/sandbox5341/dst 24K 23.0M 24K /sandpool/sandbox5341/dst
sandpool/sandbox5341/dst@a 0B - 24K -
sandpool/sandbox5341/dst@b 0B - 24K -
sandpool/sandbox5341/src 24K 23.0M 24K /sandpool/sandbox5341/src
sandpool/sandbox5341/src@a 0B - 24K -
sandpool/sandbox5341/src@b 0B - 24K -
sandpool/sandbox5341/src@c 0B - 24K -
sandpool/sandbox5341/src@d 0B - 24K -
$ zfs destroy $DATASET/src@a
$ zfs send -I b $DATASET/src@c | zfs recv -F $DATASET/dst
$ zfs list $DATASET -r -tall
NAME USED AVAIL REFER MOUNTPOINT
sandpool/sandbox5341 72K 23.0M 24K /sandpool/sandbox5341
sandpool/sandbox5341/dst 24K 23.0M 24K /sandpool/sandbox5341/dst
sandpool/sandbox5341/dst@a 0B - 24K -
sandpool/sandbox5341/dst@b 0B - 24K -
sandpool/sandbox5341/dst@c 0B - 24K -
sandpool/sandbox5341/src 24K 23.0M 24K /sandpool/sandbox5341/src
sandpool/sandbox5341/src@b 0B - 24K -
sandpool/sandbox5341/src@c 0B - 24K -
sandpool/sandbox5341/src@d 0B - 24K -
$ # $DATASET/dst@a is still there
$ zfs destroy $DATASET/src@b
$ zfs send -p -I c $DATASET/src@d | zfs recv -F $DATASET/dst
$ zfs list $DATASET -r -tall
NAME USED AVAIL REFER MOUNTPOINT
sandpool/sandbox5341 72K 23.0M 24K /sandpool/sandbox5341
sandpool/sandbox5341/dst 24K 23.0M 24K /sandpool/sandbox5341/dst
sandpool/sandbox5341/dst@a 0B - 24K -
sandpool/sandbox5341/dst@b 0B - 24K -
sandpool/sandbox5341/dst@c 0B - 24K -
sandpool/sandbox5341/dst@d 0B - 24K -
sandpool/sandbox5341/src 24K 23.0M 24K /sandpool/sandbox5341/src
sandpool/sandbox5341/src@c 0B - 24K -
sandpool/sandbox5341/src@d 0B - 24K -
$
$ # $DATASET/dst@a and @b still exist
$

I believe the behavior now matches the documentation.
From the man page of zfs receive:

I never used -R, but I did add -p to the send command (zfs send -p -I ...) because I wanted to transfer the properties along with the snapshots. But then the receive side (with -F) behaved as if I were sending a replication stream: it deleted all snapshots that didn't exist on the sending side. This was not expected at all. Is this a bug in send, in receive, or in the documentation?

Edit: if this is the expected behavior, I'll submit a PR to make it clearer in the man pages.