zfs recv fails if received into a clone #1788
I don't believe this is a bug as such. The send fails because when you clone to $POOL/dst2, dst2 does not have the s1 snapshot, as a zfs list will show (even though it references the same blocks as s1); the origin dataset dst is the only one holding the s1 snap. You can, however, promote $POOL/dst2 to make it the holder of the s1 snapshot (dst then becomes the clone), do the send, and then optionally promote $POOL/dst again, which switches the s1 snap back to dst (dst2 is the clone again). So you could change the last part of your script accordingly.
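A minimal sketch of that change, assuming the POOL, src, and dst names from the repro script below (the exact commands are my reconstruction):

```sh
# Make dst2 the owner of the s1 snapshot; dst becomes the clone.
zfs promote $POOL/dst2

# The incremental stream now applies, since dst2@s1 exists.
zfs send -i s1 $POOL/src@s2 | zfs recv $POOL/dst2

# Optionally switch s1 back to dst; dst2 becomes the clone again.
zfs promote $POOL/dst
```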
You will then have the s1 snapshot on dst2 (and, after the second promote, back on dst). To watch the effect the promote commands have, list the datasets and their origins between each step.
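For example (the property selection here is just one convenient choice):

```sh
# The origin property shows which snapshot each clone was branched from;
# run this before and after each promote to see s1 move between datasets.
zfs list -r -t all -o name,origin $POOL
```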
Hope that helps! Cheers, Andrew
Thank you. I did discover the workaround via zfs promote, but it doesn't fit my use case well. I guess you are arguing that my request is not a bug but a feature enhancement request; if that is the case, so be it. I'm quite willing to tinker with the code to make this work. I looked at the …
Have you considered creating the clone on the send side (src2) instead: send src@s1, create the snapshot src2@s2, then zfs send -i src@s1 src2@s2 to dst2?
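A minimal sketch of that suggestion, reusing the POOL, src, and dst names from the repro script below (a single pool, since the point here is only to leave dst untouched):

```sh
# Clone on the send side and snapshot the clone.
zfs clone $POOL/src@s1 $POOL/src2
zfs snapshot $POOL/src2@s2

# Incrementally send the clone against its origin snapshot; the receive
# side creates dst2 as a clone of dst@s1, leaving dst alone.
zfs send -i $POOL/src@s1 $POOL/src2@s2 | zfs recv $POOL/dst2
```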
Here is my test script for the above. Unless I'm missing something, it solves your stated problem, although you haven't said whether you need to accomplish this just once or as a continuing cycle. Still, I'd like it if what you're doing just worked, and it would be great if you managed to code a fix. I'm not as familiar with the code as I'd like, and perhaps someone could comment, but I think this will require more than making an if statement less restrictive; I get the feeling you'd need an intimate knowledge of the structures and workings to solve it. Does anyone know what this would involve? Another option worth looking at: does that code path actually have to recreate the device nodes, or can the nodes be kept stable under some circumstances?

But if you just want to solve your stated issue and be done with it, this may be worth a look; it should not disturb the dst device nodes. It uses the ability of zfs send/recv to incrementally send a clone as long as the origin already exists (has been sent/received) on the receive side; this is why I changed the test to use two pools, as it would not be a proper test otherwise. From memory, this ability gets a short mention in the last paragraph of the zfs send section of the zfs manpage. Notice in the last send/recv how the two snapshots specified are actually from different datasets (the first is the origin, the second is the clone).
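A plausible reconstruction of such a test script, following the description above (the two pool names, pool1 and pool2, are hypothetical):

```sh
#!/bin/sh
SRC=pool1   # send-side pool (hypothetical name)
DST=pool2   # recv-side pool (hypothetical name)

# Source file system with the first snapshot, tracked on the recv side.
zfs create $SRC/src
zfs snapshot $SRC/src@s1
zfs send $SRC/src@s1 | zfs recv $DST/dst

# Clone on the send side and snapshot the clone.
zfs clone $SRC/src@s1 $SRC/src2
zfs snapshot $SRC/src2@s2

# Incremental clone send: the two snapshots belong to different datasets
# (origin first, clone second). recv creates dst2 as a clone of dst@s1,
# so dst and its device nodes are not touched.
zfs send -i $SRC/src@s1 $SRC/src2@s2 | zfs recv $DST/dst2
```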
I'm trying to receive a ZFS incremental stream in a bit unusual way, and zfs recv isn't cooperating.

Imagine a source file system with several snapshots on it. Then imagine I have a "dst" file system that's tracking it: it has already received s1, but it doesn't have s2 yet.

What I'm trying to do is to receive s2 into a cloned file system, instead of receiving directly into dst. I expect this to work: when I clone dst2 from dst@s1, s1 is its latest snapshot, so the incremental ZFS stream from s1 to s2 should apply cleanly. But instead I get:

    cannot receive incremental stream: invalid backup stream

To reproduce the problem, run the following script, adjusting POOL to the actual zpool name:
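A sketch of the kind of repro script described, using the src/dst/dst2 datasets and s1/s2 snapshots named in the prose (POOL=tank is only a placeholder):

```sh
#!/bin/sh
POOL=tank   # adjust to an existing pool

# Source file system with two snapshots.
zfs create $POOL/src
zfs snapshot $POOL/src@s1
zfs snapshot $POOL/src@s2

# dst tracks src: it has s1 but not s2 yet.
zfs send $POOL/src@s1 | zfs recv $POOL/dst

# Receive s2 into a clone of dst@s1 instead of into dst itself.
zfs clone $POOL/dst@s1 $POOL/dst2
zfs send -i s1 $POOL/src@s2 | zfs recv $POOL/dst2
# ^ fails: cannot receive incremental stream: invalid backup stream
```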