
zfs recv fails if received into a clone #1788

Closed
kohsuke opened this issue Oct 14, 2013 · 5 comments
Labels
Type: Documentation · Type: Feature

Comments

kohsuke (Contributor) commented Oct 14, 2013

I'm trying to receive a ZFS incremental stream in a somewhat unusual way,
and zfs recv isn't cooperating.

Imagine a source file system with several snapshots on it:

--@-----@-----> "src"
  s1    s2

Then imagine I have a "dst" file system that is tracking this file system.
It has already received s1 but doesn't have s2 yet:

--@--> "dst"
  s1

What I'm trying to do is to receive s2 into a cloned file system, instead
of directly receiving into dst:

     --@---> "dst2"
    /  s2
   /
--@--> "dst"
  s1

I expect this to work: when I clone dst2 from dst@s1, s1 is its latest
snapshot, so the incremental ZFS stream from s1 to s2 should apply cleanly.

But instead I get: cannot receive incremental stream: invalid backup stream

To reproduce the problem, run the following script, adjusting POOL to the actual zpool name:

#!/bin/bash -ex
POOL=rpool
zfs create $POOL/src
zfs snapshot $POOL/src@s1
zfs snapshot $POOL/src@s2

# this is the textbook case that works fine
zfs send $POOL/src@s1 | zfs recv $POOL/dst@s1
zfs send -i $POOL/src@s1 $POOL/src@s2 | zfs recv $POOL/dst

# this is what I want to do and it doesn't work
zfs clone $POOL/dst@s1 $POOL/dst2
zfs send -i $POOL/src@s1 $POOL/src@s2 | zfs recv $POOL/dst2

echo success
b333z (Contributor) commented Oct 15, 2013

I don't believe this is a bug as such. The receive fails because, as a zfs list will show, when you clone to $POOL/dst2, dst2 does not have the s1 snapshot (even though it references the same blocks as s1). Only the origin dataset dst has the s1 snapshot.
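
For example, here is a hypothetical listing right after the clone (assuming the reproduction script above was run with POOL=rpool); note that dst2 has no snapshots of its own:

zfs list -r -t all -o name,origin rpool/dst rpool/dst2
NAME           ORIGIN
rpool/dst      -
rpool/dst@s1   -
rpool/dst@s2   -
rpool/dst2     rpool/dst@s1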

You can, however, promote $POOL/dst2 to make it the holder of the s1 snapshot (dst then becomes the clone), do the send, and then optionally promote $POOL/dst again, which switches the s1 snapshot back to dst (dst2 becomes the clone again).

So you could change the last part to:

# promote dst2 so it owns the s1 snapshot; then the incremental receive works
zfs clone $POOL/dst@s1 $POOL/dst2
zfs promote $POOL/dst2
zfs send -i $POOL/src@s1 $POOL/src@s2 | zfs recv $POOL/dst2
zfs promote $POOL/dst

You will then have:

dst@s1
dst@s2
dst2@s2

To watch the effect of the promote commands:

zfs list -r -t all -o name,origin
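
For reference, a hypothetical final listing after running the modified script (again assuming POOL=rpool); dst is the origin again and dst2 is back to being a clone:

zfs list -r -t all -o name,origin rpool/dst rpool/dst2
NAME            ORIGIN
rpool/dst       -
rpool/dst@s1    -
rpool/dst@s2    -
rpool/dst2      rpool/dst@s1
rpool/dst2@s2   -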

Hope that helps!

Cheers, Andrew

kohsuke (Contributor, Author) commented Oct 15, 2013

Thank you. I did discover the workaround via zfs promote, but in my use case dst is actually a zvol, and those snapshots are in active use. The promotion causes a reshuffle of snapshot device nodes under /dev (not to mention the /dev/zvol/... paths), which breaks all sorts of things that run on them.
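
To illustrate the breakage (hypothetical paths, assuming snapshot device nodes are exposed for the zvol and a pool named rpool): promote moves the s1 snapshot to the other dataset, so its device node changes name, e.g.:

# before promoting rpool/dst2: s1 belongs to dst
ls /dev/zvol/rpool
# dst  dst@s1  dst@s2  dst2

zfs promote rpool/dst2

# after: s1 now belongs to dst2, so anything holding the old path breaks
ls /dev/zvol/rpool
# dst  dst@s2  dst2  dst2@s1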

I guess you are arguing that this is not a bug but a feature request. If that is the case, so be it.

I'm quite willing to tinker with the code to make this work. I looked at zfs_ioc_recv a little, but it wasn't obvious where the code rejects the operation in this situation. Any guidance would be greatly appreciated.
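
For what it's worth, here is where I have been grepping so far (these spots are guesses, not conclusions):

# where libzfs produces the "invalid backup stream" message
grep -rn "invalid backup stream" lib/libzfs/

# the kernel-side checks for incremental receives appear to sit
# below zfs_ioc_recv, in the DMU send/receive code
grep -rn "dmu_recv_begin" module/zfs/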

b333z (Contributor) commented Oct 15, 2013

Have you considered creating the clone on the send side (src2): send src@s1, create the snapshot src2@s2, then zfs send -i src@s1 src2@s2 to dst2?

b333z (Contributor) commented Oct 16, 2013

Here is my test script for the above. Unless I'm missing something, it solves your stated problem, although you haven't said whether you need to accomplish this just once or as part of a continuing cycle. Still, I'd like it if what you're doing just worked, and it would be great if you managed to code a fix. I'm not as familiar with the code as I'd like, and perhaps someone could comment, but I think this will require more than making an if statement less restrictive; I get the feeling you may need an intimate knowledge of the structures and workings to solve it. Does anyone know what this would involve? Another option to look at: does the code path actually have to recreate the device nodes, or can the nodes be kept stable under some circumstances?

But if you just want to solve your stated issue and be done with it, this may be something to look at; it should not disturb the dst device nodes. It uses the ability of zfs send/recv to incrementally send a clone, as long as the origin already exists (has been sent/received) on the receiving side (this is why I changed the test to use two pools; it would not be a proper test otherwise). From memory, this ability gets a short mention in the last paragraph of the zfs send section of the zfs man page.

Notice in the last send/recv how the two snapshots specified are actually from different datasets (the first is from the origin, the second from the clone).

#!/bin/bash -ex
POOL1=tank
POOL2=tank2
zfs create $POOL1/src
zfs snapshot $POOL1/src@s1
zfs snapshot $POOL1/src@s2


# Ensure our origin snapshot is present on the recv side
zfs send $POOL1/src@s1 | zfs recv $POOL2/dst@s1

# Create a clone of src and send just the changes since clone to recv side without triggering dev node changes on dst.
zfs clone $POOL1/src@s1 $POOL1/src2
zfs snapshot $POOL1/src2@s2
zfs send -i $POOL1/src@s1 $POOL1/src2@s2 | zfs recv $POOL2/dst2

echo success
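
As a sanity check (hypothetical output for the pools above), the received dst2 should show up on the receive side as a clone of dst@s1:

zfs list -r -t all -o name,origin tank2/dst tank2/dst2
NAME            ORIGIN
tank2/dst       -
tank2/dst@s1    -
tank2/dst2      tank2/dst@s1
tank2/dst2@s2   -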
