Inherited properties are not sent in send streams #2121

Closed
srlefevre opened this issue Feb 12, 2014 · 7 comments
Labels
Status: Stale (no recent activity for issue) · Type: Documentation (indicates a requested change to the documentation)

Comments

@srlefevre

I'm writing/improving a zfs replication script (https://github.com/srlefevre/zfs-repl.git) and doing testing on several platforms supporting zfs.

The script runs the following commands from the source system against the target system:

# create target filesystem
ssh  -n root@vmhost1   /sbin/zfs create  -o compression=lz4 -p tank0/tmp/test2
ssh  -n root@vmhost1   /sbin/zfs set mountpoint=none tank0/tmp/test2
# replicate fs
/usr/bin/sudo /usr/sbin/zfs send -R rpool/export/home/slefevre@20140212112518 | /usr/bin/cat - | ssh   root@vmhost1  "/bin/cat - |  /sbin/zfs recv  -F tank0/tmp/test2"

The first command runs but doesn't set the compression property.

# on target system
zfs get compression tank0/tmp/test2 
NAME             PROPERTY     VALUE     SOURCE
tank0/tmp/test2  compression  off       default

But I can run the same commands from the source system's command line and they work (i.e., the property gets set).

The same command works as expected going against an OpenIndiana (151.a8) system.

I can reproduce this issue on CentOS 6.5 and Ubuntu 12.04.

I'm confused as to why this isn't working. I tried adding a 5-second delay between the first command (zfs create) and the second command (zfs set), but got the same results.

How would I go about troubleshooting this?
How would I go about working around this?
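One possible workaround, sketched below under the assumption (borne out later in this thread) that 'zfs recv -F' replaces the properties created locally on the target with the values implied by the stream: re-apply the desired properties on the target after the receive completes. The helper name re_apply_props is hypothetical; dataset names are the ones from this issue.

```shell
# Hedged workaround sketch: after `zfs recv -F` finishes, re-set the
# properties that were created locally on the target, because the stream
# only carries the source's locally set properties.
# ZFS defaults to the real `zfs` binary; it is overridable for testing.
re_apply_props() {
  target="$1"; shift
  for prop in "$@"; do
    # Apply each property=value pair to the target dataset.
    ${ZFS:-zfs} set "$prop" "$target" || return 1
  done
}

# Intended use on the target host after replication, e.g.:
#   re_apply_props tank0/tmp/test2 compression=lz4 mountpoint=none
```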

I'm using the following pkgs on each system.

CentOS 6.5 pkgs
dkms-2.2.0.3-14.zfs1.el6.noarch
spl-0.6.2-1.el6.x86_64
zfs-0.6.2-1.el6.x86_64
zfs-dkms-0.6.2-1.el6.noarch
zfs-dracut-0.6.2-1.el6.x86_64
zfs-release-1-2.el6.noarch
zfs-test-0.6.2-1.el6.x86_64
Ubuntu 12.04 pkgs
dkms 2.2.0.3-1ubuntu3.1+zfs6~precise1
libzfs1 0.6.2-1~precise
mountall 2.36.4-zfs2
ubuntu-zfs 7~precise
zfs-dkms 0.6.2-1~precise
zfsutils 0.6.2-1~precise
spl 0.6.2-1~precise
spl-dkms 0.6.2-1~precise

Here is the zpool history log from the CentOS 6.5 test if that'll help.

# zpool history -il
2014-02-12.14:30:09 [internal create txg:5642437] dataset = 3776 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:10 [internal property set txg:5642438] compression=15 dataset = 3776 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:10 zfs create -o compression=lz4 -p tank0/tmp/test2 [user root on vmhost1.si-consulting.us:linux]
2014-02-12.14:30:10 [internal property set txg:5642439] mountpoint=none dataset = 3776 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:11 zfs set mountpoint=none tank0/tmp/test2 [user root on vmhost1.si-consulting.us:linux]
2014-02-12.14:30:11 [internal replay_inc_sync txg:5642440] dataset = 3782 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:11 [internal inherit txg:5642441] compression=2 dataset = 3776 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:11 [internal inherit txg:5642441] mountpoint=/opt/tmp dataset = 3776 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:12 [internal property set txg:5642442] $hasrecvd= dataset = 3776 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:12 [internal property set txg:5642443] mountpoint=/export/home/slefevre dataset = 3776 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:19 [internal snapshot txg:5642448] dataset = 3785 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:19 [internal destroy txg:5642449] dataset = 3782 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:19 [internal property set txg:5642449] reservation=0 dataset = 3782 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:20 zfs recv -F tank0/tmp/test2 [user root on vmhost1.si-consulting.us:linux]
2014-02-12.14:30:20 [internal replay_inc_sync txg:5642450] dataset = 3793 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:21 [internal snapshot txg:5642452] dataset = 3799 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:21 [internal destroy txg:5642453] dataset = 3793 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:21 [internal property set txg:5642453] reservation=0 dataset = 3793 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:22 [internal replay_inc_sync txg:5642454] dataset = 3805 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:22 [internal snapshot txg:5642456] dataset = 3810 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:23 [internal destroy txg:5642457] dataset = 3805 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:23 [internal property set txg:5642457] reservation=0 dataset = 3805 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:23 [internal replay_inc_sync txg:5642458] dataset = 3816 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:24 [internal snapshot txg:5642460] dataset = 3822 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:24 [internal destroy txg:5642461] dataset = 3816 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:24 [internal property set txg:5642461] reservation=0 dataset = 3816 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:25 [internal replay_inc_sync txg:5642462] dataset = 3842 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:25 [internal snapshot txg:5642464] dataset = 3883 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:25 [internal destroy txg:5642465] dataset = 3842 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:25 [internal property set txg:5642465] reservation=0 dataset = 3842 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:26 [internal replay_inc_sync txg:5642466] dataset = 3889 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:26 [internal snapshot txg:5642468] dataset = 3895 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:27 [internal destroy txg:5642469] dataset = 3889 [user root on vmhost1.si-consulting.us]
2014-02-12.14:30:27 [internal property set txg:5642469] reservation=0 dataset = 3889 [user root on vmhost1.si-consulting.us]
@srlefevre
Author

Upon further testing, the property is set when the file system is created, but it is then changed when the 'zfs recv' command runs and the replication takes place.

On other systems (e.g. OpenIndiana) this does not happen. I would consider this to be a bug.

@FransUrbo
Contributor

On Feb 14, 2014, at 5:45 PM, srlefevre wrote:

The property is getting changed when the 'zfs recv' command is run

The manpage says this about 'zfs send':

-p  Include the dataset's properties in the stream. [...]   The receiving
    system must also support  this feature.

On other systems (e.g. OpenIndiana) this does not happen.

Sounds like OpenIndiana doesn't support receiving the properties... ?
I would consider this to be a bug.

Yes, but where?


@srlefevre
Author

Here are two simple tests run on a CentOS system which illustrate the inconsistency.
Test 1: Send tank0/test1 --> tank0/tmp/test1

$ sudo zfs get compression /tank0/tmp
NAME       PROPERTY     VALUE     SOURCE
tank0/tmp  compression  off       local

$ sudo zfs get compression /tank0/test1
NAME         PROPERTY     VALUE     SOURCE
tank0/test1  compression  lz4       inherited from tank0

$ sudo zfs get compression /tank0/tmp/test1
NAME             PROPERTY     VALUE     SOURCE
tank0/tmp/test1  compression  off       inherited from tank0/tmp

$ sudo zfs send -R tank0/test1@20140130154547 | pv | sudo zfs recv -v -F tank0/tmp/test1
receiving full stream of tank0/test1@20140130154547 into tank0/tmp/test1@20140130154547
64.3MiB 0:00:02 [  26MiB/s] [     <=>                                                                                              ]
received 64.3MB stream in 2 seconds (32.2MB/sec)

$ sudo zfs get compression /tank0/tmp/test1
NAME             PROPERTY     VALUE     SOURCE
tank0/tmp/test1  compression  off       inherited from tank0/tmp

Results:
The source starts off compressed (inherited from its parent, tank0), but the target ends up uncompressed (inherited from its parent, tank0/tmp).

Test 2: Send tank0/tmp/test2 --> tank0/tmp/test2b

$ sudo zfs snapshot tank0/tmp/test2@20140214162830
$ sudo zfs create tank0/tmp/test2b 

$ sudo zfs get compression tank0/tmp
NAME       PROPERTY     VALUE     SOURCE
tank0/tmp  compression  off       local

$ sudo zfs get compression tank0/tmp/test2
NAME             PROPERTY     VALUE     SOURCE
tank0/tmp/test2  compression  lz4       local

$ sudo zfs get compression tank0/tmp/test2b
NAME              PROPERTY     VALUE     SOURCE
tank0/tmp/test2b  compression  off       inherited from tank0/tmp

$ sudo zfs send -R tank0/tmp/test2@20140214162830 | sudo zfs recv -v -F tank0/tmp/test2b 
receiving full stream of tank0/tmp/test2@20140214162830 into tank0/tmp/test2b@20140214162830
received 64.3MB stream in 1 seconds (64.3MB/sec)

$ sudo zfs get compression tank0/tmp/test2b
NAME              PROPERTY     VALUE     SOURCE
tank0/tmp/test2b  compression  lz4       received

Results:
The source starts off compressed (local setting) and the target ends up compressed (received).

I would expect the same results from both of these tests. Both received file systems should be compressed.
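The difference between the two tests appears to come down to the SOURCE column: in Test 1 compression was inherited on the source, in Test 2 it was local. A small sketch of how one might check in advance which properties will travel with the stream (the helper name list_sent_props is hypothetical; the local/received rule is this thread's observation, not documented behavior):

```shell
# Sketch: list only the properties whose SOURCE is `local` or `received` on
# the source dataset — per the behavior observed in this thread, these are
# the ones a `zfs send -R`/`-p` stream will carry; inherited ones are not.
# ZFS defaults to the real `zfs` binary; it is overridable for testing.
list_sent_props() {
  ${ZFS:-zfs} get -H -s local,received -o property,value all "$1"
}

# e.g. list_sent_props tank0/test1
```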

@behlendorf behlendorf added this to the 0.6.4 milestone Feb 14, 2014
@nedbass
Contributor

nedbass commented Feb 14, 2014

It seems inherited properties are not sent in send streams. I don't see this documented anywhere, but I see the same behavior in my OpenIndiana VM. I don't think there's a bug here, though it would be good to improve the documentation with respect to this issue.
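If inherited properties are indeed dropped from the stream, one way around it on the send side is to pin them locally on the source before sending. A sketch (the helper name make_local is hypothetical; this simply copies the effective value into a local setting):

```shell
# Hedged workaround sketch: before sending, copy an inherited property's
# current effective value into a *local* setting on the source dataset, so
# that `zfs send -R`/`-p` will include it in the stream.
# ZFS defaults to the real `zfs` binary; it is overridable for testing.
make_local() {
  ds="$1" prop="$2"
  # Read the effective value (even if it is inherited)...
  val=$(${ZFS:-zfs} get -H -o value "$prop" "$ds") || return 1
  # ...then set it locally so it travels with the stream.
  ${ZFS:-zfs} set "$prop=$val" "$ds"
}

# e.g. make_local tank0/test1 compression   # then: zfs send -R tank0/test1@snap ...
```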

@behlendorf behlendorf removed this from the 0.6.4 milestone Oct 30, 2014
@loli10K loli10K changed the title zfs create via ssh doesn't apply properties Inherited properties are not sent in send streams Jul 4, 2017
@loli10K
Contributor

loli10K commented Feb 15, 2019

root@linux:~# uname -a
Linux linux 3.16.0-4-amd64 #1 SMP Debian 3.16.51-2 (2017-12-03) x86_64 GNU/Linux
root@linux:~# cat /proc/sys/kernel/spl/gitrev
zfs-0.8.0-rc3-50-g2d76ab9
root@linux:~# 
root@linux:~# TMPDIR='/var/tmp'
root@linux:~# pool='tank'
root@linux:~# pool2='dozer'
root@linux:~# 
root@linux:~# zpool destroy $pool
cannot open 'tank': no such pool
root@linux:~# zpool destroy $pool2
cannot open 'dozer': no such pool
root@linux:~# 
root@linux:~# fallocate -l 128m $TMPDIR/zfs-vdev1
root@linux:~# zpool create -f $pool $TMPDIR/zfs-vdev1
root@linux:~# zfs set compression=gzip tank
root@linux:~# zfs create $pool/send -o mountpoint=/mnt/issue-2121
root@linux:~# zfs snap -r $pool@issue-2121
root@linux:~# 
root@linux:~# 
root@linux:~# fallocate -l 128m $TMPDIR/zfs-vdev2
root@linux:~# zpool create -f $pool2 $TMPDIR/zfs-vdev2
root@linux:~# zfs set compression=off $pool2
root@linux:~# 
root@linux:~# zfs send -R $pool@issue-2121 | zstreamdump | grep compression
				compression = 0xa
root@linux:~# zfs send -R $pool/send@issue-2121 | zstreamdump | grep compression
root@linux:~# zfs send -R $pool/send@issue-2121 | zfs recv -v -F $pool2/recv
receiving full stream of tank/send@issue-2121 into dozer/recv@issue-2121
received 44.8K stream in 1 seconds (44.8K/sec)
root@linux:~# zfs get compression
NAME                   PROPERTY     VALUE     SOURCE
dozer                  compression  off       local
dozer/recv             compression  off       inherited from dozer
dozer/recv@issue-2121  compression  -         -
tank                   compression  gzip      local
tank@issue-2121        compression  -         -
tank/send              compression  gzip      inherited from tank
tank/send@issue-2121   compression  -         -
root@linux:~# 

"compression=gzip" is inherited by "tank/send" but not sent with "tank/send@issue-2121".
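That zstreamdump difference can be turned into a quick pre-send check. A sketch (the helper name will_be_sent is hypothetical, and the local/received rule is inferred from the output above rather than from documentation):

```shell
# Sketch: decide whether a given property would be included in the stream by
# inspecting its SOURCE — per the zstreamdump output above, only properties
# with a `local` (or `received`) source appear in the stream.
# ZFS defaults to the real `zfs` binary; it is overridable for testing.
will_be_sent() {
  src=$(${ZFS:-zfs} get -H -o source "$2" "$1")
  case "$src" in
    local|received) echo yes ;;
    *)              echo no  ;;
  esac
}

# e.g. will_be_sent tank/send compression
```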

@loli10K loli10K reopened this Feb 15, 2019
@loli10K
Contributor

loli10K commented Feb 15, 2019

Same with zfs send -p:

root@linux:~# uname -a
Linux linux 3.16.0-4-amd64 #1 SMP Debian 3.16.51-2 (2017-12-03) x86_64 GNU/Linux
root@linux:~# cat /proc/sys/kernel/spl/gitrev
zfs-0.8.0-rc3-50-g2d76ab9
root@linux:~# 
root@linux:~# TMPDIR='/var/tmp'
root@linux:~# pool='tank'
root@linux:~# pool2='dozer'
root@linux:~# 
root@linux:~# zpool destroy $pool
cannot open 'tank': no such pool
root@linux:~# zpool destroy $pool2
cannot open 'dozer': no such pool
root@linux:~# 
root@linux:~# fallocate -l 128m $TMPDIR/zfs-vdev1
root@linux:~# zpool create -f $pool $TMPDIR/zfs-vdev1
root@linux:~# zfs set compression=gzip tank
root@linux:~# zfs create $pool/send -o mountpoint=/mnt/issue-2121
root@linux:~# zfs snap -r $pool@issue-2121
root@linux:~# 
root@linux:~# 
root@linux:~# fallocate -l 128m $TMPDIR/zfs-vdev2
root@linux:~# zpool create -f $pool2 $TMPDIR/zfs-vdev2
root@linux:~# zfs set compression=off $pool2
root@linux:~# 
root@linux:~# zfs send -p $pool@issue-2121 | zstreamdump | grep compression
				compression = 0xa
root@linux:~# zfs send -p $pool/send@issue-2121 | zstreamdump | grep compression
root@linux:~# zfs send -p $pool/send@issue-2121 | zfs recv -v -F $pool2/recv
receiving full stream of tank/send@issue-2121 into dozer/recv@issue-2121
received 44.8K stream in 1 seconds (44.8K/sec)
root@linux:~# zfs get compression
NAME                   PROPERTY     VALUE     SOURCE
dozer                  compression  off       local
dozer/recv             compression  off       inherited from dozer
dozer/recv@issue-2121  compression  -         -
tank                   compression  gzip      local
tank@issue-2121        compression  -         -
tank/send              compression  gzip      inherited from tank
tank/send@issue-2121   compression  -         -
root@linux:~# 
root@linux:~# 

@stale

stale bot commented Aug 25, 2020

This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the Status: Stale No recent activity for issue label Aug 25, 2020
@stale stale bot closed this as completed Nov 24, 2020