cannot zpool remove special allocation class device #9038

Closed
dd1dd1 opened this issue Jul 16, 2019 · 26 comments
Labels
Status: Stale No recent activity for issue

Comments

@dd1dd1

dd1dd1 commented Jul 16, 2019

System information

Type Version/Name
Distribution Name CentOS
Distribution Version 7
Linux Kernel 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Architecture
ZFS Version zfs-0.8.1-1.el7.x86_64
SPL Version n/a

Describe the problem you're observing

Cannot remove device from pool:

zpool remove pool ata-KINGSTON_SV300S37A120G_50026B77630CCB2C

cannot remove ata-KINGSTON_SV300S37A120G_50026B77630CCB2C: invalid config; all top-level vdevs must have the same sector size and not be raidz.

I am trying to undo a mistake, where the "special allocation class" devices have been added to the pool as individual devices instead of as a mirror device. I expected "zpool remove" to work the same as for the l2arc "cache" devices which can be added and removed at will. If "zpool remove" cannot be made to work, what else can I try? Do I now have to erase the whole data array and start from scratch?!? After such a simple mistake?!?

[root@daqbackup ~]# zpool list -v
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
pool 14.7T 12.5T 2.23T - - 30% 84% 1.00x ONLINE -
  raidz2 14.5T 12.5T 2.03T - - 30% 86.0% - ONLINE
    ata-WDC_WD20EARS-00MVWB0_WD-WCAZA3872943-part1 - - - - - - - - ONLINE
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E0768606 - - - - - - - - ONLINE
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E1580087 - - - - - - - - ONLINE
    ata-WDC_WD20EARS-00MVWB0_WD-WCAZA1973369-part1 - - - - - - - - ONLINE
    ata-WDC_WD20EARS-00MVWB0_WD-WMAZA0858733-part1 - - - - - - - - ONLINE
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4ENZ18YFL - - - - - - - - ONLINE
    ata-WDC_WD20EARS-00MVWB0_WD-WMAZA0857075-part1 - - - - - - - - ONLINE
    ata-WDC_WD2002FYPS-01U1B0_WD-WCAVY0370983-part1 - - - - - - - - ONLINE
special - - - - - - - - -
  ata-ADATA_SP550_2F4320041688 111G 12.4G 98.6G - - 0% 11.2% - ONLINE
  ata-KINGSTON_SV300S37A120G_50026B77630CCB2C 111G 5.56G 105G - - 0% 5.01% - ONLINE
[root@daqbackup ~]#

olchansk@triumf.ca
K.O.

@gmelikov
Member

I expected "zpool remove" to work the same as for the l2arc "cache" devices which can be added and removed at will.

Where did you get such information? Man pages don't mention this.

zpool remove of normal (persistent) vdevs won't work on pools that contain any RAIDZ* top-level vdev. It's documented: https://github.com/zfsonlinux/zfs/blob/master/man/man8/zpool.8#L2058

The best option is to recreate your pool. Alternatively, you can attach a mirror disk to each of the special devices.
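For the second option, something like this should convert each standalone special device into a mirror (a sketch only - the existing device names are taken from your listing, the new SSD names are placeholders):

# Attach a new SSD to each existing special device, turning it into a 2-way mirror.
zpool attach pool ata-ADATA_SP550_2F4320041688 ata-NEW-SSD-1
zpool attach pool ata-KINGSTON_SV300S37A120G_50026B77630CCB2C ata-NEW-SSD-2

Once both resilvers finish, zpool status should show each special device as a mirror.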

@dd1dd1
Author

dd1dd1 commented Jul 16, 2019

Thank you for your answer. I did not expect zfs to have a trap where I cannot trivially undo a simple mistake - I missed the word "mirror" in the "zpool add ... special" command.

Perhaps the zfs documentation should have an example for a typical situation: I have a redundant raidz2 data array and I want to add a "special allocation class" vdev to improve performance. Obviously this vdev should also be redundant (i.e. a mirror set of 2 SSDs) to protect the main raidz2 array from a single SSD failure.

If this action is not undoable - once added, a "special allocation class" vdev cannot be removed - as seems to be the case, the documentation should clearly spell it out.
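For example, the documented recipe could look something like this (my sketch; the SSD names are placeholders, and as discussed later in this thread zpool may still warn unless the mirror's redundancy matches the raidz2 level):

# Add a mirrored special allocation class vdev to an existing raidz2 pool.
zpool add pool special mirror ata-SSD-A ata-SSD-B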

K.O.

@dd1dd1
Author

dd1dd1 commented Jul 16, 2019

zpool remove won't work on pools for persistent VDEVs with any RAIDZ* in it. It's documented https://github.com/zfsonlinux/zfs/blob/master/man/man8/zpool.8#L2058

I should mention that, frankly, as a lay person, I do not understand what those words mean. Certainly "zpool remove" works just fine for the l2arc cache storage device. If special allocation class devices can only be added and cannot be removed, I vote that the documentation spells it out clearly.

On my side, I would submit an RFE asking that ZFS permit removal of "special allocation class" devices. I do not know if this is hard or easy to implement; I'll let the developers fight it out.

K.O.

@dd1dd1
Author

dd1dd1 commented Jul 16, 2019

you can add a mirror disk to each of special devs.

I think this is the direction I want to try. It is easier for me to add another pair of SSDs than to blow away the existing array and start from scratch. (I would have to find/build another storage array of similar size, copy all the data, recreate my array, and copy all the data back - only about 1 week of work to "recreate your pool".)

How do I do this? Will this work:

zpool attach special ata-ADATA_SP550_2F4320041688c0t0d0s0 ata-one-more-ssd (bombs with cannot open 'special': no such pool)

or maybe this:

zpool attach pool ata-ADATA_SP550_2F4320041688c0t0d0s0 ata-one-more-ssd

I find it confusing that "zpool status" and "zpool list -v" show "special" as if it were a top-level pool name, but the zpool commands do not accept "special" as a pool name. Perhaps "special" should be shown under the "pool" that it is attached to, unless "special" is shared by all pools.
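If I understand the output correctly (not verified), "special" is just a class header inside "pool", not a separate pool or vdev name, so the second form above should be the right shape - using the device name exactly as shown in zpool list, without the c0t0d0s0 suffix. After a successful attach I would expect something like this in zpool status (the new SSD names are placeholders):

	special
	  mirror-1
	    ata-ADATA_SP550_2F4320041688
	    ata-one-more-ssd
	  mirror-2
	    ata-KINGSTON_SV300S37A120G_50026B77630CCB2C
	    ata-yet-another-ssd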

K.O.

@dd1dd1
Author

dd1dd1 commented Jul 16, 2019

More: this looks like a repeat of #6907. In that case, "zpool add" now has protections against accidental reduction of redundancy. Perhaps adding special allocation class devices should have similar protections?
K.O.

@dd1dd1
Author

dd1dd1 commented Jul 16, 2019

More: removal of accidentally added storage devices seems to be implemented in zfs 0.8.1 (#6900), but it does not work for "special allocation class" devices?
K.O.

@loli10K
Contributor

loli10K commented Jul 16, 2019

For some reason the "Describe how to reproduce the problem" section in the bug report seems to be missing; can you please provide the exact sequence of commands used to add the "special" device(s) to the pool, e.g. the output of zpool history -i | grep add?

@loli10K
Contributor

loli10K commented Jul 16, 2019

root@linux:~# POOLNAME='testpool'
root@linux:~# TMPDIR='/var/tmp'
root@linux:~# mountpoint -q $TMPDIR || mount -t tmpfs tmpfs $TMPDIR
root@linux:~# zpool destroy -f $POOLNAME
root@linux:~# rm -f $TMPDIR/zpool.dat
root@linux:~# truncate -s 128m $TMPDIR/zpool{1,2,3,4,5}.dat
root@linux:~# zpool create -f -O mountpoint=none $POOLNAME raidz $TMPDIR/zpool{1,2,3}.dat
root@linux:~# 
root@linux:~# zpool add $POOLNAME special $TMPDIR/zpool4.dat
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is file
root@linux:~# echo $?
1
root@linux:~# zpool add $POOLNAME special $(losetup -f $TMPDIR/zpool4.dat --show)
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
root@linux:~# echo $?
1
root@linux:~# zpool add $POOLNAME special mirror $TMPDIR/zpool{4,5}.dat
root@linux:~# echo $?
0
root@linux:~# zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

	NAME                     STATE     READ WRITE CKSUM
	testpool                 ONLINE       0     0     0
	  raidz1-0               ONLINE       0     0     0
	    /var/tmp/zpool1.dat  ONLINE       0     0     0
	    /var/tmp/zpool2.dat  ONLINE       0     0     0
	    /var/tmp/zpool3.dat  ONLINE       0     0     0
	special	
	  mirror-1               ONLINE       0     0     0
	    /var/tmp/zpool4.dat  ONLINE       0     0     0
	    /var/tmp/zpool5.dat  ONLINE       0     0     0

errors: No known data errors
root@linux:~# 

@dd1dd1
Author

dd1dd1 commented Jul 16, 2019

it was requested in the original special allocation class ticket (#5182) that this check be removed.

This makes no sense. I would not be using ZFS if I wanted non-redundant storage (zfs metadata stored on a single SSD, which can fail at any time).

K.O.

@dd1dd1
Author

dd1dd1 commented Jul 16, 2019

For some reason the "Describe how to reproduce the problem"

[root@daqbackup ~]# zpool list -v
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
pool 14.5T 12.4T 2.06T - - 30% 85% 1.00x ONLINE -
  raidz2 14.5T 12.4T 2.06T - - 30% 85.8% - ONLINE
    ata-WDC_WD20EARS-00MVWB0_WD-WCAZA3872943-part1 - - - - - - - - ONLINE
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E0768606 - - - - - - - - ONLINE
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E1580087 - - - - - - - - ONLINE
    ata-WDC_WD20EARS-00MVWB0_WD-WCAZA1973369-part1 - - - - - - - - ONLINE
    ata-WDC_WD20EARS-00MVWB0_WD-WMAZA0858733-part1 - - - - - - - - ONLINE
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4ENZ18YFL - - - - - - - - ONLINE
    ata-WDC_WD20EARS-00MVWB0_WD-WMAZA0857075-part1 - - - - - - - - ONLINE
    ata-WDC_WD2002FYPS-01U1B0_WD-WCAVY0370983-part1 - - - - - - - - ONLINE
[root@daqbackup ~]#
[root@daqbackup ~]# zpool add pool special /dev/disk/by-id/ata-ADATA_SP550_2F4320041688 /dev/disk/by-id/ata-KINGSTON_SV300S37A120G_50026B77630CCB2C
... takes a long time (unexpectedly) ...
[root@daqbackup ~]# zpool list -v
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
pool 14.7T 12.5T 2.23T - - 30% 84% 1.00x ONLINE -
  raidz2 14.5T 12.5T 2.03T - - 30% 86.0% - ONLINE
    ata-WDC_WD20EARS-00MVWB0_WD-WCAZA3872943-part1 - - - - - - - - ONLINE
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E0768606 - - - - - - - - ONLINE
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E1580087 - - - - - - - - ONLINE
    ata-WDC_WD20EARS-00MVWB0_WD-WCAZA1973369-part1 - - - - - - - - ONLINE
    ata-WDC_WD20EARS-00MVWB0_WD-WMAZA0858733-part1 - - - - - - - - ONLINE
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4ENZ18YFL - - - - - - - - ONLINE
    ata-WDC_WD20EARS-00MVWB0_WD-WMAZA0857075-part1 - - - - - - - - ONLINE
    ata-WDC_WD2002FYPS-01U1B0_WD-WCAVY0370983-part1 - - - - - - - - ONLINE
special - - - - - - - - -
  ata-ADATA_SP550_2F4320041688 111G 12.4G 98.6G - - 0% 11.2% - ONLINE
  ata-KINGSTON_SV300S37A120G_50026B77630CCB2C 111G 5.57G 105G - - 0% 5.01% - ONLINE
[root@daqbackup ~]#

Notice the absence of "mirror" in the "zpool add" command (compared with the example in the previous comment). A simple mistake that cannot be undone.

K.O.

@dd1dd1
Author

dd1dd1 commented Jul 16, 2019

recreate the array

Hmm... when I add the "special allocation class" SSDs for faster metadata access, all the existing metadata remains on the slow HDDs, so I gain nothing in performance until the data churns around and metadata slowly moves to the SSDs. So to gain the full advantage of this zfs feature, I should recreate the array and refill it with data anyway. Yes?
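Or could I avoid a full rebuild by rewriting the data in place? Something like this is my guess (dataset names are made up, and it needs enough free space for a second copy while it runs):

# Rewriting a dataset should place its newly written metadata on the special vdev.
zfs snapshot pool/data@migrate
zfs send pool/data@migrate | zfs receive pool/data-rewritten
# after verifying the copy:
# zfs destroy -r pool/data
# zfs rename pool/data-rewritten pool/data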

K.O.

@loli10K
Contributor

loli10K commented Jul 17, 2019

This is very odd: with a raidz2 top-level vdev, zpool add should fail, even with mirrored "special" devices:

root@linux:~# zpool status
  pool: pool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	pool        ONLINE       0     0     0
	  raidz2-0  ONLINE       0     0     0
	    sda     ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	    sdc     ONLINE       0     0     0
	    sdd     ONLINE       0     0     0
	    sde     ONLINE       0     0     0
	    sdf     ONLINE       0     0     0
	    sdg     ONLINE       0     0     0

errors: No known data errors
root@linux:~# zpool add pool special /dev/sdh /dev/sdi 
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
root@linux:~# zpool add pool special mirror /dev/sdh /dev/sdi 
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool and new vdev with different redundancy, raidz and mirror vdevs, 2 vs. 1 (2-way)
root@linux:~# 
root@linux:~# zfs version
zfs-0.8.0-117_g7feb7ac4c
zfs-kmod-0.8.0-112_g67af199fb
root@linux:~# 

@RonCollinson

RonCollinson commented Jul 17, 2019 via email

@dd1dd1
Author

dd1dd1 commented Jul 17, 2019

This is very odd, with a raidz2 top-level zpool add should fail ...
root@linux:~# zfs version
zfs-0.8.0-117_g7feb7ac4c

You have 0.8.0, I have 0.8.1. Maybe this is why your "zpool add special" was rejected, but mine accepted.

K.O.

@dd1dd1
Author

dd1dd1 commented Jul 17, 2019

Pretty sure you can remove a special vdev; the issue you have encountered is that a raidz vdev cannot store the redirection table that would currently be generated (L2ARC and SLOG removals don't generate this table)

So my "zpool remove" would have worked if my main array were mirrored HDDs instead of raidz HDDs? This is good - mirrored HDDs are a more typical use case for us these days.

Also thanks for reminding me about "-n" and about checkpointing.
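For my own notes, this is how I understand those two safeguards (untested on my pool; the device name is taken from my listing):

# Dry-run removal: report what the removal would do (e.g. mapping-table memory)
# without actually removing anything - only useful on pools where removal is allowed.
zpool remove -n pool ata-KINGSTON_SV300S37A120G_50026B77630CCB2C

# Take a checkpoint before a risky operation, then discard it once satisfied.
zpool checkpoint pool
# ... do the risky thing ...
zpool checkpoint --discard pool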

K.O.

@loli10K
Contributor

loli10K commented Jul 17, 2019

You have 0.8.0, I have 0.8.1. Maybe this is why your "zpool add special" was rejected, but mine accepted.

root@linux:~# zpool add pool special /dev/sdi
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
root@linux:~# echo $?
1
root@linux:~# zpool add pool special /dev/sdi /dev/sdh
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
root@linux:~# echo $?
1
root@linux:~# zpool add pool special mirror /dev/sdi /dev/sdh
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool and new vdev with different redundancy, raidz and mirror vdevs, 2 vs. 1 (2-way)
root@linux:~# echo $?
1
root@linux:~# zpool status
  pool: pool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	pool        ONLINE       0     0     0
	  raidz2-0  ONLINE       0     0     0
	    sda     ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	    sdc     ONLINE       0     0     0
	    sdd     ONLINE       0     0     0
	    sde     ONLINE       0     0     0
	    sdf     ONLINE       0     0     0
	    sdg     ONLINE       0     0     0

errors: No known data errors
root@linux:~# zfs version
zfs-0.8.1-1
zfs-kmod-0.8.1-1
root@linux:~# 

@dd1dd1
Author

dd1dd1 commented Jul 17, 2019

root@linux:~# zpool add pool special /dev/sdi /dev/sdh
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk

WTF!?! You typed the same command as I did and got the error that I wish I had gotten. Your zfs-0.8.1 is better than my zfs-0.8.1. Where did you get it? My copy is from zfs-testing-kmod; I updated from 0.7.something using these commands (CentOS 7). I did reboot to make sure the kernel module was updated. Could it be that you still have the old 0.8.0 kernel module loaded? No, it cannot be: "zfs version" reports the kernel module version from /sys/module/zfs/version, and yours says 0.8.1. So, WTF.

yum --enablerepo=zfs-testing-kmod --showduplicates list zfs
yum --enablerepo=zfs-testing-kmod --showduplicates update
yum --enablerepo=zfs-testing-kmod --showduplicates erase zfs spl
yum --enablerepo=zfs-testing-kmod --showduplicates install zfs

For reference, these are the commands I have run:

[root@daqbackup ~]# zpool add pool special /dev/disk/by-id/ata-ADATA_SP550_2F4320041688 /dev/disk/by-id/ata-KINGSTON_SV300S37A120G_50026B77630CCB2C
[root@daqbackup ~]# zfs version
zfs-0.8.1-1
zfs-kmod-0.8.1-1

K.O.

@cibi7

cibi7 commented Aug 3, 2019

Maybe you are trying to add a special device without the same replication level. zpool man quote:
The redundancy of this device should match the redundancy of the other normal devices in the pool.

@dd1dd1
Author

dd1dd1 commented Aug 4, 2019

Maybe you are trying to add a special device without the same replication level.

What does "same replication level" mean? It is not spelled out.

If the main data array is 8xHDD in a raidz2 configuration, I see 3 choices; which one of them is meant by "same level" in the man page?

a) 4xSSD in a raidz2 configuration (literally the "same", but this is silly!), or
b) 2xSSD in a mirror configuration (this is what I want), or
c) 3xSSD in a mirror configuration (the same level of redundancy as raidz2 - both raidz2 and a triple mirror survive the loss of 2 disks).

K.O.

@behlendorf
Contributor

What does "same replication level" mean?

What's meant is c) from above. You may forcibly override the warning if you want b), using the -f flag when adding the devices.
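Roughly (placeholder SSD names):

# (c) match the raidz2 redundancy with a 3-way mirror:
zpool add pool special mirror ssd1 ssd2 ssd3
# (b) accept a 2-way mirror for the special class by overriding the check:
zpool add -f pool special mirror ssd1 ssd2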

@dd1dd1
Author

dd1dd1 commented Aug 24, 2019

What's meant in c) from above.

Thank you for your help. I followed the instructions from this ticket and recreated the ZFS array from scratch. (Unfortunately, I lost all the data from the original ZFS array to an unexpected failure of a 6TB disk. On the plus side, I learned how to use zfs send and receive in the presence of disk I/O errors, and I liked what I saw; zfs does very reasonable things in the presence of disk failure.) Anyhow,

I confirm Brian B.'s information about option (c). This time I got the expected messages from zpool. I am mystified why I did not see them on my first attempt. (I confirm that the commands and messages from my original report are what I did and saw at the time.)

This is what I got this time; everything is as expected. After adding "-f", the array was successfully created and I am now filling it with data. I do see a performance improvement from using SSDs for metadata storage.

zpool create test raidz2 `ls -1 /dev/disk/by-id/ata-WDC_WD40EZRX-00SPEB0_WD* | grep -v part`
zpool add -f test special mirror /dev/disk/by-id/ata-WDC_WDS120G2G0A-00JH30_1843A2802212 /dev/disk/by-id/ata-KINGSTON_SV300S37A120G_50026B77630CCB2C
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool and new vdev with different redundancy, raidz and mirror vdevs, 2 vs. 1 (2-way)
[root@daqbackup gobackup]# zpool list -v
NAME                                              SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool                                             21.9T  1.61T  20.3T        -         -     0%     7%  1.00x    ONLINE  -
  raidz2                                         21.8T  1.61T  20.2T        -         -     0%  7.36%      -  ONLINE  
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E0555419         -      -      -        -         -      -      -      -  ONLINE  
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E0602953         -      -      -        -         -      -      -      -  ONLINE  
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E0768606         -      -      -        -         -      -      -      -  ONLINE  
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E1580087         -      -      -        -         -      -      -      -  ONLINE  
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E34ENLNL         -      -      -        -         -      -      -      -  ONLINE  
    ata-WDC_WD40EZRX-00SPEB0_WD-WCC4ENZ18YFL         -      -      -        -         -      -      -      -  ONLINE  
special                                              -      -      -        -         -      -      -      -  -
  mirror                                          111G  7.79G   103G        -         -     7%  7.02%      -  ONLINE  
    ata-WDC_WDS120G2G0A-00JH30_1843A2802212          -      -      -        -         -      -      -      -  ONLINE  
    ata-KINGSTON_SV300S37A120G_50026B77630CCB2C      -      -      -        -         -      -      -      -  ONLINE  
[root@daqbackup gobackup]# 

K.O.

@dd1dd1
Author

dd1dd1 commented Aug 27, 2019

FWIW, I see a factor of 2 performance gain from using mirrored SSDs for metadata storage (compared to HDD-only array). The application is an rsync+snapshot backup system for linux home directories and linux system partitions. The slowest rsync is down from 2 hours to 1 hour, typical rsync of "/" is down from 0.5 hours to a few minutes.

Thank you all for the good work.

K.O.

@stale

stale bot commented Aug 26, 2020

This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.

stale bot added the "Status: Stale" label Aug 26, 2020
stale bot closed this as completed Nov 24, 2020
@micolous

I believe this is still an issue as of zfs-2.1.11-1, and given that a bot auto-closed this issue, I'd strongly suspect it's not fixed in the current OpenZFS release either.

According to #9772 (comment), it is possible to use zpool add on a pool with a raidz# vdev, and it can be allowed even without -f, which #6911 should have prevented.

In my case, I started from a pool in this configuration, where there's an ongoing zpool replace of a drive in the raidz2 (I don't think the replace is relevant here, but I've noted it for completeness):

storage
  raidz2-0
    ata-ST16000NT001-REDACTED1-part1
    replacing-1
      ata-ST16000NT001-REDACTED2-part1
      ata-ST12000VN0007-REDACTED8-part1
    ata-ST12000NT001-REDACTED9-part1
    ata-ST16000NT001-REDACTED3-part1

Then running:

zpool add storage /dev/disk/by-id/ata-ST16000NT001-REDACTED5-part1

Note that -f is not set.

My understanding is that this command should fail because of the checks in #6911.

However, the command "succeeds", and we get this pool layout shown in zpool status:

storage
  raidz2-0
    ata-ST16000NT001-REDACTED1-part1
    replacing-1
      ata-ST16000NT001-REDACTED2-part1
      ata-ST12000VN0007-REDACTED8-part1
    ata-ST12000NT001-REDACTED9-part1
    ata-ST16000NT001-REDACTED3-part1
  ata-ST16000NT001-REDACTED5-part1

ata-ST16000NT001-REDACTED5-part1 is now at the top-level, has increased the size of the storage pool and reduced the redundancy of the pool (and added a single point of failure).

Now in this state (with the other replace still ongoing):

  • zpool remove storage ata-ST16000NT001-REDACTED5 fails with "invalid config; all top-level vdevs must have the same sector size and not be raidz"

  • zpool detach storage ata-ST16000NT001-REDACTED5 fails with "only applicable to mirror and replacing vdevs"

I'm pretty sure that the action required to correct this is still "reroll the storage pool" - but I can't do a lot here until the replace is done, and that'll take a while.

A possible cause for this is that /dev/disk/by-id/*-part1 are all symlinks to the underlying block devices (/dev/sdX1); that pattern appears both in this bug and in #9772 (comment).
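Not a fix, but a possible safeguard for anyone about to run zpool add on a pool like this (a sketch; the device name is a placeholder): preview the layout with -n and take a checkpoint first, so an accidental top-level add can be rewound.

# Show the resulting configuration without modifying the pool.
zpool add -n storage /dev/disk/by-id/ata-EXAMPLE-DISK-part1

# Checkpoint before the real add; rewinding the checkpoint undoes the add
# (rewinding requires an export/import and discards everything written since).
zpool checkpoint storage
zpool add storage /dev/disk/by-id/ata-EXAMPLE-DISK-part1
# if the layout is wrong:
#   zpool export storage && zpool import --rewind-to-checkpoint storage
# otherwise:
#   zpool checkpoint --discard storage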

@lachesis

It would be a nice enhancement if zfs could remove special devices or regular vdevs by rewriting their contents onto the other remaining vdevs in the pool. I'm sure many dragons exist here.

@tonyhutter
Contributor

It would be a nice enhancement if zfs could remove special devices or regular vdevs by rewriting their contents onto the other remaining vdevs in the pool.

@lachesis Not quite what you're asking for, but: #16185
