cannot zpool remove special allocation class device #9038
Where did you get such information? Man pages don't mention this.
The best way is to recreate your pool. Or you can add a mirror disk to each of the special devices.
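A sketch of that suggestion, assuming the device names shown in the pool listing in this report (the `ata-new-ssd-*` names are placeholders for the SSDs being added):

```shell
# Attach a second SSD to each standalone special device, turning each
# single-disk special vdev into a two-way mirror.
# The first argument is the pool name; the special vdev is implied by
# the existing device being attached to.
zpool attach pool ata-ADATA_SP550_2F4320041688 ata-new-ssd-1
zpool attach pool ata-KINGSTON_SV300S37A120G_50026B77630CCB2C ata-new-ssd-2
```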
Thank you for your answer. I did not expect zfs to have a trap where I cannot trivially undo a simple mistake: I missed the word "mirror" in the "zpool add special" command. Perhaps the zfs documentation should have an example for a typical situation: I have a redundant raidz2 data array and I want to add a "special allocation class" vdev to improve performance. Obviously this vdev should also be redundant (i.e. a mirror set of 2 SSDs) to protect the main raidz2 array. If this action is not undoable (once added, the "special allocation class" devices cannot be removed), as seems to be the case, the documentation should clearly spell it out. K.O.
I should mention that, frankly, as a lay person, I do not understand what those words mean. Certainly "zpool remove" works just fine for the l2arc cache storage device. If special allocation class devices can only be added and cannot be removed, I vote that the documentation spell it out clearly. On my side, I would submit an RFE asking that ZFS permit removal of "special allocation devices". I do not know if this is hard or easy to implement; I will let the developers fight it out. K.O.
I think this is the direction I want to try. It is easier for me to add another pair of SSDs. How do I do this? Will this work:
zpool attach special ata-ADATA_SP550_2F4320041688c0t0d0s0 ata-one-more-ssd
(bombs with cannot open 'special': no such pool)
or maybe this:
zpool attach pool ata-ADATA_SP550_2F4320041688c0t0d0s0 ata-one-more-ssd
I find it confusing that "zpool status" and "zpool list -v" show "special" as if it were a top-level pool name, but the zpool commands do not accept "special" as a pool name. Perhaps "special" should be shown under the "pool" that it is attached to, unless "special" is shared by all pools. K.O.
More. This looks like a repeat of #6907. In that case "zpool add" now has protections against accidental reduction of redundancy. Perhaps adding special allocation class devices should have similar protections?
More. Removal of accidentally added storage devices seems to be implemented in zfs 0.8.1, but it does not work for the "special allocation class" devices? See #6900.
For some reason the "Describe how to reproduce the problem" section in the bug report seems to be missing. Can you please provide the exact sequence of commands used to add the "special" device(s) to the pool? E.g. output of
[root@daqbackup ~]# zpool list -v
Notice the absence of "mirror" in the "zpool add" command (wrt the example in the previous comment). A simple mistake that cannot be undone. K.O.
Hmm... when I add the "special allocation class" SSDs for faster metadata access, K.O. |
This is very odd, with a raidz2 top-level
zpool attach is the command you want. It shouldn't need to know that it is attaching to a special vdev (that is implied by the device you are attaching to).
Before adding a vdev (special or otherwise), taking a checkpoint first can allow you to recover from a mistake. There is also the -n flag, which lets you observe the outcome of the zpool add command without actually committing the change to your pool.
Pretty sure you can remove a special vdev; the issue you have encountered is that a raidz vdev cannot store the redirection table that would currently be generated (L2ARC and SLOG removals don't generate this table).
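The checkpoint workflow mentioned above can be sketched as follows (pool and device names are placeholders):

```shell
# Take a checkpoint before the risky operation
zpool checkpoint pool

# Dry-run the add first; -n prints the resulting layout
# without changing the pool, then commit for real
zpool add -n pool special mirror ata-ssd-1 ata-ssd-2
zpool add pool special mirror ata-ssd-1 ata-ssd-2

# If the add was a mistake, rewind to the checkpoint
# (this discards all changes made after the checkpoint was taken)
zpool export pool
zpool import --rewind-to-checkpoint pool

# Once satisfied, discard the checkpoint to free its space
zpool checkpoint -d pool
```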
You have 0.8.0, I have 0.8.1. Maybe this is why your "zpool add special" was rejected, but mine was accepted. K.O.
So my "zpool remove" would have worked if my main array were mirrored HDDs instead of raidz HDDs? This is good; mirrored HDDs are the more typical use case for us these days. Also thanks for reminding me about "-n" and about checkpointing. K.O.
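For reference, a sketch of what removal looks like when the top-level data vdevs are mirrors rather than raidz (the pool name "tank" and the vdev name are placeholders; the actual vdev name comes from zpool status):

```shell
# On an all-mirror pool, a top-level special vdev can be evacuated and
# removed; "mirror-2" here stands for the special mirror's vdev name
# as shown by zpool status.
zpool remove tank mirror-2
zpool status tank   # shows the evacuation progress while data is remapped
```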
WTF!?! You typed the same command as I did and you got the error that I wish I had gotten. Your zfs-0.8.1 is better than my zfs-0.8.1. Where did you get it? My copy is from zfs-testing-kmod; I updated from 0.7.something using these commands (centos-7). I did reboot to make sure the kernel module is updated. Could it be that you have the old 0.8.0 kernel module still loaded? (No, cannot be: "zfs version" reports the kernel module version from /sys/module/zfs/version, and yours says 0.8.1.) So, WTF.
For reference, this is the command I ran:
[root@daqbackup ~]# zpool add pool special /dev/disk/by-id/ata-ADATA_SP550_2F4320041688 /dev/disk/by-id/ata-KINGSTON_SV300S37A120G_50026B77630CCB2C
K.O.
Maybe you are trying to add a special device without the same replication level. Quoting the zpool man page:
What does "same replication level" mean? It is not spelled out. If the main data array is 8xHDD in a raidz2 configuration, I see 3 choices; which of them is meant as the "same level" on the man page? a) 4xSSD in a raidz2 configuration ("same" same, but this is silly!) or K.O.
What's meant is c) from above. You may forcibly override the warning if you want b) using the
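A sketch of that override, assuming the force flag is used to add a mirrored special vdev to a raidz2 pool (device names are placeholders):

```shell
# -f forces the add despite the replication-level mismatch between the
# raidz2 data vdev and the 2-way special mirror (use with care: the
# warning exists because the special vdev becomes pool-critical)
zpool add -f pool special mirror /dev/disk/by-id/ata-ssd-1 /dev/disk/by-id/ata-ssd-2
```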
Thank you for your help. I am following the instructions from this ticket and I have recreated the ZFS array from scratch. (Unfortunately I lost all the data from the original ZFS array to an unexpected failure of a 6TB disk. On the plus side, I learned how to use zfs send and receive in the presence of disk I/O errors. And I liked what I saw: zfs does very reasonable things in the presence of disk failure.)
Anyhow, I confirm Brian B.'s information about option (c). This time I got the expected messages from zpool. I am mystified why I did not see them on my first attempt. (I confirm that the commands and messages from my original report are what I did and saw at the time.)
This is what I got this time; everything is as expected. After adding "-f", the array was successfully created and I am now filling it with data. I do see a performance improvement from using SSDs for metadata storage.
K.O.
FWIW, I see a factor of 2 performance gain from using mirrored SSDs for metadata storage (compared to the HDD-only array). The application is an rsync+snapshot backup system for linux home directories and linux system partitions. The slowest rsync is down from 2 hours to 1 hour; a typical rsync of "/" is down from 0.5 hours to a few minutes. Thank you all for the good work. K.O.
This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.
I believe this is still an issue. #9772 (comment) suggests that it is possible to work around it. In my case, starting with a pool in this configuration, where there's an ongoing replace:
Then running:
zpool add storage /dev/disk/by-id/ata-ST16000NT001-REDACTED5-part1
My understanding is that this command should fail because of the checks in #6911. However, the command "succeeds", and we get this pool layout:
Now in this state (with the other
I'm pretty sure that the action required to correct this is still "reroll the storage pool", but I can't do a lot here until the replace is done, and that'll take a while. A possible cause for this is that
It would be a nice enhancement if ZFS could remove special devices or regular vdevs by rewriting their contents onto the other remaining vdevs in the pool. I'm sure many dragons exist here.
System information
Describe the problem you're observing
Cannot remove device from pool:
zpool remove pool ata-KINGSTON_SV300S37A120G_50026B77630CCB2C
cannot remove ata-KINGSTON_SV300S37A120G_50026B77630CCB2C: invalid config; all top-level vdevs must have the same sector size and not be raidz.
I am trying to undo a mistake, where the "special allocation class" devices have been added to the pool as individual devices instead of as a mirror device. I expected "zpool remove" to work the same as for the l2arc "cache" devices which can be added and removed at will. If "zpool remove" cannot be made to work, what else can I try? Do I now have to erase the whole data array and start from scratch?!? After such a simple mistake?!?
[root@daqbackup ~]# zpool list -v
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
pool 14.7T 12.5T 2.23T - - 30% 84% 1.00x ONLINE -
raidz2 14.5T 12.5T 2.03T - - 30% 86.0% - ONLINE
ata-WDC_WD20EARS-00MVWB0_WD-WCAZA3872943-part1 - - - - - - - - ONLINE
ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E0768606 - - - - - - - - ONLINE
ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E1580087 - - - - - - - - ONLINE
ata-WDC_WD20EARS-00MVWB0_WD-WCAZA1973369-part1 - - - - - - - - ONLINE
ata-WDC_WD20EARS-00MVWB0_WD-WMAZA0858733-part1 - - - - - - - - ONLINE
ata-WDC_WD40EZRX-00SPEB0_WD-WCC4ENZ18YFL - - - - - - - - ONLINE
ata-WDC_WD20EARS-00MVWB0_WD-WMAZA0857075-part1 - - - - - - - - ONLINE
ata-WDC_WD2002FYPS-01U1B0_WD-WCAVY0370983-part1 - - - - - - - - ONLINE
special - - - - - - - - -
ata-ADATA_SP550_2F4320041688 111G 12.4G 98.6G - - 0% 11.2% - ONLINE
ata-KINGSTON_SV300S37A120G_50026B77630CCB2C 111G 5.56G 105G - - 0% 5.01% - ONLINE
[root@daqbackup ~]#
olchansk@triumf.ca
K.O.