Permanent errors <metadata>:<0x714> #14693
3 comments · 13 replies
-
Are you seeing any IO errors at all, or just suddenly errors on disk? You might not want to set redundant_metadata=most when you're also setting copies=3; that seems a bit counterproductive.
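For reference, this is roughly how those two properties can be inspected and reverted; backup32 is the pool name used elsewhere in this thread, so substitute your own pool/dataset:

    # show the current settings
    zfs get copies,redundant_metadata backup32

    # drop any explicit redundant_metadata setting and fall back to the
    # default (all), so metadata keeps full redundancy alongside copies=3
    zfs inherit redundant_metadata backup32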
-
Nor would I envy you having to test that. I'm not really sure what ZFS could do better if that's the case, though; at the point where the hardware doesn't report an error, it's unclear what to do differently...
That said, I was about to ask which distro/kernel/OpenZFS version you're running, so I still will.
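For reference, that information is usually gathered with something along these lines:

    # distro, kernel, and OpenZFS userland + kernel-module versions
    cat /etc/os-release
    uname -r
    zfs version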
On Thu, Mar 30, 2023, 4:34 PM Redsandro wrote:
Update: The resilver/scrub I started, to see if it would make the pool state clearable, just suspended minutes before finishing, and now there are a ton of checksum errors. I have not observed this before, but this is the first time I'm attempting a scrub rather than a destroy and recreate.
  pool: backup32
 state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
  scan: scrub in progress since Thu Mar 30 16:21:51 2023
        1.95T scanned at 93.4M/s, 1.85T issued at 88.8M/s, 1.95T total
        0B repaired, 95.14% done, 00:18:35 to go
config:

        NAME                                          STATE   READ WRITE CKSUM
        backup32                                      ONLINE     0     0     0
          ata-Hitachi_000000000000000_00000000000000  ONLINE    11     0   687

errors: List of errors unavailable: pool I/O is currently suspended
-
One more interesting question: do you use ZFS encryption? It might have a weird interaction with three copies.
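A quick way to check is the encryption-related dataset properties, e.g.:

    # keystatus is only meaningful for encrypted datasets
    zfs get -r encryption,keystatus backup32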
On Fri, 31 Mar 2023, 10:08 Redsandro wrote:
Yes, of course, I should have said that from the start: Linux Mint (an Ubuntu 22.04.1 flavor), kernel 5.19.0-38-generic, 64-bit, with zfs-2.1.5-1ubuntu6~22.04.1 and zfs-kmod-2.1.5-1ubuntu6.
It will probably take some time before I have the hardware setup and the mental peace to retry this workflow on an Intel chipset. Let's call it plan C. If there are no other perspectives, that's what needs to be done.
Answer selected by Redsandro
-
I'm trying to use some single disks as offline backup using a SATA-to-USB 3 dock. While this is not the main raison d'être for ZFS, I figured this is a good use of copies=3. This is how I set up the disks:

While filling the pools with data, I've ended up with "Permanent errors" three times on two different disks now. I'm getting the following message from zpool:

Regarding "Restore the file in question if possible. Otherwise restore the entire pool from backup": first, I can't figure out what the "file in question" is. I consider the message not helpful in that regard, and perhaps it can be improved. Second, this is happening near the end of filling a brand-new pool, so there is no backup.
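The setup commands themselves weren't quoted above. A rough sketch of the kind of single-disk, copies=3 pool being described might look like this; the device path is a placeholder and options such as ashift are assumptions, not the actual command line used:

    # one external disk in the USB dock, three copies of every block
    zpool create -o ashift=12 -O copies=3 backup32 /dev/disk/by-id/ata-EXAMPLE_DISK_ID

    # confirm the property is active
    zfs get copies backup32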
I have destroyed the pool and restarted for the third time now, this time on a different disk. The process of filling the pool takes hours, so I decided to see whether this behavior is indicative of something. After searching, I found that this Q/A was the most similar. However, my setup is really very simple: no external log/cache, no multi-disk, no ZFS send. I'm trying to find out whether the problem is the disks, the dock, or the rsync workflow itself.

Although the disks had not failed in the past, they do come from an older multi-disk ZFS pool and were replaced merely as a precaution; since they are 5 years old, they may be prone to age-based errors. I just want to use them as offline single-disk backups with some internal redundancy (copies=3) if I can, rather than discarding them as e-waste.
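One way to rule the ageing disks in or out is to check their SMART data through the dock; many USB-SATA bridges need the -d sat hint. A sketch, assuming smartmontools is installed and the disk shows up as /dev/sdX:

    # health summary, error log, reallocated/pending sector counts
    smartctl -d sat -a /dev/sdX

    # optionally run a long self-test and re-check afterwards
    smartctl -d sat -t long /dev/sdX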
Unfortunately I didn't record the exact error logs from previous attempts, but they were in the same format: <metadata>:<0xSomething>. I don't know what it means, but the value in that last part keeps changing.
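For what it's worth, the list of affected objects comes from the verbose status output, at least while the pool is not suspended:

    # -v lists the files/objects with permanent errors; an entry like
    # <metadata>:<0x714> refers to an internal pool metadata object rather
    # than a regular file, which is why no filename can be shown for it
    zpool status -v backup32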