Faulted after upgrade on linux - files intact on BSD ZFS #984
Start with …
I tried an export at first, thinking that when I went to import it might correct some error. I was wrong. Also, zpool import -fD tank does not import; here's the output (it would be the same for status): bill@workbox:~$ sudo zpool import
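For reference, a minimal sketch of the export/re-import sequence being discussed, assuming the pool is named tank; the -d search path is an illustrative assumption, not something taken from the thread:

    # Export the pool cleanly (only possible while it is imported).
    sudo zpool export tank

    # Scan a specific device directory and force the import. Note that
    # -D is only for pools that were explicitly destroyed, so it is
    # usually not the right flag for a merely faulted pool.
    sudo zpool import -d /dev/disk/by-id -f tank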
@MiceMiceRabies are all the devices available at the time /etc/init.d/zfs runs? I have a system here where the SAS controller takes maybe 20 seconds to find all the disks, so it typically faults on import.
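A quick way to check this by hand, as a sketch (the commands are standard udev/zpool tooling, not something quoted from the thread), is to confirm the kernel has finished enumerating the disks before any import is attempted:

    # Wait for udev to finish creating device nodes.
    sudo udevadm settle

    # List the disks the kernel currently sees.
    ls -l /dev/disk/by-id/

    # With no arguments, zpool import only scans and reports what it
    # could import, without actually importing anything.
    sudo zpool import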
@cwedgwood - I just tried restarting via init.d with no luck. I did boot to the latest NAS4FREE and was able to import the pool successfully there. Another interesting thing to note: during my last upgrade I noticed that the modules did not get built for the other kernels on the system.
Have you tried importing the pool after exporting it from FreeBSD?
Ryao, yes, that was the next step I tried. I still get the same output on the Linux side: STATE: UNAVAIL, and all drives are faulted.
ZoL seems to be trying to mount the ZFS pool from partitions. Are the disks in GPT format? Did you initially make the pool on whole disks or on partitions? Is CONFIG_EFI_PARTITION=y enabled in your kernel config? Do issues #94, #489 or #955 have any relevance here? The most likely candidate for your problem might be the last one (AF disk detection). Which ZoL version are you running? Is your /etc/zfs/zpool.cache stale?
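A minimal sketch of how one might check these points on a Debian/Ubuntu-style system; the kernel config path is an assumption about where the distribution ships it:

    # Confirm the running kernel was built with EFI/GPT partition support.
    grep CONFIG_EFI_PARTITION /boot/config-$(uname -r)

    # Show the partition table type (gpt or msdos) and layout of every disk.
    sudo parted -l

    # Check whether a cache file exists and when it was last written.
    ls -l /etc/zfs/zpool.cache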
The disks are GPT. The layout is (3) 1TB drives and (2) 2TB drives. The 2TB drives are partitioned in half, so I have a total of 5 devices in my pool; the leftover partitions are used for an LVG. CONFIG_EFI_PARTITION=y is enabled in the kernel config. The ZoL version is 0.6.0.80-0ubuntu2~lucid1. /etc/zfs/zpool.cache doesn't exist; the only file in /etc/zfs/ is zdev.conf. I think this looks more like issue #94 than the others because of the 1049kB offset; as for the others I'm not too sure, as I am very new to ZFS. Here's my drive layout: the 2TB drives (quantity 2) are Model: ATA SAMSUNG HD204UI (scsi), and the 1TB drives (quantity 3) are Model: ATA SAMSUNG HD103SJ (scsi).
No. I'll just repeat what I said in #94:
There is no "1049kB offset" issue. 1049 kB is 1024 KiB. Everything is fine.
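(In other words, the partitions start at the usual 1 MiB boundary: 1024 KiB × 1024 B/KiB = 1,048,576 B, which parted, reporting in SI units of 1000 B per kB, rounds to 1049 kB. That is the standard alignment offset, not a misalignment.)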
So then it's something else I'm missing. Good to know it's not #94.
Fixed! To fix it I purged the ubuntu-zfs package, then rebooted to a non-Xen kernel (3.2 bpo) and reinstalled the ubuntu-zfs package. Now all my disks appear as ONLINE again with no issues. pool: tank
errors: No known data errors
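A sketch of that recovery sequence as shell commands, assuming the ubuntu-zfs metapackage named in the thread and an apt-based system; the exact kernel selection step depends on the boot loader:

    # Remove the ZFS packages (and their DKMS-built modules).
    sudo apt-get purge ubuntu-zfs

    # Reboot into the non-Xen kernel via the boot loader menu, then
    # reinstall so the modules are rebuilt against the running kernel.
    sudo apt-get install ubuntu-zfs

    # Verify the pool comes back clean (import only if it was not
    # imported automatically at boot).
    sudo zpool import tank
    sudo zpool status tank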
Great, I'm glad you got it sorted out.
Just upgraded my ZFS on Linux; after a reboot my pools show "faulted, too many errors" on all disks.
I am running Debian squeeze with kernel 3.2.0. I'm not too sure where to go from here... I can supply any info needed to help track this bug.
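Since the report doesn't spell out which diagnostics to attach, here is a sketch of the commands that would typically be useful; these are standard zpool/dpkg tools, chosen as a suggestion rather than anything requested in the thread:

    # Pool state and per-device errors.
    sudo zpool status -v

    # What the import scanner sees on disk.
    sudo zpool import

    # Kernel messages from the ZFS and SPL modules.
    dmesg | grep -i -e zfs -e spl

    # Installed ZFS package versions.
    dpkg -l | grep -i zfs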