
"zpool import" hangs #6244

Closed
gregfr opened this issue Jun 18, 2017 · 17 comments
Labels
Status: Stale No recent activity for issue

Comments

@gregfr

gregfr commented Jun 18, 2017

Greetings
I have an old proxmox server with 2 zpools. Yesterday a VM hung, forcing me to restart. Now one of the pools can be imported, but the other hangs. After some research on Google, I tried "zpool import -Fn" and also importing read-only; no luck in either case.
From the moment I run the "zpool import" command, all ZFS commands (like "zfs list") hang until I restart; the rest of the system stays responsive.
What can I do to remount the pool?
Thanks in advance
greg

System information

Type                  Version/Name
Distribution Name     proxmox
Distribution Version  3.4
Linux Kernel          2.6.32-48-pve
Architecture          x86_64
ZFS Version           0.6.5.7-8_gf1b07c5 -> 0.6.5.10
SPL Version           0.6.5.7-3_g8455153 -> 0.6.5.10
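
For reference, the recovery attempts described above would have looked roughly like this (the pool name SYSV1A is taken from later in the thread, and the exact flags are an assumption):

# dry-run rewind import: reports what a rewind would discard without touching the pool
zpool import -Fn SYSV1A
# read-only import: avoids ZIL replay and any writes to the pool
zpool import -o readonly=on SYSV1A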
@bunder2015
Contributor

bunder2015 commented Jun 18, 2017

Can we see a zpool status -v? We need to know a little bit about how your pool is constructed and its current faults to determine if it's recoverable. Failing that, you could zfs send to another pool since you were able to import as read-only.

edit: How long did you wait to import the pool? If the pool was in use at the time, you might be stuck waiting. Things like resilvering, ZIL replay and such take precedence over completing the import and allowing filesystem access.
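
A rough sketch of that offload, assuming the pool already has a snapshot to send (a read-only pool cannot take new ones) and that "backup" is a healthy second pool:

zpool import -o readonly=on SYSV1A
# replicate an existing snapshot, including child datasets, into the backup pool
zfs send -R SYSV1A@somesnap | zfs recv -duF backup/SYSV1A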

@gregfr
Author

gregfr commented Jun 18, 2017

Thanks for your answer. The pool is configured like this:

config:
	SYSV1A                          ONLINE
	  wwn-0x5000cca22ddfc797-part3  ONLINE

I can't run any zfs commands on it since it's not imported. Sorry if my message was misleading; I wasn't able to import it at all.

@gregfr
Author

gregfr commented Jun 18, 2017

After leaving the "import" command running for one hour, I just got this:

kernel:PANIC: blkptr at ffff880761edf980 DVA 1 has invalid VDEV 1048576

(command is still hanging)

@bunder2015
Contributor

Sounds similar to #4582

@gregfr
Author

gregfr commented Jun 18, 2017

Thanks for the pointer. I tried:

echo 1 > /sys/module/zfs/parameters/zfs_recover
zpool import -o readonly=on SYSV1A

It took 20 minutes, but it worked!!

Now what is the correct way to deal with this read-only pool? Can I "clean" it so I can mount it normally?
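
(A side note on the workaround above: zfs_recover set through /sys is lost at reboot. If it is needed for every import, it can be made persistent as a module option — a sketch, assuming the usual modprobe.d location:)

# /etc/modprobe.d/zfs.conf
options zfs zfs_recover=1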

@gregfr
Author

gregfr commented Jun 18, 2017

I tried export/import and got a solid system freeze...

@mailinglists35

is storage healthy?

@gregfr
Author

gregfr commented Jun 18, 2017

is storage healthy?
You mean the hard drive itself? As far as I can tell, it is...

@bunder2015
Contributor

bunder2015 commented Jun 19, 2017

What does zdb -l /dev/disk/by-id/wwn-0x5000cca22ddfc797-part3 say? Long story short, you might have to import the pool read-only, offload the data and recreate the pool.

@gregfr
Author

gregfr commented Jun 19, 2017

I'm not familiar with "labels", but why are there 4 identical labels here?

$ zdb -l /dev/disk/by-id/wwn-0x5000cca22ddfc797-part3
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'SYSV1A'
    state: 0
    txg: 15154607
    pool_guid: 16260310022571969635
    errata: 0
    hostname: 'sysv1'
    top_guid: 14459990977444447265
    guid: 14459990977444447265
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 14459990977444447265
        path: '/dev/disk/by-id/wwn-0x5000cca22ddfc797-part3'
        whole_disk: 0
        metaslab_array: 35
        metaslab_shift: 34
        ashift: 12
        asize: 1957377343488
        is_log: 0
        DTL: 190
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 5000
    name: 'SYSV1A'
    state: 0
    txg: 15154607
    pool_guid: 16260310022571969635
    errata: 0
    hostname: 'sysv1'
    top_guid: 14459990977444447265
    guid: 14459990977444447265
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 14459990977444447265
        path: '/dev/disk/by-id/wwn-0x5000cca22ddfc797-part3'
        whole_disk: 0
        metaslab_array: 35
        metaslab_shift: 34
        ashift: 12
        asize: 1957377343488
        is_log: 0
        DTL: 190
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 5000
    name: 'SYSV1A'
    state: 0
    txg: 15154607
    pool_guid: 16260310022571969635
    errata: 0
    hostname: 'sysv1'
    top_guid: 14459990977444447265
    guid: 14459990977444447265
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 14459990977444447265
        path: '/dev/disk/by-id/wwn-0x5000cca22ddfc797-part3'
        whole_disk: 0
        metaslab_array: 35
        metaslab_shift: 34
        ashift: 12
        asize: 1957377343488
        is_log: 0
        DTL: 190
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 5000
    name: 'SYSV1A'
    state: 0
    txg: 15154607
    pool_guid: 16260310022571969635
    errata: 0
    hostname: 'sysv1'
    top_guid: 14459990977444447265
    guid: 14459990977444447265
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 14459990977444447265
        path: '/dev/disk/by-id/wwn-0x5000cca22ddfc797-part3'
        whole_disk: 0
        metaslab_array: 35
        metaslab_shift: 34
        ashift: 12
        asize: 1957377343488
        is_log: 0
        DTL: 190
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data

@gregfr
Author

gregfr commented Jun 19, 2017

Is there a way to "fix" it without moving my 2TB of data twice?

@GregorKopka
Contributor

GregorKopka commented Jul 16, 2017

Is there a way to "fix" it without moving my 2T data twice?

Keep the pool on the new disks and recycle the old ones for something else?
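
In practice that suggestion amounts to something like the sketch below; pool, device and mountpoint names are placeholders, and the rsync path is for the case where the read-only pool has no snapshots that could be sent:

# build the replacement pool on a spare disk (device name is a placeholder)
zpool create NEWPOOL /dev/disk/by-id/wwn-newdisk
# import the damaged pool read-only, then copy the data off the mounts file by file
zpool import -o readonly=on SYSV1A
rsync -aHAX /SYSV1A/ /NEWPOOL/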

@emptyflask

emptyflask commented Aug 14, 2018

I have the same problem with a pool (a single 2TB external drive) that I created and exported on macOS and am trying to import on Linux. Once I attempt a zpool import, it hangs, as does any other ZFS command.

Importing and using it on macOS is fine, though.

@mannahusum

I have a similar, but slightly different problem. Import on gentoo works fine with sys-kernel/gentoo-sources-4.9.16 and sys-fs/zfs-kmod-0.7.3-r0-gentoo but hangs with sys-kernel/gentoo-sources-4.12.12 and sys-fs/zfs-kmod-0.7.9-r1

Since it’s my current root, and it worked with 4.12.12 and a previous version of zfs, I wonder what could have changed: zfs or gcc? Should I open a separate bug?

@Kajanos

Kajanos commented Sep 15, 2018

I have a similar, but slightly different problem. Import on gentoo works fine with sys-kernel/gentoo-sources-4.9.16 and sys-fs/zfs-kmod-0.7.3-r0-gentoo but hangs with sys-kernel/gentoo-sources-4.12.12 and sys-fs/zfs-kmod-0.7.9-r1

Since it’s my current root, and it worked with 4.12.12 and a previous version of zfs, I wonder what could have changed: zfs or gcc? Should I open a separate bug?

Any update?

@behlendorf
Contributor

For those having problems importing a pool on Linux, you may want to try the 0.8.0-rc1 release candidate. This version includes significant improvements to zpool import, making it both more robust and more tolerant. Additionally, if a pool fails to import, diagnostic information describing the failure is available in the /proc/spl/kstat/zfs/dbgmsg log. As always, if you want to be able to revert to a prior release, make sure not to enable any new features.

https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.8.0-rc1
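
Reading that log after a failed import attempt is roughly the following (whether zfs_dbgmsg_enable already defaults to on depends on the build, so treat this as a sketch):

# make sure internal debug messages are recorded
echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
# retry the import, then inspect the log
zpool import SYSV1A
cat /proc/spl/kstat/zfs/dbgmsg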

@stale

stale bot commented Aug 25, 2020

This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the Status: Stale No recent activity for issue label Aug 25, 2020
@stale stale bot closed this as completed Nov 24, 2020