Possible memory allocation deadlock #3041
It's new, but it's not an OOPS, just a warning introduced by the recent kmem rework. We expected a couple of these to pop up; thanks for reporting it so we can resolve it before the tag.
@edillmann I should mention that if you need a fix now, you could set spl_kmem_alloc_max.
@edillmann are you using the large block patches as well?
@behlendorf I'm not using the large block patches for now. I did set spl_kmem_alloc_max to 65536 and will report whether it resolves this.
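For reference, the workaround above can be applied either persistently at module load time or at runtime. This is a sketch assuming a standard SPL install; the config file path and the sysfs parameter path may differ on your distribution:

```
# /etc/modprobe.d/spl.conf -- takes effect the next time the spl module loads
options spl spl_kmem_alloc_max=65536

# Or at runtime via sysfs (requires root):
#   echo 65536 > /sys/module/spl/parameters/spl_kmem_alloc_max
```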
This should be resolved by openzfs/spl@c7db36a which has been merged to master. The excessively large allocation will automatically retry and use vmalloc() as required. |
Hi,
Is this a new bug?
I found a lot of oops like this on the receiving side of a snapshot transfer.
I'm using the latest master on kernel 3.18.3.
[41374.265342] Possible memory allocation deadlock: size=1048576 lflags=0x42d0
[41374.265345] CPU: 2 PID: 71096 Comm: zfs Tainted: P OE 3.18.3-jave #1
[41374.265347] Hardware name: /DH67BL, BIOS BLH6710H.86A.0160.2012.1204.1156 12/04/2012
[41374.265349] 0000000000002000 ffff8801e431ba48 ffffffff8171de28 0000000000000a3c
[41374.265352] 00000000000042d0 ffff8801e431ba88 ffffffffa0427f74 ffff8801e431ba58
[41374.265355] ffff8801e431bcb0 ffff8800b7127550 0000000000000000 ffff8801e431bca0
[41374.265358] Call Trace:
[41374.265363] [] dump_stack+0x46/0x58
[41374.265371] [] spl_kmem_alloc_debug+0x184/0x190 [spl]
[41374.265378] [] spl_vmem_alloc+0x19/0x20 [spl]
[41374.265403] [] dmu_recv_stream+0xa3/0xb30 [zfs]
[41374.265411] [] ? nvlist_common.part.102+0xe2/0x1e0 [znvpair]
[41374.265419] [] ? nvlist_xpack+0xde/0x110 [znvpair]
[41374.265425] [] ? nvlist_common.part.102+0xe2/0x1e0 [znvpair]
[41374.265431] [] ? spl_kmem_free+0x32/0x50 [spl]
[41374.265438] [] ? fnvlist_pack_free+0xe/0x10 [znvpair]
[41374.265476] [] ? put_nvlist+0x91/0xa0 [zfs]
[41374.265513] [] zfs_ioc_recv+0x1ec/0xbd0 [zfs]
[41374.265532] [] ? dbuf_rele_and_unlock+0x2a0/0x390 [zfs]
[41374.265538] [] ? spl_kmem_free+0x32/0x50 [spl]
[41374.265547] [] ? tsd_set+0x67/0x2e0 [spl]
[41374.265554] [] ? tsd_hash_search.isra.1+0x77/0xa0 [spl]
[41374.265585] [] ? rrw_exit+0x51/0x160 [zfs]
[41374.265589] [] ? __kmalloc+0x55/0x230
[41374.265595] [] ? strdup+0x3c/0x60 [spl]
[41374.265631] [] zfsdev_ioctl+0x446/0x470 [zfs]
[41374.265635] [] do_vfs_ioctl+0x2e0/0x4c0
[41374.265638] [] ? vtime_account_user+0x54/0x60
[41374.265641] [] SyS_ioctl+0x81/0xa0
[41374.265644] [] ? int_check_syscall_exit_work+0x34/0x3d
[41374.265666] [] system_call_fastpath+0x16/0x1b