It seems the ARC is consuming roughly twice the memory it accounts for. Below are memory statistics from my box with the default auto-selected arc_meta_limit = 4563402752 (left ~60% of the graph) and, after a reboot, with arc_meta_limit set to 9126805504. Both portions of the graph begin after a reboot, and in each case I am running the same rsync job (it walks a large directory tree with minimal new data transferred), which rapidly fills the ARC's metadata limit.
You can see on the second run that not only is the ARC, as expected, twice the size (now 8.5G, up from 4.25G), but free memory has dropped by roughly twice that change. No other changes were made to the system. The "free memory" values are approximately those reported by free.
Let me know if there is other data I can provide to help with this.
Put another way: I would have expected free + ARC (the top of the magenta area of the stacked chart) to remain essentially the same between the two runs, with the line separating free from ARC (the light blue / magenta boundary) moving down by 4.25G.
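A back-of-envelope check of the discrepancy (the 4.25G and 8.5G figures are read off the graph, so these are rough numbers):

```python
GiB = 2 ** 30

arc_run1 = 4.25 * GiB        # ARC size at the default arc_meta_limit
arc_run2 = 8.50 * GiB        # ARC size at the doubled limit
arc_delta = arc_run2 - arc_run1

free_delta = 2 * arc_delta   # observed: free fell ~twice the ARC growth

# memory that left "free" but is not accounted to the ARC
unaccounted = free_delta - arc_delta
print(f"~{unaccounted / GiB:.2f} GiB unaccounted")   # → ~4.25 GiB unaccounted
```

So roughly one additional ARC's worth of memory disappears that the ARC counters never report.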
Perhaps most or all of this is due to fragmentation? In particular, I often see zio_buf_512's size exceeding twice its allocation, like this:
-------------------------- cache --------------------------------  ------- slab -------  ------- object --------  -- emergency --
name         flags    size        alloc       slabsize  objsize   total   alloc   max     total    alloc    max      dlock  alloc  max
zio_buf_512  0x00020  3283681280  1588384256  32768     512       100210  100210  100528  3106510  3102313  3116368  0      0      0
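As a sketch (field positions taken from the header above), the degree of over-allocation can be read straight off that row:

```python
# Parse the zio_buf_512 row shown above; columns follow the header:
# name, flags, size, alloc, slabsize, objsize, ...
line = ("zio_buf_512 0x00020 3283681280 1588384256 32768 512 "
        "100210 100210 100528 3106510 3102313 3116368 0 0 0")

fields = line.split()
name, size, alloc = fields[0], int(fields[2]), int(fields[3])
ratio = size / alloc        # slab bytes held per byte of live objects
stranded = size - alloc     # memory the cache holds but is not using

print(f"{name}: {ratio:.2f}x ({stranded / 2**20:.0f} MiB stranded)")
# → zio_buf_512: 2.07x (1617 MiB stranded)
```

That is over 1.6 GiB stranded in this one cache alone, consistent with the ">2x" figure quoted above.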
I suppose that would be OK if the cache and its associated buffers were successfully freed under memory pressure, but I'm also hitting the deadlock-under-load issue (#1365 (comment)) that I was trying to troubleshoot when I filed this ticket.
Closing this ticket. After logging kmem/slab statistics, it appears that slab fragmentation is the cause of this effect. Other existing tickets cover fragmentation and deadlocks under memory load.
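A hypothetical sketch of the kind of logging that pointed at slab fragmentation: periodically snapshot /proc/spl/kmem/slab and total the bytes held by slabs against the bytes in live objects. The sample string stands in for the real file, which has one row per cache plus header lines:

```python
def slab_overhead(snapshot: str) -> tuple[int, int]:
    """Return (bytes held by slabs, bytes in live objects)."""
    held = live = 0
    for row in snapshot.splitlines():
        fields = row.split()
        # Data rows look like: name, 0x-prefixed flags, size, alloc, ...
        # Header and divider lines are skipped.
        if len(fields) < 4 or not fields[1].startswith("0x"):
            continue
        held += int(fields[2])
        live += int(fields[3])
    return held, live

sample = ("zio_buf_512 0x00020 3283681280 1588384256 32768 512 "
          "100210 100210 100528 3106510 3102313 3116368 0 0 0")
held, live = slab_overhead(sample)
print(f"overhead: {(held - live) / 2**30:.2f} GiB, ratio {held / live:.2f}x")
```

In real use one would call `slab_overhead(open("/proc/spl/kmem/slab").read())` on a timer and log the pair with a timestamp; a ratio that stays well above 1.0 under memory pressure is the fragmentation signature.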
CentOS 6.4
Linux :host: 2.6.32-358.14.1.el6.x86_64 #1 SMP Tue Jul 16 23:51:20 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Installed Packages:
spl.x86_64          0.6.1-1.el6  @zfs
spl-dkms.noarch     0.6.1-2.el6  @zfs
zfs.x86_64          0.6.1-1.el6  @zfs
zfs-dkms.noarch     0.6.1-2.el6  @zfs
zfs-dracut.x86_64   0.6.1-1.el6  @zfs
zfs-release.noarch  1-2.el6      @/zfs-release-1-2.el6.noarch
zfs-test.x86_64     0.6.1-1.el6  @zfs