
ARC memory footprint 2x accounted #1623

Closed

eborisch opened this issue Aug 2, 2013 · 3 comments

eborisch commented Aug 2, 2013

It seems I'm getting double the expected / accounted memory usage from ARC. Below are the memory statistics on my box with the (default, auto-selected) arc_meta_limit = 4563402752 (left ~60% of the graph), and again after a reboot with arc_meta_limit set to 9126805504. Both portions of the graph start after a reboot, and I'm running the same rsync job (it walks a large directory tree with minimal new data transferred) that rapidly fills the ARC's meta limit.

You can see on the second run that not only is the ARC (as expected) twice the size (now 8.5G, up from 4.25G), but free memory has dropped by roughly twice that change (~8.5G). No other changes were made to the system. The "free memory" values are approximately the values reported by free.

Let me know if there is other data I can provide to help with this.

[Screenshot: stacked memory-usage graph, 2013-08-02 10:17 AM]

CentOS 6.4

Linux :host: 2.6.32-358.14.1.el6.x86_64 #1 SMP Tue Jul 16 23:51:20 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Installed Packages
spl.x86_64           0.6.1-1.el6   @zfs
spl-dkms.noarch      0.6.1-2.el6   @zfs
zfs.x86_64           0.6.1-1.el6   @zfs
zfs-dkms.noarch      0.6.1-2.el6   @zfs
zfs-dracut.x86_64    0.6.1-1.el6   @zfs
zfs-release.noarch   1-2.el6       @/zfs-release-1-2.el6.noarch
zfs-test.x86_64      0.6.1-1.el6   @zfs
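
For reference, the numbers behind the graph can be pulled straight from the kstat interface. This is only a rough sketch (it assumes the ZoL 0.6.x layout of /proc/spl/kstat/zfs/arcstats, i.e. "name type data" rows after a two-line header, plus a standard /proc/meminfo; field names may differ on other versions):

```python
#!/usr/bin/env python
# Rough sketch: compare the ARC's self-reported sizes against MemFree,
# to reproduce the "ARC + free" bookkeeping described above.

def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:      # skip the two kstat header lines
            fields = line.split()
            if len(fields) == 3:            # name, type, data
                stats[fields[0]] = int(fields[2])
    return stats

def read_memfree(path="/proc/meminfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("MemFree:"):
                return int(line.split()[1]) * 1024   # kB -> bytes
    return 0

if __name__ == "__main__":
    arc = read_arcstats()
    gib = 1024.0 ** 3
    print("ARC size:       %6.2f GiB" % (arc["size"] / gib))
    print("ARC meta used:  %6.2f GiB" % (arc["arc_meta_used"] / gib))
    print("ARC meta limit: %6.2f GiB" % (arc["arc_meta_limit"] / gib))
    print("MemFree:        %6.2f GiB" % (read_memfree() / gib))
```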

eborisch commented Aug 2, 2013

Put another way: I would have expected free + ARC (the top of the magenta area in the stacked chart) to remain essentially the same between these two runs, with the line separating free from ARC (the light blue / magenta boundary) moving down by 4.25G. Instead, free dropped by roughly 8.5G while the ARC grew by 4.25G.


eborisch commented Aug 6, 2013

Perhaps most/all of this is due to fragmentation? In particular, I see zio_buf_512's cache size sitting at more than 2x its allocated bytes much of the time, like this:

```
--------------------- cache -------------------------------------------------------  ----- slab ------  ---- object -----  --- emergency ---
name                                    flags      size     alloc slabsize  objsize  total alloc   max  total alloc   max  dlock alloc   max
zio_buf_512                           0x00020 3283681280 1588384256    32768      512  100210 100210 100528  3106510 3102313 3116368      0     0     0
```

I suppose that would be OK if the cache and its associated buffers were successfully freed under memory pressure, but I'm also hitting the deadlock-under-load issue (#1365 (comment)) that I was trying to troubleshoot when I filed this ticket.

Open fragmentation ticket: openzfs/spl#26.
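
For anyone wanting to reproduce this, here is a rough sketch that computes a size/alloc ratio per cache from /proc/spl/kmem/slab, assuming the column layout pasted above (name, flags, size, alloc, ...); caches whose size is much larger than their alloc are the fragmented ones:

```python
#!/usr/bin/env python
# Rough sketch: report per-cache size/alloc ("fragmentation") ratios from
# /proc/spl/kmem/slab, assuming the column layout shown in the comment above.

def slab_fragmentation(path="/proc/spl/kmem/slab"):
    results = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            # Skip the two header lines (dashes and column names).
            if not fields or fields[0].startswith("-") or fields[0] == "name":
                continue
            name, size, alloc = fields[0], int(fields[2]), int(fields[3])
            if alloc:
                results.append((size / float(alloc), name, size, alloc))
    return sorted(results, reverse=True)

if __name__ == "__main__":
    # Print the ten most fragmented caches.
    for ratio, name, size, alloc in slab_fragmentation()[:10]:
        print("%-24s size=%12d alloc=%12d ratio=%.2f" % (name, size, alloc, ratio))
```

For the zio_buf_512 line above this gives 3283681280 / 1588384256 ≈ 2.07.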


eborisch commented Aug 8, 2013

Closing this ticket. After logging kmem/slab statistics over time, slab fragmentation appears to be the cause of this effect. Other existing tickets already cover fragmentation and deadlocks under memory load.
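
For the record, the logging was nothing elaborate; something along these lines (the cache name, output path, and interval here are just placeholders) is enough to correlate a cache's size/alloc numbers with the memory graph over time:

```python
#!/usr/bin/env python
# Rough sketch: append a timestamped size/alloc snapshot for one cache to a
# CSV file, so slab fragmentation can be tracked alongside the memory graph.
import time

CACHE = "zio_buf_512"          # cache of interest from the comment above
LOG = "/tmp/slab_frag.csv"     # placeholder output path

def snapshot(cache, path="/proc/spl/kmem/slab"):
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields and fields[0] == cache:
                return int(fields[2]), int(fields[3])   # size, alloc
    return None

if __name__ == "__main__":
    while True:
        snap = snapshot(CACHE)
        if snap:
            with open(LOG, "a") as log:
                log.write("%d,%s,%d,%d\n" % (time.time(), CACHE, snap[0], snap[1]))
        time.sleep(60)   # sample once a minute
```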
