Data Transfers are RAM hungry #4877
Which version?
@jsign do you know which version this is happening on? I think this was 1.1.3, but maybe a different commit.
This was with official v1.1.3.
I had the opportunity to get into this situation again (which is this PR, so roughly master). I now have 4 outgoing data transfers of 4GiB each to miners. Here are pprof profiles of the heap and the running goroutines:
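For reference, heap and goroutine profiles like the ones above can be pulled straight from the daemon's pprof endpoint. A minimal sketch in Go, assuming pprof is served on the default API address (the `base` address is an assumption; adjust it for your setup):

```go
// Minimal sketch: fetch heap and goroutine profiles from a pprof endpoint.
// The address below assumes the daemon exposes net/http/pprof on its API
// port; adjust to wherever pprof is actually served.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func dump(base, profile, out string) error {
	resp, err := http.Get(base + "/debug/pprof/" + profile)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(out)
	if err != nil {
		return err
	}
	defer f.Close()

	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	base := "http://127.0.0.1:1234" // assumed pprof address
	for profile, out := range map[string]string{
		"heap":      "heap.pprof",
		"goroutine": "goroutine.pprof",
	} {
		if err := dump(base, profile, out); err != nil {
			fmt.Fprintf(os.Stderr, "fetching %s: %v\n", profile, err)
			os.Exit(1)
		}
	}
	fmt.Println("wrote heap.pprof and goroutine.pprof")
}
```

The resulting files can then be inspected with `go tool pprof heap.pprof`.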
So is this proportional to the number of deals or the size of the deals? |
Another profile, with RAM full and 21GiB of swap in use:
graphsync 0.5.1 (the version you're using) should max out at 6 requests and 256MiB of memory, so something is definitely off here. In theory it could be that GC is falling behind (especially if you're using swap), but if you're just sending a few deals as a client, we should never get to that point.
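Conceptually, that 256MiB cap behaves like a shared byte budget that concurrent requests reserve from before buffering data and return to once the data is sent. A simplified sketch of the idea, assuming nothing about graphsync's actual API:

```go
// Simplified sketch of a byte-budget allocator: concurrent requests reserve
// memory before buffering block data and release it once the data is sent.
// This mirrors the idea of graphsync's memory cap, not its actual API.
package allocator

import "sync"

type Allocator struct {
	mu       sync.Mutex
	cond     *sync.Cond
	capacity uint64 // total budget, e.g. 256 << 20
	used     uint64
}

func NewAllocator(capacity uint64) *Allocator {
	a := &Allocator{capacity: capacity}
	a.cond = sync.NewCond(&a.mu)
	return a
}

// Reserve blocks until n bytes fit within the budget.
// Callers must never reserve more than the total capacity at once.
func (a *Allocator) Reserve(n uint64) {
	a.mu.Lock()
	defer a.mu.Unlock()
	for a.used+n > a.capacity {
		a.cond.Wait()
	}
	a.used += n
}

// Release returns n bytes to the budget and wakes waiting reservers.
func (a *Allocator) Release(n uint64) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.used -= n
	a.cond.Broadcast()
}
```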
The max of 6 requests was relaxed recently, which may be why this issue came up. cc: @dirkmc
That shouldn't matter too much (I only bring it up because there may be some per-request overhead or over-allocation, not sure). However, the version @jsign is using has that restriction.
Goodness, I wish Go had better ref-count diagnostics for catching memory leaks. It will be interesting with the memory watchdog, which can help us rule out GC as a cause. We identified one particular memory leak, but there may be others. We probably also need to look at the IPFS blockstore code, because it may be holding references in a way it shouldn't.
The memory watchdog is now merged. You can set a maximum heap limit through the […]. Could you also follow the steps here? #4445 (comment) And also throw in a […].
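For context, a heap watchdog of this kind boils down to polling the runtime's heap statistics and forcing a GC (or capturing a profile) once usage approaches the configured ceiling. A minimal sketch, not the actual watchdog implementation:

```go
// Minimal sketch of a heap watchdog: poll runtime heap usage and force a GC
// when it crosses a threshold of the configured limit. The real watchdog is
// more sophisticated (policies, emergency GC, profiles on threshold).
package watchdog

import (
	"runtime"
	"time"
)

// Run polls every interval and forces a GC whenever the live heap exceeds
// the given fraction of maxHeap. Stop it by closing the done channel.
func Run(maxHeap uint64, fraction float64, interval time.Duration, done <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			var ms runtime.MemStats
			runtime.ReadMemStats(&ms)
			if float64(ms.HeapAlloc) > float64(maxHeap)*fraction {
				runtime.GC()
			}
		}
	}
}
```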
It looks like the problem is somewhere in this code path: […] My guess is that either […]. I dug around in the code but I wasn't able to figure out exactly where that might be happening. @hannahhoward, do you have an idea where it might occur?
SimultaneousTransfers: where do you set this variable? And if you limit it to 1 and that one deal gets stuck, are you stuck forever?
In […].
AFAIK, yes, so it's mostly a workaround. |
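To illustrate why a limit of 1 can wedge the queue: the limit behaves like a semaphore over transfer slots, and a transfer that never completes never returns its slot. A rough sketch of that behaviour, not the actual go-data-transfer code:

```go
// Sketch of why SimultaneousTransfers == 1 can wedge the queue: transfers
// acquire a slot from a semaphore and only release it when they finish, so
// a stuck transfer holds its slot indefinitely. Not the actual
// go-data-transfer implementation.
package transfers

type Limiter struct {
	slots chan struct{}
}

func NewLimiter(simultaneous int) *Limiter {
	return &Limiter{slots: make(chan struct{}, simultaneous)}
}

// Start blocks until a transfer slot is free.
func (l *Limiter) Start() { l.slots <- struct{}{} }

// Done frees the slot; if a transfer never calls Done, the slot stays taken.
func (l *Limiter) Done() { <-l.slots }
```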
go-graphsync v0.6.0 fixed the memory scaling issues with graphsync transfers by moving the memory allocation to the message queue. It was first integrated in Lotus v1.5.1. All our reports indicate that data transfer no longer allocates memory linearly with the file size. Please open new issues if you run into other memory-footprint problems during data transfer.
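Conceptually, moving the allocation to the message queue means block data is only buffered once the queue has reserved room for it within a fixed budget, so in-flight memory stays bounded regardless of file size. A rough sketch of that pattern, using hypothetical types rather than graphsync's real API:

```go
// Rough sketch of reserving memory at the message queue rather than when
// blocks are produced: the sender blocks on the allocator before buffering
// each block, so in-flight memory stays within the budget regardless of
// transfer size. Hypothetical types; not graphsync's real API.
package queue

type Allocator interface {
	Reserve(n uint64) // blocks until n bytes fit in the budget
	Release(n uint64)
}

type Message struct {
	Blocks [][]byte
}

type MessageQueue struct {
	alloc Allocator
	out   chan Message
}

// Enqueue reserves memory for the block before buffering it; the send loop
// releases it after the message has gone out on the wire.
func (q *MessageQueue) Enqueue(block []byte) {
	q.alloc.Reserve(uint64(len(block)))
	q.out <- Message{Blocks: [][]byte{block}}
}

func (q *MessageQueue) sendLoop(send func(Message) error) {
	for msg := range q.out {
		_ = send(msg)
		for _, b := range msg.Blocks {
			q.alloc.Release(uint64(len(b)))
		}
	}
}
```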
The memory usage when creating client deals is very high. @jsign created 5 deals (40 GB total), which increased RAM usage by 15GB until the transfers finished. Lotus's total peak RAM usage was 40GB for these 5 roughly concurrent deals.
cc: @hannahhoward