Akka.DistributedData: memory leak when recovering events from LMDB data store #5022
Based on the logs, it looks like this system is continuously trying to prune the data envelopes loaded from storage:
This might be because the node we originally recovered this data from no longer exists (indeed, that would likely be the case in the event of any …
akka.net/src/contrib/cluster/Akka.DistributedData/Replicator.cs, lines 1400 to 1401 at commit 321d0e4
This is where the log statements are coming from, but the pruning is happening over and over again with no success, so the issue might be that persisting the pruned event doesn't overwrite the original.
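To make the suspected failure mode concrete, here is a minimal sketch (plain Python, not Akka.NET code; all names such as `prune_pass` and the `store` dict are hypothetical) of why a prune pass that never writes its result back to the durable store will re-prune the same envelope forever:

```python
# Hypothetical in-memory stand-in for the durable (LMDB) store.
store = {"key": {"data": "value", "pruned_from": set()}}

def prune(envelope, removed_node):
    # Return a NEW envelope with the removed node recorded as pruned;
    # the original envelope is left untouched.
    return {"data": envelope["data"],
            "pruned_from": envelope["pruned_from"] | {removed_node}}

def prune_pass(write_back):
    """One pruning cycle. Returns True if pruning work was performed."""
    envelope = store["key"]
    if "nodeA" not in envelope["pruned_from"]:
        pruned = prune(envelope, "nodeA")
        if write_back:
            store["key"] = pruned  # overwrite: next pass sees the pruned copy
        return True
    return False

# Without write-back, every pass reloads the stale envelope and "prunes" again:
assert prune_pass(write_back=False)
assert prune_pass(write_back=False)  # still looks un-pruned -> busy loop
# With write-back, the second pass finds nothing left to do:
assert prune_pass(write_back=True)
assert not prune_pass(write_back=True)
```

If the real replicator behaves like the `write_back=False` branch, the repeated prune log lines (and the associated allocations) would match what the logs above show.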
Also worth noting that we're attempting to prune many different unique addresses for the same …
Are these improvements that have been done so far in a nightly build I can try?
Here's how to get the nightly builds: https://getakka.net/community/getting-access-to-nightly-builds.html. Anything merged into …
Initial observations:
Possible causes:
Further observations:
Version: Akka.NET v1.4.19
Reproduction: https://github.com/andyfurnival/ddata
Memory consumption grows steadily in a two-node cluster, with both nodes recovering events from their own LMDB data stores:
The drivers of memory consumption appear to be non-stop gossip between the replicators:
I think there must be an equality check that is failing somewhere in this system, causing the same objects to be gossiped over and over again. We're not even producing any new events other than what's been deserialized from LMDB in this sample, and yet memory grows continuously.
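A quick sketch of why a failing equality check would produce exactly this behavior (plain Python, not the Replicator's actual code; the class and function names are made up for illustration): if the gossiped payload type compares by identity instead of by value, any "already sent this?" de-duplication that relies on equality never matches a freshly deserialized copy, so the same state is resent every round.

```python
class BrokenEnvelope:
    """No __eq__/__hash__ override: falls back to identity comparison."""
    def __init__(self, data):
        self.data = data

class FixedEnvelope:
    """Value-based equality: two copies with the same data compare equal."""
    def __init__(self, data):
        self.data = data
    def __eq__(self, other):
        return isinstance(other, FixedEnvelope) and self.data == other.data
    def __hash__(self):
        return hash(self.data)

def gossip_rounds(make_envelope, rounds=3):
    seen = set()  # de-duplication cache keyed by equality/hash
    sent = 0
    for _ in range(rounds):
        env = make_envelope("crdt-state")  # deserialized fresh each round
        if env not in seen:
            seen.add(env)
            sent += 1
    return sent

assert gossip_rounds(BrokenEnvelope) == 3  # every round looks "new": resent forever
assert gossip_rounds(FixedEnvelope) == 1   # duplicates are recognized once
```

Every resend also allocates a fresh copy of the payload, which would account for the steady memory growth even with no new writes.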