Async LRU cache for Ethereum block data #540
Conversation
I really like the change to make all RPC functions async, which is future-proof; regardless of the rest of the async LRU changes, we should merge this.
@nanocryk Have you done any basic benchmarking of this branch?
@nanocryk Just wondering, any update on this?
@sorpaas I'm preparing a branch on top of Moonbeam and its Frontier fork with the same change, to compare the performance against the non-async code.
I did a few requests and timings seem to stay pretty much the same. I'll do more tests with concurrent requests to check that the async cache actually improves performance. Do you have some specific queries in mind that I should benchmark?
As long as timing stays the same I think this is good, even if we just consider this to be an async refactoring. The situation I worried about is that this would unexpectedly bring performance degradation, which would be bad.
Picked a few other blocks and performance seems to be mostly the same with and without the change, so looks good to me.
* async block data cache
* fmt
* fmt (mix of tabs and spaces?)
* fmt (remove type annotation to stay inline)
Follow-up to #479. Moves the cache into a dedicated async task that manages it; the various RPC functions request data from this task over channels.
In the original code, if a first request started fetching block data and a second request asked for the same block before the first had cached it, the data would be fetched a second time instead of waiting for the first result.
The new code avoids fetching the same block data multiple times: all waiting requesters are added to a pending list, and each of them receives the data once the single fetch completes.
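The single-fetch-with-pending-waiters pattern can be sketched with a channel-driven task. This is a hypothetical illustration, not Frontier's actual code: the `Message` enum, the `demo` function, the use of plain threads instead of an async runtime, and the simulated 50 ms fetch are all assumptions made for the sake of a self-contained example.

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Hypothetical message protocol for the cache task (illustrative names).
enum Message {
    // A requester asks for block `number`, to be answered on `reply`.
    Request { number: u64, reply: mpsc::Sender<String> },
    // The background fetch for block `number` completed.
    Fetched { number: u64, data: String },
    Shutdown,
}

fn demo() -> u32 {
    let (tx, rx) = mpsc::channel::<Message>();

    // The cache task owns both the cache and the pending-waiters map, so
    // no locking is needed: all access is serialized through the channel.
    let task_tx = tx.clone();
    let cache_task = thread::spawn(move || {
        let mut cache: HashMap<u64, String> = HashMap::new();
        let mut pending: HashMap<u64, Vec<mpsc::Sender<String>>> = HashMap::new();
        let mut fetches = 0u32;

        for msg in rx {
            match msg {
                Message::Request { number, reply } => {
                    if let Some(data) = cache.get(&number) {
                        let _ = reply.send(data.clone()); // cache hit
                    } else if let Some(waiters) = pending.get_mut(&number) {
                        waiters.push(reply); // fetch already in flight: wait
                    } else {
                        // First request for this block: start exactly one fetch.
                        pending.insert(number, vec![reply]);
                        fetches += 1;
                        let done = task_tx.clone();
                        thread::spawn(move || {
                            thread::sleep(Duration::from_millis(50)); // simulated I/O
                            let _ = done.send(Message::Fetched {
                                number,
                                data: format!("block-{number}"),
                            });
                        });
                    }
                }
                Message::Fetched { number, data } => {
                    // Answer every pending waiter, then cache the result.
                    for waiter in pending.remove(&number).unwrap_or_default() {
                        let _ = waiter.send(data.clone());
                    }
                    cache.insert(number, data);
                }
                Message::Shutdown => break,
            }
        }
        fetches
    });

    // Three "concurrent" requests for the same block arrive while the
    // (slow) fetch is still running; only one fetch should be issued.
    let mut replies = Vec::new();
    for _ in 0..3 {
        let (reply_tx, reply_rx) = mpsc::channel();
        tx.send(Message::Request { number: 7, reply: reply_tx }).unwrap();
        replies.push(reply_rx);
    }
    for r in replies {
        assert_eq!(r.recv().unwrap(), "block-7");
    }

    tx.send(Message::Shutdown).unwrap();
    cache_task.join().unwrap() // returns the number of fetches issued
}

fn main() {
    assert_eq!(demo(), 1); // three requests, but only one fetch
    println!("ok");
}
```

Because the task is the sole owner of the cache and the pending map, the deduplication needs no mutex: the "check cache, else check pending, else start a fetch" decision is atomic by construction, which is exactly what makes the one-fetch guarantee hold under concurrent requests.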