Implement a more performant code cache #3022
Conversation
…o into code-cache
force-pushed the branch from d9f485a to 9161532
Reverted back to the old state due to too many tests failing (note: the CI run is thus not up to date with the current state since it will not re-run due to merge conflicts).
…o into code-cache
Cool, but this looks great in general! 🤩
Ok, doing a few client runs here:
Oh. But the code cache is also not building up any size. 🤔 I guess there is still something missing on initialization? Will submit here and have a look.
Have updated the branch via UI
Update: ah, I think this is due to the exact same thing which I stumbled upon and am solving in #3063, namely that the --executeBlocks option in the client is not properly initialized with the caches. Will wait until this is merged, then update the branch here and try again (testing this without the --executeBlocks option is basically not possible, since one then cannot get repeated runs on the same (higher-number) block ranges).
Ah, and then we'll also still need to see if the cache "holds" for live higher-range client blocks. This is always an exciting moment. 🙂 😋
Ok, I have now updated this branch with the changes from the PR (remember to pull your local branch), will now test again. 🙂
Just so that I do not forget: there now needs to be a

```ts
if (!this._codeCacheSettings.deactivate) {
  codeCacheOpts = { ...codeCacheOpts, type: CacheType.ORDERED_MAP }
}
```

Generally the code cache is not active yet for the …
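To make the intent of that guard a bit more concrete, here is a minimal, self-contained sketch (not from the diff; the `CodeCacheSettings` shape and the `resolveCodeCacheOpts` helper are assumptions that mirror how the account/storage cache settings are handled):

```ts
import { CacheType } from '@ethereumjs/statemanager' // assumption: CacheType is exported from the statemanager package

// Hypothetical settings shape, mirroring the existing account/storage cache settings
interface CodeCacheSettings {
  deactivate: boolean
  size: number
  type?: CacheType
}

// The guard from the comment above: default to the ordered-map backend
// whenever the code cache has not been deactivated
function resolveCodeCacheOpts(settings: CodeCacheSettings) {
  let codeCacheOpts: { size: number; type?: CacheType } = { size: settings.size }
  if (!settings.deactivate) {
    codeCacheOpts = { ...codeCacheOpts, type: CacheType.ORDERED_MAP }
  }
  return codeCacheOpts
}
```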
Haha, it was because of this:

```ts
size(): number {
  if (this._lruCache) {
    return this._lruCache.size
  }
  return 0
}
```

Please update, and maybe you can have another general look at whether all methods work for both cache types.
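Not from the diff, just a sketch of what a `size()` covering both backends could look like, assuming the ordered-map backend is stored in a `_orderedMapCache` field that exposes a `size()` method (both names are assumptions here):

```ts
size(): number {
  if (this._lruCache) {
    return this._lruCache.size
  }
  // Assumption: ordered-map backend stored in `_orderedMapCache` with a `size()` method
  if (this._orderedMapCache) {
    return this._orderedMapCache.size()
  }
  return 0
}
```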
Ok, great, then for the above example the cache grows to 246. And I have got 588 reads from cache, 4027 from the DB. Interesting, that's something already for such a relatively small block range. Need to stop now, unfortunately can't do any more performance comparisons, but will continue to experiment a bit tomorrow.
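(For reference: 588 reads served from the cache vs. 4027 going to the DB corresponds to a cache hit ratio of roughly 588 / (588 + 4027) ≈ 13% for this range.)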
Updated this via UI
Some more performance results; where there are several values these are from consecutive repetitions of runs:

```sh
# 100 blocks
npm run client:start -- --sync=none --executeBlocks=1600000-1600100
npm run client:start -- --sync=none --executeBlocks=1600000-1600100 --codeCache=0

# 1000 blocks
npm run client:start -- --sync=none --executeBlocks=1600000-1601000
npm run client:start -- --sync=none --executeBlocks=1600000-1601000 --codeCache=0
```

Ok, this for the most part matches. One outlier with the 2:07, but this can very well be the first run.
LGTM
This PR looks into implementing a more performant code cache for the `statemanager`. Also see #3021.

Some followup work that is left to do after this change is merged:

- Refactor the `statemanager` caches to reduce code redundancies
- Integrate the code cache with `EthersStateManager`
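For orientation (not part of the PR text): if the new cache is wired up like the existing account and storage caches, enabling the ordered-map backend from user code would presumably look roughly like this; the `codeCacheOpts` option name comes from the snippet quoted earlier in the thread, and the size value is purely illustrative:

```ts
import { CacheType, DefaultStateManager } from '@ethereumjs/statemanager'

// Sketch: opt into the ordered-map-backed code cache, mirroring how
// accountCacheOpts / storageCacheOpts are passed today (codeCacheOpts and the size are assumed)
const stateManager = new DefaultStateManager({
  codeCacheOpts: { type: CacheType.ORDERED_MAP, size: 20_000 },
})
```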