Cross-call EVM memory management optimization #481
Nice! I'm not clear whether you are suggesting a change to the protocol, or only an optimization of EVMC.
What is the nature of the risk?
I'm also confused, if the same …
The buffer is passed to the callee as pre-allocated memory capacity, but its logical size starts at 0.
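A minimal sketch of that idea (illustrative only; `SharedBuffer` and its members are invented names, not EVMC/evmone API): the callee receives storage that may already be allocated, while the logical size it sees starts at zero and grows only when it expands memory.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch: a buffer whose allocated capacity can outlive a single
// call, while the logical size seen by the callee starts at 0 on each call.
struct SharedBuffer
{
    std::vector<std::uint8_t> storage;  // pre-allocated capacity, reused across calls
    std::size_t size = 0;               // logical EVM memory size, starts at 0

    // Entering a callee: keep the capacity, reset the logical size.
    void enter_call() noexcept { size = 0; }

    // Grow the logical size; reallocate only if the capacity is exceeded.
    void expand(std::size_t new_size)
    {
        if (new_size > storage.size())
            storage.resize(new_size);
        if (new_size > size)
            size = new_size;
    }
};
```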
The rough estimate is that you will have a chain of 1024 memory buffers (one per call depth level), and at each level someone can bump the buffer size to ~1 MB. In total that is ~1 GB of memory.
This looks like a good optimization; sharing the same memory blob between calls seems natural.
Still confused. Is it one …
It is one …
I got a very different idea when reading this the first time. Now I see that this proposal is related to return data allocations. The idea I got was to share EVM memory between calls, in essence manually implementing expansion of EVM memory across the whole call stack.
You can make it work, but I don't think this will be efficient for EVM memory. However, you can use it for the EVM stack.
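A rough sketch of that stack variant (illustrative; `StackArena` is an invented name, and the 1024 figures are just the standard EVM call depth and stack limits): since every frame needs at most 1024 stack items, one contiguous allocation can hand each call depth its own fixed slice, with no copying or reallocation when calls nest.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch of sharing stack space across the whole call stack.
class StackArena
{
    static constexpr std::size_t max_depth = 1024;        // EVM call depth limit
    static constexpr std::size_t items_per_frame = 1024;  // EVM stack size limit

    // One 256-bit word is 32 bytes; ~32 MiB if allocated up front.
    // A real implementation would likely grow this lazily.
    std::vector<std::array<std::uint8_t, 32>> storage;

public:
    StackArena() : storage(max_depth * items_per_frame) {}

    // Base of the stack region reserved for the given call depth.
    std::array<std::uint8_t, 32>* frame(std::size_t depth) noexcept
    {
        return storage.data() + depth * items_per_frame;
    }
};
```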
The explanation: when you return from the C2 context to C1, C2's return data is still in C2's memory buffer. So C1 has to copy it, because otherwise there is a risk of the data being overwritten by C1's memory expansion.
@recmo created a good memory upper-bound analysis for a transaction: https://2π.com/22/eth-max-mem/ I think avoiding the returndata copy is not practical. Therefore, the model presented by @rakita should work great if we apply these improvements:
Let's consider memory "flow" for a caller running at depth `d+0` calling a callee at depth `d+1`.

Present
This is the memory buffer workflow with the maximum number of copies, but it is actually the current practice in EVMC/evmone.
1. The caller discards its return data buffer `R0` (its lifetime ends here).
2. The callee allocates its EVM memory buffer `M1`.
3. The callee produces the output `O1` (a copy of a slice of `M1`).
4. The callee frees `M1`.
5. The caller copies `O1` to `R0`.
6. The caller frees `O1`.

Additionally, if `CALL` output arguments are used, the caller must copy a part of `R0` to the dedicated place in its memory. There is not much we can do about this copy.

The current EVMC may eliminate the `M1` → `O1` copy or the `O1` → `R0` copy.
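A condensed sketch of the flow above (illustrative stand-ins, not the actual EVMC types; in real EVMC the output travels in `evmc_result` with a release callback):

```cpp
#include <cstdint>
#include <vector>

using bytes = std::vector<std::uint8_t>;

// Illustrative sketch of the "present" flow: three separate buffers and up to
// two copies per call (M1 -> O1 in the callee, O1 -> R0 in the caller).
bytes execute_callee()
{
    bytes m1(64 * 1024);  // callee's EVM memory M1, allocated per call
    // ... callee runs and produces its output inside m1 ...
    bytes o1(m1.begin(), m1.begin() + 32);  // copy #1: a slice of M1 -> O1
    return o1;                              // M1 is freed here
}

void caller_makes_call(bytes& r0 /* caller's return data buffer R0 */)
{
    r0.clear();                       // the old return data's lifetime ends
    bytes o1 = execute_callee();
    r0.assign(o1.begin(), o1.end());  // copy #2: O1 -> R0
}                                     // O1 is freed here
```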
Future

What we want is a single buffer shared between the caller and the callee. It can represent EVM memory and return data at the same time, and it can work like this:
1. The caller keeps a single buffer `R0`.
2. The caller passes to the callee the buffer `R0`.
3. The callee uses `R0` as its EVM memory. Expands its capacity if needed.
4. The callee returns `R0` with the reference to the return data in it.
5. The caller uses `R0` as the return data buffer.

The same buffer `R0` is passed to every call or create the caller makes. It can grow to the value limited by the quadratic memory cost (see below).

Additionally, we can create an intrusive list of memory buffers to preserve more than a single buffer for "deeper" calls.
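A sketch of how the single-buffer flow could look (illustrative only; `CallBuffer` and its fields are invented names, and a real implementation would thread the buffer through the EVMC message/result structures):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch of the "future" flow: a single buffer R0 serves as the
// callee's EVM memory and then as the caller's return data, with no copies.
struct CallBuffer
{
    std::vector<std::uint8_t> bytes;  // capacity persists across calls
    std::size_t mem_size = 0;         // callee's logical EVM memory size
    std::size_t output_offset = 0;    // where the return data starts in `bytes`
    std::size_t output_size = 0;      // return data length
};

void callee_execute(CallBuffer& r0)
{
    r0.mem_size = 0;            // fresh logical memory, reused capacity
    if (r0.bytes.size() < 4096)
        r0.bytes.resize(4096);  // expand capacity only if needed
    r0.mem_size = 4096;
    // ... callee runs and leaves its output somewhere in its memory ...
    r0.output_offset = 0;       // return data is just a reference into the
    r0.output_size = 32;        // same buffer: nothing is copied
}

void caller_makes_call(CallBuffer& r0)
{
    callee_execute(r0);  // the same R0 is passed to every call the caller makes
    // The caller reads return data directly from
    // r0.bytes[r0.output_offset .. r0.output_offset + r0.output_size].
}
```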
Memory size analysis
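As a rough illustration of the quadratic bound mentioned above (this is not the analysis from the linked post; the 30M gas budget below is just an assumed figure):

```cpp
#include <cmath>
#include <cstdio>

// Rough illustration: EVM memory expansion costs 3*w + w*w/512 gas for
// w 32-byte words, so the size a single context can grow the shared buffer
// to is bounded by the gas it has available.
int main()
{
    const double gas = 30'000'000;  // assumed gas budget for one context
    // Positive root of w*w/512 + 3*w = gas.
    const double words = (-3.0 + std::sqrt(9.0 + gas / 128.0)) * 256.0;
    std::printf("~%.2f MiB for %.0f gas\n", words * 32 / (1024 * 1024), gas);
    // Prints roughly 3.76 MiB for a 30M gas budget.
}
```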
Notes
- `CREATE` output does not go to the return data buffer. The caller must remember to "empty" the buffer it got from the callee in this case.
- The `identity` precompile is an optional expand and a single copy.