fix #42078, improve the idempotency of callsite inlining #42082
Conversation
Force-pushed from f63ae02 to caf272b
frame.src = finish!(interp, result)
end
run_optimizer && (frame.cached = true)
typeinf(interp, frame)
The previous form here was intended to be an example of running an external optimization pipeline. I think cached=false also has some follow-on effects of deciding how to resolve (and optimize) cycles?
Yeah, that's why I made this change. Currently the optimization behavior really changes depending on the cache configuration, and it causes typeinf_code to behave slightly differently from actual execution, which is very confusing.
For example, this test (line 176 in 5629856):
@test length(code_typed(f_ifelse, (String,))[1][1].code) <= 2
has been marked @test_broken for a long time, but I found it has actually been functional (the problem was that the previous cached = false hindered type_annotate! from doing its additional clean-up work); a sketch of that kind of check follows below.
And this change is necessary for the test case added in this PR, because callsite inlining depends on typeinf_edge, which actually changes its behavior depending on the caller's cache configuration.
External consumers can still plug in their own optimization pipeline by overloading OptimizationState, so I think this change is okay.
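For reference, a minimal, self-contained sketch of the kind of check that test performs; the f_ifelse definition below is only an assumption for illustration (the real one lives in Julia's compiler test suite), not the actual test body.

using Test

# Hypothetical stand-in for f_ifelse: with a String argument the branch
# condition is a compile-time constant, so the optimizer can fold it away.
f_ifelse(x) = ifelse(x isa String, 1, 2)

let src = code_typed(f_ifelse, (String,))[1][1]
    # After optimization the body should collapse to at most a couple of
    # statements (e.g. just `return 1`).
    @test length(src.code) <= 2
end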
Changing this to a symbol seems like a great idea
After #41328, inference can observe statement flags and try to re-infer a discarded source if it's going to be inlined. The re-inferred source will only be cached into the inference-local cache, and won't be cached globally.
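For context, a rough sketch of what callsite inlining looks like from user code, assuming the callsite @inline annotation introduced around #41328 (Julia 1.8+); big_callee and caller are hypothetical names for illustration, not part of this PR.

# Hypothetical callee: @noinline means its call would normally not be inlined,
# and its optimized source may be discarded after inference.
@noinline big_callee(x) = x + 1

function caller(x)
    # The callsite @inline sets a statement flag on this particular call.
    # If big_callee's source was discarded, inference re-infers it so the
    # inliner can still inline this call; per this PR's description, that
    # re-inferred source only lands in the inference-local cache, not the
    # global cache.
    @inline big_callee(x)
end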
Force-pushed from caf272b to 5629856
Test failures on linux32 seem unrelated to this PR.
After #41328, inference can observe statement flags and try to re-infer
a discarded source if it's going to be inlined.
The re-inferred source will only be cached into the inference-local
cache, and won't be cached globally.