fix: Do not compute prettify_macro_expansion() unless the "Inline macro" assist has actually been invoked (#18900)
And not just when the assist is merely being listed.
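To illustrate the pattern the fix applies, here is a minimal, self-contained Rust sketch; `Assist`, `compute_edit`, and the placeholder function are invented for this example and are not rust-analyzer's real API. The idea is simply that the expensive prettification runs inside a closure that executes only when the assist is invoked, while listing the assist stays cheap.

```rust
/// Hypothetical, simplified assist type for illustration only;
/// the real rust-analyzer types are different.
struct Assist {
    label: &'static str,
    /// The edit is computed lazily, only when the assist is invoked.
    compute_edit: Box<dyn Fn() -> String>,
}

fn list_assists() -> Vec<Assist> {
    vec![Assist {
        label: "Inline macro",
        compute_edit: Box::new(|| {
            // Expensive step: expand the macro and prettify the expansion.
            // Before the fix, the equivalent of prettify_macro_expansion()
            // ran eagerly just to list the assist.
            expensive_prettify_placeholder()
        }),
    }]
}

/// Stand-in for the real prettify_macro_expansion(); details elided.
fn expensive_prettify_placeholder() -> String {
    "<prettified expansion>".to_owned()
}

fn main() {
    // Listing assists is cheap: no expensive work has run yet.
    let assists = list_assists();
    // Only when the user actually picks "Inline macro" do we pay the cost.
    let first = &assists[0];
    let edit = (first.compute_edit)();
    println!("{}: {edit}", first.label);
}
```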
Computing `prettify_macro_expansion()` eagerly was a major source of the performance hang when repeatedly switching back and forth into and out of a large `include!`d file, a.k.a. #18879 (but there are other such scenarios). This does not fix that issue: the call was simply the major hurdle hiding everything else in the profiler, and now that it's gone other things show up (and it's still slow). To categorize the main reasons that remain, as analyzed by my profiler:
We have a lot of diagnostics, driven mainly by two salsa queries, `body_with_source_map()` and `borrowck()`. These two are LRU'd, so with such large files containing lots of small bodies they of course get evicted. If I didn't let the file finish analyzing before starting my back-and-forth, we also get a lot of calls to `infer()`, half of them blocking on the other half. This is also expected, since we cancel them when we switch out of the file, so they never complete and never get cached; they only consume CPU.

There is an important observation here, though: diagnostics are computed twice, once for assists and once for, well, diagnostics. Since some diagnostics are not fully cached in the db, this may cause an actual slowdown for real users. We probably want to fix this, perhaps by caching the diagnostics of the last file.
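One possible shape for that idea, sketched below with invented, simplified types (`FileId`, `Diagnostic`, and the `compute` callback are placeholders, not rust-analyzer's actual API): remember the diagnostics of the last file they were computed for, so the assists pass and the diagnostics pass for the same file don't both pay the full cost.

```rust
/// Placeholder types for illustration; the real ones live in rust-analyzer.
type FileId = u32;

#[derive(Clone, Debug)]
struct Diagnostic(String);

/// Caches the diagnostics of the single most recent file, so a second
/// request for the same file (e.g. from the assists path) is free.
#[derive(Default)]
struct LastFileDiagnosticsCache {
    last: Option<(FileId, Vec<Diagnostic>)>,
}

impl LastFileDiagnosticsCache {
    fn diagnostics(
        &mut self,
        file: FileId,
        compute: impl FnOnce(FileId) -> Vec<Diagnostic>,
    ) -> &[Diagnostic] {
        let hit = matches!(&self.last, Some((cached, _)) if *cached == file);
        if !hit {
            // Cache miss: recompute and keep only this one file's result,
            // evicting whatever was cached before.
            self.last = Some((file, compute(file)));
        }
        &self.last.as_ref().unwrap().1
    }
}

fn main() {
    let mut cache = LastFileDiagnosticsCache::default();
    let compute = |file: FileId| vec![Diagnostic(format!("diagnostic for file {file}"))];
    // The first call computes; the second call for the same file hits the cache.
    println!("{}", cache.diagnostics(1, compute).len());
    println!("{}", cache.diagnostics(1, compute).len());
}
```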
There is also a fair share of semantic highlighting, which is fair, given that a lot of its work isn't (and couldn't be) cached in the db. Another large part is assists, which tend to execute a lot of code when being listed, in order to decide whether they are available or not. Perhaps we could improve on this metric too.
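As a purely speculative illustration of one such improvement (invented names, not rust-analyzer's actual structure): let each assist advertise a cheap, syntax-only check, and run the expensive semantic analysis only for candidates that pass it.

```rust
/// Hypothetical sketch of a cheap pre-filter for assist availability;
/// the names and types are invented for this example.
struct AssistCandidate {
    id: &'static str,
    /// Cheap, purely syntactic check (e.g. "is the cursor on a macro call?").
    quick_check: fn(&str, usize) -> bool,
    /// Expensive check that may need type inference or macro expansion.
    full_check: fn(&str, usize) -> bool,
}

fn available_assists(
    candidates: &[AssistCandidate],
    source: &str,
    offset: usize,
) -> Vec<&'static str> {
    candidates
        .iter()
        // Run the expensive check only for candidates that survive the
        // cheap syntactic filter.
        .filter(|c| (c.quick_check)(source, offset) && (c.full_check)(source, offset))
        .map(|c| c.id)
        .collect()
}

fn main() {
    let candidates = [AssistCandidate {
        id: "inline_macro",
        quick_check: |src, offset| src[..offset].ends_with('!'),
        full_check: |_, _| true, // stand-in for the real semantic analysis
    }];
    println!("{:?}", available_assists(&candidates, "println!", 8));
}
```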
Overall, I'm tending to close (oh no GitHub, don't close it!) #18879 as "won't fix", given this is a very atypical scenario. But perhaps there are still a few points we can improve on, guided by the information from this issue.