Properly capture trailing 'unglued' token #79978
Conversation
(rust-highfive has picked a reviewer for you, use r? to override)
Force-pushed from 27fd7e1 to 759403f
What a coincidence, I reviewed #79912 today and started thinking - how does token collection work with cases like …? Could you actually add …
All this stuff will go away once the lexer starts producing "fine-grained" tokens, e.g. two tokens for …
@petrochenkov: We can't run into this kind of issue with …
If we try to capture the `Vec<u8>` in `Option<Vec<u8>>`, we'll need to capture a `>` token which was 'unglued' from a `>>` token. The processing of unglueing a token for parsing purposes bypasses the usual capturing infrastructure, so we currently lose the trailing `>`. As a result, we fall back to the reparsed `TokenStream`, causing us to lose spans. This commit makes token capturing keep track of a trailing 'unglued' token. Note that we don't need to care about unglueing except at the end of the captured tokens - if we capture both the first and second unglued tokens, then we'll end up capturing the full 'glued' token, which already works correctly.
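The scenario in the commit message can be sketched with a small declarative macro. The macro name `capture_type` is hypothetical, chosen only for illustration; the point is that when a `ty` fragment ending in nested generics is captured, the final `>>` is a single lexer token that the parser must split, and with this fix the captured token stream still round-trips to the original source text.

```rust
// Hypothetical macro for illustration: captures a type fragment and
// turns the captured tokens back into text with `stringify!`.
macro_rules! capture_type {
    ($t:ty) => {
        stringify!($t)
    };
}

fn main() {
    // The lexer emits a single `>>` token at the end of `Option<Vec<u8>>`.
    // While parsing the nested generics, the parser "unglues" it into two
    // `>` tokens; the fix ensures the trailing unglued `>` is recorded by
    // the token-capturing code instead of being dropped.
    let captured = capture_type!(Option<Vec<u8>>);
    // Ignoring whitespace, the captured tokens round-trip faithfully.
    assert_eq!(captured.replace(' ', ""), "Option<Vec<u8>>");
    println!("{captured}");
}
```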
Force-pushed from 759403f to e6fa633
@bors r+
📌 Commit e6fa633 has been approved by …
☀️ Test successful - checks-actions