Remove unnecessary "return 100"s from tests #89550
Conversation
Tagging subscribers to this area: @dotnet/area-system-reflection-metadata
Issue Details: Note: contains #89521. Add XUnitWrapperGenerator check for test methods that always return 100 and change them to void.

Tagging subscribers to this area: @JulieLeeMSFT, @jakobbotsch
Issue Details: Note: contains #89521. Add XUnitWrapperGenerator check for test methods that always return 100 and change them to void.
This should be ready @trylek @jkoritzinsky @TIHan cc @dotnet/jit-contrib. The actual change is small; the tests almost entirely just change to void, with the occasional related adjustment. I've scanned the entire diff, though of course I'm the author.
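For illustration, a typical conversion looks roughly like the following. This is a hypothetical test, not taken from the PR diff; the method name and assertion are assumptions.

// Before: the old convention signals success by returning the constant 100.
[Fact]
public static int TestEntryPoint()
{
    Assert.True(SomeScenario());
    return 100;
}

// After: the wrapper generator accepts a void test, so the constant return is dropped.
[Fact]
public static void TestEntryPoint()
{
    Assert.True(SomeScenario());
}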
// Look for return statements in every declaring syntax reference of the test method.
foreach (SyntaxReference node in method.DeclaringSyntaxReferences.Where(n => n != null))
{
    foreach (ReturnStatementSyntax rs in node.GetSyntax(context.CancellationToken).DescendantNodes().OfType<ReturnStatementSyntax>())
    {
        // Bare "return;" statements are irrelevant to this check.
        if (rs.Expression == null) continue;
        found = true;
        // A literal "return 100;" is the success convention; keep scanning.
        if (rs.Expression is LiteralExpressionSyntax les && les.Token.Value is int i && i == 100) continue;
        // Any other returned value means the method genuinely uses its return value, so bail out.
        return;
    }
}

// No value-returning "return" was seen at all; nothing to flag.
if (!found) return;
For a future PR, I think it's worthwhile moving these checks into a separate analyzer instead of doing them as part of the source generator. If we do them in an analyzer, we can get better performance at build time and we could use better tools (like the IOperation APIs) to validate these sorts of scenarios.
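For what that might look like, here is a minimal sketch of a standalone analyzer using the IOperation APIs. The analyzer name, diagnostic id, and overall shape are assumptions for illustration, not code from this PR, and it omits checking for the [Fact] attribute.

using System.Collections.Immutable;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;
using Microsoft.CodeAnalysis.Operations;

// Hypothetical analyzer: flags int-returning methods whose every value-returning
// "return" is the constant 100, suggesting they could return void instead.
[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class AlwaysReturns100Analyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new(
        id: "XUW0001",                    // assumed id, not an existing runtime rule
        title: "Test always returns 100",
        messageFormat: "Method '{0}' always returns 100 and can return void",
        category: "Usage",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();

        context.RegisterOperationBlockAction(blockContext =>
        {
            if (blockContext.OwningSymbol is not IMethodSymbol method ||
                method.ReturnType.SpecialType != SpecialType.System_Int32)
            {
                return;
            }

            // Collect every "return <expr>;" in the method body via IOperation.
            var returns = blockContext.OperationBlocks
                .SelectMany(block => block.DescendantsAndSelf())
                .OfType<IReturnOperation>()
                .Where(r => r.ReturnedValue is not null)
                .ToList();

            // Report only when there is at least one such return and all of them are the literal 100.
            if (returns.Count > 0 &&
                returns.All(r => r.ReturnedValue!.ConstantValue is { HasValue: true, Value: 100 }))
            {
                blockContext.ReportDiagnostic(Diagnostic.Create(Rule, method.Locations[0], method.Name));
            }
        });
    }
}

Working on operations rather than raw syntax would also handle cases the syntax walk misses, such as constants folded from expressions.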
I like this idea, but I mainly did it this way because I'm unfamiliar with all of this and couldn't easily figure out how to set anything up (this was primarily copying and pasting in this file). So I'll either need a template for doing so, or more time to turn the docs/Bing chat into something real.
Nice, thank you!
/azp run runtime-coreclr outerloop
Azure Pipelines successfully started running 1 pipeline(s).
A new analyzer means a race condition with newly added tests (audit commits and/or rerun old test jobs).
I have rebased this against the current main, the change passes locally for me, and I plan to merge it once I find the lab results reasonable.
/azp run runtime-coreclr outerloop
/azp run runtime-extra-platforms
Azure Pipelines successfully started running 1 pipeline(s).
Azure Pipelines successfully started running 1 pipeline(s).
Just like in the previous case, runtime and outerloop are clean, and runtime-extra-platforms, while not clean, doesn't look any worse than in other runs. Merging.
Add XUnitWrapperGenerator check for test methods that always return 100 and change them to void.