` to the PR description. All such specified jobs will be executed
+ in the try build once the `@bors try` command is used on the PR. If no try
+ jobs are specified in this way, the jobs defined in the `try` section of
+ [`jobs.yml`] will be executed by default.
+
+> **Using `try-job` PR description directives**
+>
+> 1. Identify which set of try-jobs (max 10) you would like to exercise. You can
+> find the names of the CI jobs in [`jobs.yml`].
+>
+> 2. Amend the PR description to include the desired `try-job` directives
+> (usually at the end of the PR description), e.g.
+>
+> ```text
+> This PR fixes #123456.
+>
+> try-job: x86_64-msvc
+> try-job: test-various
+> ```
+>
+> Each `try-job` directive must be on its own line.
+>
+> 3. Run the prescribed try jobs with `@bors try`. As mentioned above, this
+> requires the user to either (1) have `try` permissions or (2) have been
+> delegated `try` permissions via `@bors delegate` by someone who has them.
+>
+> Note that this is usually easier than manually editing [`jobs.yml`].
+> However, it can be less flexible because you cannot adjust the set of tests
+> that are exercised this way.
+
+Try jobs are defined in the `try` section of [`jobs.yml`]. They are executed on
+the `try` branch under the `rust-lang-ci/rust` repository[^rust-lang-ci] and
+their results can be seen [here](https://github.com/rust-lang-ci/rust/actions),
+although usually you will be notified of the result by a comment made by bors on
+the corresponding PR.
+
+Multiple try builds can execute concurrently across different PRs.
+
+
+bors identifies try jobs by commit hash. This means that if you have two PRs
+whose latest commits share the same hash, running `@bors try` on them will
+refer to the *same* try job, which confuses bors. Please refrain from doing so.
+
[rustc-perf]: https://github.com/rust-lang/rustc-perf
[crater]: https://github.com/rust-lang/crater
### Modifying CI jobs
-If you want to modify what gets executed on our CI, you can simply modify the `pr`, `auto` or `try` sections of the [`jobs.yml`] file.
+If you want to modify what gets executed on our CI, you can simply modify the
+`pr`, `auto` or `try` sections of the [`jobs.yml`] file.
-You can also modify what gets executed temporarily, for example to test a particular platform
-or configuration that is challenging to test locally (for example, if a Windows build fails, but you don't have access to a Windows machine). Don't hesitate to use CI resources in such situations to try out a fix!
+You can also modify what gets executed temporarily, for example to test a
+particular platform or configuration that is challenging to test locally (for
+example, if a Windows build fails, but you don't have access to a Windows
+machine). Don't hesitate to use CI resources in such situations to try out a
+fix!
You can perform an arbitrary CI job in two ways:
-- Use the [try build](#try-builds) functionality, and specify the CI jobs that you want to be
-executed in try builds in your PR description.
-- Modify the [`pr`](#pull-request-builds) section of `jobs.yml` to specify which CI jobs should be
-executed after each push to your PR. This might be faster than repeatedly starting try builds.
-
-To modify the jobs executed after each push to a PR, you can simply copy one of the job definitions from the `auto` section to the `pr` section. For example, the `x86_64-msvc` job is responsible for running the 64-bit MSVC tests.
-You can copy it to the `pr` section to cause it to be executed after a commit is pushed to your
-PR, like this:
+- Use the [try build](#try-builds) functionality, and specify the CI jobs that
+ you want to be executed in try builds in your PR description.
+- Modify the [`pr`](#pull-request-builds) section of `jobs.yml` to specify which
+ CI jobs should be executed after each push to your PR. This might be faster
+ than repeatedly starting try builds.
+
+To modify the jobs executed after each push to a PR, you can simply copy one of
+the job definitions from the `auto` section to the `pr` section. For example,
+the `x86_64-msvc` job is responsible for running the 64-bit MSVC tests. You can
+copy it to the `pr` section to cause it to be executed after a commit is pushed
+to your PR, like this:
```yaml
pr:
@@ -109,61 +221,69 @@ pr:
<<: *job-windows-8c
```
-Then you can commit the file and push it to your PR branch on GitHub. GitHub Actions should then
-execute this CI job after each push to your PR.
+Then you can commit the file and push it to your PR branch on GitHub. GitHub
+Actions should then execute this CI job after each push to your PR.
+
+
+
+**After you have finished your experiments, don't forget to remove any changes
+you have made to `jobs.yml`, if they were supposed to be temporary!**
-**After you have finished your experiments, don't forget to remove any changes you have made to `jobs.yml`, if they were supposed to be temporary!**
+A good practice is to prefix the PR title with `[WIP]` while you are still
+running try jobs, and to mark the commit that modifies the CI jobs for testing
+purposes with `[DO NOT MERGE]`.
+
Although you are welcome to use CI, just be conscious that this is a shared
-resource with limited concurrency. Try not to enable too many jobs at once (one or two should be sufficient in
-most cases).
+resource with limited concurrency. Try not to enable too many jobs at once (one
+or two should be sufficient in most cases).
## Merging PRs serially with bors
-CI services usually test the last commit of a branch merged with the last
-commit in `master`, and while that’s great to check if the feature works in
-isolation, it doesn’t provide any guarantee the code is going to work once it’s
-merged. Breakages like these usually happen when another, incompatible PR is
-merged after the build happened.
+CI services usually test the last commit of a branch merged with the last commit
+in `master`, and while that’s great to check if the feature works in isolation,
+it doesn’t provide any guarantee the code is going to work once it’s merged.
+Breakages like these usually happen when another, incompatible PR is merged
+after the build happened.
-To ensure a `master` branch that works all the time, we forbid manual merges. Instead,
-all PRs have to be approved through our bot, [bors] (the software behind it is
-called [homu]). All the approved PRs are put [in a queue][merge queue] (sorted
-by priority and creation date) and are automatically tested one at the time. If
-all the builders are green, the PR is merged, otherwise the failure is recorded
-and the PR will have to be re-approved again.
+To ensure a `master` branch that works all the time, we forbid manual merges.
+Instead, all PRs have to be approved through our bot, [bors] (the software
+behind it is called [homu]). All the approved PRs are put in a [merge queue]
+(sorted by priority and creation date) and are automatically tested one at a
+time. If all the builders are green, the PR is merged; otherwise the failure is
+recorded and the PR will have to be re-approved.
Bors doesn’t interact with CI services directly, but it works by pushing the
merge commit it wants to test to specific branches (like `auto` or `try`), which
-are configured to execute CI checks. Bors then detects the
-outcome of the build by listening for either Commit Statuses or Check Runs.
-Since the merge commit is based on the latest `master` and only one can be tested
-at the same time, when the results are green, `master` is fast-forwarded to that
-merge commit.
+are configured to execute CI checks. Bors then detects the outcome of the build
+by listening for either Commit Statuses or Check Runs. Since the merge commit is
+based on the latest `master` and only one can be tested at the same time, when
+the results are green, `master` is fast-forwarded to that merge commit.
Unfortunately, testing a single PR at a time, combined with our long CI (~2
hours for a full run), means we can’t merge too many PRs in a single day, and a
-single failure greatly impacts our throughput for the day. The maximum number
-of PRs we can merge in a day is around ~10.
+single failure greatly impacts our throughput for the day. The maximum number of
+PRs we can merge in a day is around 10.
-The large CI run times and requirement for a large builder pool is largely due to the
-fact that full release artifacts are built in the `dist-` builders. This is worth it
-because these release artifacts:
+The large CI run times and the requirement for a large builder pool are largely
+due to the fact that full release artifacts are built in the `dist-` builders.
+This is worth it because these release artifacts:
-- Allow perf testing even at a later date
-- Allow bisection when bugs are discovered later
-- Ensure release quality since if we're always releasing, we can catch problems early
+- Allow perf testing even at a later date.
+- Allow bisection when bugs are discovered later.
+- Ensure release quality since if we're always releasing, we can catch problems
+ early.
### Rollups
Some PRs don’t need the full test suite to be executed: trivial changes like
-typo fixes or README improvements *shouldn’t* break the build, and testing
-every single one of them for 2+ hours is a big waste of time. To solve this,
-we regularly create a "rollup", a PR where we merge several pending trivial
-PRs so they can be tested together. Rollups are created manually by a team member
-using the "create a rollup" button on the [merge queue]. The team member uses their
-judgment to decide if a PR is risky or not, and are the best tool we have at
-the moment to keep the queue in a manageable state.
+typo fixes or README improvements *shouldn’t* break the build, and testing every
+single one of them for 2+ hours is a big waste of time. To solve this, we
+regularly create a "rollup", a PR where we merge several pending trivial PRs so
+they can be tested together. Rollups are created manually by a team member using
+the "create a rollup" button on the [merge queue]. The team member uses their
+judgment to decide if a PR is risky or not, and are the best tool we have at the
+moment to keep the queue in a manageable state.
## Docker
@@ -174,20 +294,25 @@ platform’s custom [Docker container]. This has a lot of advantages for us:
underlying image (switching from the trusty image to xenial was painless for
us).
- We can use ancient build environments to ensure maximum binary compatibility,
- for example [using older CentOS releases][dist-x86_64-linux] on our Linux builders.
-- We can avoid reinstalling tools (like QEMU or the Android emulator) every
- time thanks to Docker image caching.
+ for example [using older CentOS releases][dist-x86_64-linux] on our Linux
+ builders.
+- We can avoid reinstalling tools (like QEMU or the Android emulator) every time
+ thanks to Docker image caching.
- Users can run the same tests in the same environment locally by just running
- `src/ci/docker/run.sh image-name`, which is awesome to debug failures.
+ `src/ci/docker/run.sh image-name`, which is awesome to debug failures (see
+ the sketch after this list). Note that only Linux Docker images are available
+ locally, due to licensing and other restrictions.
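+
+For example, a minimal sketch of running one builder's checks locally (the
+image name is illustrative; pick any directory under `src/ci/docker`):
+
+```bash
+# Build the Docker image for the given builder and run its CI checks locally.
+./src/ci/docker/run.sh x86_64-gnu-llvm-17
+```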
-The docker images prefixed with `dist-` are used for building artifacts while those without that prefix run tests and checks.
+The Docker images prefixed with `dist-` are used for building artifacts, while
+those without that prefix run tests and checks.
We also run tests for less common architectures (mainly Tier 2 and Tier 3
-platforms) in CI. Since those platforms are not x86 we either run
-everything inside QEMU or just cross-compile if we don’t want to run the tests
-for that platform.
+platforms) in CI. Since those platforms are not x86, we either run everything
+inside QEMU or just cross-compile if we don’t want to run the tests for that
+platform.
-These builders are running on a special pool of builders set up and maintained for us by GitHub.
+These builders are running on a special pool of builders set up and maintained
+for us by GitHub.
[Docker container]: https://github.com/rust-lang/rust/tree/master/src/ci/docker
@@ -198,10 +323,11 @@ Our CI workflow uses various caching mechanisms, mainly for two things:
### Docker images caching
The Docker images we use to run most of the Linux-based builders take a *long*
-time to fully build. To speed up the build, we cache it using [Docker registry caching],
-with the intermediate artifacts being stored on [ghcr.io]. We also push the built
-Docker images to ghcr, so that they can be reused by other tools (rustup) or
-by developers running the Docker build locally (to speed up their build).
+time to fully build. To speed up the build, we cache it using [Docker registry
+caching], with the intermediate artifacts being stored on [ghcr.io]. We also
+push the built Docker images to ghcr, so that they can be reused by other tools
+(rustup) or by developers running the Docker build locally (to speed up their
+build).
Since we test multiple, diverged branches (`master`, `beta` and `stable`), we
can’t rely on a single cache for the images, otherwise builds on a branch would
@@ -216,8 +342,9 @@ Dockerfiles and related scripts.
We build some C/C++ stuff in various CI jobs, and we rely on [sccache] to cache
the intermediate LLVM artifacts. Sccache is a distributed ccache developed by
-Mozilla, which can use an object storage bucket as the storage backend. In our case,
-the artefacts are uploaded to an S3 bucket that we control (`rust-lang-ci-sccache2`).
+Mozilla, which can use an object storage bucket as the storage backend. In our
+case, the artifacts are uploaded to an S3 bucket that we control
+(`rust-lang-ci-sccache2`).
[sccache]: https://github.com/mozilla/sccache
@@ -228,16 +355,16 @@ During the years we developed some custom tooling to improve our CI experience.
### Rust Log Analyzer to show the error message in PRs
The build logs for `rust-lang/rust` are huge, and it’s not practical to find
-what caused the build to fail by looking at the logs. To improve the
-developers’ experience we developed a bot called [Rust Log Analyzer][rla] (RLA)
-that receives the build logs on failure and extracts the error message
-automatically, posting it on the PR.
+what caused the build to fail by looking at the logs. To improve the developers’
+experience we developed a bot called [Rust Log Analyzer][rla] (RLA) that
+receives the build logs on failure and extracts the error message automatically,
+posting it on the PR.
-The bot is not hardcoded to look for error strings, but was trained with a
-bunch of build failures to recognize which lines are common between builds and
-which are not. While the generated snippets can be weird sometimes, the bot is
-pretty good at identifying the relevant lines even if it’s an error we've never
-seen before.
+The bot is not hardcoded to look for error strings, but was trained with a bunch
+of build failures to recognize which lines are common between builds and which
+are not. While the generated snippets can be weird sometimes, the bot is pretty
+good at identifying the relevant lines even if it’s an error we've never seen
+before.
[rla]: https://github.com/rust-lang/rust-log-analyzer
@@ -245,16 +372,16 @@ seen before.
The `rust-lang/rust` repo doesn’t only test the compiler on its CI, but also a
variety of tools and documentation. Some documentation is pulled in via git
-submodules. If we blocked merging rustc PRs on the documentation being fixed,
-we would be stuck in a chicken-and-egg problem, because the documentation's CI
-would not pass since updating it would need the not-yet-merged version of
-rustc to test against (and we usually require CI to be passing).
+submodules. If we blocked merging rustc PRs on the documentation being fixed, we
+would be stuck in a chicken-and-egg problem, because the documentation's CI
+would not pass since updating it would need the not-yet-merged version of rustc
+to test against (and we usually require CI to be passing).
To avoid the problem, submodules are allowed to fail, and their status is
-recorded in [rust-toolstate]. When a submodule breaks, a bot automatically
-pings the maintainers so they know about the breakage, and it records the
-failure on the toolstate repository. The release process will then ignore
-broken tools on nightly, removing them from the shipped nightlies.
+recorded in [rust-toolstate]. When a submodule breaks, a bot automatically pings
+the maintainers so they know about the breakage, and it records the failure on
+the toolstate repository. The release process will then ignore broken tools on
+nightly, removing them from the shipped nightlies.
While tool failures are allowed most of the time, they’re automatically
forbidden a week before a release: we don’t care if tools are broken on nightly
diff --git a/src/doc/rustc-dev-guide/src/tests/compiletest.md b/src/doc/rustc-dev-guide/src/tests/compiletest.md
index 6d8a11ecd2290..71b1b918304e4 100644
--- a/src/doc/rustc-dev-guide/src/tests/compiletest.md
+++ b/src/doc/rustc-dev-guide/src/tests/compiletest.md
@@ -4,158 +4,163 @@
## Introduction
-`compiletest` is the main test harness of the Rust test suite.
-It allows test authors to organize large numbers of tests
-(the Rust compiler has many thousands),
-efficient test execution (parallel execution is supported),
-and allows the test author to configure behavior and expected results of both
+`compiletest` is the main test harness of the Rust test suite. It allows test
+authors to organize large numbers of tests (the Rust compiler has many
+thousands), supports efficient test execution (including in parallel), and
+allows the test author to configure the behavior and expected results of both
individual and groups of tests.
-> NOTE:
-> For macOS users, `SIP` (System Integrity Protection) [may consistently
-> check the compiled binary by sending network requests to Apple][zulip],
-> so you may get a huge performance degradation when running tests.
+> **Note for macOS users**
>
-> You can resolve it by tweaking the following settings:
-> `Privacy & Security -> Developer Tools -> Add Terminal (Or VsCode, etc.)`.
+> For macOS users, `SIP` (System Integrity Protection) [may consistently check
+> the compiled binary by sending network requests to Apple][zulip], so you may
+> get a huge performance degradation when running tests.
+>
+> You can resolve it by tweaking the following settings: `Privacy & Security ->
+> Developer Tools -> Add Terminal (Or VsCode, etc.)`.
[zulip]: https://rust-lang.zulipchat.com/#narrow/stream/182449-t-compiler.2Fhelp/topic/.E2.9C.94.20Is.20there.20any.20performance.20issue.20for.20MacOS.3F
-`compiletest` may check test code for success, for runtime failure,
-or for compile-time failure.
-Tests are typically organized as a Rust source file with annotations in
-comments before and/or within the test code.
-These comments serve to direct `compiletest` on if or how to run the test,
-what behavior to expect, and more.
-See [header commands](headers.md) and the test suite documentation below
-for more details on these annotations.
+`compiletest` may check test code for compile-time or run-time success/failure.
+
+Tests are typically organized as a Rust source file with annotations in comments
+before and/or within the test code. These comments serve to direct `compiletest`
+on if or how to run the test, what behavior to expect, and more. See
+[directives](directives.md) and the test suite documentation below for more details
+on these annotations.
-See the [Adding new tests](adding.md) chapter for a tutorial on creating a new
-test, and the [Running tests](running.md) chapter on how to run the test
-suite.
+See the [Adding new tests](adding.md) and [Best practices](best-practices.md)
+chapters for a tutorial on creating a new test and advice on writing a good
+test, and the [Running tests](running.md) chapter on how to run the test suite.
-Compiletest itself tries to avoid running tests when the artifacts
-that are involved (mainly the compiler) haven't changed. You can use
-`x test --test-args --force-rerun` to rerun a test even when none of the
-inputs have changed.
+Compiletest itself tries to avoid running tests when the artifacts that are
+involved (mainly the compiler) haven't changed. You can use `x test --test-args
+--force-rerun` to rerun a test even when none of the inputs have changed.
## Test suites
-All of the tests are in the [`tests`] directory.
-The tests are organized into "suites", with each suite in a separate subdirectory.
-Each test suite behaves a little differently, with different compiler behavior
-and different checks for correctness.
-For example, the [`tests/incremental`] directory contains tests for
-incremental compilation.
-The various suites are defined in [`src/tools/compiletest/src/common.rs`] in
-the `pub enum Mode` declaration.
+All of the tests are in the [`tests`] directory. The tests are organized into
+"suites", with each suite in a separate subdirectory. Each test suite behaves a
+little differently, with different compiler behavior and different checks for
+correctness. For example, the [`tests/incremental`] directory contains tests for
+incremental compilation. The various suites are defined in
+[`src/tools/compiletest/src/common.rs`] in the `pub enum Mode` declaration.
The following test suites are available, with links for more information:
-- [`ui`](ui.md) — tests that check the stdout/stderr from the compilation
- and/or running the resulting executable
-- `ui-fulldeps` — `ui` tests which require a linkable build of `rustc` (such
- as using `extern crate rustc_span;` or used as a plugin)
-- [`pretty`](#pretty-printer-tests) — tests for pretty printing
-- [`incremental`](#incremental-tests) — tests incremental compilation behavior
-- [`debuginfo`](#debuginfo-tests) — tests for debuginfo generation running debuggers
-- [`codegen`](#codegen-tests) — tests for code generation
-- [`codegen-units`](#codegen-units-tests) — tests for codegen unit partitioning
-- [`assembly`](#assembly-tests) — verifies assembly output
-- [`mir-opt`](#mir-opt-tests) — tests for MIR generation
-- [`run-make`](#run-make-tests) — general purpose tests using Rust programs (or
- Makefiles (legacy))
-- [`run-pass-valgrind`](#valgrind-tests) — tests run with Valgrind
-- [`coverage`](#coverage-tests) - tests for coverage instrumentation
-- [`coverage-run-rustdoc`](#coverage-tests) - coverage tests that also run
- instrumented doctests
-- [Rustdoc tests](../rustdoc.md#tests):
- - `rustdoc` — tests for rustdoc, making sure that the generated files
- contain the expected documentation.
- - `rustdoc-gui` — tests for rustdoc's GUI using a web browser.
- - `rustdoc-js` — tests to ensure the rustdoc search is working as expected.
- - `rustdoc-js-std` — tests to ensure the rustdoc search is working as expected
- (run specifically on the std docs).
- - `rustdoc-json` — tests on the JSON output of rustdoc.
- - `rustdoc-ui` — tests on the terminal output of rustdoc.
+### Compiler-specific test suites
+
+| Test suite | Purpose |
+|-------------------------------------------|---------------------------------------------------------------------------------------------------------------------|
+| [`ui`](ui.md) | Check the stdout/stderr snapshots from the compilation and/or running the resulting executable |
+| `ui-fulldeps` | `ui` tests which require a linkable build of `rustc` (such as using `extern crate rustc_span;` or used as a plugin) |
+| [`pretty`](#pretty-printer-tests) | Check pretty printing |
+| [`incremental`](#incremental-tests) | Check incremental compilation behavior |
+| [`debuginfo`](#debuginfo-tests) | Check debuginfo generation running debuggers |
+| [`codegen`](#codegen-tests) | Check code generation |
+| [`codegen-units`](#codegen-units-tests) | Check codegen unit partitioning |
+| [`assembly`](#assembly-tests) | Check assembly output |
+| [`mir-opt`](#mir-opt-tests) | Check MIR generation and optimizations |
+| [`run-pass-valgrind`](#valgrind-tests) | Run with Valgrind |
+| [`coverage`](#coverage-tests) | Check coverage instrumentation |
+| [`coverage-run-rustdoc`](#coverage-tests) | `coverage` tests that also run instrumented doctests |
+
+### General purpose test suite
+
+[`run-make`](#run-make-tests) tests are general purpose tests using Rust
+programs (or legacy Makefiles).
+
+### Rustdoc test suites
+
+See [Rustdoc tests](../rustdoc.md#tests) for more details.
+
+| Test suite | Purpose |
+|------------------|--------------------------------------------------------------------------|
+| `rustdoc` | Check `rustdoc` generated files contain the expected documentation |
+| `rustdoc-gui` | Check `rustdoc`'s GUI using a web browser |
+| `rustdoc-js` | Check `rustdoc` search is working as expected |
+| `rustdoc-js-std` | Check rustdoc search is working as expected specifically on the std docs |
+| `rustdoc-json` | Check JSON output of `rustdoc` |
+| `rustdoc-ui` | Check terminal output of `rustdoc` |
[`tests`]: https://github.com/rust-lang/rust/blob/master/tests
[`src/tools/compiletest/src/common.rs`]: https://github.com/rust-lang/rust/tree/master/src/tools/compiletest/src/common.rs
### Pretty-printer tests
-The tests in [`tests/pretty`] exercise the "pretty-printing" functionality of `rustc`.
-The `-Z unpretty` CLI option for `rustc` causes it to translate the input source
-into various different formats, such as the Rust source after macro expansion.
+The tests in [`tests/pretty`] exercise the "pretty-printing" functionality of
+`rustc`. The `-Z unpretty` CLI option for `rustc` causes it to translate the
+input source into various different formats, such as the Rust source after macro
+expansion.
-The pretty-printer tests have several [header commands](headers.md) described below.
+The pretty-printer tests have several [directives](directives.md) described below.
These commands can significantly change the behavior of the test, but the
default behavior without any commands is to:
-1. Run `rustc -Zunpretty=normal` on the source file
-2. Run `rustc -Zunpretty=normal` on the output of the previous step
+1. Run `rustc -Zunpretty=normal` on the source file.
+2. Run `rustc -Zunpretty=normal` on the output of the previous step.
3. The output of the previous two steps should be the same.
4. Run `rustc -Zno-codegen` on the output to make sure that it can type check
- (this is similar to running `cargo check`)
+ (this is similar to running `cargo check`).
If any of the commands above fail, then the test fails.
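+
+As an illustration, the default behavior corresponds roughly to the following
+commands (a sketch; the file names are hypothetical):
+
+```bash
+# Step 1: pretty-print the original source.
+rustc -Zunpretty=normal test.rs > round1.rs
+# Step 2: pretty-print the output of the first round.
+rustc -Zunpretty=normal round1.rs > round2.rs
+# Step 3: the two rounds must be identical.
+diff round1.rs round2.rs
+# Step 4: type check the pretty-printed output without generating code.
+rustc -Zno-codegen round1.rs
+```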
-The header commands for pretty-printing tests are:
+The directives for pretty-printing tests are:
-* `pretty-mode` specifies the mode pretty-print tests should run in
- (that is, the argument to `-Zunpretty`).
- The default is `normal` if not specified.
-* `pretty-compare-only` causes a pretty test to only compare the pretty-printed output
- (stopping after step 3 from above).
- It will not try to compile the expanded output to type check it.
- This is needed for a pretty-mode that does not expand to valid
- Rust, or for other situations where the expanded output cannot be compiled.
-* `pretty-expanded` allows a pretty test to also check that the expanded
- output can be type checked.
- That is, after the steps above, it does two more steps:
+- `pretty-mode` specifies the mode pretty-print tests should run in (that is,
+ the argument to `-Zunpretty`). The default is `normal` if not specified.
+- `pretty-compare-only` causes a pretty test to only compare the pretty-printed
+ output (stopping after step 3 from above). It will not try to compile the
+ expanded output to type check it. This is needed for a pretty-mode that does
+ not expand to valid Rust, or for other situations where the expanded output
+ cannot be compiled.
+- `pretty-expanded` allows a pretty test to also check that the expanded output
+ can be type checked. That is, after the steps above, it does two more steps:
> 5. Run `rustc -Zunpretty=expanded` on the original source
> 6. Run `rustc -Zno-codegen` on the expanded output to make sure that it can type check
This is needed because not all code can be compiled after being expanded.
- Pretty tests should specify this if they can.
- An example where this cannot be used is if the test includes `println!`.
- That macro expands to reference private internal functions of the standard
- library that cannot be called directly without the `fmt_internals` feature
- gate.
+ Pretty tests should specify this if they can. An example where this cannot be
+ used is if the test includes `println!`. That macro expands to reference
+ private internal functions of the standard library that cannot be called
+ directly without the `fmt_internals` feature gate.
More history about this may be found in
[#23616](https://github.com/rust-lang/rust/issues/23616#issuecomment-484999901).
-* `pp-exact` is used to ensure a pretty-print test results in specific output.
+- `pp-exact` is used to ensure a pretty-print test results in specific output.
If specified without a value, then it means the pretty-print output should
- match the original source.
- If specified with a value, as in `// pp-exact:foo.pp`,
- it will ensure that the pretty-printed output matches the contents of the given file.
- Otherwise, if `pp-exact` is not specified, then the pretty-printed output
- will be pretty-printed one more time, and the output of the two
- pretty-printing rounds will be compared to ensure that the pretty-printed
- output converges to a steady state.
+ match the original source. If specified with a value, as in `//@
+ pp-exact:foo.pp`, it will ensure that the pretty-printed output matches the
+ contents of the given file. Otherwise, if `pp-exact` is not specified, then
+ the pretty-printed output will be pretty-printed one more time, and the output
+ of the two pretty-printing rounds will be compared to ensure that the
+ pretty-printed output converges to a steady state.
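+
+For illustration, a hypothetical test using `pp-exact` without a value:
+
+```rust,ignore
+//@ pp-exact
+
+// The pretty-printed output of this file must match the source exactly.
+fn main() {}
+```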
[`tests/pretty`]: https://github.com/rust-lang/rust/tree/master/tests/pretty
### Incremental tests
-The tests in [`tests/incremental`] exercise incremental compilation.
-They use [revision headers](#revisions) to tell compiletest to run the
-compiler in a series of steps.
+The tests in [`tests/incremental`] exercise incremental compilation. They use
+the [`revisions` directive](#revisions) to tell compiletest to run the compiler
+in a series of steps.
+
Compiletest starts with an empty directory with the `-C incremental` flag, and
then runs the compiler for each revision, reusing the incremental results from
previous steps.
+
The revisions should start with:
* `rpass` — the test should compile and run successfully
* `rfail` — the test should compile successfully, but the executable should fail to run
* `cfail` — the test should fail to compile
-To make the revisions unique, you should add a suffix like `rpass1` and `rpass2`.
+To make the revisions unique, you should add a suffix like `rpass1` and
+`rpass2`.
+
+To simulate changing the source, compiletest also passes a `--cfg` flag with the
+current revision name.
-To simulate changing the source, compiletest also passes a `--cfg` flag with
-the current revision name.
For example, this will run twice, simulating changing a function:
```rust,ignore
@@ -174,28 +179,27 @@ fn foo() {
fn main() { foo(); }
```
-`cfail` tests support the `forbid-output` header to specify that a certain
-substring must not appear anywhere in the compiler output.
-This can be useful to ensure certain errors do not appear, but this can be
-fragile as error messages change over time, and a test may no longer be
-checking the right thing but will still pass.
+`cfail` tests support the `forbid-output` directive to specify that a certain
+substring must not appear anywhere in the compiler output. This can be useful to
+ensure certain errors do not appear, but this can be fragile as error messages
+change over time, and a test may no longer be checking the right thing but will
+still pass.
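+
+For example, a sketch of a `cfail` test using `forbid-output` (the forbidden
+substring is illustrative):
+
+```rust,ignore
+//@ revisions: cfail1
+//@ forbid-output: E0308
+
+fn main() {
+    // The test fails to compile, but the output must never mention E0308.
+    missing_fn(); //~ ERROR cannot find function
+}
+```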
-`cfail` tests support the `should-ice` header to specify that a test should
-cause an Internal Compiler Error (ICE).
-This is a highly specialized header to check that the incremental cache
-continues to work after an ICE.
+`cfail` tests support the `should-ice` directive to specify that a test should
+cause an Internal Compiler Error (ICE). This is a highly specialized directive
+to check that the incremental cache continues to work after an ICE.
[`tests/incremental`]: https://github.com/rust-lang/rust/tree/master/tests/incremental
### Debuginfo tests
-The tests in [`tests/debuginfo`] test debuginfo generation.
-They build a program, launch a debugger, and issue commands to the debugger.
-A single test can work with cdb, gdb, and lldb.
+The tests in [`tests/debuginfo`] test debuginfo generation. They build a
+program, launch a debugger, and issue commands to the debugger. A single test
+can work with cdb, gdb, and lldb.
-Most tests should have the `// compile-flags: -g` header or something similar
-to generate the appropriate debuginfo.
+Most tests should have the `//@ compile-flags: -g` directive or something
+similar to generate the appropriate debuginfo.
To set a breakpoint on a line, add a `// #break` comment on the line.
@@ -205,15 +209,16 @@ The debuginfo tests consist of a series of debugger commands along with
The commands are comments of the form `// $DEBUGGER-command:$COMMAND` where
`$DEBUGGER` is the debugger being used and `$COMMAND` is the debugger command
to execute.
+
The debugger values can be:
-* `cdb`
-* `gdb`
-* `gdbg` — GDB without Rust support (versions older than 7.11)
-* `gdbr` — GDB with Rust support
-* `lldb`
-* `lldbg` — LLDB without Rust support
-* `lldbr` — LLDB with Rust support (this no longer exists)
+- `cdb`
+- `gdb`
+- `gdbg` — GDB without Rust support (versions older than 7.11)
+- `gdbr` — GDB with Rust support
+- `lldb`
+- `lldbg` — LLDB without Rust support
+- `lldbr` — LLDB with Rust support (this no longer exists)
The commands to check the output are of the form `// $DEBUGGER-check:$OUTPUT`
where `$OUTPUT` is the output to expect.
@@ -237,30 +242,32 @@ fn main() {
fn b() {}
```
-The following [header commands](headers.md) are available to disable a
-test based on the debugger currently being used:
+The following [directives](directives.md) are available to disable a test based on
+the debugger currently being used:
-* `min-cdb-version: 10.0.18317.1001` — ignores the test if the version of cdb
+- `min-cdb-version: 10.0.18317.1001` — ignores the test if the version of cdb
is below the given version
-* `min-gdb-version: 8.2` — ignores the test if the version of gdb is below the
+- `min-gdb-version: 8.2` — ignores the test if the version of gdb is below the
given version
-* `ignore-gdb-version: 9.2` — ignores the test if the version of gdb is equal
+- `ignore-gdb-version: 9.2` — ignores the test if the version of gdb is equal
to the given version
-* `ignore-gdb-version: 7.11.90 - 8.0.9` — ignores the test if the version of
+- `ignore-gdb-version: 7.11.90 - 8.0.9` — ignores the test if the version of
gdb is in a range (inclusive)
-* `min-lldb-version: 310` — ignores the test if the version of lldb is below
+- `min-lldb-version: 310` — ignores the test if the version of lldb is below
the given version
-* `rust-lldb` — ignores the test if lldb is not contain the Rust plugin.
- NOTE: The "Rust" version of LLDB doesn't exist anymore, so this will always be ignored.
- This should probably be removed.
+- `rust-lldb` — ignores the test if lldb does not contain the Rust plugin.
+  NOTE: The "Rust" version of LLDB doesn't exist anymore, so this will always
+  be ignored. This should probably be removed.
> **Note on running lldb debuginfo tests locally**
>
-> If you want to run lldb debuginfo tests locally, then currently on Windows it is required that:
+> If you want to run lldb debuginfo tests locally, then currently on Windows it
+> is required that:
>
> - You have Python 3.10 installed.
-> - You have `python310.dll` available in your `PATH` env var. This is not provided by the standard
-> Python installer you obtain from `python.org`; you need to add this to `PATH` manually.
+> - You have `python310.dll` available in your `PATH` env var. This is not
+> provided by the standard Python installer you obtain from `python.org`; you
+> need to add this to `PATH` manually.
>
> Otherwise the lldb debuginfo tests can produce crashes in mysterious ways.
@@ -269,11 +276,11 @@ test based on the debugger currently being used:
### Codegen tests
-The tests in [`tests/codegen`] test LLVM code generation.
-They compile the test with the `--emit=llvm-ir` flag to emit LLVM IR.
-They then run the LLVM [FileCheck] tool.
-The test is annotated with various `// CHECK` comments to check the generated code.
-See the FileCheck documentation for a tutorial and more information.
+The tests in [`tests/codegen`] test LLVM code generation. They compile the test
+with the `--emit=llvm-ir` flag to emit LLVM IR. They then run the LLVM
+[FileCheck] tool. The test is annotated with various `// CHECK` comments to
+check the generated code. See the [FileCheck] documentation for a tutorial and
+more information.
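+
+A minimal sketch of such a test (the function and `CHECK` lines are
+illustrative):
+
+```rust,ignore
+//@ compile-flags: -Copt-level=3
+#![crate_type = "lib"]
+
+// CHECK-LABEL: @add_one
+#[no_mangle]
+pub fn add_one(x: i32) -> i32 {
+    // CHECK: add i32
+    x + 1
+}
+```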
See also the [assembly tests](#assembly-tests) for a similar set of tests.
@@ -283,18 +290,17 @@ See also the [assembly tests](#assembly-tests) for a similar set of tests.
### Assembly tests
-The tests in [`tests/assembly`] test LLVM assembly output.
-They compile the test with the `--emit=asm` flag to emit a `.s` file with the
-assembly output.
-They then run the LLVM [FileCheck] tool.
+The tests in [`tests/assembly`] test LLVM assembly output. They compile the test
+with the `--emit=asm` flag to emit a `.s` file with the assembly output. They
+then run the LLVM [FileCheck] tool.
-Each test should be annotated with the `// assembly-output:` header
-with a value of either `emit-asm` or `ptx-linker` to indicate
-the type of assembly output.
+Each test should be annotated with the `//@ assembly-output:` directive with a
+value of either `emit-asm` or `ptx-linker` to indicate the type of assembly
+output.
Then, they should be annotated with various `// CHECK` comments to check the
-assembly output.
-See the FileCheck documentation for a tutorial and more information.
+assembly output. See the [FileCheck] documentation for a tutorial and more
+information.
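+
+A minimal sketch of an assembly test (the target and `CHECK` lines are
+illustrative):
+
+```rust,ignore
+//@ assembly-output: emit-asm
+//@ compile-flags: -Copt-level=3
+//@ only-x86_64
+#![crate_type = "lib"]
+
+// CHECK-LABEL: double:
+#[no_mangle]
+pub fn double(x: u32) -> u32 {
+    // CHECK: lea
+    x * 2
+}
+```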
See also the [codegen tests](#codegen-tests) for a similar set of tests.
@@ -310,26 +316,27 @@ These tests work by running `rustc` with a flag to print the result of the
monomorphization collection pass, and then special annotations in the file are
used to compare against that.
-Each test should be annotated with the `// compile-flags:-Zprint-mono-items=VAL`
-header with the appropriate VAL to instruct `rustc` to print the
-monomorphization information.
+Each test should be annotated with the `//@
+compile-flags:-Zprint-mono-items=VAL` directive with the appropriate `VAL` to
+instruct `rustc` to print the monomorphization information.
-Then, the test should be annotated with comments of the form `//~ MONO_ITEM name`
-where `name` is the monomorphized string printed by rustc like `fn ::foo`.
+Then, the test should be annotated with comments of the form `//~ MONO_ITEM
+name` where `name` is the monomorphized string printed by rustc like `fn ::foo`.
To check for CGU partitioning, a comment of the form `//~ MONO_ITEM name @@ cgu`
where `cgu` is a space separated list of the CGU names and the linkage
-information in brackets.
-For example: `//~ MONO_ITEM static function::FOO @@ statics[Internal]`
+information in brackets. For example: `//~ MONO_ITEM static function::FOO @@
+statics[Internal]`
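+
+A small sketch of such a test (the annotations are illustrative; real CGU names
+depend on the file name):
+
+```rust,ignore
+//@ compile-flags: -Zprint-mono-items=eager
+
+//~ MONO_ITEM fn foo
+pub fn foo() {}
+
+//~ MONO_ITEM fn main
+fn main() {
+    foo();
+}
+```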
[`tests/codegen-units`]: https://github.com/rust-lang/rust/tree/master/tests/codegen-units
### Mir-opt tests
-The tests in [`tests/mir-opt`] check parts of the generated MIR to make
-sure it is generated correctly and is doing the expected optimizations.
-Check out the [MIR Optimizations](../mir/optimizations.md) chapter for more.
+The tests in [`tests/mir-opt`] check parts of the generated MIR to make sure it
+is generated correctly and is doing the expected optimizations. Check out the
+[MIR Optimizations](../mir/optimizations.md) chapter for more.
Compiletest will build the test with several flags to dump the MIR output and
set a baseline for optimizations:
@@ -341,29 +348,28 @@ set a baseline for optimizations:
* `-Zdump-mir-exclude-pass-number`
The test should be annotated with `// EMIT_MIR` comments that specify files that
-will contain the expected MIR output.
-You can use `x test --bless` to create the initial expected files.
+will contain the expected MIR output. You can use `x test --bless` to create the
+initial expected files.
There are several forms the `EMIT_MIR` comment can take:
-* `// EMIT_MIR $MIR_PATH.mir` — This will check that the given filename
- matches the exact output from the MIR dump.
- For example, `my_test.main.SimplifyCfg-elaborate-drops.after.mir` will load
- that file from the test directory, and compare it against the dump from
- rustc.
+- `// EMIT_MIR $MIR_PATH.mir` — This will check that the given filename matches
+ the exact output from the MIR dump. For example,
+ `my_test.main.SimplifyCfg-elaborate-drops.after.mir` will load that file from
+ the test directory, and compare it against the dump from rustc.
Checking the "after" file (which is after optimization) is useful if you are
- interested in the final state after an optimization.
- Some rare cases may want to use the "before" file for completeness.
+ interested in the final state after an optimization. Some rare cases may want
+ to use the "before" file for completeness.
-* `// EMIT_MIR $MIR_PATH.diff` — where `$MIR_PATH` is the filename of the MIR
- dump, such as `my_test_name.my_function.EarlyOtherwiseBranch`.
- Compiletest will diff the `.before.mir` and `.after.mir` files, and compare
- the diff output to the expected `.diff` file from the `EMIT_MIR` comment.
+- `// EMIT_MIR $MIR_PATH.diff` — where `$MIR_PATH` is the filename of the MIR
+ dump, such as `my_test_name.my_function.EarlyOtherwiseBranch`. Compiletest
+ will diff the `.before.mir` and `.after.mir` files, and compare the diff
+ output to the expected `.diff` file from the `EMIT_MIR` comment.
This is useful if you want to see how an optimization changes the MIR.
-* `// EMIT_MIR $MIR_PATH.dot` — When using specific flags that dump additional
+- `// EMIT_MIR $MIR_PATH.dot` — When using specific flags that dump additional
MIR data (e.g. `-Z dump-mir-graphviz` to produce `.dot` files), this will
check that the output matches the given file.
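+
+For example, a sketch using the first form (reusing the dump name from the
+example above, in a file named `my_test.rs`):
+
+```rust,ignore
+// EMIT_MIR my_test.main.SimplifyCfg-elaborate-drops.after.mir
+fn main() {
+    let _x = if true { 1 } else { 2 };
+}
+```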
@@ -377,35 +383,33 @@ your test, causing separate files to be generated for 32bit and 64bit systems.
### `run-make` tests
-> NOTE:
+> **Note on phasing out `Makefile`s**
+>
> We are planning to migrate all existing Makefile-based `run-make` tests
-> to Rust recipes. You should not be adding new Makefile-based `run-make`
+> to Rust programs. You should not be adding new Makefile-based `run-make`
> tests.
+>
+> See the tracking issue for this migration.
The tests in [`tests/run-make`] are general-purpose tests using Rust *recipes*,
-which are small programs allowing arbitrary Rust code such as `rustc`
-invocations, and is supported by a [`run_make_support`] library. Using Rust
-recipes provide the ultimate in flexibility.
+which are small programs (`rmake.rs`) allowing arbitrary Rust code such as
+`rustc` invocations, and are supported by a [`run_make_support`] library. Using
+Rust recipes provides the ultimate in flexibility.
-*These should be used as a last resort*. If possible, you should use one of the
-other test suites.
-
-If there is some minor feature missing which you need for your test,
-consider extending compiletest to add a header command for what you need.
-However, if running a bunch of commands is really what you need,
-`run-make` is here to the rescue!
+`run-make` tests should be used if no other test suites better suit your needs.
#### Using Rust recipes
Each test should be in a separate directory with a `rmake.rs` Rust program,
-called the *recipe*. A recipe will be compiled and executed by compiletest
-with the `run_make_support` library linked in.
+called the *recipe*. A recipe will be compiled and executed by compiletest with
+the `run_make_support` library linked in.
-If you need new utilities or functionality, consider extending and improving
-the [`run_make_support`] library.
+If you need new utilities or functionality, consider extending and improving the
+[`run_make_support`] library.
-Compiletest directives like `//@ only-` or `//@ ignore-` are supported in
-`rmake.rs`, like in UI tests.
+Compiletest directives like `//@ only-<target>` or `//@ ignore-<target>` are
+supported in `rmake.rs`, like in UI tests. However, revisions and building
+auxiliary crates via directives are not currently supported.
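+
+A minimal sketch of a recipe, assuming the `rustc` command builder exposed by
+[`run_make_support`] (check the library itself for the current API):
+
+```rust,ignore
+// tests/run-make/<test-name>/rmake.rs
+use run_make_support::rustc;
+
+fn main() {
+    // Invoke the compiler under test; `run()` asserts that the invocation
+    // succeeds.
+    rustc().input("foo.rs").run();
+}
+```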
Two `run-make` tests are ported over to Rust recipes as examples:
@@ -426,14 +430,16 @@ Of course, some tests will not successfully *run* in this way.
#### Using Makefiles (legacy)
-> NOTE:
-> You should avoid writing new Makefile-based `run-make` tests.
+
+You should avoid writing new Makefile-based `run-make` tests.
+
Each test should be in a separate directory with a `Makefile` indicating the
commands to run.
+
There is a [`tools.mk`] Makefile which you can include which provides a bunch of
-utilities to make it easier to run commands and compare outputs.
-Take a look at some of the other tests for some examples on how to get started.
+utilities to make it easier to run commands and compare outputs. Take a look at
+some of the other tests for some examples on how to get started.
[`tools.mk`]: https://github.com/rust-lang/rust/blob/master/tests/run-make/tools.mk
[`tests/run-make`]: https://github.com/rust-lang/rust/tree/master/tests/run-make
@@ -442,9 +448,13 @@ Take a look at some of the other tests for some examples on how to get started.
### Valgrind tests
-The tests in [`tests/run-pass-valgrind`] are for use with [Valgrind].
-These are currently vestigial, as Valgrind is no longer used in CI.
-These may be removed in the future.
+> **TODO**
+>
+> Yeet this if we yeet the test suite.
+
+The tests in [`tests/run-pass-valgrind`] are for use with [Valgrind]. These are
+currently vestigial, as Valgrind is no longer used in CI. These may be removed
+in the future.
[Valgrind]: https://valgrind.org/
[`tests/run-pass-valgrind`]: https://github.com/rust-lang/rust/tree/master/tests/run-pass-valgrind
@@ -453,9 +463,8 @@ These may be removed in the future.
### Coverage tests
The tests in [`tests/coverage`] are shared by multiple test modes that test
-coverage instrumentation in different ways.
-Running the `coverage` test suite will automatically run each test in all of
-the different coverage modes.
+coverage instrumentation in different ways. Running the `coverage` test suite
+will automatically run each test in all of the different coverage modes.
Each mode also has an alias to run the coverage tests in just that mode:
@@ -471,35 +480,34 @@ Each mode also has an alias to run the coverage tests in just that mode:
./x test coverage-map -- tests/coverage/if.rs # runs the specified test in "coverage-map" mode only
```
----
+#### `coverage-map` suite
In `coverage-map` mode, these tests verify the mappings between source code
-regions and coverage counters that are emitted by LLVM.
-They compile the test with `--emit=llvm-ir`,
-then use a custom tool ([`src/tools/coverage-dump`])
-to extract and pretty-print the coverage mappings embedded in the IR.
-These tests don't require the profiler runtime, so they run in PR CI jobs and
-are easy to run/bless locally.
+regions and coverage counters that are emitted by LLVM. They compile the test
+with `--emit=llvm-ir`, then use a custom tool ([`src/tools/coverage-dump`]) to
+extract and pretty-print the coverage mappings embedded in the IR. These tests
+don't require the profiler runtime, so they run in PR CI jobs and are easy to
+run/bless locally.
These coverage map tests can be sensitive to changes in MIR lowering or MIR
optimizations, producing mappings that are different but produce identical
coverage reports.
-As a rule of thumb, any PR that doesn't change coverage-specific
-code should **feel free to re-bless** the `coverage-map` tests as necessary,
-without worrying about the actual changes, as long as the `coverage-run` tests
-still pass.
+As a rule of thumb, any PR that doesn't change coverage-specific code should
+**feel free to re-bless** the `coverage-map` tests as necessary, without
+worrying about the actual changes, as long as the `coverage-run` tests still
+pass.
----
+#### `coverage-run` suite
-In `coverage-run` mode, these tests perform an end-to-end test of coverage reporting.
-They compile a test program with coverage instrumentation, run that program to
-produce raw coverage data, and then use LLVM tools to process that data into a
-human-readable code coverage report.
+In `coverage-run` mode, these tests perform an end-to-end test of coverage
+reporting. They compile a test program with coverage instrumentation, run that
+program to produce raw coverage data, and then use LLVM tools to process that
+data into a human-readable code coverage report.
-Instrumented binaries need to be linked against the LLVM profiler runtime,
-so `coverage-run` tests are **automatically skipped**
-unless the profiler runtime is enabled in `config.toml`:
+Instrumented binaries need to be linked against the LLVM profiler runtime, so
+`coverage-run` tests are **automatically skipped** unless the profiler runtime
+is enabled in `config.toml`:
```toml
# config.toml
@@ -507,10 +515,10 @@ unless the profiler runtime is enabled in `config.toml`:
profiler = true
```
-This also means that they typically don't run in PR CI jobs,
-though they do run as part of the full set of CI jobs used for merging.
+This also means that they typically don't run in PR CI jobs, though they do run
+as part of the full set of CI jobs used for merging.
----
+#### `coverage-run-rustdoc` suite
The tests in [`tests/coverage-run-rustdoc`] also run instrumented doctests and
include them in the coverage report. This avoids having to build rustdoc when
@@ -522,27 +530,30 @@ only running the main `coverage` suite.
### Crashes tests
-[`tests/crashes`] serve as a collection of tests that are expected to cause the compiler to ICE, panic
-or crash in some other way, so that accidental fixes are tracked. This was formally done at
- but doing it inside the rust-lang/rust testsuite is more
-convenient.
+[`tests/crashes`] serves as a collection of tests that are expected to cause
+the compiler to ICE, panic or crash in some other way, so that accidental fixes
+are tracked. This was formerly done at <https://github.com/rust-lang/glacier>,
+but doing it inside the rust-lang/rust testsuite is more convenient.
-It is imperative that a test in the suite causes rustc to ICE, panic or crash crash in some other
-way. A test will "pass" if rustc exits with an exit status other than 1 or 0.
+It is imperative that a test in the suite causes rustc to ICE, panic, or crash
+in some other way. A test will "pass" if rustc exits with an exit status other
+than 1 or 0.
-If you want to see verbose stdout/stderr, you need to set `COMPILETEST_VERBOSE_CRASHES=1`, e.g.
+If you want to see verbose stdout/stderr, you need to set
+`COMPILETEST_VERBOSE_CRASHES=1`, e.g.
```bash
$ COMPILETEST_VERBOSE_CRASHES=1 ./x test tests/crashes/999999.rs --stage 1
```
-When adding crashes from , the issue number should be
-noted in the file name (`12345.rs` should suffice) and also inside the file include a `//@ known-bug
-#4321` directive.
+When adding crashes from <https://github.com/rust-lang/rust/issues>, the issue
+number should be noted in the file name (`12345.rs` should suffice), and the
+file should also include a `//@ known-bug: #4321` directive.
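+
+For example, a sketch of a crash test (the issue number is illustrative):
+
+```rust,ignore
+// tests/crashes/12345.rs
+//@ known-bug: #12345
+
+// Minimized reproduction that currently makes rustc ICE.
+fn main() {
+    /* ... */
+}
+```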
-If you happen to fix one of the crashes, please move it to a fitting subdirectory in `tests/ui` and
-give it a meaningful name. Please add a doc comment at the top of the file explaining why this test
-exists, even better if you can briefly explain how the example causes rustc to crash previously and
+If you happen to fix one of the crashes, please move it to a fitting
+subdirectory in `tests/ui` and give it a meaningful name. Please add a doc
+comment at the top of the file explaining why this test exists; even better if
+you can briefly explain how the example previously caused rustc to crash and
what was done to prevent rustc from ICEing/panicking/crashing.
Adding
@@ -554,24 +565,25 @@ Fixes #MMMMM
to the description of your pull request will ensure the corresponding tickets be closed
automatically upon merge.
-Make sure that your fix actually fixes the root cause of the issue and not just a subset first.
-The issue numbers can be found in the file name or the `//@ known-bug`
-directive inside the test file.
+
+Make sure that your fix actually fixes the root cause of the issue, and not
+just a subset of it. The issue numbers can be found in the file name or the
+`//@ known-bug` directive inside the test file.
[`tests/crashes`]: https://github.com/rust-lang/rust/tree/master/tests/crashes
## Building auxiliary crates
It is common that some tests require additional auxiliary crates to be compiled.
-There are multiple [headers](headers.md) to assist with that:
+There are multiple [directives](directives.md) to assist with that:
-* `aux-build`
-* `aux-crate`
-* `aux-bin`
-* `aux-codegen-backend`
+- `aux-build`
+- `aux-crate`
+- `aux-bin`
+- `aux-codegen-backend`
-`aux-build` will build a separate crate from the named source file.
-The source file should be in a directory called `auxiliary` beside the test file.
+`aux-build` will build a separate crate from the named source file. The source
+file should be in a directory called `auxiliary` beside the test file.
```rust,ignore
//@ aux-build: my-helper.rs
@@ -581,16 +593,14 @@ extern crate my_helper;
```
The aux crate will be built as a dylib if possible (unless on a platform that
-does not support them, or the `no-prefer-dynamic` header is specified in the
-aux file).
-The `-L` flag is used to find the extern crates.
-
-`aux-crate` is very similar to `aux-build`; however, it uses the `--extern`
-flag to link to the extern crate.
-That allows you to specify the additional syntax of the `--extern` flag, such
-as renaming a dependency.
-For example, `// aux-crate:foo=bar.rs` will compile `auxiliary/bar.rs` and
-make it available under then name `foo` within the test.
+does not support them, or the `no-prefer-dynamic` directive is specified in
+the aux file). The `-L` flag is used to find the extern crates.
+
+`aux-crate` is very similar to `aux-build`. However, it uses the `--extern`
+flag to link to the extern crate, which makes the crate available through the
+extern prelude. That allows you to specify the additional syntax of the
+`--extern` flag, such as renaming a dependency. For example,
+`// aux-crate:foo=bar.rs` will compile `auxiliary/bar.rs` and make it available
+under the name `foo` within the test.
This is similar to how Cargo does dependency renaming.
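+
+For illustration, a sketch of a test using the renamed dependency (the `hello`
+function in `auxiliary/bar.rs` is hypothetical):
+
+```rust,ignore
+//@ edition: 2018
+//@ aux-crate: foo=bar.rs
+
+fn main() {
+    // `auxiliary/bar.rs` was linked via `--extern foo=...`, so it is usable
+    // under the name `foo` without an `extern crate` item.
+    foo::hello();
+}
+```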
`aux-bin` is similar to `aux-build` but will build a binary instead of a
@@ -605,8 +615,9 @@ for tests in `tests/ui-fulldeps`, since it requires the use of compiler crates.
If you want a proc-macro dependency, then there currently is some ceremony
needed.
-Place the proc-macro itself in a file like `auxiliary/my-proc-macro.rs`
-with the following structure:
+
+Place the proc-macro itself in a file like `auxiliary/my-proc-macro.rs` with the
+following structure:
```rust,ignore
//@ force-host
@@ -623,10 +634,10 @@ pub fn foo(input: TokenStream) -> TokenStream {
}
```
-The `force-host` is needed because proc-macros are loaded in the host
-compiler, and `no-prefer-dynamic` is needed to tell compiletest to not use
-`prefer-dynamic` which is not compatible with proc-macros.
-The `#![crate_type]` attribute is needed to specify the correct crate-type.
+The `force-host` is needed because proc-macros are loaded in the host compiler,
+and `no-prefer-dynamic` is needed to tell compiletest to not use
+`prefer-dynamic` which is not compatible with proc-macros. The `#![crate_type]`
+attribute is needed to specify the correct crate-type.
Then in your test, you can build with `aux-build`:
@@ -643,21 +654,20 @@ fn main() {
## Revisions
-Revisions allow a single test file to be used for multiple tests.
-This is done by adding a special header at the top of the file:
+Revisions allow a single test file to be used for multiple tests. This is done
+by adding a special directive at the top of the file:
```rust,ignore
//@ revisions: foo bar baz
```
-This will result in the test being compiled (and tested) three times,
-once with `--cfg foo`, once with `--cfg bar`, and once with `--cfg
-baz`.
-You can therefore use `#[cfg(foo)]` etc within the test to tweak
-each of these results.
+This will result in the test being compiled (and tested) three times, once with
+`--cfg foo`, once with `--cfg bar`, and once with `--cfg baz`. You can therefore
+use `#[cfg(foo)]` etc. within the test to tweak each of these results.
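+
+As a minimal sketch (the items here are hypothetical, purely for illustration),
+a revisioned test might look like:
+
+```rust,ignore
+//@ revisions: foo bar
+//@ run-pass
+
+// Each revision compiles exactly one of these definitions.
+#[cfg(foo)]
+fn build_mode() -> &'static str { "foo" }
+
+#[cfg(bar)]
+fn build_mode() -> &'static str { "bar" }
+
+fn main() {
+    println!("{}", build_mode());
+}
+```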
-You can also customize headers and expected error messages to a particular
-revision. To do this, add `[revision-name]` after the `//` comment, like so:
+You can also customize directives and expected error messages to a particular
+revision. To do this, add `[revision-name]` after the `//@` for directives, and
+after `//` for UI error annotations, like so:
```rust,ignore
// A flag to pass in only for cfg `foo`:
@@ -677,8 +687,8 @@ also registered as an additional prefix for FileCheck directives:
```rust,ignore
//@ revisions: NORMAL COVERAGE
-//@ [COVERAGE] compile-flags: -Cinstrument-coverage
-//@ [COVERAGE] needs-profiler-support
+//@[COVERAGE] compile-flags: -Cinstrument-coverage
+//@[COVERAGE] needs-profiler-support
// COVERAGE: @__llvm_coverage_mapping
// NORMAL-NOT: @__llvm_coverage_mapping
@@ -687,45 +697,46 @@ also registered as an additional prefix for FileCheck directives:
fn main() {}
```
-Note that not all headers have meaning when customized to a revision.
-For example, the `ignore-test` header (and all "ignore" headers)
-currently only apply to the test as a whole, not to particular
-revisions. The only headers that are intended to really work when
-customized to a revision are error patterns and compiler flags.
+Note that not all directives have meaning when customized to a revision. For
+example, the `ignore-test` directives (and all "ignore" directives) currently
+only apply to the test as a whole, not to particular revisions. The only
+directives that are intended to really work when customized to a revision are
+error patterns and compiler flags.
-Following is classes of tests that support revisions:
-- UI
+The following test suites support revisions:
+
+- ui
- assembly
- codegen
- coverage
- debuginfo
- rustdoc UI tests
-- incremental (these are special in that they inherently cannot be run in parallel)
+- incremental (these are special in that they inherently cannot be run in
+ parallel)
### Ignoring unused revision names
-Normally, revision names mentioned in other headers and error annotations must
-correspond to an actual revision declared in a `revisions` header. This is
+Normally, revision names mentioned in other directives and error annotations
+must correspond to an actual revision declared in a `revisions` directive. This is
enforced by an `./x test tidy` check.
If a revision name needs to be temporarily removed from the revision list for
-some reason, the above check can be suppressed by adding the revision name to
-an `//@ unused-revision-names:` header instead.
+some reason, the above check can be suppressed by adding the revision name to an
+`//@ unused-revision-names:` directive instead.
Specifying an unused name of `*` (i.e. `//@ unused-revision-names: *`) will
permit any unused revision name to be mentioned.
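+
+As a sketch (with hypothetical revision names), suppressing the check for a
+temporarily removed `bar` revision might look like:
+
+```rust,ignore
+//@ revisions: foo
+//@ unused-revision-names: bar
+
+// `[bar]`-prefixed directives and annotations may remain in the
+// file without tripping the tidy check.
+fn main() {}
+```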
## Compare modes
-Compiletest can be run in different modes, called _compare modes_, which can
-be used to compare the behavior of all tests with different compiler flags
-enabled.
+Compiletest can be run in different modes, called _compare modes_, which can be
+used to compare the behavior of all tests with different compiler flags enabled.
This can help highlight what differences might appear with certain flags, and
check for any problems that might arise.
-To run the tests in a different mode, you need to pass the `--compare-mode`
-CLI flag:
+To run the tests in a different mode, you need to pass the `--compare-mode` CLI
+flag:
```bash
./x test tests/ui --compare-mode=chalk
@@ -733,10 +744,12 @@ CLI flag:
The possible compare modes are:
-* `polonius` — Runs with Polonius with `-Zpolonius`.
-* `chalk` — Runs with Chalk with `-Zchalk`.
-* `split-dwarf` — Runs with unpacked split-DWARF with `-Csplit-debuginfo=unpacked`.
-* `split-dwarf-single` — Runs with packed split-DWARF with `-Csplit-debuginfo=packed`.
+- `polonius` — Runs with Polonius with `-Zpolonius`.
+- `chalk` — Runs with Chalk with `-Zchalk`.
+- `split-dwarf` — Runs with unpacked split-DWARF with
+ `-Csplit-debuginfo=unpacked`.
+- `split-dwarf-single` — Runs with packed split-DWARF with
+ `-Csplit-debuginfo=packed`.
See [UI compare modes](ui.md#compare-modes) for more information about how UI
tests support different output for different modes.
@@ -744,10 +757,9 @@ tests support different output for different modes.
In CI, compare modes are only used in one Linux builder, and only with the
following settings:
-* `tests/debuginfo`: Uses `split-dwarf` mode.
- This helps ensure that none of the debuginfo tests are affected when
- enabling split-DWARF.
+- `tests/debuginfo`: Uses `split-dwarf` mode. This helps ensure that none of the
+ debuginfo tests are affected when enabling split-DWARF.
-Note that compare modes are separate to [revisions](#revisions).
-All revisions are tested when running `./x test tests/ui`, however
-compare-modes must be manually run individually via the `--compare-mode` flag.
+Note that compare modes are separate from [revisions](#revisions). All revisions
+are tested when running `./x test tests/ui`; however, compare modes must each
+be run manually via the `--compare-mode` flag.
diff --git a/src/doc/rustc-dev-guide/src/tests/crater.md b/src/doc/rustc-dev-guide/src/tests/crater.md
index 9a7ff38715b5c..9d4ac87daf36a 100644
--- a/src/doc/rustc-dev-guide/src/tests/crater.md
+++ b/src/doc/rustc-dev-guide/src/tests/crater.md
@@ -1,10 +1,10 @@
# Crater
-[Crater](https://github.com/rust-lang/crater) is a tool for compiling
-and running tests for _every_ crate on [crates.io](https://crates.io) (and a
-few on GitHub). It is mainly used for checking the extent of breakage when
-implementing potentially breaking changes and ensuring lack of breakage by
-running beta vs stable compiler versions.
+[Crater](https://github.com/rust-lang/crater) is a tool for compiling and
+running tests for _every_ crate on [crates.io](https://crates.io) (and a few on
+GitHub). It is mainly used for checking the extent of breakage when implementing
+potentially breaking changes and ensuring lack of breakage by running beta vs
+stable compiler versions.
## When to run Crater
@@ -15,16 +15,16 @@ or could cause breakage. If you are unsure, feel free to ask your PR's reviewer.
The rust team maintains a few machines that can be used for running crater runs
on the changes introduced by a PR. If your PR needs a crater run, leave a
-comment for the triage team in the PR thread. Please inform the team whether
-you require a "check-only" crater run, a "build only" crater run, or a
+comment for the triage team in the PR thread. Please inform the team whether you
+require a "check-only" crater run, a "build only" crater run, or a
"build-and-test" crater run. The difference is primarily in time; the
-conservative (if you're not sure) option is to go for the build-and-test run.
-If making changes that will only have an effect at compile-time (e.g.,
-implementing a new trait) then you only need a check run.
+conservative (if you're not sure) option is to go for the build-and-test run. If
+making changes that will only have an effect at compile-time (e.g., implementing
+a new trait) then you only need a check run.
Your PR will be enqueued by the triage team and the results will be posted when
-they are ready. Check runs will take around ~3-4 days, with the other two
-taking 5-6 days on average.
+they are ready. Check runs will take around 3-4 days, with the other two taking
+5-6 days on average.
While crater is really useful, it is also important to be aware of a few
caveats:
@@ -37,9 +37,9 @@ caveats:
- Crater only runs Linux builds on x86_64. Thus, other architectures and
platforms are not tested. Critically, this includes Windows.
-- Many crates are not tested. This could be for a lot of reasons, including
- that the crate doesn't compile any more (e.g. used old nightly features),
- has broken or flaky tests, requires network access, or other reasons.
+- Many crates are not tested. This could be for a lot of reasons, including that
+ the crate doesn't compile any more (e.g. used old nightly features), has
+ broken or flaky tests, requires network access, or other reasons.
- Before crater can be run, `@bors try` needs to succeed in building artifacts.
This means that if your code doesn't compile, you cannot run crater.
diff --git a/src/doc/rustc-dev-guide/src/tests/headers.md b/src/doc/rustc-dev-guide/src/tests/directives.md
similarity index 99%
rename from src/doc/rustc-dev-guide/src/tests/headers.md
rename to src/doc/rustc-dev-guide/src/tests/directives.md
index 51d422a3cfaa7..c2705a22c3550 100644
--- a/src/doc/rustc-dev-guide/src/tests/headers.md
+++ b/src/doc/rustc-dev-guide/src/tests/directives.md
@@ -2,6 +2,8 @@
+> **FIXME(jieyouxu)** completely revise this chapter.
+
Header commands are special comments that tell compiletest how to build and
interpret a test.
They must appear before the Rust source in the test.
diff --git a/src/doc/rustc-dev-guide/src/tests/ecosystem.md b/src/doc/rustc-dev-guide/src/tests/ecosystem.md
new file mode 100644
index 0000000000000..083601404255b
--- /dev/null
+++ b/src/doc/rustc-dev-guide/src/tests/ecosystem.md
@@ -0,0 +1,28 @@
+# Ecosystem testing
+
+Rust tests integration with real-world code in the ecosystem to catch
+regressions and make informed decisions about the evolution of the language.
+
+## Testing methods
+
+### Crater
+
+Crater is a tool which runs tests on many thousands of public projects. This
+tool has its own separate infrastructure for running, and is not run as part of
+CI. See the [Crater chapter](crater.md) for more details.
+
+### `cargotest`
+
+`cargotest` is a small tool which runs `cargo test` on a few sample projects
+(such as `servo`, `ripgrep`, `tokei`, etc.). This runs as part of CI and ensures
+there aren't any significant regressions.
+
+> Example: `./x test src/tools/cargotest`
+
+### Large OSS Project builders
+
+We have CI jobs that build large open-source Rust projects as regression tests.
+Our integration jobs build the following projects:
+
+- [Fuchsia](fuchsia.md)
+- [Rust for Linux](rust-for-linux.md)
diff --git a/src/doc/rustc-dev-guide/src/tests/fuchsia.md b/src/doc/rustc-dev-guide/src/tests/fuchsia.md
index 53e1038b56de8..e96290b921529 100644
--- a/src/doc/rustc-dev-guide/src/tests/fuchsia.md
+++ b/src/doc/rustc-dev-guide/src/tests/fuchsia.md
@@ -36,7 +36,7 @@ See the [Testing with Docker](docker.md) chapter for more details on how to run
and debug jobs with Docker.
Note that a Fuchsia checkout is *large* – as of this writing, a checkout and
-build takes 46G of space – and as you might imagine, it takes awhile to
+build takes 46G of space – and as you might imagine, it takes a while to
complete.
### Modifying the Fuchsia checkout
@@ -65,11 +65,12 @@ to add this to your `$PATH` for some workflows.
There are a few `fx` subcommands that are relevant, including:
-* `fx set` accepts build arguments, writes them to `out/default/args.gn`, and runs GN.
-* `fx build` builds the Fuchsia project using Ninja. It will automatically pick
+- `fx set` accepts build arguments, writes them to `out/default/args.gn`, and
+ runs GN.
+- `fx build` builds the Fuchsia project using Ninja. It will automatically pick
up changes to build arguments and rerun GN. By default it builds everything,
but it also accepts target paths to build specific targets (see below).
-* `fx clippy` runs Clippy on specific Rust targets (or all of them). We use this
+- `fx clippy` runs Clippy on specific Rust targets (or all of them). We use this
in the Rust CI build to avoid running codegen on most Rust targets. Underneath
it invokes Ninja, just like `fx build`. The clippy results are saved in json
files inside the build output directory before being printed.
@@ -94,8 +95,8 @@ and can also be used in `fx build`.
#### Modifying compiler flags
-You can put custom compiler flags inside a GN `config` that is added to a target.
-As a simple example:
+You can put custom compiler flags inside a GN `config` that is added to a
+target. As a simple example:
```
config("everybody_loops") {
@@ -162,6 +163,6 @@ rustc book][platform-support].
[`//build/config:compiler`]: https://cs.opensource.google/fuchsia/fuchsia/+/main:build/config/BUILD.gn;l=121;drc=c26c473bef93b33117ae417893118907a026fec7
[build system]: https://fuchsia.dev/fuchsia-src/development/build/build_system
-[^loc]: As of June 2024, Fuchsia had about 2 million lines of first-party Rust code
-and a roughly equal amount of third-party code, as counted by tokei (excluding
-comments and blanks).
+[^loc]: As of June 2024, Fuchsia had about 2 million lines of first-party Rust
+code and a roughly equal amount of third-party code, as counted by tokei
+(excluding comments and blanks).
diff --git a/src/doc/rustc-dev-guide/src/tests/integration.md b/src/doc/rustc-dev-guide/src/tests/integration.md
deleted file mode 100644
index 1eddf7ce376c3..0000000000000
--- a/src/doc/rustc-dev-guide/src/tests/integration.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# Integration testing
-
-Rust tests integration with real-world code to catch regressions and make
-informed decisions about the evolution of the language.
-
-## Testing methods
-
-### Crater
-
-Crater is a tool which runs tests on many thousands of public projects. This
-tool has its own separate infrastructure for running, and is not run as part of
-CI. See the [Crater chapter](crater.md) for more details.
-
-### Cargo test
-
-`cargotest` is a small tool which runs `cargo test` on a few sample projects
-(such as `servo`, `ripgrep`, `tokei`, etc.).
-This runs as part of CI and ensures there aren't any significant regressions.
-
-> Example: `./x test src/tools/cargotest`
-
-### Integration builders
-
-Integration jobs build large open-source Rust projects that are used as
-regression tests in CI. Our integration jobs build the following projects:
-
-- [Fuchsia](fuchsia.md)
-- [Rust for Linux](rust-for-linux.md)
-
-## A note about terminology
-
-The term "integration testing" can be used to mean many things. Many of the
-compiletest tests within the Rust repo could be justifiably called integration
-tests, because they test the integration of many parts of the compiler, or test
-the integration of the compiler with other external tools. Calling all of them
-integration tests would not be very helpful, especially since those kinds of
-tests already have their own specialized names.
-
-We use the term "integration" here to mean integrating the Rust compiler and
-toolchain with the ecosystem of Rust projects that depend on it. This is partly
-for lack of a better term, but it also reflects a difference in testing approach
-from other projects and the comparative advantage it implies.
-
-The Rust compiler is part of the ecosystem, and the ecosystem is in many cases
-part of Rust, both in terms of libraries it uses and in terms of the efforts of many
-contributors who come to "scratch their own itch". Finally, because Rust has the
-ability to do integration testing at such a broad scale, it shortens development
-cycles by finding defects earlier.
-
diff --git a/src/doc/rustc-dev-guide/src/tests/intro.md b/src/doc/rustc-dev-guide/src/tests/intro.md
index d0c718d5c4010..132accad67838 100644
--- a/src/doc/rustc-dev-guide/src/tests/intro.md
+++ b/src/doc/rustc-dev-guide/src/tests/intro.md
@@ -2,10 +2,10 @@
-The Rust project runs a wide variety of different tests, orchestrated by
-the build system (`./x test`).
-This section gives a brief overview of the different testing tools.
-Subsequent chapters dive into [running tests](running.md) and [adding new tests](adding.md).
+The Rust project runs a wide variety of different tests, orchestrated by the
+build system (`./x test`). This section gives a brief overview of the different
+testing tools. Subsequent chapters dive into [running tests](running.md) and
+[adding new tests](adding.md).
## Kinds of tests
@@ -14,9 +14,13 @@ Almost all of them are driven by `./x test`, with some exceptions noted below.
### Compiletest
-The main test harness for testing the compiler itself is a tool called [compiletest].
-It supports running different styles of tests, called *test suites*.
-The tests are all located in the [`tests`] directory.
+The main test harness for testing the compiler itself is a tool called
+[compiletest].
+
+[compiletest] supports running different styles of tests, organized into *test
+suites*. A *test mode* may provide common presets/behavior for a set of *test
+suites*. [compiletest]-supported tests are located in the [`tests`] directory.
+
The [Compiletest chapter][compiletest] goes into detail on how to use this tool.
> Example: `./x test tests/ui`
@@ -26,10 +30,10 @@ The [Compiletest chapter][compiletest] goes into detail on how to use this tool.
### Package tests
-The standard library and many of the compiler packages include typical Rust `#[test]`
-unit tests, integration tests, and documentation tests.
-You can pass a path to `x` to almost any package in the `library` or `compiler` directory,
-and `x` will essentially run `cargo test` on that package.
+The standard library and many of the compiler packages include typical Rust
+`#[test]` unit tests, integration tests, and documentation tests. You can pass a
+path to `./x test` for almost any package in the `library/` or `compiler/`
+directory, and `x` will essentially run `cargo test` on that package.
Examples:
@@ -39,25 +43,25 @@ Examples:
| `./x test library/core` | Runs tests on `core` only |
| `./x test compiler/rustc_data_structures` | Runs tests on `rustc_data_structures` |
-The standard library relies very heavily on documentation tests to cover its functionality.
-However, unit tests and integration tests can also be used as needed.
-Almost all of the compiler packages have doctests disabled.
+The standard library relies very heavily on documentation tests to cover its
+functionality. However, unit tests and integration tests can also be used as
+needed. Almost all of the compiler packages have doctests disabled.
All standard library and compiler unit tests are placed in a separate `tests` file
-(which is enforced in [tidy][tidy-unit-tests]).
-This ensures that when the test file is changed, the crate does not need to be recompiled.
-For example:
+(which is enforced in [tidy][tidy-unit-tests]). This ensures that when the test
+file is changed, the crate does not need to be recompiled. For example:
```rust,ignore
#[cfg(test)]
mod tests;
```
-If it wasn't done this way,
-and you were working on something like `core`,
-that would require recompiling the entire standard library, and the entirety of `rustc`.
+If it wasn't done this way, and you were working on something like `core`, that
+would require recompiling the entire standard library, and the entirety of
+`rustc`.
-`./x test` includes some CLI options for controlling the behavior with these tests:
+`./x test` includes some CLI options for controlling the behavior with these
+package tests:
* `--doc` — Only runs documentation tests in the package.
* `--no-doc` — Run all tests *except* documentation tests.
@@ -66,16 +70,18 @@ that would require recompiling the entire standard library, and the entirety of
### Tidy
-Tidy is a custom tool used for validating source code style and formatting conventions,
-such as rejecting long lines.
-There is more information in the [section on coding conventions](../conventions.md#formatting).
+Tidy is a custom tool used for validating source code style and formatting
+conventions, such as rejecting long lines. There is more information in the
+[section on coding conventions](../conventions.md#formatting).
+
+> Example: `./x test tidy`
-> Example: `./x test tidy`
### Formatting
-Rustfmt is integrated with the build system to enforce uniform style across the compiler.
-The formatting check is automatically run by the Tidy tool mentioned above.
+Rustfmt is integrated with the build system to enforce uniform style across the
+compiler. The formatting check is automatically run by the Tidy tool mentioned
+above.
Examples:
@@ -87,10 +93,10 @@ Examples:
### Book documentation tests
-All of the books that are published have their own tests,
-primarily for validating that the Rust code examples pass.
-Under the hood, these are essentially using `rustdoc --test` on the markdown files.
-The tests can be run by passing a path to a book to `./x test`.
+All of the books that are published have their own tests, primarily for
+validating that the Rust code examples pass. Under the hood, these are
+essentially using `rustdoc --test` on the markdown files. The tests can be run
+by passing a path to a book to `./x test`.
> Example: `./x test src/doc/book`
@@ -106,47 +112,48 @@ This requires building all of the documentation, which might take a while.
### Dist check
-`distcheck` verifies that the source distribution tarball created by the build system
-will unpack, build, and run all tests.
+`distcheck` verifies that the source distribution tarball created by the build
+system will unpack, build, and run all tests.
> Example: `./x test distcheck`
### Tool tests
-Packages that are included with Rust have all of their tests run as well.
-This includes things such as cargo, clippy, rustfmt, miri, bootstrap
-(testing the Rust build system itself), etc.
+Packages that are included with Rust have all of their tests run as well. This
+includes things such as cargo, clippy, rustfmt, miri, bootstrap (testing the
+Rust build system itself), etc.
-Most of the tools are located in the [`src/tools`] directory.
-To run the tool's tests, just pass its path to `./x test`.
+Most of the tools are located in the [`src/tools`] directory. To run the tool's
+tests, just pass its path to `./x test`.
> Example: `./x test src/tools/cargo`
Usually these tools involve running `cargo test` within the tool's directory.
-If you want to run only a specified set of tests, append `--test-args FILTER_NAME` to the command.
+If you want to run only a specified set of tests, append `--test-args
+FILTER_NAME` to the command.
> Example: `./x test src/tools/miri --test-args padding`
-In CI, some tools are allowed to fail.
-Failures send notifications to the corresponding teams, and is tracked on the [toolstate website].
-More information can be found in the [toolstate documentation].
+In CI, some tools are allowed to fail. Failures send notifications to the
+corresponding teams and are tracked on the [toolstate website]. More information
+can be found in the [toolstate documentation].
[`src/tools`]: https://github.com/rust-lang/rust/tree/master/src/tools/
[toolstate documentation]: https://forge.rust-lang.org/infra/toolstate.html
[toolstate website]: https://rust-lang-nursery.github.io/rust-toolstate/
-### Integration testing
+### Ecosystem testing
Rust tests integration with real-world code to catch regressions and make
informed decisions about the evolution of the language. There are several kinds
-of integration tests, including Crater. See the [Integration testing
-chapter](integration.md) for more details.
+of ecosystem tests, including Crater. See the [Ecosystem testing
+chapter](ecosystem.md) for more details.
### Performance testing
-A separate infrastructure is used for testing and tracking performance of the compiler.
-See the [Performance testing chapter](perf.md) for more details.
+A separate infrastructure is used for testing and tracking performance of the
+compiler. See the [Performance testing chapter](perf.md) for more details.
## Further reading
diff --git a/src/doc/rustc-dev-guide/src/tests/perf.md b/src/doc/rustc-dev-guide/src/tests/perf.md
index d704a2497eb74..dd85e9d455d25 100644
--- a/src/doc/rustc-dev-guide/src/tests/perf.md
+++ b/src/doc/rustc-dev-guide/src/tests/perf.md
@@ -4,14 +4,15 @@
A lot of work is put into improving the performance of the compiler and
preventing performance regressions.
+
The [rustc-perf](https://github.com/rust-lang/rustc-perf) project provides
-several services for testing and tracking performance.
-It provides hosted infrastructure for running benchmarks as a service.
-At this time, only `x86_64-unknown-linux-gnu` builds are tracked.
+several services for testing and tracking performance. It provides hosted
+infrastructure for running benchmarks as a service. At this time, only
+`x86_64-unknown-linux-gnu` builds are tracked.
A "perf run" is used to compare the performance of the compiler in different
-configurations for a large collection of popular crates.
-Different configurations include "fresh builds", builds with incremental compilation, etc.
+configurations for a large collection of popular crates. Different
+configurations include "fresh builds", builds with incremental compilation, etc.
The result of a perf run is a comparison between two versions of the compiler
(by their commit hashes).
@@ -24,30 +25,29 @@ Any changes are noted in a comment on the PR.
### Manual perf runs
-Additionally, performance tests can be ran before a PR is merged on an as-needed basis.
-You should request a perf run if your PR may affect performance, especially if
-it can affect performance adversely.
+Additionally, performance tests can be run before a PR is merged on an as-needed
+basis. You should request a perf run if your PR may affect performance,
+especially if it can affect performance adversely.
To evaluate the performance impact of a PR, write this comment on the PR:
`@bors try @rust-timer queue`
-> **Note**: Only users authorized to do perf runs are allowed to post this comment.
-> Teams that are allowed to use it are tracked in the [Teams
-> repository](https://github.com/rust-lang/team) with the `perf = true` value
-> in the `[permissions]` section (and bors permissions are also required).
-> If you are not on one of those teams, feel free to ask for someone to post
-> it for you (either on Zulip or ask the assigned reviewer).
+> **Note**: Only users authorized to do perf runs are allowed to post this
+> comment. Teams that are allowed to use it are tracked in the [Teams
+> repository](https://github.com/rust-lang/team) with the `perf = true` value in
+> the `[permissions]` section (and bors permissions are also required). If you
+> are not on one of those teams, feel free to ask for someone to post it for you
+> (either on Zulip or ask the assigned reviewer).
-This will first tell bors to do a "try" build which do a full release build
-for `x86_64-unknown-linux-gnu`.
-After the build finishes, it will place it in the queue to run the performance
-suite against it.
-After the performance tests finish, the bot will post a comment on the PR with
-a summary and a link to a full report.
+This will first tell bors to do a "try" build which does a full release build for
+`x86_64-unknown-linux-gnu`. After the build finishes, it will place it in the
+queue to run the performance suite against it. After the performance tests
+finish, the bot will post a comment on the PR with a summary and a link to a
+full report.
-If you want to do a perf run for an already built artifact (e.g. for a previous try
-build that wasn't benchmarked yet), you can run this instead:
+If you want to do a perf run for an already built artifact (e.g. for a previous
+try build that wasn't benchmarked yet), you can run this instead:
`@rust-timer build `
@@ -56,5 +56,6 @@ You cannot benchmark the same artifact twice though.
More information about the available perf bot commands can be found
[here](https://perf.rust-lang.org/help.html).
-More details about the benchmarking process itself are available in the [perf collector
+More details about the benchmarking process itself are available in the [perf
+collector
documentation](https://github.com/rust-lang/rustc-perf/blob/master/collector/README.md).
diff --git a/src/doc/rustc-dev-guide/src/tests/running.md b/src/doc/rustc-dev-guide/src/tests/running.md
index a081d3db42c72..80789b396e45c 100644
--- a/src/doc/rustc-dev-guide/src/tests/running.md
+++ b/src/doc/rustc-dev-guide/src/tests/running.md
@@ -2,68 +2,84 @@
-You can run the tests using `x`. The most basic command – which
-you will almost never want to use! – is as follows:
+You can run the entire test collection using `x`. But note that running the
+*entire* test collection is almost never what you want to do during local
+development because it takes a very long time. For local development, see the
+following subsection on how to run a subset of tests.
+
+
+Running plain `./x test` will build the stage 1 compiler and then run the whole
+test suite. This not only includes `tests/`, but also the `library/`,
+`compiler/`, and `src/tools/` package tests, and more.
+
+You usually only want to run a subset of the test suites (or even a smaller set
+of tests than that) that you expect will exercise your changes. PR CI exercises
+a subset of the test collections, and merge queue CI will exercise the full
+test collection.
+
```bash
./x test
```
-This will build the stage 1 compiler and then run the whole test
-suite. You probably don't want to do this very often, because it takes
-a very long time, and anyway bors / GitHub Actions will do it for you.
-(Often, I will run this command in the background after opening a PR that
-I think is done, but rarely otherwise. -nmatsakis)
-
-The test results are cached and previously successful tests are
-`ignored` during testing. The stdout/stderr contents as well as a
-timestamp file for every test can be found under `build/ARCH/test/`.
-To force-rerun a test (e.g. in case the test runner fails to notice a change)
-you can simply remove the timestamp file, or use the `--force-rerun` CLI
-option.
-
-Note that some tests require a Python-enabled gdb. You can test if
-your gdb install supports Python by using the `python` command from
-within gdb. Once invoked you can type some Python code (e.g.
-`print("hi")`) followed by return and then `CTRL+D` to execute it.
-If you are building gdb from source, you will need to configure with
-`--with-python=`.
+The test results are cached and previously successful tests are `ignored` during
+testing. The stdout/stderr contents as well as a timestamp file for every test
+can be found under `build/<target-triple>/test/` for the given
+`<target-triple>`. To force-rerun a test (e.g. in case the test runner fails to
+notice a change) you can use the `--force-rerun` CLI option.
+
+> **Note on requirements of external dependencies**
+>
+> Some test suites may require external dependencies. This is especially true of
+> debuginfo tests. Some debuginfo tests require a Python-enabled gdb. You can
+> test if your gdb install supports Python by using the `python` command from
+> within gdb. Once invoked you can type some Python code (e.g. `print("hi")`)
+> followed by return and then `CTRL+D` to execute it. If you are building gdb
+> from source, you will need to configure with
+> `--with-python=`.
## Running a subset of the test suites
-When working on a specific PR, you will usually want to run a smaller
-set of tests. For example, a good "smoke test" that can be used after
-modifying rustc to see if things are generally working correctly would be the
-following:
+When working on a specific PR, you will usually want to run a smaller set of
+tests. For example, a good "smoke test" that can be used after modifying rustc
+to see if things are generally working correctly would be to exercise the `ui`
+test suite ([`tests/ui`]):
```bash
./x test tests/ui
```
-This will run the `ui` test suite. Of course, the choice
-of test suites is somewhat arbitrary, and may not suit the task you are
-doing. For example, if you are hacking on debuginfo, you may be better off
-with the debuginfo test suite:
+This will run the `ui` test suite. Of course, the choice of test suites is
+somewhat arbitrary, and may not suit the task you are doing. For example, if you
+are hacking on debuginfo, you may be better off with the debuginfo test suite:
```bash
./x test tests/debuginfo
```
-If you only need to test a specific subdirectory of tests for any
-given test suite, you can pass that directory to `./x test`:
+If you only need to test a specific subdirectory of tests for any given test
+suite, you can pass that directory as a filter to `./x test`:
```bash
./x test tests/ui/const-generics
```
+> **Note for MSYS2**
+>
+> On MSYS2 the paths seem to be strange and `./x test` neither recognizes
+> `tests/ui/const-generics` nor `tests\ui\const-generics`. In that case, you can
+> work around it by using e.g. `./x test ui
+> --test-args="tests/ui/const-generics"`.
+
Likewise, you can test a single file by passing its path:
```bash
./x test tests/ui/const-generics/const-test.rs
```
-`x` doesn't support running a single tool test by passing its path yet.
-You'll have to use the `--test-args` argument as describled [below](#running-an-individual-test).
+`x` doesn't support running a single tool test by passing its path yet. You'll
+have to use the `--test-args` argument as described
+[below](#running-an-individual-test).
```bash
./x test src/tools/miri --test-args tests/fail/uninit/padding-enum.rs
@@ -81,8 +97,8 @@ You'll have to use the `--test-args` argument as describled [below](#running-an-
./x test --stage 0 library/std
```
-Note that this only runs tests on `std`; if you want to test `core` or other crates,
-you have to specify those explicitly.
+Note that this only runs tests on `std`; if you want to test `core` or other
+crates, you have to specify those explicitly.
### Run the tidy script and tests on the standard library
@@ -96,19 +112,23 @@ you have to specify those explicitly.
./x test --stage 1 library/std
```
-By listing which test suites you want to run you avoid having to run
-tests for components you did not change at all.
+By listing which test suites you want to run you avoid having to run tests for
+components you did not change at all.
-**Warning:** Note that bors only runs the tests with the full stage 2
-build; therefore, while the tests **usually** work fine with stage 1,
-there are some limitations.
+
+Note that bors only runs the tests with the full stage 2 build; therefore, while
+the tests **usually** work fine with stage 1, there are some limitations.
+
### Run all tests using a stage 2 compiler
```bash
./x test --stage 2
```
+
+
You almost never need to do this; CI will run these tests for you.
+
## Run unit tests on the compiler/library
@@ -126,19 +146,19 @@ But unfortunately, it's impossible. You should invoke the following instead:
## Running an individual test
-Another common thing that people want to do is to run an **individual
-test**, often the test they are trying to fix. As mentioned earlier,
-you may pass the full file path to achieve this, or alternatively one
-may invoke `x` with the `--test-args` option:
+Another common thing that people want to do is to run an **individual test**,
+often the test they are trying to fix. As mentioned earlier, you may pass the
+full file path to achieve this, or alternatively one may invoke `x` with the
+`--test-args` option:
```bash
./x test tests/ui --test-args issue-1234
```
-Under the hood, the test runner invokes the standard Rust test runner
-(the same one you get with `#[test]`), so this command would wind up
-filtering for tests that include "issue-1234" in the name. (Thus
-`--test-args` is a good way to run a collection of related tests.)
+Under the hood, the test runner invokes the standard Rust test runner (the same
+one you get with `#[test]`), so this command would wind up filtering for tests
+that include "issue-1234" in the name. Thus, `--test-args` is a good way to run
+a collection of related tests.
## Passing arguments to `rustc` when running tests
@@ -151,9 +171,9 @@ additional arguments to the compiler when building the tests.
## Editing and updating the reference files
-If you have changed the compiler's output intentionally, or you are
-making a new test, you can pass `--bless` to the test subcommand. E.g.
-if some tests in `tests/ui` are failing, you can run
+If you have changed the compiler's output intentionally, or you are making a new
+test, you can pass `--bless` to the test subcommand. E.g. if some tests in
+`tests/ui` are failing, you can run
```text
./x test tests/ui --bless
@@ -167,37 +187,35 @@ all tests. Of course you can also target just specific tests with the
There are a few options for running tests:
-* `config.toml` has the `rust.verbose-tests` option.
- If `false`, each test will print a single dot (the default).
- If `true`, the name of every test will be printed.
- This is equivalent to the `--quiet` option in the [Rust test
+* `config.toml` has the `rust.verbose-tests` option. If `false`, each test will
+  print a single dot (the default); this is equivalent to the `--quiet` option
+  in the [Rust test harness](https://doc.rust-lang.org/rustc/tests/). If `true`,
+  the name of every test will be printed.
* The environment variable `RUST_TEST_THREADS` can be set to the number of
concurrent threads to use for testing.
## Passing `--pass $mode`
-Pass UI tests now have three modes, `check-pass`, `build-pass` and
-`run-pass`. When `--pass $mode` is passed, these tests will be forced
-to run under the given `$mode` unless the directive `// ignore-pass`
-exists in the test file. For example, you can run all the tests in
-`tests/ui` as `check-pass`:
+UI tests have three `pass` modes: `check-pass`, `build-pass`, and `run-pass`.
+When `--pass $mode` is passed, these tests will be forced to run under the given
+`$mode` unless the directive `//@ ignore-pass` exists in the test file. For
+example, you can run all the tests in `tests/ui` as `check-pass`:
```bash
./x test tests/ui --pass check
```
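+
+A minimal sketch of a test that opts out of this override via the
+`//@ ignore-pass` directive:
+
+```rust,ignore
+//@ run-pass
+//@ ignore-pass
+
+// This test keeps its own `run-pass` mode even when the suite is
+// invoked with `--pass check`.
+fn main() {}
+```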
-By passing `--pass $mode`, you can reduce the testing time. For each
-mode, please see [Controlling pass/fail
+By passing `--pass $mode`, you can reduce the testing time. For each mode,
+please see [Controlling pass/fail
expectations](ui.md#controlling-passfail-expectations).
## Running tests with different "compare modes"
-UI tests may have different output depending on certain "modes" that
-the compiler is in. For example, when using the Polonius
-mode, a test `foo.rs` will first look for expected output in
-`foo.polonius.stderr`, falling back to the usual `foo.stderr` if not found.
-The following will run the UI test suite in Polonius mode:
+UI tests may have different output depending on certain "modes" that the
+compiler is in. For example, when using the Polonius mode, a test `foo.rs` will
+first look for expected output in `foo.polonius.stderr`, falling back to the
+usual `foo.stderr` if not found. The following will run the UI test suite in
+Polonius mode:
```bash
./x test tests/ui --compare-mode=polonius
@@ -207,25 +225,25 @@ See [Compare modes](compiletest.md#compare-modes) for more details.
## Running tests manually
-Sometimes it's easier and faster to just run the test by hand.
-Most tests are just `rs` files, so after
-[creating a rustup toolchain](../building/how-to-build-and-run.md#creating-a-rustup-toolchain),
-you can do something like:
+Sometimes it's easier and faster to just run the test by hand. Most tests are
+just `.rs` files, so after [creating a rustup
+toolchain](../building/how-to-build-and-run.md#creating-a-rustup-toolchain), you
+can do something like:
```bash
rustc +stage1 tests/ui/issue-1234.rs
```
-This is much faster, but doesn't always work. For example, some tests
-include directives that specify specific compiler flags, or which rely
-on other crates, and they may not run the same without those options.
+This is much faster, but doesn't always work. For example, some tests include
+directives that specify particular compiler flags, or which rely on other crates,
+and they may not run the same without those options.
## Running `run-make` tests
### Windows
-Running the `run-make` test suite on Windows is a bit more involved. There are numerous
-prerequisites and environmental requirements:
+Running the `run-make` test suite on Windows is currently a bit more involved.
+There are numerous prerequisites and environmental requirements:
- Install msys2:
- Specify `MSYS2_PATH_TYPE=inherit` in `msys2.ini` in the msys2 installation directory, run the
@@ -236,29 +254,38 @@ prerequisites and environmental requirements:
- `pacman -S binutils`
- `./x test run-make` (`./x test tests/run-make` doesn't work)
+There is [ongoing work][port-run-make] to stop relying on `Makefile`s in the
+run-make test suite. Once this work is completed, you can run the entire
+`run-make` test suite on native Windows inside `cmd` or `PowerShell` without
+needing to install and use MSYS2. As of Oct 2024, it is
+already possible to run the vast majority of the `run-make` test suite outside
+of MSYS2, but there will be failures for the tests that still use `Makefile`s
+due to not finding `make`.
## Running tests on a remote machine
Tests may be run on a remote machine (e.g. to test builds for a different
-architecture). This is done using `remote-test-client` on the build machine
-to send test programs to `remote-test-server` running on the remote machine.
+architecture). This is done using `remote-test-client` on the build machine to
+send test programs to `remote-test-server` running on the remote machine.
`remote-test-server` executes the test programs and sends the results back to
the build machine. `remote-test-server` provides *unauthenticated remote code
execution*, so be careful where it is used.
-To do this, first build `remote-test-server` for the remote
-machine, e.g. for RISC-V
+To do this, first build `remote-test-server` for the remote machine, e.g. for
+RISC-V
+
```sh
./x build src/tools/remote-test-server --target riscv64gc-unknown-linux-gnu
```
The binary will be created at
-`./build/host/stage2-tools/$TARGET_ARCH/release/remote-test-server`. Copy
-this over to the remote machine.
+`./build/host/stage2-tools/$TARGET_ARCH/release/remote-test-server`. Copy this
+over to the remote machine.
On the remote machine, run the `remote-test-server` with the `--bind
-0.0.0.0:12345` flag (and optionally `-v` for verbose output). Output should
-look like this:
+0.0.0.0:12345` flag (and optionally `-v` for verbose output). Output should look
+like this:
+
```sh
$ ./remote-test-server -v --bind 0.0.0.0:12345
starting test server
@@ -272,6 +299,7 @@ restrictive IP address when binding.
You can test if the `remote-test-server` is working by connecting to it and
sending `ping\n`. It should reply `pong`:
+
```sh
$ nc $REMOTE_IP 12345
ping
@@ -281,13 +309,15 @@ pong
To run tests using the remote runner, set the `TEST_DEVICE_ADDR` environment
variable then use `x` as usual. For example, to run `ui` tests for a RISC-V
machine with the IP address `1.2.3.4` use
+
```sh
export TEST_DEVICE_ADDR="1.2.3.4:12345"
./x test tests/ui --target riscv64gc-unknown-linux-gnu
```
-If `remote-test-server` was run with the verbose flag, output on the test machine
-may look something like
+If `remote-test-server` was run with the verbose flag, output on the test
+machine may look something like
+
```
[...]
run "/tmp/work/test1007/a"
@@ -311,31 +341,28 @@ output) may fail without ever running on the remote machine.
## Testing on emulators
-Some platforms are tested via an emulator for architectures that aren't
-readily available. For architectures where the standard library is well
-supported and the host operating system supports TCP/IP networking, see the
-above instructions for testing on a remote machine (in this case the
-remote machine is emulated).
+Some platforms are tested via an emulator for architectures that aren't readily
+available. For architectures where the standard library is well supported and
+the host operating system supports TCP/IP networking, see the above instructions
+for testing on a remote machine (in this case the remote machine is emulated).
-There is also a set of tools for orchestrating running the
-tests within the emulator. Platforms such as `arm-android` and
-`arm-unknown-linux-gnueabihf` are set up to automatically run the tests under
-emulation on GitHub Actions. The following will take a look at how a target's tests
-are run under emulation.
+There is also a set of tools for orchestrating running the tests within the
+emulator. Platforms such as `arm-android` and `arm-unknown-linux-gnueabihf` are
+set up to automatically run the tests under emulation on GitHub Actions. The
+following will take a look at how a target's tests are run under emulation.
The Docker image for [armhf-gnu] includes [QEMU] to emulate the ARM CPU
-architecture. Included in the Rust tree are the tools [remote-test-client]
-and [remote-test-server] which are programs for sending test programs and
-libraries to the emulator, and running the tests within the emulator, and
-reading the results. The Docker image is set up to launch
-`remote-test-server` and the build tools use `remote-test-client` to
-communicate with the server to coordinate running tests (see
-[src/bootstrap/src/core/build_steps/test.rs]).
-
-> TODO:
-> Is there any support for using an iOS emulator?
+architecture. Included in the Rust tree are the tools [remote-test-client] and
+[remote-test-server] which are programs for sending test programs and libraries
+to the emulator, and running the tests within the emulator, and reading the
+results. The Docker image is set up to launch `remote-test-server` and the
+build tools use `remote-test-client` to communicate with the server to
+coordinate running tests (see [src/bootstrap/src/core/build_steps/test.rs]).
+
+> **TODO**
>
-> It's also unclear to me how the wasm or asm.js tests are run.
+> - Is there any support for using an iOS emulator?
+> - It's also unclear to me how the wasm or asm.js tests are run.
[armhf-gnu]: https://github.com/rust-lang/rust/tree/master/src/ci/docker/host-x86_64/armhf-gnu/Dockerfile
[QEMU]: https://www.qemu.org/
@@ -374,5 +401,9 @@ need to pass the library file path with `LIBRARY_PATH`:
$ LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/12/ ./x test compiler/rustc_codegen_gcc/
```
-If you encounter bugs or problems, don't hesitate to open issues on
-[rustc_codegen_gcc repository](https://github.com/rust-lang/rustc_codegen_gcc/).
+If you encounter bugs or problems, don't hesitate to open issues on the
+[`rustc_codegen_gcc`
+repository](https://github.com/rust-lang/rustc_codegen_gcc/).
+
+[`tests/ui`]: https://github.com/rust-lang/rust/tree/master/tests/ui
+[port-run-make]: https://github.com/rust-lang/rust/issues/121876
diff --git a/src/doc/rustc-dev-guide/src/tests/rust-for-linux.md b/src/doc/rustc-dev-guide/src/tests/rust-for-linux.md
index 1486383658cd8..0862e0298470b 100644
--- a/src/doc/rustc-dev-guide/src/tests/rust-for-linux.md
+++ b/src/doc/rustc-dev-guide/src/tests/rust-for-linux.md
@@ -1,32 +1,45 @@
# Rust for Linux integration tests
-[Rust for Linux](https://rust-for-linux.com/) (RfL) is an effort for adding support for the Rust programming
-language into the Linux kernel.
+[Rust for Linux](https://rust-for-linux.com/) (RfL) is an effort for adding
+support for the Rust programming language into the Linux kernel.
## Building Rust for Linux in CI
-Rust for Linux builds as part of the suite of bors tests that run before a pull request
-is merged.
+Rust for Linux builds as part of the suite of bors tests that run before a pull
+request is merged.
-The workflow builds a stage1 sysroot of the Rust compiler, downloads the Linux kernel, and tries to compile several Rust for Linux drivers and examples using this sysroot. RfL uses several unstable compiler/language features, therefore this workflow notifies us if a given compiler change would break it.
+The workflow builds a stage1 sysroot of the Rust compiler, downloads the Linux
+kernel, and tries to compile several Rust for Linux drivers and examples using
+this sysroot. RfL uses several unstable compiler/language features; therefore,
+this workflow notifies us if a given compiler change would break it.
-If you are worried that a pull request might break the Rust for Linux builder and want
-to test it out before submitting it to the bors queue, simply add this line to
-your PR description:
+If you are worried that a pull request might break the Rust for Linux builder
+and want to test it out before submitting it to the bors queue, simply add this
+line to your PR description:
> try-job: x86_64-rust-for-linux
-Then when you `@bors try` it will pick the job that builds the Rust for Linux integration.
+Then when you `@bors try` it will pick the job that builds the Rust for Linux
+integration.
## What to do in case of failure
-Currently, we use the following unofficial policy for handling failures caused by a change breaking the RfL integration:
+Currently, we use the following unofficial policy for handling failures caused
+by a change breaking the RfL integration:
- If the breakage was unintentional, then fix the PR.
-- If the breakage was intentional, then let [RFL][rfl-ping] know and discuss what will the kernel need to change.
+- If the breakage was intentional, then let [RFL][rfl-ping] know and discuss
+  what the kernel will need to change.
- If the PR is urgent, then disable the test temporarily.
- - If the PR can wait a few days, then wait for RFL maintainers to provide a new Linux kernel commit hash with the needed changes done, and apply it to the PR, which would confirm the changes work.
+ - If the PR can wait a few days, then wait for RFL maintainers to provide a
+ new Linux kernel commit hash with the needed changes done, and apply it to
+ the PR, which would confirm the changes work.
-If something goes wrong with the workflow, you can ping the [Rust for Linux][rfl-ping] ping group to ask for help.
+If something goes wrong with the workflow, you can ping the [Rust for
+Linux][rfl-ping] ping group to ask for help.
+
+```text
+@rustbot ping rfl
+```
[rfl-ping]: ../notification-groups/rust-for-linux.md
diff --git a/src/doc/rustc-dev-guide/src/tests/suggest-tests.md b/src/doc/rustc-dev-guide/src/tests/suggest-tests.md
index 4ab945c0c7416..663e8a5af3b9e 100644
--- a/src/doc/rustc-dev-guide/src/tests/suggest-tests.md
+++ b/src/doc/rustc-dev-guide/src/tests/suggest-tests.md
@@ -1,36 +1,39 @@
# Suggest tests tool
This chapter is about the internals of and contribution instructions for the
-`suggest-tests` tool. For a high-level overview of the tool, see
-[this section](../building/suggested.md#x-suggest). This tool is currently in a
-beta state and is tracked by [this](https://github.com/rust-lang/rust/issues/109933)
+`suggest-tests` tool. For a high-level overview of the tool, see [this
+section](../building/suggested.md#x-suggest). This tool is currently in a beta
+state and is tracked by [this](https://github.com/rust-lang/rust/issues/109933)
issue on GitHub. Currently, the number of tests it will suggest is very limited
in scope; we are looking to expand this (contributions welcome!).
## Internals
-The tool is defined in a separate crate ([`src/tools/suggest-tests`](https://github.com/rust-lang/rust/blob/master/src/tools/suggest-tests))
+The tool is defined in a separate crate
+([`src/tools/suggest-tests`](https://github.com/rust-lang/rust/blob/master/src/tools/suggest-tests))
which outputs suggestions which are parsed by a shim in bootstrap
([`src/bootstrap/src/core/build_steps/suggest.rs`](https://github.com/rust-lang/rust/blob/master/src/bootstrap/src/core/build_steps/suggest.rs)).
-The only notable thing the bootstrap shim does is (when invoked with the
-`--run` flag) use bootstrap's internal mechanisms to create a new `Builder` and
-uses it to invoke the suggested commands. The `suggest-tests` crate is where the
-fun happens, two kinds of suggestions are defined: "static" and "dynamic"
+The only notable thing the bootstrap shim does is (when invoked with the `--run`
+flag) use bootstrap's internal mechanisms to create a new `Builder` and use it
+to invoke the suggested commands. The `suggest-tests` crate is where the fun
+happens: two kinds of suggestions are defined, "static" and "dynamic"
suggestions.
### Static suggestions
-Defined [here](https://github.com/rust-lang/rust/blob/master/src/tools/suggest-tests/src/static_suggestions.rs).
-Static suggestions are simple: they are just [globs](https://crates.io/crates/glob)
-which map to a `x` command. In `suggest-tests`, this is implemented with a
-simple `macro_rules` macro.
+Defined
+[here](https://github.com/rust-lang/rust/blob/master/src/tools/suggest-tests/src/static_suggestions.rs).
+Static suggestions are simple: they are just
+[globs](https://crates.io/crates/glob) which map to a `x` command. In
+`suggest-tests`, this is implemented with a simple `macro_rules` macro.
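+
+Conceptually, a static suggestion is just a glob pattern paired with the `x`
+command to suggest when a modified file matches it. A purely illustrative
+sketch (not the actual macro or data structure used by `suggest-tests`):
+
+```rust,ignore
+// Hypothetical mapping from glob patterns to suggested `x` commands.
+const STATIC_SUGGESTIONS: &[(&str, &str)] = &[
+    ("*.md", "test tidy"),
+    ("compiler/**", "check compiler"),
+];
+```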
### Dynamic suggestions
-Defined [here](https://github.com/rust-lang/rust/blob/master/src/tools/suggest-tests/src/dynamic_suggestions.rs).
+Defined
+[here](https://github.com/rust-lang/rust/blob/master/src/tools/suggest-tests/src/dynamic_suggestions.rs).
These are more complicated than static suggestions and are implemented as
-functions with the following signature: `fn(&Path) -> Vec`. In
-other words, each suggestion takes a path to a modified file and (after running
+functions with the following signature: `fn(&Path) -> Vec`. In other
+words, each suggestion takes a path to a modified file and (after running
arbitrary Rust code) can return any number of suggestions, or none. Dynamic
suggestions are useful for situations where fine-grained control over
suggestions is needed. For example, modifications to the `compiler/xyz/` path
@@ -43,13 +46,14 @@ run.
The following steps should serve as a rough guide to add suggestions to
`suggest-tests` (very welcome!):
-1. Determine the rules for your suggestion. Is it simple and operates only on
- a single path or does it match globs? Does it need fine-grained control over
+1. Determine the rules for your suggestion. Is it simple, operating only on a
+   single path, or does it match globs? Does it need fine-grained control over
the resulting command or does "one size fit all"?
2. Based on the previous step, decide if your suggestion should be implemented
as either static or dynamic.
3. Implement the suggestion. If it is dynamic then a test is highly recommended,
- to verify that your logic is correct and to give an example of the suggestion.
- See the [tests.rs](https://github.com/rust-lang/rust/blob/master/src/tools/suggest-tests/src/tests.rs)
+ to verify that your logic is correct and to give an example of the
+ suggestion. See the
+ [tests.rs](https://github.com/rust-lang/rust/blob/master/src/tools/suggest-tests/src/tests.rs)
file.
4. Open a PR implementing your suggestion. **(TODO: add example PR)**
diff --git a/src/doc/rustc-dev-guide/src/tests/ui.md b/src/doc/rustc-dev-guide/src/tests/ui.md
index 8939269b6ab0d..610af41e5969d 100644
--- a/src/doc/rustc-dev-guide/src/tests/ui.md
+++ b/src/doc/rustc-dev-guide/src/tests/ui.md
@@ -2,105 +2,98 @@
-UI tests are a particular [test suite](compiletest.md#test-suites) of compiletest.
+UI tests are a particular [test suite](compiletest.md#test-suites) of
+compiletest.
## Introduction
The tests in [`tests/ui`] are a collection of general-purpose tests which
primarily focus on validating the console output of the compiler, but can be
-used for many other purposes.
-For example, tests can also be configured to [run the resulting
-program](#controlling-passfail-expectations) to verify its behavior.
+used for many other purposes. For example, tests can also be configured to [run
+the resulting program](#controlling-passfail-expectations) to verify its
+behavior.
[`tests/ui`]: https://github.com/rust-lang/rust/blob/master/tests/ui
## General structure of a test
-A test consists of a Rust source file located anywhere in the `tests/ui` directory.
-For example, [`tests/ui/hello.rs`] is a basic hello-world test.
+A test consists of a Rust source file located anywhere in the `tests/ui`
+directory, though it should be placed in a suitable sub-directory. For example,
+[`tests/ui/hello.rs`] is a basic hello-world test.
-Compiletest will use `rustc` to compile the test, and compare the output
-against the expected output which is stored in a `.stdout` or `.stderr` file
-located next to the test.
-See [Output comparison](#output-comparison) for more.
+Compiletest will use `rustc` to compile the test, and compare the output against
+the expected output which is stored in a `.stdout` or `.stderr` file located
+next to the test. See [Output comparison](#output-comparison) for more.
-Additionally, errors and warnings should be annotated with comments within
-the source file.
-See [Error annotations](#error-annotations) for more.
+Additionally, errors and warnings should be annotated with comments within the
+source file. See [Error annotations](#error-annotations) for more.
-[Headers](headers.md) in the form of comments at the top of the file control
-how the test is compiled and what the expected behavior is. Note that tests in
-the "ui" test suite require the use of `//@ header-name` instead of
-`// header-name` like the other test suites do. The other test suites will be
-migrated to use the `//@` syntax too, but that is in progress. Additionally,
-`// ignore-tidy` and `// ignore-tidy-*` are ignored by compiletest when
-handling "ui" test suite tests (note that they are not `//@` directives).
+Compiletest [directives](directives.md) in the form of special comments prefixed
+with `//@` control how the test is compiled and what the expected behavior is.
Tests are expected to fail to compile, since most tests are testing compiler
-errors.
-You can change that behavior with a header, see [Controlling pass/fail
-expectations](#controlling-passfail-expectations).
+errors. You can change that behavior with a directive; see [Controlling
+pass/fail expectations](#controlling-passfail-expectations).
-By default, a test is built as an executable binary.
-If you need a different crate type, you can use the `#![crate_type]` attribute
-to set it as needed.
+By default, a test is built as an executable binary. If you need a different
+crate type, you can use the `#![crate_type]` attribute to set it as needed.
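+
+For example, a minimal sketch of a test built as a library instead of a binary:
+
+```rust,ignore
+//@ check-pass
+
+// Build this test as a library; there is no `main` function.
+#![crate_type = "lib"]
+
+pub fn exported() {}
+```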
[`tests/ui/hello.rs`]: https://github.com/rust-lang/rust/blob/master/tests/ui/hello.rs
## Output comparison
-UI tests store the expected output from the compiler in `.stderr` and
-`.stdout` files next to the test.
-You normally generate these files with the `--bless` CLI option, and then
-inspect them manually to verify they contain what you expect.
+UI tests store the expected output from the compiler in `.stderr` and `.stdout`
+snapshots next to the test. You normally generate these files with the `--bless`
+CLI option, and then inspect them manually to verify they contain what you
+expect.
The output is normalized to ignore unwanted differences; see the
-[Normalization](#normalization) section.
-If the file is missing, then compiletest expects the corresponding output to
-be empty.
+[Normalization](#normalization) section. If the file is missing, then
+compiletest expects the corresponding output to be empty.
-There can be multiple stdout/stderr files.
-The general form is:
+There can be multiple stdout/stderr files. The general form is:
+```text
*test-name*`.`*revision*`.`*compare_mode*`.`*extension*
+```
-* *test-name* cannot contain dots. This is so that the general form of test
+- *test-name* cannot contain dots. This is so that test output filenames have
  a predictable form that we can pattern-match on in order to track stray test
  output files.
-* *revision* is the [revision](#cfg-revisions) name.
- This is not included when not using revisions.
-* *compare_mode* is the [compare mode](#compare-modes).
- This will only be checked when the given compare mode is active.
- If the file does not exist, then compiletest will check for a file without
- the compare mode.
-* *extension* is the kind of output being checked:
- * `stderr` — compiler stderr
- * `stdout` — compiler stdout
- * `run.stderr` — stderr when running the test
- * `run.stdout` — stdout when running the test
- * `64bit.stderr` — compiler stderr with `stderr-per-bitwidth` header on a 64-bit target
- * `32bit.stderr` — compiler stderr with `stderr-per-bitwidth` header on a 32-bit target
+- *revision* is the [revision](#cfg-revisions) name. This is not included when
+ not using revisions.
+- *compare_mode* is the [compare mode](#compare-modes). This will only be
+ checked when the given compare mode is active. If the file does not exist,
+ then compiletest will check for a file without the compare mode.
+- *extension* is the kind of output being checked:
+ - `stderr` — compiler stderr
+ - `stdout` — compiler stdout
+ - `run.stderr` — stderr when running the test
+ - `run.stdout` — stdout when running the test
+  - `64bit.stderr` — compiler stderr with the `stderr-per-bitwidth` directive
+    on a 64-bit target
+  - `32bit.stderr` — compiler stderr with the `stderr-per-bitwidth` directive
+    on a 32-bit target
A simple example would be `foo.stderr` next to a `foo.rs` test.
A more complex example would be `foo.my-revision.polonius.stderr`.
-There are several [headers](headers.md) which will change how compiletest will
-check for output files:
+There are several [directives](directives.md) which change how compiletest
+checks for output files:
-* `stderr-per-bitwidth` — checks separate output files based on the target
- pointer width. Consider using the `normalize-stderr` header instead (see
+- `stderr-per-bitwidth` — checks separate output files based on the target
+ pointer width. Consider using the `normalize-stderr` directive instead (see
[Normalization](#normalization)).
-* `dont-check-compiler-stderr` — Ignores stderr from the compiler.
-* `dont-check-compiler-stdout` — Ignores stdout from the compiler.
-* `compare-output-lines-by-subset` — Checks that the output contains the
+- `dont-check-compiler-stderr` — Ignores stderr from the compiler.
+- `dont-check-compiler-stdout` — Ignores stdout from the compiler.
+- `compare-output-lines-by-subset` — Checks that the output contains the
  contents of the stored output files line by line, as opposed to checking for
  strict equality.
-UI tests run with `-Zdeduplicate-diagnostics=no` flag which disables
-rustc's built-in diagnostic deduplication mechanism.
-This means you may see some duplicate messages in the output.
-This helps illuminate situations where duplicate diagnostics are being
-generated.
+UI tests run with the `-Zdeduplicate-diagnostics=no` flag, which disables
+rustc's built-in diagnostic deduplication mechanism. This means you may see
+some duplicate messages in the output. This helps illuminate situations where
+duplicate diagnostics are being generated.
### Normalization
@@ -109,20 +102,20 @@ platforms, mainly about filenames.
Compiletest makes the following replacements on the compiler output:
-- The directory where the test is defined is replaced with `$DIR`.
- Example: `/path/to/rust/tests/ui/error-codes`
+- The directory where the test is defined is replaced with `$DIR`. Example:
+ `/path/to/rust/tests/ui/error-codes`
- The directory to the standard library source is replaced with `$SRC_DIR`.
Example: `/path/to/rust/library`
- Line and column numbers for paths in `$SRC_DIR` are replaced with `LL:COL`.
This helps ensure that changes to the layout of the standard library do not
- cause widespread changes to the `.stderr` files.
- Example: `$SRC_DIR/alloc/src/sync.rs:53:46`
-- The base directory where the test's output goes is replaced with `$TEST_BUILD_DIR`.
- This only comes up in a few rare circumstances.
- Example: `/path/to/rust/build/x86_64-unknown-linux-gnu/test/ui`
+ cause widespread changes to the `.stderr` files. Example:
+ `$SRC_DIR/alloc/src/sync.rs:53:46`
+- The base directory where the test's output goes is replaced with
+ `$TEST_BUILD_DIR`. This only comes up in a few rare circumstances. Example:
+ `/path/to/rust/build/x86_64-unknown-linux-gnu/test/ui`
- Tabs are replaced with `\t`.
-- Backslashes (`\`) are converted to forward slashes (`/`) within paths (using
- a heuristic). This helps normalize differences with Windows-style paths.
+- Backslashes (`\`) are converted to forward slashes (`/`) within paths (using a
+ heuristic). This helps normalize differences with Windows-style paths.
- CRLF newlines are converted to LF.
- Error line annotations like `//~ ERROR some message` are removed.
- Various v0 and legacy symbol hashes are replaced with placeholders like
@@ -131,20 +124,20 @@ Compiletest makes the following replacements on the compiler output:
Additionally, the compiler is run with the `-Z ui-testing` flag which causes
the compiler itself to apply some changes to the diagnostic output to make it
more suitable for UI testing.
+
For example, it will anonymize line numbers in the output (line numbers
-prefixing each source line are replaced with `LL`).
-In extremely rare situations, this mode can be disabled with the header
-command `//@ compile-flags: -Z ui-testing=no`.
+prefixing each source line are replaced with `LL`). In extremely rare
+situations, this mode can be disabled with the directive
+`//@ compile-flags: -Z ui-testing=no`.
-Note: The line and column numbers for `-->` lines pointing to the test are
-*not* normalized, and left as-is. This ensures that the compiler continues
-to point to the correct location, and keeps the stderr files readable.
-Ideally all line/column information would be retained, but small changes to
-the source causes large diffs, and more frequent merge conflicts and test
-errors.
+Note: The line and column numbers for `-->` lines pointing to the test are *not*
+normalized, and left as-is. This ensures that the compiler continues to point to
+the correct location, and keeps the stderr files readable. Ideally all
+line/column information would be retained, but small changes to the source
+cause large diffs, more frequent merge conflicts, and test errors.
-Sometimes these built-in normalizations are not enough. In such cases, you
-may provide custom normalization rules using the header commands, e.g.
+Sometimes these built-in normalizations are not enough. In such cases, you may
+provide custom normalization rules using `normalize-*` directives, e.g.
```rust,ignore
//@ normalize-stdout-test: "foo" -> "bar"
@@ -152,10 +145,10 @@ may provide custom normalization rules using the header commands, e.g.
//@ normalize-stderr-64bit: "fn\(\) \(64 bits\)" -> "fn\(\) \($$PTR bits\)"
```
-This tells the test, on 32-bit platforms, whenever the compiler writes
-`fn() (32 bits)` to stderr, it should be normalized to read `fn() ($PTR bits)`
-instead. Similar for 64-bit. The replacement is performed by regexes using
-default regex flavor provided by `regex` crate.
+This tells the test that, on 32-bit platforms, whenever the compiler writes
+`fn() (32 bits)` to stderr, it should be normalized to read `fn() ($PTR bits)`
+instead. The same applies for 64-bit. The replacement is performed by regexes
+using the default regex flavor provided by the `regex` crate.
The corresponding reference file will use the normalized output to test both
32-bit and 64-bit platforms:
@@ -168,22 +161,21 @@ The corresponding reference file will use the normalized output to test both
...
```
-Please see [`ui/transmute/main.rs`][mrs] and [`main.stderr`] for a
-concrete usage example.
+Please see [`ui/transmute/main.rs`][mrs] and [`main.stderr`] for a concrete
+usage example.
[mrs]: https://github.com/rust-lang/rust/blob/master/tests/ui/transmute/main.rs
[`main.stderr`]: https://github.com/rust-lang/rust/blob/master/tests/ui/transmute/main.stderr
Besides `normalize-stderr-32bit` and `-64bit`, one may use any target
-information or stage supported by [`ignore-X`](headers.md#ignoring-tests)
-here as well (e.g. `normalize-stderr-windows` or simply
-`normalize-stderr-test` for unconditional replacement).
-
+information or stage supported by [`ignore-X`](directives.md#ignoring-tests) here
+as well (e.g. `normalize-stderr-windows` or simply `normalize-stderr-test` for
+unconditional replacement).
## Error annotations
-Error annotations specify the errors that the compiler is expected to emit.
-They are "attached" to the line in source where the error is located.
+Error annotations specify the errors that the compiler is expected to emit. They
+are "attached" to the line in source where the error is located.
```rust,ignore
fn main() {
@@ -191,41 +183,46 @@ fn main() {
}
```
-Although UI tests have a `.stderr` file which contains the entire compiler output,
-UI tests require that errors are also annotated within the source.
-This redundancy helps avoid mistakes since the `.stderr` files are usually
-auto-generated.
-It also helps to directly see where the error spans are expected to point to
-by looking at one file instead of having to compare the `.stderr` file with
-the source.
-Finally, they ensure that no additional unexpected errors are generated.
-
-They have several forms, but generally are a comment with the diagnostic
-level (such as `ERROR`) and a substring of the expected error output.
-You don't have to write out the entire message, just make sure to include the
-important part of the message to make it self-documenting.
-
-The error annotation needs to match with the line of the diagnostic.
-There are several ways to match the message with the line (see the examples below):
-
-* `~`: Associates the error level and message with the current line
-* `~^`: Associates the error level and message with the previous error
- annotation line.
- Each caret (`^`) that you add adds a line to this, so `~^^^` is three lines
- above the error annotation line.
-* `~|`: Associates the error level and message with the same line as the
- previous comment.
- This is more convenient than using multiple carets when there are multiple
- messages associated with the same line.
-
-The space character between `//~` (or other variants) and the subsequent text
-is negligible (i.e. there is no semantic difference between `//~ ERROR` and
+Although UI tests have a `.stderr` file which contains the entire compiler
+output, they require that errors are also annotated within the source. This
+redundancy helps avoid mistakes since the `.stderr` files are usually
+auto-generated. It also helps to directly see where the error spans are expected
+to point to by looking at one file instead of having to compare the `.stderr`
+file with the source. Finally, they ensure that no additional unexpected errors
+are generated.
+
+They have several forms, but generally are a comment with the diagnostic level
+(such as `ERROR`) and a substring of the expected error output. You don't have
+to write out the entire message, just make sure to include the important part of
+the message to make it self-documenting.
+
+The error annotation needs to match the line of the diagnostic. There are
+several ways to match the message with the line (see the examples below):
+
+* `~`: Associates the error level and message with the *current* line.
+* `~^`: Associates the error level and message with the *previous* error
+ annotation line. Each caret (`^`) that you add adds a line to this, so `~^^^`
+ is three lines above the error annotation line.
+* `~|`: Associates the error level and message with the *same* line as the
+ *previous comment*. This is more convenient than using multiple carets when
+ there are multiple messages associated with the same line.
+
+Example:
+
+```rust,ignore
+let _ = same_line; //~ ERROR undeclared variable
+fn meow(_: [u8]) {}
+//~^ ERROR unsized
+//~| ERROR anonymous parameters
+```
+
+The space character between `//~` (or other variants) and the subsequent text
+is not significant (i.e. there is no semantic difference between `//~ ERROR` and
`//~ERROR` although the former is more common in the codebase).
### Error annotation examples
-Here are examples of error annotations on different lines of UI test
-source.
+Here are examples of error annotations on different lines of UI test source.
#### Positioned on error line
@@ -243,10 +240,9 @@ fn main() {
#### Positioned below error line
-Use the `//~^` idiom with number of carets in the string to indicate the
-number of lines above.
-In the example below, the error line is four lines above the error annotation
-line so four carets are included in the annotation.
+Use the `//~^` idiom with the number of carets in the string indicating the
+number of lines above. In the example below, the error line is four lines above
+the error annotation line, so four carets are included in the annotation.
```rust,ignore
fn main() {
@@ -280,8 +276,8 @@ fn main() {
### `error-pattern`
-The `error-pattern` [header](headers.md) can be used for
-messages that don't have a specific span.
+The `error-pattern` [directive](directives.md) can be used for messages that don't
+have a specific span.
Let's think about this test:
@@ -294,9 +290,9 @@ fn main() {
}
```
-We want to ensure this shows "index out of bounds" but we cannot use the
-`ERROR` annotation since the error doesn't have any span.
-Then it's time to use the `error-pattern` header:
+We want to ensure this shows "index out of bounds" but we cannot use the `ERROR`
+annotation since the error doesn't have any span. Then it's time to use the
+`error-pattern` directive:
```rust,ignore
//@ error-pattern: index out of bounds
@@ -314,33 +310,32 @@ But for strict testing, try to use the `ERROR` annotation as much as possible.
The error levels that you can have are:
-1. `ERROR`
-2. `WARN` or `WARNING`
-3. `NOTE`
-4. `HELP` and `SUGGESTION`
+- `ERROR`
+- `WARN` or `WARNING`
+- `NOTE`
+- `HELP` and `SUGGESTION`
You are allowed to not include a level, but you should include it at least for
the primary message.
-The `SUGGESTION` level is used for specifying what the expected replacement
-text should be for a diagnostic suggestion.
+The `SUGGESTION` level is used for specifying what the expected replacement text
+should be for a diagnostic suggestion.
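+
+For example, a sketch of annotating a suggestion (the lint setup and the exact
+messages here are illustrative):
+
+```rust,ignore
+#![deny(non_camel_case_types)]
+
+pub struct not_camel_case {}
+//~^ ERROR should have an upper camel case name
+//~| HELP convert the identifier to upper camel case
+//~| SUGGESTION NotCamelCase
+```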
UI tests use the `-A unused` flag by default to ignore all unused warnings, as
-unused warnings are usually not the focus of a test.
-However, simple code samples often have unused warnings.
-If the test is specifically testing an unused warning, just add the
-appropriate `#![warn(unused)]` attribute as needed.
+unused warnings are usually not the focus of a test. However, simple code
+samples often have unused warnings. If the test is specifically testing an
+unused warning, just add the appropriate `#![warn(unused)]` attribute as needed.
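+
+A minimal sketch of such a test (the exact warning text may differ):
+
+```rust,ignore
+//@ check-pass
+
+#![warn(unused)]
+
+fn main() {
+    let x = 42; //~ WARN unused variable
+}
+```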
-### cfg revisions
+### `cfg` revisions
When using [revisions](compiletest.md#revisions), different messages can be
-conditionally checked based on the current revision.
-This is done by placing the revision cfg name in brackets like this:
+conditionally checked based on the current revision. This is done by placing the
+revision cfg name in brackets like this:
```rust,ignore
//@ edition:2018
//@ revisions: mir thir
-//@ [thir]compile-flags: -Z thir-unsafeck
+//@[thir] compile-flags: -Z thir-unsafeck
async unsafe fn f() {}
@@ -356,10 +351,10 @@ fn main() {
In this example, the second error message is only emitted in the `mir` revision.
The `thir` revision only emits the first error.
-If the cfg causes the compiler to emit different output, then a test can have
-multiple `.stderr` files for the different outputs.
-In the example above, there would be a `.mir.stderr` and `.thir.stderr` file
-with the different outputs of the different revisions.
+If the `cfg` causes the compiler to emit different output, then a test can have
+multiple `.stderr` files for the different outputs. In the example above, there
+would be a `.mir.stderr` and `.thir.stderr` file with the different outputs of
+the different revisions.
> Note: cfg revisions also work inside the source code with `#[cfg]` attributes.
>
@@ -368,113 +363,108 @@ with the different outputs of the different revisions.
## Controlling pass/fail expectations
By default, a UI test is expected to **generate a compile error** because most
-of the tests are checking for invalid input and error diagnostics.
-However, you can also make UI tests where compilation is expected to succeed,
-and you can even run the resulting program.
-Just add one of the following [header commands](headers.md):
+of the tests are checking for invalid input and error diagnostics. However, you
+can also make UI tests where compilation is expected to succeed, and you can
+even run the resulting program. Just add one of the following
+[directives](directives.md) (a small example follows the list):
-* Pass headers:
- * `//@ check-pass` — compilation should succeed but skip codegen
+- Pass directives:
+ - `//@ check-pass` — compilation should succeed but skip codegen
(which is expensive and isn't supposed to fail in most cases).
- * `//@ build-pass` — compilation and linking should succeed but do
+ - `//@ build-pass` — compilation and linking should succeed but do
not run the resulting binary.
- * `//@ run-pass` — compilation should succeed and running the resulting
+ - `//@ run-pass` — compilation should succeed and running the resulting
binary should also succeed.
-* Fail headers:
- * `//@ check-fail` — compilation should fail (the codegen phase is skipped).
+- Fail directives:
+ - `//@ check-fail` — compilation should fail (the codegen phase is skipped).
This is the default for UI tests.
- * `//@ build-fail` — compilation should fail during the codegen phase.
+ - `//@ build-fail` — compilation should fail during the codegen phase.
This will run `rustc` twice, once to verify that it compiles successfully
without the codegen phase, then a second time the full compile should
fail.
- * `//@ run-fail` — compilation should succeed, but running the resulting
+ - `//@ run-fail` — compilation should succeed, but running the resulting
binary should fail.
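+
+For example, a minimal `run-pass` test:
+
+```rust,ignore
+//@ run-pass
+
+// Compilation must succeed and the resulting binary must exit successfully.
+fn main() {
+    assert_eq!(2 + 2, 4);
+}
+```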
-For `run-pass` and `run-fail` tests, by default the output of the program
-itself is not checked.
+For `run-pass` and `run-fail` tests, by default the output of the program itself
+is not checked.
+
If you want to check the output of running the program, include the
-`check-run-results` header.
-This will check for a `.run.stderr` and `.run.stdout` files to compare
-against the actual output of the program.
+`check-run-results` directive. This will check for `.run.stderr` and
+`.run.stdout` files to compare against the actual output of the program.
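+
+A sketch of a test that checks its own output (the printed text is arbitrary):
+
+```rust,ignore
+//@ run-pass
+//@ check-run-results
+
+fn main() {
+    // Compared against the `.run.stdout` snapshot next to the test.
+    println!("hello from the test");
+}
+```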
-Tests with the `*-pass` headers can be overridden with the `--pass`
+Tests with the `*-pass` directives can be overridden with the `--pass`
command-line option:
```sh
./x test tests/ui --pass check
```
-The `--pass` option only affects UI tests.
-Using `--pass check` can run the UI test suite much faster (roughly twice as
-fast on my system), though obviously not exercising as much.
+The `--pass` option only affects UI tests. Using `--pass check` can run the UI
+test suite much faster (roughly twice as fast on my system), though obviously
+not exercising as much.
-The `ignore-pass` header can be used to ignore the `--pass` CLI flag if the
+The `ignore-pass` directive can be used to ignore the `--pass` CLI flag if the
test won't work properly with that override.
## Known bugs
-The `known-bug` header may be used for tests that demonstrate a known bug that
-has not yet been fixed.
-Adding tests for known bugs is helpful for several reasons, including:
+The `known-bug` directive may be used for tests that demonstrate a known bug
+that has not yet been fixed. Adding tests for known bugs is helpful for several
+reasons, including:
-1. Maintaining a functional test that can be conveniently reused when the bug is fixed.
-2. Providing a sentinel that will fail if the bug is incidentally fixed.
- This can alert the developer so they know that the associated issue has
- been fixed and can possibly be closed.
+1. Maintaining a functional test that can be conveniently reused when the bug is
+ fixed.
+2. Providing a sentinel that will fail if the bug is incidentally fixed. This
+ can alert the developer so they know that the associated issue has been fixed
+ and can possibly be closed.
-Do not include [error annotations](#error-annotations) in a test with `known-bug`.
-The test should still include other normal headers and stdout/stderr files.
+Do not include [error annotations](#error-annotations) in a test with
+`known-bug`. The test should still include other normal directives and
+stdout/stderr files.
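+
+For example, a sketch of a `known-bug` test (the issue number is hypothetical):
+
+```rust,ignore
+//@ known-bug: #12345
+
+// Code that demonstrates the bug goes here, with no error annotations.
+fn main() {}
+```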
## Test organization
-When deciding where to place a test file, please try to find a subdirectory
-that best matches what you are trying to exercise.
-Do your best to keep things organized.
-Admittedly it can be difficult as some tests can overlap different categories,
-and the existing layout may not fit well.
-
-For regression tests – basically, some random snippet of code that came in
-from the internet – we often name the test after the issue plus a short
-description.
-Ideally, the test should be added to a directory that helps identify what
-piece of code is being tested here (e.g.,
-`tests/ui/borrowck/issue-54597-reject-move-out-of-borrow-via-pat.rs`)
-
-When writing a new feature, **create a subdirectory to store your tests**.
-For example, if you are implementing RFC 1234 ("Widgets"), then it might make
-sense to put the tests in a directory like `tests/ui/rfc1234-widgets/`.
-
-In other cases, there may already be a suitable directory. (The proper
-directory structure to use is actually an area of active debate.)
-
-Over time, the [`tests/ui`] directory has grown very fast.
-There is a check in [tidy](intro.md#tidy) that will ensure none of the
-subdirectories has more than 1000 entries.
-Having too many files causes problems because it isn't editor/IDE friendly and
-the GitHub UI won't show more than 1000 entries.
-However, since `tests/ui` (UI test root directory) and `tests/ui/issues`
-directories have more than 1000 entries, we set a different limit for those
-directories.
-So, please avoid putting a new test there and try to find a more relevant
-place.
+When deciding where to place a test file, please try to find a subdirectory that
+best matches what you are trying to exercise. Do your best to keep things
+organized. Admittedly it can be difficult as some tests can overlap different
+categories, and the existing layout may not fit well.
+
+Name the test by a concise description of what the test is checking. Avoid
+including the issue number in the test name. See [best
+practices](best-practices.md) for a more in-depth discussion of this.
+
+Ideally, the test should be added to a directory that helps identify what piece
+of code is being tested here (e.g.,
+`tests/ui/borrowck/reject-move-out-of-borrow-via-pat.rs`)
+
+When writing a new feature, you may want to **create a subdirectory to store
+your tests**. For example, if you are implementing RFC 1234 ("Widgets"), then it
+might make sense to put the tests in a directory like
+`tests/ui/rfc1234-widgets/`.
+
+In other cases, there may already be a suitable directory.
+
+Over time, the [`tests/ui`] directory has grown very fast. There is a check in
+[tidy](intro.md#tidy) that will ensure none of the subdirectories has more than
+1000 entries. Having too many files causes problems because it isn't editor/IDE
+friendly and the GitHub UI won't show more than 1000 entries. However, since
+`tests/ui` (UI test root directory) and `tests/ui/issues` directories have more
+than 1000 entries, we set a different limit for those directories. So, please
+avoid putting a new test there and try to find a more relevant place.
For example, if your test is related to closures, you should put it in
-`tests/ui/closures`.
-If you're not sure where is the best place, it's still okay to add to
-`tests/ui/issues/`.
-When you reach the limit, you could increase it by tweaking [here][ui test
-tidy].
+`tests/ui/closures`. When you reach the limit, you can increase it by tweaking
+the value [here][ui test tidy].
[ui test tidy]: https://github.com/rust-lang/rust/blob/master/src/tools/tidy/src/ui_tests.rs
-
## Rustfix tests
-UI tests can validate that diagnostic suggestions apply correctly
-and that the resulting changes compile correctly.
-This can be done with the `run-rustfix` header:
+UI tests can validate that diagnostic suggestions apply correctly and that the
+resulting changes compile correctly. This can be done with the `run-rustfix`
+directive:
```rust,ignore
//@ run-rustfix
@@ -487,40 +477,40 @@ pub struct not_camel_case {}
//~| SUGGESTION NotCamelCase
```
-Rustfix tests should have a file with the `.fixed` extension which contains
-the source file after the suggestion has been applied.
+Rustfix tests should have a file with the `.fixed` extension which contains the
+source file after the suggestion has been applied.
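+
+For the rustfix example above, the corresponding `.fixed` file would contain the
+source after the rename is applied, roughly like this (a sketch showing only the
+renamed item; in practice the directive and annotation comments remain in the
+file):
+
+```rust,ignore
+//@ run-rustfix
+
+pub struct NotCamelCase {}
+```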
-When the test is run, compiletest first checks that the correct
-lint/warning is generated.
-Then, it applies the suggestion and compares against `.fixed` (they must match).
-Finally, the fixed source is compiled, and this compilation is required to succeed.
+- When the test is run, compiletest first checks that the correct lint/warning
+ is generated.
+- Then, it applies the suggestion and compares against `.fixed` (they must
+ match).
+- Finally, the fixed source is compiled, and this compilation is required to
+ succeed.
Usually when creating a rustfix test you will generate the `.fixed` file
automatically with the `x test --bless` option.
-The `run-rustfix` header will cause *all* suggestions to be applied, even
-if they are not [`MachineApplicable`](../diagnostics.md#suggestions).
-If this is a problem, then you can add the `rustfix-only-machine-applicable`
-header in addition to `run-rustfix`.
-This should be used if there is a mixture of different suggestion levels, and
-some of the non-machine-applicable ones do not apply cleanly.
+The `run-rustfix` directive will cause *all* suggestions to be applied, even if
+they are not [`MachineApplicable`](../diagnostics.md#suggestions). If this is a
+problem, then you can add the `rustfix-only-machine-applicable` directive in
+addition to `run-rustfix`. This should be used if there is a mixture of
+different suggestion levels, and some of the non-machine-applicable ones do not
+apply cleanly.
## Compare modes
-[Compare modes](compiletest.md#compare-modes) can be used to run all tests
-with different flags from what they are normally compiled with.
-In some cases, this might result in different output from the compiler.
-To support this, different output files can be saved which contain the
-output based on the compare mode.
+[Compare modes](compiletest.md#compare-modes) can be used to run all tests with
+different flags from what they are normally compiled with. In some cases, this
+might result in different output from the compiler. To support this, different
+output files can be saved which contain the output based on the compare mode.
-For example, when using the Polonius mode, a test `foo.rs` will
-first look for expected output in `foo.polonius.stderr`, falling back to the usual
-`foo.stderr` if not found.
-This is useful as different modes can sometimes result in different
-diagnostics and behavior.
-This can help track which tests have differences between the modes, and to
-visually inspect those diagnostic differences.
+For example, when using the Polonius mode, a test `foo.rs` will first look for
+expected output in `foo.polonius.stderr`, falling back to the usual `foo.stderr`
+if not found. This is useful as different modes can sometimes result in
+different diagnostics and behavior. This can help track which tests have
+differences between the modes, and to visually inspect those diagnostic
+differences.
In the rare case that you encounter a test with different behavior, you can
run something like the following to generate the alternate stderr file:
@@ -533,13 +523,15 @@ Currently none of the compare modes are checked in CI for UI tests.
## `rustc_*` TEST attributes
-The compiler defines several perma-unstable `#[rustc_*]` attributes gated behind the internal feature
-`rustc_attrs` that dump extra compiler-internal information. See the corresponding subsection in
-[compiler debugging] for more details.
+The compiler defines several perma-unstable `#[rustc_*]` attributes gated behind
+the internal feature `rustc_attrs` that dump extra compiler-internal
+information. See the corresponding subsection in [compiler debugging] for more
+details.
-They can be used in tests to more precisely, legibly and easily test internal compiler state in cases
-where it would otherwise be very hard to do the same with "user-facing" Rust alone. Indeed, one could
-say that this slightly abuses the term "UI" (*user* interface) and turns such UI tests from black-box
-tests into white-box ones. Use them carefully and sparingly.
+They can be used in tests to more precisely, legibly, and easily test internal
+compiler state in cases where it would otherwise be very hard to do the same
+with "user-facing" Rust alone. Indeed, one could say that this slightly abuses
+the term "UI" (*user* interface) and turns such UI tests from black-box tests
+into white-box ones. Use them carefully and sparingly.
[compiler debugging]: ../compiler-debugging.md#rustc_test-attributes