Is your feature request related to a problem? Please describe.
This repository contains all the nasty implementation logic for WebGPU, yet there are no tests or examples here that let us verify that the logic is consistent and that new features work. We currently recommend preparing a wgpu-rs PR draft so that implementors can test its examples against the new code. This unfortunately adds complexity to our workflow; we'd like to make it as simple as `cargo test` in wgpu.
Describe the solution you'd like
We could have something along the lines of Warden in gfx land - gfx-rs/gfx#1589
Roughly speaking, that would be a set of RON-based recordings with some metadata describing what the expectations are (beyond "should not error/panic"). For example, the contents of some texture at row X should be bytes A, B, C (see the Warden example).
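For illustration, a test-set file could look something like the following. The schema, file name, and variant names here are all made up just to show the idea; this is not an existing format:

```ron
// Hypothetical player/tests/sets/buffer_copy.ron (invented schema).
(
    tests: [
        (
            // Path of the recorded API trace, relative to the sets folder.
            trace: "../traces/buffer_copy",
            expectations: [
                // Row 0 of the "output" texture must read back as bytes A, B, C.
                TextureRow(texture: "output", row: 0, data: [0x0A, 0x0B, 0x0C]),
            ],
        ),
    ],
)
```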
The project layout could be the following:
`player` is turned from a binary app ("play") into a lib with a single binary. This means `cargo run` still works the same way. The logic of GPU operations goes into the lib; ideally, the window-handling logic would go into the binary.
A `player/tests` folder is added. Each Rust file in it would load a RON file describing a list of RON traces with the corresponding expectations (see the sketch after this list). Not sure where we should put the two kinds of RON files; one possible location is `player/tests/traces` and `player/tests/sets`, or something like that.
If a test fails to request an adapter and device, we issue a warning but do not fail the test. This allows the tests to run on any CI without GPUs.
As a result, `cargo test` should run a bunch of API replays on the GPU, exercising our logic.
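A test file in `player/tests` could then be a thin harness along these lines. This is only a rough sketch under the assumptions above: `TestSet`, `Expectation`, and `request_test_device` are invented stand-ins for whatever the player lib would actually expose (the device request is stubbed out here instead of going through wgpu-core), and it assumes serde and ron as dev-dependencies:

```rust
// Rough sketch of player/tests/buffer_copy.rs; all names are invented,
// only the overall flow (skip without a GPU, load the RON set, replay)
// reflects the proposal above.
use serde::Deserialize;

#[derive(Deserialize)]
#[allow(dead_code)]
struct TestSet {
    tests: Vec<TestCase>,
}

#[derive(Deserialize)]
#[allow(dead_code)]
struct TestCase {
    trace: String,
    expectations: Vec<Expectation>,
}

#[derive(Deserialize)]
#[allow(dead_code)]
enum Expectation {
    NoError,
    TextureRow { texture: String, row: u32, data: Vec<u8> },
}

// Stand-in for requesting an adapter/device through the player lib.
// Returning None models a CI machine without a GPU.
fn request_test_device() -> Option<()> {
    None
}

#[test]
fn buffer_copy() {
    // Warn and skip instead of failing when no adapter is available,
    // so `cargo test` still passes on CI machines without GPUs.
    let device = match request_test_device() {
        Some(device) => device,
        None => {
            eprintln!("warning: no suitable adapter, skipping GPU replays");
            return;
        }
    };

    // Load the RON set listing traces and their expectations.
    let text = std::fs::read_to_string("tests/sets/buffer_copy.ron")
        .expect("test set should exist");
    let set: TestSet = ron::de::from_str(&text).expect("valid RON test set");

    for case in &set.tests {
        // The real harness would replay `case.trace` through the player lib
        // on `device` and check every expectation against the readbacks.
        let _ = (&device, &case.trace, &case.expectations);
    }
}
```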
Describe alternatives you've considered
I don't think there is another solution that fully covers the need here. But generally there are solid infrastructures at different levels that can partially help, like:
the actual WebGPU CTS will be run either on the NodeJS bindings to wgpu-native, or on Gecko
Additional context
It's soft-blocked on #792
I wonder if we can easily extend this to test for errors, i.e. verify that a particular scenario leads to some specific error.
It's not required here, since we are going to be tested by the upstream WebGPU conformance test suite, which has that.
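If that ever becomes interesting, one possible shape (purely hypothetical, extending the invented Expectation enum from the sketch above) could be an error expectation, e.g.:

```rust
#[derive(serde::Deserialize)]
#[allow(dead_code)]
enum Expectation {
    NoError,
    TextureRow { texture: String, row: u32, data: Vec<u8> },
    // Hypothetical addition: replaying the trace must fail with a validation
    // error whose message contains the given substring.
    ValidationError { message_contains: String },
}
```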