We can use the pytest-split pytest plugin (or an alternative?) to split long-running integration test suites into groups and run the groups in parallel to speed up CI times.
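A rough sketch of how this could look in a GitHub Actions matrix, assuming pytest-split's documented `--splits`/`--group`/`--durations-path` flags. Job names, the test path, and the durations file location are illustrative, not taken from the existing workflow:

```yaml
# Hypothetical workflow fragment (names and paths are illustrative)
jobs:
  integration:
    strategy:
      matrix:
        python-version: ["3.8", "3.11"]
        split-group: [1, 2, 3, 4]
    steps:
      - run: pip install pytest pytest-split
      # .test_durations would be produced ahead of time with
      # `pytest --store-durations` and committed to the repo.
      - run: >
          pytest tests/integration
          --splits 4
          --group ${{ matrix.split-group }}
          --durations-path .test_durations
```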
Considerations

- Flaky tests: the draft surfaced some tests that depend on other tests for their success. We should aim to detect and resolve as many of these as possible before introducing test groups; otherwise more flakiness may surface in unrelated PRs.
- More GHA jobs => more email clutter when things fail.
- Result aggregation: integration test results are currently uploaded in a unified form (all tests for a python-version/os combination) at the end of the workflow. How is this data used? Is it feasible to stitch separate artifacts (one per python-version/os/group) together into a unified view? Using dbt?
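For intuition on the grouping itself, here is a simplified sketch of duration-based splitting in the spirit of pytest-split's "least duration" algorithm (not its exact implementation): assign each test to the group with the smallest running total so group wall-clock times stay roughly balanced. The test ids and durations below are made up for illustration.

```python
import heapq

def split_by_duration(durations: dict[str, float], splits: int) -> list[list[str]]:
    """Partition test ids into `splits` groups with roughly equal total duration."""
    # Min-heap of (total_duration_so_far, group_index); pop the lightest group.
    heap = [(0.0, i) for i in range(splits)]
    heapq.heapify(heap)
    groups: list[list[str]] = [[] for _ in range(splits)]
    # Assigning the longest tests first gives a better greedy balance.
    for test_id, duration in sorted(durations.items(), key=lambda kv: -kv[1]):
        total, idx = heapq.heappop(heap)
        groups[idx].append(test_id)
        heapq.heappush(heap, (total + duration, idx))
    return groups

# Hypothetical recorded durations (seconds), as --store-durations would produce.
durations = {
    "test_models.py::test_full_refresh": 120.0,
    "test_models.py::test_incremental": 90.0,
    "test_seeds.py::test_seed": 30.0,
    "test_snapshots.py::test_snapshot": 60.0,
}
# With 2 splits, both groups total 150.0 seconds here.
print(split_by_duration(durations, 2))
```

Note that balancing by recorded durations is exactly why flaky order-dependent tests matter: the splitter is free to place tests in any group, so hidden inter-test dependencies break.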
Draft example for how this could look: #6346