Change target ci-runner to gh-hosted runner for unittests #2847
Conversation
AFAIK, there is no unit test that needs a GPU device, and I don't think there will be in the future. So, do we need to run our unit tests on the T4 node?
If you can confirm that none of the unit tests require a GPU, we can target the GitHub-hosted runner.
I found two cases:

======================================================================== short test summary info ========================================================================
FAILED tests/unit/algo/segmentation/heads/test_class_incremental_mixin.py::TestClassIncrementalMixin::test_ignore_label - RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.l...
FAILED tests/unit/algo/visual_prompting/encoders/test_sam_prompt_encoder.py::TestSAMPromptEncoder::test_get_device[cuda] - RuntimeError: No CUDA GPUs are available
======================================================== 2 failed, 370 passed, 1 skipped, 53 warnings in 38.90s =========================================================
(otx-v2) vinnamki@vinnamki:~/otx/training_extensions$
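Both failures come from tests that assume a CUDA device exists. A common way to make such tests pass on CPU-only runners is to skip them when CUDA is unavailable (and, for the deserialization case, to load checkpoints with `map_location="cpu"`). The sketch below is illustrative, not the repository's actual test code; the class and test names are assumptions, and it uses only the standard library plus an optional `torch` import.

```python
# Hypothetical sketch: gate GPU-dependent unit tests so they are skipped on
# CPU-only machines such as GitHub-hosted runners. Names are illustrative.
import unittest

try:
    import torch
    CUDA_AVAILABLE = torch.cuda.is_available()
except ImportError:  # torch absent entirely, e.g. a docs-only CI job
    torch = None
    CUDA_AVAILABLE = False


class TestDevicePlacement(unittest.TestCase):
    @unittest.skipUnless(CUDA_AVAILABLE, "requires a CUDA device")
    def test_get_device_cuda(self):
        # Only runs on machines that actually expose a CUDA device.
        self.assertTrue(torch.zeros(1).cuda().is_cuda)
```

For the first failure, passing `map_location="cpu"` to `torch.load` when deserializing a GPU-saved checkpoint on a CPU-only machine avoids the `RuntimeError` shown in the log.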
Signed-off-by: Kim, Vinnam <vinnam.kim@intel.com>
Summary
To distribute CI workloads across all available resources, the v2 unit tests will be targeted to run on the GitHub-hosted runner.
The integration tests require more GPU memory (>15 GB), so they will keep using the dedicated ci-runner (RTX 3090) and will be re-targeted to an AWS runner once GPU instances with a suitable memory size are available.
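The split described above could look roughly like the following GitHub Actions workflow fragment. This is a hypothetical sketch, not the repository's actual configuration: the job names, runner labels, and `tox` commands are all assumptions.

```yaml
# Hypothetical workflow fragment illustrating the runner split.
jobs:
  unit-test:
    # CPU-only unit tests can run on a GitHub-hosted runner.
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: tox -e unit-test  # assumed command

  integration-test:
    # Needs >15 GB of GPU memory, so it stays on the dedicated self-hosted runner.
    runs-on: [self-hosted, linux]
    steps:
      - uses: actions/checkout@v4
      - run: tox -e integration-test  # assumed command
```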
How to test
Checklist
License
Feel free to contact the maintainers if that's a concern.