no_cuda does not take effect in a non-distributed environment
Signed-off-by: Wang, Yi <yi.a.wang@intel.com>
sywangyi committed May 26, 2023
1 parent d61d747 commit 605716d
Showing 1 changed file with 3 additions and 1 deletion.
src/transformers/training_args.py
@@ -1684,7 +1684,9 @@ def _setup_devices(self) -> "torch.device":
                 )
                 device = torch.device("mps")
                 self._n_gpu = 1
-
+        elif self.no_cuda:
+            device = torch.device("cpu")
+            self._n_gpu = 0
         else:
             # if n_gpu is > 1 we'll use nn.DataParallel.
             # If you only want to use a specific subset of GPUs use `CUDA_VISIBLE_DEVICES=0`
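
Before this fix, a run that was not launched in a distributed setting fell through to the GPU-selection branch of _setup_devices even when no_cuda=True; the new elif self.no_cuda branch forces a CPU device and zero GPUs. A minimal sketch of the resulting behavior (the output_dir value is illustrative, and a transformers build containing this commit is assumed):

    # Sketch: with this fix, no_cuda=True yields a CPU device even when
    # torch.distributed is not initialized.
    from transformers import TrainingArguments

    args = TrainingArguments(output_dir="out", no_cuda=True)

    # Accessing TrainingArguments.device invokes _setup_devices internally.
    print(args.device)  # expected: device(type='cpu')
    print(args.n_gpu)   # expected: 0

Previously, the same arguments on a GPU machine would still select a CUDA device, which is the behavior the commit title describes.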
