ValueError: FP16 Mixed precision training with AMP or APEX (--fp16) and FP16 half precision evaluation (--fp16_full_eval) can only be used on CUDA devices
#24
Open · chintan-donda opened this issue on Jun 13, 2023 · 1 comment
I'm getting the error below when trying to fine-tune the model.
Converted as Half.
trainable params: 8355840 || all params: 1075691520 || trainable%: 0.7767877541695225
Found cached dataset json (/home/users/users/.cache/huggingface/datasets/json/default-7089e4ef944c023b/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4)
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 21.48it/s]
Loading cached split indices for dataset at /home/users/users/.cache/huggingface/datasets/json/default-7089e4ef944c023b/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4/cache-a03d095090258b35.arrow and /home/users/users/.cache/huggingface/datasets/json/default-7089e4ef944c023b/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4/cache-f83f741993333274.arrow
Run eval every 6 steps
Found safetensors installation, but --save_safetensors=False. Safetensors should be a preferred weights saving format due to security and performance reasons. If your model cannot be saved by safetensors please feel free to open an issue at https://github.com/huggingface/safetensors!
PyTorch: setting up devices
Traceback (most recent call last):
File "/home/users/users/falcontune/venv_falcontune/bin/falcontune", line 33, in <module>
sys.exit(load_entry_point('falcontune==0.1.0', 'console_scripts', 'falcontune')())
File "/home/users/users/falcontune/venv_falcontune/lib/python3.8/site-packages/falcontune-0.1.0-py3.8.egg/falcontune/run.py", line 87, in main
args.func(args)
File "/home/users/users/falcontune/venv_falcontune/lib/python3.8/site-packages/falcontune-0.1.0-py3.8.egg/falcontune/finetune.py", line 116, in finetune
training_arguments = transformers.TrainingArguments(
File "<string>", line 111, in __init__
File "/home/users/users/falcontune/venv_falcontune/lib/python3.8/site-packages/transformers/training_args.py", line 1338, in __post_init__
raise ValueError(
ValueError: FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation (`--fp16_full_eval`) can only be used on CUDA devices.
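For context: transformers raises this in TrainingArguments.__post_init__ when fp16 is requested but PyTorch reports no usable CUDA device, which usually points to a CPU-only torch build or a hidden GPU rather than a transformers bug. A minimal environment check (plain PyTorch, nothing falcontune-specific) would be:

import torch

print(torch.__version__)          # a "+cpu" suffix indicates a CPU-only build
print(torch.version.cuda)         # None on a CPU-only build
print(torch.cuda.is_available())  # must be True for --fp16 / --fp16_full_eval
print(torch.cuda.device_count())  # 0 if CUDA_VISIBLE_DEVICES hides the GPU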
Experimental setup details:
OS: Ubuntu 18.04.5 LTS
GPU: Tesla V100-SXM2-32GB
Libs:
Finetuning command:
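One workaround I considered (a sketch only, with illustrative names; I'm assuming falcontune's finetune.py can be patched where the traceback shows TrainingArguments being built) is to gate the fp16 flags on CUDA visibility, though if the V100 were genuinely visible this should be unnecessary:

import torch
import transformers

# Sketch: request fp16 only when PyTorch can actually see a CUDA device.
# "output" is a placeholder output directory, not falcontune's real default.
cuda_ok = torch.cuda.is_available()

training_arguments = transformers.TrainingArguments(
    output_dir="output",
    fp16=cuda_ok,            # mixed-precision training only on CUDA
    fp16_full_eval=cuda_ok,  # half-precision evaluation only on CUDA
)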
Any help please?