Hi all,
I am trying to fine-tune falcon-40b-instruct but am running into the following error:
```
ValueError: You can't train a model that has been loaded in 8-bit precision on multiple devices in any distributed mode. In order to use 8-bit models that have been loaded across multiple GPUs the solution is to use Naive Pipeline Parallelism. Therefore you should not specify that you are under any distributed regime in your accelerate config.
```
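For reference, here is a minimal sketch of the kind of loading code that produces this conflict (an assumption on my part, not necessarily the exact script): `load_in_8bit=True` combined with `device_map="auto"` shards the quantized model across all visible GPUs, and Accelerate refuses to train such a model under any distributed launch.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-40b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,       # bitsandbytes 8-bit quantization
    device_map="auto",       # spreads layers across all visible GPUs
    trust_remote_code=True,  # required for Falcon's custom modeling code
)
# Launching this with `accelerate launch` (or any multi-process/DDP config)
# raises the ValueError above. A plain single-process `python train.py`
# launch, where the sharded model acts as a naive pipeline, does not.
```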
Any suggestions?
Thanks!