While using the integration of bitsandbytes, Error shows: name 'torch' is not defined #31273
Comments
Hi @46319943, thanks for the report! This issue is the same as #31243, and a fix has already been merged on main. Could you please try again with the main branch of transformers?
Wow, I encountered this problem just yesterday, and it has already been fixed! There are no issues on the main branch now. @SunMarc, thank you for the quick response. However, I'm still confused about the import of torch and bnb. Since many other functions in the file rely on them, shouldn't there be a check to raise an exception if bitsandbytes isn't imported?
The issue was that we didn't import torch since bitsandbytes wasn't installed, so a type annotation that referenced `torch` raised a NameError at definition time. The fix uses a forward reference (a quoted annotation) so that nothing is evaluated when the function is defined.
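A minimal sketch of the pattern, with a simplified function name (not the exact transformers code):

```python
# torch is deliberately NOT imported in this module.

def set_tensor(value: "torch.Tensor") -> None:
    # The quoted annotation is stored as a plain string and is never
    # evaluated at definition time, so defining this function succeeds
    # even though torch was never imported.
    print(type(value))

# The unquoted variant would fail as soon as the module is imported:
#   def set_tensor(value: torch.Tensor) -> None: ...
#   NameError: name 'torch' is not defined
```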
Yes, your explanation is very clear. The existing solution uses a forward reference to avoid importing at definition time. What I'm confused about is that there are many other functions that use torch: if bitsandbytes is not installed, shouldn't they raise an error because torch is not imported initially? For example, the first function in the file, `set_module_quantized_tensor_to_device`, uses `torch`. If bitsandbytes is not installed, `torch` is not imported at the beginning of the file, which should result in a name error for `torch`.
If `torch` is only referenced inside the function body, it's fine. Moreover, all the functions in this file are used only when bitsandbytes is available, so `torch` is guaranteed to have been imported by the time they run.
After carefully reviewing Python's import mechanism, I realized what was confusing me before. Initially, I thought the reason why it's fine to call `torch` inside a function is simply that Python resolves the name lazily, when the function is called rather than when it is defined.

The real reason it's fine is that these functions are designed to be called only if `is_bitsandbytes_available()` is true, in which case the guarded module-level `import torch` has already run and bound the name in the module's globals.

I assumed that importing `torch` inside an `if` block would limit its visibility, but Python has no block scope: an import executed at module level, even inside a conditional, binds the name in the module's global namespace.
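A self-contained sketch of that mechanism (the flag below stands in for `is_bitsandbytes_available()`; it is not the real check):

```python
BNB_AVAILABLE = False  # stand-in for is_bitsandbytes_available()

if BNB_AVAILABLE:
    # A module-level import inside an `if` still binds `torch` in the
    # module's global namespace when the guard is true.
    import torch

def uses_torch():
    # `torch` is looked up in the module globals at CALL time, not at
    # definition time, so defining this function is always safe.
    return torch.zeros(2)

# No error so far, even though torch was never imported. Calling the
# function is what fails when the guard was false:
try:
    uses_torch()
except NameError as err:
    print(err)  # name 'torch' is not defined
```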
System Info

transformers version: 4.41.2

Who can help?

@SunMarc @youse

Information

Tasks

An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)

Reproduction
I'm using a quantized version of a model; the quantization was done with AWQ. After installing AutoAWQ and running the following code:
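A minimal sketch of that kind of load; the checkpoint ID is a placeholder, not necessarily the model from this report:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: any AWQ-quantized checkpoint on the Hub exercises the same
# code path as in the original report.
model_id = "TheBloke/Mistral-7B-Instruct-v0.2-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```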
The error shows `name 'torch' is not defined`, which implies that something is wrong with `torch`. However, after looking at `transformers/integrations/bitsandbytes.py`, it can be seen that `torch` is imported there only if `is_bitsandbytes_available()` is true. So this is not a problem with `torch` but with the installation of `bitsandbytes`. After installing bitsandbytes with `pip install bitsandbytes`, the error is resolved.
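For reference, the guard at the top of that file follows roughly this pattern (paraphrased, not the verbatim source):

```python
from transformers.utils import is_bitsandbytes_available

if is_bitsandbytes_available():
    # These imports only run when bitsandbytes is installed. When it is
    # missing, `torch` is never bound in this module's namespace, and any
    # unguarded use of it raises: name 'torch' is not defined.
    import bitsandbytes as bnb
    import torch
```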
Expected behavior
So, the error message is confusing and misleading. It should report that `bitsandbytes` is not installed instead of raising an error about `torch`.
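An explicit availability check along these lines would surface the real cause; this is a sketch of the idea, not the actual fix that was merged:

```python
from transformers.utils import is_bitsandbytes_available

# Hypothetical guard: fail early with an actionable message instead of
# letting a later reference to the never-imported torch raise a NameError.
if not is_bitsandbytes_available():
    raise ImportError(
        "Using the bitsandbytes integration requires the bitsandbytes "
        "library: `pip install bitsandbytes`"
    )
```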