The internal torchao tensor subclasses cause errors with torch.compile #1463
Comments
Do you have a repro?
ao/torchao/float8/float8_tensor.py, line 361 (at commit 52b6f4d)
I have often found this is indicative of a graph break somewhere unexpected.
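Not from the thread itself, but as a hedged sketch of one way to check for unexpected graph breaks (using a stand-in `nn.Linear` model rather than the reporter's): `torch._dynamo.explain` summarizes how many graph breaks Dynamo hit and why, and `TORCH_LOGS="graph_breaks"` prints them during a normal run.

```python
# Sketch only: a stand-in nn.Linear model, not the reporter's quantized model.
import torch
import torch._dynamo
import torch.nn as nn

model = nn.Linear(16, 16)
x = torch.randn(4, 16)

# explain() traces the model with Dynamo and summarizes any graph breaks it hits.
explanation = torch._dynamo.explain(model)(x)
print(explanation.graph_break_count)  # number of graph breaks encountered
print(explanation.break_reasons)      # reason for each break

# Alternatively, run the original script with graph-break logging enabled:
#   TORCH_LOGS="graph_breaks" python your_training_script.py
```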
Please provide a repro. Also, what version of PyTorch are you using? Can you run
I can't share my repro directly, as the codebase for the model this happens with is large and complicated (I have no clue where it is having problems with torchao). I can't seem to reproduce it with small toy models, as it seems to occur only before Triton begins to compile the kernels in the backend. The tensor classes seem to have something to do with testing the quantization optimizations being performed on the model with Triton. I will try to narrow down the problematic modules and provide a repro if possible.
No problem. Could you try upgrading PyTorch to the most recent version to see if it is fixed? There could be tensor subclass + compile issues that were fixed recently.
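A quick way to confirm what is currently installed before upgrading (a sketch; the nightly index URL below is an example and should match your CUDA version):

```python
# Print the installed PyTorch and torchao versions to compare against the latest releases.
import torch
from importlib.metadata import version

print(torch.__version__)    # installed PyTorch version
print(version("torchao"))   # installed torchao version

# Example upgrade commands (adjust the index URL for your CUDA version):
#   pip install --upgrade --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121
#   pip install --upgrade torchao
```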
When I use `torch.compile` with certain models, I get errors: `LinearActivationQuantizedTensor` and `FakeTensor`, which are tensor subclasses, are not supported by `torch.compile`, and this is what raises the errors. The model compiles correctly if the errors are suppressed with the following:
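The exact snippet is not shown in this excerpt; a minimal sketch of how such suppression typically looks, assuming the standard `torch._dynamo.config.suppress_errors` flag and a stand-in model rather than the reporter's exact code:

```python
# Sketch: suppress_errors makes Dynamo fall back to eager execution on frames that
# fail to compile instead of raising, so the run continues. This hides the underlying
# tensor-subclass problem rather than fixing it.
import torch
import torch._dynamo
import torch.nn as nn

torch._dynamo.config.suppress_errors = True

model = nn.Linear(16, 16)          # placeholder for the quantized model
compiled = torch.compile(model)
out = compiled(torch.randn(4, 16))
print(out.shape)
```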