
[torch] torch.dequantize for per channel tensors to linalg #2769

Merged (7 commits) on Jan 26, 2024

Conversation

rsuderman
Contributor

Support lowering dequantization of per-channel tensors from the
`torch` dialect to a linalg decomposition. Tested via a numerical
`torch` e2e test.
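For reference, per-channel dequantization applies a separate scale and zero point to each slice along the quantization axis, rather than one pair for the whole tensor. A minimal sketch of that semantics (plain Python, quantization along axis 0; the function name is illustrative, not part of this PR's code):

```python
def dequantize_per_channel(quantized, scales, zero_points):
    """Dequantize a 2-D integer tensor per channel along axis 0.

    Each row c uses its own scale and zero point:
        result[c][i] = (quantized[c][i] - zero_points[c]) * scales[c]
    """
    return [
        [(q - zero_points[c]) * scales[c] for q in row]
        for c, row in enumerate(quantized)
    ]
```

The linalg decomposition expresses the same elementwise computation as a generic op that broadcasts the per-channel scale and zero-point vectors along the quantized dimension.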

Collaborator

@renxida renxida left a comment


`"ElementwiseDequantizePerChannelModule_basic"` needs to be inserted at line 1392 of `projects/pt1/e2e_testing/xfail_sets.py` to make CI pass.

Otherwise LGTM!

@rsuderman rsuderman force-pushed the quant_dequant_per_channel branch from e967a91 to 7804534 Compare January 25, 2024 00:05
@rsuderman rsuderman merged commit 2ef2283 into llvm:main Jan 26, 2024
5 checks passed
zjgarvey pushed a commit to zjgarvey/torch-mlir that referenced this pull request Jan 29, 2024
…2769)
@rsuderman rsuderman deleted the quant_dequant_per_channel branch February 28, 2024 20:41
2 participants