[MLIR][TORCH] Add OnnxToLinalg lowering pipeline #2844
Conversation
This commit adds the lowering pipeline from the ONNX-Torch IR to the linalg-on-tensors backend IR. The pipeline comprises three stages: 1) OnnxToTorch Lowering Pass, 2) TorchFunctionToTorchBackend Pipeline, 3) TorchBackendToLinalgOnTensorsBackend Pipeline. Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
Force-pushed 90f4218 to dfb16f1
@@ -0,0 +1,99 @@
// RUN: torch-mlir-opt -pass-pipeline='builtin.module(onnx-backend-to-linalg-on-tensors-backend-pipeline)' -split-input-file %s | FileCheck %s
Is it necessary to check the resulting IR quite so thoroughly? I would have thought something like
// CHECK-LABEL: test_softmax_axis_0
// CHECK-NOT: onnx
// CHECK-NOT: torch
// CHECK: return
// CHECK-SAME: tensor<3x4x5xf32>
would be enough.
Motivation: avoiding brittle tests
I'm also curious how you create long lit tests; isn't it a bit tedious :-D
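For reference, the minimal checks suggested above could be wrapped into a full test case along these lines (a sketch only: the function body, op spelling, and attributes are illustrative and not taken from the PR's actual test file):

```mlir
// RUN: torch-mlir-opt -pass-pipeline='builtin.module(onnx-backend-to-linalg-on-tensors-backend-pipeline)' -split-input-file %s | FileCheck %s

// Verify only the pipeline's contract: no onnx.* or torch.* ops survive,
// and the result type is preserved. The exact linalg IR is not pinned down,
// so unrelated changes in intermediate passes do not break the test.
// CHECK-LABEL: test_softmax_axis_0
// CHECK-NOT: onnx
// CHECK-NOT: torch
// CHECK: return
// CHECK-SAME: tensor<3x4x5xf32>
func.func @test_softmax_axis_0(%arg0: !torch.vtensor<[3,4,5],f32>) -> !torch.vtensor<[3,4,5],f32> {
  %0 = torch.operator "onnx.Softmax"(%arg0) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[3,4,5],f32>) -> !torch.vtensor<[3,4,5],f32>
  return %0 : !torch.vtensor<[3,4,5],f32>
}
```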
There's a Python script to generate the checks, located at llvm-project/mlir/utils/generate-test-checks.py.
Thanks for sharing the script link. My feeling is that there is too much being tested here; interested to hear your thoughts.
Hi @newling, I'm not sure if there is any specific guideline about adding these checks, but in my opinion having thorough checks does not have any disadvantage compared to the reverse. Also, for tests where the lowering generates a large set of ops, it's really hard to add checks selectively. That is why I have added the complete checks. I think someone more familiar with the LLVM contribution guidelines can comment better on this.
Also, since this particular PR adds a pipeline to lower the ONNX Torch IR to Linalg IR, I think the complete checks here are representative of the IR generated during the execution of this complete pipeline.
The disadvantage that I see is that an unrelated change in any of the passes in the pipeline can make this test fail. For example, if the order in which IR is created changes in DecomposeAtenLeakyReluOp::matchAndRewrite, then this test will fail. The person who makes the unrelated change then needs to understand what this test is doing and update it. That's what I call a "brittle" test.
I understand your concern about "doing the reverse", but in this case I don't think my suggestion above is undertesting what the new functionality adds?
> There's a python script to generate the checks, located at llvm-project/mlir/utils/generate-test-checks.py.
Yes, I know about that. I often have to come back and properly generalize tests when people have wholesale used it.
+1 to a basic check that the pipeline is doing what it should do and not an overly comprehensive integration test.
Agreed on keeping the tests as minimal as possible. This was the reason we shifted to not adding lit tests for conversion patterns. They were often not generalized and only caused friction when updating patterns.
Also, there's a nice MLIR document describing some of the best practices for lit tests.
What is this doing above and beyond the normal torch to linalg pipeline?
I'd rather just teach that one what to do vs having another bespoke pipeline.
(Or, if just adding a pass at the front door for ONNX, delegate to the regular one.)
OpPassManager &pm) {
  pm.addNestedPass<func::FuncOp>(onnx_c::createTorchOnnxToTorchPass());
  Torch::TorchLoweringPipelineOptions options;
  Torch::createTorchFunctionToTorchBackendPipeline(pm, options);
Is this entire pipeline needed? From skimming the ONNX Python importer, it seems that it has similar guarantees to the FX Python importer (value semantics, shapes, dtypes, etc.). If that's the case, then we don't need to perform the entire TorchFunctionToTorchBackendPipeline. I think you just need the decompositions pass.
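If only the decompositions were needed, the pipeline body could shrink to something close to a single nested pass; a test's RUN line might then look like the following (a sketch: `torch-decompose-complex-ops` is the registered name of torch-mlir's decomposition pass upstream, but the exact spelling should be verified against the tree):

```mlir
// RUN: torch-mlir-opt -pass-pipeline='builtin.module(func.func(torch-decompose-complex-ops))' %s | FileCheck %s
```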
This PR is not required anymore, therefore closing it.