
[MLIR][TORCH] Add OnnxToLinalg lowering pipeline #2844

Closed

Conversation

vivekkhandelwal1
Collaborator

This commit adds the lowering pipeline for the onnx-torch IR to linalg-on-tensors backend IR. The pipeline comprises three stages:
1.) OnnxToTorch Lowering Pass
2.) TorchFunctionToTorchBackend Pipeline
3.) TorchBackendToLinalgOnTensorsBackend Pipeline

Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
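For orientation, here is a minimal sketch of what the proposed three-stage pipeline function could look like, reconstructed from the commit message and the diff excerpt quoted later in this thread; the wrapper name, the registration, and its description text are assumptions, not the PR's verbatim code.

// Sketch reconstructed from the commit message and the PR diff excerpt;
// names and exact signatures are assumptions.
void createOnnxBackendToLinalgOnTensorsBackendPipeline(OpPassManager &pm) {
  // Stage 1: convert the imported ONNX operators to Torch dialect ops.
  pm.addNestedPass<func::FuncOp>(onnx_c::createTorchOnnxToTorchPass());
  // Stage 2: simplify the Torch IR down to the Torch backend contract.
  Torch::TorchLoweringPipelineOptions options;
  Torch::createTorchFunctionToTorchBackendPipeline(pm, options);
  // Stage 3: lower the backend-contract IR to linalg-on-tensors.
  TorchConversion::createTorchBackendToLinalgOnTensorsBackendPipeline(pm);
}

// The test's RUN line implies a registration along these lines
// (PassPipelineRegistration is standard MLIR; the description is a guess).
static mlir::PassPipelineRegistration<> pipeline(
    "onnx-backend-to-linalg-on-tensors-backend-pipeline",
    "Lower ONNX-flavored Torch IR to the linalg-on-tensors backend contract.",
    createOnnxBackendToLinalgOnTensorsBackendPipeline);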
@@ -0,0 +1,99 @@
// RUN: torch-mlir-opt -pass-pipeline='builtin.module(onnx-backend-to-linalg-on-tensors-backend-pipeline)' -split-input-file %s | FileCheck %s

Collaborator

Is it necessary to check the resulting IR quite so thoroughly? I would have thought something like

// CHECK-LABEL: test_softmax_axis_0
// CHECK-NOT: onnx
// CHECK-NOT: torch
// CHECK: return
// CHECK-SAME: tensor<3x4x5xf32>

would be enough.

Motivation: avoiding brittle tests

I'm also curious how you create long lit tests; isn't it a bit tedious :-D

Collaborator Author

There's a python script to generate the checks, located at llvm-project/mlir/utils/generate-test-checks.py.

Collaborator

Thanks for sharing the script link. My feeling is that there is too much being tested here; I'm interested to hear your thoughts.

Collaborator Author

Hi @newling, I'm not sure if there exists any specific guideline about adding these checks, but in my opinion having thorough checks does not have any disadvantage compared to doing the reverse. Also, for tests where the lowering generates a large set of ops, it's really hard to add the checks selectively. That is why I have added the complete checks. I think someone more familiar with the LLVM contribution guidelines can comment better on this.

Collaborator Author

Also, this particular PR adds a pipeline to lower the ONNX Torch IR to Linalg IR; I think the complete checks here are representative of the IR generated while running the full pipeline.

Collaborator

The disadvantage that I see is that an unrelated change in any of the passes in the pipeline can make this test fail. For example, if the order in which IR is created changes in DecomposeAtenLeakyReluOp::matchAndRewrite, then this test will fail. The person who makes the unrelated change then needs to understand what this test is doing, and update it. That's what I call a "brittle" test.

I understand your concern about "doing the reverse", but in this case I don't think my suggestion above undertests what the new functionality adds.

Collaborator

There's a python script to generate the checks, located at llvm-project/mlir/utils/generate-test-checks.py.

Yes, I know about that. I often have to come back and properly generalize tests when people have wholesale used it.

+1 to a basic check that the pipeline is doing what it should do and not an overly comprehensive integration test.

Collaborator

Agreed on keeping the tests as minimal as possible. This was the reason we shifted to not adding lit tests for conversion patterns. They were often not generalized and only caused friction when updating patterns.

Also, there's a nice MLIR document describing some of the best practices for lit tests.

Collaborator
@stellaraccident left a comment

What is this doing above and beyond the normal torch to linalg pipeline?

I'd rather just teach that one what to do vs having another bespoke pipeline.


@stellaraccident
Collaborator

(Or, if just adding a pass at the front door for onnx, delegate to the regular one.)
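To make that alternative concrete, here is a hypothetical sketch; the from-onnx option, its name, and the wiring are invented for illustration, and in practice the option would be added to the existing TorchLoweringPipelineOptions rather than redefined.

// Hypothetical sketch only: teach the regular pipeline to handle the ONNX
// entry point via an option, instead of registering a bespoke pipeline.
struct TorchLoweringPipelineOptions
    : public PassPipelineOptions<TorchLoweringPipelineOptions> {
  // Invented option name; not part of torch-mlir.
  Option<bool> fromOnnx{
      *this, "from-onnx",
      llvm::cl::desc("Run the ONNX-to-Torch conversion at the front."),
      llvm::cl::init(false)};
  // ... existing options elided ...
};

void createTorchFunctionToTorchBackendPipeline(
    OpPassManager &pm, const TorchLoweringPipelineOptions &options) {
  if (options.fromOnnx)
    pm.addNestedPass<func::FuncOp>(onnx_c::createTorchOnnxToTorchPass());
  // ... existing passes of the regular pipeline ...
}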

    OpPassManager &pm) {
  pm.addNestedPass<func::FuncOp>(onnx_c::createTorchOnnxToTorchPass());
  Torch::TorchLoweringPipelineOptions options;
  Torch::createTorchFunctionToTorchBackendPipeline(pm, options);
Collaborator

Is this entire pipeline needed? From skimming the ONNX Python importer, it seems that it has similar guarantees to the FX Python importer (value semantics, shapes, dtypes, etc.). If that's the case, then we don't need to run the entire TorchFunctionToTorchBackendPipeline; I think you just need the decompositions pass.
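A hypothetical sketch of the slimmer variant this comment suggests, assuming the ONNX importer already establishes those backend-contract invariants; the wrapper name is invented, and the decompositions-pass signature is an assumption that may differ across torch-mlir versions.

// Hypothetical sketch, not the PR's code: run only the decompositions
// instead of the full TorchFunctionToTorchBackendPipeline, assuming the
// importer already guarantees value semantics, shapes, and dtypes.
void createOnnxToLinalgSlimPipeline(OpPassManager &pm) {
  pm.addNestedPass<func::FuncOp>(onnx_c::createTorchOnnxToTorchPass());
  // Signature assumption: the pass takes a list of ops to leave undecomposed.
  pm.addNestedPass<func::FuncOp>(
      Torch::createDecomposeComplexOpsPass(/*legalOps=*/{}));
  TorchConversion::createTorchBackendToLinalgOnTensorsBackendPipeline(pm);
}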


@vivekkhandelwal1
Collaborator Author

This PR is not required anymore, so I'm closing it.
