
[onnx] Fix onnx-to-torch lowering for flatten shape #2834

Merged (5 commits) on Feb 5, 2024

Conversation

rsuderman
Contributor

The existing `flatten` lowering did not define what the intermediate
shape was. This could result in failures to lower further to linalg as
the intermediate shape was unknown. Added a shape refinement section.
Collaborator

@newling left a comment

Did this come up in a downstream test? Would this be caught with #2795 landed?

@rsuderman
Contributor Author

Did this come up in a downstream test? Would this be caught with #2795 landed?

#2795 will help catch these tests in the future. It was caught by attempting to push the first model through (resnet) and we found this error, along with a few others.

@rsuderman rsuderman requested review from newling and renxida February 1, 2024 18:48
Collaborator

@newling left a comment

Just the 2 issues:

  1. axis == rank
  2. handling dynamic

@rsuderman rsuderman requested a review from newling February 1, 2024 22:43
@rsuderman
Contributor Author

Just the 2 issues

  1. axis == rank
  2. handling dynamic

If `axis == rank` then the loop never executes, meaning we never index into `shape`.
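That edge case can be checked with a small loop-based sketch (hypothetical Python mirroring the structure of the lowering loop, not the actual C++): when `axis == rank`, the second loop's range is empty, so `shape` is never indexed out of bounds and the trailing dim degenerates to 1.

```python
def flatten_shape(shape, axis):
    # Fold shape[0:axis] into the leading dim.
    front = 1
    for i in range(axis):
        front *= shape[i]
    # Fold shape[axis:rank] into the trailing dim; this range is empty
    # when axis == rank, so shape[i] is never read past the end.
    back = 1
    for i in range(axis, len(shape)):
        back *= shape[i]
    return [front, back]
```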

Collaborator

@newling left a comment

The `bool dynamic = ...` line needs another look

Collaborator

@renxida left a comment

It looks like at DefaultDomainAtoF.cpp:1424 there's a redundant `or`. Did you mean `shape[i]`?

Code:

         bool dynamic = shape[axis] == Torch::kUnknownSize ||
                         shape[axis] == Torch::kUnknownSize;

otherwise lgtm

@rsuderman
Contributor Author

The `bool dynamic = ...` line needs another look

Whoops, fixed it up. Just needed to check whether the value is dynamic.
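The shape of the fix can be sketched as follows (a hedged illustration with a hypothetical helper name, not the exact patch): each folded group is dynamic if any dim inside it is unknown, so the check has to run per element rather than testing `shape[axis]` twice.

```python
kUnknownSize = -1  # mirrors Torch::kUnknownSize

def collapsed_dim(shape, lo, hi):
    """Fold shape[lo:hi] into a single dim; the result is kUnknownSize
    as soon as any member dim is dynamic."""
    dim = 1
    for i in range(lo, hi):
        if shape[i] == kUnknownSize:  # per-element check, not shape[axis] twice
            return kUnknownSize
        dim *= shape[i]
    return dim
```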

@rsuderman rsuderman requested review from newling and renxida February 5, 2024 19:48
Collaborator

@newling left a comment

looks all good to me now

@renxida renxida merged commit cb52c4b into llvm:main Feb 5, 2024
3 checks passed
@rsuderman rsuderman deleted the flatten branch February 28, 2024 20:41