Change dtype functions interface to take ints tuple for each tensor #1865

Merged

Conversation

ramiro050
Collaborator

The original design for the dtype functions outlined in #1462 could not properly handle ops that take an optional tensor as input when that optional tensor has a value of None. By the time the op is imported into torch-mlir, if an optional value is None, all information about the original type is lost from the op's type signature. As a result, torch-mlir cannot tell whether a None value came from an optional tensor or not, which was crucial in the original design, since each tensor argument had to be turned into two separate arguments for the dtype function.

This commit changes the interface of dtype functions so that each tensor becomes a tuple of two ints: the first representing the rank of the tensor and the second its dtype. Since there is now a one-to-one correspondence between the operands of an op and the operands of its dtype function, there is no ambiguity about which operand of the op corresponds to which operand of the dtype function.

To test the implementation, this commit defines dtype functions for the convolution ops, all of which take one optional tensor as an argument.
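To illustrate the shape of the new interface, here is a minimal sketch (function name, parameter names, and the dtype encoding are all assumptions for illustration; the actual torch-mlir signatures may differ) of a convolution-style dtype function where each tensor operand arrives as a (rank, dtype) tuple and the optional bias tensor is simply an optional tuple:

```python
from typing import Optional, Tuple

# Hypothetical dtype function: each tensor operand of the op maps
# one-to-one to a (rank, dtype) tuple argument. An optional tensor
# (e.g. a convolution bias) is just an Optional tuple, so a bias of
# None needs no special disambiguation.
def conv2d_dtype(
    input_rank_dtype: Tuple[int, int],
    weight_rank_dtype: Tuple[int, int],
    bias_rank_dtype: Optional[Tuple[int, int]] = None,
) -> int:
    input_rank, input_dtype = input_rank_dtype
    # In this sketch the result dtype simply follows the input tensor.
    return input_dtype

# Usage: dtypes are encoded as ints; FLOAT32 = 6 is an assumed encoding.
FLOAT32 = 6
result = conv2d_dtype((4, FLOAT32), (4, FLOAT32), None)  # -> 6
```

Because the bias is a single optional argument rather than a pair of split rank/dtype arguments, a None bias leaves the remaining operands unambiguous.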

@li-plus
Collaborator

li-plus commented Feb 13, 2023

LGTM!

@ramiro050 ramiro050 force-pushed the add-conv-dtype-funcs branch 2 times, most recently from ae86968 to c7ce0ba Compare February 13, 2023 22:41
@ramiro050 ramiro050 force-pushed the add-conv-dtype-funcs branch from c7ce0ba to a0f6291 Compare February 13, 2023 23:01
@ramiro050 ramiro050 merged commit 63945a2 into llvm:dtype-functions-staging Feb 14, 2023
@ramiro050 ramiro050 deleted the add-conv-dtype-funcs branch February 14, 2023 01:56
ramiro050 added a commit to ramiro050/torch-mlir that referenced this pull request May 9, 2023
commit bafe339
Author: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
Date:   Mon May 8 21:26:56 2023 +0000

    Add dtype functions for aten.atan and prims.squeeze

commit bebf695
Author: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
Date:   Mon May 8 21:26:10 2023 +0000

    Remove duplicate code from merge with main

commit 0d11895
Author: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
Date:   Fri May 5 21:39:02 2023 +0000

    Update LLVM tag

commit 73d5c07
Merge: 899d8bc eaaaeb6
Author: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
Date:   Fri May 5 21:30:09 2023 +0000

    Merge remote-tracking branch 'upstream/main' into merge-main

commit 899d8bc
Author: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
Date:   Mon Mar 13 21:39:14 2023 +0000

    Add dtype functions for `aten.ge.Tensor` and `aten.le.Tensor`

commit f58f9c2
Merge: ce7abf4 4912c39
Author: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
Date:   Mon Mar 13 21:32:00 2023 +0000

    Merge branch 'main' into merge-main

commit ce7abf4
Author: Jiahao Li <liplus17@163.com>
Date:   Wed Feb 22 06:54:41 2023 +0800

    Add dtype functions for ops that take dtype from 2nd operand (llvm#1891)

commit 63945a2
Author: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
Date:   Mon Feb 13 17:56:09 2023 -0800

    Change dtype functions interface to take ints tuple for each tensor (llvm#1865)

    The original design for the dtype functions outlined in
    llvm#1462 was unable to properly
    handle ops that take optional tensors as an input when the optional
    tensor has a value of None. By the time the op gets imported into
    torch-mlir, if an optional value is None, all information about the
    original type is lost from the op type signature, preventing
    torch-mlir from knowing if a value of None was from an optional tensor
    or not, which was crucial in the original design since each tensor
    argument must be turned into two separate arguments for the dtype
    function.

    This commit changes the interface to dtype functions such that each
    tensor turns into a tuple of two ints, the first representing the rank
    of the tensor and the second the dtype of the tensor. Since now there
    is a one-to-one correspondence between the operands of an op and the
    operands of its dtype function, there is no ambiguity about which
    operand of the op corresponds with which operand of the dtype
    function.

    To test the implementation, this commit defines dtype functions for
    the convolution ops, all of which take one optional tensor as an
    argument.

commit 981ac88
Author: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
Date:   Wed Feb 1 22:30:27 2023 +0000

    Add dtype functions for two tensor promotion ops (llvm#1831)

    This commit adds dtype functions for ops in RefineTypes under the
    category of "Promote the two dtypes". The only ops not added here are
    convolution ops, since they take an optional tensor argument, and the
    dtype pipeline currently does not correctly handle that case. I will
    add a follow up patch fixing this.

    This commit also adds two helper functions that perform a very
    thorough testing of dtype functions. The helper function
    `_check_two_tensor_op` is able to independently test invalid input
    dtypes and invalid output dtypes.

    Lastly, this commit also XFAILs "MobilenetV3Module_basic".

commit 83d4e89
Author: Jiahao Li <liplus17@163.com>
Date:   Sat Jan 21 02:39:41 2023 +0800

    Add dtype functions for floating point ops (llvm#1813)

commit 8cae5ba
Author: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
Date:   Mon Jan 16 14:32:23 2023 -0800

    Add dtype functions for comparison ops (llvm#1806)

    This commit adds dtype functions for comparison ops that always return
    a tensor of dtype `i1`.

commit 5b77c15
Author: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
Date:   Mon Jan 16 20:27:49 2023 +0000

    Add CI to `dtype-functions-staging` branch

commit ac94ba2
Author: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
Date:   Thu Jan 12 22:41:04 2023 +0000

    Move dtype functions into their own section in lib gen file

    In order to easily keep track of the dtype functions that have been
    moved to `abstract_interp_lib_gen.py` and make it easier to add new
    ones, this commit groups all the dtype functions together, rather than
    having them interspersed between the shape functions.