
[Tcp] Merge main into mlir-tcp #2518

Merged 42 commits on Oct 18, 2023. The diff below shows the changes from 1 commit.

Commits (42):
7a7be60  Fix python package install instructions (#2464) (sogartar, Sep 14, 2023)
b03efdf  build: manually update PyTorch version (vivekkhandelwal1, Sep 19, 2023)
278c41e  Bump llvm-project to f66cd9e9556a53142a26a5c21a72e21f1579217c. (#2466) (stellaraccident, Sep 19, 2023)
20ea1c9  Revert accidental change to submodule origin. (#2477) (stellaraccident, Sep 20, 2023)
023fc90  [Torch Dialect] add avg_pool 2d and 3d op variants (#2473) (davidgens-cerebras, Sep 20, 2023)
b9847b1  Fixing implicit double to float casts. (#2476) (benvanik, Sep 20, 2023)
059041e  [LTC] Support torch.ones/zeros/arange ops (#2440) (GlebKazantaev, Sep 21, 2023)
6699cbc  build: manually update PyTorch version (#2480) (vivekkhandelwal1, Sep 22, 2023)
5f772e8  CI: reconcile differences between RollPyTorch and pre-merge checks (#… (ashay, Sep 23, 2023)
a520d39  [MLIR][TORCH] Add device "cpu" support for aten.to.dtype_layout op (… (brucekimrokcmu, Sep 25, 2023)
c9fd789  [NFC] Clean-up `ConvertAtenViewOp` in linalg backend (#2470) (ramiro050, Sep 26, 2023)
ff7f8b2  update llvm-project to d13da154a7c7eff77df8686b2de1cfdfa7cc7029 (#2483) (dan-garvey, Sep 26, 2023)
7760bda  build: manually update PyTorch version (vivekkhandelwal1, Sep 27, 2023)
e69266a  update PyTorch version to 2.2.0.dev20230927 (#2489) (stellaraccident, Sep 27, 2023)
7c6b9d2  [linalg] Fix handling of trailing size-1 dimensions in aten.view (#2474) (ramiro050, Sep 27, 2023)
8abfa5b  Use PyTorch nightly for Arm release build (#2488) (vivekkhandelwal1, Sep 27, 2023)
4e1dd3b  add e2e support for torch.log10 (#2479) (saienduri, Sep 28, 2023)
860be09  Elide dynamic broadcast checks when in strict symbolic shapes mode. (… (stellaraccident, Sep 29, 2023)
71ac62f  build: manually update PyTorch version (vivekkhandelwal1, Sep 29, 2023)
c434736  [MLIR][TORCH] Add support for conversion to int8 dtype (vivekkhandelwal1, Sep 29, 2023)
9293326  [MLIR][TORCH] Add support for bitwise_right_shift and bitwise_and.Scal… (vivekkhandelwal1, Sep 28, 2023)
b75c208  update PyTorch version to 2.2.0.dev20231002 (#2497) (stellaraccident, Oct 2, 2023)
d10a86f  Disable LTC for arm release (vivekkhandelwal1, Sep 28, 2023)
32d9b20  Add linspace/cumprod/roll ops (#2498) (antoniojkim, Oct 3, 2023)
ca6ce89  [MLIR][TORCH] Add support for int8 dtype for sub, add, and bitwise_an… (vivekkhandelwal1, Oct 3, 2023)
4892ed4  update PyTorch version to 2.2.0.dev20231003 (#2500) (stellaraccident, Oct 3, 2023)
1c508af  Revert "[linalg] Fix handling of trailing size-1 dimensions in aten.v… (ramiro050, Oct 3, 2023)
2e5d650  [linalg] Add handling for leading and trailing size-1 dims in ViewOp (ramiro050, Oct 3, 2023)
14e6da8  update PyTorch version to 2.2.0.dev20231004 (#2502) (stellaraccident, Oct 4, 2023)
ae72eec  Improve aten.broadcast_to folder when in strict symbol mode (#2504) (qedawkins, Oct 5, 2023)
42b6c0a  update PyTorch version to 2.2.0.dev20231005 (#2506) (stellaraccident, Oct 5, 2023)
6f81ad7  [TorchToLinalg] Improve broadcast lowerings in strict symbolic modes … (qedawkins, Oct 5, 2023)
26ea13d  update PyTorch version to 2.2.0.dev20231006 (#2507) (stellaraccident, Oct 6, 2023)
9b5a4af  Update README to include new meeting schedule (#2503) (ramiro050, Oct 10, 2023)
e649e06  Add aten.unflatten.int support and its torch-to-tosa lowering (#2509) (zezhang, Oct 14, 2023)
f2c53b8  Add aten.isclose support and its torch-to-tosa lowering (#2512) (zezhang, Oct 16, 2023)
14a4da9  Update llvm-project to b44b3494f60296db6aca38a14cab061d9b747a0a (#2511) (Oct 17, 2023)
4279b75  update AtenClampOp in torch-to-tosa to handle fp inputs (#2516) (zezhang, Oct 17, 2023)
52abae1  Bump LLVM to get bazel fixes (#2517) (sjain-stanford, Oct 18, 2023)
86cf909  Merge branch 'main' into raghavanr/torch-mlir-upgrade (navahgar, Oct 18, 2023)
b846437  Fix the names of arith MaximumF and MinimumF ops (navahgar, Oct 18, 2023)
9624268  [Tcp] Add new e2e tests to pass list (navahgar, Oct 18, 2023)
Commit 71ac62f3a89f751a2750e922757789ff0cff489e
vivekkhandelwal1 committed Oct 2, 2023

build: manually update PyTorch version

Set PyTorch and TorchVision version to nightly release 2023-09-28.

The aten.baddbmm changes were made because upstream PyTorch has now added
support for fp16 gemm on CPU.
Refer: pytorch/pytorch@9399e0b
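
For context, a minimal sketch of what that upstream change enables (assumes a CPU build of a PyTorch nightly at or after 2023-09-28; the exact error raised by older builds varies):

    import torch

    # Batched matmul with bias accumulation, all in float16 on CPU.
    # Before upstream fp16-gemm support landed, this raised a RuntimeError,
    # which is why torch-mlir's dtype library used to reject float16 here.
    bias   = torch.randn(2, 3, 5, dtype=torch.float16)  # (b, n, p)
    batch1 = torch.randn(2, 3, 4, dtype=torch.float16)  # (b, n, m)
    batch2 = torch.randn(2, 4, 5, dtype=torch.float16)  # (b, m, p)
    out = torch.baddbmm(bias, batch1, batch2)
    assert out.shape == (2, 3, 5) and out.dtype == torch.float16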
25 changes: 10 additions & 15 deletions lib/Dialect/Torch/Transforms/AbstractInterpLibrary.cpp
@@ -9950,39 +9950,34 @@ StringRef mlir::torch::Torch::getAbstractInterpLibrary() {
 " func.func @\"__torch_mlir_dtype_fn.aten.baddbmm\"(%arg0: !torch.tuple<int, int>, %arg1: !torch.tuple<int, int>, %arg2: !torch.tuple<int, int>, %arg3: !torch.number, %arg4: !torch.number) -> !torch.int {\n"
 " %none = torch.constant.none\n"
 " %str = torch.constant.str \"AssertionError: \"\n"
-" %int5 = torch.constant.int 5\n"
 " %int11 = torch.constant.int 11\n"
 " %0:2 = torch.prim.TupleUnpack %arg1 : !torch.tuple<int, int> -> !torch.int, !torch.int\n"
 " %1:2 = torch.prim.TupleUnpack %arg2 : !torch.tuple<int, int> -> !torch.int, !torch.int\n"
-" %2 = torch.prim.ListConstruct %int11, %int5 : (!torch.int, !torch.int) -> !torch.list<int>\n"
-" %3 = torch.aten.__contains__.int_list %2, %0#1 : !torch.list<int>, !torch.int -> !torch.bool\n"
-" %4 = torch.aten.__not__ %3 : !torch.bool -> !torch.bool\n"
-" torch.prim.If %4 -> () {\n"
+" %2 = torch.aten.__isnot__ %0#1, %int11 : !torch.int, !torch.int -> !torch.bool\n"
+" torch.prim.If %2 -> () {\n"
 " torch.prim.If.yield\n"
 " } else {\n"
 " torch.prim.RaiseException %str, %none : !torch.str, !torch.none\n"
 " torch.prim.If.yield\n"
 " }\n"
-" %5 = torch.prim.ListConstruct %int11, %int5 : (!torch.int, !torch.int) -> !torch.list<int>\n"
-" %6 = torch.aten.__contains__.int_list %5, %1#1 : !torch.list<int>, !torch.int -> !torch.bool\n"
-" %7 = torch.aten.__not__ %6 : !torch.bool -> !torch.bool\n"
-" torch.prim.If %7 -> () {\n"
+" %3 = torch.aten.__isnot__ %1#1, %int11 : !torch.int, !torch.int -> !torch.bool\n"
+" torch.prim.If %3 -> () {\n"
 " torch.prim.If.yield\n"
 " } else {\n"
 " torch.prim.RaiseException %str, %none : !torch.str, !torch.none\n"
 " torch.prim.If.yield\n"
 " }\n"
-" %8 = torch.aten.eq.int %0#1, %1#1 : !torch.int, !torch.int -> !torch.bool\n"
-" torch.prim.If %8 -> () {\n"
+" %4 = torch.aten.eq.int %0#1, %1#1 : !torch.int, !torch.int -> !torch.bool\n"
+" torch.prim.If %4 -> () {\n"
 " torch.prim.If.yield\n"
 " } else {\n"
 " torch.prim.RaiseException %str, %none : !torch.str, !torch.none\n"
 " torch.prim.If.yield\n"
 " }\n"
-" %9 = torch.prim.ListConstruct %0#0, %1#0 : (!torch.int, !torch.int) -> !torch.list<optional<int>>\n"
-" %10 = torch.prim.ListConstruct %0#1, %1#1 : (!torch.int, !torch.int) -> !torch.list<int>\n"
-" %11 = call @__torch__.torch_mlir.dialects.torch.importer.jit_ir.build_tools.library_generator.promote_dtypes(%9, %10) : (!torch.list<optional<int>>, !torch.list<int>) -> !torch.int\n"
-" return %11 : !torch.int\n"
+" %5 = torch.prim.ListConstruct %0#0, %1#0 : (!torch.int, !torch.int) -> !torch.list<optional<int>>\n"
+" %6 = torch.prim.ListConstruct %0#1, %1#1 : (!torch.int, !torch.int) -> !torch.list<int>\n"
+" %7 = call @__torch__.torch_mlir.dialects.torch.importer.jit_ir.build_tools.library_generator.promote_dtypes(%5, %6) : (!torch.list<optional<int>>, !torch.list<int>) -> !torch.int\n"
+" return %7 : !torch.int\n"
 " }\n"
 " func.func @\"__torch_mlir_dtype_fn.aten.where.self\"(%arg0: !torch.tuple<int, int>, %arg1: !torch.tuple<int, int>, %arg2: !torch.tuple<int, int>) -> !torch.int {\n"
 " %0:2 = torch.prim.TupleUnpack %arg1 : !torch.tuple<int, int> -> !torch.int, !torch.int\n"
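
A note for reading the generated IR above: the integer constants are PyTorch ScalarType codes, 5 for float16 (Half) and 11 for bool, so the old code rejected any dtype contained in [bool, float16] while the new code only requires that the dtype "is not" bool. A rough Python equivalent of the updated precondition (the function name here is illustrative, not from the repo):

    import torch

    def baddbmm_dtype_precondition(batch1_dtype: torch.dtype,
                                   batch2_dtype: torch.dtype) -> None:
        # Mirrors the generated checks: bool is still rejected, float16 no
        # longer is, and both batch operands must share a dtype.
        assert batch1_dtype is not torch.bool
        assert batch2_dtype is not torch.bool
        assert batch1_dtype == batch2_dtype

The Python dtype function from which this C++ library is generated is diffed next.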
@@ -2822,7 +2822,7 @@ def aten〇remainder〇Scalar〡dtype(self_rank_dtype: Tuple[int, int], other: U
 
 # TODO: This should be fixed by switching to FakeTensor instead of Meta tensor
 @check_dtype_function(
-    _check_tensors_with_the_same_dtype(tensor_shapes=[(1, 1, 1), (1, 1, 1), (1, 1, 1)], tensor_device="cpu", error_types={torch.bool, torch.float16}) +
+    _check_tensors_with_the_same_dtype(tensor_shapes=[(1, 1, 1), (1, 1, 1), (1, 1, 1)], tensor_device="cpu", error_types={torch.bool}) +
     [ErrorInvocation(TensorOfShape(
         1, 1, 1, dtype=torch.float64, device="cpu"), TensorOfShape(1, 1, 1, dtype=torch.int16, device="cpu"), TensorOfShape(1, 1, 1, dtype=torch.int32, device="cpu")),
     ErrorInvocation(
@@ -2834,8 +2834,8 @@ def aten〇remainder〇Scalar〡dtype(self_rank_dtype: Tuple[int, int], other: U
 def aten〇baddbmm〡dtype(self_rank_dtype: Tuple[int, int], batch1_rank_dtype: Tuple[int, int], batch2_rank_dtype: Tuple[int, int], beta: Union[int, float, complex] = 1, alpha: Union[int, float, complex] = 1) -> int:
     batch1_rank, batch1_dtype = batch1_rank_dtype
     batch2_rank, batch2_dtype = batch2_rank_dtype
-    assert batch1_dtype not in [torch.bool, torch.float16]
-    assert batch2_dtype not in [torch.bool, torch.float16]
+    assert batch1_dtype is not torch.bool
+    assert batch2_dtype is not torch.bool
     assert batch1_dtype == batch2_dtype
     ranks: List[Optional[int]] = [batch1_rank, batch2_rank]
     dtypes = [batch1_dtype, batch2_dtype]
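
A quick empirical check of this dtype function's contract, namely that baddbmm's result dtype follows the shared operand dtype (float16 again assumes a nightly with fp16 CPU gemm):

    import torch

    for dt in (torch.float16, torch.float32, torch.float64):
        bias = torch.randn(1, 2, 2, dtype=dt)
        batch1 = torch.randn(1, 2, 3, dtype=dt)
        batch2 = torch.randn(1, 3, 2, dtype=dt)
        # promote_dtypes over two equal dtypes is the identity, so the
        # expected result dtype is simply dt.
        assert torch.baddbmm(bias, batch1, batch2).dtype == dt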
2 changes: 1 addition & 1 deletion pytorch-hash.txt
@@ -1 +1 @@
-d7520d8668dc08f7bed27a64f006c909006e653a
+fecde478ac83edf78e7d0e9d11ab73cb1580f6cf
2 changes: 1 addition & 1 deletion pytorch-requirements.txt
@@ -1,3 +1,3 @@
 -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
 --pre
-torch==2.2.0.dev20230927
+torch==2.2.0.dev20230928
2 changes: 1 addition & 1 deletion torchvision-requirements.txt
@@ -1,3 +1,3 @@
 -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
 --pre
-torchvision==0.17.0.dev20230927
+torchvision==0.17.0.dev20230928
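
Assembled from the two pin files above, the updated nightly install amounts to this single pip invocation:

    pip install --pre -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html \
        torch==2.2.0.dev20230928 torchvision==0.17.0.dev20230928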