
CI jobs broken due to upstream changes on padding and stride #5873

Closed
datumbox opened this issue Apr 25, 2022 · 0 comments


datumbox commented Apr 25, 2022

🐛 Describe the bug

The latest main seems broken, potentially due to upstream changes in PyTorch core. I believe the problem was introduced by the changes to pad in pytorch/pytorch#73433 and to stride in pytorch/pytorch#72962.

Here are the different failures we get:

test_pad[1-edge-cpu]:
Traceback (most recent call last):
  File "/root/project/test/test_transforms_tensor.py", line 205, in test_pad
    _test_functional_op(F.pad, fn_kwargs={"padding": mul * 2, "fill": fill, "padding_mode": m}, device=device)
  File "/root/project/test/test_transforms_tensor.py", line 54, in _test_functional_op
    transformed_tensor = f(tensor, **fn_kwargs)
  File "/root/project/torchvision/transforms/functional.py", line 480, in pad
    return F_t.pad(img, padding=padding, fill=fill, padding_mode=padding_mode)
  File "/root/project/torchvision/transforms/functional_tensor.py", line 415, in pad
    img = torch_pad(img, p, mode=padding_mode, value=float(fill))
RuntimeError: Padding mode "replicate" doesn't take in value argument
test_pad[1-reflect-cpu]:
Traceback (most recent call last):
  File "/root/project/test/test_transforms_tensor.py", line 205, in test_pad
    _test_functional_op(F.pad, fn_kwargs={"padding": mul * 2, "fill": fill, "padding_mode": m}, device=device)
  File "/root/project/test/test_transforms_tensor.py", line 54, in _test_functional_op
    transformed_tensor = f(tensor, **fn_kwargs)
  File "/root/project/torchvision/transforms/functional.py", line 480, in pad
    return F_t.pad(img, padding=padding, fill=fill, padding_mode=padding_mode)
  File "/root/project/torchvision/transforms/functional_tensor.py", line 415, in pad
    img = torch_pad(img, p, mode=padding_mode, value=float(fill))
RuntimeError: Padding mode "reflect" doesn't take in value argument
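For context on the two failures above: only constant padding consumes a fill value, which is why the upstream change rejects `value=` for "replicate" (edge) and "reflect" modes. Here is a minimal pure-Python 1-D illustration of the three modes (not the torchvision implementation, just a sketch of the semantics):

```python
def pad1d(seq, n, mode="constant", fill=0):
    # `fill` is only meaningful for constant padding; the other modes
    # derive padded values from the sequence itself, which is why the
    # upstream check now rejects an explicit `value=` for them.
    if mode == "constant":
        left = right = [fill] * n
    elif mode == "edge":  # a.k.a. "replicate": repeat the border element
        left, right = [seq[0]] * n, [seq[-1]] * n
    elif mode == "reflect":  # mirror around the border, border not repeated
        left = seq[1:n + 1][::-1]
        right = seq[-n - 1:-1][::-1]
    else:
        raise ValueError(f"unknown padding mode: {mode}")
    return left + list(seq) + right

pad1d([1, 2, 3, 4], 2, mode="reflect")  # [3, 2, 1, 2, 3, 4, 3, 2]
pad1d([1, 2, 3], 2, mode="edge")        # [1, 1, 1, 2, 3, 3, 3]
```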
test_color_jitter_all[3-cpu-9]:
Traceback (most recent call last):
  File "/root/project/test/test_transforms_tensor.py", line 194, in test_color_jitter_all
    channels=channels,
  File "/root/project/test/test_transforms_tensor.py", line 86, in _test_class_op
    _test_transform_vs_scripted_on_batch(f, scripted_fn, batch_tensors)
  File "/root/project/test/test_transforms_tensor.py", line 46, in _test_transform_vs_scripted_on_batch
    s_transformed_batch = s_transform(batch_tensors)
  File "/root/project/env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1129, in _call_impl
    return forward_call(*input, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: is_tensor_creation || ((is_contiguous ^ is_channels_last_contiguous) && (is_contiguous || is_channels_last_contiguous)) INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1650784222208/work/torch/csrc/jit/tensorexpr/kernel.cpp":519, please report a bug to PyTorch.
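The internal assert above distinguishes contiguous from channels-last tensors by their strides. As an illustration of what the two layouts look like for a logical NCHW shape (a sketch for context, not PyTorch code; the helper names are hypothetical):

```python
def contiguous_strides(shape):
    # Row-major (NCHW-contiguous) strides: each stride is the product
    # of all trailing dimension sizes.
    strides, acc = [], 1
    for dim in reversed(shape):
        strides.append(acc)
        acc *= dim
    return strides[::-1]

def channels_last_strides(shape):
    # Channels-last stores the data physically as NHWC while keeping
    # the logical NCHW shape, so the channel stride becomes 1.
    n, c, h, w = shape
    return [c * h * w, 1, w * c, c]

contiguous_strides([2, 3, 4, 5])     # [60, 20, 5, 1]
channels_last_strides([2, 3, 4, 5])  # [60, 1, 15, 3]
```

The TensorExpr fuser asserts that a tensor is exactly one of these two layouts, which is the condition the batched transform in the test appears to violate.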

Versions

Latest main e99278a

cc @seemethere

peterbell10 added a commit to pytorch/pytorch that referenced this issue Apr 25, 2022
Fixes pytorch/vision#5873

In the python version of `F.pad`, the check that the fill value was left as its default was done by comparing against zero:
https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/nn/functional.py#L4366

So if someone explicitly passes in a zero value, this `TORCH_CHECK` was an accidental BC break. Instead, we should just warn in that case.

[ghstack-poisoned]
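The commit message above amounts to distinguishing "value left as default" from "value explicitly passed as zero". A pure-Python sketch of that check using a sentinel default (the name `check_pad_value` is hypothetical; this is not the actual PyTorch code, which compares against zero and therefore cannot tell the two cases apart):

```python
import warnings

_UNSET = object()  # sentinel: distinguishes "not passed" from an explicit 0.0

def check_pad_value(mode, value=_UNSET):
    """Error on a real non-zero value for non-constant modes, warn on an
    explicit zero (preserving backward compatibility), and stay silent
    when the argument was left as default."""
    if mode == "constant" or value is _UNSET:
        return "ok"
    if value != 0.0:
        raise ValueError(f'padding mode "{mode}" does not take a value argument')
    warnings.warn(f'value=0 is redundant for padding mode "{mode}"')
    return "warned"
```

With the zero-comparison used upstream, an explicit `value=0.0` is indistinguishable from the default, so turning the check into a hard error broke callers like torchvision that always forward `value=float(fill)`.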
peterbell10 added a commit to pytorch/pytorch that referenced this issue Apr 25, 2022
pytorchmergebot pushed a commit to pytorch/pytorch that referenced this issue Apr 25, 2022
@datumbox changed the title from "CI jobs broken due to upstream changes on padding" to "CI jobs broken due to upstream changes on padding and stride" on Apr 26, 2022
pytorchmergebot pushed a commit to pytorch/pytorch that referenced this issue Apr 26, 2022
Pull Request resolved: #76307

Approved by: https://github.com/albanD, https://github.com/jbschlosser, https://github.com/datumbox
facebook-github-bot pushed a commit to pytorch/pytorch that referenced this issue Apr 26, 2022
Summary:
This reverts commit 9390609.

Fixes pytorch/vision#5873

Pull Request resolved: #76332
Approved by: https://github.com/seemethere

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/1d55518198de14a011e7acd6c604afe512ee28b4

Reviewed By: osalpekar

Differential Revision: D35938180

Pulled By: zengk95

fbshipit-source-id: 15f48235b3b11ce9ca2379781f9e75cd1af39aed
facebook-github-bot pushed a commit to pytorch/pytorch that referenced this issue Apr 26, 2022
Summary:
Fixes pytorch/vision#5873

Pull Request resolved: #76307

Approved by: https://github.com/albanD, https://github.com/jbschlosser, https://github.com/datumbox

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/f02b7a9c36dd6182da694bc47a5c345285dfd951

Reviewed By: osalpekar

Differential Revision: D35938194

fbshipit-source-id: dabefdded870182ddc198d0e6473009270b895d5