cleanup prototype transforms functional test #5668
💊 CI failures summary and remediations

As of commit 373feaf (more details on the Dr. CI page): ✅ None of the CI failures appear to be your fault 💚

🚧 3 ongoing upstream failures: these were probably caused by upstream breakages that are not fixed yet.

This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
@pmeier
As of pytorch/pytorch#73348 roughly a month ago, …
Thanks @pmeier, I agree it's worth encouraging good test practices, and it's best done by example. Just a random question, but LGTM.
```diff
-assert len(output_boxes) == len(expected_bboxes)
-for a_out_box, out_box in zip(expected_bboxes, output_boxes.cpu()):
-    np.testing.assert_allclose(out_box.cpu().numpy(), a_out_box)
+torch.testing.assert_close(output_boxes.tolist(), expected_bboxes)
```
`assert_close` refuses to compare a list to a tensor? IIRC the numpy testing utils allow comparing array-likes with array-likes, right?
Yes and yes. The `numpy` testing functions coerce everything into an `np.ndarray` first and then compare. We consciously opted against that, mainly for two reasons:
- If you have tests parametrized over operators like PyTorch core does, it is impossible to see if the returned type is correct. For example:

  ```python
  >>> expected = 1.0
  >>> np.testing.assert_allclose(np.array(expected), expected)
  >>> torch.testing.assert_close(torch.tensor(expected), expected)
  TypeError: No comparison pair was able to handle inputs of type <class 'torch.Tensor'> and <class 'float'>.
  ```
- It allows us to compare sequences of tensors with different shapes elementwise:

  ```python
  >>> shapes = [(2, 3), (2, 4)]
  >>> np.testing.assert_allclose(*([np.ones(shape) for shape in shapes],) * 2)
  ValueError: could not broadcast input array from shape (2,3) into shape (2,)
  >>> torch.testing.assert_close(*([torch.ones(shape) for shape in shapes],) * 2)
  ```
This aligns nicely with the ability of `torch.testing.assert_close` to compare mappings elementwise, which is not possible with the `numpy` functions.
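For illustration, a minimal sketch of that mapping comparison (the keys and shapes below are made up, not taken from the test suite):

```python
import torch

# assert_close walks the mapping key by key, so the values may have different
# shapes per key; the numpy helpers would instead try to coerce everything
# into a single ndarray and fail.
actual = {"boxes": torch.ones(2, 4), "labels": torch.zeros(3, dtype=torch.int64)}
expected = {"boxes": torch.ones(2, 4), "labels": torch.zeros(3, dtype=torch.int64)}
torch.testing.assert_close(actual, expected)  # passes
```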
Summary: (Note: this ignores all push blocking failures!)
Reviewed By: datumbox
Differential Revision: D35216775
fbshipit-source-id: cc4aa86ec81c3d2b88b601d190ed763f2b4f3b91
The goal of this PR is to facilitate best practices for our tests. In a normal scenario, these changes would be considered nitpicks. In light of #5520, and since new contributors usually (and rightfully) copy-paste code from similar tests, I think it would be beneficial to have proper usages in place early on.
There are two patterns I found:
- Constructors like `features.Image` accept a `device` kwarg. Thus, something like `features.Image(...).to(device)` is an anti-pattern and should be replaced by `features.Image(..., device=device)`.
- `torch.testing.assert_close` is able to handle nested sequence comparison. For example, in the bounding box reference tests, we want to test the output tensor of a kernel against a two-level list of values computed with another library. Instead of writing custom comparison logic for this scenario, we can simply do `torch.testing.assert_close(tensor.tolist(), list_of_lists)`, as sketched below.
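A minimal sketch of both patterns (the values, shapes, and the exact `features.Image` call are illustrative, not copied from the test suite):

```python
import torch
from torchvision.prototype import features  # prototype namespace used by these tests

device = "cpu"

# Pattern 1: pass the device to the constructor instead of moving afterwards.
# Anti-pattern: features.Image(torch.rand(3, 32, 32)).to(device)
image = features.Image(torch.rand(3, 32, 32), device=device)

# Pattern 2: compare the kernel output against the nested reference list directly,
# instead of looping over boxes and calling np.testing.assert_allclose per element.
output_boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0], [5.0, 5.0, 15.0, 15.0]], device=device)
expected_bboxes = [[0.0, 0.0, 10.0, 10.0], [5.0, 5.0, 15.0, 15.0]]
torch.testing.assert_close(output_boxes.tolist(), expected_bboxes)
```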
cc @vfdev-5