provide "size" parameter in torch.normal when called with two floats #20545
Conversation
@umanwizard has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Ping -- can someone review this?
@umanwizard assign it to a reviewer or ping specific people. Most people (or at this point all people) don't read all their GitHub notifications.
@soumith Whoops, didn't realize I hadn't added anyone.
Can you add some documentation for this overload?
@fmassa ping, can you review again?
LGTM!
Args:
    mean (float): the mean for all distributions
    std (float): the standard deviation for all distributions
    size (int...): a sequence of integers defining the shape of the output tensor.
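A short usage example for the overload this docstring describes (the printed shape is exact; the sampled values themselves will vary):

```python
import torch

# Two float arguments plus an explicit output shape --
# the new (float, float, size) overload.
samples = torch.normal(2.0, 3.0, size=(1, 4))
print(samples.shape)  # torch.Size([1, 4])
```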
can we add size everywhere? That seems consistent and doesn't need yet another overload.
We would need the extra overload anyway, to support all of normal(Tensor, float, sizes), normal(float, Tensor, sizes), and normal(float, float, sizes).
can't size have a default value (None)? So you would have only the overloads that exist now.
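A hypothetical Python-level sketch of that suggestion, only to illustrate the idea of a `None` default (the real overloads are declared in native_functions.yaml and resolved by the C++ dispatcher; `normal_with_default` is an invented name):

```python
import torch
from typing import Optional, Sequence

# Hypothetical sketch: "size" with a None default, not the actual API.
def normal_with_default(mean, std, size: Optional[Sequence[int]] = None) -> torch.Tensor:
    if size is None:
        # existing behavior: the tensor arguments carry the shape
        return torch.normal(mean, std)
    # new behavior: two floats plus an explicit output shape
    return torch.randn(*size) * std + mean

print(normal_with_default(0.0, 1.0, size=(2, 3)).shape)  # torch.Size([2, 3])
```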
My point is we don't have any overload now that takes two floats. We have overloads that take two tensors, or one tensor and one float.
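A minimal sketch of the overload set as it stood before this PR, per the comment above:

```python
import torch

mean_t = torch.zeros(4)
std_t = torch.ones(4)

torch.normal(mean_t, std_t)  # Tensor mean, Tensor std
torch.normal(mean_t, 1.0)    # Tensor mean, float std
torch.normal(0.0, std_t)     # float mean, Tensor std
# torch.normal(0.0, 1.0)     # no (float, float) overload existed before this PR
```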
sorry I missed that. I still think we should just add size everywhere, though.
For posterity: gchanan and I spoke about this offline. We agreed that we do need a new overload for the "two floats" case, but there's also no good reason not to add "sizes" everywhere. So I'll do that.
@gchanan let me know if you have any other questions. It seems ready to land.
@zhangguanheng66 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@umanwizard Could you rebase and resolve the merge conflicts? I tried to land the diff and got merge conflict warnings. Thanks.
provide "size" parameter in torch.normal when called with two floats (#20545)
Summary: This has been requested in pytorch/pytorch#20323. (It is still not exactly the same as NumPy, which allows you to pass tensors as mean/std and broadcast them with size, but the present PR is extremely simple and does the main thing people are asking for.)
Pull Request resolved: pytorch/pytorch#20545
Differential Revision: D15358736
Pulled By: zhangguanheng66
fbshipit-source-id: 762ea5eab5b8667afbac2df0137df017ba6e413c
@zhangguanheng66 merged this pull request in dcd902b.
Were you able to land this? I will have some free time next week to fix the merge conflicts if still needed |
@umanwizard I have landed the Diff after resolving the merge conflict. No need to worry about this now.
This has been requested in #20323.
(It is still not exactly the same as NumPy, which allows you to pass tensors as mean/std and broadcast them with size, but the present PR is extremely simple and does the main thing people are asking for.)
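For comparison, a sketch of the NumPy behavior referenced above next to what this PR adds (shapes only; NumPy broadcasts array arguments against `size`, which the torch overload does not):

```python
import numpy as np
import torch

# NumPy: loc/scale may be arrays and are broadcast against size.
a = np.random.normal(loc=[0.0, 1.0], scale=1.0, size=(3, 2))
print(a.shape)  # (3, 2)

# This PR: mean/std must be plain floats when size is given.
t = torch.normal(0.0, 1.0, size=(3, 2))
print(t.shape)  # torch.Size([3, 2])
```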