
provide "size" parameter in torch.normal when called with two floats #20545

Closed

Conversation

umanwizard
Contributor

This has been requested in #20323

(It is still not exactly the same as NumPy, which allows you to pass tensors as mean/std and broadcast them with size, but the present PR is extremely simple and does the main thing people are asking for.)
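For context, this is the NumPy behavior the PR is approximating (illustrative only; `numpy.random.normal` is the real NumPy API, and the broadcast case shown in the comment is the part this PR does not cover):

```python
import numpy as np

# Scalar mean/std plus a size: every element is drawn i.i.d. from
# N(loc, scale), and `size` alone determines the output shape.
# This is the case the PR adds to torch.normal.
samples = np.random.normal(loc=0.0, scale=1.0, size=(2, 3))
print(samples.shape)  # (2, 3)

# NumPy additionally broadcasts array-valued loc/scale against size;
# that richer case is out of scope for this PR.
broadcast = np.random.normal(loc=[0.0, 10.0, 20.0], scale=1.0, size=(4, 3))
print(broadcast.shape)  # (4, 3)
```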

@umanwizard umanwizard added the module: numpy Related to numpy support, and also numpy compatibility of our operators label May 15, 2019
Contributor

@facebook-github-bot facebook-github-bot left a comment


@umanwizard has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@umanwizard
Contributor Author

Ping -- can someone review this?

@soumith
Member

soumith commented May 24, 2019

@umanwizard assign it to a reviewer or ping specific people. Most people (at this point, all people) don't read all their GitHub notifications.

@umanwizard
Contributor Author

@soumith Whoops, didn't realize I hadn't added anyone.

@fmassa
Member

fmassa commented May 24, 2019

Can you add some documentation for this overload?

@pytorchbot pytorchbot added the module: docs Related to our documentation, both in docs/ and docblocks label May 24, 2019
@umanwizard
Contributor Author

@fmassa ping, can you review again?

Member

@fmassa fmassa left a comment


LGTM!

Args:
mean (float): the mean for all distributions
std (float): the standard deviation for all distributions
size (int...): a sequence of integers defining the shape of the output tensor.
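A minimal stdlib-only sketch of the documented semantics (the function name is a hypothetical stand-in; the real implementation lives in ATen, not Python):

```python
import random

def normal_float_float(mean, std, size):
    """Sketch: draw every element i.i.d. from N(mean, std).

    `size` is a sequence of ints giving the output shape; the result
    here is a nested list standing in for a tensor.
    """
    def build(dims):
        if not dims:
            return random.gauss(mean, std)
        return [build(dims[1:]) for _ in range(dims[0])]
    return build(list(size))

out = normal_float_float(0.0, 1.0, (2, 3))
print(len(out), len(out[0]))  # 2 3
```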
Contributor


can we add size everywhere? That seems consistent and doesn't need yet another overload.

Contributor Author


We would need the extra overload anyway, to support all of normal(Tensor, float, sizes), normal(float, Tensor, sizes), and normal(float, float, sizes)

Contributor

@gchanan gchanan Jun 6, 2019


can't size have a default value (None)? So you would have only the overloads that exist now.

Contributor Author


My point is we don't have any overload now that takes two floats. We have overloads that take two tensors, or one tensor and one float.

Contributor


sorry I missed that. I still think we should just add size everywhere, though.

Contributor Author


For posterity: gchanan and I spoke about this offline. We agreed that we do need to have a new override for the "two floats" case, but there's also no good reason not to add "sizes" everywhere. So I'll do that.
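The resolved design (size accepted on every overload, required only when both mean and std are plain floats) can be sketched as Python-level dispatch. Illustrative only: in PyTorch these are separate native overloads, and the function below is a hypothetical stand-in that returns which overload would fire:

```python
def normal_dispatch(mean, std, size=None):
    """Hypothetical sketch of overload selection for torch.normal.

    Lists stand in for tensors. Returns (overload_name, size) so the
    routing is easy to inspect.
    """
    mean_is_scalar = isinstance(mean, (int, float))
    std_is_scalar = isinstance(std, (int, float))
    if mean_is_scalar and std_is_scalar:
        # The overload this PR adds: with no tensor argument, the
        # output shape can only come from `size`, so it is required.
        if size is None:
            raise TypeError("size is required when mean and std are floats")
        return ("float_float", size)
    if mean_is_scalar or std_is_scalar:
        # Existing mixed overloads; shape comes from the tensor arg,
        # with `size` optionally accepted per the agreed design.
        return ("tensor_float_mix", size)
    return ("tensor_tensor", size)

print(normal_dispatch(0.0, 1.0, (2, 3)))  # ('float_float', (2, 3))
```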

@zhangguanheng66
Contributor

@gchanan let me know if you have any other questions. It seems ready to land.

Contributor

@facebook-github-bot facebook-github-bot left a comment


@zhangguanheng66 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@zhangguanheng66
Contributor

@umanwizard Could you rebase and resolve the merge conflicts? I tried to land the diff and got merge conflict warnings. Thanks.


zdevito pushed a commit to zdevito/ATen that referenced this pull request Jul 3, 2019
…(#20545)

Summary:
This has been requested in pytorch/pytorch#20323

(It is still not exactly the same as NumPy, which allows you to pass tensors as mean/std and broadcast them with size, but the present PR is extremely simple and does the main thing people are asking for.)
Pull Request resolved: pytorch/pytorch#20545

Differential Revision: D15358736

Pulled By: zhangguanheng66

fbshipit-source-id: 762ea5eab5b8667afbac2df0137df017ba6e413c
@facebook-github-bot
Contributor

@zhangguanheng66 merged this pull request in dcd902b.

@umanwizard
Contributor Author

Were you able to land this? I will have some free time next week to fix the merge conflicts if still needed

xzhu1900 pushed a commit to xzhu1900/pytorch that referenced this pull request Jul 5, 2019
…ytorch#20545)

@zhangguanheng66
Contributor

@umanwizard I have landed the diff after resolving the merge conflict. No need to worry about this now.
