
Experimental: allow fp16 in mps #961

Merged
pcuenca merged 11 commits into main from mps-allow-fp16 on Oct 29, 2022

Conversation

pcuenca
Member

@pcuenca commented Oct 24, 2022

The pipeline works if this change is applied, but results differ from those obtained when using full precision.

This is intended to be applied after #926.
Fixes #660.
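
For context, here is the kind of usage this change enables; a minimal sketch, assuming a checkpoint that ships fp16 weights (the model id and prompt are illustrative, not taken from this PR):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load half-precision weights and move the pipeline to the Apple Silicon GPU.
# The model id is illustrative; any checkpoint with an fp16 revision works.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
)
pipe = pipe.to("mps")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```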

@HuggingFaceDocBuilderDev

HuggingFaceDocBuilderDev commented Oct 24, 2022

The documentation is not available anymore as the PR was closed or merged.

@pcuenca
Member Author

pcuenca commented Oct 24, 2022

The RC of PyTorch 1.13 is ready: pytorch/pytorch#86312 (comment)

This is now ready for review. I removed workarounds for fixed issues, applied performance optimizations and updated the documentation.
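
As a quick environment check before enabling fp16 on mps (a minimal sketch; these calls are standard PyTorch, not part of this PR):

```python
import torch

# fp16 generation on mps depends on fixes that first shipped in PyTorch 1.13.
print(torch.__version__)                  # expect "1.13.0" or a 1.13 release candidate
print(torch.backends.mps.is_available())  # True on Apple Silicon with a Metal-capable macOS
print(torch.backends.mps.is_built())      # True when this build was compiled with MPS support
```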

Contributor

@patrickvonplaten left a comment


Feel free to merge whenever :-)

@pcuenca pcuenca merged commit 95414bd into main Oct 29, 2022
@pcuenca pcuenca deleted the mps-allow-fp16 branch October 29, 2022 19:09
yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023
* Docs: refer to pre-RC version of PyTorch 1.13.0.

* Remove temporary workaround for unavailable op.

* Update comment to make it less ambiguous.

* Remove use of contiguous in mps.

It appears to no longer be necessary.

* Special case: use einsum for much better performance in mps (see the sketch after this list).

* Update mps docs.

* MPS: make pipeline work in half precision.
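
To illustrate the einsum special case from the list above: a minimal sketch of the attention-score pattern, assuming the usual (batch, seq_len, head_dim) query/key layout; this shows the shape of the optimization, not the exact diffusers code:

```python
import torch

def attention_scores(query, key, scale):
    # query, key: (batch, seq_len, head_dim)
    # Mathematically equivalent to torch.bmm(query, key.transpose(1, 2)) * scale,
    # but expressed as an einsum, which ran much faster on the mps backend.
    return torch.einsum("b i d, b j d -> b i j", query, key) * scale

# Toy usage (float32 so it also runs on CPU):
q = torch.randn(2, 64, 40)
k = torch.randn(2, 64, 40)
probs = attention_scores(q, k, scale=40 ** -0.5).softmax(dim=-1)
print(probs.shape)  # torch.Size([2, 64, 64])
```
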
Development

Successfully merging this pull request may close these issues:

Running revision="fp16", torch_dtype=torch.float16 on mps M1