add Cambricon MLUs support #29627
Conversation
@muellerzr Hi, could you help review this PR? Thanks.
Overall this looks good to me; I can't see any real issues with what we have going on here.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
LGTM! Do you want to add a bit of doc about this? 🤗
MLU torch is highly compatible with CUDA torch and is simple and convenient to use. It is used in the same way as CUDA torch, so you can refer to the CUDA documentation.
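For illustration, a minimal sketch (assuming the `torch_mlu` plugin registers an `mlu` device type and a `torch.mlu` namespace that mirrors `torch.cuda`):

```python
import torch
import torch_mlu  # Cambricon's PyTorch plugin; assumed to register the "mlu" device type

# The workflow mirrors the familiar torch.cuda usage.
if torch.mlu.is_available():
    device = torch.device("mlu")
    x = torch.randn(2, 3).to(device)  # move tensors exactly as you would with "cuda"
    print(x.device)
```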
What does this PR do?
Accelerate already supports Cambricon MLUs (huggingface/accelerate#2552).
This PR enables users to leverage Cambricon MLUs for training and inference of 🤗 Transformers models.
For example, you can run the official GLUE text-classification task on a Cambricon MLU with a command like the one below:
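A hypothetical invocation along these lines (not the exact command from this PR; it mirrors the standard `run_glue.py` usage and assumes the Trainer auto-detects the MLU device, as it does for CUDA):

```bash
# Hypothetical example; flags follow the standard run_glue.py example script.
# The MLU device is assumed to be picked up automatically by the Trainer.
python examples/pytorch/text-classification/run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name mrpc \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir ./output
```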
Below are the output logs: