xpu: Support new PyTorch XPU backend (>=2.4) #31237
Fixes: huggingface/transformers#31237 XPU backend is available in the stock PyTorch starting from version 2.4, see [1]. This commit extends huggingface accelerate to support XPU from both IPEX and the stock pytorch. IPEX is being tried first. See: pytorch/pytorch#114842 Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
As of pytorch/pytorch@21144ce, huggingface/accelerate@b7fa2fa and 485d913, with the PRs applied:

Below are my results for the Hugging Face examples (https://github.com/huggingface/transformers/tree/main/examples/pytorch) running with the XPU backend on ATS-M. Overall, the Hugging Face examples can run on the XPU backend, though with low performance at the moment due to a range of operations falling back to the CPU. Effectively, one of the goals was to identify these ops for future prioritization. The only example that failed, due to some missing uAPI, is speech-pretraining. See details below.
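The commit message above says "IPEX is being tried first". A minimal sketch of that detection precedence, with illustrative names only (this is not the actual accelerate implementation):

```python
from typing import Optional

def select_xpu_source(ipex_importable: bool, torch_has_xpu: bool) -> Optional[str]:
    """Return which XPU provider to use, or None if XPU is unavailable.

    Mirrors the precedence described in the commit: probe IPEX
    (intel_extension_for_pytorch) first, then fall back to the stock
    PyTorch XPU backend (torch.xpu, available since 2.4).
    """
    if ipex_importable:
        return "ipex"            # IPEX present: prefer it
    if torch_has_xpu:
        return "stock-pytorch"   # torch >= 2.4 with torch.xpu built in
    return None                  # no XPU support at all

print(select_xpu_source(True, True))    # ipex
print(select_xpu_source(False, True))   # stock-pytorch
print(select_xpu_source(False, False))  # None
```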
@dvrogozh Thank you for such an extensive write-up, diving into how this affects the library functionality, and opening draft PRs for enabling it ❤️ It's OK if there isn't full coverage of operations: we support the mps backend despite there not being full coverage there yet either. It's great that you've investigated, and we have an idea of how much the fallback can slow things down. Overall, I don't see any reason why this shouldn't be something we enable. Similar to mps, though, it's probably not something we'll test on our side at the moment.

Yep, agreed :) We are working towards getting this into accelerate first, then the Trainer, in terms of which PRs to merge when.
I filed one more issue affecting some (not all) examples and tests: the cuda path is sometimes wrongly hit.
Fixes: huggingface#31237 XPU backend is available in the stock PyTorch starting from version 2.4, see [1]. This commit extends huggingface transformers to support XPU from both IPEX and the stock pytorch. IPEX is being tried first. See: pytorch/pytorch#114842 Requires: huggingface/accelerate#2825 Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
* xpu: support xpu backend from stock pytorch (>=2.4)

  Fixes: #31237. XPU backend is available in the stock PyTorch starting from version 2.4, see [1]. This commit extends huggingface transformers to support XPU from both IPEX and the stock pytorch. IPEX is being tried first. See: pytorch/pytorch#114842 Requires: huggingface/accelerate#2825

* xpu: enable gpt2 and decision_transformer tests for xpu pytorch backend

  Note that running xpu tests requires TRANSFORMERS_TEST_DEVICE_SPEC=spec.py passed to the test runner:

  ```python
  import torch

  DEVICE_NAME = 'xpu'
  MANUAL_SEED_FN = torch.xpu.manual_seed
  EMPTY_CACHE_FN = torch.xpu.empty_cache
  DEVICE_COUNT_FN = torch.xpu.device_count
  ```

  Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
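A test runner consuming a device spec file like the spec.py described above would typically load it as a Python module and read the expected attributes. The following is a simplified, hedged illustration (the actual loader lives in transformers' testing utilities; `load_device_spec` is a hypothetical name, and the spec here sets only `DEVICE_NAME` so it runs without torch):

```python
import importlib.util
import os
import tempfile

def load_device_spec(path: str):
    """Load a device spec file (e.g. spec.py) as a module."""
    spec = importlib.util.spec_from_file_location("device_spec", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Write a minimal spec file and load it back, as a runner might.
spec_source = "DEVICE_NAME = 'xpu'\n"
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "spec.py")
    with open(path, "w") as f:
        f.write(spec_source)
    mod = load_device_spec(path)
    print(mod.DEVICE_NAME)  # xpu
```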
XPU backend is a new backend in PyTorch which aims to enable hardware acceleration on Intel GPUs via SYCL. It is being actively worked on at the moment, with the first set of patches landed in PyTorch upstream and support disclosed in the documentation [1]. The initial version should be available starting from PyTorch 2.4, with the 2.5 release as the target point of maturity. The current focus of the effort is on the functional aspect: identifying and closing API gaps, if any, and populating the set of offloadable ATen operations. Some models and scenarios can already be tried out, with the caveat of low performance due to CPU fallbacks on some operations. Overall, [2] outlines the upstreaming process for the XPU backend. Note also some relevant XPU-related issues opened on the PyTorch side [3].

Previously, Intel GPU support in PyTorch was only available via the Intel Extension for PyTorch (IPEX). Effectively, this support is what is now being upstreamed to the stock PyTorch.
Here I would like to request that Hugging Face enable the stock PyTorch XPU backend. Considering that IPEX is already enabled in the Hugging Face repos, it should be fairly trivial to extend that support to cover the XPU backend, since the latter reuses the XPU device and operation naming from the IPEX era.
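Because the stock backend keeps the same "xpu" device string that IPEX introduced, calling code can stay device-agnostic. A hedged sketch of such a selection helper (`pick_device` is an illustrative name, not a Hugging Face API; it degrades to "cpu" when torch is absent so it runs anywhere):

```python
import importlib.util

def pick_device() -> str:
    """Pick the best available device string.

    The 'xpu' naming is shared by IPEX and the stock PyTorch backend,
    so callers need no change when switching between the two.
    """
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # torch not installed in this environment
    import torch
    # torch.xpu exists in stock PyTorch >= 2.4 (or with IPEX loaded)
    if getattr(torch, "xpu", None) is not None and torch.xpu.is_available():
        return "xpu"
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"

print(pick_device())
```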
I prototyped XPU backend support in Hugging Face. Please check these PRs:

I do observe some issues which need to be addressed on the XPU backend side. To avoid a long description, I will publish them separately in a comment below.
[1] https://github.com/pytorch/pytorch?tab=readme-ov-file#intel-gpu-support
[2] pytorch/pytorch#114842
[3] https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+xpu+in%3Atitle
CC: @gujinghui @EikanWang @fengyuan14 @guangyey @jgong5 @sywangyi @kding1