
RuntimeError: "slow_conv2d_cpu" not implemented for 'Half' #74625

Closed
Tuxius opened this issue Mar 23, 2022 · 2 comments
Tuxius commented Mar 23, 2022

🐛 Describe the bug

Trying to run the new Aleph-Alpha/magma (https://github.com/Aleph-Alpha/magma), I ran into a PyTorch bug / missing implementation: some torch operations that work on the GPU in half precision are not implemented on the CPU. I discussed this with Aleph-Alpha/magma developer @Mayukhdeb here: https://github.com/Aleph-Alpha/magma/issues/28, and he pointed towards the missing implementation in PyTorch.

Here is the error message:

Traceback (most recent call last):
  File "C:\Python\magma-master\example_inference.py", line 18, in <module>
    embeddings = model.preprocess_inputs(inputs)
  File "C:\Python\magma-master\magma\magma.py", line 192, in preprocess_inputs
    return self.embed(input_list)
  File "C:\Python\magma-master\magma\magma.py", line 209, in embed
    image_embeddings = self.image_prefix(x)
  File "C:\Users\frank\anaconda3\envs\magma\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Python\magma-master\magma\image_prefix.py", line 83, in forward
    logits = self.enc(x)
  File "C:\Users\frank\anaconda3\envs\magma\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\frank\anaconda3\envs\magma\lib\site-packages\clip\model.py", line 143, in forward
    x = stem(x)
  File "C:\Users\frank\anaconda3\envs\magma\lib\site-packages\clip\model.py", line 138, in stem
    x = self.relu(bn(conv(x)))
  File "C:\Users\frank\anaconda3\envs\magma\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\frank\anaconda3\envs\magma\lib\site-packages\torch\nn\modules\conv.py", line 447, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\frank\anaconda3\envs\magma\lib\site-packages\torch\nn\modules\conv.py", line 443, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: "slow_conv2d_cpu" not implemented for 'Half'

Is there any way to get this implemented, or otherwise to get it running on the CPU?

Best
Tuxius

Versions

Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 11 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A

Python version: 3.9.7 (default, Sep 16 2021, 16:59:28) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22000-SP0
Is CUDA available: True
CUDA runtime version: 11.3.58
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070
Nvidia driver version: 496.76
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] numpy==1.21.5
[pip3] torch==1.11.0
[pip3] torchaudio==0.11.0
[pip3] torchtyping==0.1.4
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl anaconda
[conda] cudatoolkit 11.3.1 h59b6b97_2
[conda] mkl 2021.4.0 haa95532_640
[conda] mkl-service 2.4.0 py39h2bbff1b_0
[conda] mkl_fft 1.3.1 py39h277e83a_0
[conda] mkl_random 1.2.2 py39hf11a4ad_0
[conda] numpy 1.21.5 py39ha4e8547_0
[conda] numpy-base 1.21.5 py39hc2deb75_0
[conda] pytorch 1.11.0 py3.9_cuda11.3_cudnn8_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.11.0 py39_cu113 pytorch
[conda] torchtyping 0.1.4 pypi_0 pypi
[conda] torchvision 0.12.0 py39_cu113 pytorch

ngimel (Collaborator) commented Mar 23, 2022

No, we won't be implementing this on CPU. CPUs do not support efficient computation with the half datatype, so half-precision support on CPU is limited and shouldn't be relied upon.
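
A common workaround (a sketch, not guidance from this thread) is to use fp16 only on the GPU and fall back to fp32 on the CPU. Here `run_inference`, `model`, and `x` are hypothetical names standing in for the magma model and its preprocessed inputs:

import torch

def run_inference(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    # Use fp16 only when a CUDA device is available; the CPU conv kernels
    # in this PyTorch version do not support fp16.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if device == "cuda" else torch.float32

    model = model.to(device=device, dtype=dtype)
    x = x.to(device=device, dtype=dtype)

    with torch.no_grad():
        return model(x)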

Atomic-Germ commented

No, we won't be implementing this on CPU. CPUs do not support efficient computation with the half datatype, so half-precision support on CPU is limited and shouldn't be relied upon.

I'm sorry, but nobody should be relying on any machine learning at this point. It's almost all research, and enabling individuals to carry out that research on their own hardware is a good thing. Beyond that, this bug affects MPS as well.
