
How to set to AMD mode? #160

Closed
linonetwo opened this issue Mar 19, 2023 · 4 comments

Comments

linonetwo commented Mar 19, 2023

PS E:\repo\ComfyUI> ..\stable-diffusion-webui\venv\Scripts\Activate.ps1  # based on sd webui's venv, which can run on AMD cards

python main.py # or $env:HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py

This causes a "CUDA not found" error, even though I'm using an AMD RX 480 card (poor guy):
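As an aside, PowerShell does not accept Bash-style inline environment assignments (VAR=value command), so the second command above would fail to parse even with a working ROCm build. The PowerShell equivalent would be:

```shell
# PowerShell sets environment variables as a separate statement,
# not inline before the command (this override only matters for ROCm):
$env:HSA_OVERRIDE_GFX_VERSION = "10.3.0"
python main.py
```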

Set vram state to: NORMAL VRAM
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
Traceback (most recent call last):
  File "E:\repo\ComfyUI\execution.py", line 174, in execute
    executed += recursive_execute(self.server, prompt, self.outputs, x, extra_data)
  File "E:\repo\ComfyUI\execution.py", line 54, in recursive_execute
    executed += recursive_execute(server, prompt, outputs, input_unique_id, extra_data)
  File "E:\repo\ComfyUI\execution.py", line 54, in recursive_execute
    executed += recursive_execute(server, prompt, outputs, input_unique_id, extra_data)
  File "E:\repo\ComfyUI\execution.py", line 54, in recursive_execute
    executed += recursive_execute(server, prompt, outputs, input_unique_id, extra_data)
  File "E:\repo\ComfyUI\execution.py", line 63, in recursive_execute
    outputs[unique_id] = getattr(obj, obj.FUNCTION)(**input_data_all)
  File "E:\repo\ComfyUI\nodes.py", line 217, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
  File "E:\repo\ComfyUI\comfy\sd.py", line 779, in load_checkpoint_guess_config
    fp16 = model_management.should_use_fp16()
  File "E:\repo\ComfyUI\comfy\model_management.py", line 226, in should_use_fp16
    if torch.cuda.is_bf16_supported():
  File "E:\repo\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py", line 102, in is_bf16_supported
    return torch.cuda.get_device_properties(torch.cuda.current_device()).major >= 8 and cuda_maj_decide
  File "E:\repo\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py", line 552, in current_device
    _lazy_init()
  File "E:\repo\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py", line 229, in _lazy_init
    torch._C._cuda_init()
RuntimeError: The NVIDIA driver on your system is too old (found version 5000). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.

E:\repo\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py:88: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 5000). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ..\c10\cuda\CUDAFunctions.cpp:109.)
  return torch._C._cuda_getDeviceCount() > 0

I have also followed the AMD (Linux only) instructions: https://github.com/comfyanonymous/ComfyUI#amd-linux-only

P.S.

webui works with .\webui.bat --skip-torch-cuda-test --precision full --no-half, which I learnt from https://huggingface.co/CompVis/stable-diffusion-v1-4/discussions/64
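For reference, the traceback above crashes because torch.cuda.is_bf16_supported() lazily initializes CUDA before anything has checked torch.cuda.is_available(). A minimal stdlib-only sketch of the availability-first guard pattern (function names here are hypothetical, not ComfyUI's actual code):

```python
def safe_capability_check(is_available, get_capability):
    """Query a backend capability only after confirming the backend is
    available, so the probe never triggers a failing lazy initialization.
    (Hypothetical helper illustrating the guard pattern, not ComfyUI code.)"""
    if not is_available():
        return False
    return get_capability()

# Simulated broken backend: probing the capability raises, as in the traceback.
def broken_probe():
    raise RuntimeError("The NVIDIA driver on your system is too old")

# With the guard, an unavailable backend short-circuits before the probe runs.
print(safe_capability_check(lambda: False, broken_probe))  # False
```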

@comfyanonymous
Owner

Right now AMD is only supported with ROCm on Linux.

When you run the a1111 UI like that, it runs in CPU mode, so the equivalent here would be to use run_cpu.bat on the standalone build, or the --cpu option from the command line.
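The --cpu path can be sketched like this (hypothetical argument handling, not ComfyUI's actual parser):

```python
import argparse

def select_device(argv):
    """Return "cpu" when --cpu is passed, otherwise "cuda".
    Sketch only; real code would also verify CUDA availability first."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--cpu", action="store_true",
                        help="run everything on the CPU, never touching CUDA")
    args = parser.parse_args(argv)
    return "cpu" if args.cpu else "cuda"

print(select_device(["--cpu"]))  # cpu
```

The point of such a flag is that no CUDA API is ever called, so the lazy-init crash above cannot happen.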

@linonetwo
Author

linonetwo commented Mar 19, 2023

Thanks, my fault. I just noticed I was in CPU mode (GPU usage was 0%). Also, the a1111 UI has a fork that supports AMD GPUs: https://github.com/lshqqytiger/stable-diffusion-webui-directml

With stable-diffusion-webui-directml it runs on the AMD GPU.

[image]

So I imagine ComfyUI would also need a similarly large modification to run on Windows with an AMD GPU. Am I right? If so, I understand this might not be on your roadmap.

@comfyanonymous
Owner

I'm planning on adding pytorch-directml support; assuming there isn't anything I missed, it doesn't seem difficult to add.
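A rough sketch of what that support might look like, assuming the torch-directml package's torch_directml.device() entry point (the helper name is hypothetical):

```python
import importlib.util

def pick_device():
    """Prefer a DirectML device when the torch-directml package is
    installed, otherwise fall back to the CPU. (Hypothetical helper;
    torch_directml.device() is assumed from the package's documentation.)"""
    if importlib.util.find_spec("torch_directml") is not None:
        import torch_directml
        return torch_directml.device()
    return "cpu"
```

Checking for the package at runtime keeps the CUDA and CPU paths working unchanged on machines without DirectML.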

@robinjhuang
Collaborator

Fixed by: 3baded9
