Docker Desktop and Nvidia Runtime inoperable across multiple distros #229
Comments
Same issue.
Same issue
same issue
same issue
I had the same problem and followed the same debug steps and saw an almost identical setup:
This looks quite related to #154 as well as the linked docker forum thread.
I encountered the same error running nvidia docker. I made my docker run with rootless privileges and set ...
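For anyone trying the rootless route, a commonly cited workaround (not confirmed by this thread) is to disable cgroup handling in the NVIDIA container runtime config. A minimal sketch, assuming the toolkit's default config path and a rootless Docker daemon managed by the user systemd instance:

```sh
# Rootless Docker cannot manage device cgroups, so the NVIDIA runtime's
# cgroup setup has to be turned off in its config file.
# Default config path for the NVIDIA Container Toolkit:
sudo sed -i 's/^#\?no-cgroups = false/no-cgroups = true/' \
    /etc/nvidia-container-runtime/config.toml

# Restart the rootless docker daemon afterwards
systemctl --user restart docker
```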
Would love to see this solved.
same error
Docker Desktop is not currently supported by the NVIDIA Container Stack.
It works with the new Docker Desktop version >= 4.32.0. Apparently they fixed this and never bothered to tell us. Phew!
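To check whether a given Docker Desktop version is affected, a quick test under the Desktop context (the CUDA image tag is the one used in this issue's repro command below):

```sh
# Switch to the Docker Desktop engine and try a GPU container
docker context use desktop-linux
docker run --rm --gpus all nvidia/cuda:12.0.0-devel-ubuntu22.04 nvidia-smi
```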
It still does not work for me. I'm on Ubuntu 24.04, Docker Desktop v4.36.0. I'm trying to start these services:

```yaml
ollama:
  image: ollama/ollama:0.4.3
  container_name: ollama
  ports:
    - "11434:11434"
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: all
            capabilities: [gpu]
```

If I run it with sudo or with Docker Desktop off, it uses the Docker Engine installation and works. If I start Docker Desktop and run the same compose file, it fails.
Looked all over the web to no avail. ChatGPT was of no help either.
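A minimal way to sanity-check the compose setup, assuming the fragment above sits under `services:` in a compose file in the current directory (the service name `ollama` is taken from the snippet; whether `nvidia-smi` is usable inside the container depends on GPU passthrough having worked):

```sh
# Start only the ollama service defined above
docker compose up -d ollama

# Check whether the container actually sees the GPU; if nvidia-smi
# isn't injected into the container, the logs usually show whether
# a GPU was detected instead
docker exec ollama nvidia-smi
docker logs ollama
```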
Exactly. I am also facing the same issue while installing via the harbor app. It works with the default context. I am using Docker Desktop v4.38.0. It seems this is still not fixed for the other context.
1. Issue or feature description
On fresh Arch (EndeavourOS) and Ubuntu (20.04 and 22.04) installations, attempts to use the nvidia runtime with any image via Docker Desktop fail with this error:
2. Steps to reproduce the issue
```sh
sudo dockerd --add-runtime=nvidia=/usr/bin/nvidia-container-runtime
```
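As an alternative to launching dockerd by hand, a sketch of registering the runtime persistently with the system Docker Engine via its daemon.json (the path `/etc/docker/daemon.json` is the engine's default; Docker Desktop keeps its own engine configuration separate, which is part of why the two contexts behave differently):

```sh
# Register the NVIDIA runtime with the system Docker Engine.
# Note: this overwrites an existing daemon.json; merge by hand if you
# already have one.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime"
    }
  }
}
EOF
sudo systemctl restart docker
```

On newer NVIDIA Container Toolkit installs, `sudo nvidia-ctk runtime configure --runtime=docker` writes the same runtime entry for you.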
At this point, there are two docker contexts installed.
Any GPU-related image only succeeds if you `docker context use default`, but using the `desktop-linux` context fails.
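For reference, a quick sketch of inspecting and switching between the two contexts (only standard `docker context` subcommands; the context names match the ones above):

```sh
# List available contexts; the active one is marked with an asterisk
docker context ls

# Use the system Docker Engine (GPU runs succeed here)
docker context use default

# Use the Docker Desktop engine (GPU runs fail here, per this issue)
docker context use desktop-linux
```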
3. Information to attach (optional if deemed irrelevant)
nvidia-container-cli -k -d /dev/tty info
uname -a
5.15.0-56-generic #62~20.04.1-Ubuntu SMP Tue Nov 22 21:24:20 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
dmesg
[drm:nv_drm_master_set [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000c00] Failed to grab modeset ownership
nvidia-smi -a
docker version
dpkg -l '*nvidia*'
or rpm -qa '*nvidia*'
nvidia-container-cli -V
No logs produced.
docker run --rm --gpus all nvidia/cuda:12.0.0-devel-ubuntu22.04 nvidia-smi