From 1dc7443a63117faed2262d03c70b8c22801a0836 Mon Sep 17 00:00:00 2001
From: Yuan Tang
Date: Thu, 16 Jan 2025 23:45:36 -0500
Subject: [PATCH] [Doc] Add instructions on using Podman when SELinux is active
 (#12136)

Signed-off-by: Yuan Tang
Signed-off-by: Isotr0py <2037008807@qq.com>
---
 docs/source/deployment/docker.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/docs/source/deployment/docker.md b/docs/source/deployment/docker.md
index 2606e2765c1ae..438be47316f3b 100644
--- a/docs/source/deployment/docker.md
+++ b/docs/source/deployment/docker.md
@@ -42,6 +42,9 @@ DOCKER_BUILDKIT=1 docker build . --target vllm-openai --tag vllm/vllm-openai
 By default vLLM will build for all GPU types for widest distribution. If you are just building
 for the current GPU type the machine is running on, you can add the argument
 `--build-arg torch_cuda_arch_list=""` for vLLM to find the current GPU type and build for that.
+
+If you are using Podman instead of Docker, you might need to disable SELinux labeling by
+adding `--security-opt label=disable` when running the `podman build` command to avoid certain [existing issues](https://github.com/containers/buildah/discussions/4184).
 ```

 ## Building for Arm64/aarch64
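
Putting the doc change above together, the Podman equivalent of the documented `docker build` invocation might look like the sketch below. This is an illustrative command, not part of the patch itself; it assumes Podman is installed and that you run it from the root of the vLLM repository.

```shell
# Sketch: build the vllm-openai image with Podman on an SELinux-enabled host.
# --security-opt label=disable turns off SELinux labeling for the build,
# working around the buildah issue linked in the patch.
podman build . \
  --security-opt label=disable \
  --target vllm-openai \
  --tag vllm/vllm-openai
```

Note that `DOCKER_BUILDKIT=1` from the Docker example is not needed here, since it is a Docker-specific environment variable.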