
Commit

[Doc] Add instructions on using Podman when SELinux is active (vllm-project#12136)

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
terrytangyuan authored and Isotr0py committed Feb 2, 2025
1 parent 75ca7e0 commit 1dc7443
Showing 1 changed file with 3 additions and 0 deletions.
3 changes: 3 additions & 0 deletions docs/source/deployment/docker.md
@@ -42,6 +42,9 @@ DOCKER_BUILDKIT=1 docker build . --target vllm-openai --tag vllm/vllm-openai
By default, vLLM builds for all GPU types for the widest distribution. If you are building only for the
GPU type of the current machine, you can add the argument `--build-arg torch_cuda_arch_list=""`
so that vLLM detects the current GPU type and builds for it.
If you are using Podman instead of Docker, you might need to disable SELinux labeling by
adding `--security-opt label=disable` to the `podman build` command to avoid certain [existing issues](https://github.com/containers/buildah/discussions/4184).
```
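For illustration, a minimal sketch of the corresponding Podman invocation, assuming the same build target and image tag as the Docker command in the hunk above (this example is not part of the committed docs):

```console
# Sketch: build with Podman on an SELinux-enforcing host.
# --security-opt label=disable turns off SELinux labeling for the build
# containers, working around the linked buildah issue; target/tag mirror
# the Docker example above and are only illustrative.
podman build . --security-opt label=disable --target vllm-openai --tag vllm/vllm-openai
```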

## Building for Arm64/aarch64
