
Publish Docker image #38

Triggered via schedule on January 8, 2025 04:29
Status: Failure
Total duration: 4m 37s
Artifacts: 2

Workflow: docker.yml (on: schedule)
Matrix: Push Docker image to Docker Hub

Annotations

21 errors and 1 warning
Push Docker image to Docker Hub (server-intel, .devops/llama-server-intel.Dockerfile, linux/amd64)
buildx failed with: ERROR: failed to solve: process "/bin/sh -c if [ \"${GGML_SYCL_F16}\" = \"ON\" ]; then echo \"GGML_SYCL_F16 is set\" && export OPT_SYCL_F16=\"-DGGML_SYCL_F16=ON\"; fi && echo \"Building with dynamic libs\" && cmake -B build -DGGML_NATIVE=OFF -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_CURL=ON ${OPT_SYCL_F16} && cmake --build build --config Release --target llama-server" did not complete successfully: exit code: 1
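
This annotation reports only the exit status; the actual compiler error appears earlier in the job log. A minimal sketch for reproducing the failing image build locally, assuming Docker with Buildx, a checkout of the repository as the build context, and that GGML_SYCL_F16 is a build argument of the Dockerfile (the ${GGML_SYCL_F16} test in the failing RUN step suggests it is):

    # Rebuild the server-intel image from the repo root (hypothetical invocation).
    # --progress=plain keeps the full compiler output instead of the condensed UI.
    docker buildx build \
      --platform linux/amd64 \
      --build-arg GGML_SYCL_F16=ON \
      --progress=plain \
      -f .devops/llama-server-intel.Dockerfile \
      .
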
Push Docker image to Docker Hub (light-intel, .devops/llama-cli-intel.Dockerfile, linux/amd64)
buildx failed with: ERROR: failed to solve: process "/bin/sh -c if [ \"${GGML_SYCL_F16}\" = \"ON\" ]; then echo \"GGML_SYCL_F16 is set\" && export OPT_SYCL_F16=\"-DGGML_SYCL_F16=ON\"; fi && echo \"Building with static libs\" && cmake -B build -DGGML_NATIVE=OFF -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx ${OPT_SYCL_F16} -DBUILD_SHARED_LIBS=OFF && cmake --build build --config Release --target llama-cli" did not complete successfully: exit code: 1
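
The light-intel variant fails at the same point. Its build differs only in linking statically (-DBUILD_SHARED_LIBS=OFF) and targeting llama-cli instead of llama-server. Unquoted, the failing RUN step amounts to the following sketch, assuming the Intel oneAPI compilers icx and icpx are on the PATH and GGML_SYCL_F16 is set to ON:

    # Sketch of the failing step outside Docker (oneAPI toolchain assumed).
    export OPT_SYCL_F16="-DGGML_SYCL_F16=ON"   # set only when GGML_SYCL_F16=ON
    cmake -B build -DGGML_NATIVE=OFF -DGGML_SYCL=ON \
          -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx \
          ${OPT_SYCL_F16} -DBUILD_SHARED_LIBS=OFF
    cmake --build build --config Release --target llama-cli

That both Intel images fail identically with exit code 1 suggests a shared cause in the SYCL build path (source or toolchain) rather than a problem in either Dockerfile alone.
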
The remaining "Push Docker image to Docker Hub" matrix jobs were canceled because "server-intel__devops_llam" failed:

(light-intel, .devops/llama-cli-intel.Dockerfile, linux/amd64)
(light-cuda, .devops/llama-cli-cuda.Dockerfile, linux/amd64)
(light-musa, .devops/llama-cli-musa.Dockerfile, linux/amd64)
(server, .devops/llama-server.Dockerfile, linux/amd64,linux/arm64)
(full, .devops/full.Dockerfile, linux/amd64,linux/arm64)
(full-musa, .devops/full-musa.Dockerfile, linux/amd64)
(server-musa, .devops/llama-server-musa.Dockerfile, linux/amd64)
(light, .devops/llama-cli.Dockerfile, linux/amd64,linux/arm64)
(server-cuda, .devops/llama-server-cuda.Dockerfile, linux/amd64)
(full-cuda, .devops/full-cuda.Dockerfile, linux/amd64)
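
These cancellations are the default fail-fast behavior of a GitHub Actions matrix: once one matrix job fails, its in-progress siblings are canceled. If the CUDA, MUSA, and CPU images should still publish when a SYCL build breaks, setting fail-fast: false in the matrix strategy in docker.yml would let each job run to completion independently.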

Artifacts

Produced during runtime
Name                                       Size
apicalshark~llama.cpp~3G0O24.dockerbuild   26.9 KB
apicalshark~llama.cpp~NWYF4I.dockerbuild   24.4 KB