
Publish Docker image #34

Triggered via schedule on January 4, 2025, 04:28
Status: Failure
Total duration: 6m 0s
Artifacts: 1

docker.yml

on: schedule
Matrix: Push Docker image to Docker Hub

Annotations

21 errors and 1 warning
Push Docker image to Docker Hub (light-intel, .devops/llama-cli-intel.Dockerfile, linux/amd64)
buildx failed with: ERROR: failed to solve: process "/bin/sh -c if [ \"${GGML_SYCL_F16}\" = \"ON\" ]; then echo \"GGML_SYCL_F16 is set\" && export OPT_SYCL_F16=\"-DGGML_SYCL_F16=ON\"; fi && echo \"Building with static libs\" && cmake -B build -DGGML_NATIVE=OFF -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx ${OPT_SYCL_F16} -DBUILD_SHARED_LIBS=OFF && cmake --build build --config Release --target llama-cli" did not complete successfully: exit code: 1
Push Docker image to Docker Hub (light, .devops/llama-cli.Dockerfile, linux/amd64,linux/arm64)
The job was canceled because "light-intel__devops_llama" failed.
Push Docker image to Docker Hub (server, .devops/llama-server.Dockerfile, linux/amd64,linux/arm64)
The job was canceled because "light-intel__devops_llama" failed.
Push Docker image to Docker Hub (full, .devops/full.Dockerfile, linux/amd64,linux/arm64)
The job was canceled because "light-intel__devops_llama" failed.
Push Docker image to Docker Hub (full-musa, .devops/full-musa.Dockerfile, linux/amd64)
The job was canceled because "light-intel__devops_llama" failed.
Push Docker image to Docker Hub (light-musa, .devops/llama-cli-musa.Dockerfile, linux/amd64)
The job was canceled because "light-intel__devops_llama" failed.
Push Docker image to Docker Hub (light-cuda, .devops/llama-cli-cuda.Dockerfile, linux/amd64)
The job was canceled because "light-intel__devops_llama" failed.
Push Docker image to Docker Hub (full-cuda, .devops/full-cuda.Dockerfile, linux/amd64)
The job was canceled because "light-intel__devops_llama" failed.
Push Docker image to Docker Hub (server-musa, .devops/llama-server-musa.Dockerfile, linux/amd64)
The job was canceled because "light-intel__devops_llama" failed.
Push Docker image to Docker Hub (server-cuda, .devops/llama-server-cuda.Dockerfile, linux/amd64)
The job was canceled because "light-intel__devops_llama" failed.
Push Docker image to Docker Hub (server-intel, .devops/llama-server-intel.Dockerfile, linux/amd64)
The job was canceled because "light-intel__devops_llama" failed.
Push Docker image to Docker Hub (server-intel, .devops/llama-server-intel.Dockerfile, linux/amd64)
buildx failed with: ERROR: failed to solve: process "/bin/sh -c if [ \"${GGML_SYCL_F16}\" = \"ON\" ]; then echo \"GGML_SYCL_F16 is set\" && export OPT_SYCL_F16=\"-DGGML_SYCL_F16=ON\"; fi && echo \"Building with dynamic libs\" && cmake -B build -DGGML_NATIVE=OFF -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_CURL=ON ${OPT_SYCL_F16} && cmake --build build --config Release --target llama-server" did not complete successfully: exit code: 1
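
For readability, the failing `RUN` step embedded in the error above can be unrolled into Dockerfile form. This is a sketch reconstructed from the log, not the verbatim contents of `.devops/llama-server-intel.Dockerfile`; the `GGML_SYCL_F16` build argument and its default are assumptions based on the quoted shell snippet:

```dockerfile
# Assumed build argument; the log only shows it being tested.
ARG GGML_SYCL_F16=OFF

# Reconstructed from the buildx error message: optionally enable SYCL FP16,
# then configure and build the llama-server target with Intel oneAPI compilers.
RUN if [ "${GGML_SYCL_F16}" = "ON" ]; then \
        echo "GGML_SYCL_F16 is set" && \
        export OPT_SYCL_F16="-DGGML_SYCL_F16=ON"; \
    fi && \
    echo "Building with dynamic libs" && \
    cmake -B build \
        -DGGML_NATIVE=OFF \
        -DGGML_SYCL=ON \
        -DCMAKE_C_COMPILER=icx \
        -DCMAKE_CXX_COMPILER=icpx \
        -DLLAMA_CURL=ON \
        ${OPT_SYCL_F16} && \
    cmake --build build --config Release --target llama-server
```

The log only reports `exit code: 1` from this step; the actual compiler or CMake error is further up in the job output, not in the annotation.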

Artifacts

Produced during runtime
Name                                      Size
apicalshark~llama.cpp~WN2SSK.dockerbuild  24.9 KB