Push Docker image to Docker Hub (light-intel, .devops/llama-cli-intel.Dockerfile, linux/amd64)
buildx failed with: ERROR: failed to solve: process "/bin/sh -c if [ \"${GGML_SYCL_F16}\" = \"ON\" ]; then echo \"GGML_SYCL_F16 is set\" && export OPT_SYCL_F16=\"-DGGML_SYCL_F16=ON\"; fi && echo \"Building with static libs\" && cmake -B build -DGGML_NATIVE=OFF -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx ${OPT_SYCL_F16} -DBUILD_SHARED_LIBS=OFF && cmake --build build --config Release --target llama-cli" did not complete successfully: exit code: 1
Push Docker image to Docker Hub (server-intel, .devops/llama-server-intel.Dockerfile, linux/amd64)
buildx failed with: ERROR: failed to solve: process "/bin/sh -c if [ \"${GGML_SYCL_F16}\" = \"ON\" ]; then echo \"GGML_SYCL_F16 is set\" && export OPT_SYCL_F16=\"-DGGML_SYCL_F16=ON\"; fi && echo \"Building with dynamic libs\" && cmake -B build -DGGML_NATIVE=OFF -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_CURL=ON ${OPT_SYCL_F16} && cmake --build build --config Release --target llama-server" did not complete successfully: exit code: 1
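For local reproduction outside of buildx, the failing RUN steps in both Dockerfiles reduce to the SYCL configure-and-build commands below. This is a sketch assuming an Intel oneAPI environment where icx/icpx are on PATH (e.g. the oneAPI base image or a sourced setvars.sh; the setvars path shown is illustrative); the CMake flags themselves are taken verbatim from the error output above.

```sh
# Reproduce the failing build steps locally (sketch).
# Assumes Intel oneAPI compilers are available, e.g.:
#   source /opt/intel/oneapi/setvars.sh   # path is illustrative

# Optional FP16 support, mirroring the GGML_SYCL_F16 build arg
OPT_SYCL_F16=""
if [ "${GGML_SYCL_F16}" = "ON" ]; then
    OPT_SYCL_F16="-DGGML_SYCL_F16=ON"
fi

# light-intel (llama-cli-intel.Dockerfile): static build of llama-cli
cmake -B build -DGGML_NATIVE=OFF -DGGML_SYCL=ON \
      -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx \
      ${OPT_SYCL_F16} -DBUILD_SHARED_LIBS=OFF
cmake --build build --config Release --target llama-cli

# server-intel (llama-server-intel.Dockerfile): dynamic build of llama-server with CURL
# (use a separate checkout or build directory so the two configurations don't mix)
cmake -B build -DGGML_NATIVE=OFF -DGGML_SYCL=ON \
      -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx \
      -DLLAMA_CURL=ON ${OPT_SYCL_F16}
cmake --build build --config Release --target llama-server
```

Running these steps directly in the corresponding base image should surface the compiler or CMake error that the CI log truncates to "exit code: 1".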