[BUG]: Docker Inspect throws exit code 125 due to not finding some layer IDs in final built image. #20189
Comments
Exactly the same problem here... on version 2.240.2 it works.
Same issue here. The error was already there on 2.240.2, but the task was still allowed to complete successfully. On 2.243.0, the image is pushed to the registry, but the pipeline fails.
Same issue here as well.
@webjoaoneto - how did you go back to 2.240.2?
+1 on this. It has now halted the build pipelines for our projects. We were on version 2.240.2 up to this afternoon (around midday, 12pm UK time, on the 24th of July), then we seem to have moved to 2.243.0 and the stage now fails, although the image is still pushed to the ACR. There were no changes to our docker config, azure pipeline config, etc.; this seems to come down to the Docker@2 stage alone.
I did see something like this a while ago (a few months back), a very similar issue on the push command, but by the time I went back to it the issue had resolved itself and the stage was successful.
I think the issues around docker inspect are a bit misleading here, because looking at our runs from earlier today when 2.240.2 was used, we got the same errors from the docker inspect command but the stage was allowed to complete, whereas under 2.243.0 the same docker inspect errors appear, but then the "Unhandled" message follows and causes the stage to fail. This is from a run where the stage completed successfully:
Oddly, this is only failing for a pipeline building an Angular image. We run 2.243.0 when building a C# API and that runs fine (the underscores are my own to shorten the lines):
Is anyone able to help, or is there any way to force the stage to use the previous 2.240.2 version? Thanks.
OK, found it.
I had the same problem here, and the solution was to force the previous minor version in the YAML, simply by changing the code from:
to:
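For anyone looking for the concrete change: below is a minimal sketch of that kind of version pin, assuming your Azure DevOps instance accepts a full task version after the `@` (older instances may only accept the major version). The names `myAcrConnection` and `my-app` are placeholders, not taken from this thread.

```yaml
# Before: floats on the latest 2.x minor, which picked up the broken 2.243.0
- task: Docker@2
  inputs:
    containerRegistry: 'myAcrConnection'   # placeholder service connection name
    repository: 'my-app'                   # placeholder repository name
    command: 'buildAndPush'
    Dockerfile: '**/Dockerfile'
    tags: '$(Build.BuildId)'

# After: pinned to the previous minor version that still worked
- task: Docker@2.240.2
  inputs:
    containerRegistry: 'myAcrConnection'
    repository: 'my-app'
    command: 'buildAndPush'
    Dockerfile: '**/Dockerfile'
    tags: '$(Build.BuildId)'
```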
This worked! Thank you so much. One to remember for the future too...
Hi @lucasrcorreia @webjoaoneto @MarkKharitonov @chrislanzara |
@v-schhabra I'm encountering this same issue. I've attached a log from a build today that shows the error with 2.243.0. I've reverted to 2.240.2 for now.
I have the same problem. Pipelines that ran fine a week ago are now broken. Changing the Docker@2 task from:
to:
fixed it for now. With command 'buildAndPush' the images are still pushed, but the task fails with an error:
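As a rough sketch of that kind of split (the registry and repository names below are placeholders, not taken from this thread), separate build and push steps might look like:

```yaml
# Workaround: replace the single buildAndPush step with separate build and push steps
- task: Docker@2
  displayName: Build image
  inputs:
    containerRegistry: 'myAcrConnection'   # placeholder service connection name
    repository: 'my-app'                   # placeholder repository name
    command: 'build'
    Dockerfile: '**/Dockerfile'
    tags: '$(Build.BuildId)'

- task: Docker@2
  displayName: Push image
  inputs:
    containerRegistry: 'myAcrConnection'
    repository: 'my-app'
    command: 'push'
    tags: '$(Build.BuildId)'
```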
Hi @v-schhabra, a debug log for the Push to ACR stage is attached. It comes from a run with the "Enable system diagnostics" checkbox ticked. HTH.
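For reference, besides the checkbox on the run dialog, debug logging can also be enabled from the pipeline YAML itself by setting the system.debug variable, roughly like this:

```yaml
# Produces the same verbose task logs as the "Enable system diagnostics" checkbox
variables:
  system.debug: 'true'
```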
Hi @chrislanzara |
Same setup here (Azure DevOps agents on k8s using KEDA scaled jobs with podman), and the same issue has been appearing for the past few hours.
We have the same issue with agents on VMSS, with container jobs using docker.
Hi @chrislanzara @Dom-Heal @philipp-durrer-jarowa @Bodewes |
Hi @v-schhabra, frankly, I'm not sure we are, or even aware that we are. Our pipeline references the Docker@2 stage only. I've included the build and dev deployment stages from our YAML file so you can see what we actually reference:
We've had Docker@2 in our pipeline for quite a while now, and use it for C# APIs as well as Angular UX projects. I see that use of the Docker@2 stages is still shown on the Microsoft docs website, for example:
Our deployment target is an AKS instance running Kubernetes version 1.29.4. We would have referenced the Microsoft Docs or the classic pipeline builder UI in DevOps when we originally set the pipelines up several years ago. So unless Azure is doing something "under the covers", I'm not consciously aware that we are using podman, if we actually are. If there is another way of doing it that you want us to explore, I'm happy to help test, but can you offer any more specific instructions on what alternative you wish us to test, please? Thanks!
Hi @v-schhabra - We are not using podman; we are only using docker, and this problem exists. Our agents run on Azure VMSS and the jobs run within docker containers. The docker push step is running inside the "docker" container job. https://learn.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops Hope this helps.
Hi, we have started our investigation into this issue.
Hello, are there any updates on this issue?
Fixes for this issue have been created in #20397; we will update here once it is deployed to all the rings.
The above fix introduced a regression, so we have created a new fix for this issue.
New issue checklist
Task name
Docker@2
Task version
2.243.0
Issue Description
After updating to version 2.243.0, running docker push raises this error in the Docker push pipeline.
The task pushes the docker image to the right place, but the pipeline crashes because the command
docker inspect -f {{.RootFS.Layers}}
is not passing the image name as the next argument.
Fix: we went back to version 2.240.2.
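To illustrate: docker inspect expects the object to inspect (here, the image reference) after the format string, so the call is only well-formed when the freshly built image is passed as the next argument. A minimal sketch with a placeholder image name:

```bash
# As reported above, the task runs the format flag without an image reference:
docker inspect -f '{{.RootFS.Layers}}'
# docker then complains that "docker inspect" requires at least 1 argument and exits non-zero.

# A well-formed call passes the image that was just built, e.g. (placeholder name and tag):
docker inspect -f '{{.RootFS.Layers}}' myregistry.azurecr.io/my-app:123
```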
Environment type (Please select at least one environment where you face this issue)
Azure DevOps Server type
Azure DevOps Server (Please specify exact version in the textbox below)
Azure DevOps Server Version (if applicable)
No response
Operating system
ubuntu
Relevant log output
Full task logs with system.debug enabled
Repro steps
No response