mvn compile jib:build: no further progress after log "Container entrypoint set to ..." #1946
Can you try with 1.5.1?
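(For anyone who wants to test a specific plugin version without editing the pom, one option is invoking the fully qualified goal from the command line; this is standard Maven syntax, and the coordinates below are Jib's published group and artifact IDs:)

```sh
# Run a specific version of the Jib Maven plugin directly,
# without changing the <version> declared in pom.xml.
mvn compile com.google.cloud.tools:jib-maven-plugin:1.5.1:build
```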
@loosebazooka
Can you hook up a network monitor to see if there is any traffic going across? If you're pushing across a slow connection, one possibility is that our progress ticks don't have sufficient granularity to capture the slow pushes. On macOS, you should be able to run […]. Does a […]? If you follow the process in the FAQ to obtain a network trace (https://github.com/GoogleContainerTools/jib/blob/master/docs/faq.md#how-can-i-examine-network-traffic), you'll see […].
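A rough sketch of one way to capture such a trace, assuming mitmproxy is installed; the proxy port and trust-store path are placeholders, and the exact steps in the linked FAQ may differ:

```sh
# Start mitmproxy in another terminal and leave it running.
mitmproxy -p 8080

# Point the Maven JVM at the proxy and at a trust store containing
# the mitmproxy CA certificate, then run the Jib build.
mvn compile jib:build \
  -Dhttps.proxyHost=localhost -Dhttps.proxyPort=8080 \
  -Djavax.net.ssl.trustStore=/path/to/truststore-with-mitmproxy-ca.jks
```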
The same happened to me. I went back to version 1.4.0 and it works with that version.
@gpando we'd like to know exactly what is happening, and why, with each Jib version in your case. Could you also capture detailed network traces for both 1.4.0 and 1.5.1 and share them (after stripping out any sensitive info, if any)? Also add […].
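One way to collect more detail for both versions, as a sketch: `-X` and `-e` are standard Maven debug/error flags, and the log file names are arbitrary.

```sh
# Capture full debug output for each plugin version into separate files.
mvn compile com.google.cloud.tools:jib-maven-plugin:1.4.0:build -X -e > jib-1.4.0.log 2>&1
mvn compile com.google.cloud.tools:jib-maven-plugin:1.5.1:build -X -e > jib-1.5.1.log 2>&1
```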
I'm hitting this one as well, but under Gradle rather than Maven. Essentially, last week I upgraded Jib from 1.2 to 1.5.1. Initially, my builds hung. I then downgraded to 1.2.0, removed my build/ directory, and my initial build was very fast. I then re-upgraded to 1.5.1, blaming intermittent network issues and my slow connection. But today the slow builds are back, coinciding with a new upstream base image. I haven't followed the earlier debugging suggestions yet because I came to this thread a bit late, having already done some debugging of my own. I'm also on a time/budget crunch and don't have the luxury of comparing logs across versions. Here's what I've found, though: I'm pulling a base image that's around 3.1G in size and has some largeish layers (one is ~700MB). I'm also on a relatively slow connection.

It almost seems like there's significant buffering happening in the HTTP layer. Maybe it isn't as noticeable on fast connections, but slow things down and you'll almost certainly see it. But it isn't entirely the connection's fault, since again Docker is very fast. Was something changed in how images are downloaded/processed since 1.2, or possibly 1.4? I can try switching versions, capturing logs, and analyzing cache growth if I have time. But I at least thought I'd note that downloads are significantly slower than Docker's on the same connection, and that this does seem to have changed since 1.2. Is there any way you might instrument HTTP client throughput on 1.2/1.4 vs. 1.5.1? Not just logging, but determining how many bytes pass through the download process per second on 1.2/1.4 and 1.5? Meanwhile, I'm going to let this download finish and hope our upstream image doesn't bump again.
Heh, and while I wrote that last comment, I noticed the download finished, with a final cache size of 1.5G and about 1.5 hours' duration. I guess the cache size is compressed whereas the 3G image size is uncompressed. Anyhow, Docker definitely didn't take 1.5 hours to download that image. So for anyone thinking their download is hanging, make sure your cache is growing and that the download isn't just happening very slowly. :) Sorry for the spam--I'd expected that download to take another 2-3 hours and was pleasantly surprised that it had finished while I was ranting.
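For anyone wanting to check whether a seemingly hung build is actually just downloading slowly, a sketch; the path below is what I believe is the default Jib base-image cache location on Linux, so adjust it for your OS and any custom cache setting:

```sh
# Watch the Jib base-image cache grow while the build runs
# (path is an assumption; verify your actual cache directory).
watch -n 10 du -sh ~/.cache/google-cloud-tools-java/jib
```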
@ndarilek thanks for the update. I'd like to rule out one thing. Could you try […]?
I'll look into this more tomorrow, but I'm pretty sure it's a non-issue. Our dev environment is Minikube-based, and our setup script points Docker at the Minikube-hosted daemon. Whenever I send reports like the previous one, I try doing so from a known base state. In this case, that base state was `minikube delete; minikube start` and running our dev-setup script, which provisions an empty VM. And I've confirmed via `docker images -a` that my non-Minikube Docker cache is empty.

I'll see if I can spare some time next week to collect more detailed logs. Sorry I can't be of more help.
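For reference, a sketch of that clean-slate check; `minikube docker-env` is the standard way to point a shell's Docker client at the Minikube daemon, and the rest mirrors the steps described above:

```sh
# Recreate the Minikube VM from scratch.
minikube delete && minikube start

# Point this shell's Docker client at the Minikube-hosted daemon.
eval $(minikube docker-env)

# Confirm which images the daemon already has cached.
docker images -a
```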
Sorry, I looked at your command stream. I'll do a `time docker pull ...` to report the actual time spent pulling the image in the VM, so we'll have another data point next to my ~1.5-hour pull time with Jib.
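(Something like the following, where the image reference is a placeholder rather than the actual base image, which isn't named in this thread:)

```sh
# Time a plain Docker pull of the same base image for comparison.
time docker pull gcr.io/example-project/example-base-image:tag
```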
Oh, sorry, forgot to ask one thing. (I feel like I'm spamming.) Are you using […]?
Nope, it's gcr.io.
I believe I can reproduce this at home, where my network is very slow. Closing as a duplicate of #1970, but if anyone finds their case is different, feel free to re-open or open a new issue.
@semistone222 @gpando 1.6.0 has been released with the fix.
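To pick up the fix, bumping the plugin version in pom.xml should be enough; a minimal sketch, where only the version line matters and any existing `<configuration>` stays as it is:

```xml
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>1.6.0</version>
  <!-- existing <configuration> unchanged -->
</plugin>
```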
Hi, may I know how we can skip JUnit tests when running a Skaffold configuration?
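(Not related to this issue, but for what it's worth: when invoking the build directly with Maven, the standard Surefire flag below should skip test execution; whether and how Skaffold forwards extra flags to its Jib builder is something to confirm in the Skaffold docs rather than here.)

```sh
# Skip test execution when building the image directly with Maven.
mvn package jib:build -DskipTests
```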
Environment:
Description of the issue:
I wanted to test Jib on a Spring Boot demo application, so I added jib-maven-plugin and set the Docker image URL. After running `mvn compile jib:build`, there is no further progress after the log message `Container entrypoint set to ...`. I waited about 5 minutes, but it still hasn't finished. Please give me some advice.

Thanks for providing awesome tools. :D
(It doesn't seem to be related to auth.)
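(For context, a typical minimal configuration of the kind described above; the target image name below is a placeholder, not the reporter's actual setting, and the version element is omitted since the Jib release in use isn't stated:)

```xml
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <!-- version omitted; use whichever Jib release is in your build -->
  <configuration>
    <to>
      <!-- placeholder target image -->
      <image>gcr.io/my-project/spring-boot-demo</image>
    </to>
  </configuration>
</plugin>
```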
Expected behavior:
`mvn compile jib:build` should complete.

Steps to reproduce:
Run `mvn compile jib:build` and observe that nothing happens after the `Container entrypoint set to ...` log message.
jib-maven-plugin Configuration:

Log output:
Additional Information:
