Can't connect to Ryuk container when using remote Docker host #572
Hi @alexcase52, could you please try 1.5.1 + multiple threads? I want to make sure that this is a regression in 1.6.0 :)
Hi @alexcase52, I also get the error only on 1.6.0; 1.5.1 is fine. Is exposing the daemon on tcp://xy:2375 without TLS mandatory?
This is currently still mandatory (until we are able to support npipes, or Docker for Windows supports Unix sockets).
https://blogs.msdn.microsoft.com/commandline/2017/12/19/af_unix-comes-to-windows/ Well... kind of tiny scope for now...
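For context, a hedged sketch of the setup being discussed above: exposing the Docker daemon over plain TCP and pointing the client at it. The addresses and ports are illustrative only, and exposing the daemon without TLS should only be done on a trusted network:

```bash
# On the Docker host: listen on TCP 2375 (no TLS) in addition to the local Unix socket.
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

# On the client (e.g. the machine running the tests): point the Docker client,
# and therefore Testcontainers, at the remote engine.
export DOCKER_HOST=tcp://docker-host.example.com:2375
docker info
```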
I'm experiencing a similar issue when running on Jenkins, with a
Downgrading to 1.5.1 fixes the issue.
@alvarosanchez
Yes, it is.
No, because then version 1.5.1 wouldn't work either. Note that the client machine (a Jenkins instance) is able to run Docker commands on that
@alvarosanchez I've changed the title of the issue to better reflect the current state of knowledge we have; I hope that's okay with everyone involved. @alexcase52 Were you able to find out if this issue is related to multiple Gradle threads?
@alvarosanchez
@kiview 1.7.2 :)
Hi, just encountered this on 1.7.3 while running a test on our Jenkins server:
Hi, I have a similar error on 1.7.3 inside a Jenkins image.
Actually, I resolved my problem. The root cause is that the moby-ryuk image isn't accessible via the external Docker IP (172.17.0.1 in my case) from inside the Jenkins Docker container, due to a firewall rule. I opened the necessary ports and it works fine now.
@Eschenko could you please share why you have a firewall affecting Docker? Is it something custom on your instances, or is it configured by default? :)
That instance had been configured by another person before I started using it, so I suppose it had a custom configuration. Anyway, the problem happened because we run inside a Jenkins container. Testcontainers used the external Docker API address to connect to moby-ryuk (172.17.0.1 in my case). Those ports were closed, so I couldn't connect, but I was able to connect via the internal container IP (172.22.0.6 in my case) and it worked fine (just via the telnet command). I couldn't find a proper way to configure Testcontainers to access moby-ryuk via the internal IP. Is that possible? Or we could add some notes to the documentation about accessing moby-ryuk from other containers, like Jenkins, etc. It was not obvious that the root cause was closed ports.
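A sketch of the manual connectivity check described in the comment above. The IPs come from that comment, the mapped host port (32768 here) is hypothetical, and 8080 is Ryuk's default port inside its container:

```bash
# Via the external Docker bridge IP and the randomly mapped host port
# (the path Testcontainers uses, and the one blocked by the firewall here).
# Find the real mapped port with: docker port <ryuk-container-id>
telnet 172.17.0.1 32768

# Via the Ryuk container's internal IP and its internal port
# (this worked because container-to-container traffic wasn't filtered):
telnet 172.22.0.6 8080
```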
I'm seeing exactly the same thing trying to run in Drone CI with the host Docker API exposed via IP instead of mounting /var/run/docker.sock; I suspect it has the same cause as @Eschenko's. If I run it on a laptop using
We are seeing a similar issue on our Jenkins agents running CentOS with SELinux.
When starting the Ryuk container manually, we see a socket permission problem.
The access is blocked by SELinux for security reasons, see https://danwalsh.livejournal.com/74095.html
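For anyone who wants to reproduce the permission problem, a hedged sketch of starting Ryuk by hand; the image name/tag and the SELinux workaround are assumptions, not something stated in the reports above:

```bash
# Ryuk needs the Docker socket mounted; on an SELinux-enforcing host the container
# is typically denied access to /var/run/docker.sock, as the linked article explains.
docker run --rm -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  quay.io/testcontainers/ryuk:0.2.3   # image/tag may differ per Testcontainers version

# To confirm SELinux is the culprit (a diagnostic, not a recommendation):
docker run --rm -p 8080:8080 --security-opt label=disable \
  -v /var/run/docker.sock:/var/run/docker.sock \
  quay.io/testcontainers/ryuk:0.2.3
```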
I am having a similar issue, but I'm getting it locally on Fedora. When using 1.8.1, I get this issue with the Ryuk container on localhost:32788. My other Docker containers that I run locally are all on 172.17.0.2; I'm not sure if the Ryuk one was supposed to run there too. Also, the first time I tried running with 1.8.1, a warning popped up from SELinux that I didn't get to click on. I can't seem to get that message to pop up again, however. I tried using 1.5.1 as well, but I get another issue: I have a local container running that exact image, and when re-pulling I don't get any error messages. My issue seems to be similar, though I'm not running remotely.
Just tried starting a Ryuk container manually as suggested by @jmaicher, and I got the same error in the logs:
I also have an Ubuntu 18 VM with Docker installed, and starting the Ryuk image manually there actually works. Could this be a CentOS/Fedora issue?
Is there any further update on this? My scenario is Jenkins: we run our tests inside Docker containers and mount the Docker socket in to talk to other containers. The issue seems sporadic for us; some builds fail, others pass.
Edit: We have downgraded from 1.8 to 1.7 and the issue remains in both.
Is the expectation that #843 will also resolve this issue?
@bearrito Would be great if you could test with 1.9.1.
I have tested 1.9.1 in Bitbucket Pipelines: 119 tests (1 thread) and 1 fails, seemingly at random (a different test fails each time, in both the actual test and the execution order).
@kiview I tested with a single job in a fairly up-to-date Jenkins install. We did not see any regressions when running a single build; I also ran multiple builds in parallel (not sure this matters) and didn't see any regressions in that case either. We are tentatively upgrading from 1.7.1 to 1.9.1. If I see regressions, should I continue to report them on this ticket?
@bearrito Thanks, would be great!
1.9.1 doesn't appear to have fixed my variant of the issue. I suspect it's due to routing/iptables configuration on the Drone CI instance we're using, which prevents the build container from accessing the docker0 network and only allows Docker port-forwarding.
@GJKrupa I actually finally got around this yesterday with 1.10.1. We also had to stop using our remote Docker host (not 100% sure that's a requirement, but we had no reason for using it except "that's how it was set up N years ago") and use the locally installed Docker daemon. You might try that version and see if it fixes your issue.
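A minimal sketch of what switching to the locally installed daemon usually amounts to on the agent; these are standard Docker client variables, not project-specific settings:

```bash
# Stop pointing the Docker client (and therefore Testcontainers) at the remote engine
# and fall back to the local Unix socket.
unset DOCKER_HOST DOCKER_TLS_VERIFY DOCKER_CERT_PATH

# Sanity check: this should now report the local daemon reached via /var/run/docker.sock.
docker info
```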
I tried switching to 1.10.1 yesterday after ambassador stopped working on my 1.5.1-based tests. 1.10.1 didn't solve the issue in my case and I'm still seeing the same errors:
Unfortunately, disabling the remote host isn't an option for me, as our CI tool is a shared resource managed by a central team and they only support $DOCKER_HOST. (As an aside, the ambassador issue actually seems to be a networking issue in docker-compose 1.23.1 and was fixed when I downgraded back to 1.22.0.)
I ran into this issue on version 1.8.3, although it worked in the past and there is no firewall constraint.
Hope it helps.
We are facing the same issue on a Jenkins Docker agent.
Error log:
Same here on TeamCity:
We ran into the same problem as well.
We're using Testcontainers inside a custom JUnit rule which
This might not be the most polite way to use Testcontainers, but it replaced some legacy code and works much better than before. Using Testcontainers version 1.10.5.
Have you tried creating the docker group? It solves permission issues:
# create new group
sudo groupadd docker
# add current user to the group
sudo usermod -aG docker $USER
# log in to the new group
newgrp docker
Please see #1274. It is very likely that the local UNIX socket is not opened on your Docker host.
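A quick way to check that from the Docker host itself (a sketch assuming the default socket path and that curl is available):

```bash
# The socket file should exist:
ls -l /var/run/docker.sock

# The daemon should answer a ping over the Unix socket (prints "OK" on success):
curl --unix-socket /var/run/docker.sock http://localhost/_ping
```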
Ryuk seems to be listening on a random port (33088, 33089, and so on). This is unacceptable when the Docker host is behind a firewall; I can't open every port just for Ryuk.
@skyline75489
@bsideup
@Wosin No, sorry. As I said, it is not just Ryuk, but any container we start.
Maybe I'm wrong, but this could be a "known Docker bug". Long story short: it is a Docker/firewall/iptables issue. You have to permit the exact IP or the Docker network in the iptables filter/INPUT chain, as sketched below.
Or, as an iptables example, where 172.17.0.0/24 is the default Docker network out of the box: this will INSERT the rule into the filter/INPUT chain as the FIRST one. To test it, just run two containers, e.g. centos (install telnet there) and nginx with an exposed port. Just FYI, my docker-compose.yml, config.toml and gitlab-ci.yml are in the attached zip file.
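A minimal sketch of the rule being described, assuming the default 172.17.0.0/24 bridge network; adapt the subnet (or use an exact IP) to match your setup:

```bash
# Insert an ACCEPT rule as the FIRST entry of the filter/INPUT chain
# so traffic coming from the default Docker bridge network is allowed in.
sudo iptables -I INPUT 1 -s 172.17.0.0/24 -j ACCEPT

# Verify the rule landed at position 1.
sudo iptables -L INPUT -n --line-numbers | head
```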
I've run into a similar problem, but on localhost. After starting a Docker container, the program hung for two minutes trying to connect to it (only the info log was saved, but there were requests every second in the debug log). After that, while obtaining a connection, it raised an exception. A reboot solved the problem. I think the reason for this behaviour was nearly full RAM with swap enabled, which was about a quarter full.
If someone comes here with a broken Bitbucket pipeline and the error
Closing as outdated, since the issue seemed to be caused by network misconfiguration (see #572 (comment) for advice on how to fix it).
What do you think about this variable: TESTCONTAINERS_HOST_OVERRIDE=host.docker.internal? The source is the Testcontainers configuration docs, and this can also be relevant for some people: https://github.com/testcontainers/testcontainers-java/blob/main/docs/supported_docker_environment/continuous_integration/dind_patterns.md
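A minimal sketch of how that variable is typically used when the build itself runs inside a container; the Gradle invocation is only illustrative:

```bash
# Tell Testcontainers which hostname to use when reaching containers' exposed ports,
# instead of the address it auto-detects from the Docker environment.
export TESTCONTAINERS_HOST_OVERRIDE=host.docker.internal

# Then run the tests as usual; Testcontainers reads the variable from the environment.
./gradlew test
```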
Version 1.6.0
Not sure if this came from multiple threads, but I've never seen it with a single thread.
Many tests use several types of containers.