kubernetes ignores /root/.docker/config.json #45487
Some logs:
What is that "Non-root verification doesn't support non-numeric user (jenkins)" stuff ?? |
Okay, this is madness... The documentation suggests that this file should be in $HOME/.docker/config.json. Since docker runs as root, I assume this means /root/.docker/config.json. Then I stumbled on this issue: #12835. It suggests /var/lib/kubelet as the parent directory, so I assumed this means /var/lib/kubelet/.docker/config.json. However, it seems the kubelet set up by kubeadm simply has root (/) as its working dir, so I tried putting it in /.docker/config.json as well. Restarted kubelet multiple times, still no luck. |
Okay, I did some more troubleshooting, and I think part of the problem is my particular setup. Still, I would like to know whether this behaviour is expected. Some background info: I'm using Nexus as my docker registry, which is also configured as a proxy for the official docker repository. On my K8s node (running CentOS 7), I have configured Docker like this:
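(The configuration itself was not captured in this thread. On CentOS 7 the RHEL docker package is usually configured via /etc/sysconfig/docker, so a setup like the one described typically looks something like the sketch below; the nexus hostname is the one used later in this report, everything else is illustrative.)
# /etc/sysconfig/docker (sketch only, not the reporter's actual file)
ADD_REGISTRY='--add-registry nexus-docker.mydomain.be'
BLOCK_REGISTRY='--block-registry docker.io'
# registry credentials end up in /root/.docker/config.json after:
docker login nexus-docker.mydomain.be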
This blocks the official repo, and forces docker to use the nexus proxy. Credentials are configured, and this works correctly for "docker pull" commands. Example: when I do "docker pull ubuntu:latest", I see it connects to my nexus proxy and pulls the image, using the credentials supplied in the config.json file. However, when I specify "ubuntu:latest" in my yml for kubelet, it connects to nexus-docker.mydomain.be but the pull fails and it complains about authentication being required. When I specify "nexus-docker.mydomain.be/ubuntu:latest" kubelet pulls the image just fine and uses the configured docker credentials. So it seems kubelet connects to the correct registry, but just ignores the credentials when no server is specified in the image-name. |
I am having the same issue with Kubernetes 1.6.2. Previously using 1.5.1 and everything worked with the credentials in /root/.docker/config.json. I was able to get it to work with Kubernetes 1.6.2 only by setting the deprecated flag when starting Kubelet: --enable-cri=false. So, somehow, the new CRI method of reading images is different - but I can find no documentation saying how to load static pods from a private registry with CRI enabled. |
To be clear, in my case, the private registry server is specified in the image name, but the problem is the same as described by jeroenjacobs1205 - which is that the credentials in /root/.docker/config.json do not seem to be used properly when CRI is enabled. |
@jeroenjacobs1205 kubelet looks up the credentials before calling docker to pull images. Most likely (i.e., I did not verify) things went down this way: for an unqualified image, kubelet looks up credentials for the default registry (docker.io), finds nothing in your config.json, and asks docker to pull with no credentials; docker itself then redirects the pull to your nexus proxy, which rejects the unauthenticated request.
In other words, if you don't specify the registry in the image string but configure docker to pull from somewhere else, kubelet won't be able to pick the right credential to use.
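(To make that lookup concrete, here is a sketch; the registry hostname is the one from this report, the rest is illustrative.)
# /root/.docker/config.json contains an entry for nexus-docker.mydomain.be only
#
# image "ubuntu:latest"
#   -> kubelet parses the registry as docker.io, finds no docker.io entry in the keyring,
#      and asks docker to pull with no credentials; docker goes to the nexus proxy,
#      which rejects the unauthenticated pull
#
# image "nexus-docker.mydomain.be/ubuntu:latest"
#   -> kubelet finds the nexus-docker.mydomain.be entry and passes those credentials,
#      so the pull succeeds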
Both the CRI and non-CRI implementations use the exact same code (the kubelet credentialprovider package) to read the docker credentials.
Yes it is very reproducible. Here are more details: I removed the "--enable-cri=false" on the kubelet command line, and it once again failed - with these log messages:
The kubelet was itself running via a systemd unit:
I edited this to change the ExecStart statement to (no other changes):
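(The edited statement was not captured above; it presumably looked something like the following, where every flag other than --enable-cri=false is a placeholder for whatever the unit already contained.)
# hypothetical -- only the trailing flag is the relevant change
ExecStart=/usr/bin/kubelet \
    --kubeconfig=/etc/kubernetes/kubelet.conf \
    --pod-manifest-path=/etc/kubernetes/manifests \
    --enable-cri=false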
and did "systemctl daemon-reload; systemctl restart kubelet". After that, it then loaded the static pods in /etc/kubernetes/manifests, and brought everything up. I've switched back and forth on the --enable-cri=false option several times now. It always fails without it (i.e., CRI enabled by default), and always succeeds when it is included, to disable CRI. The specific static pod it was trying to load first was defined like this in /etc/kubernetes/manifests/kube-monitor-apiserver.manifest:
Note the HOME=/root environment variable set in the systemd unit shown above. The /root/.docker/config.json file is:
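(The file contents were not captured here; a config.json of the usual shape, with placeholder values rather than the reporter's actual credentials, looks like this.)
{
  "auths": {
    "my-registry.example.com": {
      "auth": "<base64 of username:password>",
      "email": "user@example.com"
    }
  }
}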
These are obviously the correct credentials, because it works as soon as I disable CRI. |
I think my Docker is configured correctly. When I pull the image from the commandline (eg: "docker pull ubuntu:latest") this works fine, and the credentials are being used. So it only happens when Kubelet pulls the image. |
@jeroenjacobs1205, I understand that. My #45487 (comment) explained why that's the case. |
@djmckinley, your case is different. The image you tried to pull is a pod sandbox image (or previously known as pod infra container image). With CRI, this image is considered an implementation detail of the runtime and we do not reuse the credential package kubelet uses. I'll file an issue to support reading docker config in the CRI implementation. Pulling images for user containers should work though. |
I know how to fix this as it's specific to the Docker package we build for RHEL/Fedora/CentOS. It just requires a Docker API call to gather additional registries and resolve the correct credentials for the unqualified image. Please feel free to assign this to me. |
@runcom this is a general regression that affects all platforms, why do you think it's specific to the docker package you built for RHEL/Fedora/CentOS? What API call would you use to gather the credentials? AFAIK, the docker configs are read by the docker CLI, and not maintained by the API. |
@yujuhong how has this worked before? Let me elaborate. On upstream docker, if you pull an unqualified image you always pull from Docker Hub, like:
# this always hits Docker Hub!
$ docker pull ubuntu
On RHEL/Fedora/CentOS you have the ability to add additional registries that docker tries when pulling images. Let me show you an example:
# docker has been started with
# --add-registry=mydomain.com:5000 --add-registry=another.net:8080
docker pull ubuntu
# the action above will:
# 1. try docker pull mydomain.com:5000/ubuntu
# 2. if the above fails, try another.net:8080/ubuntu
# 3. if that also fails, fall back to docker.io/ubuntu (same behavior as upstream)
In the issue in question, if your pod defines an unqualified image (e.g. ubuntu), the kubelet keyring only looks up credentials for docker.io and never for the additional registries docker will actually try, so the pull runs without credentials. What instead the keyring should do is to gather the additional registries configured in docker and look up credentials for each of them as well. Therefore, @yujuhong I can't see how this has ever worked 😕 |
The regression reported by @djmckinley is handled in #45738. Removing the milestone and leaving this for the original issue (reported by @jeroenjacobs1205). I believe we've never supported that, so marking this a feature request instead. |
I dug into this a bit and the issue for me was that the $HOME environment variable is not set when kubelet is started from systemd, so the search paths for the docker credentials were not what I expected. Setting User=root (or otherwise ensuring HOME=/root) in the kubelet unit made the credentials in /root/.docker/config.json get picked up. |
@alindeman 's suggestion is right, or create a link so the config ends up at one of the paths kubelet actually searches (for example /var/lib/kubelet/config.json pointing at /root/.docker/config.json). |
Actually, reading the credential provider code, it seems that kubelet searches for the following files, in order: {--root-dir}/config.json (by default /var/lib/kubelet/config.json), {kubelet current working directory}/config.json, ${HOME}/.docker/config.json, /.docker/config.json, and then the same four locations for the legacy .dockercfg format. |
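(A minimal sketch of the workaround that list suggests, assuming the default --root-dir of /var/lib/kubelet:)
# make the docker credentials visible at a path kubelet always checks
cp /root/.docker/config.json /var/lib/kubelet/config.json
systemctl restart kubelet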
Automatic merge from submit-queue: Fixes reading /root/.docker/config.json on debian. Debian and probably others apparently don't automatically default to using the root account if it's not specified. ref: kubernetes/kubernetes#45487 (comment)
From the logs, it searches the paths below for the docker config.json / .dockercfg. I'm running K8s v1.7
|
I'm adding my feedback here as it might help. When I created regsecret as per the instructions [1], the deployment did not work. I finally managed to fix it by removing the extra registry credentials from the docker config, so only one entry was left. I believe kubelet does not know how to work with multiple credentials in docker. [1] https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ |
Same issue here on Kubernetes v1.9.0 built with kubeadm. Spent a lot of time trying to figure out why it's not working. I tried all the examples from the official documentation with no luck. Thanks to @alindeman's comment above, I was able to fix it by adding HOME=/root via a kubelet systemd snippet, reloading the systemd configuration and restarting kubelet on all nodes. Now it finally works as intended, with the credentials read from /root/.docker/config.json. This is a serious issue which will bite lots of people; it's a shame this takes so long to fix. |
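(A sketch of the kind of systemd snippet being described; the drop-in file name is arbitrary and the unit name assumes a kubeadm-installed kubelet.)
# /etc/systemd/system/kubelet.service.d/20-docker-creds.conf
[Service]
Environment="HOME=/root"
# then, on each node:
systemctl daemon-reload
systemctl restart kubelet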
Ah, so it's actually a combination of 2 bugs that bit me. Thanks for the info @dims, looking forward to the new release. 👍 |
fixed in 1.9.1, now released - https://github.com/kubernetes/kubernetes/releases/tag/v1.9.1 |
Came across this with standalone kubelet (hyperkube) v1.8.7 and the fix was to mount /root onto the container using the docker run option |
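(Presumably something along these lines; the image name and the other flags are illustrative, not the reporter's exact command — the point is the -v /root:/root mount.)
docker run -d \
  --net=host --pid=host --privileged \
  -v /root:/root:ro \
  -v /var/lib/kubelet:/var/lib/kubelet:shared \
  gcr.io/google_containers/hyperkube-amd64:v1.8.7 \
  /hyperkube kubelet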
I'm on v1.9.4, using CentOS 7.4. Still, it didn't work for me until I applied @alindeman 's suggestion to add HOME=/root to the kubelet systemd unit. |
I'm on v1.9.2, Docker 1.13.1. I face this issue too. Fixed it by adding User=root or Environment=HOME=/root in kubelet.service. |
Does Kubernetes support the Docker credential store? I can't get kubernetes to respect a credential store defined in config.json and I don't know if my issue is related to this one. |
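(For reference, this is what a config.json that delegates to a credential helper looks like, as opposed to inline auths entries; the helper name here is just an example.)
{
  "credsStore": "secretservice"
}
# docker also supports a per-registry "credHelpers" map in the same file.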
For all stumbling upon this: I have set up k8s via kubeadm 1.13.3 on an Ubuntu 18.04 host (upgraded from 1.11.x). Had problems pulling from my nexus on my slave nodes. What fixed it for me was as mentioned above (see the consolidated sketch after these steps): (1) as root run:
(2) Check that docker can now pull images from nexus
(3)
(4) Reload and restart kubelet
|
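(A consolidated sketch of those four steps; the nexus hostname follows the earlier comments, and the drop-in file name and contents are assumptions based on the fixes discussed above.)
# (1) as root, log in so /root/.docker/config.json gets the credentials
docker login nexus-docker.mydomain.be

# (2) check that docker can now pull images from nexus
docker pull nexus-docker.mydomain.be/ubuntu:latest

# (3) make sure kubelet runs with HOME=/root so it finds that config.json
cat > /etc/systemd/system/kubelet.service.d/20-docker-creds.conf <<'EOF'
[Service]
Environment="HOME=/root"
EOF

# (4) reload and restart kubelet
systemctl daemon-reload
systemctl restart kubelet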
great! |
Any update on this to make this more admin friendly, especially when running in fully automated cloud infrastructures? edit:
Then I pass the node_userdata base64 encoded to the aws_launch_configuration like so:
And the eks nodes come up cleanly and kubernetes is able to pull from dockerhub using those credentials. |
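(The snippets themselves were not captured above; this is a minimal sketch of that kind of node userdata, assuming an EKS-optimized AMI — the cluster name, registry and auth value are placeholders.)
#!/bin/bash
# write docker registry credentials where kubelet's credential provider will find them
mkdir -p /var/lib/kubelet
cat > /var/lib/kubelet/config.json <<'EOF'
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "<base64 of username:password>"
    }
  }
}
EOF
# standard EKS bootstrap call (cluster name is a placeholder)
/etc/eks/bootstrap.sh my-cluster-name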
Without setting this (at least on debian) $HOME is not set when running kubelet. Without $HOME being set the search paths for docker credentials are not as they would be expected. See: kubernetes/kubernetes#45487 (comment)
* Explicitly set root user in kubelet systemd unit: without setting this (at least on debian) $HOME is not set when running kubelet, and without $HOME the search paths for docker credentials are not as they would be expected. See: kubernetes/kubernetes#45487 (comment)
* Adjust testdata: adjust userdata kubelet test data, more test data adjustments, adjust flatcar ignition testdata
* Add docker credentials support for ubuntu: Go template trimming; only write docker auth config if CR is docker and credentials are given
* Add docker credential support to remaining distributions
* Fix typo
* Add support for SecretTypeDockerConfigJson
* Add documentation for additional supported secret type
* AuthConfig to return empty string if registryCredentials is not set
I have docker set up with a private registry; the credentials are stored in /root/.docker/config.json.
Pulling images manually with "docker pull" works just fine, no issues there.
However, when images are pulled from Kubernetes it complains it's unable to authenticate.
Judging from the documentation here (https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/), this should work as my setup is the first one that is mentioned in the doc.
If any more steps are necessary, please make that clear in the docs. Talking to private registries is a must in any real enterprise deployment.
Running Kubernetes 1.6.2 on CentOS7, configured via kubeadm, btw...