kube-dashboard generates 502 error #2340
Comments
It seems my pod keeps restarting:
How can I troubleshoot this if no errors are logged in the output of the kube-dashboard pod?
Run
Not dashboard-related, I just discovered. Something is not behaving correctly with pod networking in my entire cluster.
Seems I was wrong, it was not a pod networking issue. The dashboard pod itself was being killed by Kubernetes. I removed the health checks from the manifest, and now the dashboard pod keeps running. It just took a bit longer for the dashboard to become responsive than the health check anticipated. However, it seems this issue occurs when accessing the dashboard with the "kubectl proxy" method. Within the cluster itself, the dashboard is working fine. With the "kubectl proxy" method, I keep getting 502 errors.
Try to set the dashboard service type to
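The suggestion above is truncated in this capture; from the follow-up reply ("It works when I expose it as NodePort") the suggested type was presumably NodePort. A sketch of what such a Service could look like — the name, selector label, and ports here are assumptions based on the standard v1.x dashboard manifest, not the poster's actual file:

```yaml
# Hypothetical NodePort Service for the dashboard. With type NodePort,
# traffic goes from a worker node's port straight to the pod, bypassing
# the apiserver's service proxy entirely.
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090   # the dashboard's HTTP port in the v1.x images
```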
It works when I expose it as NodePort and surf to a worker node. However, when using "kubectl proxy", I'm able to access the Kubernetes API:
Or like this:
Now this is the full trace of trying to access the UI endpoints:
and now curl gives:
This is weird: 10.40.0.3 is the pod address of kube-dashboard. Why is it trying to connect to the pod directly when using the "kubectl proxy" method?
I don't know. Maybe the core guys responsible for it will know more. I don't think, however, that this is related to the dashboard, since it works fine when accessed directly (NodePort).
I beg to differ; it's only the dashboard that causes me issues. I'm able to access the Kubernetes API via "kubectl proxy" just fine.
There is a big difference between accessing the Kubernetes API and accessing applications over the Kubernetes service proxy.
Oh, maybe important: I'm not running kube-apiserver, kube-scheduler, and kube-controller-manager as pods. This is a cluster built from the "Kubernetes the Hard Way" tutorial. I don't know if this makes a difference for kube-dashboard and the proxy method?
Care to enlighten me on those differences?
It does not make a difference for the dashboard, but there might be an issue with the cluster setup. In the case of NodePort, traffic is redirected through the service directly to the pod. With kubectl proxy you are running a proxy tunnel, and all traffic goes through the apiserver.
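The difference between the two paths can be made concrete by looking at the URL: with kubectl proxy, a service is reached as a path under the apiserver, which then forwards the request itself. A small sketch of how that service-proxy URL is built (the namespace and service name here match the dashboard defaults of the time):

```shell
# Build the kubectl-proxy URL for a Service. With `kubectl proxy` running,
# requests to this path are forwarded by the apiserver to the Service's
# pods -- so the apiserver itself needs a route to the pod/service network,
# unlike the NodePort case where traffic never touches the apiserver.
ns=kube-system
svc=kubernetes-dashboard
echo "http://localhost:8001/api/v1/namespaces/${ns}/services/${svc}/proxy/"
```

This produces exactly the URL that returns a 502 in the original report, which is consistent with the apiserver (not the dashboard) being unable to reach the backend.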
I also notice that the dashboard is not listed here:
You can check how
It won't be. Applications managed by the addon manager are listed there, and we have specifically dropped the annotation that enables it.
Kubeadm is not an option; I used it, and I know how it works. My issue is why it doesn't work on MY cluster that is configured from scratch. I need HA, and kubeadm doesn't support that.
Also, don't just suggest another deployment tool. I NEED to figure out how Kubernetes works; it's part of my job. And the K8s reference manuals are vague on the system operations part.
Do you have another service running in your cluster that you could try to access via kubectl proxy? If not, you can create a very simple one like this:
This will create a deployment and service, both named nginx, in your default namespace; then try to contact the running nginx via kubectl proxy. If this also doesn't work, something in the proxy path is broken. If it does work, we can investigate further why the dashboard is not playing nicely with the proxy.
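The example manifest itself is missing from this capture. A minimal reconstruction of the kind of test workload being described might look like the following (API versions, labels, and image are assumptions, not the commenter's original):

```yaml
# Minimal test workload: a Deployment and a Service, both named "nginx",
# in the default namespace. Reaching it at
#   http://localhost:8001/api/v1/namespaces/default/services/nginx/proxy/
# exercises the same apiserver service-proxy path as the dashboard does.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```

If this plain nginx service also returns a 502 through the proxy, the problem is in the cluster's proxy path rather than in the dashboard.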
I did it so you can check that, with a 100% correctly configured cluster, the dashboard is accessible over kubectl proxy, and that this is indeed not a dashboard issue. That is why I suggested asking for help on the core repository. I don't have time to check the Kubernetes code and investigate the differences between the various ways of accessing applications in the cluster. The documentation is missing a lot of things, and the best way is either to look into the code or to ask the core community for help. They are more familiar with advanced topics such as manual setup of the whole cluster.
@jeroenjacobs1205 any progress here? |
@jeroenjacobs1205 Follow @rf232's steps and let us know what happened. We cannot help you unless we are able to reproduce your "Kubernetes the Hard Way" setup.
Hi, yes, I solved the issue. I think the problem was caused by the fact that my api-server was not running in a pod (in kubeadm, the master processes run in pods). Since the requests were proxied through the api-server, but the api-server had no access to the pod network or service network (kube-proxy wasn't installed on my master nodes either), kube-apiserver was unable to access any services. I now run all my master processes as pods (using static pod manifests) on the master nodes, and everything works fine. It makes sense when I think about it. Thanks for your assistance with this issue. Next time, I should think a little harder about how all the components work together :-)
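For context on "static pod manifests": a static pod is a plain pod spec that the kubelet reads from its manifest directory (its --pod-manifest-path, commonly /etc/kubernetes/manifests) and runs without involving the scheduler. A heavily trimmed sketch — the image tag, flags, and addresses below are illustrative assumptions, not the poster's actual configuration:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml -- watched by the kubelet
# on the master node. hostNetwork: true keeps the apiserver on the node's
# own network, as in a kubeadm-style setup.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.7.0   # illustrative tag
    command:
    - kube-apiserver
    - --etcd-servers=http://127.0.0.1:2379
    - --service-cluster-ip-range=10.32.0.0/24
    - --allow-privileged=true
```

Note the follow-up comment below: running the apiserver as a pod is not strictly required; what matters is that the apiserver host can reach the pod and service networks.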
Thanks very much @jeroenjacobs1205 for your last comment! That brought me onto the right track. But your answer is not completely correct: you don't need to run kube-apiserver inside a Pod (although the "from-scratch" guide recommends that; Kelsey Hightower ignores that recommendation). What you need is to make sure that kube-apiserver has access to the network fabric used by your worker nodes (and thus Services and Pods, including the dashboard). In my case, as I wanted a more comprehensible, cloud-provider-independent setup with https://github.com/jonashackt/kubernetes-the-ansible-way using Vagrant, I chose not to go with the manually plumbed network routes of https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/11-pod-network-routes.md, but with one of the recommended networking solutions from https://kubernetes.io/docs/setup/scratch/#network-connectivity, where I chose Flannel. The problem was that I had only installed and configured Flannel on my worker nodes, thus getting the same errors as you. After fixing the setup to have Flannel on my master nodes as well (jonashackt/kubernetes-the-ansible-way@fcd203d), I can now flawlessly access all deployed K8s Services and Pods, including the Dashboard.
Environment
Steps to reproduce
Installed kube-dashboard according to the instructions provided on the website. When I run
kubectl proxy
, I'm unable to access the Dashboard UI at http://localhost:8001/ui/. When I access the http://localhost:8001/ URL, I see the Kubernetes API responses, so kubectl itself is working fine.
Observed result
Getting an 502 error on the following url: http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
console output of kubectl proxy:
Starting to serve on 127.0.0.1:8001
I0906 16:46:06.470208   31586 logs.go:41] http: proxy error: unexpected EOF
pod log of the dashboard container:
Expected result
Expected to see the Dashboard
Comments
I also have Heapster installed, and it is able to access the Kubernetes API just fine. So I guess that pod networking, service-CIDR networking, and service accounts themselves are working fine. It's only kube-dashboard that is giving me issues.
This is the yml file I used to deploy Dashboard: