Istio addon not working on multinode #10248
/kind support
I'm running into the same issue. Is this possible with istio and minikube?
@mikala3 what error do you get? Have you tried allocating more memory? Istio needs a lot of memory.
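For illustration, a minimal sketch of restarting minikube with more resources (the 8g/4 values are examples, not taken from this thread):

```bash
# Delete the old cluster so the new resource settings take effect,
# then start with more memory and CPUs for Istio's control plane.
minikube delete
minikube start --memory=8g --cpus=4
```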
Tried again on the latest minikube. Still getting the same errors.
@timhughes how much memory did you allocate to your Docker Desktop and minikube? I noticed you are using multi-node (--nodes=4 --cpus=4). Is there a reason you are using 4 nodes? It is unlikely that you need multi-node on a local cluster. How much memory does your system have?
/triage needs-information
I am on Linux so no need for Docker Desktop. My local machine has 24 cores and 64GB RAM, and the disks are NVMe rated at 3500MB/s, so I am not worried about resources. I am attempting to run rook-ceph, which requires 3 worker nodes.

minikube start --profile=${CTX_CLUSTER1} --cpus=6 --memory=20g --driver=kvm2 --nodes=2 --addons storage-provisioner,default-storageclass,metallb
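A sketch of the follow-up steps, assuming the same profile variable and the minikube istio-provisioner and istio addons discussed in this thread:

```bash
# Verify all nodes registered before enabling Istio.
kubectl get nodes

# Enable the Istio addons on the named profile; the minikube docs
# enable istio-provisioner before istio.
minikube addons enable istio-provisioner -p "${CTX_CLUSTER1}"
minikube addons enable istio -p "${CTX_CLUSTER1}"
```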
This may just be a DNS issue. If I kill the coredns pod and then the istio pods, it appears to fix it. As a workaround this is fine for me, but finding a way for it to work on the first go would be better.
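A sketch of that workaround as commands, assuming default namespaces and the standard k8s-app=kube-dns label on the CoreDNS pods:

```bash
# Restart CoreDNS; its Deployment recreates the pods immediately.
kubectl -n kube-system delete pod -l k8s-app=kube-dns

# Then restart the Istio pods so they come up against the fresh CoreDNS.
kubectl -n istio-system delete pod --all
```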
So you are able to run istio and minikube with multiple nodes?
Yes
@timhughes I wonder if the binary in this PR helps? http://storage.googleapis.com/minikube-builds/11731/minikube-linux-amd64 This PR by @andriyDev makes improvements to multi-node.
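For anyone wanting to try it, a hedged sketch of testing that build (the URL is from the comment above; the flags mirror the reporter's earlier command):

```bash
# Download the PR build and make it executable.
curl -LO http://storage.googleapis.com/minikube-builds/11731/minikube-linux-amd64
chmod +x ./minikube-linux-amd64

# Recreate the multi-node cluster with the PR binary.
./minikube-linux-amd64 delete --profile="${CTX_CLUSTER1}"
./minikube-linux-amd64 start --profile="${CTX_CLUSTER1}" --driver=kvm2 --nodes=2 --cpus=6 --memory=20g
```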
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Hi @timhughes, we haven't heard back from you. Do you still have this issue? I will close this issue for now, but feel free to reopen when you are ready to provide more information.
When trying to run istio on a multi-node minikube, the istio-proxy containers do not start.

Steps to reproduce the issue:

Full output of minikube start command used, if not already included:
minikube-start-nodes4-cpus4-memory8g.log
minikube-addons-enable-istio-provisioner.log
minikube-addons-enable-istio.log

Full output of minikube logs command:
minikube-logs.log

Full container logs from the istio-operator and istio-system namespaces:
istio-operator-istio-operator-6dbfd4446f-xxqnn-1611512011468363254.log
istio-system-istiod-6ccd677dc7-gmzsw-1611512018108421785.log
istio-system-istio-ingressgateway-8577c95547-v25xk-1611512014880656996.log
istio-system-prometheus-7767dfd55-kd9c8-1611512021807342019.log
istio-system-prometheus-7767dfd55-kd9c8-1611512024337276220.log

Relevant logs from the istio-proxy container:
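A sketch of how such istio-proxy logs can be gathered, where POD and NAMESPACE are placeholders and kubectl is assumed to point at the minikube profile:

```bash
# List all pods to spot the ones whose sidecars fail to start.
kubectl get pods -A -o wide

# Inspect a failing pod's events and its istio-proxy container log.
kubectl -n NAMESPACE describe pod POD
kubectl -n NAMESPACE logs POD -c istio-proxy
```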