
gcp-auth returned an error: field is immutable #9465

Closed

matthewmichihara opened this issue Oct 14, 2020 · 7 comments · Fixed by #11486
Labels: addon/gcp-auth, area/addons, kind/bug, priority/important-soon

Comments

@matthewmichihara
I don't know exactly how I triggered this, but I ran into this error when starting minikube today.

I do notice that my gcloud application default credentials are expired, so that may be related:

$ gcloud auth application-default print-access-token
ERROR: (gcloud.auth.application-default.print-access-token) There was a problem refreshing your current auth tokens: ('invalid_grant: Token has been expired or revoked.', '{\n  "error": "invalid_grant",\n  "error_description": "Token has been expired or revoked."\n}')
Please run:

  $ gcloud auth application-default login

to obtain new credentials.
$ ./minikube start
😄  minikube v1.14.0 on Darwin 10.15.7
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🔎  Verifying gcp-auth addon...
📌  Your GCP credentials will now be mounted into every pod created in the minikube cluster.
📌  If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
❗  Enabling 'gcp-auth' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: Process exited with status 1
stdout:
namespace/gcp-auth unchanged
service/gcp-auth unchanged
serviceaccount/minikube-gcp-auth-certs unchanged
clusterrole.rbac.authorization.k8s.io/minikube-gcp-auth-certs unchanged
clusterrolebinding.rbac.authorization.k8s.io/minikube-gcp-auth-certs unchanged
deployment.apps/gcp-auth unchanged
mutatingwebhookconfiguration.admissionregistration.k8s.io/gcp-auth-webhook-cfg unchanged

stderr:
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"name\":\"gcp-auth-certs-create\",\"namespace\":\"gcp-auth\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"gcp-auth-skip-secret\":\"true\"},\"name\":\"gcp-auth-certs-create\"},\"spec\":{\"containers\":[{\"args\":[\"create\",\"--host=gcp-auth,gcp-auth.gcp-auth,gcp-auth.gcp-auth.svc\",\"--namespace=gcp-auth\",\"--secret-name=gcp-auth-certs\"],\"image\":\"jettech/kube-webhook-certgen:v1.3.0\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"create\"}],\"restartPolicy\":\"OnFailure\",\"serviceAccountName\":\"minikube-gcp-auth-certs\"}}}}\n"}},"spec":{"template":{"metadata":{"labels":{"gcp-auth-skip-secret":"true"}}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "gcp-auth-certs-create", Namespace: "gcp-auth"
for: "/etc/kubernetes/addons/gcp-auth-webhook.yaml": Job.batch "gcp-auth-certs-create" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-certs-create", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"controller-uid":"57bdfef2-466d-476e-ad9a-17865b017b00", "gcp-auth-skip-secret":"true", "job-name":"gcp-auth-certs-create"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"create", Image:"jettech/kube-webhook-certgen:v1.3.0", Command:[]string(nil), Args:[]string{"create", "--host=gcp-auth,gcp-auth.gcp-auth,gcp-auth.gcp-auth.svc", "--namespace=gcp-auth", "--secret-name=gcp-auth-certs"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc0112bff70), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), 
ServiceAccountName:"minikube-gcp-auth-certs", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc0076bfa80), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"name\":\"gcp-auth-certs-patch\",\"namespace\":\"gcp-auth\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"gcp-auth-skip-secret\":\"true\"},\"name\":\"gcp-auth-certs-patch\"},\"spec\":{\"containers\":[{\"args\":[\"patch\",\"--secret-name=gcp-auth-certs\",\"--namespace=gcp-auth\",\"--patch-validating=false\",\"--webhook-name=gcp-auth-webhook-cfg\"],\"image\":\"jettech/kube-webhook-certgen:v1.3.0\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"patch\"}],\"restartPolicy\":\"OnFailure\",\"serviceAccountName\":\"minikube-gcp-auth-certs\"}}}}\n"}},"spec":{"template":{"metadata":{"labels":{"gcp-auth-skip-secret":"true"}}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "gcp-auth-certs-patch", Namespace: "gcp-auth"
for: "/etc/kubernetes/addons/gcp-auth-webhook.yaml": Job.batch "gcp-auth-certs-patch" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-certs-patch", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"controller-uid":"7bf8016c-eef5-4085-9382-53e5e080c7ff", "gcp-auth-skip-secret":"true", "job-name":"gcp-auth-certs-patch"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"patch", Image:"jettech/kube-webhook-certgen:v1.3.0", Command:[]string(nil), Args:[]string{"patch", "--secret-name=gcp-auth-certs", "--namespace=gcp-auth", "--patch-validating=false", "--webhook-name=gcp-auth-webhook-cfg"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc0113d6960), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), 
ServiceAccountName:"minikube-gcp-auth-certs", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc007c21080), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable
]
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" by default
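For anyone else hitting this, here is a possible workaround (an assumption on my part, not a confirmed fix): since a Job's `spec.template` is immutable, the addon's `kubectl apply` cannot patch the already-existing certs Jobs, so deleting them first should let the apply recreate them cleanly:

```shell
# Untested workaround sketch: remove the completed certs Jobs whose
# spec.template can't be patched, then re-enable the addon so its
# manifests are applied against a clean state.
kubectl -n gcp-auth delete job gcp-auth-certs-create gcp-auth-certs-patch

minikube addons disable gcp-auth
minikube addons enable gcp-auth
```

This only clears the symptom; the addon itself presumably still needs to handle re-applying over existing Jobs.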

Full output of `minikube logs` command:

==> Docker <==
-- Logs begin at Wed 2020-10-14 15:46:36 UTC, end at Wed 2020-10-14 15:50:55 UTC. --
Oct 14 15:46:36 minikube systemd[1]: Starting Docker Application Container Engine...
Oct 14 15:46:36 minikube dockerd[156]: time="2020-10-14T15:46:36.997488263Z" level=info msg="Starting up"
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.002748551Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.002783896Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.002804961Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.002815282Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.007208395Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.007242580Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.007256576Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.007269062Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.040763026Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.138688111Z" level=info msg="Loading containers: start."
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.467992981Z" level=info msg="Removing stale sandbox 26d9ca9cac54a6bf5bece92d7f7bec7f8164cd1271839c75f234323d4b23decf (09f5ed3c69da220e174019e02b0c3013bfbe705161dc089d331dcf409d3969c3)"
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.469613395Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 81927765ffb8afdc52f382db5dea4be10fee60f60cc6a98f1361af361e807954 88bee15002a9e841f57d302a7bdeb2621eb6e7d6e97bf550c47527386dece55c], retrying...."
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.574787594Z" level=info msg="Removing stale sandbox 2a803a1fa5d585a328ee830484c372f197bf075765251b02e1ad8708cdb89754 (d715cf39075cfe3b83d178387735fead1a2feef7bb9a888a4f8766cd03ef1712)"
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.576590676Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 2aeccfab79f870d70b098505de3e186b036c0d2d80bb16eea750fa38f9b825f4 c6ae433975bb5d838f32d97164e00b28393a5ba710307adf99eb15cabd29ae50], retrying...."
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.683390920Z" level=info msg="Removing stale sandbox 311127f5024e6685215ad320e8e5d4f4b5d277553f4b0b7cbeec379f56d09e40 (0dadceddb9fc9689d7f1f87e90eeb4e8a96a26d8bd5e547b2d482675b8c93ecc)"
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.684390537Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 81927765ffb8afdc52f382db5dea4be10fee60f60cc6a98f1361af361e807954 c81c6255eb66d0ce3673f93e13e8362238b9fab62e8c5fdf0e34f53407b8e965], retrying...."
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.797184670Z" level=info msg="Removing stale sandbox 450721a33847190f18186c1fd1c1518d76ad79f028eec4c9b8b8aef8a4838dcf (88152a0586c354c1ffd5e5f7ab3c49f6726db6deb64b7eadfdd6d1665873081e)"
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.798134109Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 81927765ffb8afdc52f382db5dea4be10fee60f60cc6a98f1361af361e807954 5d7b95736af272dfb8d131c4531b498ad9c3dc1403154b2137185667df521669], retrying...."
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.905633400Z" level=info msg="Removing stale sandbox 477ba66a12f6c1871a7ae384e207e467b60515bc8aca4e87ee756a18bd057447 (203106cceb986b256f06d2f2037bae505a48d299e632246446b08cfb01165c1d)"
Oct 14 15:46:37 minikube dockerd[156]: time="2020-10-14T15:46:37.906546372Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 81927765ffb8afdc52f382db5dea4be10fee60f60cc6a98f1361af361e807954 0ff5ebcaadd0269b3b03b2a7eddae6ff01df2385e480a9b38b149076b680bd93], retrying...."
Oct 14 15:46:38 minikube dockerd[156]: time="2020-10-14T15:46:38.023220959Z" level=info msg="Removing stale sandbox 8fe5c151c152b5e76f7f03b1d913c4f6a0656a3f9711f14a3556050365f0f111 (347f328321f3e314467d61df4bef752d3058738d94bcf41bcd583e3864a74d40)"
Oct 14 15:46:38 minikube dockerd[156]: time="2020-10-14T15:46:38.024939822Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 2aeccfab79f870d70b098505de3e186b036c0d2d80bb16eea750fa38f9b825f4 202ace22077827306bf39605c60d1fa668251a4819eb31eb24ecddf940f6d6c2], retrying...."
Oct 14 15:46:38 minikube dockerd[156]: time="2020-10-14T15:46:38.129628742Z" level=info msg="Removing stale sandbox ee82a0b9c822b3eb8321f48dd470eea9ee6d44d9026c086edde0b04e14f6fcc6 (03d2b70949896b101cfc086f68ce782fe1a67d982cd1a5901d77c3d168eaba92)"
Oct 14 15:46:38 minikube dockerd[156]: time="2020-10-14T15:46:38.130475530Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 81927765ffb8afdc52f382db5dea4be10fee60f60cc6a98f1361af361e807954 bf169a5ad41cda4e804aa64ad34f84c8fd7e0e5f373d288bae82a07ea45b1d2e], retrying...."
Oct 14 15:46:38 minikube dockerd[156]: time="2020-10-14T15:46:38.235960725Z" level=info msg="Removing stale sandbox 16c3efa54f1e4c05a15b3ee9f22b3aec1dd78987736ee7a7e5c9b69be6bea323 (eeaa166f7337a29e576afdc2e1ac9ebed1469acf47e8a69c0299ff27263b1545)"
Oct 14 15:46:38 minikube dockerd[156]: time="2020-10-14T15:46:38.237013771Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 81927765ffb8afdc52f382db5dea4be10fee60f60cc6a98f1361af361e807954 729b7ab1952d0295952c1a3da7667b3cb7c17500ed56ecdb8990d960cb6d6ea4], retrying...."
Oct 14 15:46:38 minikube dockerd[156]: time="2020-10-14T15:46:38.272645937Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Oct 14 15:46:38 minikube dockerd[156]: time="2020-10-14T15:46:38.324940426Z" level=info msg="Loading containers: done."
Oct 14 15:46:38 minikube dockerd[156]: time="2020-10-14T15:46:38.373242084Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Oct 14 15:46:38 minikube dockerd[156]: time="2020-10-14T15:46:38.373782440Z" level=info msg="Daemon has completed initialization"
Oct 14 15:46:38 minikube dockerd[156]: time="2020-10-14T15:46:38.406968635Z" level=info msg="API listen on /var/run/docker.sock"
Oct 14 15:46:38 minikube dockerd[156]: time="2020-10-14T15:46:38.407029879Z" level=info msg="API listen on [::]:2376"
Oct 14 15:46:38 minikube systemd[1]: Started Docker Application Container Engine.
Oct 14 15:47:05 minikube dockerd[156]: time="2020-10-14T15:47:05.488275025Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 14 15:47:12 minikube dockerd[156]: time="2020-10-14T15:47:12.962011298Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 14 15:47:13 minikube dockerd[156]: time="2020-10-14T15:47:13.158675601Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
5bcd4905ae5e4       bad58561c4be7       3 minutes ago       Running             storage-provisioner       22                  1211fa5e39209
6094510df2e94       9a4acd66bd661       3 minutes ago       Running             gcp-auth                  0                   c40232233b49b
29251fcb723c2       bfe3a36ebd252       3 minutes ago       Running             coredns                   12                  6be1caa4b75b2
0b4f01f998984       bad58561c4be7       3 minutes ago       Exited              storage-provisioner       21                  1211fa5e39209
dfcbc322ac66e       d373dd5a8593a       3 minutes ago       Running             kube-proxy                12                  b1cdf52a495dc
28f21b74ae470       2f32d66b884f8       4 minutes ago       Running             kube-scheduler            12                  950c60ecf27b8
6a11193fdedea       0369cf4303ffd       4 minutes ago       Running             etcd                      12                  71e608d9a561b
b1cb9ac3aad5d       8603821e1a7a5       4 minutes ago       Running             kube-controller-manager   12                  bf80131493a43
b6966af2b7aa9       607331163122e       4 minutes ago       Running             kube-apiserver            12                  b71f81ab07f0d
3e9c081208f42       4d4f44df9f905       39 hours ago        Exited              patch                     1                   7466d14b02aaa
bf2e715d2bddf       4d4f44df9f905       39 hours ago        Exited              create                    0                   d89696bf9d0cd
5d00de89d23aa       bfe3a36ebd252       39 hours ago        Exited              coredns                   11                  d715cf39075cf
a2f5127213f57       d373dd5a8593a       39 hours ago        Exited              kube-proxy                11                  88152a0586c35
63a0423aae4e5       8603821e1a7a5       39 hours ago        Exited              kube-controller-manager   11                  03d2b70949896
c64001b76f037       0369cf4303ffd       39 hours ago        Exited              etcd                      11                  203106cceb986
2ee42d51f9e00       607331163122e       39 hours ago        Exited              kube-apiserver            11                  0dadceddb9fc9
1c7132799a79e       2f32d66b884f8       39 hours ago        Exited              kube-scheduler            11                  eeaa166f7337a

==> coredns [29251fcb723c] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
E1014 15:47:04.981131       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
E1014 15:47:04.981604       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
E1014 15:47:04.981837       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": x509: certificate signed by unknown authority

==> coredns [5d00de89d23a] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
E1013 00:37:55.397986       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
E1013 00:37:55.399663       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
E1013 00:37:55.401629       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority

==> describe nodes <==
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=30dd00755bc16377c23b04b014aea6f53320e7ba
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_10_06T12_21_41_0700
                    minikube.k8s.io/version=v1.13.1
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 06 Oct 2020 19:21:38 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Wed, 14 Oct 2020 15:50:52 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 14 Oct 2020 15:47:02 +0000   Tue, 06 Oct 2020 19:21:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 14 Oct 2020 15:47:02 +0000   Tue, 06 Oct 2020 19:21:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 14 Oct 2020 15:47:02 +0000   Tue, 06 Oct 2020 19:21:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 14 Oct 2020 15:47:02 +0000   Tue, 06 Oct 2020 19:21:52 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                4
  ephemeral-storage:  61255492Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4035056Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  61255492Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4035056Ki
  pods:               110
System Info:
  Machine ID:                 fd5e1988754640c68a95694db9553ef2
  System UUID:                50123684-8471-4e80-a48e-2f3081fd9500
  Boot ID:                    91afd76d-b6e3-4e34-a555-232af5f643d5
  Kernel Version:             4.19.76-linuxkit
  OS Image:                   Ubuntu 20.04 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.8
  Kubelet Version:            v1.19.2
  Kube-Proxy Version:         v1.19.2
Non-terminated Pods:          (8 in total)
  Namespace                   Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                ------------  ----------  ---------------  -------------  ---
  gcp-auth                    gcp-auth-5ff8987f65-8pc2q           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
  kube-system                 coredns-f9fd979d6-nr7dr             100m (2%)     0 (0%)      70Mi (1%)        170Mi (4%)     7d20h
  kube-system                 etcd-minikube                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d20h
  kube-system                 kube-apiserver-minikube             250m (6%)     0 (0%)      0 (0%)           0 (0%)         7d20h
  kube-system                 kube-controller-manager-minikube    200m (5%)     0 (0%)      0 (0%)           0 (0%)         7d20h
  kube-system                 kube-proxy-lmngd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d20h
  kube-system                 kube-scheduler-minikube             100m (2%)     0 (0%)      0 (0%)           0 (0%)         7d20h
  kube-system                 storage-provisioner                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d20h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (16%)  0 (0%)
  memory             70Mi (1%)   170Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                  From        Message
  ----     ------                   ----                 ----        -------
  Normal   Starting                 4m6s                 kubelet     Starting kubelet.
  Normal   NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    4m6s (x7 over 4m6s)  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     4m6s (x8 over 4m6s)  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  4m6s                 kubelet     Updated Node Allocatable limit across pods
  Warning  readOnlySysFS            3m51s                kube-proxy  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
  Normal   Starting                 3m51s                kube-proxy  Starting kube-proxy.

==> dmesg <==
[Oct14 15:17] virtio-pci 0000:00:01.0: can't derive routing for PCI INT A
[  +0.000789] virtio-pci 0000:00:01.0: PCI INT A: no GSI
[  +0.001555] virtio-pci 0000:00:02.0: can't derive routing for PCI INT A
[  +0.000756] virtio-pci 0000:00:02.0: PCI INT A: no GSI
[  +0.002471] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A
[  +0.000788] virtio-pci 0000:00:07.0: PCI INT A: no GSI
[  +0.050067] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[  +0.599933] i8042: Can't read CTR while initializing i8042
[  +0.000655] i8042: probe of i8042 failed with error -5
[  +0.006744] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[  +0.000949] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[  +0.159748] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[  +0.019003] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[  +3.547204] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[  +0.081785] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[Oct14 15:37] hrtimer: interrupt took 4256276 ns

==> etcd [6a11193fdede] <==
2020-10-14 15:46:52.430382 I | embed: member dir = /var/lib/minikube/etcd/member
2020-10-14 15:46:52.430394 I | embed: heartbeat = 100ms
2020-10-14 15:46:52.430452 I | embed: election = 1000ms
2020-10-14 15:46:52.430479 I | embed: snapshot count = 10000
2020-10-14 15:46:52.430519 I | embed: advertise client URLs = https://192.168.49.2:2379
2020-10-14 15:46:52.430570 I | embed: initial advertise peer URLs = https://192.168.49.2:2380
2020-10-14 15:46:52.430694 I | embed: initial cluster = 
2020-10-14 15:46:54.180208 I | etcdserver: recovered store from snapshot at index 100010
2020-10-14 15:46:54.180915 I | mvcc: restore compact to 72435
2020-10-14 15:46:54.657086 I | etcdserver: restarting member aec36adc501070cc in cluster fa54960ea34d58be at commit index 103147
raft2020/10/14 15:46:54 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
raft2020/10/14 15:46:54 INFO: aec36adc501070cc became follower at term 13
raft2020/10/14 15:46:54 INFO: newRaft aec36adc501070cc [peers: [aec36adc501070cc], term: 13, commit: 103147, applied: 100010, lastindex: 103147, lastterm: 13]
2020-10-14 15:46:54.658331 I | etcdserver/api: enabled capabilities for version 3.4
2020-10-14 15:46:54.659089 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be from store
2020-10-14 15:46:54.659208 I | etcdserver/membership: set the cluster version to 3.4 from store
2020-10-14 15:46:54.661925 W | auth: simple token is not cryptographically signed
2020-10-14 15:46:54.663358 I | mvcc: restore compact to 72435
2020-10-14 15:46:54.667225 I | etcdserver: starting server... [version: 3.4.13, cluster version: 3.4]
2020-10-14 15:46:54.668281 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-10-14 15:46:54.672940 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-10-14 15:46:54.679576 I | embed: listening for metrics on http://127.0.0.1:2381
2020-10-14 15:46:54.680370 I | embed: listening for peers on 192.168.49.2:2380
raft2020/10/14 15:46:54 INFO: aec36adc501070cc is starting a new election at term 13
raft2020/10/14 15:46:54 INFO: aec36adc501070cc became candidate at term 14
raft2020/10/14 15:46:54 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 14
raft2020/10/14 15:46:54 INFO: aec36adc501070cc became leader at term 14
raft2020/10/14 15:46:54 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 14
2020-10-14 15:46:54.763533 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
2020-10-14 15:46:54.763849 I | embed: ready to serve client requests
2020-10-14 15:46:54.766294 I | embed: ready to serve client requests
2020-10-14 15:46:54.769591 I | embed: serving client requests on 127.0.0.1:2379
2020-10-14 15:46:54.770498 I | embed: serving client requests on 192.168.49.2:2379
2020-10-14 15:47:03.110274 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (115.288986ms) to execute
2020-10-14 15:47:03.110544 W | etcdserver: read-only range request "key:\"/registry/clusterroles/view\" " with result "range_response_count:1 size:2043" took too long (123.6021ms) to execute
2020-10-14 15:47:03.472881 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:47:06.425258 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/ttl-controller\" " with result "range_response_count:1 size:240" took too long (217.295366ms) to execute
2020-10-14 15:47:09.379933 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:47:19.378788 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:47:29.380736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:47:39.381094 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:47:49.379643 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:47:59.381222 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:48:09.378964 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:48:19.379984 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:48:29.380766 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:48:39.381184 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:48:49.380855 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:48:59.380505 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:49:09.380821 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:49:19.381952 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:49:29.380584 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:49:39.380351 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:49:49.380942 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:49:59.382164 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:50:09.381727 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:50:19.384692 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:50:29.381846 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:50:39.385126 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-14 15:50:49.383200 I | etcdserver/api/etcdhttp: /health OK (status code 200)

==> etcd [c64001b76f03] <==
2020-10-13 04:34:53.817818 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:35:03.818686 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:35:13.818571 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:35:23.817098 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:35:33.817786 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:35:43.818023 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:35:53.819052 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:36:03.818480 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:36:13.819469 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:36:23.819414 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:36:33.818598 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:36:43.819444 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:36:53.819772 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:37:03.820739 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:37:13.819020 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:37:23.821337 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:37:33.820221 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:37:43.820790 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:37:51.649041 I | mvcc: store.index: compact 72224
2020-10-13 04:37:51.649887 I | mvcc: finished scheduled compaction at 72224 (took 676.617µs)
2020-10-13 04:37:53.821041 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:38:03.820858 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:38:13.820220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:38:23.820164 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:38:33.820422 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:38:43.821353 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:38:53.821979 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:39:03.821203 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:39:13.820995 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:39:23.821238 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:39:33.822399 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:39:43.821893 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:39:53.822324 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:40:03.822608 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:40:13.904703 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:40:23.823404 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:40:33.823121 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:40:36.705224 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:258" took too long (128.31575ms) to execute
2020-10-13 04:40:43.823377 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:40:53.826657 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:41:03.823329 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:41:13.823345 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:41:23.824577 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:41:33.824150 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:41:43.824174 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:41:53.825939 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:42:03.824829 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:42:13.825915 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:42:23.827881 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:42:33.825541 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:42:43.824737 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:42:51.660713 I | mvcc: store.index: compact 72435
2020-10-13 04:42:51.662775 I | mvcc: finished scheduled compaction at 72435 (took 1.616664ms)
2020-10-13 04:42:53.826720 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:43:03.826023 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:43:13.826494 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:43:23.825193 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:43:33.825575 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:43:43.826939 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 04:43:45.957293 N | pkg/osutil: received terminated signal, shutting down...

==> kernel <==
 15:51:00 up 33 min,  0 users,  load average: 1.02, 0.73, 0.42
Linux minikube 4.19.76-linuxkit #1 SMP Tue May 26 11:42:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04 LTS"

==> kube-apiserver [2ee42d51f9e0] <==
I1013 04:32:31.083131       1 client.go:360] parsed scheme: "passthrough"
I1013 04:32:31.083186       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:32:31.083194       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:33:14.188184       1 client.go:360] parsed scheme: "passthrough"
I1013 04:33:14.188253       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:33:14.188295       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:33:56.018005       1 client.go:360] parsed scheme: "passthrough"
I1013 04:33:56.018088       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:33:56.018104       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:34:31.555486       1 client.go:360] parsed scheme: "passthrough"
I1013 04:34:31.555580       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:34:31.555594       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:35:15.537613       1 client.go:360] parsed scheme: "passthrough"
I1013 04:35:15.537725       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:35:15.537740       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:35:49.304725       1 client.go:360] parsed scheme: "passthrough"
I1013 04:35:49.304776       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:35:49.304783       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:36:31.855811       1 client.go:360] parsed scheme: "passthrough"
I1013 04:36:31.855908       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:36:31.855916       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:37:07.434830       1 client.go:360] parsed scheme: "passthrough"
I1013 04:37:07.434901       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:37:07.434908       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:37:47.239180       1 client.go:360] parsed scheme: "passthrough"
I1013 04:37:47.239253       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:37:47.239264       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:38:29.091730       1 client.go:360] parsed scheme: "passthrough"
I1013 04:38:29.091995       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:38:29.092018       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:39:00.926676       1 client.go:360] parsed scheme: "passthrough"
I1013 04:39:00.926736       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:39:00.926745       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:39:41.937954       1 client.go:360] parsed scheme: "passthrough"
I1013 04:39:41.938029       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:39:41.938042       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:40:12.911299       1 client.go:360] parsed scheme: "passthrough"
I1013 04:40:12.915925       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:40:12.915961       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:40:46.842768       1 client.go:360] parsed scheme: "passthrough"
I1013 04:40:46.843161       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:40:46.843298       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:41:23.265398       1 client.go:360] parsed scheme: "passthrough"
I1013 04:41:23.265476       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:41:23.265492       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:41:52.171844       1 trace.go:205] Trace[1897496780]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2 (13-Oct-2020 04:41:51.624) (total time: 546ms):
Trace[1897496780]: ---"About to convert to expected version" 523ms (04:41:00.148)
Trace[1897496780]: [546.371134ms] [546.371134ms] END
I1013 04:41:54.865497       1 client.go:360] parsed scheme: "passthrough"
I1013 04:41:54.865679       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:41:54.865727       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:42:25.733996       1 client.go:360] parsed scheme: "passthrough"
I1013 04:42:25.734137       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:42:25.734153       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:43:10.376636       1 client.go:360] parsed scheme: "passthrough"
I1013 04:43:10.376680       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:43:10.376688       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 04:43:45.009585       1 client.go:360] parsed scheme: "passthrough"
I1013 04:43:45.009642       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 04:43:45.009652       1 clientconn.go:948] ClientConn switching balancer to "pick_first"

==> kube-apiserver [b6966af2b7aa] <==
I1014 15:47:01.881494       1 secure_serving.go:197] Serving securely on [::]:8443
I1014 15:47:01.881729       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1014 15:47:01.881927       1 available_controller.go:404] Starting AvailableConditionController
I1014 15:47:01.882038       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1014 15:47:01.884263       1 controller.go:83] Starting OpenAPI AggregationController
I1014 15:47:01.884653       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I1014 15:47:01.885512       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
I1014 15:47:01.885723       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1014 15:47:01.885927       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I1014 15:47:01.886000       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1014 15:47:01.886020       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1014 15:47:01.886096       1 autoregister_controller.go:141] Starting autoregister controller
I1014 15:47:01.886104       1 cache.go:32] Waiting for caches to sync for autoregister controller
I1014 15:47:01.886810       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1014 15:47:01.886866       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I1014 15:47:01.887128       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1014 15:47:01.887162       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I1014 15:47:01.897530       1 controller.go:86] Starting OpenAPI controller
I1014 15:47:01.897623       1 naming_controller.go:291] Starting NamingConditionController
I1014 15:47:01.897644       1 establishing_controller.go:76] Starting EstablishingController
I1014 15:47:01.897656       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I1014 15:47:01.897665       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1014 15:47:01.897678       1 crd_finalizer.go:266] Starting CRDFinalizer
I1014 15:47:01.992769       1 shared_informer.go:247] Caches are synced for crd-autoregister 
I1014 15:47:01.994454       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
I1014 15:47:01.994521       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1014 15:47:02.046475       1 cache.go:39] Caches are synced for autoregister controller
I1014 15:47:02.072182       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1014 15:47:02.082488       1 cache.go:39] Caches are synced for AvailableConditionController controller
E1014 15:47:02.206540       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I1014 15:47:02.879167       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1014 15:47:02.879265       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1014 15:47:02.896478       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I1014 15:47:05.610297       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1014 15:47:05.700157       1 controller.go:606] quota admission added evaluator for: deployments.apps
I1014 15:47:05.779377       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1014 15:47:05.809085       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1014 15:47:05.831199       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1014 15:47:08.463409       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1014 15:47:08.492147       1 controller.go:606] quota admission added evaluator for: endpoints
I1014 15:47:10.655629       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1014 15:47:10.710452       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
I1014 15:47:36.339020       1 client.go:360] parsed scheme: "passthrough"
I1014 15:47:36.339105       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1014 15:47:36.339141       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1014 15:48:16.624288       1 client.go:360] parsed scheme: "passthrough"
I1014 15:48:16.624368       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1014 15:48:16.624381       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1014 15:48:50.750114       1 client.go:360] parsed scheme: "passthrough"
I1014 15:48:50.750203       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1014 15:48:50.750210       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1014 15:49:25.824313       1 client.go:360] parsed scheme: "passthrough"
I1014 15:49:25.824361       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1014 15:49:25.824368       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1014 15:49:59.954555       1 client.go:360] parsed scheme: "passthrough"
I1014 15:49:59.954648       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1014 15:49:59.954673       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1014 15:50:35.623233       1 client.go:360] parsed scheme: "passthrough"
I1014 15:50:35.623460       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1014 15:50:35.623489       1 clientconn.go:948] ClientConn switching balancer to "pick_first"

==> kube-controller-manager [63a0423aae4e] <==
I1013 00:37:59.905323       1 controllermanager.go:549] Started "endpoint"
I1013 00:37:59.905367       1 endpoints_controller.go:184] Starting endpoint controller
I1013 00:37:59.905385       1 shared_informer.go:240] Waiting for caches to sync for endpoint
I1013 00:38:00.053468       1 controllermanager.go:549] Started "podgc"
I1013 00:38:00.053611       1 gc_controller.go:89] Starting GC controller
I1013 00:38:00.053928       1 shared_informer.go:240] Waiting for caches to sync for GC
I1013 00:38:00.203545       1 controllermanager.go:549] Started "bootstrapsigner"
I1013 00:38:00.203611       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
I1013 00:38:00.353387       1 controllermanager.go:549] Started "pv-protection"
W1013 00:38:00.353729       1 controllermanager.go:541] Skipping "root-ca-cert-publisher"
I1013 00:38:00.353559       1 pv_protection_controller.go:83] Starting PV protection controller
I1013 00:38:00.354003       1 shared_informer.go:240] Waiting for caches to sync for PV protection
I1013 00:38:00.357420       1 shared_informer.go:240] Waiting for caches to sync for resource quota
W1013 00:38:00.369107       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1013 00:38:00.397970       1 shared_informer.go:247] Caches are synced for taint 
I1013 00:38:00.398200       1 taint_manager.go:187] Starting NoExecuteTaintManager
I1013 00:38:00.398620       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
W1013 00:38:00.398806       1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1013 00:38:00.398950       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
I1013 00:38:00.399383       1 shared_informer.go:247] Caches are synced for disruption 
I1013 00:38:00.399395       1 disruption.go:339] Sending events to api server.
I1013 00:38:00.399519       1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I1013 00:38:00.399641       1 shared_informer.go:247] Caches are synced for job 
I1013 00:38:00.404161       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
I1013 00:38:00.404750       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
I1013 00:38:00.405503       1 shared_informer.go:247] Caches are synced for endpoint 
I1013 00:38:00.405812       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
I1013 00:38:00.406550       1 shared_informer.go:247] Caches are synced for ReplicaSet 
I1013 00:38:00.408588       1 shared_informer.go:247] Caches are synced for expand 
I1013 00:38:00.410660       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
I1013 00:38:00.410690       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
I1013 00:38:00.412386       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
I1013 00:38:00.412431       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
I1013 00:38:00.419650       1 shared_informer.go:247] Caches are synced for TTL 
I1013 00:38:00.440781       1 shared_informer.go:247] Caches are synced for daemon sets 
I1013 00:38:00.441397       1 shared_informer.go:247] Caches are synced for deployment 
I1013 00:38:00.448186       1 shared_informer.go:247] Caches are synced for endpoint_slice 
I1013 00:38:00.452443       1 shared_informer.go:247] Caches are synced for HPA 
I1013 00:38:00.452453       1 shared_informer.go:247] Caches are synced for ReplicationController 
I1013 00:38:00.452511       1 shared_informer.go:247] Caches are synced for stateful set 
I1013 00:38:00.453272       1 shared_informer.go:247] Caches are synced for attach detach 
I1013 00:38:00.454134       1 shared_informer.go:247] Caches are synced for PV protection 
I1013 00:38:00.454618       1 shared_informer.go:247] Caches are synced for GC 
I1013 00:38:00.498917       1 shared_informer.go:247] Caches are synced for persistent volume 
I1013 00:38:00.502895       1 shared_informer.go:247] Caches are synced for PVC protection 
I1013 00:38:00.557645       1 shared_informer.go:247] Caches are synced for resource quota 
I1013 00:38:00.593381       1 shared_informer.go:247] Caches are synced for resource quota 
I1013 00:38:00.605059       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
I1013 00:38:00.662788       1 shared_informer.go:247] Caches are synced for namespace 
I1013 00:38:00.703034       1 shared_informer.go:247] Caches are synced for service account 
I1013 00:38:00.709134       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1013 00:38:01.009992       1 shared_informer.go:247] Caches are synced for garbage collector 
I1013 00:38:01.029939       1 shared_informer.go:247] Caches are synced for garbage collector 
I1013 00:38:01.030021       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1013 00:43:04.391236       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-create-kz5dw"
I1013 00:43:04.412512       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set gcp-auth-5c58dc7db8 to 1"
I1013 00:43:04.418574       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-patch-7vf86"
I1013 00:43:04.422504       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-5c58dc7db8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-5c58dc7db8-gtnfq"
I1013 00:43:06.335133       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I1013 00:43:07.743177       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"

==> kube-controller-manager [b1cb9ac3aad5] <==
I1014 15:47:08.314160       1 resource_quota_controller.go:272] Starting resource quota controller
I1014 15:47:08.314171       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I1014 15:47:08.314194       1 resource_quota_monitor.go:303] QuotaMonitor running
I1014 15:47:08.324315       1 controllermanager.go:549] Started "statefulset"
I1014 15:47:08.324398       1 stateful_set.go:146] Starting stateful set controller
I1014 15:47:08.324410       1 shared_informer.go:240] Waiting for caches to sync for stateful set
I1014 15:47:08.337882       1 controllermanager.go:549] Started "csrapproving"
I1014 15:47:08.338032       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
I1014 15:47:08.338084       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
I1014 15:47:08.409833       1 controllermanager.go:549] Started "csrcleaner"
W1014 15:47:08.409876       1 controllermanager.go:541] Skipping "nodeipam"
W1014 15:47:08.409887       1 controllermanager.go:541] Skipping "root-ca-cert-publisher"
I1014 15:47:08.411195       1 cleaner.go:83] Starting CSR cleaner controller
W1014 15:47:08.440555       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1014 15:47:08.442901       1 shared_informer.go:247] Caches are synced for TTL 
I1014 15:47:08.449759       1 shared_informer.go:247] Caches are synced for endpoint_slice 
I1014 15:47:08.451596       1 shared_informer.go:247] Caches are synced for job 
I1014 15:47:08.459685       1 shared_informer.go:247] Caches are synced for HPA 
I1014 15:47:08.461799       1 shared_informer.go:247] Caches are synced for PV protection 
I1014 15:47:08.462269       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
I1014 15:47:08.462458       1 shared_informer.go:247] Caches are synced for ReplicationController 
I1014 15:47:08.463905       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
I1014 15:47:08.464280       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
I1014 15:47:08.464770       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
I1014 15:47:08.464925       1 shared_informer.go:247] Caches are synced for disruption 
I1014 15:47:08.464978       1 disruption.go:339] Sending events to api server.
I1014 15:47:08.465577       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
I1014 15:47:08.468298       1 shared_informer.go:247] Caches are synced for ReplicaSet 
I1014 15:47:08.468666       1 shared_informer.go:247] Caches are synced for taint 
I1014 15:47:08.468774       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
W1014 15:47:08.469402       1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1014 15:47:08.469466       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
I1014 15:47:08.471105       1 taint_manager.go:187] Starting NoExecuteTaintManager
I1014 15:47:08.472638       1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I1014 15:47:08.473261       1 shared_informer.go:247] Caches are synced for namespace 
I1014 15:47:08.479484       1 shared_informer.go:247] Caches are synced for persistent volume 
I1014 15:47:08.480470       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
I1014 15:47:08.482233       1 shared_informer.go:247] Caches are synced for endpoint 
I1014 15:47:08.491716       1 shared_informer.go:247] Caches are synced for service account 
I1014 15:47:08.492460       1 shared_informer.go:247] Caches are synced for deployment 
I1014 15:47:08.492750       1 shared_informer.go:247] Caches are synced for GC 
I1014 15:47:08.511405       1 shared_informer.go:247] Caches are synced for PVC protection 
I1014 15:47:08.511897       1 shared_informer.go:247] Caches are synced for expand 
I1014 15:47:08.524815       1 shared_informer.go:247] Caches are synced for stateful set 
I1014 15:47:08.546393       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
I1014 15:47:08.608497       1 shared_informer.go:247] Caches are synced for daemon sets 
I1014 15:47:08.614017       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
I1014 15:47:08.662227       1 shared_informer.go:247] Caches are synced for attach detach 
I1014 15:47:08.714384       1 shared_informer.go:247] Caches are synced for resource quota 
I1014 15:47:08.764772       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1014 15:47:09.049387       1 shared_informer.go:247] Caches are synced for garbage collector 
I1014 15:47:09.049448       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1014 15:47:09.065450       1 shared_informer.go:247] Caches are synced for garbage collector 
I1014 15:47:09.460101       1 request.go:645] Throttling request took 1.038408701s, request: GET:https://192.168.49.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
I1014 15:47:10.162328       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I1014 15:47:10.162364       1 shared_informer.go:247] Caches are synced for resource quota 
I1014 15:47:10.662204       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set gcp-auth-5ff8987f65 to 1"
I1014 15:47:10.679860       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-5ff8987f65" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-5ff8987f65-8pc2q"
I1014 15:47:12.791827       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set gcp-auth-5c58dc7db8 to 0"
I1014 15:47:12.804718       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-5c58dc7db8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: gcp-auth-5c58dc7db8-gtnfq"

==> kube-proxy [a2f5127213f5] <==
I1013 00:37:55.144178       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
I1013 00:37:55.144290       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
W1013 00:37:55.211866       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
I1013 00:37:55.211955       1 server_others.go:186] Using iptables Proxier.
W1013 00:37:55.211965       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I1013 00:37:55.212009       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I1013 00:37:55.212304       1 server.go:650] Version: v1.19.2
I1013 00:37:55.212753       1 conntrack.go:52] Setting nf_conntrack_max to 131072
E1013 00:37:55.213834       1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
I1013 00:37:55.213901       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1013 00:37:55.213957       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1013 00:37:55.214962       1 config.go:315] Starting service config controller
I1013 00:37:55.214968       1 shared_informer.go:240] Waiting for caches to sync for service config
I1013 00:37:55.214986       1 config.go:224] Starting endpoint slice config controller
I1013 00:37:55.214990       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1013 00:37:55.315196       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I1013 00:37:55.315240       1 shared_informer.go:247] Caches are synced for service config 

==> kube-proxy [dfcbc322ac66] <==
I1014 15:47:05.289095       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
I1014 15:47:05.289336       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
W1014 15:47:05.665722       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
I1014 15:47:05.666286       1 server_others.go:186] Using iptables Proxier.
W1014 15:47:05.666304       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I1014 15:47:05.666309       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I1014 15:47:05.667875       1 server.go:650] Version: v1.19.2
I1014 15:47:05.669166       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1014 15:47:05.669198       1 conntrack.go:52] Setting nf_conntrack_max to 131072
E1014 15:47:05.670317       1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
I1014 15:47:05.670423       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1014 15:47:05.670793       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1014 15:47:05.677738       1 config.go:315] Starting service config controller
I1014 15:47:05.677811       1 shared_informer.go:240] Waiting for caches to sync for service config
I1014 15:47:05.677835       1 config.go:224] Starting endpoint slice config controller
I1014 15:47:05.677852       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1014 15:47:05.777992       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I1014 15:47:05.778052       1 shared_informer.go:247] Caches are synced for service config 

==> kube-scheduler [1c7132799a79] <==
I1013 00:37:49.328911       1 registry.go:173] Registering SelectorSpread plugin
I1013 00:37:49.328986       1 registry.go:173] Registering SelectorSpread plugin
I1013 00:37:50.295102       1 serving.go:331] Generated self-signed cert in-memory
I1013 00:37:53.937986       1 registry.go:173] Registering SelectorSpread plugin
I1013 00:37:53.938002       1 registry.go:173] Registering SelectorSpread plugin
I1013 00:37:53.948639       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I1013 00:37:53.948659       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I1013 00:37:53.948798       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1013 00:37:53.948806       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1013 00:37:53.948823       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1013 00:37:53.948827       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1013 00:37:53.949065       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1013 00:37:53.949106       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1013 00:37:54.049009       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I1013 00:37:54.049087       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I1013 00:37:54.049176       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

==> kube-scheduler [28f21b74ae47] <==
I1014 15:46:52.633586       1 registry.go:173] Registering SelectorSpread plugin
I1014 15:46:52.634169       1 registry.go:173] Registering SelectorSpread plugin
I1014 15:46:54.990092       1 serving.go:331] Generated self-signed cert in-memory
W1014 15:47:01.905404       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1014 15:47:01.905425       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1014 15:47:01.905433       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
W1014 15:47:01.905437       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1014 15:47:02.093315       1 registry.go:173] Registering SelectorSpread plugin
I1014 15:47:02.093332       1 registry.go:173] Registering SelectorSpread plugin
I1014 15:47:02.154058       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1014 15:47:02.154189       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1014 15:47:02.154200       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1014 15:47:02.154228       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1014 15:47:02.254823       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

==> kubelet <==
-- Logs begin at Wed 2020-10-14 15:46:36 UTC, end at Wed 2020-10-14 15:51:05 UTC. --
Oct 14 15:47:02 minikube kubelet[909]: I1014 15:47:02.079686     909 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/9102ef83-4448-4b16-a961-90c91c320c12-kube-proxy") pod "kube-proxy-lmngd" (UID: "9102ef83-4448-4b16-a961-90c91c320c12")
Oct 14 15:47:02 minikube kubelet[909]: I1014 15:47:02.079715     909 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/9102ef83-4448-4b16-a961-90c91c320c12-xtables-lock") pod "kube-proxy-lmngd" (UID: "9102ef83-4448-4b16-a961-90c91c320c12")
Oct 14 15:47:02 minikube kubelet[909]: I1014 15:47:02.079796     909 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f40048e8-6b9b-4a3e-bbb0-3a3cfc178bee-config-volume") pod "coredns-f9fd979d6-nr7dr" (UID: "f40048e8-6b9b-4a3e-bbb0-3a3cfc178bee")
Oct 14 15:47:02 minikube kubelet[909]: I1014 15:47:02.079819     909 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "gcp-project" (UniqueName: "kubernetes.io/host-path/99e40804-df2d-4828-ade9-25109976d4c3-gcp-project") pod "gcp-auth-5c58dc7db8-gtnfq" (UID: "99e40804-df2d-4828-ade9-25109976d4c3")
Oct 14 15:47:02 minikube kubelet[909]: I1014 15:47:02.079835     909 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-vrdbf" (UniqueName: "kubernetes.io/secret/3535b914-7c85-4f33-b868-fde484b9eba7-storage-provisioner-token-vrdbf") pod "storage-provisioner" (UID: "3535b914-7c85-4f33-b868-fde484b9eba7")
Oct 14 15:47:02 minikube kubelet[909]: I1014 15:47:02.079850     909 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-wrtkb" (UniqueName: "kubernetes.io/secret/9102ef83-4448-4b16-a961-90c91c320c12-kube-proxy-token-wrtkb") pod "kube-proxy-lmngd" (UID: "9102ef83-4448-4b16-a961-90c91c320c12")
Oct 14 15:47:02 minikube kubelet[909]: I1014 15:47:02.079877     909 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-l7rj7" (UniqueName: "kubernetes.io/secret/99e40804-df2d-4828-ade9-25109976d4c3-default-token-l7rj7") pod "gcp-auth-5c58dc7db8-gtnfq" (UID: "99e40804-df2d-4828-ade9-25109976d4c3")
Oct 14 15:47:02 minikube kubelet[909]: I1014 15:47:02.079965     909 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/9102ef83-4448-4b16-a961-90c91c320c12-lib-modules") pod "kube-proxy-lmngd" (UID: "9102ef83-4448-4b16-a961-90c91c320c12")
Oct 14 15:47:02 minikube kubelet[909]: I1014 15:47:02.079988     909 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-rqxdf" (UniqueName: "kubernetes.io/secret/f40048e8-6b9b-4a3e-bbb0-3a3cfc178bee-coredns-token-rqxdf") pod "coredns-f9fd979d6-nr7dr" (UID: "f40048e8-6b9b-4a3e-bbb0-3a3cfc178bee")
Oct 14 15:47:02 minikube kubelet[909]: I1014 15:47:02.181446     909 reconciler.go:157] Reconciler: start to sync state
Oct 14 15:47:02 minikube kubelet[909]: I1014 15:47:02.247632     909 kubelet_node_status.go:108] Node minikube was previously registered
Oct 14 15:47:02 minikube kubelet[909]: I1014 15:47:02.248568     909 kubelet_node_status.go:73] Successfully registered node minikube
Oct 14 15:47:03 minikube kubelet[909]: W1014 15:47:03.300539     909 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-nr7dr through plugin: invalid network status for
Oct 14 15:47:03 minikube kubelet[909]: W1014 15:47:03.352523     909 pod_container_deletor.go:79] Container "6be1caa4b75b262d8430a5401467f82123c4376773aaad98f8a86caf65cd2d31" not found in pod's containers
Oct 14 15:47:03 minikube kubelet[909]: W1014 15:47:03.614631     909 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-5c58dc7db8-gtnfq through plugin: invalid network status for
Oct 14 15:47:03 minikube kubelet[909]: W1014 15:47:03.648677     909 pod_container_deletor.go:79] Container "dac3b0375bafba849d7ef53a16f95f69c4802525013d64e3f2f13bcbb1dbe825" not found in pod's containers
Oct 14 15:47:03 minikube kubelet[909]: W1014 15:47:03.957576     909 pod_container_deletor.go:79] Container "b1cdf52a495dc183cc3134f567f8c936363ab100de095f6329d5167ea28163fb" not found in pod's containers
Oct 14 15:47:05 minikube kubelet[909]: W1014 15:47:05.115863     909 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-5c58dc7db8-gtnfq through plugin: invalid network status for
Oct 14 15:47:05 minikube kubelet[909]: W1014 15:47:05.270179     909 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-nr7dr through plugin: invalid network status for
Oct 14 15:47:06 minikube kubelet[909]: I1014 15:47:06.524183     909 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 31ef78faede429ec71c9b4211d17897a8e44633097884e3c980bf1b5cd3f651a
Oct 14 15:47:06 minikube kubelet[909]: I1014 15:47:06.526541     909 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 0b4f01f9989844c0bbe60bf7b61954cd225e3250c422799c9e7a801f61e2e83e
Oct 14 15:47:06 minikube kubelet[909]: E1014 15:47:06.526992     909 pod_workers.go:191] Error syncing pod 3535b914-7c85-4f33-b868-fde484b9eba7 ("storage-provisioner_kube-system(3535b914-7c85-4f33-b868-fde484b9eba7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(3535b914-7c85-4f33-b868-fde484b9eba7)"
Oct 14 15:47:07 minikube kubelet[909]: I1014 15:47:07.549120     909 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 0b4f01f9989844c0bbe60bf7b61954cd225e3250c422799c9e7a801f61e2e83e
Oct 14 15:47:07 minikube kubelet[909]: E1014 15:47:07.549610     909 pod_workers.go:191] Error syncing pod 3535b914-7c85-4f33-b868-fde484b9eba7 ("storage-provisioner_kube-system(3535b914-7c85-4f33-b868-fde484b9eba7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(3535b914-7c85-4f33-b868-fde484b9eba7)"
Oct 14 15:47:10 minikube kubelet[909]: E1014 15:47:10.514103     909 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Oct 14 15:47:10 minikube kubelet[909]: E1014 15:47:10.514150     909 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Oct 14 15:47:10 minikube kubelet[909]: I1014 15:47:10.704746     909 topology_manager.go:233] [topologymanager] Topology Admit Handler
Oct 14 15:47:10 minikube kubelet[909]: I1014 15:47:10.718356     909 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d7afffb3-8602-4a5c-84f0-95af211dc474-webhook-certs") pod "gcp-auth-5ff8987f65-8pc2q" (UID: "d7afffb3-8602-4a5c-84f0-95af211dc474")
Oct 14 15:47:10 minikube kubelet[909]: I1014 15:47:10.718520     909 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "gcp-project" (UniqueName: "kubernetes.io/host-path/d7afffb3-8602-4a5c-84f0-95af211dc474-gcp-project") pod "gcp-auth-5ff8987f65-8pc2q" (UID: "d7afffb3-8602-4a5c-84f0-95af211dc474")
Oct 14 15:47:10 minikube kubelet[909]: I1014 15:47:10.718949     909 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-l7rj7" (UniqueName: "kubernetes.io/secret/d7afffb3-8602-4a5c-84f0-95af211dc474-default-token-l7rj7") pod "gcp-auth-5ff8987f65-8pc2q" (UID: "d7afffb3-8602-4a5c-84f0-95af211dc474")
Oct 14 15:47:11 minikube kubelet[909]: W1014 15:47:11.518528     909 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-5ff8987f65-8pc2q through plugin: invalid network status for
Oct 14 15:47:11 minikube kubelet[909]: W1014 15:47:11.591825     909 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-5ff8987f65-8pc2q through plugin: invalid network status for
Oct 14 15:47:12 minikube kubelet[909]: W1014 15:47:12.745100     909 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-5ff8987f65-8pc2q through plugin: invalid network status for
Oct 14 15:47:13 minikube kubelet[909]: I1014 15:47:13.779141     909 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 4cde311df231bd1ba800f51580091894877abf4531796751d37db97dfc4e9f70
Oct 14 15:47:13 minikube kubelet[909]: I1014 15:47:13.801708     909 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 0eaf1b88ec8b9d0c6995c727b0baba192146baa3be065055b0079c9ce6c9b13e
Oct 14 15:47:13 minikube kubelet[909]: I1014 15:47:13.821985     909 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 4cde311df231bd1ba800f51580091894877abf4531796751d37db97dfc4e9f70
Oct 14 15:47:13 minikube kubelet[909]: E1014 15:47:13.823076     909 remote_runtime.go:329] ContainerStatus "4cde311df231bd1ba800f51580091894877abf4531796751d37db97dfc4e9f70" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 4cde311df231bd1ba800f51580091894877abf4531796751d37db97dfc4e9f70
Oct 14 15:47:13 minikube kubelet[909]: W1014 15:47:13.823138     909 pod_container_deletor.go:52] [pod_container_deletor] DeleteContainer returned error for (id={docker 4cde311df231bd1ba800f51580091894877abf4531796751d37db97dfc4e9f70}): failed to get container status "4cde311df231bd1ba800f51580091894877abf4531796751d37db97dfc4e9f70": rpc error: code = Unknown desc = Error: No such container: 4cde311df231bd1ba800f51580091894877abf4531796751d37db97dfc4e9f70
Oct 14 15:47:13 minikube kubelet[909]: I1014 15:47:13.823159     909 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 0eaf1b88ec8b9d0c6995c727b0baba192146baa3be065055b0079c9ce6c9b13e
Oct 14 15:47:13 minikube kubelet[909]: E1014 15:47:13.824028     909 remote_runtime.go:329] ContainerStatus "0eaf1b88ec8b9d0c6995c727b0baba192146baa3be065055b0079c9ce6c9b13e" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 0eaf1b88ec8b9d0c6995c727b0baba192146baa3be065055b0079c9ce6c9b13e
Oct 14 15:47:13 minikube kubelet[909]: W1014 15:47:13.824085     909 pod_container_deletor.go:52] [pod_container_deletor] DeleteContainer returned error for (id={docker 0eaf1b88ec8b9d0c6995c727b0baba192146baa3be065055b0079c9ce6c9b13e}): failed to get container status "0eaf1b88ec8b9d0c6995c727b0baba192146baa3be065055b0079c9ce6c9b13e": rpc error: code = Unknown desc = Error: No such container: 0eaf1b88ec8b9d0c6995c727b0baba192146baa3be065055b0079c9ce6c9b13e
Oct 14 15:47:13 minikube kubelet[909]: I1014 15:47:13.939692     909 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-l7rj7" (UniqueName: "kubernetes.io/secret/99e40804-df2d-4828-ade9-25109976d4c3-default-token-l7rj7") pod "99e40804-df2d-4828-ade9-25109976d4c3" (UID: "99e40804-df2d-4828-ade9-25109976d4c3")
Oct 14 15:47:13 minikube kubelet[909]: I1014 15:47:13.939757     909 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/99e40804-df2d-4828-ade9-25109976d4c3-webhook-certs") pod "99e40804-df2d-4828-ade9-25109976d4c3" (UID: "99e40804-df2d-4828-ade9-25109976d4c3")
Oct 14 15:47:13 minikube kubelet[909]: I1014 15:47:13.939785     909 reconciler.go:196] operationExecutor.UnmountVolume started for volume "gcp-project" (UniqueName: "kubernetes.io/host-path/99e40804-df2d-4828-ade9-25109976d4c3-gcp-project") pod "99e40804-df2d-4828-ade9-25109976d4c3" (UID: "99e40804-df2d-4828-ade9-25109976d4c3")
Oct 14 15:47:13 minikube kubelet[909]: I1014 15:47:13.940507     909 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99e40804-df2d-4828-ade9-25109976d4c3-gcp-project" (OuterVolumeSpecName: "gcp-project") pod "99e40804-df2d-4828-ade9-25109976d4c3" (UID: "99e40804-df2d-4828-ade9-25109976d4c3"). InnerVolumeSpecName "gcp-project". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 15:47:13 minikube kubelet[909]: I1014 15:47:13.952415     909 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99e40804-df2d-4828-ade9-25109976d4c3-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "99e40804-df2d-4828-ade9-25109976d4c3" (UID: "99e40804-df2d-4828-ade9-25109976d4c3"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:47:13 minikube kubelet[909]: I1014 15:47:13.957402     909 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99e40804-df2d-4828-ade9-25109976d4c3-default-token-l7rj7" (OuterVolumeSpecName: "default-token-l7rj7") pod "99e40804-df2d-4828-ade9-25109976d4c3" (UID: "99e40804-df2d-4828-ade9-25109976d4c3"). InnerVolumeSpecName "default-token-l7rj7". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 15:47:14 minikube kubelet[909]: I1014 15:47:14.040414     909 reconciler.go:319] Volume detached for volume "default-token-l7rj7" (UniqueName: "kubernetes.io/secret/99e40804-df2d-4828-ade9-25109976d4c3-default-token-l7rj7") on node "minikube" DevicePath ""
Oct 14 15:47:14 minikube kubelet[909]: I1014 15:47:14.040468     909 reconciler.go:319] Volume detached for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/99e40804-df2d-4828-ade9-25109976d4c3-webhook-certs") on node "minikube" DevicePath ""
Oct 14 15:47:14 minikube kubelet[909]: I1014 15:47:14.040483     909 reconciler.go:319] Volume detached for volume "gcp-project" (UniqueName: "kubernetes.io/host-path/99e40804-df2d-4828-ade9-25109976d4c3-gcp-project") on node "minikube" DevicePath ""
Oct 14 15:47:14 minikube kubelet[909]: E1014 15:47:14.293635     909 kubelet_pods.go:1250] Failed killing the pod "gcp-auth-5c58dc7db8-gtnfq": failed to "KillContainer" for "gcp-auth" with KillContainerError: "rpc error: code = Unknown desc = Error: No such container: 4cde311df231bd1ba800f51580091894877abf4531796751d37db97dfc4e9f70"
Oct 14 15:47:18 minikube kubelet[909]: I1014 15:47:18.284869     909 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 0b4f01f9989844c0bbe60bf7b61954cd225e3250c422799c9e7a801f61e2e83e
Oct 14 15:47:20 minikube kubelet[909]: E1014 15:47:20.536345     909 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Oct 14 15:47:20 minikube kubelet[909]: E1014 15:47:20.536489     909 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Oct 14 15:47:30 minikube kubelet[909]: E1014 15:47:30.549418     909 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Oct 14 15:47:30 minikube kubelet[909]: E1014 15:47:30.549461     909 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Oct 14 15:47:40 minikube kubelet[909]: E1014 15:47:40.573155     909 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Oct 14 15:47:40 minikube kubelet[909]: E1014 15:47:40.573272     909 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Oct 14 15:47:51 minikube kubelet[909]: E1014 15:47:51.518504     909 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Oct 14 15:47:51 minikube kubelet[909]: E1014 15:47:51.528905     909 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics

==> storage-provisioner [0b4f01f99898] <==
F1014 15:47:04.979687       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": x509: certificate signed by unknown authority

==> storage-provisioner [5bcd4905ae5e] <==
I1014 15:47:18.522868       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
I1014 15:47:35.930096       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1014 15:47:35.930580       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c675e0fc-2044-4c9a-9a0f-6102bc435e6a", APIVersion:"v1", ResourceVersion:"72828", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_029212b9-51d4-4148-920c-b8eecf774a6e became leader
I1014 15:47:35.930613       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_029212b9-51d4-4148-920c-b8eecf774a6e!
I1014 15:47:36.031296       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_029212b9-51d4-4148-920c-b8eecf774a6e!

@tstromberg tstromberg added kind/bug Categorizes issue or PR as related to a bug. area/addons labels Oct 14, 2020
@tstromberg tstromberg changed the title "Error from server (Invalid): error when applying patch" when starting minikube with gcp-auth previously enabled gcp-auth returned an error: field is immutable Oct 14, 2020
@tstromberg
Contributor

This error is probably safe to ignore, but it's something we'll need to address nonetheless.

Some Kubernetes objects (such as the certificate-generation Jobs this addon creates) are immutable, and the addon doesn't yet handle being enabled a second time on a running cluster.
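For anyone hitting this before a fix lands, a possible manual workaround is to delete the completed certificate Job so the next apply can recreate it rather than patch its immutable `spec.template`. This is a sketch, not an official procedure; the Job and namespace names below are taken from the error output in this issue, so adjust them if yours differ:

```shell
# A Job's spec.template is immutable, so `kubectl apply` fails when the addon
# re-applies its manifests with a changed pod template. Deleting the completed
# Job lets the next enable recreate it cleanly.
kubectl delete job -n gcp-auth gcp-auth-certs-create --ignore-not-found

# The addon may also ship a companion patch Job (name assumed here):
kubectl delete job -n gcp-auth gcp-auth-certs-patch --ignore-not-found

# Then re-enable the addon:
minikube addons enable gcp-auth
```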

@sharifelgamal
Collaborator

There are some race conditions around enabling and disabling the addon consecutively; the errors are spurious and the addon should still work.

@medyagh medyagh added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Oct 14, 2020
@medyagh
Member

medyagh commented Oct 28, 2020

This is fixed in 1.14.0.

@medyagh medyagh closed this as completed Oct 28, 2020
@matthewmichihara
Author

@medyagh are you sure? I ask because the minikube start output above shows I was already running version 1.14.0 when I hit this.

@sharifelgamal
Collaborator

@matthewmichihara Can you try with 1.14.2 and see if it still happens? Just reopen if you can repro it.

@matthewmichihara
Author

matthewmichihara commented Apr 5, 2021

I saw this pop up again with minikube 1.18.1 via Cloud Code. @medyagh @sharifelgamal can we re-open this issue?

/Users/michihara/Library/Application Support/google-cloud-tools-java/managed-cloud-sdk/LATEST/google-cloud-sdk/bin/minikube start --wait true --interactive false --delete-on-failure
* minikube v1.18.1 on Darwin 11.2.3
  - MINIKUBE_WANTUPDATENOTIFICATION=false
* Kubernetes 1.20.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.20.2
* Using the docker driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Restarting existing docker container for "minikube" ...
* Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v4
  - Using image jettech/kube-webhook-certgen:v1.3.0
  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.4
* Verifying gcp-auth addon...
* Your GCP credentials will now be mounted into every pod created in the minikube cluster.
* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
! Enabling 'gcp-auth' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: Process exited with status 1
stdout:
namespace/gcp-auth unchanged
service/gcp-auth unchanged
serviceaccount/minikube-gcp-auth-certs unchanged
clusterrole.rbac.authorization.k8s.io/minikube-gcp-auth-certs unchanged
clusterrolebinding.rbac.authorization.k8s.io/minikube-gcp-auth-certs unchanged
deployment.apps/gcp-auth unchanged
mutatingwebhookconfiguration.admissionregistration.k8s.io/gcp-auth-webhook-cfg unchanged
 
stderr:
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"name\":\"gcp-auth-certs-create\",\"namespace\":\"gcp-auth\"},\"spec\":{\"template\":{\"metadata\":{\"name\":\"gcp-auth-certs-create\"},\"spec\":{\"containers\":[{\"args\":[\"create\",\"--host=gcp-auth,gcp-auth.gcp-auth,gcp-auth.gcp-auth.svc\",\"--namespace=gcp-auth\",\"--secret-name=gcp-auth-certs\"],\"image\":\"jettech/kube-webhook-certgen:v1.3.0@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"create\"}],\"restartPolicy\":\"OnFailure\",\"serviceAccountName\":\"minikube-gcp-auth-certs\"}}}}
"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"create"}],"containers":[{"image":"jettech/kube-webhook-certgen:v1.3.0@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689","name":"create"}]}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "gcp-auth-certs-create", Namespace: "gcp-auth"
for: "/etc/kubernetes/addons/gcp-auth-webhook.yaml": Job.batch "gcp-auth-certs-create" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-certs-create", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"controller-uid":"987a4bdc-e645-4aa5-b956-71ae9cbaeb27", "job-name":"gcp-auth-certs-create"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"create", Image:"jettech/kube-webhook-certgen:v1.3.0@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689", Command:[]string(nil), Args:[]string{"create", "--host=gcp-auth,gcp-auth.gcp-auth,gcp-auth.gcp-auth.svc", "--namespace=gcp-auth", "--secret-name=gcp-auth-certs"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc0090bc3a0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string(nil), ServiceAccountName:"minikube-gcp-auth-certs", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc00ef9eb00), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"name\":\"gcp-auth-certs-patch\",\"namespace\":\"gcp-auth\"},\"spec\":{\"template\":{\"metadata\":{\"name\":\"gcp-auth-certs-patch\"},\"spec\":{\"containers\":[{\"args\":[\"patch\",\"--secret-name=gcp-auth-certs\",\"--namespace=gcp-auth\",\"--patch-validating=false\",\"--webhook-name=gcp-auth-webhook-cfg\"],\"image\":\"jettech/kube-webhook-certgen:v1.3.0@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"patch\"}],\"restartPolicy\":\"OnFailure\",\"serviceAccountName\":\"minikube-gcp-auth-certs\"}}}}
"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"patch"}],"containers":[{"image":"jettech/kube-webhook-certgen:v1.3.0@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689","name":"patch"}]}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "gcp-auth-certs-patch", Namespace: "gcp-auth"
for: "/etc/kubernetes/addons/gcp-auth-webhook.yaml": Job.batch "gcp-auth-certs-patch" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-certs-patch", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"controller-uid":"981acbae-26ce-485f-9bdf-bf7de398b76d", "job-name":"gcp-auth-certs-patch"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"patch", Image:"jettech/kube-webhook-certgen:v1.3.0@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689", Command:[]string(nil), Args:[]string{"patch", "--secret-name=gcp-auth-certs", "--namespace=gcp-auth", "--patch-validating=false", "--webhook-name=gcp-auth-webhook-cfg"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc002b15720), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string(nil), ServiceAccountName:"minikube-gcp-auth-certs", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc009c0dd80), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable
]
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "" namespace by default
 
minikube started successfully.
Enabling GCP auth addon...
 
Failed to enable GCP auth addon. Deployment will continue but GCP credentials will not be added to minikube. Please ensure you have up to date application default credentials (ADC) by running `gcloud auth login --update-adc`
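
For anyone hitting this before a fix lands: Kubernetes makes a Job's `spec.template` immutable after creation, so re-running `kubectl apply` on the addon's cert-generation Jobs fails with exactly this "field is immutable" error whenever the manifest changes (for example, a new certgen image after a minikube upgrade). A possible manual workaround, sketched below and not verified here (the Job names are taken from the error output above; the commands assume the default `minikube` profile), is to delete the stale Jobs so they get recreated instead of patched:

```shell
# Delete the two Jobs named in the "field is immutable" error so that
# `kubectl apply` can create fresh ones rather than patch the old spec.
minikube kubectl -- delete job -n gcp-auth gcp-auth-certs-create gcp-auth-certs-patch

# Toggle the addon so the Jobs are recreated from the updated manifest.
minikube addons disable gcp-auth
minikube addons enable gcp-auth
```

This only clears the symptom for the current upgrade; the underlying issue of the addon re-applying immutable Job specs still needs a fix in minikube itself.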

@spowelljr spowelljr reopened this Apr 7, 2021
@medyagh medyagh added this to the v1.21.0 milestone May 3, 2021
@sharifelgamal sharifelgamal added the addon/gcp-auth Issues with the GCP Auth addon label May 5, 2021
@sharifelgamal (Collaborator) commented:
I'm working on a repro case for this so we can fix this once and for all.
