
[minikube] Upgrade from 19.2.0 to any newer version fails: could not translate host name "awx-postgres" to address: Name or service not known #445

Closed
Commifreak opened this issue Jul 1, 2021 · 11 comments


Commifreak commented Jul 1, 2021

Hi,

I migrated from Docker to minikube and that worked great, as did the last few updates. But now every newer update ends in an error and a blank AWX page.

Current config

My minikube start command until now:
minikube start --cpus=4 --memory=8g --driver=docker --addons=ingress

which I have now updated to:
minikube start --cpus=4 --memory=8g --driver=docker --addons=ingress --cni=flannel --install-addons=true --kubernetes-version=stable

I got an ingress error at first, but fixed it via kubectl -n kube-system edit deployment ingress-nginx-controller and changing the image path - see kubernetes/minikube#8756 (comment)
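
For reference, a non-interactive equivalent of that edit would be something like this (just a sketch; the controller container name and the v0.44.0 image path are the ones that show up in the minikube output and pod descriptions below, so adjust them if your addon expects something else):

# point the controller container at a pullable image path, then wait for the rollout
kubectl -n kube-system set image deployment/ingress-nginx-controller \
  controller=k8s.gcr.io/ingress-nginx/controller:v0.44.0
kubectl -n kube-system rollout status deployment/ingress-nginx-controller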

My AWX YAML:

---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  service_type: nodeport
  ingress_type: none
  hostname: ansible-qlb.jki.intern
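
In case it matters for reproducing: a minimal way to re-apply this spec and watch the operator reconcile it, assuming the file above is saved as awx.yml and the awx-operator deployment lives in the default namespace (it does in the pod list below):

# re-apply the AWX custom resource and follow the operator's reconciliation
kubectl apply -f awx.yml
kubectl logs -f deployment/awx-operator
kubectl get pods -w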

Latest minikube start output:

Starting...
* minikube v1.21.0 on Ubuntu 20.04
* Using the docker driver based on existing profile
! Your cgroup does not allow setting memory.
  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
! Your cgroup does not allow setting memory.
  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Downloading Kubernetes v1.20.7 preload ...
    > preloaded-images-k8s-v11-v1...: 492.20 MiB / 492.20 MiB  100.00% 21.66 Mi
* Restarting existing docker container for "minikube" ...
* Found network options:
  - NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24
  - http_proxy=http://172.31.42.58:3128
  - https_proxy=http://172.31.42.58:3128
! This container is having trouble accessing https://k8s.gcr.io
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
* Preparing Kubernetes v1.20.7 on Docker 20.10.3 ...
  - env NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24
  - env HTTP_PROXY=http://172.31.42.58:3128
  - env HTTPS_PROXY=http://172.31.42.58:3128
* Configuring Flannel (Container Networking Interface) ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
  - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
* Verifying ingress addon...
* Enabled addons: storage-provisioner, default-storageclass, ingress
* kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
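
Since the error in the title is a name-resolution failure for awx-postgres, a quick sanity check after the CNI change would be something along these lines (a sketch; it assumes getent is available inside the awx image and that the operator created a Service named awx-postgres):

# does the service name resolve from inside the web container?
kubectl exec deployment/awx -c awx-web -- getent hosts awx-postgres
# do the Service and its endpoints exist?
kubectl get svc awx-postgres
kubectl get endpoints awx-postgres
# is CoreDNS itself healthy?
kubectl -n kube-system get pods -l k8s-app=kube-dns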

Current pods:

  :~$ minikube kubectl -- describe po -A
    > kubectl.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubectl: 38.36 MiB / 38.36 MiB [-------------] 100.00% 18.01 MiB p/s 2.3s
Name:         awx-6d97bb8b9f-b8xf6
Namespace:    default
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Fri, 11 Jun 2021 06:53:43 +0200
Labels:       app.kubernetes.io/component=awx
              app.kubernetes.io/managed-by=awx-operator
              app.kubernetes.io/name=awx
              app.kubernetes.io/part-of=awx
              app.kubernetes.io/version=19.2.0
              pod-template-hash=6d97bb8b9f
Annotations:  <none>
Status:       Running
IP:           172.17.0.5
IPs:
  IP:           172.17.0.5
Controlled By:  ReplicaSet/awx-6d97bb8b9f
Containers:
  redis:
    Container ID:  docker://c84b14d37154f72932452d122a8c884721a548e0cd967f7ecaf693dfa2d035bf
    Image:         docker.io/redis:latest
    Image ID:      docker-pullable://redis@sha256:7e2c6181ad5c425443b56c7c73a9cd6df24a122345847d1ea9bb86a5afc76325
    Port:          <none>
    Host Port:     <none>
    Args:
      redis-server
      /etc/redis.conf
    State:          Running
      Started:      Thu, 01 Jul 2021 14:06:57 +0200
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 01 Jul 2021 14:03:40 +0200
      Finished:     Thu, 01 Jul 2021 14:05:05 +0200
    Ready:          True
    Restart Count:  27
    Environment:    <none>
    Mounts:
      /data from awx-redis-data (rw)
      /etc/redis.conf from awx-redis-config (ro,path="redis.conf")
      /var/run/redis from awx-redis-socket (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from awx-token-7psvk (ro)
  awx-web:
    Container ID:   docker://19a41ebf8af1e3bbb40a4e7fce8c6d4283ddeea9c82770b2a6326156d4743a8f
    Image:          quay.io/ansible/awx:19.2.0
    Image ID:       docker-pullable://quay.io/ansible/awx@sha256:f7cdabee0da2ea195e3dab8a8b39f3f5f1f32f0d2ee3d0ac561ec7d640d7042d
    Port:           8052/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 01 Jul 2021 14:06:57 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Thu, 01 Jul 2021 14:03:40 +0200
      Finished:     Thu, 01 Jul 2021 14:05:05 +0200
    Ready:          True
    Restart Count:  27
    Requests:
      cpu:     1
      memory:  2Gi
    Environment:
      MY_POD_NAMESPACE:  default (v1:metadata.namespace)
    Mounts:
      /etc/nginx/nginx.conf from awx-nginx-conf (ro,path="nginx.conf")
      /etc/tower/SECRET_KEY from awx-secret-key (ro,path="SECRET_KEY")
      /etc/tower/conf.d/credentials.py from awx-application-credentials (ro,path="credentials.py")
      /etc/tower/conf.d/execution_environments.py from awx-application-credentials (ro,path="execution_environments.py")
      /etc/tower/conf.d/ldap.py from awx-application-credentials (ro,path="ldap.py")
      /etc/tower/settings.py from awx-settings (ro,path="settings.py")
      /var/lib/awx/projects from awx-projects (rw)
      /var/lib/awx/rsyslog from rsyslog-dir (rw)
      /var/run/awx-rsyslog from rsyslog-socket (rw)
      /var/run/redis from awx-redis-socket (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from awx-token-7psvk (ro)
      /var/run/supervisor from supervisor-socket (rw)
  awx-task:
    Container ID:  docker://4d7ca6f5789194e9148307620626c4e0e028368bd771616db613fa1c03590c32
    Image:         quay.io/ansible/awx:19.2.0
    Image ID:      docker-pullable://quay.io/ansible/awx@sha256:f7cdabee0da2ea195e3dab8a8b39f3f5f1f32f0d2ee3d0ac561ec7d640d7042d
    Port:          <none>
    Host Port:     <none>
    Args:
      /usr/bin/launch_awx_task.sh
    State:          Running
      Started:      Thu, 01 Jul 2021 14:06:58 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Thu, 01 Jul 2021 14:03:41 +0200
      Finished:     Thu, 01 Jul 2021 14:05:05 +0200
    Ready:          True
    Restart Count:  27
    Requests:
      cpu:     500m
      memory:  1Gi
    Environment:
      SUPERVISOR_WEB_CONFIG_PATH:  /etc/supervisord.conf
      AWX_SKIP_MIGRATIONS:         1
      MY_POD_UID:                   (v1:metadata.uid)
      MY_POD_IP:                    (v1:status.podIP)
      MY_POD_NAMESPACE:            default (v1:metadata.namespace)
    Mounts:
      /etc/tower/SECRET_KEY from awx-secret-key (ro,path="SECRET_KEY")
      /etc/tower/conf.d/credentials.py from awx-application-credentials (ro,path="credentials.py")
      /etc/tower/conf.d/execution_environments.py from awx-application-credentials (ro,path="execution_environments.py")
      /etc/tower/conf.d/ldap.py from awx-application-credentials (ro,path="ldap.py")
      /etc/tower/settings.py from awx-settings (ro,path="settings.py")
      /var/lib/awx/projects from awx-projects (rw)
      /var/lib/awx/rsyslog from rsyslog-dir (rw)
      /var/run/awx-rsyslog from rsyslog-socket (rw)
      /var/run/receptor from receptor-socket (rw)
      /var/run/redis from awx-redis-socket (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from awx-token-7psvk (ro)
      /var/run/supervisor from supervisor-socket (rw)
  awx-ee:
    Container ID:  docker://8d40c9d145b0d811bc16e803f6804943e982e9f6012abb1302269fade524f80f
    Image:         quay.io/ansible/awx-ee:0.3.0
    Image ID:      docker-pullable://quay.io/ansible/awx-ee@sha256:885facada773ef85bfd4fc952a268f3d6e4331d5d134e79c54bb2bb201f81968
    Port:          <none>
    Host Port:     <none>
    Args:
      receptor
      --config
      /etc/receptor.conf
    State:          Running
      Started:      Thu, 01 Jul 2021 14:06:58 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Thu, 01 Jul 2021 14:03:41 +0200
      Finished:     Thu, 01 Jul 2021 14:05:05 +0200
    Ready:          True
    Restart Count:  27
    Environment:    <none>
    Mounts:
      /etc/receptor.conf from awx-receptor-config (ro,path="receptor.conf")
      /var/lib/awx/projects from awx-projects (rw)
      /var/run/receptor from receptor-socket (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from awx-token-7psvk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  awx-application-credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  awx-app-credentials
    Optional:    false
  awx-secret-key:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  awx-secret-key
    Optional:    false
  awx-settings:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      awx-awx-configmap
    Optional:  false
  awx-nginx-conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      awx-awx-configmap
    Optional:  false
  awx-redis-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      awx-awx-configmap
    Optional:  false
  awx-redis-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  awx-redis-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  supervisor-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  rsyslog-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  receptor-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  rsyslog-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  awx-receptor-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      awx-awx-configmap
    Optional:  false
  awx-projects:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  awx-token-7psvk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  awx-token-7psvk
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age    From     Message
  ----     ------          ----   ----     -------
  Warning  FailedMount     161m   kubelet  MountVolume.SetUp failed for volume "awx-secret-key" : failed to sync secret cache: timed out waiting for the condition
  Normal   Pulled          161m   kubelet  Container image "quay.io/ansible/awx:19.2.0" already present on machine
  Normal   SandboxChanged  161m   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          161m   kubelet  Container image "docker.io/redis:latest" already present on machine
  Normal   Created         161m   kubelet  Created container redis
  Normal   Started         161m   kubelet  Started container redis
  Normal   Created         161m   kubelet  Created container awx-web
  Normal   Started         161m   kubelet  Started container awx-web
  Normal   Pulled          161m   kubelet  Container image "quay.io/ansible/awx:19.2.0" already present on machine
  Normal   Started         161m   kubelet  Started container awx-ee
  Normal   Started         161m   kubelet  Started container awx-task
  Normal   Created         161m   kubelet  Created container awx-task
  Normal   Created         161m   kubelet  Created container awx-ee
  Normal   Pulled          161m   kubelet  Container image "quay.io/ansible/awx-ee:0.3.0" already present on machine
  Normal   SandboxChanged  143m   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          143m   kubelet  Container image "docker.io/redis:latest" already present on machine
  Normal   Created         143m   kubelet  Created container redis
  Normal   Started         143m   kubelet  Started container redis
  Normal   Pulled          143m   kubelet  Container image "quay.io/ansible/awx:19.2.0" already present on machine
  Normal   Created         143m   kubelet  Created container awx-web
  Normal   Started         143m   kubelet  Started container awx-web
  Normal   Pulled          143m   kubelet  Container image "quay.io/ansible/awx:19.2.0" already present on machine
  Normal   Created         143m   kubelet  Created container awx-task
  Normal   Started         143m   kubelet  Started container awx-task
  Normal   Pulled          143m   kubelet  Container image "quay.io/ansible/awx-ee:0.3.0" already present on machine
  Normal   Created         143m   kubelet  Created container awx-ee
  Normal   Started         143m   kubelet  Started container awx-ee
  Normal   SandboxChanged  21m    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          21m    kubelet  Container image "quay.io/ansible/awx-ee:0.3.0" already present on machine
  Normal   Created         21m    kubelet  Created container redis
  Normal   Started         21m    kubelet  Started container redis
  Normal   Pulled          21m    kubelet  Container image "quay.io/ansible/awx:19.2.0" already present on machine
  Normal   Created         21m    kubelet  Created container awx-web
  Normal   Pulled          21m    kubelet  Container image "docker.io/redis:latest" already present on machine
  Normal   Pulled          21m    kubelet  Container image "quay.io/ansible/awx:19.2.0" already present on machine
  Normal   Created         21m    kubelet  Created container awx-task
  Normal   Started         21m    kubelet  Started container awx-task
  Normal   Started         21m    kubelet  Started container awx-web
  Normal   Created         21m    kubelet  Created container awx-ee
  Normal   Started         21m    kubelet  Started container awx-ee
  Normal   SandboxChanged  9m55s  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          9m54s  kubelet  Container image "quay.io/ansible/awx:19.2.0" already present on machine
  Normal   Pulled          9m54s  kubelet  Container image "docker.io/redis:latest" already present on machine
  Normal   Created         9m54s  kubelet  Created container redis
  Normal   Started         9m54s  kubelet  Started container redis
  Normal   Started         9m53s  kubelet  Started container awx-ee
  Normal   Started         9m53s  kubelet  Started container awx-web
  Normal   Pulled          9m53s  kubelet  Container image "quay.io/ansible/awx:19.2.0" already present on machine
  Normal   Created         9m53s  kubelet  Created container awx-task
  Normal   Started         9m53s  kubelet  Started container awx-task
  Normal   Pulled          9m53s  kubelet  Container image "quay.io/ansible/awx-ee:0.3.0" already present on machine
  Normal   Created         9m53s  kubelet  Created container awx-ee
  Normal   Created         9m53s  kubelet  Created container awx-web
  Warning  FailedMount     3m53s  kubelet  MountVolume.SetUp failed for volume "awx-nginx-conf" : failed to sync configmap cache: timed out waiting for the condition
  Warning  FailedMount     3m53s  kubelet  MountVolume.SetUp failed for volume "awx-redis-config" : failed to sync configmap cache: timed out waiting for the condition
  Warning  FailedMount     3m53s  kubelet  MountVolume.SetUp failed for volume "awx-secret-key" : failed to sync secret cache: timed out waiting for the condition
  Warning  FailedMount     3m53s  kubelet  MountVolume.SetUp failed for volume "awx-token-7psvk" : failed to sync secret cache: timed out waiting for the condition
  Warning  FailedMount     3m53s  kubelet  MountVolume.SetUp failed for volume "awx-receptor-config" : failed to sync configmap cache: timed out waiting for the condition
  Warning  FailedMount     3m53s  kubelet  MountVolume.SetUp failed for volume "awx-application-credentials" : failed to sync secret cache: timed out waiting for the condition
  Warning  FailedMount     3m53s  kubelet  MountVolume.SetUp failed for volume "awx-settings" : failed to sync configmap cache: timed out waiting for the condition
  Normal   Created         3m52s  kubelet  Created container awx-web
  Normal   Pulled          3m52s  kubelet  Container image "docker.io/redis:latest" already present on machine
  Normal   Created         3m52s  kubelet  Created container redis
  Normal   Started         3m52s  kubelet  Started container redis
  Normal   Pulled          3m52s  kubelet  Container image "quay.io/ansible/awx:19.2.0" already present on machine
  Normal   SandboxChanged  3m52s  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Started         3m52s  kubelet  Started container awx-web
  Normal   Pulled          3m52s  kubelet  Container image "quay.io/ansible/awx:19.2.0" already present on machine
  Normal   Started         3m51s  kubelet  Started container awx-ee
  Normal   Started         3m51s  kubelet  Started container awx-task
  Normal   Created         3m51s  kubelet  Created container awx-task
  Normal   Created         3m51s  kubelet  Created container awx-ee
  Normal   Pulled          3m51s  kubelet  Container image "quay.io/ansible/awx-ee:0.3.0" already present on machine
  Normal   SandboxChanged  36s    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          35s    kubelet  Container image "docker.io/redis:latest" already present on machine
  Normal   Created         35s    kubelet  Created container redis
  Normal   Started         35s    kubelet  Started container redis
  Normal   Pulled          35s    kubelet  Container image "quay.io/ansible/awx:19.2.0" already present on machine
  Normal   Created         35s    kubelet  Created container awx-web
  Normal   Created         35s    kubelet  Created container awx-task
  Normal   Started         35s    kubelet  Started container awx-web
  Normal   Pulled          35s    kubelet  Container image "quay.io/ansible/awx:19.2.0" already present on machine
  Normal   Started         34s    kubelet  Started container awx-ee
  Normal   Started         34s    kubelet  Started container awx-task
  Normal   Pulled          34s    kubelet  Container image "quay.io/ansible/awx-ee:0.3.0" already present on machine
  Normal   Created         34s    kubelet  Created container awx-ee


Name:         awx-operator-5dd757f594-2b9tr
Namespace:    default
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Fri, 11 Jun 2021 06:39:49 +0200
Labels:       name=awx-operator
              pod-template-hash=5dd757f594
Annotations:  <none>
Status:       Running
IP:           172.17.0.6
IPs:
  IP:           172.17.0.6
Controlled By:  ReplicaSet/awx-operator-5dd757f594
Containers:
  awx-operator:
    Container ID:   docker://d87f114b3cf91b51d5948435bca629b8acaa2803f52879168819de5e8e9ae67b
    Image:          quay.io/ansible/awx-operator:0.10.0
    Image ID:       docker-pullable://quay.io/ansible/awx-operator@sha256:ab354e85f782a787f384687296833a9be12d52ffe62c09bbd10a810a28119f69
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 01 Jul 2021 14:06:59 +0200
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 01 Jul 2021 14:04:26 +0200
      Finished:     Thu, 01 Jul 2021 14:05:05 +0200
    Ready:          True
    Restart Count:  38
    Liveness:       http-get http://:6789/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Environment:
      WATCH_NAMESPACE:
      POD_NAME:            awx-operator-5dd757f594-2b9tr (v1:metadata.name)
      OPERATOR_NAME:       awx-operator
      ANSIBLE_GATHERING:   explicit
      OPERATOR_VERSION:    0.10.0
      ANSIBLE_DEBUG_LOGS:  false
    Mounts:
      /tmp/ansible-operator/runner from runner (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from awx-operator-token-7lwjl (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  runner:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  awx-operator-token-7lwjl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  awx-operator-token-7lwjl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                   From     Message
  ----     ------          ----                  ----     -------
  Normal   SandboxChanged  162m                  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          161m                  kubelet  Successfully pulled image "quay.io/ansible/awx-operator:0.10.0" in 1.814774068s
  Warning  Unhealthy       161m                  kubelet  Liveness probe failed: Get "http://172.17.0.3:6789/healthz": dial tcp 172.17.0.3:6789: connect: connection refused
  Warning  BackOff         161m (x2 over 161m)   kubelet  Back-off restarting failed container
  Normal   Pulling         161m (x2 over 161m)   kubelet  Pulling image "quay.io/ansible/awx-operator:0.10.0"
  Normal   Started         161m (x2 over 161m)   kubelet  Started container awx-operator
  Normal   Pulled          161m                  kubelet  Successfully pulled image "quay.io/ansible/awx-operator:0.10.0" in 1.365658871s
  Normal   Created         161m (x2 over 161m)   kubelet  Created container awx-operator
  Normal   SandboxChanged  143m                  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          143m                  kubelet  Successfully pulled image "quay.io/ansible/awx-operator:0.10.0" in 2.088682171s
  Warning  Unhealthy       143m                  kubelet  Liveness probe failed: Get "http://172.17.0.2:6789/healthz": dial tcp 172.17.0.2:6789: connect: connection refused
  Warning  BackOff         142m (x2 over 142m)   kubelet  Back-off restarting failed container
  Normal   Pulling         142m (x2 over 143m)   kubelet  Pulling image "quay.io/ansible/awx-operator:0.10.0"
  Normal   Started         142m (x2 over 143m)   kubelet  Started container awx-operator
  Normal   Created         142m (x2 over 143m)   kubelet  Created container awx-operator
  Normal   Pulled          142m                  kubelet  Successfully pulled image "quay.io/ansible/awx-operator:0.10.0" in 1.416479293s
  Normal   SandboxChanged  21m                   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          21m                   kubelet  Successfully pulled image "quay.io/ansible/awx-operator:0.10.0" in 1.600233364s
  Warning  Unhealthy       21m                   kubelet  Liveness probe failed: Get "http://172.17.0.2:6789/healthz": dial tcp 172.17.0.2:6789: connect: connection refused
  Warning  BackOff         20m (x2 over 21m)     kubelet  Back-off restarting failed container
  Normal   Pulling         20m (x2 over 21m)     kubelet  Pulling image "quay.io/ansible/awx-operator:0.10.0"
  Normal   Started         20m (x2 over 21m)     kubelet  Started container awx-operator
  Normal   Pulled          20m                   kubelet  Successfully pulled image "quay.io/ansible/awx-operator:0.10.0" in 1.285045254s
  Normal   Created         20m (x2 over 21m)     kubelet  Created container awx-operator
  Normal   SandboxChanged  9m55s                 kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling         9m54s                 kubelet  Pulling image "quay.io/ansible/awx-operator:0.10.0"
  Normal   Pulled          9m52s                 kubelet  Successfully pulled image "quay.io/ansible/awx-operator:0.10.0" in 1.740081344s
  Normal   Created         9m52s                 kubelet  Created container awx-operator
  Normal   Started         9m52s                 kubelet  Started container awx-operator
  Normal   SandboxChanged  3m54s                 kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          3m52s                 kubelet  Successfully pulled image "quay.io/ansible/awx-operator:0.10.0" in 1.478372999s
  Warning  Unhealthy       3m25s                 kubelet  Liveness probe failed: Get "http://172.17.0.4:6789/healthz": dial tcp 172.17.0.4:6789: connect: connection refused
  Warning  BackOff         3m21s                 kubelet  Back-off restarting failed container
  Normal   Pulling         3m7s (x2 over 3m53s)  kubelet  Pulling image "quay.io/ansible/awx-operator:0.10.0"
  Normal   Started         3m6s (x2 over 3m51s)  kubelet  Started container awx-operator
  Normal   Created         3m6s (x2 over 3m52s)  kubelet  Created container awx-operator
  Normal   Pulled          3m6s                  kubelet  Successfully pulled image "quay.io/ansible/awx-operator:0.10.0" in 1.324950481s
  Normal   SandboxChanged  36s                   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling         35s                   kubelet  Pulling image "quay.io/ansible/awx-operator:0.10.0"
  Normal   Pulled          34s                   kubelet  Successfully pulled image "quay.io/ansible/awx-operator:0.10.0" in 1.665972356s
  Normal   Started         33s                   kubelet  Started container awx-operator
  Normal   Created         33s                   kubelet  Created container awx-operator


Name:         awx-postgres-0
Namespace:    default
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Fri, 11 Jun 2021 06:53:34 +0200
Labels:       app.kubernetes.io/component=database
              app.kubernetes.io/instance=postgres-awx
              app.kubernetes.io/managed-by=awx-operator
              app.kubernetes.io/name=postgres
              app.kubernetes.io/part-of=awx
              controller-revision-hash=awx-postgres-78d8b767c8
              statefulset.kubernetes.io/pod-name=awx-postgres-0
Annotations:  <none>
Status:       Running
IP:           172.17.0.3
IPs:
  IP:           172.17.0.3
Controlled By:  StatefulSet/awx-postgres
Containers:
  postgres:
    Container ID:   docker://86cf33f6f31b4fde6e158a9371f1f21f7cc8f99a098bf2a68a728151a6a9fdf6
    Image:          postgres:12
    Image ID:       docker-pullable://postgres@sha256:1ad9a00724bdd8d8da9f2d8a782021a8503eff908c9413b5b34f22d518088f26
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 01 Jul 2021 14:06:57 +0200
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 01 Jul 2021 14:03:43 +0200
      Finished:     Thu, 01 Jul 2021 14:05:06 +0200
    Ready:          True
    Restart Count:  27
    Environment:
      POSTGRESQL_DATABASE:        <set to the key 'database' in secret 'awx-postgres-configuration'>  Optional: false
      POSTGRESQL_USER:            <set to the key 'username' in secret 'awx-postgres-configuration'>  Optional: false
      POSTGRESQL_PASSWORD:        <set to the key 'password' in secret 'awx-postgres-configuration'>  Optional: false
      POSTGRES_DB:                <set to the key 'database' in secret 'awx-postgres-configuration'>  Optional: false
      POSTGRES_USER:              <set to the key 'username' in secret 'awx-postgres-configuration'>  Optional: false
      POSTGRES_PASSWORD:          <set to the key 'password' in secret 'awx-postgres-configuration'>  Optional: false
      PGDATA:                     /var/lib/postgresql/data/pgdata
      POSTGRES_INITDB_ARGS:       --auth-host=scram-sha-256
      POSTGRES_HOST_AUTH_METHOD:  scram-sha-256
    Mounts:
      /var/lib/postgresql/data from postgres (rw,path="data")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bsj94 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  postgres:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  postgres-awx-postgres-0
    ReadOnly:   false
  default-token-bsj94:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-bsj94
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age    From     Message
  ----    ------          ----   ----     -------
  Normal  SandboxChanged  161m   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          161m   kubelet  Container image "postgres:12" already present on machine
  Normal  Created         161m   kubelet  Created container postgres
  Normal  Started         161m   kubelet  Started container postgres
  Normal  SandboxChanged  143m   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          143m   kubelet  Container image "postgres:12" already present on machine
  Normal  Created         143m   kubelet  Created container postgres
  Normal  Started         143m   kubelet  Started container postgres
  Normal  SandboxChanged  21m    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          21m    kubelet  Container image "postgres:12" already present on machine
  Normal  Created         21m    kubelet  Created container postgres
  Normal  Started         21m    kubelet  Started container postgres
  Normal  SandboxChanged  9m55s  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          9m54s  kubelet  Container image "postgres:12" already present on machine
  Normal  Created         9m54s  kubelet  Created container postgres
  Normal  Started         9m54s  kubelet  Started container postgres
  Normal  SandboxChanged  3m50s  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          3m49s  kubelet  Container image "postgres:12" already present on machine
  Normal  Created         3m49s  kubelet  Created container postgres
  Normal  Started         3m49s  kubelet  Started container postgres
  Normal  SandboxChanged  36s    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          35s    kubelet  Container image "postgres:12" already present on machine
  Normal  Created         35s    kubelet  Created container postgres
  Normal  Started         35s    kubelet  Started container postgres


Name:         ingress-nginx-admission-create-r95jq
Namespace:    ingress-nginx
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Thu, 10 Jun 2021 14:41:46 +0200
Labels:       addonmanager.kubernetes.io/mode=Reconcile
              app.kubernetes.io/component=admission-webhook
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              controller-uid=455196aa-5869-4dbc-8e96-3905681bd7be
              job-name=ingress-nginx-admission-create
Annotations:  <none>
Status:       Succeeded
IP:           172.17.0.8
IPs:
  IP:           172.17.0.8
Controlled By:  Job/ingress-nginx-admission-create
Containers:
  create:
    Container ID:  docker://86f45c81ac48f669e3a0923b72db6d9e6efd168e0848bf8b1bd85b6684c33d75
    Image:         docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7
    Image ID:      docker-pullable://jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7
    Port:          <none>
    Host Port:     <none>
    Args:
      create
      --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
      --namespace=$(POD_NAMESPACE)
      --secret-name=ingress-nginx-admission
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 10 Jun 2021 14:42:05 +0200
      Finished:     Thu, 10 Jun 2021 14:42:05 +0200
    Ready:          False
    Restart Count:  0
    Environment:
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-admission-token-xb8gt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  ingress-nginx-admission-token-xb8gt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission-token-xb8gt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>


Name:         ingress-nginx-admission-patch-nvw4r
Namespace:    ingress-nginx
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Thu, 10 Jun 2021 14:41:46 +0200
Labels:       addonmanager.kubernetes.io/mode=Reconcile
              app.kubernetes.io/component=admission-webhook
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              controller-uid=629be209-202a-4363-9673-b7e66edcd44a
              job-name=ingress-nginx-admission-patch
Annotations:  <none>
Status:       Succeeded
IP:           172.17.0.7
IPs:
  IP:           172.17.0.7
Controlled By:  Job/ingress-nginx-admission-patch
Containers:
  patch:
    Container ID:  docker://18c46ddae2c80a24d29880b4f155cf02907c45c2c56c4680a485467fa8a5451f
    Image:         docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7
    Image ID:      docker-pullable://jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7
    Port:          <none>
    Host Port:     <none>
    Args:
      patch
      --webhook-name=ingress-nginx-admission
      --namespace=$(POD_NAMESPACE)
      --patch-mutating=false
      --secret-name=ingress-nginx-admission
      --patch-failure-policy=Fail
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 10 Jun 2021 14:42:06 +0200
      Finished:     Thu, 10 Jun 2021 14:42:06 +0200
    Ready:          False
    Restart Count:  1
    Environment:
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-admission-token-xb8gt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  ingress-nginx-admission-token-xb8gt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission-token-xb8gt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>


Name:         ingress-nginx-controller-5d88495688-j5j2f
Namespace:    ingress-nginx
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Thu, 01 Jul 2021 13:58:37 +0200
Labels:       addonmanager.kubernetes.io/mode=Reconcile
              app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              gcp-auth-skip-secret=true
              pod-template-hash=5d88495688
Annotations:  <none>
Status:       Running
IP:           172.17.0.4
IPs:
  IP:           172.17.0.4
Controlled By:  ReplicaSet/ingress-nginx-controller-5d88495688
Containers:
  controller:
    Container ID:  docker://3e1bc8e463d88a5c4721e2b5883efb7e5235436a5f5f2d3d0b7ef13c92b6908f
    Image:         k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a
    Image ID:      docker-pullable://k8s.gcr.io/ingress-nginx/controller@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a
    Ports:         80/TCP, 443/TCP, 8443/TCP
    Host Ports:    80/TCP, 443/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --report-node-internal-ip-address
      --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
      --udp-services-configmap=$(POD_NAMESPACE)/udp-services
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
    State:          Running
      Started:      Thu, 01 Jul 2021 14:06:57 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Thu, 01 Jul 2021 14:03:39 +0200
      Finished:     Thu, 01 Jul 2021 14:06:30 +0200
    Ready:          True
    Restart Count:  2
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-5d88495688-j5j2f (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-token-9drjf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  ingress-nginx-token-9drjf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-token-9drjf
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From                      Message
  ----     ------            ----                   ----                      -------
  Warning  FailedScheduling  21h                    default-scheduler         0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  143m                   default-scheduler         0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  21m                    default-scheduler         0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  9m59s                  default-scheduler         0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Normal   Scheduled         8m55s                  default-scheduler         Successfully assigned ingress-nginx/ingress-nginx-controller-5d88495688-j5j2f to minikube
  Warning  FailedScheduling  146m (x13 over 162m)   default-scheduler         0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Normal   Pulling           8m54s                  kubelet                   Pulling image "k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a"
  Normal   Pulled            8m46s                  kubelet                   Successfully pulled image "k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a" in 7.409190711s
  Normal   Created           8m46s                  kubelet                   Created container controller
  Normal   Started           8m45s                  kubelet                   Started container controller
  Normal   RELOAD            8m44s                  nginx-ingress-controller  NGINX reload triggered due to a change in configuration
  Normal   SandboxChanged    3m54s                  kubelet                   Pod sandbox changed, it will be killed and re-created.
  Normal   Started           3m53s                  kubelet                   Started container controller
  Normal   Created           3m53s                  kubelet                   Created container controller
  Normal   Pulled            3m53s                  kubelet                   Container image "k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a" already present on machine
  Warning  Unhealthy         3m30s (x2 over 3m40s)  kubelet                   Liveness probe failed: Get "http://172.17.0.3:10254/healthz": dial tcp 172.17.0.3:10254: connect: connection refused
  Warning  Unhealthy         3m29s (x2 over 3m39s)  kubelet                   Readiness probe failed: Get "http://172.17.0.3:10254/healthz": dial tcp 172.17.0.3:10254: connect: connection refused
  Normal   RELOAD            3m20s                  nginx-ingress-controller  NGINX reload triggered due to a change in configuration
  Warning  Unhealthy         3m20s                  kubelet                   Liveness probe failed: HTTP probe failed with statuscode: 500
  Normal   SandboxChanged    36s                    kubelet                   Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            35s                    kubelet                   Container image "k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a" already present on machine
  Normal   Created           35s                    kubelet                   Created container controller
  Normal   Started           35s                    kubelet                   Started container controller
  Normal   RELOAD            31s                    nginx-ingress-controller  NGINX reload triggered due to a change in configuration


Name:                 coredns-74ff55c5b-qzr6b
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 minikube/192.168.49.2
Start Time:           Tue, 30 Mar 2021 08:23:23 +0200
Labels:               k8s-app=kube-dns
                      pod-template-hash=74ff55c5b
Annotations:          <none>
Status:               Running
IP:                   172.17.0.2
IPs:
  IP:           172.17.0.2
Controlled By:  ReplicaSet/coredns-74ff55c5b
Containers:
  coredns:
    Container ID:  docker://f28346a7389bfc103941ee39b37e3b54fad4fd85dd7d0bf283c2587b789d6633
    Image:         k8s.gcr.io/coredns:1.7.0
    Image ID:      docker-pullable://k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Running
      Started:      Thu, 01 Jul 2021 14:06:54 +0200
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 01 Jul 2021 14:03:38 +0200
      Finished:     Thu, 01 Jul 2021 14:05:10 +0200
    Ready:          True
    Restart Count:  117
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-fsplt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-fsplt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-fsplt
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly op=Exists
                 node-role.kubernetes.io/control-plane:NoSchedule
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                    From     Message
  ----     ------          ----                   ----     -------
  Normal   SandboxChanged  161m                   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          161m                   kubelet  Container image "k8s.gcr.io/coredns:1.7.0" already present on machine
  Normal   Created         161m                   kubelet  Created container coredns
  Normal   Started         161m                   kubelet  Started container coredns
  Warning  Unhealthy       161m (x3 over 161m)    kubelet  Readiness probe failed: HTTP probe failed with statuscode: 503
  Normal   SandboxChanged  143m                   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          143m                   kubelet  Container image "k8s.gcr.io/coredns:1.7.0" already present on machine
  Normal   Created         143m                   kubelet  Created container coredns
  Normal   Started         143m                   kubelet  Started container coredns
  Warning  Unhealthy       143m (x3 over 143m)    kubelet  Readiness probe failed: HTTP probe failed with statuscode: 503
  Warning  FailedMount     21m                    kubelet  MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
  Warning  FailedMount     21m                    kubelet  MountVolume.SetUp failed for volume "coredns-token-fsplt" : failed to sync secret cache: timed out waiting for the condition
  Normal   SandboxChanged  21m                    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          21m                    kubelet  Container image "k8s.gcr.io/coredns:1.7.0" already present on machine
  Normal   Created         21m                    kubelet  Created container coredns
  Normal   Started         21m                    kubelet  Started container coredns
  Warning  Unhealthy       21m (x3 over 21m)      kubelet  Readiness probe failed: HTTP probe failed with statuscode: 503
  Normal   SandboxChanged  9m56s                  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          9m55s                  kubelet  Container image "k8s.gcr.io/coredns:1.7.0" already present on machine
  Normal   Created         9m55s                  kubelet  Created container coredns
  Normal   Started         9m54s                  kubelet  Started container coredns
  Normal   SandboxChanged  3m55s                  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          3m55s                  kubelet  Container image "k8s.gcr.io/coredns:1.7.0" already present on machine
  Normal   Created         3m55s                  kubelet  Created container coredns
  Normal   Started         3m55s                  kubelet  Started container coredns
  Warning  Unhealthy       3m24s (x3 over 3m44s)  kubelet  Readiness probe failed: HTTP probe failed with statuscode: 503
  Warning  Unhealthy       3m14s                  kubelet  Readiness probe failed: Get "http://172.17.0.2:8181/ready": dial tcp 172.17.0.2:8181: connect: connection refused
  Normal   SandboxChanged  40s                    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          39s                    kubelet  Container image "k8s.gcr.io/coredns:1.7.0" already present on machine
  Normal   Created         39s                    kubelet  Created container coredns
  Normal   Started         39s                    kubelet  Started container coredns
  Warning  Unhealthy       14s (x3 over 34s)      kubelet  Readiness probe failed: HTTP probe failed with statuscode: 503


Name:                 etcd-minikube
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 minikube/192.168.49.2
Start Time:           Thu, 01 Jul 2021 14:06:42 +0200
Labels:               component=etcd
                      tier=control-plane
Annotations:          kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.49.2:2379
                      kubernetes.io/config.hash: c31fe6a5afdd142cf3450ac972274b36
                      kubernetes.io/config.mirror: c31fe6a5afdd142cf3450ac972274b36
                      kubernetes.io/config.seen: 2021-03-30T06:23:07.413051499Z
                      kubernetes.io/config.source: file
Status:               Running
IP:                   192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  Node/minikube
Containers:
  etcd:
    Container ID:  docker://b4822a3c3ca2ea762c18638ed7601cbd4b5d0ab4d83cc07a9701234077f81fee
    Image:         k8s.gcr.io/etcd:3.4.13-0
    Image ID:      docker-pullable://k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2
    Port:          <none>
    Host Port:     <none>
    Command:
      etcd
      --advertise-client-urls=https://192.168.49.2:2379
      --cert-file=/var/lib/minikube/certs/etcd/server.crt
      --client-cert-auth=true
      --data-dir=/var/lib/minikube/etcd
      --initial-advertise-peer-urls=https://192.168.49.2:2380
      --initial-cluster=minikube=https://192.168.49.2:2380
      --key-file=/var/lib/minikube/certs/etcd/server.key
      --listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379
      --listen-metrics-urls=http://127.0.0.1:2381
      --listen-peer-urls=https://192.168.49.2:2380
      --name=minikube
      --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt
      --peer-client-cert-auth=true
      --peer-key-file=/var/lib/minikube/certs/etcd/peer.key
      --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt
      --proxy-refresh-interval=70000
      --snapshot-count=10000
      --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt
    State:          Running
      Started:      Thu, 01 Jul 2021 14:06:43 +0200
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 01 Jul 2021 14:03:32 +0200
      Finished:     Thu, 01 Jul 2021 14:05:05 +0200
    Ready:          False
    Restart Count:  117
    Requests:
      cpu:                100m
      ephemeral-storage:  100Mi
      memory:             100Mi
    Liveness:             http-get http://127.0.0.1:2381/health delay=10s timeout=15s period=10s #success=1 #failure=8
    Startup:              http-get http://127.0.0.1:2381/health delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:          <none>
    Mounts:
      /var/lib/minikube/certs/etcd from etcd-certs (rw)
      /var/lib/minikube/etcd from etcd-data (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  etcd-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/certs/etcd
    HostPathType:  DirectoryOrCreate
  etcd-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/etcd
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute op=Exists
Events:
  Type    Reason          Age   From     Message
  ----    ------          ----  ----     -------
  Normal  SandboxChanged  162m  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          162m  kubelet  Container image "k8s.gcr.io/etcd:3.4.13-0" already present on machine
  Normal  Created         162m  kubelet  Created container etcd
  Normal  Started         162m  kubelet  Started container etcd
  Normal  SandboxChanged  143m  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          143m  kubelet  Container image "k8s.gcr.io/etcd:3.4.13-0" already present on machine
  Normal  Created         143m  kubelet  Created container etcd
  Normal  Started         143m  kubelet  Started container etcd
  Normal  SandboxChanged  21m   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          21m   kubelet  Container image "k8s.gcr.io/etcd:3.4.13-0" already present on machine
  Normal  Created         21m   kubelet  Created container etcd
  Normal  Started         21m   kubelet  Started container etcd
  Normal  Created         10m   kubelet  Created container etcd
  Normal  Pulled          10m   kubelet  Container image "k8s.gcr.io/etcd:3.4.13-0" already present on machine
  Normal  SandboxChanged  10m   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Started         10m   kubelet  Started container etcd
  Normal  SandboxChanged  4m1s  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          4m1s  kubelet  Container image "k8s.gcr.io/etcd:3.4.13-0" already present on machine
  Normal  Created         4m1s  kubelet  Created container etcd
  Normal  Started         4m1s  kubelet  Started container etcd
  Normal  SandboxChanged  51s   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          50s   kubelet  Container image "k8s.gcr.io/etcd:3.4.13-0" already present on machine
  Normal  Created         50s   kubelet  Created container etcd
  Normal  Started         50s   kubelet  Started container etcd


Name:         ingress-nginx-admission-create-v4xt8
Namespace:    kube-system
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Thu, 06 May 2021 14:10:04 +0200
Labels:       app.kubernetes.io/component=admission-webhook
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              controller-uid=1a745bc4-8e48-4106-81c2-17f5714afea3
              job-name=ingress-nginx-admission-create
Annotations:  <none>
Status:       Succeeded
IP:           172.17.0.4
IPs:
  IP:           172.17.0.4
Controlled By:  Job/ingress-nginx-admission-create
Containers:
  create:
    Container ID:  docker://df960e87e85818b152dc30aa90a01c46ace1e641d697546a7af242bfc7f25d1e
    Image:         jettech/kube-webhook-certgen:v1.2.2@sha256:da8122a78d7387909cf34a0f34db0cce672da1379ee4fd57c626a4afe9ac12b7
    Image ID:      docker-pullable://jettech/kube-webhook-certgen@sha256:da8122a78d7387909cf34a0f34db0cce672da1379ee4fd57c626a4afe9ac12b7
    Port:          <none>
    Host Port:     <none>
    Args:
      create
      --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.kube-system.svc
      --namespace=kube-system
      --secret-name=ingress-nginx-admission
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 06 May 2021 14:10:14 +0200
      Finished:     Thu, 06 May 2021 14:10:14 +0200
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-admission-token-t2hvh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  ingress-nginx-admission-token-t2hvh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission-token-t2hvh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>


Name:         ingress-nginx-admission-patch-kc2p4
Namespace:    kube-system
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Thu, 06 May 2021 14:10:04 +0200
Labels:       app.kubernetes.io/component=admission-webhook
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              controller-uid=39187671-8775-46c6-bb02-290b8c766149
              job-name=ingress-nginx-admission-patch
Annotations:  <none>
Status:       Succeeded
IP:           172.17.0.3
IPs:
  IP:           172.17.0.3
Controlled By:  Job/ingress-nginx-admission-patch
Containers:
  patch:
    Container ID:  docker://1563338a679cc8466ad8cfcbbc8342e97c5712cb2fc5563bc2c2777a71f133cb
    Image:         jettech/kube-webhook-certgen:v1.3.0@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689
    Image ID:      docker-pullable://jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689
    Port:          <none>
    Host Port:     <none>
    Args:
      patch
      --webhook-name=ingress-nginx-admission
      --namespace=kube-system
      --patch-mutating=false
      --secret-name=ingress-nginx-admission
      --patch-failure-policy=Fail
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 06 May 2021 14:10:28 +0200
      Finished:     Thu, 06 May 2021 14:10:28 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 06 May 2021 14:10:12 +0200
      Finished:     Thu, 06 May 2021 14:10:12 +0200
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-admission-token-t2hvh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  ingress-nginx-admission-token-t2hvh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission-token-t2hvh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>


Name:           ingress-nginx-controller-6958cdcd97-f2xn2
Namespace:      kube-system
Priority:       0
Node:           <none>
Labels:         addonmanager.kubernetes.io/mode=Reconcile
                app.kubernetes.io/component=controller
                app.kubernetes.io/instance=ingress-nginx
                app.kubernetes.io/name=ingress-nginx
                gcp-auth-skip-secret=true
                pod-template-hash=6958cdcd97
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/ingress-nginx-controller-6958cdcd97
Containers:
  controller:
    Image:       us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f
    Ports:       80/TCP, 443/TCP, 8443/TCP
    Host Ports:  80/TCP, 443/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
      --report-node-internal-ip-address
      --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
      --udp-services-configmap=$(POD_NAMESPACE)/udp-services
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-6958cdcd97-f2xn2 (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-token-znbzh (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  ingress-nginx-token-znbzh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-token-znbzh
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  10m    default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  10m    default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  9m59s  default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  3m55s  default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  39s    default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.


Name:                 kube-apiserver-minikube
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 minikube/192.168.49.2
Start Time:           Thu, 01 Jul 2021 14:06:42 +0200
Labels:               component=kube-apiserver
                      tier=control-plane
Annotations:          kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.49.2:8443
                      kubernetes.io/config.hash: 01d7e312da0f9c4176daa8464d4d1a50
                      kubernetes.io/config.mirror: 01d7e312da0f9c4176daa8464d4d1a50
                      kubernetes.io/config.seen: 2021-07-01T12:03:31.069699986Z
                      kubernetes.io/config.source: file
Status:               Running
IP:                   192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  Node/minikube
Containers:
  kube-apiserver:
    Container ID:  docker://6703ae1d93c40dbbabb05e522334f44d70100d72e5f5a495c1f1fc313dc6e0c3
    Image:         k8s.gcr.io/kube-apiserver:v1.20.7
    Image ID:      docker-pullable://k8s.gcr.io/kube-apiserver@sha256:5ab3d676c426bfb272fb7605e6978b90d5676913636a6105688862849961386f
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-apiserver
      --advertise-address=192.168.49.2
      --allow-privileged=true
      --authorization-mode=Node,RBAC
      --client-ca-file=/var/lib/minikube/certs/ca.crt
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
      --enable-bootstrap-token-auth=true
      --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt
      --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt
      --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key
      --etcd-servers=https://127.0.0.1:2379
      --insecure-port=0
      --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt
      --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt
      --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key
      --requestheader-allowed-names=front-proxy-client
      --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
      --requestheader-extra-headers-prefix=X-Remote-Extra-
      --requestheader-group-headers=X-Remote-Group
      --requestheader-username-headers=X-Remote-User
      --secure-port=8443
      --service-account-issuer=https://kubernetes.default.svc.cluster.local
      --service-account-key-file=/var/lib/minikube/certs/sa.pub
      --service-account-signing-key-file=/var/lib/minikube/certs/sa.key
      --service-cluster-ip-range=10.96.0.0/12
      --tls-cert-file=/var/lib/minikube/certs/apiserver.crt
      --tls-private-key-file=/var/lib/minikube/certs/apiserver.key
    State:          Running
      Started:      Thu, 01 Jul 2021 14:06:43 +0200
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 01 Jul 2021 14:03:32 +0200
      Finished:     Thu, 01 Jul 2021 14:05:06 +0200
    Ready:          True
    Restart Count:  1
    Requests:
      cpu:        250m
    Liveness:     http-get https://192.168.49.2:8443/livez delay=10s timeout=15s period=10s #success=1 #failure=8
    Readiness:    http-get https://192.168.49.2:8443/readyz delay=0s timeout=15s period=1s #success=1 #failure=3
    Startup:      http-get https://192.168.49.2:8443/livez delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
      /var/lib/minikube/certs from k8s-certs (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/certs
    HostPathType:  DirectoryOrCreate
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute op=Exists
Events:
  Type    Reason          Age   From     Message
  ----    ------          ----  ----     -------
  Normal  Pulled          4m1s  kubelet  Container image "k8s.gcr.io/kube-apiserver:v1.20.7" already present on machine
  Normal  Created         4m1s  kubelet  Created container kube-apiserver
  Normal  Started         4m1s  kubelet  Started container kube-apiserver
  Normal  SandboxChanged  51s   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          50s   kubelet  Container image "k8s.gcr.io/kube-apiserver:v1.20.7" already present on machine
  Normal  Created         50s   kubelet  Created container kube-apiserver
  Normal  Started         50s   kubelet  Started container kube-apiserver


Name:                 kube-controller-manager-minikube
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 minikube/192.168.49.2
Start Time:           Thu, 01 Jul 2021 13:57:28 +0200
Labels:               component=kube-controller-manager
                      tier=control-plane
Annotations:          kubernetes.io/config.hash: c7b8fa13668654de8887eea36ddd7b5b
                      kubernetes.io/config.mirror: c7b8fa13668654de8887eea36ddd7b5b
                      kubernetes.io/config.seen: 2021-07-01T12:03:31.069706367Z
                      kubernetes.io/config.source: file
Status:               Running
IP:                   192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  Node/minikube
Containers:
  kube-controller-manager:
    Container ID:  docker://2dcc581ebd94f98e4e30ee330aaf3a722239ff29bf6edd28ff1ce5b6aacf6b59
    Image:         k8s.gcr.io/kube-controller-manager:v1.20.7
    Image ID:      docker-pullable://k8s.gcr.io/kube-controller-manager@sha256:eb9b121cbe40cf9016b95cefd34fb9e62c4caf1516188a98b64f091d871a2d46
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-controller-manager
      --allocate-node-cidrs=true
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      --bind-address=127.0.0.1
      --client-ca-file=/var/lib/minikube/certs/ca.crt
      --cluster-cidr=10.244.0.0/16
      --cluster-name=mk
      --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt
      --cluster-signing-key-file=/var/lib/minikube/certs/ca.key
      --controllers=*,bootstrapsigner,tokencleaner
      --kubeconfig=/etc/kubernetes/controller-manager.conf
      --leader-elect=false
      --port=0
      --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
      --root-ca-file=/var/lib/minikube/certs/ca.crt
      --service-account-private-key-file=/var/lib/minikube/certs/sa.key
      --service-cluster-ip-range=10.96.0.0/12
      --use-service-account-credentials=true
    State:          Running
      Started:      Thu, 01 Jul 2021 14:06:43 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 01 Jul 2021 14:03:32 +0200
      Finished:     Thu, 01 Jul 2021 14:05:05 +0200
    Ready:          True
    Restart Count:  1
    Requests:
      cpu:        200m
    Liveness:     http-get https://127.0.0.1:10257/healthz delay=10s timeout=15s period=10s #success=1 #failure=8
    Startup:      http-get https://127.0.0.1:10257/healthz delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/kubernetes/controller-manager.conf from kubeconfig (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/libexec/kubernetes/kubelet-plugins/volume/exec from flexvolume-dir (rw)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
      /var/lib/minikube/certs from k8s-certs (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
  flexvolume-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/certs
    HostPathType:  DirectoryOrCreate
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/controller-manager.conf
    HostPathType:  FileOrCreate
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute op=Exists
Events:
  Type    Reason          Age   From     Message
  ----    ------          ----  ----     -------
  Normal  Pulled          4m1s  kubelet  Container image "k8s.gcr.io/kube-controller-manager:v1.20.7" already present on machine
  Normal  Created         4m1s  kubelet  Created container kube-controller-manager
  Normal  Started         4m1s  kubelet  Started container kube-controller-manager
  Normal  SandboxChanged  51s   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          50s   kubelet  Container image "k8s.gcr.io/kube-controller-manager:v1.20.7" already present on machine
  Normal  Created         50s   kubelet  Created container kube-controller-manager
  Normal  Started         50s   kubelet  Started container kube-controller-manager


Name:         kube-flannel-ds-amd64-tdq2h
Namespace:    kube-system
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Thu, 01 Jul 2021 13:46:12 +0200
Labels:       app=flannel
              controller-revision-hash=6674b6b67c
              pod-template-generation=1
              tier=node
Annotations:  <none>
Status:       Running
IP:           192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  DaemonSet/kube-flannel-ds-amd64
Init Containers:
  install-cni:
    Container ID:  docker://4428fd2e5a8fcc1ebf31739de794925385b33111fbdfb0e1b1e7deb0b3f0eb34
    Image:         quay.io/coreos/flannel:v0.12.0-amd64
    Image ID:      docker-pullable://quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 01 Jul 2021 14:06:55 +0200
      Finished:     Thu, 01 Jul 2021 14:06:55 +0200
    Ready:          True
    Restart Count:  3
    Environment:    <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-fsmxs (ro)
Containers:
  kube-flannel:
    Container ID:  docker://f485034b50fc1f63b51d1a516a6cd293aa180bc1c62fd861ef9c79cfd1d23df7
    Image:         quay.io/coreos/flannel:v0.12.0-amd64
    Image ID:      docker-pullable://quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/bin/flanneld
    Args:
      --ip-masq
      --kube-subnet-mgr
    State:          Running
      Started:      Thu, 01 Jul 2021 14:06:56 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Thu, 01 Jul 2021 14:04:28 +0200
      Finished:     Thu, 01 Jul 2021 14:06:30 +0200
    Ready:          True
    Restart Count:  4
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:       kube-flannel-ds-amd64-tdq2h (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-fsmxs (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  flannel-token-fsmxs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  flannel-token-fsmxs
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule op=Exists
                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason          Age                   From               Message
  ----     ------          ----                  ----               -------
  Normal   Scheduled       21m                   default-scheduler  Successfully assigned kube-system/kube-flannel-ds-amd64-tdq2h to minikube
  Normal   Pulling         21m                   kubelet            Pulling image "quay.io/coreos/flannel:v0.12.0-amd64"
  Normal   Pulled          21m                   kubelet            Successfully pulled image "quay.io/coreos/flannel:v0.12.0-amd64" in 4.05929413s
  Normal   Created         21m                   kubelet            Created container install-cni
  Normal   Started         21m                   kubelet            Started container install-cni
  Normal   Pulled          21m                   kubelet            Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal   Created         21m                   kubelet            Created container kube-flannel
  Normal   Started         21m                   kubelet            Started container kube-flannel
  Normal   SandboxChanged  9m56s                 kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          9m55s                 kubelet            Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal   Created         9m55s                 kubelet            Created container install-cni
  Normal   Started         9m55s                 kubelet            Started container install-cni
  Normal   Pulled          9m53s                 kubelet            Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal   Created         9m53s                 kubelet            Created container kube-flannel
  Normal   Started         9m53s                 kubelet            Started container kube-flannel
  Normal   Created         3m51s                 kubelet            Created container install-cni
  Normal   Pulled          3m51s                 kubelet            Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal   SandboxChanged  3m51s                 kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Started         3m50s                 kubelet            Started container install-cni
  Warning  BackOff         3m19s                 kubelet            Back-off restarting failed container
  Normal   Pulled          3m5s (x2 over 3m50s)  kubelet            Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal   Created         3m5s (x2 over 3m50s)  kubelet            Created container kube-flannel
  Normal   Started         3m5s (x2 over 3m50s)  kubelet            Started container kube-flannel
  Normal   SandboxChanged  39s                   kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          38s                   kubelet            Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal   Created         38s                   kubelet            Created container install-cni
  Normal   Started         38s                   kubelet            Started container install-cni
  Normal   Pulled          37s                   kubelet            Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal   Created         37s                   kubelet            Created container kube-flannel
  Normal   Started         37s                   kubelet            Started container kube-flannel


Name:                 kube-proxy-b9pxg
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 minikube/192.168.49.2
Start Time:           Thu, 01 Jul 2021 14:04:01 +0200
Labels:               controller-revision-hash=5bd89cc4b7
                      k8s-app=kube-proxy
                      pod-template-generation=2
Annotations:          <none>
Status:               Running
IP:                   192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  docker://4417fcb7de6dc9d55e755d9a38723ec029dc4397836f3f199cc7a49dd317ee6f
    Image:         k8s.gcr.io/kube-proxy:v1.20.7
    Image ID:      docker-pullable://k8s.gcr.io/kube-proxy@sha256:5d2be61150535ed37b7a5fa5a8239f89afee505ab2fae05247447851eed710a8
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    State:          Running
      Started:      Thu, 01 Jul 2021 14:06:57 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 01 Jul 2021 14:04:02 +0200
      Finished:     Thu, 01 Jul 2021 14:05:05 +0200
    Ready:          True
    Restart Count:  1
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-vnsq4 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:
  kube-proxy-token-vnsq4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-proxy-token-vnsq4
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     op=Exists
                 CriticalAddonsOnly op=Exists
                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason          Age    From               Message
  ----    ------          ----   ----               -------
  Normal  Scheduled       3m31s  default-scheduler  Successfully assigned kube-system/kube-proxy-b9pxg to minikube
  Normal  Pulled          3m31s  kubelet            Container image "k8s.gcr.io/kube-proxy:v1.20.7" already present on machine
  Normal  Created         3m31s  kubelet            Created container kube-proxy
  Normal  Started         3m31s  kubelet            Started container kube-proxy
  Normal  SandboxChanged  37s    kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          37s    kubelet            Container image "k8s.gcr.io/kube-proxy:v1.20.7" already present on machine
  Normal  Created         37s    kubelet            Created container kube-proxy
  Normal  Started         36s    kubelet            Started container kube-proxy


Name:                 kube-scheduler-minikube
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 minikube/192.168.49.2
Start Time:           Thu, 01 Jul 2021 14:06:42 +0200
Labels:               component=kube-scheduler
                      tier=control-plane
Annotations:          kubernetes.io/config.hash: 82ed17c7f4a56a29330619386941d47e
                      kubernetes.io/config.mirror: 82ed17c7f4a56a29330619386941d47e
                      kubernetes.io/config.seen: 2021-07-01T12:03:31.069708131Z
                      kubernetes.io/config.source: file
Status:               Running
IP:                   192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  Node/minikube
Containers:
  kube-scheduler:
    Container ID:  docker://f75f1f2e857912e02e2e29e12953d2ac672513cc11bd4c18060d127c20e218a9
    Image:         k8s.gcr.io/kube-scheduler:v1.20.7
    Image ID:      docker-pullable://k8s.gcr.io/kube-scheduler@sha256:6fdb12580353b6cd59de486ca650e3ba9270bc8d52f1d3052cd9bb1d4f28e189
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-scheduler
      --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
      --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
      --bind-address=127.0.0.1
      --kubeconfig=/etc/kubernetes/scheduler.conf
      --leader-elect=false
      --port=0
    State:          Running
      Started:      Thu, 01 Jul 2021 14:06:43 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 01 Jul 2021 14:03:32 +0200
      Finished:     Thu, 01 Jul 2021 14:05:05 +0200
    Ready:          False
    Restart Count:  1
    Requests:
      cpu:        100m
    Liveness:     http-get https://127.0.0.1:10259/healthz delay=10s timeout=15s period=10s #success=1 #failure=8
    Startup:      http-get https://127.0.0.1:10259/healthz delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:  <none>
    Mounts:
      /etc/kubernetes/scheduler.conf from kubeconfig (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/scheduler.conf
    HostPathType:  FileOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute op=Exists
Events:
  Type    Reason          Age   From     Message
  ----    ------          ----  ----     -------
  Normal  Pulled          4m1s  kubelet  Container image "k8s.gcr.io/kube-scheduler:v1.20.7" already present on machine
  Normal  Created         4m1s  kubelet  Created container kube-scheduler
  Normal  Started         4m1s  kubelet  Started container kube-scheduler
  Normal  SandboxChanged  51s   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          50s   kubelet  Container image "k8s.gcr.io/kube-scheduler:v1.20.7" already present on machine
  Normal  Created         50s   kubelet  Created container kube-scheduler
  Normal  Started         50s   kubelet  Started container kube-scheduler


Name:         storage-provisioner
Namespace:    kube-system
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Tue, 30 Mar 2021 08:23:26 +0200
Labels:       addonmanager.kubernetes.io/mode=Reconcile
              integration-test=storage-provisioner
Annotations:  <none>
Status:       Running
IP:           192.168.49.2
IPs:
  IP:  192.168.49.2
Containers:
  storage-provisioner:
    Container ID:  docker://7a0e4cd61c11c497c2298ba16b99d10f73e2509df5a87115c6c3fd683208bc32
    Image:         gcr.io/k8s-minikube/storage-provisioner:v5
    Image ID:      docker-pullable://gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
    Port:          <none>
    Host Port:     <none>
    Command:
      /storage-provisioner
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 01 Jul 2021 14:06:56 +0200
      Finished:     Thu, 01 Jul 2021 14:07:27 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 01 Jul 2021 14:04:27 +0200
      Finished:     Thu, 01 Jul 2021 14:05:05 +0200
    Ready:          False
    Restart Count:  214
    Environment:    <none>
    Mounts:
      /tmp from tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from storage-provisioner-token-p5nbd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  tmp:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp
    HostPathType:  Directory
  storage-provisioner-token-p5nbd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  storage-provisioner-token-p5nbd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                    From     Message
  ----     ------          ----                   ----     -------
  Warning  FailedMount     162m                   kubelet  MountVolume.SetUp failed for volume "storage-provisioner-token-p5nbd" : failed to sync secret cache: timed out waiting for the condition
  Normal   SandboxChanged  161m                   kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         161m                   kubelet  Back-off restarting failed container
  Normal   Pulled          161m (x2 over 161m)    kubelet  Container image "gcr.io/k8s-minikube/storage-provisioner:v5" already present on machine
  Normal   Created         161m (x2 over 161m)    kubelet  Created container storage-provisioner
  Normal   Started         161m (x2 over 161m)    kubelet  Started container storage-provisioner
  Normal   SandboxChanged  143m                   kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         142m                   kubelet  Back-off restarting failed container
  Normal   Pulled          142m (x2 over 143m)    kubelet  Container image "gcr.io/k8s-minikube/storage-provisioner:v5" already present on machine
  Normal   Created         142m (x2 over 143m)    kubelet  Created container storage-provisioner
  Normal   Started         142m (x2 over 143m)    kubelet  Started container storage-provisioner
  Warning  FailedMount     21m (x2 over 21m)      kubelet  MountVolume.SetUp failed for volume "storage-provisioner-token-p5nbd" : failed to sync secret cache: timed out waiting for the condition
  Normal   SandboxChanged  21m                    kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         21m                    kubelet  Back-off restarting failed container
  Normal   Started         20m (x2 over 21m)      kubelet  Started container storage-provisioner
  Normal   Pulled          20m (x2 over 21m)      kubelet  Container image "gcr.io/k8s-minikube/storage-provisioner:v5" already present on machine
  Normal   Created         20m (x2 over 21m)      kubelet  Created container storage-provisioner
  Normal   SandboxChanged  9m56s                  kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         9m24s                  kubelet  Back-off restarting failed container
  Normal   Pulled          9m11s (x2 over 9m55s)  kubelet  Container image "gcr.io/k8s-minikube/storage-provisioner:v5" already present on machine
  Normal   Created         9m11s (x2 over 9m55s)  kubelet  Created container storage-provisioner
  Normal   Started         9m10s (x2 over 9m55s)  kubelet  Started container storage-provisioner
  Normal   SandboxChanged  3m51s                  kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         3m20s                  kubelet  Back-off restarting failed container
  Normal   Created         3m6s (x2 over 3m51s)   kubelet  Created container storage-provisioner
  Normal   Started         3m6s (x2 over 3m51s)   kubelet  Started container storage-provisioner
  Normal   Pulled          3m6s (x2 over 3m51s)   kubelet  Container image "gcr.io/k8s-minikube/storage-provisioner:v5" already present on machine
  Normal   SandboxChanged  37s                    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          37s                    kubelet  Container image "gcr.io/k8s-minikube/storage-provisioner:v5" already present on machine
  Normal   Created         37s                    kubelet  Created container storage-provisioner
  Normal   Started         37s                    kubelet  Started container storage-provisioner
  Warning  BackOff         4s                     kubelet  Back-off restarting failed container

Upgrade procedure

I downloaded the latest awx-operator.yaml from https://raw.githubusercontent.com/ansible/awx-operator/0.12.0/deploy/awx-operator.yaml and applied it with minikube kubectl -- apply -f awx-operator.yaml, which created new pods, but they never got started.
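For reference, the steps were roughly (file name taken from the URL above):

:~$ curl -o awx-operator.yaml https://raw.githubusercontent.com/ansible/awx-operator/0.12.0/deploy/awx-operator.yaml
:~$ minikube kubectl -- apply -f awx-operator.yaml
:~$ minikube kubectl -- get pods    # the new awx pod shows up but never gets past creating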

Then get pods shows:

:~$ minikube kubectl -- get pods
NAME                           READY   STATUS              RESTARTS   AGE
awx-6d97bb8b9f-b8xf6           4/4     Running             108        20d
awx-7c5d846c88-gxqpc           0/4     ContainerCreating   0          22s
awx-operator-79bc95f78-v9lzb   1/1     Running             0          59s
awx-postgres-0                 1/1     Running             27         20d

So far so good.

I tried the upgrade twice. The first attempt ran straight into the following error; the second one DID SUCCEED, but after a node restart it ran into the same error!

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/awx/conf/settings.py", line 82, in _ctit_db_wrapper
    yield
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/awx/conf/settings.py", line 412, in __getattr__
    value = self._get_local(name)
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/awx/conf/settings.py", line 334, in _get_local
    self._preload_cache()
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/awx/conf/settings.py", line 297, in _preload_cache
    for setting in Setting.objects.filter(key__in=settings_to_cache.keys(), user__isnull=True).order_by('pk'):
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/models/query.py", line 274, in __iter__
    self._fetch_all()
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/models/query.py", line 1242, in _fetch_all
    self._result_cache = list(self._iterable_class(self))
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/models/query.py", line 55, in __iter__
    results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/models/sql/compiler.py", line 1140, in execute_sql
    cursor = self.connection.cursor()
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 256, in cursor
    return self._cursor()
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 233, in _cursor
    self.ensure_connection()
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 217, in ensure_connection
    self.connect()
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/utils.py", line 89, in __exit__
    raise dj_exc_value.with_traceback(traceback) from exc_value
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 217, in ensure_connection
    self.connect()
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 195, in connect
    self.connection = self.get_new_connection(conn_params)
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/postgresql/base.py", line 178, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/psycopg2/__init__.py", line 126, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not translate host name "awx-postgres" to address: Name or service not known

2021-07-01 12:18:38,525 INFO spawned: 'wsbroadcast' with pid 51
2021-07-01 12:18:38,525 INFO spawned: 'wsbroadcast' with pid 51
2021-07-01 12:18:38,526 ERROR    [-] awx.conf.settings Database settings are not available, using defaults.
Traceback (most recent call last):
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 217, in ensure_connection
    self.connect()
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 195, in connect
    self.connection = self.get_new_connection(conn_params)
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/postgresql/base.py", line 178, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/psycopg2/__init__.py", line 126, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "awx-postgres" to address: Name or service not known

My issue

So - my problem here is: could not translate host name "awx-postgres" to address: Name or service not known
And I have absolutely no idea what's causing it.

Any ideas from your side?
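If it helps to narrow it down, I can run checks along these lines (pod/container names taken from the output above; getent is assumed to be present in the awx image):

:~$ minikube kubectl -- get svc awx-postgres                          # does the postgres service still exist?
:~$ minikube kubectl -- get pods -n kube-system -l k8s-app=kube-dns   # is CoreDNS running?
:~$ minikube kubectl -- exec awx-7c5d846c88-gxqpc -c awx-web -- getent hosts awx-postgres   # does the name resolve inside the pod?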

The website is then blank because of 404s:

2021-07-01 12:21:22,615 DEBUG    [98d2455c16e04a58b368a1aff6caea53] awx.analytics.performance request: <WSGIRequest: GET '/'>, response_time: 0.044s
[pid: 30|app: 0|req: 2/2] 172.17.0.1 () {46 vars in 2185 bytes} [Thu Jul  1 12:21:22 2021] GET / => generated 1190 bytes in 45 msecs (HTTP/1.1 200) 9 headers in 434 bytes (1 switches on core 0)
172.17.0.1 - - [01/Jul/2021:12:21:22 +0000] "GET / HTTP/1.1" 200 1190 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0" "-"
2021/07/01 12:21:22 [error] 25#0: *7 open() "/var/lib/awx/public/static/css/2.687a9035.chunk.css" failed (2: No such file or directory), client: 172.17.0.1, server: _, request: "GET /static/css/2.687a9035.chunk.css HTTP/1.1", host: "ansible-qlb.jki.intern", referrer: "http://ansible-qlb.jki.intern/"
172.17.0.1 - - [01/Jul/2021:12:21:22 +0000] "GET /static/css/2.687a9035.chunk.css HTTP/1.1" 404 162 "http://ansible-qlb.jki.intern/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0" "-"
2021/07/01 12:21:22 [error] 25#0: *9 open() "/var/lib/awx/public/static/js/runtime-main.7202f99a.js" failed (2: No such file or directory), client: 172.17.0.1, server: _, request: "GET /static/js/runtime-main.7202f99a.js HTTP/1.1", host: "ansible-qlb.jki.intern", referrer: "http://ansible-qlb.jki.intern/"
172.17.0.1 - - [01/Jul/2021:12:21:22 +0000] "GET /static/js/runtime-main.7202f99a.js HTTP/1.1" 404 162 "http://ansible-qlb.jki.intern/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0" "-"
2021/07/01 12:21:22 [error] 25#0: *10 open() "/var/lib/awx/public/static/js/main.6ab990a9.chunk.js" failed (2: No such file or directory), client: 172.17.0.1, server: _, request: "GET /static/js/main.6ab990a9.chunk.js HTTP/1.1", host: "ansible-qlb.jki.intern", referrer: "http://ansible-qlb.jki.intern/"
172.17.0.1 - - [01/Jul/2021:12:21:22 +0000] "GET /static/js/main.6ab990a9.chunk.js HTTP/1.1" 404 162 "http://ansible-qlb.jki.intern/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0" "-"
2021/07/01 12:21:22 [error] 25#0: *12 open() "/var/lib/awx/public/static/js/2.53c634ac.chunk.js" failed (2: No such file or directory), client: 172.17.0.1, server: _, request: "GET /static/js/2.53c634ac.chunk.js HTTP/1.1", host: "ansible-qlb.jki.intern", referrer: "http://ansible-qlb.jki.intern/"
172.17.0.1 - - [01/Jul/2021:12:21:22 +0000] "GET /static/js/2.53c634ac.chunk.js HTTP/1.1" 404 162 "http://ansible-qlb.jki.intern/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0" "-"
2021/07/01 12:21:22 [error] 25#0: *11 open() "/var/lib/awx/public/static/css/main.e189280d.chunk.css" failed (2: No such file or directory), client: 172.17.0.1, server: _, request: "GET /static/css/main.e189280d.chunk.css HTTP/1.1", host: "ansible-qlb.jki.intern", referrer: "http://ansible-qlb.jki.intern/"
172.17.0.1 - - [01/Jul/2021:12:21:22 +0000] "GET /static/css/main.e189280d.chunk.css HTTP/1.1" 404 162 "http://ansible-qlb.jki.intern/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0" "-"

Thanks in advance!

P.S.: On top of the upgrade issue, my admin password seems to have changed? I had to re-retrieve it via the secret decode command, but that worked.
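In case someone else needs it, the decode command I mean is roughly this (assuming the AWX instance is named awx, as in my spec):

:~$ minikube kubectl -- get secret awx-admin-password -o jsonpath="{.data.password}" | base64 --decode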

describe after upgrade
:~$ minikube kubectl -- describe po -A
Name:         awx-7c5d846c88-gxqpc
Namespace:    default
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Thu, 01 Jul 2021 14:09:26 +0200
Labels:       app.kubernetes.io/component=awx
              app.kubernetes.io/managed-by=awx-operator
              app.kubernetes.io/name=awx
              app.kubernetes.io/part-of=awx
              app.kubernetes.io/version=19.2.2
              pod-template-hash=7c5d846c88
Annotations:  <none>
Status:       Running
IP:           172.17.0.4
IPs:
  IP:           172.17.0.4
Controlled By:  ReplicaSet/awx-7c5d846c88
Containers:
  redis:
    Container ID:  docker://38f8306cd22178eb66b991dc7413ffb85926f2c4815d5e12362505e694d1ebbf
    Image:         docker.io/redis:latest
    Image ID:      docker-pullable://redis@sha256:7e2c6181ad5c425443b56c7c73a9cd6df24a122345847d1ea9bb86a5afc76325
    Port:          <none>
    Host Port:     <none>
    Args:
      redis-server
      /etc/redis.conf
    State:          Running
      Started:      Thu, 01 Jul 2021 14:18:33 +0200
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 01 Jul 2021 14:09:27 +0200
      Finished:     Thu, 01 Jul 2021 14:17:48 +0200
    Ready:          True
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /data from awx-redis-data (rw)
      /etc/redis.conf from awx-redis-config (ro,path="redis.conf")
      /var/run/redis from awx-redis-socket (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from awx-token-7psvk (ro)
  awx-web:
    Container ID:   docker://eb74c840ca141a7f9e4a573aeb4e8cea19236ef84e615a346772a542eec5ff17
    Image:          quay.io/ansible/awx:19.2.2
    Image ID:       docker-pullable://quay.io/ansible/awx@sha256:40eeb5e29cda8f59a31f4ec45c2589906c335a751e8836d9a9d818dbd99b416a
    Port:           8052/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 01 Jul 2021 14:18:33 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Thu, 01 Jul 2021 14:09:56 +0200
      Finished:     Thu, 01 Jul 2021 14:17:48 +0200
    Ready:          True
    Restart Count:  1
    Requests:
      cpu:     1
      memory:  2Gi
    Environment:
      MY_POD_NAMESPACE:  default (v1:metadata.namespace)
    Mounts:
      /etc/nginx/nginx.conf from awx-nginx-conf (ro,path="nginx.conf")
      /etc/tower/SECRET_KEY from awx-secret-key (ro,path="SECRET_KEY")
      /etc/tower/conf.d/credentials.py from awx-application-credentials (ro,path="credentials.py")
      /etc/tower/conf.d/execution_environments.py from awx-application-credentials (ro,path="execution_environments.py")
      /etc/tower/conf.d/ldap.py from awx-application-credentials (ro,path="ldap.py")
      /etc/tower/settings.py from awx-settings (ro,path="settings.py")
      /var/lib/awx/projects from awx-projects (rw)
      /var/lib/awx/rsyslog from rsyslog-dir (rw)
      /var/run/awx-rsyslog from rsyslog-socket (rw)
      /var/run/redis from awx-redis-socket (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from awx-token-7psvk (ro)
      /var/run/supervisor from supervisor-socket (rw)
  awx-task:
    Container ID:  docker://90f5691cb1627eb76025ce5712be8e05d10aeddb9ca32199fc8f8abf28a84433
    Image:         quay.io/ansible/awx:19.2.2
    Image ID:      docker-pullable://quay.io/ansible/awx@sha256:40eeb5e29cda8f59a31f4ec45c2589906c335a751e8836d9a9d818dbd99b416a
    Port:          <none>
    Host Port:     <none>
    Args:
      /usr/bin/launch_awx_task.sh
    State:          Running
      Started:      Thu, 01 Jul 2021 14:18:34 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Thu, 01 Jul 2021 14:09:56 +0200
      Finished:     Thu, 01 Jul 2021 14:17:47 +0200
    Ready:          True
    Restart Count:  1
    Requests:
      cpu:     500m
      memory:  1Gi
    Environment:
      SUPERVISOR_WEB_CONFIG_PATH:  /etc/supervisord.conf
      AWX_SKIP_MIGRATIONS:         1
      MY_POD_UID:                   (v1:metadata.uid)
      MY_POD_IP:                    (v1:status.podIP)
      MY_POD_NAMESPACE:            default (v1:metadata.namespace)
    Mounts:
      /etc/tower/SECRET_KEY from awx-secret-key (ro,path="SECRET_KEY")
      /etc/tower/conf.d/credentials.py from awx-application-credentials (ro,path="credentials.py")
      /etc/tower/conf.d/execution_environments.py from awx-application-credentials (ro,path="execution_environments.py")
      /etc/tower/conf.d/ldap.py from awx-application-credentials (ro,path="ldap.py")
      /etc/tower/settings.py from awx-settings (ro,path="settings.py")
      /var/lib/awx/projects from awx-projects (rw)
      /var/lib/awx/rsyslog from rsyslog-dir (rw)
      /var/run/awx-rsyslog from rsyslog-socket (rw)
      /var/run/receptor from receptor-socket (rw)
      /var/run/redis from awx-redis-socket (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from awx-token-7psvk (ro)
      /var/run/supervisor from supervisor-socket (rw)
  awx-ee:
    Container ID:  docker://94fc3fb29e35e68660b2d31b5af90a649d72b8b072fb2a0fe9f183599547171f
    Image:         quay.io/ansible/awx-ee:0.5.0
    Image ID:      docker-pullable://quay.io/ansible/awx-ee@sha256:f7f9e15b432f9aead8f32a4cd0589b4678da545d2e2bd38cee35e8fcb6bc6601
    Port:          <none>
    Host Port:     <none>
    Args:
      receptor
      --config
      /etc/receptor.conf
    State:          Running
      Started:      Thu, 01 Jul 2021 14:18:34 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Thu, 01 Jul 2021 14:10:21 +0200
      Finished:     Thu, 01 Jul 2021 14:17:47 +0200
    Ready:          True
    Restart Count:  1
    Requests:
      cpu:        500m
      memory:     1Gi
    Environment:  <none>
    Mounts:
      /etc/receptor.conf from awx-receptor-config (ro,path="receptor.conf")
      /var/lib/awx/projects from awx-projects (rw)
      /var/run/receptor from receptor-socket (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from awx-token-7psvk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  awx-application-credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  awx-app-credentials
    Optional:    false
  awx-secret-key:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  awx-secret-key
    Optional:    false
  awx-settings:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      awx-awx-configmap
    Optional:  false
  awx-nginx-conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      awx-awx-configmap
    Optional:  false
  awx-redis-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      awx-awx-configmap
    Optional:  false
  awx-redis-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  awx-redis-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  supervisor-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  rsyslog-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  receptor-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  rsyslog-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  awx-receptor-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      awx-awx-configmap
    Optional:  false
  awx-projects:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  awx-token-7psvk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  awx-token-7psvk
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age    From               Message
  ----     ------          ----   ----               -------
  Normal   Scheduled       13m    default-scheduler  Successfully assigned default/awx-7c5d846c88-gxqpc to minikube
  Normal   Pulled          13m    kubelet            Container image "docker.io/redis:latest" already present on machine
  Normal   Created         13m    kubelet            Created container redis
  Normal   Started         13m    kubelet            Started container redis
  Normal   Pulling         13m    kubelet            Pulling image "quay.io/ansible/awx:19.2.2"
  Normal   Pulled          12m    kubelet            Successfully pulled image "quay.io/ansible/awx:19.2.2" in 25.719753483s
  Normal   Started         12m    kubelet            Started container awx-web
  Normal   Created         12m    kubelet            Created container awx-web
  Normal   Pulled          12m    kubelet            Container image "quay.io/ansible/awx:19.2.2" already present on machine
  Normal   Created         12m    kubelet            Created container awx-task
  Normal   Started         12m    kubelet            Started container awx-task
  Normal   Pulling         12m    kubelet            Pulling image "quay.io/ansible/awx-ee:0.5.0"
  Normal   Pulled          12m    kubelet            Successfully pulled image "quay.io/ansible/awx-ee:0.5.0" in 22.81503665s
  Normal   Created         12m    kubelet            Created container awx-ee
  Normal   Started         12m    kubelet            Started container awx-ee
  Warning  FailedMount     4m19s  kubelet            MountVolume.SetUp failed for volume "awx-application-credentials" : failed to sync secret cache: timed out waiting for the condition
  Warning  FailedMount     4m19s  kubelet            MountVolume.SetUp failed for volume "awx-receptor-config" : failed to sync configmap cache: timed out waiting for the condition
  Warning  FailedMount     4m19s  kubelet            MountVolume.SetUp failed for volume "awx-nginx-conf" : failed to sync configmap cache: timed out waiting for the condition
  Warning  FailedMount     4m19s  kubelet            MountVolume.SetUp failed for volume "awx-redis-config" : failed to sync configmap cache: timed out waiting for the condition
  Warning  FailedMount     4m19s  kubelet            MountVolume.SetUp failed for volume "awx-settings" : failed to sync configmap cache: timed out waiting for the condition
  Warning  FailedMount     4m19s  kubelet            MountVolume.SetUp failed for volume "awx-secret-key" : failed to sync secret cache: timed out waiting for the condition
  Warning  FailedMount     4m19s  kubelet            MountVolume.SetUp failed for volume "awx-token-7psvk" : failed to sync secret cache: timed out waiting for the condition
  Normal   SandboxChanged  4m18s  kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          4m18s  kubelet            Container image "docker.io/redis:latest" already present on machine
  Normal   Created         4m18s  kubelet            Created container redis
  Normal   Started         4m18s  kubelet            Started container redis
  Normal   Pulled          4m18s  kubelet            Container image "quay.io/ansible/awx:19.2.2" already present on machine
  Normal   Created         4m18s  kubelet            Created container awx-web
  Normal   Started         4m18s  kubelet            Started container awx-web
  Normal   Pulled          4m18s  kubelet            Container image "quay.io/ansible/awx:19.2.2" already present on machine
  Normal   Created         4m18s  kubelet            Created container awx-task
  Normal   Started         4m17s  kubelet            Started container awx-task
  Normal   Pulled          4m17s  kubelet            Container image "quay.io/ansible/awx-ee:0.5.0" already present on machine
  Normal   Created         4m17s  kubelet            Created container awx-ee
  Normal   Started         4m17s  kubelet            Started container awx-ee


Name:         awx-operator-79bc95f78-v9lzb
Namespace:    default
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Thu, 01 Jul 2021 14:08:49 +0200
Labels:       name=awx-operator
              pod-template-hash=79bc95f78
Annotations:  <none>
Status:       Running
IP:           172.17.0.5
IPs:
  IP:           172.17.0.5
Controlled By:  ReplicaSet/awx-operator-79bc95f78
Containers:
  awx-operator:
    Container ID:   docker://b9da5d53a3dc60a8fde30067dea22b0dc565681626c5b188a20b01442a96b368
    Image:          quay.io/ansible/awx-operator:0.12.0
    Image ID:       docker-pullable://quay.io/ansible/awx-operator@sha256:3f8a16308ee3dfdd7ad33a544bc6cc87bb0923734627380467eac555ef54ddc1
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 01 Jul 2021 14:18:35 +0200
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 01 Jul 2021 14:08:53 +0200
      Finished:     Thu, 01 Jul 2021 14:17:48 +0200
    Ready:          True
    Restart Count:  1
    Liveness:       http-get http://:6789/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Environment:
      WATCH_NAMESPACE:
      POD_NAME:            awx-operator-79bc95f78-v9lzb (v1:metadata.name)
      OPERATOR_NAME:       awx-operator
      ANSIBLE_GATHERING:   explicit
      OPERATOR_VERSION:    0.12.0
      ANSIBLE_DEBUG_LOGS:  false
    Mounts:
      /tmp/ansible-operator/runner from runner (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from awx-operator-token-7lwjl (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  runner:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  awx-operator-token-7lwjl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  awx-operator-token-7lwjl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age    From               Message
  ----     ------          ----   ----               -------
  Normal   Scheduled       14m    default-scheduler  Successfully assigned default/awx-operator-79bc95f78-v9lzb to minikube
  Normal   Pulling         14m    kubelet            Pulling image "quay.io/ansible/awx-operator:0.12.0"
  Normal   Pulled          13m    kubelet            Successfully pulled image "quay.io/ansible/awx-operator:0.12.0" in 3.132394843s
  Normal   Created         13m    kubelet            Created container awx-operator
  Normal   Started         13m    kubelet            Started container awx-operator
  Warning  FailedMount     4m19s  kubelet            MountVolume.SetUp failed for volume "awx-operator-token-7lwjl" : failed to sync secret cache: timed out waiting for the condition
  Normal   SandboxChanged  4m18s  kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling         4m18s  kubelet            Pulling image "quay.io/ansible/awx-operator:0.12.0"
  Normal   Pulled          4m16s  kubelet            Successfully pulled image "quay.io/ansible/awx-operator:0.12.0" in 1.507130194s
  Normal   Created         4m16s  kubelet            Created container awx-operator
  Normal   Started         4m16s  kubelet            Started container awx-operator


Name:         awx-postgres-0
Namespace:    default
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Fri, 11 Jun 2021 06:53:34 +0200
Labels:       app.kubernetes.io/component=database
              app.kubernetes.io/instance=postgres-awx
              app.kubernetes.io/managed-by=awx-operator
              app.kubernetes.io/name=postgres
              app.kubernetes.io/part-of=awx
              controller-revision-hash=awx-postgres-78d8b767c8
              statefulset.kubernetes.io/pod-name=awx-postgres-0
Annotations:  <none>
Status:       Running
IP:           172.17.0.6
IPs:
  IP:           172.17.0.6
Controlled By:  StatefulSet/awx-postgres
Containers:
  postgres:
    Container ID:   docker://f39d5b66d4cf56b4f38cb8718c43170afee808e9b5192f6602853035eaa61e63
    Image:          postgres:12
    Image ID:       docker-pullable://postgres@sha256:1ad9a00724bdd8d8da9f2d8a782021a8503eff908c9413b5b34f22d518088f26
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 01 Jul 2021 14:18:34 +0200
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 01 Jul 2021 14:06:57 +0200
      Finished:     Thu, 01 Jul 2021 14:17:48 +0200
    Ready:          True
    Restart Count:  28
    Environment:
      POSTGRESQL_DATABASE:        <set to the key 'database' in secret 'awx-postgres-configuration'>  Optional: false
      POSTGRESQL_USER:            <set to the key 'username' in secret 'awx-postgres-configuration'>  Optional: false
      POSTGRESQL_PASSWORD:        <set to the key 'password' in secret 'awx-postgres-configuration'>  Optional: false
      POSTGRES_DB:                <set to the key 'database' in secret 'awx-postgres-configuration'>  Optional: false
      POSTGRES_USER:              <set to the key 'username' in secret 'awx-postgres-configuration'>  Optional: false
      POSTGRES_PASSWORD:          <set to the key 'password' in secret 'awx-postgres-configuration'>  Optional: false
      PGDATA:                     /var/lib/postgresql/data/pgdata
      POSTGRES_INITDB_ARGS:       --auth-host=scram-sha-256
      POSTGRES_HOST_AUTH_METHOD:  scram-sha-256
    Mounts:
      /var/lib/postgresql/data from postgres (rw,path="data")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bsj94 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  postgres:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  postgres-awx-postgres-0
    ReadOnly:   false
  default-token-bsj94:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-bsj94
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age    From     Message
  ----    ------          ----   ----     -------
  Normal  SandboxChanged  177m   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          177m   kubelet  Container image "postgres:12" already present on machine
  Normal  Created         177m   kubelet  Created container postgres
  Normal  Started         177m   kubelet  Started container postgres
  Normal  SandboxChanged  158m   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          158m   kubelet  Container image "postgres:12" already present on machine
  Normal  Created         158m   kubelet  Created container postgres
  Normal  Started         158m   kubelet  Started container postgres
  Normal  SandboxChanged  36m    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          36m    kubelet  Container image "postgres:12" already present on machine
  Normal  Created         36m    kubelet  Created container postgres
  Normal  Started         36m    kubelet  Started container postgres
  Normal  SandboxChanged  25m    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          25m    kubelet  Container image "postgres:12" already present on machine
  Normal  Started         25m    kubelet  Started container postgres
  Normal  Created         25m    kubelet  Created container postgres
  Normal  SandboxChanged  19m    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          19m    kubelet  Container image "postgres:12" already present on machine
  Normal  Created         19m    kubelet  Created container postgres
  Normal  Started         19m    kubelet  Started container postgres
  Normal  SandboxChanged  15m    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          15m    kubelet  Container image "postgres:12" already present on machine
  Normal  Created         15m    kubelet  Created container postgres
  Normal  Started         15m    kubelet  Started container postgres
  Normal  SandboxChanged  4m18s  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          4m17s  kubelet  Container image "postgres:12" already present on machine
  Normal  Created         4m17s  kubelet  Created container postgres
  Normal  Started         4m17s  kubelet  Started container postgres


Name:         ingress-nginx-admission-create-r95jq
Namespace:    ingress-nginx
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Thu, 10 Jun 2021 14:41:46 +0200
Labels:       addonmanager.kubernetes.io/mode=Reconcile
              app.kubernetes.io/component=admission-webhook
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              controller-uid=455196aa-5869-4dbc-8e96-3905681bd7be
              job-name=ingress-nginx-admission-create
Annotations:  <none>
Status:       Succeeded
IP:           172.17.0.8
IPs:
  IP:           172.17.0.8
Controlled By:  Job/ingress-nginx-admission-create
Containers:
  create:
    Container ID:  docker://86f45c81ac48f669e3a0923b72db6d9e6efd168e0848bf8b1bd85b6684c33d75
    Image:         docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7
    Image ID:      docker-pullable://jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7
    Port:          <none>
    Host Port:     <none>
    Args:
      create
      --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
      --namespace=$(POD_NAMESPACE)
      --secret-name=ingress-nginx-admission
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 10 Jun 2021 14:42:05 +0200
      Finished:     Thu, 10 Jun 2021 14:42:05 +0200
    Ready:          False
    Restart Count:  0
    Environment:
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-admission-token-xb8gt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  ingress-nginx-admission-token-xb8gt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission-token-xb8gt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>


Name:         ingress-nginx-admission-patch-nvw4r
Namespace:    ingress-nginx
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Thu, 10 Jun 2021 14:41:46 +0200
Labels:       addonmanager.kubernetes.io/mode=Reconcile
              app.kubernetes.io/component=admission-webhook
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              controller-uid=629be209-202a-4363-9673-b7e66edcd44a
              job-name=ingress-nginx-admission-patch
Annotations:  <none>
Status:       Succeeded
IP:           172.17.0.7
IPs:
  IP:           172.17.0.7
Controlled By:  Job/ingress-nginx-admission-patch
Containers:
  patch:
    Container ID:  docker://18c46ddae2c80a24d29880b4f155cf02907c45c2c56c4680a485467fa8a5451f
    Image:         docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7
    Image ID:      docker-pullable://jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7
    Port:          <none>
    Host Port:     <none>
    Args:
      patch
      --webhook-name=ingress-nginx-admission
      --namespace=$(POD_NAMESPACE)
      --patch-mutating=false
      --secret-name=ingress-nginx-admission
      --patch-failure-policy=Fail
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 10 Jun 2021 14:42:06 +0200
      Finished:     Thu, 10 Jun 2021 14:42:06 +0200
    Ready:          False
    Restart Count:  1
    Environment:
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-admission-token-xb8gt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  ingress-nginx-admission-token-xb8gt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission-token-xb8gt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>


Name:         ingress-nginx-controller-5d88495688-j5j2f
Namespace:    ingress-nginx
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Thu, 01 Jul 2021 13:58:37 +0200
Labels:       addonmanager.kubernetes.io/mode=Reconcile
              app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              gcp-auth-skip-secret=true
              pod-template-hash=5d88495688
Annotations:  <none>
Status:       Running
IP:           172.17.0.3
IPs:
  IP:           172.17.0.3
Controlled By:  ReplicaSet/ingress-nginx-controller-5d88495688
Containers:
  controller:
    Container ID:  docker://530b21428e10a667995045c1721d216c920ceda710f5021b3350e4b295508a29
    Image:         k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a
    Image ID:      docker-pullable://k8s.gcr.io/ingress-nginx/controller@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a
    Ports:         80/TCP, 443/TCP, 8443/TCP
    Host Ports:    80/TCP, 443/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --report-node-internal-ip-address
      --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
      --udp-services-configmap=$(POD_NAMESPACE)/udp-services
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
    State:          Running
      Started:      Thu, 01 Jul 2021 14:18:32 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 01 Jul 2021 14:06:57 +0200
      Finished:     Thu, 01 Jul 2021 14:17:57 +0200
    Ready:          True
    Restart Count:  3
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-5d88495688-j5j2f (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-token-9drjf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  ingress-nginx-token-9drjf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-token-9drjf
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                   From                      Message
  ----     ------            ----                  ----                      -------
  Warning  FailedScheduling  21h                   default-scheduler         0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  158m                  default-scheduler         0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  36m                   default-scheduler         0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  25m                   default-scheduler         0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Normal   Scheduled         24m                   default-scheduler         Successfully assigned ingress-nginx/ingress-nginx-controller-5d88495688-j5j2f to minikube
  Warning  FailedScheduling  161m (x13 over 177m)  default-scheduler         0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Normal   Pulling           24m                   kubelet                   Pulling image "k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a"
  Normal   Created           24m                   kubelet                   Created container controller
  Normal   Pulled            24m                   kubelet                   Successfully pulled image "k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a" in 7.409190711s
  Normal   Started           24m                   kubelet                   Started container controller
  Normal   RELOAD            24m                   nginx-ingress-controller  NGINX reload triggered due to a change in configuration
  Normal   SandboxChanged    19m                   kubelet                   Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            19m                   kubelet                   Container image "k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a" already present on machine
  Normal   Created           19m                   kubelet                   Created container controller
  Normal   Started           19m                   kubelet                   Started container controller
  Warning  Unhealthy         18m (x2 over 18m)     kubelet                   Liveness probe failed: Get "http://172.17.0.3:10254/healthz": dial tcp 172.17.0.3:10254: connect: connection refused
  Warning  Unhealthy         18m (x2 over 18m)     kubelet                   Readiness probe failed: Get "http://172.17.0.3:10254/healthz": dial tcp 172.17.0.3:10254: connect: connection refused
  Normal   RELOAD            18m                   nginx-ingress-controller  NGINX reload triggered due to a change in configuration
  Warning  Unhealthy         18m                   kubelet                   Liveness probe failed: HTTP probe failed with statuscode: 500
  Normal   SandboxChanged    15m                   kubelet                   Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            15m                   kubelet                   Container image "k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a" already present on machine
  Normal   Created           15m                   kubelet                   Created container controller
  Normal   Started           15m                   kubelet                   Started container controller
  Normal   RELOAD            15m                   nginx-ingress-controller  NGINX reload triggered due to a change in configuration
  Normal   SandboxChanged    4m20s                 kubelet                   Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            4m19s                 kubelet                   Container image "k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a" already present on machine
  Normal   Created           4m19s                 kubelet                   Created container controller
  Normal   Started           4m19s                 kubelet                   Started container controller
  Normal   RELOAD            4m18s                 nginx-ingress-controller  NGINX reload triggered due to a change in configuration


Name:                 coredns-74ff55c5b-qzr6b
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 minikube/192.168.49.2
Start Time:           Tue, 30 Mar 2021 08:23:23 +0200
Labels:               k8s-app=kube-dns
                      pod-template-hash=74ff55c5b
Annotations:          <none>
Status:               Running
IP:                   172.17.0.2
IPs:
  IP:           172.17.0.2
Controlled By:  ReplicaSet/coredns-74ff55c5b
Containers:
  coredns:
    Container ID:  docker://3d2aa892496a4332354441a7531da3220bf5133fff201abaeecc621d520a6c73
    Image:         k8s.gcr.io/coredns:1.7.0
    Image ID:      docker-pullable://k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Running
      Started:      Thu, 01 Jul 2021 14:18:31 +0200
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 01 Jul 2021 14:06:54 +0200
      Finished:     Thu, 01 Jul 2021 14:17:52 +0200
    Ready:          True
    Restart Count:  118
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-fsplt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-fsplt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-fsplt
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly op=Exists
                 node-role.kubernetes.io/control-plane:NoSchedule
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                    From     Message
  ----     ------          ----                   ----     -------
  Normal   SandboxChanged  177m                   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          177m                   kubelet  Container image "k8s.gcr.io/coredns:1.7.0" already present on machine
  Normal   Created         177m                   kubelet  Created container coredns
  Normal   Started         177m                   kubelet  Started container coredns
  Warning  Unhealthy       176m (x3 over 177m)    kubelet  Readiness probe failed: HTTP probe failed with statuscode: 503
  Normal   SandboxChanged  158m                   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          158m                   kubelet  Container image "k8s.gcr.io/coredns:1.7.0" already present on machine
  Normal   Created         158m                   kubelet  Created container coredns
  Normal   Started         158m                   kubelet  Started container coredns
  Warning  Unhealthy       158m (x3 over 158m)    kubelet  Readiness probe failed: HTTP probe failed with statuscode: 503
  Warning  FailedMount     36m                    kubelet  MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
  Warning  FailedMount     36m                    kubelet  MountVolume.SetUp failed for volume "coredns-token-fsplt" : failed to sync secret cache: timed out waiting for the condition
  Normal   SandboxChanged  36m                    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          36m                    kubelet  Container image "k8s.gcr.io/coredns:1.7.0" already present on machine
  Normal   Created         36m                    kubelet  Created container coredns
  Normal   Started         36m                    kubelet  Started container coredns
  Warning  Unhealthy       36m (x3 over 36m)      kubelet  Readiness probe failed: HTTP probe failed with statuscode: 503
  Normal   SandboxChanged  25m                    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Created         25m                    kubelet  Created container coredns
  Normal   Pulled          25m                    kubelet  Container image "k8s.gcr.io/coredns:1.7.0" already present on machine
  Normal   Started         25m                    kubelet  Started container coredns
  Normal   SandboxChanged  19m                    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Created         19m                    kubelet  Created container coredns
  Normal   Started         19m                    kubelet  Started container coredns
  Normal   Pulled          19m                    kubelet  Container image "k8s.gcr.io/coredns:1.7.0" already present on machine
  Warning  Unhealthy       18m (x3 over 19m)      kubelet  Readiness probe failed: HTTP probe failed with statuscode: 503
  Warning  Unhealthy       18m                    kubelet  Readiness probe failed: Get "http://172.17.0.2:8181/ready": dial tcp 172.17.0.2:8181: connect: connection refused
  Normal   SandboxChanged  15m                    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Started         15m                    kubelet  Started container coredns
  Normal   Created         15m                    kubelet  Created container coredns
  Normal   Pulled          15m                    kubelet  Container image "k8s.gcr.io/coredns:1.7.0" already present on machine
  Warning  Unhealthy       15m (x3 over 15m)      kubelet  Readiness probe failed: HTTP probe failed with statuscode: 503
  Normal   SandboxChanged  4m20s                  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          4m20s                  kubelet  Container image "k8s.gcr.io/coredns:1.7.0" already present on machine
  Normal   Created         4m20s                  kubelet  Created container coredns
  Normal   Started         4m20s                  kubelet  Started container coredns
  Warning  Unhealthy       3m58s (x3 over 4m18s)  kubelet  Readiness probe failed: HTTP probe failed with statuscode: 503


Name:                 etcd-minikube
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 minikube/192.168.49.2
Start Time:           Thu, 01 Jul 2021 14:06:42 +0200
Labels:               component=etcd
                      tier=control-plane
Annotations:          kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.49.2:2379
                      kubernetes.io/config.hash: c31fe6a5afdd142cf3450ac972274b36
                      kubernetes.io/config.mirror: c31fe6a5afdd142cf3450ac972274b36
                      kubernetes.io/config.seen: 2021-03-30T06:23:07.413051499Z
                      kubernetes.io/config.source: file
Status:               Running
IP:                   192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  Node/minikube
Containers:
  etcd:
    Container ID:  docker://4d070b25706a6c424fdd8ca787a1e17ceb4fbd6b7bbb72cee5f158fd934dccc9
    Image:         k8s.gcr.io/etcd:3.4.13-0
    Image ID:      docker-pullable://k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2
    Port:          <none>
    Host Port:     <none>
    Command:
      etcd
      --advertise-client-urls=https://192.168.49.2:2379
      --cert-file=/var/lib/minikube/certs/etcd/server.crt
      --client-cert-auth=true
      --data-dir=/var/lib/minikube/etcd
      --initial-advertise-peer-urls=https://192.168.49.2:2380
      --initial-cluster=minikube=https://192.168.49.2:2380
      --key-file=/var/lib/minikube/certs/etcd/server.key
      --listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379
      --listen-metrics-urls=http://127.0.0.1:2381
      --listen-peer-urls=https://192.168.49.2:2380
      --name=minikube
      --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt
      --peer-client-cert-auth=true
      --peer-key-file=/var/lib/minikube/certs/etcd/peer.key
      --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt
      --proxy-refresh-interval=70000
      --snapshot-count=10000
      --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt
    State:          Running
      Started:      Thu, 01 Jul 2021 14:18:25 +0200
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 01 Jul 2021 14:06:43 +0200
      Finished:     Thu, 01 Jul 2021 14:17:48 +0200
    Ready:          True
    Restart Count:  118
    Requests:
      cpu:                100m
      ephemeral-storage:  100Mi
      memory:             100Mi
    Liveness:             http-get http://127.0.0.1:2381/health delay=10s timeout=15s period=10s #success=1 #failure=8
    Startup:              http-get http://127.0.0.1:2381/health delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:          <none>
    Mounts:
      /var/lib/minikube/certs/etcd from etcd-certs (rw)
      /var/lib/minikube/etcd from etcd-data (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  etcd-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/certs/etcd
    HostPathType:  DirectoryOrCreate
  etcd-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/etcd
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute op=Exists
Events:
  Type    Reason          Age    From     Message
  ----    ------          ----   ----     -------
  Normal  SandboxChanged  177m   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          177m   kubelet  Container image "k8s.gcr.io/etcd:3.4.13-0" already present on machine
  Normal  Created         177m   kubelet  Created container etcd
  Normal  Started         177m   kubelet  Started container etcd
  Normal  SandboxChanged  159m   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          159m   kubelet  Container image "k8s.gcr.io/etcd:3.4.13-0" already present on machine
  Normal  Created         159m   kubelet  Created container etcd
  Normal  Started         159m   kubelet  Started container etcd
  Normal  SandboxChanged  37m    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          37m    kubelet  Container image "k8s.gcr.io/etcd:3.4.13-0" already present on machine
  Normal  Created         37m    kubelet  Created container etcd
  Normal  Started         37m    kubelet  Started container etcd
  Normal  SandboxChanged  25m    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          25m    kubelet  Container image "k8s.gcr.io/etcd:3.4.13-0" already present on machine
  Normal  Created         25m    kubelet  Created container etcd
  Normal  Started         25m    kubelet  Started container etcd
  Normal  SandboxChanged  19m    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          19m    kubelet  Container image "k8s.gcr.io/etcd:3.4.13-0" already present on machine
  Normal  Created         19m    kubelet  Created container etcd
  Normal  Started         19m    kubelet  Started container etcd
  Normal  SandboxChanged  16m    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          16m    kubelet  Container image "k8s.gcr.io/etcd:3.4.13-0" already present on machine
  Normal  Created         16m    kubelet  Created container etcd
  Normal  Started         16m    kubelet  Started container etcd
  Normal  SandboxChanged  4m26s  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          4m26s  kubelet  Container image "k8s.gcr.io/etcd:3.4.13-0" already present on machine
  Normal  Created         4m26s  kubelet  Created container etcd
  Normal  Started         4m26s  kubelet  Started container etcd


Name:         ingress-nginx-admission-create-v4xt8
Namespace:    kube-system
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Thu, 06 May 2021 14:10:04 +0200
Labels:       app.kubernetes.io/component=admission-webhook
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              controller-uid=1a745bc4-8e48-4106-81c2-17f5714afea3
              job-name=ingress-nginx-admission-create
Annotations:  <none>
Status:       Succeeded
IP:           172.17.0.4
IPs:
  IP:           172.17.0.4
Controlled By:  Job/ingress-nginx-admission-create
Containers:
  create:
    Container ID:  docker://df960e87e85818b152dc30aa90a01c46ace1e641d697546a7af242bfc7f25d1e
    Image:         jettech/kube-webhook-certgen:v1.2.2@sha256:da8122a78d7387909cf34a0f34db0cce672da1379ee4fd57c626a4afe9ac12b7
    Image ID:      docker-pullable://jettech/kube-webhook-certgen@sha256:da8122a78d7387909cf34a0f34db0cce672da1379ee4fd57c626a4afe9ac12b7
    Port:          <none>
    Host Port:     <none>
    Args:
      create
      --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.kube-system.svc
      --namespace=kube-system
      --secret-name=ingress-nginx-admission
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 06 May 2021 14:10:14 +0200
      Finished:     Thu, 06 May 2021 14:10:14 +0200
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-admission-token-t2hvh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  ingress-nginx-admission-token-t2hvh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission-token-t2hvh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>


Name:         ingress-nginx-admission-patch-kc2p4
Namespace:    kube-system
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Thu, 06 May 2021 14:10:04 +0200
Labels:       app.kubernetes.io/component=admission-webhook
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              controller-uid=39187671-8775-46c6-bb02-290b8c766149
              job-name=ingress-nginx-admission-patch
Annotations:  <none>
Status:       Succeeded
IP:           172.17.0.3
IPs:
  IP:           172.17.0.3
Controlled By:  Job/ingress-nginx-admission-patch
Containers:
  patch:
    Container ID:  docker://1563338a679cc8466ad8cfcbbc8342e97c5712cb2fc5563bc2c2777a71f133cb
    Image:         jettech/kube-webhook-certgen:v1.3.0@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689
    Image ID:      docker-pullable://jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689
    Port:          <none>
    Host Port:     <none>
    Args:
      patch
      --webhook-name=ingress-nginx-admission
      --namespace=kube-system
      --patch-mutating=false
      --secret-name=ingress-nginx-admission
      --patch-failure-policy=Fail
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 06 May 2021 14:10:28 +0200
      Finished:     Thu, 06 May 2021 14:10:28 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 06 May 2021 14:10:12 +0200
      Finished:     Thu, 06 May 2021 14:10:12 +0200
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-admission-token-t2hvh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  ingress-nginx-admission-token-t2hvh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission-token-t2hvh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>


Name:           ingress-nginx-controller-6958cdcd97-f2xn2
Namespace:      kube-system
Priority:       0
Node:           <none>
Labels:         addonmanager.kubernetes.io/mode=Reconcile
                app.kubernetes.io/component=controller
                app.kubernetes.io/instance=ingress-nginx
                app.kubernetes.io/name=ingress-nginx
                gcp-auth-skip-secret=true
                pod-template-hash=6958cdcd97
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/ingress-nginx-controller-6958cdcd97
Containers:
  controller:
    Image:       us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f
    Ports:       80/TCP, 443/TCP, 8443/TCP
    Host Ports:  80/TCP, 443/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
      --report-node-internal-ip-address
      --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
      --udp-services-configmap=$(POD_NAMESPACE)/udp-services
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-6958cdcd97-f2xn2 (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-token-znbzh (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  ingress-nginx-token-znbzh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-token-znbzh
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  25m    default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  25m    default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  25m    default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  19m    default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  15m    default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  4m20s  default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.


Name:                 kube-apiserver-minikube
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 minikube/192.168.49.2
Start Time:           Thu, 01 Jul 2021 14:06:42 +0200
Labels:               component=kube-apiserver
                      tier=control-plane
Annotations:          kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.49.2:8443
                      kubernetes.io/config.hash: 01d7e312da0f9c4176daa8464d4d1a50
                      kubernetes.io/config.mirror: 01d7e312da0f9c4176daa8464d4d1a50
                      kubernetes.io/config.seen: 2021-07-01T12:03:31.069699986Z
                      kubernetes.io/config.source: file
Status:               Running
IP:                   192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  Node/minikube
Containers:
  kube-apiserver:
    Container ID:  docker://8cd90aa914a013e303d06cfe8dd590c67c227d766f681eb007feffa138440222
    Image:         k8s.gcr.io/kube-apiserver:v1.20.7
    Image ID:      docker-pullable://k8s.gcr.io/kube-apiserver@sha256:5ab3d676c426bfb272fb7605e6978b90d5676913636a6105688862849961386f
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-apiserver
      --advertise-address=192.168.49.2
      --allow-privileged=true
      --authorization-mode=Node,RBAC
      --client-ca-file=/var/lib/minikube/certs/ca.crt
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
      --enable-bootstrap-token-auth=true
      --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt
      --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt
      --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key
      --etcd-servers=https://127.0.0.1:2379
      --insecure-port=0
      --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt
      --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt
      --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key
      --requestheader-allowed-names=front-proxy-client
      --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
      --requestheader-extra-headers-prefix=X-Remote-Extra-
      --requestheader-group-headers=X-Remote-Group
      --requestheader-username-headers=X-Remote-User
      --secure-port=8443
      --service-account-issuer=https://kubernetes.default.svc.cluster.local
      --service-account-key-file=/var/lib/minikube/certs/sa.pub
      --service-account-signing-key-file=/var/lib/minikube/certs/sa.key
      --service-cluster-ip-range=10.96.0.0/12
      --tls-cert-file=/var/lib/minikube/certs/apiserver.crt
      --tls-private-key-file=/var/lib/minikube/certs/apiserver.key
    State:          Running
      Started:      Thu, 01 Jul 2021 14:18:25 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 01 Jul 2021 14:06:43 +0200
      Finished:     Thu, 01 Jul 2021 14:17:57 +0200
    Ready:          True
    Restart Count:  2
    Requests:
      cpu:        250m
    Liveness:     http-get https://192.168.49.2:8443/livez delay=10s timeout=15s period=10s #success=1 #failure=8
    Readiness:    http-get https://192.168.49.2:8443/readyz delay=0s timeout=15s period=1s #success=1 #failure=3
    Startup:      http-get https://192.168.49.2:8443/livez delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
      /var/lib/minikube/certs from k8s-certs (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/certs
    HostPathType:  DirectoryOrCreate
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute op=Exists
Events:
  Type    Reason          Age    From     Message
  ----    ------          ----   ----     -------
  Normal  Pulled          19m    kubelet  Container image "k8s.gcr.io/kube-apiserver:v1.20.7" already present on machine
  Normal  Created         19m    kubelet  Created container kube-apiserver
  Normal  Started         19m    kubelet  Started container kube-apiserver
  Normal  SandboxChanged  16m    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          16m    kubelet  Container image "k8s.gcr.io/kube-apiserver:v1.20.7" already present on machine
  Normal  Created         16m    kubelet  Created container kube-apiserver
  Normal  Started         16m    kubelet  Started container kube-apiserver
  Normal  SandboxChanged  4m26s  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          4m26s  kubelet  Container image "k8s.gcr.io/kube-apiserver:v1.20.7" already present on machine
  Normal  Created         4m26s  kubelet  Created container kube-apiserver
  Normal  Started         4m26s  kubelet  Started container kube-apiserver


Name:                 kube-controller-manager-minikube
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 minikube/192.168.49.2
Start Time:           Thu, 01 Jul 2021 13:57:28 +0200
Labels:               component=kube-controller-manager
                      tier=control-plane
Annotations:          kubernetes.io/config.hash: c7b8fa13668654de8887eea36ddd7b5b
                      kubernetes.io/config.mirror: c7b8fa13668654de8887eea36ddd7b5b
                      kubernetes.io/config.seen: 2021-07-01T12:03:31.069706367Z
                      kubernetes.io/config.source: file
Status:               Running
IP:                   192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  Node/minikube
Containers:
  kube-controller-manager:
    Container ID:  docker://4b86bfe943e18dea5fb0d1e69e4f535bada54ef2c6247052824b817c0c34cc06
    Image:         k8s.gcr.io/kube-controller-manager:v1.20.7
    Image ID:      docker-pullable://k8s.gcr.io/kube-controller-manager@sha256:eb9b121cbe40cf9016b95cefd34fb9e62c4caf1516188a98b64f091d871a2d46
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-controller-manager
      --allocate-node-cidrs=true
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      --bind-address=127.0.0.1
      --client-ca-file=/var/lib/minikube/certs/ca.crt
      --cluster-cidr=10.244.0.0/16
      --cluster-name=mk
      --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt
      --cluster-signing-key-file=/var/lib/minikube/certs/ca.key
      --controllers=*,bootstrapsigner,tokencleaner
      --kubeconfig=/etc/kubernetes/controller-manager.conf
      --leader-elect=false
      --port=0
      --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
      --root-ca-file=/var/lib/minikube/certs/ca.crt
      --service-account-private-key-file=/var/lib/minikube/certs/sa.key
      --service-cluster-ip-range=10.96.0.0/12
      --use-service-account-credentials=true
    State:          Running
      Started:      Thu, 01 Jul 2021 14:18:25 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 01 Jul 2021 14:06:43 +0200
      Finished:     Thu, 01 Jul 2021 14:17:47 +0200
    Ready:          True
    Restart Count:  2
    Requests:
      cpu:        200m
    Liveness:     http-get https://127.0.0.1:10257/healthz delay=10s timeout=15s period=10s #success=1 #failure=8
    Startup:      http-get https://127.0.0.1:10257/healthz delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/kubernetes/controller-manager.conf from kubeconfig (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/libexec/kubernetes/kubelet-plugins/volume/exec from flexvolume-dir (rw)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
      /var/lib/minikube/certs from k8s-certs (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
  flexvolume-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/certs
    HostPathType:  DirectoryOrCreate
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/controller-manager.conf
    HostPathType:  FileOrCreate
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute op=Exists
Events:
  Type    Reason          Age    From     Message
  ----    ------          ----   ----     -------
  Normal  Pulled          19m    kubelet  Container image "k8s.gcr.io/kube-controller-manager:v1.20.7" already present on machine
  Normal  Created         19m    kubelet  Created container kube-controller-manager
  Normal  Started         19m    kubelet  Started container kube-controller-manager
  Normal  SandboxChanged  16m    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          16m    kubelet  Container image "k8s.gcr.io/kube-controller-manager:v1.20.7" already present on machine
  Normal  Created         16m    kubelet  Created container kube-controller-manager
  Normal  Started         16m    kubelet  Started container kube-controller-manager
  Normal  SandboxChanged  4m26s  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          4m26s  kubelet  Container image "k8s.gcr.io/kube-controller-manager:v1.20.7" already present on machine
  Normal  Created         4m26s  kubelet  Created container kube-controller-manager
  Normal  Started         4m26s  kubelet  Started container kube-controller-manager


Name:         kube-flannel-ds-amd64-tdq2h
Namespace:    kube-system
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Thu, 01 Jul 2021 13:46:12 +0200
Labels:       app=flannel
              controller-revision-hash=6674b6b67c
              pod-template-generation=1
              tier=node
Annotations:  <none>
Status:       Running
IP:           192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  DaemonSet/kube-flannel-ds-amd64
Init Containers:
  install-cni:
    Container ID:  docker://7a5b3bb3282b4aefdc76e5dfa64719a7b8e91e0624a4d9e76d331f2f9169dc1d
    Image:         quay.io/coreos/flannel:v0.12.0-amd64
    Image ID:      docker-pullable://quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 01 Jul 2021 14:18:34 +0200
      Finished:     Thu, 01 Jul 2021 14:18:34 +0200
    Ready:          True
    Restart Count:  4
    Environment:    <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-fsmxs (ro)
Containers:
  kube-flannel:
    Container ID:  docker://b2d041a4268e5acec36d250ab83db15a0711cc20a2277fd86dbd94ba7c40e781
    Image:         quay.io/coreos/flannel:v0.12.0-amd64
    Image ID:      docker-pullable://quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/bin/flanneld
    Args:
      --ip-masq
      --kube-subnet-mgr
    State:          Running
      Started:      Thu, 01 Jul 2021 14:18:34 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 01 Jul 2021 14:06:56 +0200
      Finished:     Thu, 01 Jul 2021 14:17:57 +0200
    Ready:          True
    Restart Count:  5
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:       kube-flannel-ds-amd64-tdq2h (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-fsmxs (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  flannel-token-fsmxs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  flannel-token-fsmxs
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule op=Exists
                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   Scheduled       36m                default-scheduler  Successfully assigned kube-system/kube-flannel-ds-amd64-tdq2h to minikube
  Normal   Pulling         36m                kubelet            Pulling image "quay.io/coreos/flannel:v0.12.0-amd64"
  Normal   Pulled          36m                kubelet            Successfully pulled image "quay.io/coreos/flannel:v0.12.0-amd64" in 4.05929413s
  Normal   Started         36m                kubelet            Started container install-cni
  Normal   Created         36m                kubelet            Created container install-cni
  Normal   Created         36m                kubelet            Created container kube-flannel
  Normal   Pulled          36m                kubelet            Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal   Started         36m                kubelet            Started container kube-flannel
  Normal   SandboxChanged  25m                kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Started         25m                kubelet            Started container install-cni
  Normal   Created         25m                kubelet            Created container install-cni
  Normal   Pulled          25m                kubelet            Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal   Pulled          25m                kubelet            Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal   Created         25m                kubelet            Created container kube-flannel
  Normal   Started         25m                kubelet            Started container kube-flannel
  Normal   SandboxChanged  19m                kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          19m                kubelet            Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal   Created         19m                kubelet            Created container install-cni
  Normal   Started         19m                kubelet            Started container install-cni
  Warning  BackOff         18m                kubelet            Back-off restarting failed container
  Normal   Created         18m (x2 over 19m)  kubelet            Created container kube-flannel
  Normal   Started         18m (x2 over 19m)  kubelet            Started container kube-flannel
  Normal   Pulled          18m (x2 over 19m)  kubelet            Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal   SandboxChanged  15m                kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Created         15m                kubelet            Created container install-cni
  Normal   Pulled          15m                kubelet            Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal   Started         15m                kubelet            Started container install-cni
  Normal   Started         15m                kubelet            Started container kube-flannel
  Normal   Pulled          15m                kubelet            Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal   Created         15m                kubelet            Created container kube-flannel
  Normal   SandboxChanged  4m18s              kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          4m17s              kubelet            Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal   Created         4m17s              kubelet            Created container install-cni
  Normal   Started         4m17s              kubelet            Started container install-cni
  Normal   Pulled          4m17s              kubelet            Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal   Created         4m17s              kubelet            Created container kube-flannel
  Normal   Started         4m17s              kubelet            Started container kube-flannel


Name:                 kube-proxy-b9pxg
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 minikube/192.168.49.2
Start Time:           Thu, 01 Jul 2021 14:04:01 +0200
Labels:               controller-revision-hash=5bd89cc4b7
                      k8s-app=kube-proxy
                      pod-template-generation=2
Annotations:          <none>
Status:               Running
IP:                   192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  docker://597e1342bcee34ab6d7b45f647ab0f8fe1315b329951c01dc3dc5d3cfd31eb2c
    Image:         k8s.gcr.io/kube-proxy:v1.20.7
    Image ID:      docker-pullable://k8s.gcr.io/kube-proxy@sha256:5d2be61150535ed37b7a5fa5a8239f89afee505ab2fae05247447851eed710a8
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    State:          Running
      Started:      Thu, 01 Jul 2021 14:18:31 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 01 Jul 2021 14:06:57 +0200
      Finished:     Thu, 01 Jul 2021 14:17:48 +0200
    Ready:          True
    Restart Count:  2
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-vnsq4 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:
  kube-proxy-token-vnsq4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-proxy-token-vnsq4
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     op=Exists
                 CriticalAddonsOnly op=Exists
                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason          Age    From               Message
  ----    ------          ----   ----               -------
  Normal  Scheduled       18m    default-scheduler  Successfully assigned kube-system/kube-proxy-b9pxg to minikube
  Normal  Pulled          18m    kubelet            Container image "k8s.gcr.io/kube-proxy:v1.20.7" already present on machine
  Normal  Created         18m    kubelet            Created container kube-proxy
  Normal  Started         18m    kubelet            Started container kube-proxy
  Normal  SandboxChanged  15m    kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          15m    kubelet            Container image "k8s.gcr.io/kube-proxy:v1.20.7" already present on machine
  Normal  Created         15m    kubelet            Created container kube-proxy
  Normal  Started         15m    kubelet            Started container kube-proxy
  Normal  SandboxChanged  4m20s  kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          4m20s  kubelet            Container image "k8s.gcr.io/kube-proxy:v1.20.7" already present on machine
  Normal  Created         4m20s  kubelet            Created container kube-proxy
  Normal  Started         4m20s  kubelet            Started container kube-proxy


Name:                 kube-scheduler-minikube
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 minikube/192.168.49.2
Start Time:           Thu, 01 Jul 2021 14:06:42 +0200
Labels:               component=kube-scheduler
                      tier=control-plane
Annotations:          kubernetes.io/config.hash: 82ed17c7f4a56a29330619386941d47e
                      kubernetes.io/config.mirror: 82ed17c7f4a56a29330619386941d47e
                      kubernetes.io/config.seen: 2021-07-01T12:03:31.069708131Z
                      kubernetes.io/config.source: file
Status:               Running
IP:                   192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  Node/minikube
Containers:
  kube-scheduler:
    Container ID:  docker://a7dedf3d32300b555ee3a87c876d8bd348df6700283494ac16540cfc378c3a44
    Image:         k8s.gcr.io/kube-scheduler:v1.20.7
    Image ID:      docker-pullable://k8s.gcr.io/kube-scheduler@sha256:6fdb12580353b6cd59de486ca650e3ba9270bc8d52f1d3052cd9bb1d4f28e189
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-scheduler
      --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
      --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
      --bind-address=127.0.0.1
      --kubeconfig=/etc/kubernetes/scheduler.conf
      --leader-elect=false
      --port=0
    State:          Running
      Started:      Thu, 01 Jul 2021 14:18:25 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 01 Jul 2021 14:06:43 +0200
      Finished:     Thu, 01 Jul 2021 14:17:48 +0200
    Ready:          True
    Restart Count:  2
    Requests:
      cpu:        100m
    Liveness:     http-get https://127.0.0.1:10259/healthz delay=10s timeout=15s period=10s #success=1 #failure=8
    Startup:      http-get https://127.0.0.1:10259/healthz delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:  <none>
    Mounts:
      /etc/kubernetes/scheduler.conf from kubeconfig (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/scheduler.conf
    HostPathType:  FileOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute op=Exists
Events:
  Type    Reason          Age    From     Message
  ----    ------          ----   ----     -------
  Normal  Pulled          19m    kubelet  Container image "k8s.gcr.io/kube-scheduler:v1.20.7" already present on machine
  Normal  Created         19m    kubelet  Created container kube-scheduler
  Normal  Started         19m    kubelet  Started container kube-scheduler
  Normal  SandboxChanged  16m    kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          16m    kubelet  Container image "k8s.gcr.io/kube-scheduler:v1.20.7" already present on machine
  Normal  Created         16m    kubelet  Created container kube-scheduler
  Normal  Started         16m    kubelet  Started container kube-scheduler
  Normal  SandboxChanged  4m26s  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          4m26s  kubelet  Container image "k8s.gcr.io/kube-scheduler:v1.20.7" already present on machine
  Normal  Created         4m26s  kubelet  Created container kube-scheduler
  Normal  Started         4m26s  kubelet  Started container kube-scheduler


Name:         storage-provisioner
Namespace:    kube-system
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Tue, 30 Mar 2021 08:23:26 +0200
Labels:       addonmanager.kubernetes.io/mode=Reconcile
              integration-test=storage-provisioner
Annotations:  <none>
Status:       Running
IP:           192.168.49.2
IPs:
  IP:  192.168.49.2
Containers:
  storage-provisioner:
    Container ID:  docker://8875a27b5b74f925cef2614a22dc1a9f3847071f928c8289af7da5e3f6aa47fb
    Image:         gcr.io/k8s-minikube/storage-provisioner:v5
    Image ID:      docker-pullable://gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
    Port:          <none>
    Host Port:     <none>
    Command:
      /storage-provisioner
    State:          Running
      Started:      Thu, 01 Jul 2021 14:19:14 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 01 Jul 2021 14:18:31 +0200
      Finished:     Thu, 01 Jul 2021 14:19:01 +0200
    Ready:          True
    Restart Count:  217
    Environment:    <none>
    Mounts:
      /tmp from tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from storage-provisioner-token-p5nbd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  tmp:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp
    HostPathType:  Directory
  storage-provisioner-token-p5nbd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  storage-provisioner-token-p5nbd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                    From     Message
  ----     ------          ----                   ----     -------
  Warning  FailedMount     177m                   kubelet  MountVolume.SetUp failed for volume "storage-provisioner-token-p5nbd" : failed to sync secret cache: timed out waiting for the condition
  Normal   SandboxChanged  177m                   kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         176m                   kubelet  Back-off restarting failed container
  Normal   Pulled          176m (x2 over 177m)    kubelet  Container image "gcr.io/k8s-minikube/storage-provisioner:v5" already present on machine
  Normal   Started         176m (x2 over 177m)    kubelet  Started container storage-provisioner
  Normal   Created         176m (x2 over 177m)    kubelet  Created container storage-provisioner
  Normal   SandboxChanged  158m                   kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         158m                   kubelet  Back-off restarting failed container
  Normal   Pulled          158m (x2 over 158m)    kubelet  Container image "gcr.io/k8s-minikube/storage-provisioner:v5" already present on machine
  Normal   Created         158m (x2 over 158m)    kubelet  Created container storage-provisioner
  Normal   Started         158m (x2 over 158m)    kubelet  Started container storage-provisioner
  Warning  FailedMount     36m (x2 over 36m)      kubelet  MountVolume.SetUp failed for volume "storage-provisioner-token-p5nbd" : failed to sync secret cache: timed out waiting for the condition
  Normal   SandboxChanged  36m                    kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         36m                    kubelet  Back-off restarting failed container
  Normal   Pulled          36m (x2 over 36m)      kubelet  Container image "gcr.io/k8s-minikube/storage-provisioner:v5" already present on machine
  Normal   Created         36m (x2 over 36m)      kubelet  Created container storage-provisioner
  Normal   Started         36m (x2 over 36m)      kubelet  Started container storage-provisioner
  Normal   SandboxChanged  25m                    kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         24m                    kubelet  Back-off restarting failed container
  Normal   Created         24m (x2 over 25m)      kubelet  Created container storage-provisioner
  Normal   Pulled          24m (x2 over 25m)      kubelet  Container image "gcr.io/k8s-minikube/storage-provisioner:v5" already present on machine
  Normal   Started         24m (x2 over 25m)      kubelet  Started container storage-provisioner
  Normal   SandboxChanged  19m                    kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         18m                    kubelet  Back-off restarting failed container
  Normal   Created         18m (x2 over 19m)      kubelet  Created container storage-provisioner
  Normal   Started         18m (x2 over 19m)      kubelet  Started container storage-provisioner
  Normal   Pulled          18m (x2 over 19m)      kubelet  Container image "gcr.io/k8s-minikube/storage-provisioner:v5" already present on machine
  Normal   SandboxChanged  15m                    kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         15m                    kubelet  Back-off restarting failed container
  Normal   Pulled          15m (x2 over 15m)      kubelet  Container image "gcr.io/k8s-minikube/storage-provisioner:v5" already present on machine
  Normal   Started         15m (x2 over 15m)      kubelet  Started container storage-provisioner
  Normal   Created         15m (x2 over 15m)      kubelet  Created container storage-provisioner
  Normal   SandboxChanged  4m20s                  kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         3m49s                  kubelet  Back-off restarting failed container
  Normal   Pulled          3m37s (x2 over 4m20s)  kubelet  Container image "gcr.io/k8s-minikube/storage-provisioner:v5" already present on machine
  Normal   Created         3m37s (x2 over 4m20s)  kubelet  Created container storage-provisioner
  Normal   Started         3m37s (x2 over 4m20s)  kubelet  Started container storage-provisioner
@coolbry95

I have a similar error here.

I am running in AKS. A fresh install or even an upgrade will cause the same issue.

@Commifreak
Author

So, I have two issues here? The static files issue and my "cannot find postgres" issue? Great...

Hoping for a fast fix - the second test upgrade succeeded, and both static files and postgres were working. After the next reboot the problem occurred again. Interesting.

@coolbry95

Based on my logs, it either doesn't see postgres immediately or postgres isn't accepting connections yet, even though it should be since it's up and running.

I wonder if the issues are related and whether, if one step fails, they both fail.

@coolbry95

I commented in the other issue I linked before. There is an issue connecting to postgres right away.

@dimatha

dimatha commented Jul 7, 2021

Sometimes it also happens during awxrestore. I didn't have a chance to check whether the service really exists. Maybe it's just a timing issue?
AWX: 19.2.0
Operator: 10.0.0

TASK [Set pg_restore command] **************************************************
task path: /opt/ansible/roles/restore/tasks/postgres.yml:64
ok: [localhost] => {"ansible_facts": {"pg_restore": "pg_restore --clean --if-exists -U awx -h awx-postgres.awx-restore.svc.cluster.local -U awx -d awx -p 5432"}, "changed": false}

TASK [restore : Restore database dump to the new postgresql container] *********
task path: /opt/ansible/roles/restore/tasks/postgres.yml:74
fatal: [localhost]: FAILED! => {"changed": true, "failed_when_result": true, "return_code": 1, "stderr": "pg_restore: error: connection to database \"awx\" failed: could not translate host name \"awx-postgres.awx-restore.svc.cluster.local\" to address: Name or service not known
", "stderr_lines": ["pg_restore: error: connection to database \"awx\" failed: could not translate host name \"awx-postgres.awx-restore.svc.cluster.local\" to address: Name or service not known"], "stdout": "", "stdout_lines": []}

@coolbry95

Yes, this looks like the service does not exist or is not ready yet. I run istio, and the service may not be reachable for a few seconds after the istio sidecar proxy starts. This PR may resolve your issues: ansible/awx#10583.

@tchellomello
Contributor

+1, that should be fixed by the next operator version and the awx release, due to ansible/awx#10583.

@DomPolizzi

Still seeing this in version 30

@dark-vex

I ran into the same issue while upgrading the helm chart from 0.25.0 to 0.30.0: the service name changed from awx-postgres to awx-postgres-13. I changed the hostname in the awx-postgres-configuration configmap to fix it.
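
Roughly what that looks like (a sketch only, assuming the deployment lives in the awx namespace and the resource is named awx-postgres-configuration as above; in some operator versions this is a Secret rather than a ConfigMap, in which case the values are base64-encoded):

# inspect the stored host value
kubectl -n awx get configmap awx-postgres-configuration -o yaml

# point it at the renamed service, e.g. change host: awx-postgres to host: awx-postgres-13
kubectl -n awx edit configmap awx-postgres-configuration

# then restart the awx web/task pods so they pick up the change
# (the exact deployment names depend on the operator version)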

@Raptus1

Raptus1 commented Nov 8, 2022

Strangely enough, I am experiencing this on a Rocky 9 deployment, but not on Rocky 8 with basically the same setup, only using the centos9 docker repo instead.

Using the latest operator and awx.

@dark-vex Can you elaborate on the configmap you are editing? I am not using helm, but would like to try your bugfix/workaround. :)

@rooftopcellist
Member

This issue has become stale. On upgrades involving a postgresql version bump, the postgresql service name will change, as mentioned, as will the host entry in the postgres_configuration_secret.

If you have a custom postgres_configuration_secret with type: managed, you could run into this. That is a pretty niche case, though, because there is no reason to make a custom pg config secret if you are not using an external database.
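
A quick way to verify that the host in that secret still matches the service the operator created (a sketch, assuming the default names for an instance called awx in the awx namespace):

# host the application is configured to use (values in the secret are base64-encoded)
kubectl -n awx get secret awx-postgres-configuration -o jsonpath='{.data.host}' | base64 -d; echo

# service name that actually exists for postgres after the upgrade
kubectl -n awx get svc | grep postgres

If the two differ (for example awx-postgres vs. awx-postgres-13), updating the host entry in the secret and restarting the awx pods should resolve the name lookup error.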

I am going to close this issue. Please open a new issue if you are still experiencing this and I will take a look. Thanks!
