
Minikube returns 404 on dashboard for ingress #12451

Closed
PabloG6 opened this issue Sep 11, 2021 · 2 comments

PabloG6 commented Sep 11, 2021

MacBook Air with M1 chip,
macOS Big Sur 11.5.2
Steps to reproduce the issue:

  1. minikube start --driver=docker
  2. minikube addons enable ingress
  3. kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0
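For context, the steps above appear to follow the standard minikube ingress tutorial, which continues past the deployment by exposing it as a Service and applying an Ingress. A sketch of those remaining steps (not part of the original report; the `example-ingress` name and `hello-world.info` host are the tutorial's example values, assumed here):

```shell
# Expose the "web" deployment created in step 3 as a NodePort service
kubectl expose deployment web --type=NodePort --port=8080

# Create an Ingress routing hello-world.info to the "web" service
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: hello-world.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 8080
EOF

# With the docker driver on macOS the minikube IP is not routable from the
# host, so the ingress is typically reached via `minikube tunnel` (in a
# separate terminal) plus a resolved Host header:
curl --resolve "hello-world.info:80:127.0.0.1" http://hello-world.info
```

If the Ingress resource is missing (the reported steps stop at the deployment), the ingress-nginx controller serves its default backend, which answers 404 — consistent with the symptom in the title.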

Full output of failed command if not minikube start:

* ==> Audit <==

|---------|--------------------------|----------|------------|---------|-------------------------------|-------------------------------|
| Command | Args                     | Profile  | User       | Version | Start Time                    | End Time                      |
|---------|--------------------------|----------|------------|---------|-------------------------------|-------------------------------|
| delete  |                          | minikube | pablogrant | v1.23.0 | Tue, 07 Sep 2021 12:22:35 EST | Tue, 07 Sep 2021 12:22:36 EST |
| start   |                          | minikube | pablogrant | v1.23.0 | Tue, 07 Sep 2021 12:28:14 EST | Tue, 07 Sep 2021 12:29:21 EST |
| addons  | enable ingress           | minikube | pablogrant | v1.23.0 | Tue, 07 Sep 2021 12:29:46 EST | Tue, 07 Sep 2021 12:30:23 EST |
| start   |                          | minikube | pablogrant | v1.23.0 | Tue, 07 Sep 2021 15:55:56 EST | Tue, 07 Sep 2021 15:56:16 EST |
| addons  | enable ingress           | minikube | pablogrant | v1.23.0 | Tue, 07 Sep 2021 15:59:59 EST | Tue, 07 Sep 2021 16:00:00 EST |
| start   |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 17:12:42 EST | Fri, 10 Sep 2021 17:12:59 EST |
| addons  | enable ingress           | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 17:13:45 EST | Fri, 10 Sep 2021 17:13:46 EST |
| addons  | enable ingress           | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 17:18:50 EST | Fri, 10 Sep 2021 17:18:51 EST |
| tunnel  |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 17:22:42 EST | Fri, 10 Sep 2021 17:23:38 EST |
| tunnel  |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 17:24:46 EST | Fri, 10 Sep 2021 17:28:10 EST |
| tunnel  |                          | minikube | root       | v1.23.0 | Fri, 10 Sep 2021 17:28:20 EST | Fri, 10 Sep 2021 17:28:28 EST |
| logs    | --file minikube-logs.txt | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 17:34:29 EST | Fri, 10 Sep 2021 17:34:32 EST |
| addons  | enable ingress           | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 17:36:07 EST | Fri, 10 Sep 2021 17:36:08 EST |
| delete  |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 17:42:25 EST | Fri, 10 Sep 2021 17:42:29 EST |
| start   |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 17:42:53 EST | Fri, 10 Sep 2021 17:43:29 EST |
| stop    |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 17:43:45 EST | Fri, 10 Sep 2021 17:43:56 EST |
| start   | --vm=true                | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 17:44:13 EST | Fri, 10 Sep 2021 17:44:29 EST |
| addons  | enable ingress           | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 17:57:12 EST | Fri, 10 Sep 2021 17:57:45 EST |
| tunnel  |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 18:03:23 EST | Fri, 10 Sep 2021 18:04:29 EST |
| logs    |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 18:08:30 EST | Fri, 10 Sep 2021 18:08:32 EST |
| logs    |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 18:09:11 EST | Fri, 10 Sep 2021 18:09:14 EST |
| ip      |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 18:12:45 EST | Fri, 10 Sep 2021 18:12:45 EST |
| addons  | enable ingress           | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 18:17:47 EST | Fri, 10 Sep 2021 18:17:48 EST |
| logs    |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 18:19:22 EST | Fri, 10 Sep 2021 18:19:24 EST |
| tunnel  |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 18:18:35 EST | Fri, 10 Sep 2021 18:19:36 EST |
| stop    |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 18:26:11 EST | Fri, 10 Sep 2021 18:26:22 EST |
| start   | --vm=true                | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 18:28:54 EST | Fri, 10 Sep 2021 18:29:13 EST |
| logs    | --help                   | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 18:30:21 EST | Fri, 10 Sep 2021 18:30:21 EST |
| service | web --url                | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 18:16:11 EST | Fri, 10 Sep 2021 18:33:26 EST |
| service | web --url                | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 18:34:12 EST | Fri, 10 Sep 2021 18:35:45 EST |
| ip      |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 18:51:58 EST | Fri, 10 Sep 2021 18:51:59 EST |
| tunnel  |                          | minikube | root       | v1.23.0 | Fri, 10 Sep 2021 18:55:32 EST | Fri, 10 Sep 2021 18:55:38 EST |
| tunnel  |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 18:55:42 EST | Fri, 10 Sep 2021 19:00:58 EST |
| delete  |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 19:01:00 EST | Fri, 10 Sep 2021 19:01:04 EST |
| start   |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 19:01:14 EST | Fri, 10 Sep 2021 19:01:50 EST |
| tunnel  |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 19:02:38 EST | Fri, 10 Sep 2021 19:02:47 EST |
| addons  | enable ingress           | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 19:03:06 EST | Fri, 10 Sep 2021 19:03:44 EST |
| tunnel  |                          | minikube | pablogrant | v1.23.0 | Fri, 10 Sep 2021 19:06:27 EST | Fri, 10 Sep 2021 19:08:29 EST |
|---------|--------------------------|----------|------------|---------|-------------------------------|-------------------------------|
  • ==> Last Start <==
  • Log file created at: 2021/09/10 19:01:14
    Running on machine: Pablos-MacBook-Air
    Binary: Built with gc go1.17 for darwin/arm64
    Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    I0910 19:01:14.240996 25326 out.go:298] Setting OutFile to fd 1 ...
    I0910 19:01:14.241081 25326 out.go:350] isatty.IsTerminal(1) = true
    I0910 19:01:14.241082 25326 out.go:311] Setting ErrFile to fd 2...
    I0910 19:01:14.241085 25326 out.go:350] isatty.IsTerminal(2) = true
    I0910 19:01:14.241142 25326 root.go:313] Updating PATH: /Users/pablogrant/.minikube/bin
    I0910 19:01:14.241730 25326 out.go:305] Setting JSON to false
    I0910 19:01:14.268202 25326 start.go:111] hostinfo: {"hostname":"Pablos-MacBook-Air.local","uptime":163331,"bootTime":1631155143,"procs":559,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.5.2","kernelVersion":"20.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"ac4a14a7-8a8e-3ae6-85a6-55e6b2502bf9"}
    W0910 19:01:14.268292 25326 start.go:119] gopshost.Virtualization returned error: not implemented yet
    I0910 19:01:14.288678 25326 out.go:177] 😄 minikube v1.23.0 on Darwin 11.5.2 (arm64)
    I0910 19:01:14.288781 25326 notify.go:169] Checking for updates...
    I0910 19:01:14.288937 25326 driver.go:343] Setting default libvirt URI to qemu:///system
    I0910 19:01:14.288959 25326 global.go:111] Querying for installed drivers using PATH=/Users/pablogrant/.minikube/bin:/opt/homebrew/opt/maven@3.5/bin:/Users/pablogrant/flutter/bin:/opt/homebrew/opt/node@12/bin:/Users/pablogrant/go/bin:/opt/homebrew/opt/erlang@22/bin:/Users/pablogrant/.asdf/shims:/Users/pablogrant/.asdf/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin:/Library/Apple/usr/bin:/Users/pablogrant/.cargo/bin
    I0910 19:01:14.288968 25326 global.go:119] vmwarefusion default: false priority: 1, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:the 'vmwarefusion' driver is no longer available Reason: Fix:Switch to the newer 'vmware' driver by using '--driver=vmware'. This may require first deleting your existing cluster Doc:https://minikube.sigs.k8s.io/docs/drivers/vmware/}
    I0910 19:01:14.410151 25326 docker.go:132] docker version: linux-20.10.8
    I0910 19:01:14.410316 25326 cli_runner.go:115] Run: docker system info --format "{{json .}}"
    I0910 19:01:14.843499 25326 info.go:263] docker info: {ID:ZIRE:QFUH:AOUQ:2ENO:K3TB:WEKZ:2S55:GZ32:KBHZ:L4LX:AP7A:HM42 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:13 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2021-09-11 00:01:14.504982295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.47-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:2085416960 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] 
ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.0.0-rc.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:}}
    I0910 19:01:14.843576 25326 global.go:119] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    I0910 19:01:14.843685 25326 global.go:119] hyperkit default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "hyperkit": executable file not found in $PATH Reason: Fix:Run 'brew install hyperkit' Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/}
    I0910 19:01:14.843756 25326 global.go:119] parallels default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "prlctl": executable file not found in $PATH Reason: Fix:Install Parallels Desktop for Mac Doc:https://minikube.sigs.k8s.io/docs/drivers/parallels/}
    W0910 19:01:15.534150 25326 podman.go:136] podman returned error: exit status 125
    I0910 19:01:15.534443 25326 global.go:119] podman default: true priority: 3, state: {Installed:true Healthy:false Running:false NeedsImprovement:false Error:"podman version --format {{.Server.Version}}" exit status 125: Error: cannot connect to the Podman socket, please verify that Podman REST API service is running: Get "http://d/v3.3.1/libpod/_ping": dial unix ///var/folders/dt/m_txjf8n4pz6lk172r5mvllc0000gn/T/podman-run--1/podman/podman.sock: connect: no such file or directory Reason: Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
    I0910 19:01:15.534484 25326 global.go:119] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    I0910 19:01:16.118072 25326 global.go:119] virtualbox default: true priority: 6, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    I0910 19:01:16.118173 25326 global.go:119] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
    I0910 19:01:16.118202 25326 driver.go:278] not recommending "ssh" due to default: false
    I0910 19:01:16.118206 25326 driver.go:273] not recommending "podman" due to health: "podman version --format {{.Server.Version}}" exit status 125: Error: cannot connect to the Podman socket, please verify that Podman REST API service is running: Get "http://d/v3.3.1/libpod/_ping": dial unix ///var/folders/dt/m_txjf8n4pz6lk172r5mvllc0000gn/T/podman-run--1/podman/podman.sock: connect: no such file or directory
    I0910 19:01:16.118232 25326 driver.go:313] Picked: docker
    I0910 19:01:16.118244 25326 driver.go:314] Alternatives: [virtualbox ssh]
    I0910 19:01:16.118249 25326 driver.go:315] Rejects: [hyperkit parallels podman vmwarefusion vmware]
    I0910 19:01:16.137678 25326 out.go:177] ✨ Automatically selected the docker driver. Other choices: virtualbox, ssh
    I0910 19:01:16.137829 25326 start.go:278] selected driver: docker
    I0910 19:01:16.137831 25326 start.go:751] validating driver "docker" against
    I0910 19:01:16.137840 25326 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    I0910 19:01:16.137970 25326 cli_runner.go:115] Run: docker system info --format "{{json .}}"
    I0910 19:01:16.332833 25326 info.go:263] docker info: {ID:ZIRE:QFUH:AOUQ:2ENO:K3TB:WEKZ:2S55:GZ32:KBHZ:L4LX:AP7A:HM42 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:13 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:47 SystemTime:2021-09-11 00:01:16.238700296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.47-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:2085416960 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] 
ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.0.0-rc.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:}}
    I0910 19:01:16.332929 25326 start_flags.go:264] no existing cluster config was found, will generate one from the flags
    W0910 19:01:16.333006 25326 info.go:50] Unable to get CPU info: no such file or directory
    W0910 19:01:16.333061 25326 start.go:914] could not get system cpu info while verifying memory limits, which might be okay: no such file or directory
    W0910 19:01:16.333066 25326 info.go:50] Unable to get CPU info: no such file or directory
    W0910 19:01:16.333067 25326 start.go:914] could not get system cpu info while verifying memory limits, which might be okay: no such file or directory
    I0910 19:01:16.333071 25326 start_flags.go:345] Using suggested 1988MB memory alloc based on sys=16384MB, container=1988MB
    I0910 19:01:16.333126 25326 start_flags.go:719] Wait components to verify : map[apiserver:true system_pods:true]
    I0910 19:01:16.333386 25326 cni.go:93] Creating CNI manager for ""
    I0910 19:01:16.333392 25326 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
    I0910 19:01:16.333395 25326 start_flags.go:278] config:
    {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.26@sha256:d4aa14fbdc3a28a60632c24af937329ec787b02c89983c6f5498d346860a848c Memory:1988 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
    I0910 19:01:16.370649 25326 out.go:177] 👍 Starting control plane node minikube in cluster minikube
    I0910 19:01:16.370848 25326 cache.go:117] Beginning downloading kic base image for docker with docker
    I0910 19:01:16.387757 25326 out.go:177] 🚜 Pulling base image ...
    I0910 19:01:16.388360 25326 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
    I0910 19:01:16.388369 25326 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.26@sha256:d4aa14fbdc3a28a60632c24af937329ec787b02c89983c6f5498d346860a848c in local docker daemon
    I0910 19:01:16.388382 25326 preload.go:147] Found local preload: /Users/pablogrant/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-arm64.tar.lz4
    I0910 19:01:16.388399 25326 cache.go:56] Caching tarball of preloaded images
    I0910 19:01:16.388513 25326 preload.go:173] Found /Users/pablogrant/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
    I0910 19:01:16.388518 25326 cache.go:59] Finished verifying existence of preloaded tar for v1.22.1 on docker
    I0910 19:01:16.389319 25326 profile.go:148] Saving config to /Users/pablogrant/.minikube/profiles/minikube/config.json ...
    I0910 19:01:16.389354 25326 lock.go:36] WriteFile acquiring /Users/pablogrant/.minikube/profiles/minikube/config.json: {Name:mk2ca0e6be1e0ef8627338d62217e7d15428b8d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
    I0910 19:01:16.522947 25326 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.26@sha256:d4aa14fbdc3a28a60632c24af937329ec787b02c89983c6f5498d346860a848c in local docker daemon, skipping pull
    I0910 19:01:16.522971 25326 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.26@sha256:d4aa14fbdc3a28a60632c24af937329ec787b02c89983c6f5498d346860a848c exists in daemon, skipping load
    I0910 19:01:16.522978 25326 cache.go:205] Successfully downloaded all kic artifacts
    I0910 19:01:16.523140 25326 start.go:313] acquiring machines lock for minikube: {Name:mk335735793aafb1a22839c6d98a307bf84d32dd Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0910 19:01:16.523199 25326 start.go:317] acquired machines lock for "minikube" in 50.25µs
    I0910 19:01:16.523209 25326 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.26@sha256:d4aa14fbdc3a28a60632c24af937329ec787b02c89983c6f5498d346860a848c Memory:1988 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}
    I0910 19:01:16.523272 25326 start.go:126] createHost starting for "" (driver="docker")
    I0910 19:01:16.559744 25326 out.go:204] 🔥 Creating docker container (CPUs=2, Memory=1988MB) ...
    I0910 19:01:16.559937 25326 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
    I0910 19:01:16.559949 25326 client.go:168] LocalClient.Create starting
    I0910 19:01:16.560282 25326 main.go:130] libmachine: Reading certificate data from /Users/pablogrant/.minikube/certs/ca.pem
    I0910 19:01:16.560434 25326 main.go:130] libmachine: Decoding PEM data...
    I0910 19:01:16.560443 25326 main.go:130] libmachine: Parsing certificate...
    I0910 19:01:16.560490 25326 main.go:130] libmachine: Reading certificate data from /Users/pablogrant/.minikube/certs/cert.pem
    I0910 19:01:16.560643 25326 main.go:130] libmachine: Decoding PEM data...
    I0910 19:01:16.560648 25326 main.go:130] libmachine: Parsing certificate...
    I0910 19:01:16.561272 25326 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
    W0910 19:01:16.680440 25326 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
    I0910 19:01:16.680552 25326 network_create.go:255] running [docker network inspect minikube] to gather additional debugging logs...
    I0910 19:01:16.680565 25326 cli_runner.go:115] Run: docker network inspect minikube
    W0910 19:01:16.794060 25326 cli_runner.go:162] docker network inspect minikube returned with exit code 1
    I0910 19:01:16.794086 25326 network_create.go:258] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
    stdout:
    []

stderr:
Error: No such network: minikube
I0910 19:01:16.794108 25326 network_create.go:260] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: minikube

** /stderr **
I0910 19:01:16.794204 25326 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0910 19:01:16.907833 25326 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x140005b8160] misses:0}
I0910 19:01:16.907862 25326 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0910 19:01:16.907875 25326 network_create.go:106] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0910 19:01:16.907975 25326 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0910 19:01:17.061353 25326 network_create.go:90] docker network minikube 192.168.49.0/24 created
I0910 19:01:17.061380 25326 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0910 19:01:17.061515 25326 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0910 19:01:17.175334 25326 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0910 19:01:17.288818 25326 oci.go:102] Successfully created a docker volume minikube
I0910 19:01:17.288938 25326 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.26@sha256:d4aa14fbdc3a28a60632c24af937329ec787b02c89983c6f5498d346860a848c -d /var/lib
I0910 19:01:17.964962 25326 oci.go:106] Successfully prepared a docker volume minikube
I0910 19:01:17.965085 25326 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
I0910 19:01:17.965106 25326 kic.go:179] Starting extracting preloaded images to volume ...
I0910 19:01:17.965248 25326 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0910 19:01:17.965302 25326 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/pablogrant/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.26@sha256:d4aa14fbdc3a28a60632c24af937329ec787b02c89983c6f5498d346860a848c -I lz4 -xf /preloaded.tar -C /extractDir
I0910 19:01:18.172083 25326 cli_runner.go:115] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=1988mb --memory-swap=1988mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.26@sha256:d4aa14fbdc3a28a60632c24af937329ec787b02c89983c6f5498d346860a848c
I0910 19:01:18.835487 25326 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}}
I0910 19:01:18.980157 25326 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0910 19:01:19.109558 25326 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0910 19:01:19.355221 25326 oci.go:281] the created container "minikube" has a running status.
I0910 19:01:19.355438 25326 kic.go:210] Creating ssh key for kic: /Users/pablogrant/.minikube/machines/minikube/id_rsa...
I0910 19:01:19.548409 25326 kic_runner.go:188] docker (temp): /Users/pablogrant/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0910 19:01:19.784689 25326 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0910 19:01:19.918076 25326 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0910 19:01:19.918108 25326 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0910 19:01:35.163479 25326 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/pablogrant/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.26@sha256:d4aa14fbdc3a28a60632c24af937329ec787b02c89983c6f5498d346860a848c -I lz4 -xf /preloaded.tar -C /extractDir: (17.198098292s)
I0910 19:01:35.163526 25326 kic.go:188] duration metric: took 17.198482 seconds to extract preloaded images to volume
I0910 19:01:35.164059 25326 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0910 19:01:35.315921 25326 machine.go:88] provisioning docker machine ...
I0910 19:01:35.315963 25326 ubuntu.go:169] provisioning hostname "minikube"
I0910 19:01:35.316288 25326 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0910 19:01:35.430864 25326 main.go:130] libmachine: Using SSH client type: native
I0910 19:01:35.431159 25326 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x10462c0c0] 0x10462eee0 [] 0s} 127.0.0.1 55693 }
I0910 19:01:35.431169 25326 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0910 19:01:35.570439 25326 main.go:130] libmachine: SSH cmd err, output: <nil>: minikube

I0910 19:01:35.570544 25326 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0910 19:01:35.686437 25326 main.go:130] libmachine: Using SSH client type: native
I0910 19:01:35.686615 25326 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x10462c0c0] 0x10462eee0 [] 0s} 127.0.0.1 55693 }
I0910 19:01:35.686625 25326 main.go:130] libmachine: About to run SSH command:

	if ! grep -xq '.*\sminikube' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
		else 
			echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
		fi
	fi

I0910 19:01:35.798658 25326 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0910 19:01:35.798669 25326 ubuntu.go:175] set auth options {CertDir:/Users/pablogrant/.minikube CaCertPath:/Users/pablogrant/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/pablogrant/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/pablogrant/.minikube/machines/server.pem ServerKeyPath:/Users/pablogrant/.minikube/machines/server-key.pem ClientKeyPath:/Users/pablogrant/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/pablogrant/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/pablogrant/.minikube}
I0910 19:01:35.798688 25326 ubuntu.go:177] setting up certificates
I0910 19:01:35.798693 25326 provision.go:83] configureAuth start
I0910 19:01:35.798771 25326 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0910 19:01:35.928197 25326 provision.go:138] copyHostCerts
I0910 19:01:35.928573 25326 exec_runner.go:145] found /Users/pablogrant/.minikube/cert.pem, removing ...
I0910 19:01:35.928578 25326 exec_runner.go:208] rm: /Users/pablogrant/.minikube/cert.pem
I0910 19:01:35.928896 25326 exec_runner.go:152] cp: /Users/pablogrant/.minikube/certs/cert.pem --> /Users/pablogrant/.minikube/cert.pem (1131 bytes)
I0910 19:01:35.929197 25326 exec_runner.go:145] found /Users/pablogrant/.minikube/key.pem, removing ...
I0910 19:01:35.929199 25326 exec_runner.go:208] rm: /Users/pablogrant/.minikube/key.pem
I0910 19:01:35.929241 25326 exec_runner.go:152] cp: /Users/pablogrant/.minikube/certs/key.pem --> /Users/pablogrant/.minikube/key.pem (1675 bytes)
I0910 19:01:35.929491 25326 exec_runner.go:145] found /Users/pablogrant/.minikube/ca.pem, removing ...
I0910 19:01:35.929497 25326 exec_runner.go:208] rm: /Users/pablogrant/.minikube/ca.pem
I0910 19:01:35.929543 25326 exec_runner.go:152] cp: /Users/pablogrant/.minikube/certs/ca.pem --> /Users/pablogrant/.minikube/ca.pem (1090 bytes)
I0910 19:01:35.929744 25326 provision.go:112] generating server cert: /Users/pablogrant/.minikube/machines/server.pem ca-key=/Users/pablogrant/.minikube/certs/ca.pem private-key=/Users/pablogrant/.minikube/certs/ca-key.pem org=pablogrant.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0910 19:01:35.968278 25326 provision.go:172] copyRemoteCerts
I0910 19:01:35.968494 25326 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0910 19:01:35.968556 25326 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0910 19:01:36.082741 25326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55693 SSHKeyPath:/Users/pablogrant/.minikube/machines/minikube/id_rsa Username:docker}
I0910 19:01:36.163127 25326 ssh_runner.go:319] scp /Users/pablogrant/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1090 bytes)
I0910 19:01:36.177103 25326 ssh_runner.go:319] scp /Users/pablogrant/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
I0910 19:01:36.189262 25326 ssh_runner.go:319] scp /Users/pablogrant/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0910 19:01:36.201885 25326 provision.go:86] duration metric: configureAuth took 403.1865ms
I0910 19:01:36.201895 25326 ubuntu.go:193] setting minikube options for container-runtime
I0910 19:01:36.202108 25326 config.go:177] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
I0910 19:01:36.202194 25326 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0910 19:01:36.316618 25326 main.go:130] libmachine: Using SSH client type: native
I0910 19:01:36.316763 25326 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x10462c0c0] 0x10462eee0 [] 0s} 127.0.0.1 55693 }
I0910 19:01:36.316771 25326 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0910 19:01:36.430561 25326 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay

I0910 19:01:36.430568 25326 ubuntu.go:71] root file system type: overlay
I0910 19:01:36.430696 25326 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0910 19:01:36.430784 25326 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0910 19:01:36.564860 25326 main.go:130] libmachine: Using SSH client type: native
I0910 19:01:36.565006 25326 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x10462c0c0] 0x10462eee0 [] 0s} 127.0.0.1 55693 }
I0910 19:01:36.565058 25326 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0910 19:01:36.685755 25326 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0910 19:01:36.686054 25326 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0910 19:01:36.800390 25326 main.go:130] libmachine: Using SSH client type: native
I0910 19:01:36.800543 25326 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x10462c0c0] 0x10462eee0 [] 0s} 127.0.0.1 55693 }
I0910 19:01:36.800552 25326 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0910 19:01:37.338046 25326 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2021-07-30 19:53:13.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-09-11 00:01:36.683622000 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always

-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0910 19:01:37.338065 25326 machine.go:91] provisioned docker machine in 2.022136333s
I0910 19:01:37.338074 25326 client.go:171] LocalClient.Create took 20.778198s
I0910 19:01:37.338099 25326 start.go:168] duration metric: libmachine.API.Create for "minikube" took 20.778236625s
I0910 19:01:37.338106 25326 start.go:267] post-start starting for "minikube" (driver="docker")
I0910 19:01:37.338110 25326 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0910 19:01:37.338322 25326 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0910 19:01:37.338437 25326 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0910 19:01:37.467237 25326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55693 SSHKeyPath:/Users/pablogrant/.minikube/machines/minikube/id_rsa Username:docker}
I0910 19:01:37.546649 25326 ssh_runner.go:152] Run: cat /etc/os-release
I0910 19:01:37.549759 25326 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0910 19:01:37.549774 25326 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0910 19:01:37.549779 25326 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0910 19:01:37.549781 25326 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0910 19:01:37.549787 25326 filesync.go:126] Scanning /Users/pablogrant/.minikube/addons for local assets ...
I0910 19:01:37.549857 25326 filesync.go:126] Scanning /Users/pablogrant/.minikube/files for local assets ...
I0910 19:01:37.549885 25326 start.go:270] post-start completed in 211.776625ms
I0910 19:01:37.550329 25326 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0910 19:01:37.665618 25326 profile.go:148] Saving config to /Users/pablogrant/.minikube/profiles/minikube/config.json ...
I0910 19:01:37.666013 25326 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0910 19:01:37.666077 25326 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0910 19:01:37.781038 25326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55693 SSHKeyPath:/Users/pablogrant/.minikube/machines/minikube/id_rsa Username:docker}
I0910 19:01:37.860539 25326 start.go:129] duration metric: createHost completed in 21.337335333s
I0910 19:01:37.860559 25326 start.go:80] releasing machines lock for "minikube", held for 21.337427084s
I0910 19:01:37.860673 25326 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0910 19:01:37.974040 25326 ssh_runner.go:152] Run: systemctl --version
I0910 19:01:37.974115 25326 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0910 19:01:37.974692 25326 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
I0910 19:01:37.974851 25326 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0910 19:01:38.096638 25326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55693 SSHKeyPath:/Users/pablogrant/.minikube/machines/minikube/id_rsa Username:docker}
I0910 19:01:38.111039 25326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55693 SSHKeyPath:/Users/pablogrant/.minikube/machines/minikube/id_rsa Username:docker}
I0910 19:01:38.637023 25326 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
I0910 19:01:38.655772 25326 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I0910 19:01:38.665210 25326 cruntime.go:255] skipping containerd shutdown because we are bound to it
I0910 19:01:38.665633 25326 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
I0910 19:01:38.674598 25326 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0910 19:01:38.684077 25326 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
I0910 19:01:38.737074 25326 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
I0910 19:01:38.790147 25326 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I0910 19:01:38.801157 25326 ssh_runner.go:152] Run: sudo systemctl daemon-reload
I0910 19:01:38.848863 25326 ssh_runner.go:152] Run: sudo systemctl start docker
I0910 19:01:38.858225 25326 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I0910 19:01:38.909738 25326 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I0910 19:01:38.998367 25326 out.go:204] 🐳 Preparing Kubernetes v1.22.1 on Docker 20.10.8 ...
I0910 19:01:38.999262 25326 cli_runner.go:115] Run: docker exec -t minikube dig +short host.docker.internal
I0910 19:01:39.243457 25326 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0910 19:01:39.244234 25326 ssh_runner.go:152] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0910 19:01:39.247933 25326 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0910 19:01:39.256236 25326 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0910 19:01:39.371893 25326 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
I0910 19:01:39.371985 25326 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
I0910 19:01:39.396647 25326 docker.go:558] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.22.1
k8s.gcr.io/kube-proxy:v1.22.1
k8s.gcr.io/kube-controller-manager:v1.22.1
k8s.gcr.io/kube-scheduler:v1.22.1
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.5
kubernetesui/dashboard:v2.1.0
kubernetesui/metrics-scraper:v1.0.4

-- /stdout --
I0910 19:01:39.396657 25326 docker.go:489] Images already preloaded, skipping extraction
I0910 19:01:39.397023 25326 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
I0910 19:01:39.422167 25326 docker.go:558] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.22.1
k8s.gcr.io/kube-controller-manager:v1.22.1
k8s.gcr.io/kube-proxy:v1.22.1
k8s.gcr.io/kube-scheduler:v1.22.1
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.5
kubernetesui/dashboard:v2.1.0
kubernetesui/metrics-scraper:v1.0.4

-- /stdout --
I0910 19:01:39.422193 25326 cache_images.go:78] Images are preloaded, skipping loading
I0910 19:01:39.422296 25326 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
I0910 19:01:39.597657 25326 cni.go:93] Creating CNI manager for ""
I0910 19:01:39.597670 25326 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0910 19:01:39.597683 25326 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0910 19:01:39.597706 25326 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0910 19:01:39.597905 25326 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.22.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s

I0910 19:01:39.598037 25326 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.22.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
config:
{KubernetesVersion:v1.22.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0910 19:01:39.598229 25326 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.1
I0910 19:01:39.604816 25326 binaries.go:44] Found k8s binaries, skipping transfer
I0910 19:01:39.604946 25326 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0910 19:01:39.610498 25326 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0910 19:01:39.619888 25326 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0910 19:01:39.629568 25326 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2051 bytes)
I0910 19:01:39.638323 25326 ssh_runner.go:152] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0910 19:01:39.642267 25326 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0910 19:01:39.649614 25326 certs.go:52] Setting up /Users/pablogrant/.minikube/profiles/minikube for IP: 192.168.49.2
I0910 19:01:39.649756 25326 certs.go:179] skipping minikubeCA CA generation: /Users/pablogrant/.minikube/ca.key
I0910 19:01:39.649835 25326 certs.go:179] skipping proxyClientCA CA generation: /Users/pablogrant/.minikube/proxy-client-ca.key
I0910 19:01:39.649917 25326 certs.go:297] generating minikube-user signed cert: /Users/pablogrant/.minikube/profiles/minikube/client.key
I0910 19:01:39.649925 25326 crypto.go:69] Generating cert /Users/pablogrant/.minikube/profiles/minikube/client.crt with IP's: []
I0910 19:01:39.734312 25326 crypto.go:157] Writing cert to /Users/pablogrant/.minikube/profiles/minikube/client.crt ...
I0910 19:01:39.734323 25326 lock.go:36] WriteFile acquiring /Users/pablogrant/.minikube/profiles/minikube/client.crt: {Name:mke166a571bd69c002a370b78bb627b44176f205 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0910 19:01:39.734514 25326 crypto.go:165] Writing key to /Users/pablogrant/.minikube/profiles/minikube/client.key ...
I0910 19:01:39.734517 25326 lock.go:36] WriteFile acquiring /Users/pablogrant/.minikube/profiles/minikube/client.key: {Name:mk0e1c32aa116784f53bd4b520298b26f5465767 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0910 19:01:39.734617 25326 certs.go:297] generating minikube signed cert: /Users/pablogrant/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0910 19:01:39.734619 25326 crypto.go:69] Generating cert /Users/pablogrant/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0910 19:01:39.893277 25326 crypto.go:157] Writing cert to /Users/pablogrant/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0910 19:01:39.893284 25326 lock.go:36] WriteFile acquiring /Users/pablogrant/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk463491715e6145cf1c1b32be2820dd83d00357 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0910 19:01:39.893427 25326 crypto.go:165] Writing key to /Users/pablogrant/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0910 19:01:39.893430 25326 lock.go:36] WriteFile acquiring /Users/pablogrant/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mkf41434939e61719e72f9398164654e303d3940 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0910 19:01:39.893516 25326 certs.go:308] copying /Users/pablogrant/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /Users/pablogrant/.minikube/profiles/minikube/apiserver.crt
I0910 19:01:39.893631 25326 certs.go:312] copying /Users/pablogrant/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /Users/pablogrant/.minikube/profiles/minikube/apiserver.key
I0910 19:01:39.893718 25326 certs.go:297] generating aggregator signed cert: /Users/pablogrant/.minikube/profiles/minikube/proxy-client.key
I0910 19:01:39.893720 25326 crypto.go:69] Generating cert /Users/pablogrant/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0910 19:01:39.966525 25326 crypto.go:157] Writing cert to /Users/pablogrant/.minikube/profiles/minikube/proxy-client.crt ...
I0910 19:01:39.966534 25326 lock.go:36] WriteFile acquiring /Users/pablogrant/.minikube/profiles/minikube/proxy-client.crt: {Name:mk0ff12540238748a5f19517002095cd4407d620 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0910 19:01:39.966712 25326 crypto.go:165] Writing key to /Users/pablogrant/.minikube/profiles/minikube/proxy-client.key ...
I0910 19:01:39.966714 25326 lock.go:36] WriteFile acquiring /Users/pablogrant/.minikube/profiles/minikube/proxy-client.key: {Name:mke7fe786183c609d186a8a2e53a599939d370d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0910 19:01:39.966957 25326 certs.go:376] found cert: /Users/pablogrant/.minikube/certs/Users/pablogrant/.minikube/certs/ca-key.pem (1675 bytes)
I0910 19:01:39.966985 25326 certs.go:376] found cert: /Users/pablogrant/.minikube/certs/Users/pablogrant/.minikube/certs/ca.pem (1090 bytes)
I0910 19:01:39.967005 25326 certs.go:376] found cert: /Users/pablogrant/.minikube/certs/Users/pablogrant/.minikube/certs/cert.pem (1131 bytes)
I0910 19:01:39.967028 25326 certs.go:376] found cert: /Users/pablogrant/.minikube/certs/Users/pablogrant/.minikube/certs/key.pem (1675 bytes)
I0910 19:01:39.967609 25326 ssh_runner.go:319] scp /Users/pablogrant/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0910 19:01:39.995868 25326 ssh_runner.go:319] scp /Users/pablogrant/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0910 19:01:40.008633 25326 ssh_runner.go:319] scp /Users/pablogrant/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0910 19:01:40.020623 25326 ssh_runner.go:319] scp /Users/pablogrant/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0910 19:01:40.032645 25326 ssh_runner.go:319] scp /Users/pablogrant/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0910 19:01:40.044526 25326 ssh_runner.go:319] scp /Users/pablogrant/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0910 19:01:40.056425 25326 ssh_runner.go:319] scp /Users/pablogrant/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0910 19:01:40.068452 25326 ssh_runner.go:319] scp /Users/pablogrant/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0910 19:01:40.080784 25326 ssh_runner.go:319] scp /Users/pablogrant/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0910 19:01:40.093341 25326 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0910 19:01:40.102756 25326 ssh_runner.go:152] Run: openssl version
I0910 19:01:40.109828 25326 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0910 19:01:40.116567 25326 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0910 19:01:40.119947 25326 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Sep 7 17:02 /usr/share/ca-certificates/minikubeCA.pem
I0910 19:01:40.120009 25326 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0910 19:01:40.124168 25326 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0910 19:01:40.130357 25326 kubeadm.go:390] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.26@sha256:d4aa14fbdc3a28a60632c24af937329ec787b02c89983c6f5498d346860a848c Memory:1988 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0910 19:01:40.130456 25326 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0910 19:01:40.154201 25326 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0910 19:01:40.160384 25326 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0910 19:01:40.165666 25326 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0910 19:01:40.165776 25326 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0910 19:01:40.171024 25326 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0910 19:01:40.171056 25326 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0910 19:01:40.770243 25326 out.go:204] ▪ Generating certificates and keys ...
I0910 19:01:42.321350 25326 out.go:204] ▪ Booting up control plane ...
I0910 19:01:48.890295 25326 out.go:204] ▪ Configuring RBAC rules ...
I0910 19:01:49.266681 25326 cni.go:93] Creating CNI manager for ""
I0910 19:01:49.266695 25326 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0910 19:01:49.266723 25326 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0910 19:01:49.267512 25326 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl label nodes minikube.k8s.io/version=v1.23.0 minikube.k8s.io/commit=5931455374810b1bbeb222a9713ae2c756daee10 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_09_10T19_01_49_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0910 19:01:49.267513 25326 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0910 19:01:49.305719 25326 ops.go:34] apiserver oom_adj: -16
I0910 19:01:49.554865 25326 kubeadm.go:985] duration metric: took 288.13275ms to wait for elevateKubeSystemPrivileges.
I0910 19:01:49.554884 25326 kubeadm.go:392] StartCluster complete in 9.424564542s
I0910 19:01:49.554898 25326 settings.go:142] acquiring lock: {Name:mk73c4f6e58d5985eae5a63514f9e14abe2114cf Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0910 19:01:49.555016 25326 settings.go:150] Updating kubeconfig: /Users/pablogrant/.kube/config
I0910 19:01:49.556256 25326 lock.go:36] WriteFile acquiring /Users/pablogrant/.kube/config: {Name:mk9635c6ff1d91971e5fac622eea35ea5f8ebcd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0910 19:01:50.084092 25326 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0910 19:01:50.084130 25326 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}
I0910 19:01:50.084142 25326 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0910 19:01:50.102376 25326 out.go:177] 🔎 Verifying Kubernetes components...
I0910 19:01:50.084214 25326 addons.go:404] enableAddons start: toEnable=map[], additional=[]
I0910 19:01:50.102462 25326 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I0910 19:01:50.102479 25326 addons.go:153] Setting addon storage-provisioner=true in "minikube"
W0910 19:01:50.102485 25326 addons.go:165] addon storage-provisioner should already be in state true
I0910 19:01:50.085032 25326 config.go:177] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
I0910 19:01:50.102518 25326 host.go:66] Checking if "minikube" exists ...
I0910 19:01:50.102596 25326 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
I0910 19:01:50.102629 25326 addons.go:65] Setting default-storageclass=true in profile "minikube"
I0910 19:01:50.102640 25326 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0910 19:01:50.103179 25326 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0910 19:01:50.103331 25326 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0910 19:01:50.126048 25326 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0910 19:01:50.126048 25326 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . /etc/resolv.conf.*/i \ hosts {\n 192.168.65.2 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0910 19:01:50.291076 25326 out.go:177] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0910 19:01:50.291392 25326 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0910 19:01:50.291398 25326 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0910 19:01:50.291513 25326 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0910 19:01:50.295447 25326 api_server.go:50] waiting for apiserver process to appear ...
I0910 19:01:50.295521 25326 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0910 19:01:50.309140 25326 addons.go:153] Setting addon default-storageclass=true in "minikube"
W0910 19:01:50.309148 25326 addons.go:165] addon default-storageclass should already be in state true
I0910 19:01:50.309161 25326 host.go:66] Checking if "minikube" exists ...
I0910 19:01:50.309538 25326 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0910 19:01:50.365134 25326 start.go:729] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
I0910 19:01:50.365206 25326 api_server.go:70] duration metric: took 281.053042ms to wait for apiserver process to appear ...
I0910 19:01:50.365216 25326 api_server.go:86] waiting for apiserver healthz status ...
I0910 19:01:50.365223 25326 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55692/healthz ...
I0910 19:01:50.377692 25326 api_server.go:265] https://127.0.0.1:55692/healthz returned 200:
ok
I0910 19:01:50.380835 25326 api_server.go:139] control plane version: v1.22.1
I0910 19:01:50.380844 25326 api_server.go:129] duration metric: took 15.626709ms to wait for apiserver health ...
I0910 19:01:50.381005 25326 system_pods.go:43] waiting for kube-system pods to appear ...
I0910 19:01:50.389947 25326 system_pods.go:59] 4 kube-system pods found
I0910 19:01:50.389958 25326 system_pods.go:61] "etcd-minikube" [9c4cad46-5807-496c-af26-d075c4645d53] Pending
I0910 19:01:50.389960 25326 system_pods.go:61] "kube-apiserver-minikube" [042a0856-f1da-4fe3-aeea-74d98162bbed] Pending
I0910 19:01:50.389962 25326 system_pods.go:61] "kube-controller-manager-minikube" [970e48d2-402e-4fdd-b41c-10571e5143df] Pending
I0910 19:01:50.389964 25326 system_pods.go:61] "kube-scheduler-minikube" [df3d1567-39dd-41db-ac5e-cd4f6ea5e414] Pending
I0910 19:01:50.389967 25326 system_pods.go:74] duration metric: took 8.958458ms to wait for pod list to return data ...
I0910 19:01:50.389971 25326 kubeadm.go:547] duration metric: took 305.823375ms to wait for : map[apiserver:true system_pods:true] ...
I0910 19:01:50.390012 25326 node_conditions.go:102] verifying NodePressure condition ...
I0910 19:01:50.395481 25326 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
I0910 19:01:50.395490 25326 node_conditions.go:123] node cpu capacity is 4
I0910 19:01:50.395499 25326 node_conditions.go:105] duration metric: took 5.471834ms to run NodePressure ...
I0910 19:01:50.395507 25326 start.go:231] waiting for startup goroutines ...
I0910 19:01:50.430508 25326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55693 SSHKeyPath:/Users/pablogrant/.minikube/machines/minikube/id_rsa Username:docker}
I0910 19:01:50.446132 25326 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
I0910 19:01:50.446143 25326 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0910 19:01:50.446229 25326 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0910 19:01:50.521053 25326 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0910 19:01:50.572006 25326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55693 SSHKeyPath:/Users/pablogrant/.minikube/machines/minikube/id_rsa Username:docker}
I0910 19:01:50.661581 25326 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0910 19:01:50.834114 25326 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0910 19:01:50.834157 25326 addons.go:406] enableAddons completed in 749.950625ms
I0910 19:01:50.893004 25326 start.go:462] kubectl: 1.22.1, cluster: 1.22.1 (minor skew: 0)
I0910 19:01:50.910276 25326 out.go:177] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

  • ==> Docker <==

  • -- Logs begin at Sat 2021-09-11 00:01:19 UTC, end at Sat 2021-09-11 00:30:42 UTC. --
    Sep 11 00:01:19 minikube systemd[1]: Starting Docker Application Container Engine...
    Sep 11 00:01:19 minikube dockerd[214]: time="2021-09-11T00:01:19.719104464Z" level=info msg="Starting up"
    Sep 11 00:01:19 minikube dockerd[214]: time="2021-09-11T00:01:19.720916006Z" level=info msg="parsed scheme: \"unix\"" module=grpc
    Sep 11 00:01:19 minikube dockerd[214]: time="2021-09-11T00:01:19.720987423Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
    Sep 11 00:01:19 minikube dockerd[214]: time="2021-09-11T00:01:19.721063298Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
    Sep 11 00:01:19 minikube dockerd[214]: time="2021-09-11T00:01:19.721103423Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
    Sep 11 00:01:19 minikube dockerd[214]: time="2021-09-11T00:01:19.724034423Z" level=info msg="parsed scheme: \"unix\"" module=grpc
    Sep 11 00:01:19 minikube dockerd[214]: time="2021-09-11T00:01:19.724121756Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
    Sep 11 00:01:19 minikube dockerd[214]: time="2021-09-11T00:01:19.724143464Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
    Sep 11 00:01:19 minikube dockerd[214]: time="2021-09-11T00:01:19.724149173Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
    Sep 11 00:01:19 minikube dockerd[214]: time="2021-09-11T00:01:19.831172839Z" level=warning msg="Your kernel does not support cgroup blkio weight"
    Sep 11 00:01:19 minikube dockerd[214]: time="2021-09-11T00:01:19.831206881Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
    Sep 11 00:01:19 minikube dockerd[214]: time="2021-09-11T00:01:19.832197214Z" level=info msg="Loading containers: start."
    Sep 11 00:01:19 minikube dockerd[214]: time="2021-09-11T00:01:19.925462964Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
    Sep 11 00:01:20 minikube dockerd[214]: time="2021-09-11T00:01:20.042202923Z" level=info msg="Loading containers: done."
    Sep 11 00:01:20 minikube dockerd[214]: time="2021-09-11T00:01:20.068538048Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8
    Sep 11 00:01:20 minikube dockerd[214]: time="2021-09-11T00:01:20.068855548Z" level=info msg="Daemon has completed initialization"
    Sep 11 00:01:20 minikube systemd[1]: Started Docker Application Container Engine.
    Sep 11 00:01:20 minikube dockerd[214]: time="2021-09-11T00:01:20.159740423Z" level=info msg="API listen on /run/docker.sock"
    Sep 11 00:01:36 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
    Sep 11 00:01:37 minikube systemd[1]: Stopping Docker Application Container Engine...
    Sep 11 00:01:37 minikube dockerd[214]: time="2021-09-11T00:01:37.146502542Z" level=info msg="Processing signal 'terminated'"
    Sep 11 00:01:37 minikube dockerd[214]: time="2021-09-11T00:01:37.149676208Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = transport is closing" module=libcontainerd namespace=moby
    Sep 11 00:01:37 minikube dockerd[214]: time="2021-09-11T00:01:37.149730833Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
    Sep 11 00:01:37 minikube dockerd[214]: time="2021-09-11T00:01:37.149686792Z" level=info msg="Daemon shutdown complete"
    Sep 11 00:01:37 minikube dockerd[214]: time="2021-09-11T00:01:37.149820042Z" level=warning msg="Error while testing if containerd API is ready" error="rpc error: code = Canceled desc = context canceled"
    Sep 11 00:01:37 minikube systemd[1]: docker.service: Succeeded.
    Sep 11 00:01:37 minikube systemd[1]: Stopped Docker Application Container Engine.
    Sep 11 00:01:37 minikube systemd[1]: Starting Docker Application Container Engine...
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.204211333Z" level=info msg="Starting up"
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.207080375Z" level=info msg="parsed scheme: \"unix\"" module=grpc
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.207100667Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.207116417Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.207126083Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.208587250Z" level=info msg="parsed scheme: \"unix\"" module=grpc
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.208603875Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.208611625Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.208616792Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.214585833Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.220580917Z" level=warning msg="Your kernel does not support cgroup blkio weight"
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.220597333Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.220719333Z" level=info msg="Loading containers: start."
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.269216250Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.286465625Z" level=info msg="Loading containers: done."
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.311187625Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.311241083Z" level=info msg="Daemon has completed initialization"
    Sep 11 00:01:37 minikube systemd[1]: Started Docker Application Container Engine.
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.339329042Z" level=info msg="API listen on [::]:2376"
    Sep 11 00:01:37 minikube dockerd[463]: time="2021-09-11T00:01:37.342601292Z" level=info msg="API listen on /var/run/docker.sock"
    Sep 11 00:02:33 minikube dockerd[463]: time="2021-09-11T00:02:33.148942554Z" level=info msg="ignoring event" container=d8a09240e02949e113cf612382e8711402cfbd89a936f85eaa618b954a6c8fce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Sep 11 00:03:08 minikube dockerd[463]: time="2021-09-11T00:03:08.396181209Z" level=warning msg="reference for unknown type: " digest="sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068" remote="k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068"
    Sep 11 00:03:13 minikube dockerd[463]: time="2021-09-11T00:03:13.784787878Z" level=info msg="ignoring event" container=9c1c66c3b7ae7bcb250d952531fa811adcd867498be7ed1c0d6bf19711f5503f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Sep 11 00:03:14 minikube dockerd[463]: time="2021-09-11T00:03:14.100912587Z" level=info msg="ignoring event" container=ac7b99a420a01c636328b14890d8867076c6f543ef64b23ed6890fdf2f9addbb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Sep 11 00:03:14 minikube dockerd[463]: time="2021-09-11T00:03:14.141262837Z" level=info msg="ignoring event" container=46c032893d53feee917800babe623a2db9da5d600e02e53b901c8257247d3097 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Sep 11 00:03:15 minikube dockerd[463]: time="2021-09-11T00:03:15.114736504Z" level=info msg="ignoring event" container=bd65c5bbd70a7b802a1965b1883637df4919db93558e84accc5dfff631cdaf34 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Sep 11 00:03:15 minikube dockerd[463]: time="2021-09-11T00:03:15.124740545Z" level=info msg="ignoring event" container=bad82234aab2cdbb9a5213c353b4e5fdf64f2654b6cf5b136ce0e438b51c1d4f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Sep 11 00:03:15 minikube dockerd[463]: time="2021-09-11T00:03:15.490175421Z" level=warning msg="reference for unknown type: " digest="sha256:44a7a06b71187a4529b0a9edee5cc22bdf71b414470eff696c3869ea8d90a695" remote="k8s.gcr.io/ingress-nginx/controller@sha256:44a7a06b71187a4529b0a9edee5cc22bdf71b414470eff696c3869ea8d90a695"

  • ==> container status <==

  • CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
    2e8bf254f2d95 85e6c0cff043f 12 minutes ago Running kubernetes-dashboard 0 4ace92212d815
    d0fa7be33ac90 a262dd7495d90 12 minutes ago Running dashboard-metrics-scraper 0 582535e6bd4ea
    e5fb46cf7dc02 gcr.io/google-samples/hello-app@sha256:1147fcb69f36f717b3bf9e20f68149494865603cdb947594c4a04f1b0de88bee 25 minutes ago Running hello-app 0 f40d2d6dbfc10
    5187f10ca9d4f k8s.gcr.io/ingress-nginx/controller@sha256:44a7a06b71187a4529b0a9edee5cc22bdf71b414470eff696c3869ea8d90a695 26 minutes ago Running controller 0 edbdf49400bd2
    46c032893d53f ff596504f4c1b 27 minutes ago Exited patch 1 bd65c5bbd70a7
    ac7b99a420a01 k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068 27 minutes ago Exited create 0 bad82234aab2c
    48738a106b83d ba04bb24b9575 28 minutes ago Running storage-provisioner 1 2e34c8d672424
    681177b39a816 6d3ffc2696ac2 28 minutes ago Running coredns 0 dc45130ebd054
    dad4f3776c269 d9fa9053808ef 28 minutes ago Running kube-proxy 0 3393944d38531
    d8a09240e0294 ba04bb24b9575 28 minutes ago Exited storage-provisioner 0 2e34c8d672424
    dfc1b17b53a80 4641e56315a27 28 minutes ago Running kube-scheduler 0 b6d583ad6337d
    dcc1473312ab1 d5504eacf2d71 28 minutes ago Running kube-controller-manager 0 f7bdc7ee9d16a
    d76095f1fccb1 7605412e3e072 28 minutes ago Running kube-apiserver 0 bfa7eb8e20e5b
    eb12941eef1f1 2252d5eb703b0 28 minutes ago Running etcd 0 bbcfd891c1367

  • ==> coredns [681177b39a81] <==

  • .:53
    [INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
    CoreDNS-1.8.4
    linux/arm64, go1.16.4, 053c4d5

  • ==> describe nodes <==

  • Name: minikube
    Roles: control-plane,master
    Labels: beta.kubernetes.io/arch=arm64
    beta.kubernetes.io/os=linux
    kubernetes.io/arch=arm64
    kubernetes.io/hostname=minikube
    kubernetes.io/os=linux
    minikube.k8s.io/commit=5931455374810b1bbeb222a9713ae2c756daee10
    minikube.k8s.io/name=minikube
    minikube.k8s.io/updated_at=2021_09_10T19_01_49_0700
    minikube.k8s.io/version=v1.23.0
    node-role.kubernetes.io/control-plane=
    node-role.kubernetes.io/master=
    node.kubernetes.io/exclude-from-external-load-balancers=
    Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
    node.alpha.kubernetes.io/ttl: 0
    volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp: Sat, 11 Sep 2021 00:01:46 +0000
    Taints: <none>
    Unschedulable: false
    Lease:
    HolderIdentity: minikube
    AcquireTime: <unset>
    RenewTime: Sat, 11 Sep 2021 00:30:35 +0000
    Conditions:
    Type Status LastHeartbeatTime LastTransitionTime Reason Message
    ---- ------ ----------------- ------------------ ------ -------
    MemoryPressure False Sat, 11 Sep 2021 00:29:56 +0000 Sat, 11 Sep 2021 00:01:44 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
    DiskPressure False Sat, 11 Sep 2021 00:29:56 +0000 Sat, 11 Sep 2021 00:01:44 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
    PIDPressure False Sat, 11 Sep 2021 00:29:56 +0000 Sat, 11 Sep 2021 00:01:44 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
    Ready True Sat, 11 Sep 2021 00:29:56 +0000 Sat, 11 Sep 2021 00:01:59 +0000 KubeletReady kubelet is posting ready status
    Addresses:
    InternalIP: 192.168.49.2
    Hostname: minikube
    Capacity:
    cpu: 4
    ephemeral-storage: 61255492Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    hugepages-32Mi: 0
    hugepages-64Ki: 0
    memory: 2036540Ki
    pods: 110
    Allocatable:
    cpu: 4
    ephemeral-storage: 61255492Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    hugepages-32Mi: 0
    hugepages-64Ki: 0
    memory: 2036540Ki
    pods: 110
    System Info:
    Machine ID: 215b2fd81b00415893c852bbaaece8ed
    System UUID: 215b2fd81b00415893c852bbaaece8ed
    Boot ID: c9b828e3-84d8-49af-8970-560df6201f16
    Kernel Version: 5.10.47-linuxkit
    OS Image: Ubuntu 20.04.2 LTS
    Operating System: linux
    Architecture: arm64
    Container Runtime Version: docker://20.10.8
    Kubelet Version: v1.22.1
    Kube-Proxy Version: v1.22.1
    PodCIDR: 10.244.0.0/24
    PodCIDRs: 10.244.0.0/24
    Non-terminated Pods: (11 in total)
    Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
    --------- ---- ------------ ---------- --------------- ------------- ---
    default web-79d88c97d6-nsq8p 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 26m
    ingress-nginx ingress-nginx-controller-69bdbc4d57-m4qwv 100m (2%!)(MISSING) 0 (0%!)(MISSING) 90Mi (4%!)(MISSING) 0 (0%!)(MISSING) 27m
    kube-system coredns-78fcd69978-kbz9h 100m (2%!)(MISSING) 0 (0%!)(MISSING) 70Mi (3%!)(MISSING) 170Mi (8%!)(MISSING) 28m
    kube-system etcd-minikube 100m (2%!)(MISSING) 0 (0%!)(MISSING) 100Mi (5%!)(MISSING) 0 (0%!)(MISSING) 28m
    kube-system kube-apiserver-minikube 250m (6%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 28m
    kube-system kube-controller-manager-minikube 200m (5%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 28m
    kube-system kube-proxy-g4qq6 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 28m
    kube-system kube-scheduler-minikube 100m (2%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 28m
    kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 28m
    kubernetes-dashboard dashboard-metrics-scraper-7976b667d4-brwmn 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 12m
    kubernetes-dashboard kubernetes-dashboard-6fcdf4f6d-mcrdh 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 12m
    Allocated resources:
    (Total limits may be over 100 percent, i.e., overcommitted.)
    Resource Requests Limits
    -------- -------- ------
    cpu 850m (21%!)(MISSING) 0 (0%!)(MISSING)
    memory 260Mi (13%!)(MISSING) 170Mi (8%!)(MISSING)
    ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING)
    hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
    hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
    hugepages-32Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
    hugepages-64Ki 0 (0%!)(MISSING) 0 (0%!)(MISSING)
    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Normal NodeHasSufficientMemory 29m (x5 over 29m) kubelet Node minikube status is now: NodeHasSufficientMemory
    Normal NodeHasNoDiskPressure 29m (x5 over 29m) kubelet Node minikube status is now: NodeHasNoDiskPressure
    Normal NodeHasSufficientPID 29m (x4 over 29m) kubelet Node minikube status is now: NodeHasSufficientPID
    Normal Starting 28m kubelet Starting kubelet.
    Normal NodeHasSufficientMemory 28m kubelet Node minikube status is now: NodeHasSufficientMemory
    Normal NodeHasNoDiskPressure 28m kubelet Node minikube status is now: NodeHasNoDiskPressure
    Normal NodeHasSufficientPID 28m kubelet Node minikube status is now: NodeHasSufficientPID
    Normal NodeAllocatableEnforced 28m kubelet Updated Node Allocatable limit across pods
    Normal NodeReady 28m kubelet Node minikube status is now: NodeReady

  • ==> dmesg <==

  • [Sep10 22:11] cacheinfo: Unable to detect cache hierarchy for CPU 0
    [ +5.446397] grpcfuse: loading out-of-tree module taints kernel.
    [Sep10 23:34] hrtimer: interrupt took 4485625 ns

  • ==> etcd [eb12941eef1f] <==

  • 2021-09-11 00:21:25.108281 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:21:35.110354 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:21:44.642362 I | mvcc: store.index: compact 1199
    2021-09-11 00:21:44.643814 I | mvcc: finished scheduled compaction at 1199 (took 1.210208ms)
    2021-09-11 00:21:45.109332 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:21:55.108876 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:22:05.108151 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:22:15.124441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:22:25.126446 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:22:35.125715 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:22:45.137158 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:22:55.134091 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:23:05.136170 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:23:15.135745 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:23:25.136273 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:23:35.134899 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:23:45.135875 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:23:55.134582 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:24:05.134421 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:24:15.136963 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:24:25.135474 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:24:35.135918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:24:45.135750 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:24:55.134855 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:25:05.135004 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:25:15.134761 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:25:25.136625 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:25:35.135495 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:25:45.135026 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:25:55.135517 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:26:05.134716 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:26:15.136271 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:26:25.135818 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:26:35.136189 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:26:44.675945 I | mvcc: store.index: compact 1493
    2021-09-11 00:26:44.678008 I | mvcc: finished scheduled compaction at 1493 (took 1.470792ms)
    2021-09-11 00:26:45.134349 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:26:55.135866 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:27:05.135306 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:27:15.135359 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:27:25.136788 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:27:35.135737 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:27:45.135212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:27:55.136483 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:28:05.135800 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:28:15.134930 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:28:25.135899 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:28:35.135441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:28:45.135575 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:28:55.134455 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:29:05.136966 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:29:15.136215 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:29:25.134408 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:29:35.134459 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:29:45.135753 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:29:55.135368 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:30:05.136754 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:30:15.134708 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:30:25.134861 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-09-11 00:30:35.137383 I | etcdserver/api/etcdhttp: /health OK (status code 200)

  • ==> kernel <==

  • 00:30:43 up 2:19, 0 users, load average: 0.07, 0.22, 0.25
    Linux minikube 5.10.47-linuxkit #1 SMP PREEMPT Sat Jul 3 21:50:16 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
    PRETTY_NAME="Ubuntu 20.04.2 LTS"

  • ==> kube-apiserver [d76095f1fccb] <==

  • W0911 00:01:45.132716 1 genericapiserver.go:455] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
    I0911 00:01:45.135749 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
    I0911 00:01:45.135770 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
    W0911 00:01:45.149641 1 genericapiserver.go:455] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
    I0911 00:01:46.191706 1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
    I0911 00:01:46.191708 1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
    I0911 00:01:46.191796 1 dynamic_serving_content.go:129] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
    I0911 00:01:46.191943 1 secure_serving.go:266] Serving securely on [::]:8443
    I0911 00:01:46.191970 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
    I0911 00:01:46.192070 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
    I0911 00:01:46.192084 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
    I0911 00:01:46.192170 1 dynamic_serving_content.go:129] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
    I0911 00:01:46.192449 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
    I0911 00:01:46.192463 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
    I0911 00:01:46.192694 1 controller.go:83] Starting OpenAPI AggregationController
    I0911 00:01:46.192720 1 available_controller.go:491] Starting AvailableConditionController
    I0911 00:01:46.192723 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
    I0911 00:01:46.192787 1 apf_controller.go:299] Starting API Priority and Fairness config controller
    I0911 00:01:46.193077 1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
    I0911 00:01:46.193113 1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
    I0911 00:01:46.193868 1 autoregister_controller.go:141] Starting autoregister controller
    I0911 00:01:46.193874 1 cache.go:32] Waiting for caches to sync for autoregister controller
    I0911 00:01:46.193900 1 customresource_discovery_controller.go:209] Starting DiscoveryController
    E0911 00:01:46.194892 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg:
    I0911 00:01:46.213877 1 crdregistration_controller.go:111] Starting crd-autoregister controller
    I0911 00:01:46.213896 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
    I0911 00:01:46.213909 1 controller.go:85] Starting OpenAPI controller
    I0911 00:01:46.213918 1 naming_controller.go:291] Starting NamingConditionController
    I0911 00:01:46.213925 1 establishing_controller.go:76] Starting EstablishingController
    I0911 00:01:46.213934 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
    I0911 00:01:46.213951 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
    I0911 00:01:46.213962 1 crd_finalizer.go:266] Starting CRDFinalizer
    I0911 00:01:46.231277 1 shared_informer.go:247] Caches are synced for node_authorizer
    I0911 00:01:46.237758 1 controller.go:611] quota admission added evaluator for: namespaces
    I0911 00:01:46.296003 1 cache.go:39] Caches are synced for autoregister controller
    I0911 00:01:46.296022 1 apf_controller.go:304] Running API Priority and Fairness config worker
    I0911 00:01:46.296029 1 cache.go:39] Caches are synced for AvailableConditionController controller
    I0911 00:01:46.296061 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
    I0911 00:01:46.296066 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
    I0911 00:01:46.314009 1 shared_informer.go:247] Caches are synced for crd-autoregister
    I0911 00:01:47.192736 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
    I0911 00:01:47.192837 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
    I0911 00:01:47.195757 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
    I0911 00:01:47.198890 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
    I0911 00:01:47.198910 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
    I0911 00:01:47.409871 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
    I0911 00:01:47.424547 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
    W0911 00:01:47.549188 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
    I0911 00:01:47.549718 1 controller.go:611] quota admission added evaluator for: endpoints
    I0911 00:01:47.551812 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
    I0911 00:01:48.240764 1 controller.go:611] quota admission added evaluator for: serviceaccounts
    I0911 00:01:49.053501 1 controller.go:611] quota admission added evaluator for: deployments.apps
    I0911 00:01:49.074034 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
    I0911 00:01:49.229456 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
    I0911 00:02:02.347582 1 controller.go:611] quota admission added evaluator for: replicasets.apps
    I0911 00:02:02.606033 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
    I0911 00:03:07.221783 1 controller.go:611] quota admission added evaluator for: jobs.batch
    I0911 00:05:59.995354 1 controller.go:611] quota admission added evaluator for: ingresses.networking.k8s.io
    W0911 00:11:48.340558 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
    W0911 00:29:14.753750 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted

  • ==> kube-controller-manager [dcc1473312ab] <==

  • I0911 00:02:02.544666 1 shared_informer.go:247] Caches are synced for resource quota
    I0911 00:02:02.544674 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
    I0911 00:02:02.544683 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
    I0911 00:02:02.545759 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
    I0911 00:02:02.563094 1 shared_informer.go:247] Caches are synced for resource quota
    I0911 00:02:02.589456 1 shared_informer.go:247] Caches are synced for taint
    I0911 00:02:02.589513 1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone:
    W0911 00:02:02.589545 1 node_lifecycle_controller.go:1013] Missing timestamp for Node minikube. Assuming now as a timestamp.
    I0911 00:02:02.589562 1 node_lifecycle_controller.go:1214] Controller detected that zone is now in state Normal.
    I0911 00:02:02.589725 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
    I0911 00:02:02.589746 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
    I0911 00:02:02.598455 1 shared_informer.go:247] Caches are synced for daemon sets
    I0911 00:02:02.609500 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-g4qq6"
    I0911 00:02:02.962063 1 shared_informer.go:247] Caches are synced for garbage collector
    I0911 00:02:02.962076 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
    I0911 00:02:02.973505 1 shared_informer.go:247] Caches are synced for garbage collector
    I0911 00:03:07.150794 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set ingress-nginx-controller-69bdbc4d57 to 1"
    I0911 00:03:07.159043 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-controller-69bdbc4d57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-controller-69bdbc4d57-m4qwv"
    I0911 00:03:07.223296 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
    I0911 00:03:07.227573 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
    I0911 00:03:07.229397 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
    I0911 00:03:07.229797 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-create--1-jjq2t"
    I0911 00:03:07.230442 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
    I0911 00:03:07.230667 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-patch--1-g7xrv"
    I0911 00:03:07.236069 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
    I0911 00:03:07.236126 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
    I0911 00:03:07.236850 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
    I0911 00:03:07.236985 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
    I0911 00:03:07.257423 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
    I0911 00:03:07.261738 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
    I0911 00:03:14.026005 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
    I0911 00:03:15.070166 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
    I0911 00:03:15.070412 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
    I0911 00:03:15.082411 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
    I0911 00:03:15.089735 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
    I0911 00:03:15.089995 1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
    I0911 00:03:15.094707 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
    I0911 00:03:16.255376 1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
    I0911 00:04:43.205936 1 event.go:291] "Event occurred" object="default/web" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set web-79d88c97d6 to 1"
    I0911 00:04:43.211086 1 event.go:291] "Event occurred" object="default/web-79d88c97d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: web-79d88c97d6-nsq8p"
    I0911 00:17:52.944391 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-7976b667d4 to 1"
    I0911 00:17:52.951301 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
    I0911 00:17:52.954538 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7976b667d4" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods "dashboard-metrics-scraper-7976b667d4-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found"
    I0911 00:17:52.957781 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found"
    E0911 00:17:52.958351 1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-7976b667d4" failed with pods "dashboard-metrics-scraper-7976b667d4-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
    E0911 00:17:52.963643 1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
    E0911 00:17:52.963661 1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-7976b667d4" failed with pods "dashboard-metrics-scraper-7976b667d4-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
    I0911 00:17:52.963698 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7976b667d4" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods "dashboard-metrics-scraper-7976b667d4-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found"
    I0911 00:17:52.966517 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found"
    E0911 00:17:52.966554 1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
    E0911 00:17:52.970756 1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-7976b667d4" failed with pods "dashboard-metrics-scraper-7976b667d4-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
    I0911 00:17:52.970792 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7976b667d4" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods "dashboard-metrics-scraper-7976b667d4-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found"
    E0911 00:17:52.972889 1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
    I0911 00:17:52.972912 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found"
    E0911 00:17:52.988530 1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-7976b667d4" failed with pods "dashboard-metrics-scraper-7976b667d4-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
    I0911 00:17:52.988583 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7976b667d4" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods "dashboard-metrics-scraper-7976b667d4-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found"
    E0911 00:17:52.995721 1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
    I0911 00:17:52.995766 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found"
    I0911 00:17:53.035517 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7976b667d4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7976b667d4-brwmn"
    I0911 00:17:53.047251 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-mcrdh"

  • ==> kube-proxy [dad4f3776c26] <==

  • I0911 00:02:03.196910 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
    I0911 00:02:03.196960 1 server_others.go:140] Detected node IP 192.168.49.2
    W0911 00:02:03.196980 1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
    I0911 00:02:03.209898 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
    I0911 00:02:03.209924 1 server_others.go:212] Using iptables Proxier.
    I0911 00:02:03.209930 1 server_others.go:219] creating dualStackProxier for iptables.
    W0911 00:02:03.209939 1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
    I0911 00:02:03.210689 1 server.go:649] Version: v1.22.1
    I0911 00:02:03.211629 1 config.go:315] Starting service config controller
    I0911 00:02:03.211656 1 shared_informer.go:240] Waiting for caches to sync for service config
    I0911 00:02:03.211674 1 config.go:224] Starting endpoint slice config controller
    I0911 00:02:03.211682 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
    E0911 00:02:03.214431 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.16a39b8dc38ae0fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc0471a5ecc9cce44, ext:85611709, loc:(*time.Location)(0x26a8e60)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-minikube", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "minikube.16a39b8dc38ae0fd" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
    I0911 00:02:03.312316 1 shared_informer.go:247] Caches are synced for endpoint slice config
    I0911 00:02:03.312317 1 shared_informer.go:247] Caches are synced for service config

  • ==> kube-scheduler [dfc1b17b53a8] <==

  • I0911 00:01:44.046896 1 serving.go:347] Generated self-signed cert in-memory
    W0911 00:01:46.231444 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
    W0911 00:01:46.231591 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
    W0911 00:01:46.231698 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
    W0911 00:01:46.231711 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
    I0911 00:01:46.245792 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
    I0911 00:01:46.245856 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
    I0911 00:01:46.246365 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
    I0911 00:01:46.246425 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
    E0911 00:01:46.246996 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
    E0911 00:01:46.247942 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
    E0911 00:01:46.248553 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
    E0911 00:01:46.248580 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
    E0911 00:01:46.248683 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
    E0911 00:01:46.248733 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
    E0911 00:01:46.248786 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
    E0911 00:01:46.248822 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
    E0911 00:01:46.248947 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
    E0911 00:01:46.248993 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
    E0911 00:01:46.249036 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
    E0911 00:01:46.249898 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
    E0911 00:01:46.249961 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
    E0911 00:01:46.249985 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
    E0911 00:01:46.250028 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
    E0911 00:01:47.115885 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
    E0911 00:01:47.224096 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
    E0911 00:01:47.334894 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
    I0911 00:01:47.647439 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file

  • ==> kubelet <==

  • -- Logs begin at Sat 2021-09-11 00:01:19 UTC, end at Sat 2021-09-11 00:30:43 UTC. --
    Sep 11 00:03:07 minikube kubelet[2267]: I0911 00:03:07.368736 2267 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume "kube-api-access-lczqs" (UniqueName: "kubernetes.io/projected/7877643c-56b5-4fc9-9d20-eb77d692db95-kube-api-access-lczqs") pod "ingress-nginx-admission-create--1-jjq2t" (UID: "7877643c-56b5-4fc9-9d20-eb77d692db95") "
    Sep 11 00:03:07 minikube kubelet[2267]: E0911 00:03:07.370074 2267 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
    Sep 11 00:03:07 minikube kubelet[2267]: E0911 00:03:07.370245 2267 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/5a18cb87-6a94-44b8-a41a-b42437140a9a-webhook-cert podName:5a18cb87-6a94-44b8-a41a-b42437140a9a nodeName:}" failed. No retries permitted until 2021-09-11 00:03:07.870218208 +0000 UTC m=+78.831384953 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/5a18cb87-6a94-44b8-a41a-b42437140a9a-webhook-cert") pod "ingress-nginx-controller-69bdbc4d57-m4qwv" (UID: "5a18cb87-6a94-44b8-a41a-b42437140a9a") : secret "ingress-nginx-admission" not found
    Sep 11 00:03:07 minikube kubelet[2267]: E0911 00:03:07.876246 2267 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
    Sep 11 00:03:07 minikube kubelet[2267]: E0911 00:03:07.876310 2267 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/5a18cb87-6a94-44b8-a41a-b42437140a9a-webhook-cert podName:5a18cb87-6a94-44b8-a41a-b42437140a9a nodeName:}" failed. No retries permitted until 2021-09-11 00:03:08.876297209 +0000 UTC m=+79.837463912 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/5a18cb87-6a94-44b8-a41a-b42437140a9a-webhook-cert") pod "ingress-nginx-controller-69bdbc4d57-m4qwv" (UID: "5a18cb87-6a94-44b8-a41a-b42437140a9a") : secret "ingress-nginx-admission" not found
    Sep 11 00:03:07 minikube kubelet[2267]: I0911 00:03:07.990230 2267 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="bad82234aab2cdbb9a5213c353b4e5fdf64f2654b6cf5b136ce0e438b51c1d4f"
    Sep 11 00:03:07 minikube kubelet[2267]: I0911 00:03:07.990416 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-patch--1-g7xrv through plugin: invalid network status for"
    Sep 11 00:03:07 minikube kubelet[2267]: I0911 00:03:07.992460 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-create--1-jjq2t through plugin: invalid network status for"
    Sep 11 00:03:07 minikube kubelet[2267]: I0911 00:03:07.992515 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-patch--1-g7xrv through plugin: invalid network status for"
    Sep 11 00:03:07 minikube kubelet[2267]: I0911 00:03:07.994247 2267 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="bd65c5bbd70a7b802a1965b1883637df4919db93558e84accc5dfff631cdaf34"
    Sep 11 00:03:08 minikube kubelet[2267]: E0911 00:03:08.882365 2267 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
    Sep 11 00:03:08 minikube kubelet[2267]: E0911 00:03:08.882451 2267 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/5a18cb87-6a94-44b8-a41a-b42437140a9a-webhook-cert podName:5a18cb87-6a94-44b8-a41a-b42437140a9a nodeName:}" failed. No retries permitted until 2021-09-11 00:03:10.882436209 +0000 UTC m=+81.843602954 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/5a18cb87-6a94-44b8-a41a-b42437140a9a-webhook-cert") pod "ingress-nginx-controller-69bdbc4d57-m4qwv" (UID: "5a18cb87-6a94-44b8-a41a-b42437140a9a") : secret "ingress-nginx-admission" not found
    Sep 11 00:03:08 minikube kubelet[2267]: I0911 00:03:08.997676 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-patch--1-g7xrv through plugin: invalid network status for"
    Sep 11 00:03:08 minikube kubelet[2267]: I0911 00:03:08.998730 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-create--1-jjq2t through plugin: invalid network status for"
    Sep 11 00:03:10 minikube kubelet[2267]: E0911 00:03:10.895212 2267 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
    Sep 11 00:03:10 minikube kubelet[2267]: E0911 00:03:10.895345 2267 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/5a18cb87-6a94-44b8-a41a-b42437140a9a-webhook-cert podName:5a18cb87-6a94-44b8-a41a-b42437140a9a nodeName:}" failed. No retries permitted until 2021-09-11 00:03:14.895331043 +0000 UTC m=+85.856497788 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/5a18cb87-6a94-44b8-a41a-b42437140a9a-webhook-cert") pod "ingress-nginx-controller-69bdbc4d57-m4qwv" (UID: "5a18cb87-6a94-44b8-a41a-b42437140a9a") : secret "ingress-nginx-admission" not found
    Sep 11 00:03:14 minikube kubelet[2267]: I0911 00:03:14.018259 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-patch--1-g7xrv through plugin: invalid network status for"
    Sep 11 00:03:14 minikube kubelet[2267]: I0911 00:03:14.019966 2267 scope.go:110] "RemoveContainer" containerID="9c1c66c3b7ae7bcb250d952531fa811adcd867498be7ed1c0d6bf19711f5503f"
    Sep 11 00:03:14 minikube kubelet[2267]: I0911 00:03:14.022157 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-create--1-jjq2t through plugin: invalid network status for"
    Sep 11 00:03:15 minikube kubelet[2267]: I0911 00:03:15.059364 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-patch--1-g7xrv through plugin: invalid network status for"
    Sep 11 00:03:15 minikube kubelet[2267]: I0911 00:03:15.062347 2267 scope.go:110] "RemoveContainer" containerID="9c1c66c3b7ae7bcb250d952531fa811adcd867498be7ed1c0d6bf19711f5503f"
    Sep 11 00:03:15 minikube kubelet[2267]: I0911 00:03:15.062512 2267 scope.go:110] "RemoveContainer" containerID="46c032893d53feee917800babe623a2db9da5d600e02e53b901c8257247d3097"
    Sep 11 00:03:15 minikube kubelet[2267]: I0911 00:03:15.080339 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-create--1-jjq2t through plugin: invalid network status for"
    Sep 11 00:03:15 minikube kubelet[2267]: I0911 00:03:15.083648 2267 scope.go:110] "RemoveContainer" containerID="ac7b99a420a01c636328b14890d8867076c6f543ef64b23ed6890fdf2f9addbb"
    Sep 11 00:03:15 minikube kubelet[2267]: I0911 00:03:15.231825 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-controller-69bdbc4d57-m4qwv through plugin: invalid network status for"
    Sep 11 00:03:15 minikube kubelet[2267]: I0911 00:03:15.232277 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-controller-69bdbc4d57-m4qwv through plugin: invalid network status for"
    Sep 11 00:03:15 minikube kubelet[2267]: I0911 00:03:15.232743 2267 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="edbdf49400bd28add6dca299686d7c0a8dcbfa6c0a2de32ec36f4eca4a4cd685"
    Sep 11 00:03:16 minikube kubelet[2267]: I0911 00:03:16.244584 2267 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="bad82234aab2cdbb9a5213c353b4e5fdf64f2654b6cf5b136ce0e438b51c1d4f"
    Sep 11 00:03:16 minikube kubelet[2267]: I0911 00:03:16.245610 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-controller-69bdbc4d57-m4qwv through plugin: invalid network status for"
    Sep 11 00:03:16 minikube kubelet[2267]: I0911 00:03:16.248789 2267 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="bd65c5bbd70a7b802a1965b1883637df4919db93558e84accc5dfff631cdaf34"
    Sep 11 00:03:17 minikube kubelet[2267]: I0911 00:03:17.336078 2267 reconciler.go:196] "operationExecutor.UnmountVolume started for volume "kube-api-access-6ng25" (UniqueName: "kubernetes.io/projected/198310a9-8d67-4be8-bb06-67cde21aaec7-kube-api-access-6ng25") pod "198310a9-8d67-4be8-bb06-67cde21aaec7" (UID: "198310a9-8d67-4be8-bb06-67cde21aaec7") "
    Sep 11 00:03:17 minikube kubelet[2267]: I0911 00:03:17.336131 2267 reconciler.go:196] "operationExecutor.UnmountVolume started for volume "kube-api-access-lczqs" (UniqueName: "kubernetes.io/projected/7877643c-56b5-4fc9-9d20-eb77d692db95-kube-api-access-lczqs") pod "7877643c-56b5-4fc9-9d20-eb77d692db95" (UID: "7877643c-56b5-4fc9-9d20-eb77d692db95") "
    Sep 11 00:03:17 minikube kubelet[2267]: I0911 00:03:17.338526 2267 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/198310a9-8d67-4be8-bb06-67cde21aaec7-kube-api-access-6ng25" (OuterVolumeSpecName: "kube-api-access-6ng25") pod "198310a9-8d67-4be8-bb06-67cde21aaec7" (UID: "198310a9-8d67-4be8-bb06-67cde21aaec7"). InnerVolumeSpecName "kube-api-access-6ng25". PluginName "kubernetes.io/projected", VolumeGidValue ""
    Sep 11 00:03:17 minikube kubelet[2267]: I0911 00:03:17.338558 2267 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7877643c-56b5-4fc9-9d20-eb77d692db95-kube-api-access-lczqs" (OuterVolumeSpecName: "kube-api-access-lczqs") pod "7877643c-56b5-4fc9-9d20-eb77d692db95" (UID: "7877643c-56b5-4fc9-9d20-eb77d692db95"). InnerVolumeSpecName "kube-api-access-lczqs". PluginName "kubernetes.io/projected", VolumeGidValue ""
    Sep 11 00:03:17 minikube kubelet[2267]: I0911 00:03:17.437082 2267 reconciler.go:319] "Volume detached for volume "kube-api-access-6ng25" (UniqueName: "kubernetes.io/projected/198310a9-8d67-4be8-bb06-67cde21aaec7-kube-api-access-6ng25") on node "minikube" DevicePath """
    Sep 11 00:03:17 minikube kubelet[2267]: I0911 00:03:17.437114 2267 reconciler.go:319] "Volume detached for volume "kube-api-access-lczqs" (UniqueName: "kubernetes.io/projected/7877643c-56b5-4fc9-9d20-eb77d692db95-kube-api-access-lczqs") on node "minikube" DevicePath """
    Sep 11 00:03:44 minikube kubelet[2267]: I0911 00:03:44.359162 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-controller-69bdbc4d57-m4qwv through plugin: invalid network status for"
    Sep 11 00:04:43 minikube kubelet[2267]: I0911 00:04:43.216769 2267 topology_manager.go:200] "Topology Admit Handler"
    Sep 11 00:04:43 minikube kubelet[2267]: I0911 00:04:43.314138 2267 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume "kube-api-access-chfkk" (UniqueName: "kubernetes.io/projected/2b2c3b9a-f97b-4f86-ba30-a15e2b8bc6be-kube-api-access-chfkk") pod "web-79d88c97d6-nsq8p" (UID: "2b2c3b9a-f97b-4f86-ba30-a15e2b8bc6be") "
    Sep 11 00:04:43 minikube kubelet[2267]: I0911 00:04:43.732162 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/web-79d88c97d6-nsq8p through plugin: invalid network status for"
    Sep 11 00:04:43 minikube kubelet[2267]: I0911 00:04:43.831814 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/web-79d88c97d6-nsq8p through plugin: invalid network status for"
    Sep 11 00:04:46 minikube kubelet[2267]: I0911 00:04:46.847395 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/web-79d88c97d6-nsq8p through plugin: invalid network status for"
    Sep 11 00:04:47 minikube kubelet[2267]: I0911 00:04:47.857009 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/web-79d88c97d6-nsq8p through plugin: invalid network status for"
    Sep 11 00:06:49 minikube kubelet[2267]: W0911 00:06:49.431829 2267 sysinfo.go:203] Nodes topology is not available, providing CPU topology
    Sep 11 00:11:49 minikube kubelet[2267]: W0911 00:11:49.422681 2267 sysinfo.go:203] Nodes topology is not available, providing CPU topology
    Sep 11 00:16:49 minikube kubelet[2267]: W0911 00:16:49.419798 2267 sysinfo.go:203] Nodes topology is not available, providing CPU topology
    Sep 11 00:17:53 minikube kubelet[2267]: I0911 00:17:53.046539 2267 topology_manager.go:200] "Topology Admit Handler"
    Sep 11 00:17:53 minikube kubelet[2267]: I0911 00:17:53.095605 2267 topology_manager.go:200] "Topology Admit Handler"
    Sep 11 00:17:53 minikube kubelet[2267]: I0911 00:17:53.104723 2267 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/a630a7d3-9b0b-4e23-b9ab-53929f9fff2d-tmp-volume") pod "dashboard-metrics-scraper-7976b667d4-brwmn" (UID: "a630a7d3-9b0b-4e23-b9ab-53929f9fff2d") "
    Sep 11 00:17:53 minikube kubelet[2267]: I0911 00:17:53.105141 2267 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume "kube-api-access-7q2sd" (UniqueName: "kubernetes.io/projected/a630a7d3-9b0b-4e23-b9ab-53929f9fff2d-kube-api-access-7q2sd") pod "dashboard-metrics-scraper-7976b667d4-brwmn" (UID: "a630a7d3-9b0b-4e23-b9ab-53929f9fff2d") "
    Sep 11 00:17:53 minikube kubelet[2267]: I0911 00:17:53.105182 2267 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/6c1a2c1f-f614-4b02-a371-2c9fdce69670-tmp-volume") pod "kubernetes-dashboard-6fcdf4f6d-mcrdh" (UID: "6c1a2c1f-f614-4b02-a371-2c9fdce69670") "
    Sep 11 00:17:53 minikube kubelet[2267]: I0911 00:17:53.105197 2267 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume "kube-api-access-nmspk" (UniqueName: "kubernetes.io/projected/6c1a2c1f-f614-4b02-a371-2c9fdce69670-kube-api-access-nmspk") pod "kubernetes-dashboard-6fcdf4f6d-mcrdh" (UID: "6c1a2c1f-f614-4b02-a371-2c9fdce69670") "
    Sep 11 00:17:53 minikube kubelet[2267]: W0911 00:17:53.112533 2267 container.go:586] Failed to update stats for container "/kubepods/besteffort/poda630a7d3-9b0b-4e23-b9ab-53929f9fff2d": /sys/fs/cgroup/cpuset/kubepods/besteffort/poda630a7d3-9b0b-4e23-b9ab-53929f9fff2d/cpuset.cpus found to be empty, continuing to push stats
    Sep 11 00:17:53 minikube kubelet[2267]: I0911 00:17:53.708456 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7976b667d4-brwmn through plugin: invalid network status for"
    Sep 11 00:17:53 minikube kubelet[2267]: I0911 00:17:53.708474 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-mcrdh through plugin: invalid network status for"
    Sep 11 00:17:54 minikube kubelet[2267]: I0911 00:17:54.436261 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-mcrdh through plugin: invalid network status for"
    Sep 11 00:17:54 minikube kubelet[2267]: I0911 00:17:54.440006 2267 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7976b667d4-brwmn through plugin: invalid network status for"
    Sep 11 00:18:01 minikube kubelet[2267]: E0911 00:18:01.817598 2267 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: ["/kubepods/besteffort/poda630a7d3-9b0b-4e23-b9ab-53929f9fff2d": RecentStats: unable to find data in memory cache]"
    Sep 11 00:21:49 minikube kubelet[2267]: W0911 00:21:49.418708 2267 sysinfo.go:203] Nodes topology is not available, providing CPU topology
    Sep 11 00:26:49 minikube kubelet[2267]: W0911 00:26:49.445031 2267 sysinfo.go:203] Nodes topology is not available, providing CPU topology

  • ==> kubernetes-dashboard [2e8bf254f2d9] <==

  • 2021/09/11 00:28:51 Found 1 endpoints related to web service in default namespace
    2021/09/11 00:28:51 [2021-09-11T00:28:51Z] Outcoming response to 127.0.0.1 with 200 status code
    2021/09/11 00:28:51 received 0 resources from sidecar instead of 1
    2021/09/11 00:28:51 received 0 resources from sidecar instead of 1
    2021/09/11 00:28:51 Getting pod metrics
    2021/09/11 00:28:51 received 0 resources from sidecar instead of 1
    2021/09/11 00:28:51 received 0 resources from sidecar instead of 1
    2021/09/11 00:28:51 Skipping metric because of error: Metric label not set.
    2021/09/11 00:28:51 Skipping metric because of error: Metric label not set.
    2021/09/11 00:28:51 [2021-09-11T00:28:51Z] Outcoming response to 127.0.0.1 with 200 status code
    2021/09/11 00:29:01 [2021-09-11T00:29:01Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
    2021/09/11 00:29:01 Getting list of namespaces
    2021/09/11 00:29:01 [2021-09-11T00:29:01Z] Incoming HTTP/1.1 GET /api/v1/service/default/web request from 127.0.0.1:
    2021/09/11 00:29:01 Getting details of web service in default namespace
    2021/09/11 00:29:01 [2021-09-11T00:29:01Z] Incoming HTTP/1.1 GET /api/v1/service/default/web/event?itemsPerPage=10&page=1 request from 127.0.0.1:
    2021/09/11 00:29:01 [2021-09-11T00:29:01Z] Incoming HTTP/1.1 GET /api/v1/service/default/web/pod?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1:
    2021/09/11 00:29:01 Found 1 events related to web service in default namespace
    2021/09/11 00:29:01 [2021-09-11T00:29:01Z] Outcoming response to 127.0.0.1 with 200 status code
    2021/09/11 00:29:01 [2021-09-11T00:29:01Z] Outcoming response to 127.0.0.1 with 200 status code
    2021/09/11 00:29:01 Found 1 endpoints related to web service in default namespace
    2021/09/11 00:29:01 [2021-09-11T00:29:01Z] Outcoming response to 127.0.0.1 with 200 status code
    2021/09/11 00:29:01 received 0 resources from sidecar instead of 1
    2021/09/11 00:29:01 received 0 resources from sidecar instead of 1
    2021/09/11 00:29:01 Getting pod metrics
    2021/09/11 00:29:01 received 0 resources from sidecar instead of 1
    2021/09/11 00:29:01 received 0 resources from sidecar instead of 1
    2021/09/11 00:29:01 Skipping metric because of error: Metric label not set.
    2021/09/11 00:29:01 Skipping metric because of error: Metric label not set.
    2021/09/11 00:29:01 [2021-09-11T00:29:01Z] Outcoming response to 127.0.0.1 with 200 status code
    2021/09/11 00:29:06 [2021-09-11T00:29:06Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
    2021/09/11 00:29:06 Getting list of namespaces
    2021/09/11 00:29:06 [2021-09-11T00:29:06Z] Incoming HTTP/1.1 GET /api/v1/service/default/web request from 127.0.0.1:
    2021/09/11 00:29:06 Getting details of web service in default namespace
    2021/09/11 00:29:06 [2021-09-11T00:29:06Z] Incoming HTTP/1.1 GET /api/v1/service/default/web/pod?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1:
    2021/09/11 00:29:06 [2021-09-11T00:29:06Z] Incoming HTTP/1.1 GET /api/v1/service/default/web/event?itemsPerPage=10&page=1 request from 127.0.0.1:
    2021/09/11 00:29:06 [2021-09-11T00:29:06Z] Outcoming response to 127.0.0.1 with 200 status code
    2021/09/11 00:29:06 Found 1 endpoints related to web service in default namespace
    2021/09/11 00:29:06 [2021-09-11T00:29:06Z] Outcoming response to 127.0.0.1 with 200 status code
    2021/09/11 00:29:06 Found 1 events related to web service in default namespace
    2021/09/11 00:29:06 [2021-09-11T00:29:06Z] Outcoming response to 127.0.0.1 with 200 status code
    2021/09/11 00:29:06 received 0 resources from sidecar instead of 1
    2021/09/11 00:29:06 received 0 resources from sidecar instead of 1
    2021/09/11 00:29:06 Getting pod metrics
    2021/09/11 00:29:06 received 0 resources from sidecar instead of 1
    2021/09/11 00:29:06 received 0 resources from sidecar instead of 1
    2021/09/11 00:29:06 Skipping metric because of error: Metric label not set.
    2021/09/11 00:29:06 Skipping metric because of error: Metric label not set.
    2021/09/11 00:29:06 [2021-09-11T00:29:06Z] Outcoming response to 127.0.0.1 with 200 status code
    2021/09/11 00:29:07 [2021-09-11T00:29:07Z] Incoming HTTP/1.1 GET /api/v1/login/status request from 127.0.0.1:
    2021/09/11 00:29:07 [2021-09-11T00:29:07Z] Outcoming response to 127.0.0.1 with 200 status code
    2021/09/11 00:29:07 [2021-09-11T00:29:07Z] Incoming HTTP/1.1 GET /api/v1/ingress/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1:
    2021/09/11 00:29:07 [2021-09-11T00:29:07Z] Outcoming response to 127.0.0.1 with 404 status code
    2021/09/11 00:29:08 [2021-09-11T00:29:08Z] Incoming HTTP/1.1 GET /api/v1/login/status request from 127.0.0.1:
    2021/09/11 00:29:08 [2021-09-11T00:29:08Z] Outcoming response to 127.0.0.1 with 200 status code
    2021/09/11 00:29:11 [2021-09-11T00:29:11Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
    2021/09/11 00:29:11 Getting list of namespaces
    2021/09/11 00:29:11 [2021-09-11T00:29:11Z] Outcoming response to 127.0.0.1 with 200 status code
    2021/09/11 00:29:11 [2021-09-11T00:29:11Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
    2021/09/11 00:29:11 Getting list of namespaces
    2021/09/11 00:29:11 [2021-09-11T00:29:11Z] Outcoming response to 127.0.0.1 with 200 status code

  • ==> storage-provisioner [48738a106b83] <==

  • I0911 00:02:33.761989 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
    I0911 00:02:33.769306 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
    I0911 00:02:33.769345 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
    I0911 00:02:33.782368 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
    I0911 00:02:33.782471 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_fafb1669-eb59-4c88-b94c-6691ea0b534f!
    I0911 00:02:33.783134 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"55c7fcc9-3ff4-4b3d-b55f-d0b8ac3dcb04", APIVersion:"v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_fafb1669-eb59-4c88-b94c-6691ea0b534f became leader
    I0911 00:02:33.883857 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_fafb1669-eb59-4c88-b94c-6691ea0b534f!

  • ==> storage-provisioner [d8a09240e029] <==

  • I0911 00:02:03.124407 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
    F0911 00:02:33.127196 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout

@dR3b

dR3b commented Sep 14, 2021

#12424

@spowelljr
Member

Hi @PabloG6, thanks for reporting your issue with minikube!

As @dR3b linked, this is a known issue that we're actively fixing.

I'm going to close this issue in favor of #12424 to keep discussion centralized; please follow that issue for updates. Thank you!
