Index out of range in podmanNetworkInspect using Podman 2.2.0 on Debian #10110

Closed
wrtate opened this issue Jan 8, 2021 · 3 comments
Labels: co/podman-driver, kind/bug, os/linux, triage/duplicate

Comments

wrtate commented Jan 8, 2021

On a fresh install of Debian testing, using podman 2.2.0, "minikube start --driver=podman" crashes with a "panic: runtime error: index out of range [1] with length 1".

It seems that in podmanNetworkInspect (pkg/drivers/kic/oci/network_create.go), the output of a "podman network inspect" command is parsed and compared to an empty string to determine whether the network exists. However, when I added an extra log message, the output appeared to contain a newline.

I'm not completely sure where this extra newline is coming from; it is not obviously coming from podman itself. Piping stdout to "od -c" indicates no output:
$ sudo podman network inspect minikube --format '{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}' | od -c
Error: error inspecting object: no such network minikube
0000000

I'm actually guessing that this line was never meant to be reached. There is a check on the output for "No such network" (while stderr actually contained "no such network") that could have stopped it.

Stripping the string seemed to solve it for me, but I think the "no such network" case difference needs to be considered as well.
--- a/pkg/drivers/kic/oci/network_create.go
+++ b/pkg/drivers/kic/oci/network_create.go
@@ -215,21 +215,21 @@ func podmanNetworkInspect(name string) (netInfo, error) {
 	if err != nil {
 		logDockerNetworkInspect(Podman, name)
 		if strings.Contains(rr.Output(), "No such network") {
 			return info, ErrNetworkNotFound
 		}
 		return info, err
 	}
 
 	output := rr.Stdout.String()
-	if output == "" {
+	if strings.TrimSpace(output) == "" {
 		return info, fmt.Errorf("no bridge network found for %s", name)
 	}
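For illustration, here is a minimal, self-contained sketch of the behaviour I'd expect (this is not the actual minikube code; parseNetworkInspect and its error values are made-up names): trim the trailing whitespace before the empty check, and match the "no such network" message case-insensitively.

package main

import (
	"errors"
	"fmt"
	"strings"
)

var errNetworkNotFound = errors.New("network not found")

// parseNetworkInspect is a hypothetical helper. It takes the stdout and
// stderr of `podman network inspect <name> --format ...`, where a successful
// run prints "<subnet>,<gateway>", and returns the two fields.
func parseNetworkInspect(stdout, stderr string) (subnet, gateway string, err error) {
	// Match case-insensitively: the existing check looks for
	// "No such network", but podman 2.2.0 prints "no such network".
	if strings.Contains(strings.ToLower(stderr), "no such network") {
		return "", "", errNetworkNotFound
	}
	// Trim the trailing newline before deciding whether stdout is empty.
	out := strings.TrimSpace(stdout)
	if out == "" {
		return "", "", errors.New("no bridge network found")
	}
	fields := strings.Split(out, ",")
	if len(fields) != 2 {
		// Guard against the index-out-of-range panic from the report.
		return "", "", fmt.Errorf("unexpected inspect output %q", out)
	}
	return fields[0], fields[1], nil
}

func main() {
	// The failing case from this issue: stdout is effectively empty.
	_, _, err := parseNetworkInspect("\n", "Error: error inspecting object: no such network minikube")
	fmt.Println(err) // network not found

	// The happy path.
	s, g, _ := parseNetworkInspect("10.88.0.0/16,10.88.0.1\n", "")
	fmt.Println(s, g) // 10.88.0.0/16 10.88.0.1
}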

Steps to reproduce the issue:

  1. minikube delete
  2. sudo podman network remove minikube
  3. minikube start --driver=podman

Please note that while attempting to reproduce this, I discovered I had to manually remove a "minikube" network from podman after the delete. Is this intentional, or another bug?

Full output of failed command:
I0107 22:33:44.516140 2656655 out.go:221] Setting OutFile to fd 1 ...
I0107 22:33:44.516328 2656655 out.go:268] TERM=xterm,COLORTERM=, which probably does not support color
I0107 22:33:44.516336 2656655 out.go:234] Setting ErrFile to fd 2...
I0107 22:33:44.516342 2656655 out.go:268] TERM=xterm,COLORTERM=, which probably does not support color
I0107 22:33:44.516415 2656655 root.go:280] Updating PATH: /home/scanner/.minikube/bin
I0107 22:33:44.516611 2656655 out.go:228] Setting JSON to false
I0107 22:33:44.611664 2656655 start.go:104] hostinfo: {"hostname":"enterprise","uptime":980008,"bootTime":1609100416,"procs":318,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"bullseye/sid","kernelVersion":"5.9.0-4-rt-amd64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"1336c7b6-c371-40ad-a06b-32136a793b1c"}
I0107 22:33:44.613490 2656655 start.go:114] virtualization: kvm host
I0107 22:33:44.615632 2656655 out.go:119] * minikube v1.16.0 on Debian bullseye/sid
* minikube v1.16.0 on Debian bullseye/sid
I0107 22:33:44.616018 2656655 driver.go:303] Setting default libvirt URI to qemu:///system
I0107 22:33:44.616173 2656655 notify.go:126] Checking for updates...
I0107 22:33:44.734610 2656655 podman.go:118] podman version: 2.2.0
I0107 22:33:44.735443 2656655 out.go:119] * Using the podman (experimental) driver based on user configuration
* Using the podman (experimental) driver based on user configuration
I0107 22:33:44.735456 2656655 start.go:277] selected driver: podman
I0107 22:33:44.735461 2656655 start.go:686] validating driver "podman" against
I0107 22:33:44.735473 2656655 start.go:697] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Fix: Doc:}
I0107 22:33:44.735518 2656655 cli_runner.go:111] Run: sudo -n podman system info --format json
I0107 22:33:44.940726 2656655 info.go:273] podman info: {Host:{BuildahVersion:1.18.0 CgroupVersion:v1 Conmon:{Package:conmon: /usr/libexec/podman/conmon Path:/usr/libexec/podman/conmon Version:conmon version 2.0.20, commit: unknown} Distribution:{Distribution:debian Version:unknown} MemFree:2521915392 MemTotal:16745357312 OCIRuntime:{Name:runc Package:runc: /usr/bin/runc Path:/usr/bin/runc Version:runc version 1.0.0rc92+dfsg1
commit: 1.0.0
rc92+dfsg1-5
spec: 1.0.2-dev} SwapFree:32421441536 SwapTotal:33285992448 Arch:amd64 Cpus:8 Eventlogger:journald Hostname:enterprise Kernel:5.9.0-4-rt-amd64 Os:linux Rootless:false Uptime:272h 13m 28.8s (Approximately 11.33 days)} Registries:{Search:[]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:btrfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:1} RunRoot:/var/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
I0107 22:33:44.940928 2656655 start_flags.go:235] no existing cluster config was found, will generate one from the flags
I0107 22:33:44.943090 2656655 start_flags.go:253] Using suggested 3900MB memory alloc based on sys=15969MB, container=15969MB
I0107 22:33:44.943517 2656655 start_flags.go:648] Wait components to verify : map[apiserver:true system_pods:true]
I0107 22:33:44.943609 2656655 cni.go:74] Creating CNI manager for ""
I0107 22:33:44.943660 2656655 cni.go:139] CNI unnecessary in this configuration, recommending no CNI
I0107 22:33:44.943696 2656655 start_flags.go:367] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] MultiNodeRequested:false}
I0107 22:33:44.945733 2656655 out.go:119] * Starting control plane node minikube in cluster minikube
* Starting control plane node minikube in cluster minikube
I0107 22:33:44.945878 2656655 cache.go:112] Driver isn't docker, skipping base image download
I0107 22:33:44.945933 2656655 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0107 22:33:44.946050 2656655 preload.go:105] Found local preload: /home/scanner/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-docker-overlay2-amd64.tar.lz4
I0107 22:33:44.946085 2656655 cache.go:54] Caching tarball of preloaded images
I0107 22:33:44.946176 2656655 preload.go:131] Found /home/scanner/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0107 22:33:44.946228 2656655 cache.go:57] Finished verifying existence of preloaded tar for v1.20.0 on docker
I0107 22:33:44.947115 2656655 profile.go:147] Saving config to /home/scanner/.minikube/profiles/minikube/config.json ...
I0107 22:33:44.947195 2656655 lock.go:36] WriteFile acquiring /home/scanner/.minikube/profiles/minikube/config.json: {Name:mk5e6ed7e40cf7ec67750318f2c2f45a2875e730 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0107 22:33:44.947797 2656655 cache.go:185] Successfully downloaded all kic artifacts
I0107 22:33:44.947894 2656655 start.go:314] acquiring machines lock for minikube: {Name:mkee29aa7e8feb7875fa9330f0ad58ea22217a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0107 22:33:44.948102 2656655 start.go:318] acquired machines lock for "minikube" in 153.446µs
I0107 22:33:44.948165 2656655 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}
I0107 22:33:44.948390 2656655 start.go:127] createHost starting for "" (driver="podman")
I0107 22:33:44.950094 2656655 out.go:119] * Creating podman container (CPUs=2, Memory=3900MB) ...
* Creating podman container (CPUs=2, Memory=3900MB) ...
I0107 22:33:44.950691 2656655 start.go:164] libmachine.API.Create for "minikube" (driver="podman")
I0107 22:33:44.950771 2656655 client.go:165] LocalClient.Create starting
I0107 22:33:44.950921 2656655 main.go:119] libmachine: Reading certificate data from /home/scanner/.minikube/certs/ca.pem
I0107 22:33:44.951043 2656655 main.go:119] libmachine: Decoding PEM data...
I0107 22:33:44.951121 2656655 main.go:119] libmachine: Parsing certificate...
I0107 22:33:44.951574 2656655 main.go:119] libmachine: Reading certificate data from /home/scanner/.minikube/certs/cert.pem
I0107 22:33:44.951680 2656655 main.go:119] libmachine: Decoding PEM data...
I0107 22:33:44.951747 2656655 main.go:119] libmachine: Parsing certificate...
I0107 22:33:44.952958 2656655 cli_runner.go:111] Run: sudo -n podman network inspect minikube --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}"
W0107 22:33:45.054693 2656655 cli_runner.go:149] sudo -n podman network inspect minikube --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}" returned with exit code 125
I0107 22:33:45.055176 2656655 network_create.go:235] running [podman network inspect minikube] to gather additional debugging logs...
I0107 22:33:45.055235 2656655 cli_runner.go:111] Run: sudo -n podman network inspect minikube
W0107 22:33:45.122732 2656655 cli_runner.go:149] sudo -n podman network inspect minikube returned with exit code 125
I0107 22:33:45.122765 2656655 network_create.go:238] error running [sudo -n podman network inspect minikube]: sudo -n podman network inspect minikube: exit status 125
stdout:
[]

stderr:
Error: error inspecting object: no such network minikube
I0107 22:33:45.122791 2656655 network_create.go:240] output of [sudo -n podman network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error: error inspecting object: no such network minikube

** /stderr **
I0107 22:33:45.122846 2656655 cli_runner.go:111] Run: sudo -n podman network inspect podman --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}"
I0107 22:33:45.211063 2656655 client.go:168] LocalClient.Create took 260.257606ms
panic: runtime error: index out of range [1] with length 1

goroutine 201 [running]:
k8s.io/minikube/pkg/drivers/kic/oci.podmanNetworkInspect(0x1cefa53, 0x6, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x2046220, ...)
/usr/local/google/home/tstromberg/src/minikube/pkg/drivers/kic/oci/network_create.go:222 +0x6c1
k8s.io/minikube/pkg/drivers/kic/oci.containerNetworkInspect(0xc00119a2a0, 0x6, 0x1cefa53, 0x6, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/usr/local/google/home/tstromberg/src/minikube/pkg/drivers/kic/oci/network_create.go:155 +0x187
k8s.io/minikube/pkg/drivers/kic/oci.CreateNetwork(0xc00119a2a0, 0x6, 0xc00119a270, 0x8, 0x2, 0xc00119b680, 0x6, 0x0, 0x47b758)
/usr/local/google/home/tstromberg/src/minikube/pkg/drivers/kic/oci/network_create.go:65 +0x1d0
k8s.io/minikube/pkg/drivers/kic.(*Driver).Create(0xc00117d0e0, 0xc001178520, 0x1d)
/usr/local/google/home/tstromberg/src/minikube/pkg/drivers/kic/kic.go:89 +0x4c5
k8s.io/minikube/pkg/minikube/machine.(*LocalClient).Create(0xc0010de680, 0xc0010e0e40, 0x0, 0x0)
/usr/local/google/home/tstromberg/src/minikube/pkg/minikube/machine/client.go:224 +0x48f
k8s.io/minikube/pkg/minikube/machine.timedCreateHost.func2(0x20b8fe0, 0xc0010de680, 0xc0010e0e40, 0xc0010d5e20, 0xc000a11340)
/usr/local/google/home/tstromberg/src/minikube/pkg/minikube/machine/start.go:194 +0x3b
created by k8s.io/minikube/pkg/minikube/machine.timedCreateHost
/usr/local/google/home/tstromberg/src/minikube/pkg/minikube/machine/start.go:193 +0x107
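For what it's worth, the panic message is exactly what you get in Go when a whitespace-only stdout is split on "," and the second field is indexed. A standalone illustration (not the minikube code itself):

package main

import (
	"fmt"
	"strings"
)

func main() {
	output := "\n" // stdout containing nothing but a newline
	fields := strings.Split(output, ",")
	fmt.Println(len(fields)) // 1
	fmt.Println(fields[1])   // panic: runtime error: index out of range [1] with length 1
}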

Thank you for your hard work and contributions to open source!

wrtate commented Jan 8, 2021

This was observed using your deb download:
minikube version: v1.16.0
commit: 9f1e482-dirty

It was also observed when building this repo from source at 7668755.

afbjorklund (Collaborator) commented Jan 8, 2021

Duplicate of #10088

> I discovered I had to manually remove a "minikube" network from podman after the delete. Is this intentional or another bug?

It's a bug: #9705

We don't have any testing for the Debian version of podman (see #10089), so the code path taken when the podman CNI is missing had not been tested before.

The problem with the dirty git version should be fixed in the "real" releases (1.17), but it reappeared in the 1.16.0-1 fix for #9995. Locally, it's probably go.mod?

afbjorklund added the co/podman-driver, kind/bug, os/linux, and triage/duplicate labels on Jan 8, 2021
wrtate commented Jan 8, 2021

I guess I was a bit too slow in getting this written up and should have checked for existing issues again.

Thank you!
