
FCOS initialized by podman loses its network connection, but a manually initialized FCOS does not, with the same QEMU #18177

Closed
yxing-xyz opened this issue Apr 13, 2023 · 9 comments
Labels
kind/bug (Categorizes issue or PR as related to a bug), locked - please file new issue/PR (Assist humans wanting to comment on an old issue or PR with locked comments), machine, remote (Problem is in podman-remote)

Comments

yxing-xyz commented Apr 13, 2023

Issue Description

coreos/fedora-coreos-tracker#1463
The FCOS VM initialized by podman loses its network connection, but a manually initialized FCOS VM does not, even with the same QEMU.

(screenshot attached)

I started three QEMU virtual machines for comparison:

  1. Arch Linux ARM
  2. FCOS (started manually with QEMU)
  3. FCOS started by podman machine

Steps to reproduce the issue

  1. Wait; the network drops within about 14 hours.
  2. Observe the QEMU graphical console of the podman machine started with debug logging (see the sketch below).
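
For reference, a minimal sketch of how this can be observed from the host, using standard podman machine commands; the machine name podman-machine-default is the default and is assumed here:

# Start the podman machine with debug logging so the generated QEMU
# command line (and the console it exposes) can be inspected.
podman --log-level=debug machine start

# Periodically check from inside the guest whether its uplink still works;
# once the virtio-net TX queue stalls, this stops responding.
podman machine ssh podman-machine-default "ping -c 1 1.1.1.1"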

Describe the results you received

[56254.722194] virtio_net virtio0 enp0s1: TX timeout on queue: 0, sq: output.0, vq: 0x1, name: output.0, 5220000 usecs ago

Describe the results you expected

Podman can be used normally; the machine's network stays up.

podman info output

host:
  arch: arm64
  buildahVersion: 1.29.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - pids
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc37.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 99.98
    systemPercent: 0.01
    userPercent: 0.01
  cpus: 8
  distribution:
    distribution: fedora
    variant: coreos
    version: "37"
  eventLogger: journald
  hostname: localhost.localdomain
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.1.18-200.fc37.aarch64
  linkmode: dynamic
  logDriver: journald
  memFree: 16253087744
  memTotal: 16727650304
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.8.1-1.fc37.aarch64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.1
      commit: f8a096be060b22ccd3d5f3ebe44108517fbf6c30
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-8.fc37.aarch64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 15h 49m 11.00s (Approximately 0.62 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 136833904640
  graphRootUsed: 2558099456
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.4.2
  Built: 1677669759
  BuiltTime: Wed Mar  1 19:22:39 2023
  GitCommit: ""
  GoVersion: go1.19.6
  Os: linux
  OsArch: linux/arm64
  Version: 4.4.2

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

QEMU emulator version 7.2.1
Copyright (c) 2003-2022 Fabrice Bellard and the QEMU Project developers

Additional information

For comparison, I used QEMU to start three virtual machines for observation:

  1. the QEMU virtual machine that comes with podman
  2. an Arch Linux ARM virtual machine
  3. an FCOS virtual machine started manually with QEMU, the same version as the one that comes with podman
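
For reference, a rough sketch of the kind of manual QEMU invocation used for the standalone FCOS VM; the file names and resource sizes below are placeholders, not taken from this report:

# Placeholder file names: edk2-aarch64-code.fd (UEFI firmware), fedora-coreos.qcow2
# (the FCOS disk image), config.ign (Ignition config). Use -accel kvm on a Linux
# host instead of hvf. Note the NIC here is backed by QEMU user-mode networking,
# whereas the podman machine attaches the same virtio-net device to gvproxy.
qemu-system-aarch64 \
  -machine virt -accel hvf -cpu host -smp 4 -m 4096 \
  -drive if=pflash,format=raw,readonly=on,file=edk2-aarch64-code.fd \
  -drive if=virtio,file=fedora-coreos.qcow2 \
  -fw_cfg name=opt/com.coreos/config,file=config.ign \
  -netdev user,id=net0 \
  -device virtio-net-pci,netdev=net0 \
  -nographic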

podman fcos:
Linux localhost.localdomain 6.1.18-200.fc37.aarch64 #1 SMP PREEMPT_DYNAMIC Sat Mar 11 16:03:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux

qemu fcos:
Linux localhost.localdomain 6.1.18-200.fc37.aarch64 #1 SMP PREEMPT_DYNAMIC Sat Mar 11 16:03:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux

archlinuxarm:
Linux x 6.2.10-1-aarch64-ARCH #1 SMP PREEMPT_DYNAMIC Fri Apr 7 10:32:52 MDT 2023 aarch64 GNU/Linux

@yxing-xyz yxing-xyz added the kind/bug Categorizes issue or PR as related to a bug. label Apr 13, 2023
@github-actions github-actions bot added the remote Problem is in podman-remote label Apr 13, 2023

yxing-xyz commented Apr 13, 2023

[56254.698060] ------------[ cut here ]------------
[56254.707183] NETDEV WATCHDOG: enp0s1 (virtio_net): transmit queue 0 timed out
[56254.713883] WARNING: CPU: 1 PID: 0 at net/sched/sch_generic.c:525 dev_watchdog+0x26c/0x27c
[56254.716598] Modules linked in: 9p fscache netfs overlay rfkill binfmt_misc 9pnet_virtio 9pnet xfs crct10dif_ce polyval_ce polyval_generic ghash_ce sha3_ce sha512_ce sha512_arm64 virtio_net net_failover failover virtio_console virtio_blk virtio_mmio scsi_dh_rdac scsi_dh_emc scsi_dh_alua ip6_tables ip_tables dm_multipath fuse qemu_fw_cfg
[56254.719669] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.1.18-200.fc37.aarch64 #1
[56254.719684] Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
[56254.719686] pstate: 40400005 (nZcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[56254.719699] pc : dev_watchdog+0x26c/0x27c
[56254.719705] lr : dev_watchdog+0x26c/0x27c
[56254.719709] sp : ffff80000800bdb0
[56254.719711] x29: ffff80000800bdb0 x28: ffff800008efc804 x27: ffff80000800bed0
[56254.719717] x26: ffff80000a149008 x25: 0000000000000000 x24: ffff80000a8aec58
[56254.719722] x23: ffff80000a8a7000 x22: 0000000000000000 x21: ffff0000c064541c
[56254.719726] x20: ffff0000c0645000 x19: ffff0000c06454c8 x18: ffffffffffffffff
[56254.719731] x17: ffff8003f3ba0000 x16: ffff80000800c000 x15: ffff80000800b9a8
[56254.719735] x14: ffff80000ad7e104 x13: 74756f2064656d69 x12: 7420302065756575
[56254.719739] x11: 00000000ffffdfff x10: ffff80000a9a1220 x9 : ffff8000081ee7d0
[56254.719744] x8 : 000000000002ffe8 x7 : c0000000ffffdfff x6 : 0000000000000000
[56254.719748] x5 : ffff0003fdcec450 x4 : 0000000000000040 x3 : 0000000000000008
[56254.719753] x2 : 0000000000000104 x1 : ffff0000c038c400 x0 : 0000000000000000
[56254.719758] Call trace:
[56254.719760]  dev_watchdog+0x26c/0x27c
[56254.719765]  call_timer_fn+0x3c/0x1c4
[56254.720778]  __run_timers+0x22c/0x2dc
[56254.720782]  run_timer_softirq+0x38/0x60
[56254.720786]  __do_softirq+0x168/0x418
[56254.720789]  ____do_softirq+0x18/0x24
[56254.720806]  call_on_irq_stack+0x2c/0x38
[56254.720821]  do_softirq_own_stack+0x24/0x3c
[56254.720825]  __irq_exit_rcu+0x120/0x170
[56254.721070]  irq_exit_rcu+0x18/0x24
[56254.721073]  el1_interrupt+0x38/0x70
[56254.721395]  el1h_64_irq_handler+0x18/0x2c
[56254.721399]  el1h_64_irq+0x68/0x6c
[56254.721402]  default_idle_call+0x40/0x184
[56254.721406]  cpuidle_idle_call+0x160/0x1b0
[56254.721430]  do_idle+0xac/0x100
[56254.721432]  cpu_startup_entry+0x30/0x34
[56254.721436]  secondary_start_kernel+0xd8/0x100
[56254.721442]  __secondary_switched+0xb0/0xb4
[56254.721694] ---[ end trace 0000000000000000 ]---
[56254.722194] virtio_net virtio0 enp0s1: TX timeout on queue: 0, sq: output.0, vq: 0x1, name: output.0, 5220000 usecs ago
[56259.735730] virtio_net virtio0 enp0s1: TX timeout on queue: 0, sq: output.0, vq: 0x1, name: output.0, 10240000 usecs ago
[56264.694435] virtio_net virtio0 enp0s1: TX timeout on queue: 0, sq: output.0, vq: 0x1, name: output.0, 15200000 usecs ago
[56269.734021] virtio_net virtio0 enp0s1: TX timeout on queue: 0, sq: output.0, vq: 0x1, name: output.0, 20240000 usecs ago
[56274.695155] virtio_net virtio0 enp0s1: TX timeout on queue: 0, sq: output.0, vq: 0x1, name: output.0, 25200000 usecs ago

@yxing-xyz (Author)

I think this is likely a problem with gvproxy's network protocol stack.

@Luap99 Luap99 added the machine label Apr 13, 2023
@sbrivio-rh (Collaborator)

> [56254.722194] virtio_net virtio0 enp0s1: TX timeout on queue: 0, sq: output.0, vq: 0x1, name: output.0, 5220000 usecs ago

> podman fcos: Linux localhost.localdomain 6.1.18-200.fc37.aarch64 #1 SMP PREEMPT_DYNAMIC Sat Mar 11 16:03:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux

> qemu fcos: Linux localhost.localdomain 6.1.18-200.fc37.aarch64 #1 SMP PREEMPT_DYNAMIC Sat Mar 11 16:03:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux

> archlinuxarm: Linux x 6.2.10-1-aarch64-ARCH #1 SMP PREEMPT_DYNAMIC Fri Apr 7 10:32:52 MDT 2023 aarch64 GNU/Linux

Actually, you might need a kernel that includes:

commit d71ebe8114b4bf622804b810f5e274069060a174
Author: Jason Wang <jasowang@redhat.com>
Date:   Tue Jan 17 11:47:07 2023 +0800

    virtio-net: correctly enable callback during start_xmit

that is, >= 6.2-rc3, in the guest. I consistently hit an issue very similar to the one you described (while testing passt), without that fix.
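
A quick way to check which kernel the podman machine guest is running (the fix above landed in 6.2-rc3, so anything reported as 6.1.x would still be affected); the machine name is the default and an assumption:

# Report the kernel version running inside the podman machine guest.
podman machine ssh podman-machine-default "uname -r"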

@yxing-xyz (Author)

OK, then we can only wait for FCOS upstream to update the kernel. Thank you for the information. @sbrivio-rh

@yxing-xyz (Author)

Linux localhost.localdomain 6.2.9-300.fc38.aarch64 #1 SMP PREEMPT_DYNAMIC Thu Mar 30 22:53:50 UTC 2023 aarch64 GNU/Linux

The bug no longer reproduces with the latest FCOS kernel.
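
For anyone still on an older machine image, recreating the machine is one way to pick up a current FCOS image with the fixed kernel. A sketch, assuming the default machine name; note that this destroys the existing machine and everything in it:

podman machine stop
podman machine rm -f                               # deletes the VM, its images and containers
podman machine init
podman machine start
podman machine ssh podman-machine-default "uname -r"   # should now report a 6.2+ kernel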

rafasc commented Jun 3, 2023

I'm not sure if this is the same bug, but my interface still dies after some time.

Podman info:

host:
  arch: arm64
  buildahVersion: 1.30.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc38.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 98.98
    systemPercent: 0.59
    userPercent: 0.43
  cpus: 6
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: coreos
    version: "38"
  eventLogger: journald
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
    uidmap:
    - container_id: 0
      host_id: 501
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
  kernel: 6.2.15-300.fc38.aarch64
  linkmode: dynamic
  logDriver: journald
  memFree: 5565382656
  memTotal: 6041935872
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.8.5-1.fc38.aarch64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.5
      commit: b6f80f766c9a89eb7b1440c0a70ab287434b17ed
      rundir: /run/user/501/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/501/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-12.fc38.aarch64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 0h 12m 45.00s
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphRootAllocated: 106769133568
  graphRootUsed: 5410377728
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 3
  runRoot: /run/user/501/containers
  transientStore: false
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 4.5.0
  Built: 1681486872
  BuiltTime: Fri Apr 14 16:41:12 2023
  GitCommit: ""
  GoVersion: go1.20.2
  Os: linux
  OsArch: linux/arm64
  Version: 4.5.0

Logs

Jun 03 22:35:11 localhost.localdomain kernel: ------------[ cut here ]------------
Jun 03 22:35:11 localhost.localdomain kernel: NETDEV WATCHDOG: enp0s1 (virtio_net): transmit queue 0 timed out
Jun 03 22:35:11 localhost.localdomain kernel: WARNING: CPU: 1 PID: 0 at net/sched/sch_generic.c:525 dev_watchdog+0x270/0x280
Jun 03 22:35:11 localhost.localdomain kernel: Modules linked in: xt_addrtype xt_nat xt_mark xt_conntrack xt_comment nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat nf_tab>
Jun 03 22:35:11 localhost.localdomain kernel: CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.2.15-300.fc38.aarch64 #1
Jun 03 22:35:11 localhost.localdomain kernel: Hardware name: QEMU QEMU Virtual Machine, BIOS edk2-stable202302-for-qemu 03/01/2023
Jun 03 22:35:11 localhost.localdomain kernel: pstate: 61400005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
Jun 03 22:35:11 localhost.localdomain kernel: pc : dev_watchdog+0x270/0x280
Jun 03 22:35:11 localhost.localdomain kernel: lr : dev_watchdog+0x270/0x280
Jun 03 22:35:11 localhost.localdomain kernel: sp : ffff80000800bdb0
Jun 03 22:35:11 localhost.localdomain kernel: x29: ffff80000800bdb0 x28: ffff800008f71f40 x27: ffff80000800bec8
Jun 03 22:35:11 localhost.localdomain kernel: x26: ffff80000a283008 x25: 0000000000000000 x24: ffff80000a9eec58
Jun 03 22:35:11 localhost.localdomain kernel: x23: ffff80000a9e7000 x22: 0000000000000000 x21: ffff0000c237541c
Jun 03 22:35:11 localhost.localdomain kernel: x20: ffff0000c2375000 x19: ffff0000c23754c8 x18: ffffffffffffffff
Jun 03 22:35:11 localhost.localdomain kernel: x17: ffff80016bef6000 x16: ffff800008008000 x15: ffff80000800b980
Jun 03 22:35:11 localhost.localdomain kernel: x14: ffff80000aeba374 x13: 74756f2064656d69 x12: 7420302065756575
Jun 03 22:35:11 localhost.localdomain kernel: x11: 00000000ffffdfff x10: ffff80000aae02a0 x9 : ffff8000081c7b40
Jun 03 22:35:11 localhost.localdomain kernel: x8 : 000000000002ffe8 x7 : c0000000ffffdfff x6 : 00000000000affa8
Jun 03 22:35:11 localhost.localdomain kernel: x5 : 0000000000001fff x4 : 0000000000000104 x3 : ffff80000a283008
Jun 03 22:35:11 localhost.localdomain kernel: x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff0000c0342200
Jun 03 22:35:11 localhost.localdomain kernel: Call trace:
Jun 03 22:35:11 localhost.localdomain kernel:  dev_watchdog+0x270/0x280
Jun 03 22:35:11 localhost.localdomain kernel:  call_timer_fn+0x3c/0x1c4
Jun 03 22:35:11 localhost.localdomain kernel:  __run_timers+0x258/0x320
Jun 03 22:35:11 localhost.localdomain kernel:  run_timer_softirq+0x38/0x60
Jun 03 22:35:11 localhost.localdomain kernel:  __do_softirq+0x168/0x418
Jun 03 22:35:11 localhost.localdomain kernel:  ____do_softirq+0x18/0x24
Jun 03 22:35:11 localhost.localdomain kernel:  call_on_irq_stack+0x24/0x30
Jun 03 22:35:11 localhost.localdomain kernel:  do_softirq_own_stack+0x24/0x3c
Jun 03 22:35:11 localhost.localdomain kernel:  __irq_exit_rcu+0x118/0x170
Jun 03 22:35:11 localhost.localdomain kernel:  irq_exit_rcu+0x18/0x24
Jun 03 22:35:11 localhost.localdomain kernel:  el1_interrupt+0x38/0x8c
Jun 03 22:35:11 localhost.localdomain kernel:  el1h_64_irq_handler+0x18/0x2c
Jun 03 22:35:11 localhost.localdomain kernel:  el1h_64_irq+0x68/0x6c
Jun 03 22:35:11 localhost.localdomain kernel:  default_idle_call+0x3c/0x180
Jun 03 22:35:11 localhost.localdomain kernel:  cpuidle_idle_call+0x164/0x1b0
Jun 03 22:35:11 localhost.localdomain kernel:  do_idle+0xa4/0xf4
Jun 03 22:35:11 localhost.localdomain kernel:  cpu_startup_entry+0x2c/0x3c
Jun 03 22:35:11 localhost.localdomain kernel:  secondary_start_kernel+0xd8/0x100
Jun 03 22:35:11 localhost.localdomain kernel:  __secondary_switched+0xb0/0xb4
Jun 03 22:35:11 localhost.localdomain kernel: ---[ end trace 0000000000000000 ]---
Jun 03 22:35:11 localhost.localdomain kernel: virtio_net virtio0 enp0s1: TX timeout on queue: 0, sq: output.0, vq: 0x1, name: output.0, 9520000 usecs ago
Jun 03 22:35:16 localhost.localdomain kernel: virtio_net virtio0 enp0s1: TX timeout on queue: 0, sq: output.0, vq: 0x1, name: output.0, 14480000 usecs ago
Jun 03 22:35:21 localhost.localdomain kernel: virtio_net virtio0 enp0s1: TX timeout on queue: 0, sq: output.0, vq: 0x1, name: output.0, 19520000 usecs ago
Jun 03 22:35:26 localhost.localdomain kernel: virtio_net virtio0 enp0s1: TX timeout on queue: 0, sq: output.0, vq: 0x1, name: output.0, 24480000 usecs ago
Jun 03 22:35:31 localhost.localdomain kernel: virtio_net virtio0 enp0s1: TX timeout on queue: 0, sq: output.0, vq: 0x1, name: output.0, 29520000 usecs ago
Jun 03 22:35:36 localhost.localdomain kernel: virtio_net virtio0 enp0s1: TX timeout on queue: 0, sq: output.0, vq: 0x1, name: output.0, 34480000 usecs ago
Jun 03 22:35:37 localhost.localdomain NetworkManager[805]: <info>  [1685828137.0364] device (enp0s1): state change: ip-config -> failed (reason 'ip-config-unavailable', sys-iface-state: 'managed')

Unsure if this is valuable information, but today I started the machine with --log-level=debug and logged in via the QEMU console.
When this happened, I issued a reboot; systemd booted correctly and podman was usable again. That could rule out a problem on the gvproxy side.

This seems like the same issue to me. If there's something against reopening this, I can open a new issue.

@sbrivio-rh (Collaborator)

> I'm not sure if this is the same bug, but my interface still dies after some time.

Weird. Would you have a way to rmmod virtio_net; modprobe virtio_net from your VM? That should tell us if resetting queues and device states helps virtio-net, or if it's something outside the guest.
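
For reference, a minimal sketch of that reset, run from a console login inside the VM (SSH from the host may not work while the interface is stuck):

# Unload and reload the virtio-net driver to reset its queues and device state.
sudo rmmod virtio_net
sudo modprobe virtio_net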

> This seems like the same issue to me. If there's something against reopening this, I can open a new issue.

It might be a similar issue, but I'm fairly sure the one I mentioned is fixed in 6.2.15-300.fc38.aarch64. On the other hand, these messages just indicate that the transmit queue is stuck, which can happen for a number of reasons.

rafasc commented Jun 5, 2023

> Weird. Would you have a way to rmmod virtio_net; modprobe virtio_net from your VM? That should tell us if resetting queues and device states helps virtio-net, or if it's something outside the guest.

This seems to bring podman back up. (It stopped working after about 5 hours of uptime.)
I'll try to capture some logs when this happens.
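
One possible way to capture the kernel messages, using standard journalctl options; the explicit machine name is the default and an assumption:

# Follow the guest kernel log from the host and keep a copy locally.
podman machine ssh podman-machine-default "sudo journalctl -k -f" | tee machine-kernel.log

# Or, after rebooting the machine, pull the kernel log of the previous boot.
podman machine ssh podman-machine-default "sudo journalctl -k -b -1" > machine-kernel-prev-boot.log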

rafasc commented Jun 6, 2023

Nothing special in the logs, except that aardvark-dns starts having issues shortly before the crash.

Jun 06 11:54:37 localhost.localdomain aardvark-dns[2937]: 25855 dns request got empty response
Jun 06 11:54:37 localhost.localdomain aardvark-dns[2937]: 1196 dns request got empty response
Jun 06 11:54:42 localhost.localdomain aardvark-dns[2937]: 63019 dns request got empty response
Jun 06 11:54:42 localhost.localdomain aardvark-dns[2937]: 38730 dns request got empty response
Jun 06 11:54:45 localhost.localdomain aardvark-dns[2937]: 55604 dns request got empty response
Jun 06 11:54:45 localhost.localdomain aardvark-dns[2937]: 51965 dns request got empty response
Jun 06 11:54:47 localhost.localdomain aardvark-dns[2937]: 38730 dns request got empty response
Jun 06 11:54:47 localhost.localdomain aardvark-dns[2937]: 63019 dns request got empty response
Jun 06 11:54:47 localhost.localdomain aardvark-dns[2937]: 41935 dns request got empty response
Jun 06 11:54:47 localhost.localdomain aardvark-dns[2937]: 651 dns request got empty response
Jun 06 11:54:47 localhost.localdomain aardvark-dns[2937]: 4843 dns request got empty response
Jun 06 11:54:47 localhost.localdomain aardvark-dns[2937]: 32209 dns request got empty response
Jun 06 11:54:49 localhost.localdomain aardvark-dns[2937]: 53164 dns request got empty response
Jun 06 11:54:49 localhost.localdomain aardvark-dns[2937]: 60262 dns request got empty response
Jun 06 11:54:50 localhost.localdomain aardvark-dns[2937]: 46054 dns request got empty response
Jun 06 11:54:50 localhost.localdomain aardvark-dns[2937]: 19558 dns request got empty response

But I am assuming this is a symptom rather than a cause.
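
A quick check to separate an aardvark-dns problem from a dead guest uplink could look like the following; the network and image names are arbitrary, and aardvark-dns is only used on user-defined networks:

# Resolution through aardvark-dns (the network's internal resolver).
podman network create testnet
podman run --rm --network testnet docker.io/library/alpine nslookup github.com

# The same lookup sent directly to an external resolver, bypassing aardvark-dns.
podman run --rm --network testnet docker.io/library/alpine nslookup github.com 1.1.1.1

If both lookups fail, the guest's own network is already gone and the empty aardvark-dns responses are indeed just a symptom.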

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 5, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 5, 2023