
[Bug]: Podman machine fails to start with exit status 255 on Mac #17403

Closed
smakinen opened this issue Feb 7, 2023 · 47 comments · Fixed by #18915 or #19210
Labels: kind/bug, podman-desktop, locked - please file new issue/PR


smakinen commented Feb 7, 2023

Issue Description

There seems to be an issue when trying to start Podman with podman machine start on macOS with QEMU. I created a Podman machine about two months ago, but now the machine fails to start. Startup became gradually less reliable over time before it began failing completely.

Here is what happens when I try to start the machine.

podman machine start 
Starting machine "podman-machine-default"
Waiting for VM ...
Mounting volume... /Users/me:/Users/me
Error: exit status 255

From the podman machine start --log-level debug output, I can see that the last statement executed is the SSH command that creates the mount point directories (around line 669 in qemu/machine.go), and it is this command that fails. Previous issues have treated this as a sign of, e.g., an invalid SSH configuration (e.g. #14237), but maybe there is more to it. When running with --log-level debug and the QEMU window open, the exit status 255 error appears before all the Fedora services have started.

Could this be a race condition, so that the SSH-related services have not yet started when the SSH mount commands are executed? I found one closed and apparently fixed issue where a race condition was suggested (#11532).

I tried connecting to the QEMU QMP monitor socket with nc -U qmp_podman-machine-default.sock and running the following queries just after QEMU has started and before all the services are running.

{"QMP": {"version": {"qemu": {"micro": 0, "minor": 2, "major": 7}, "package": ""}, "capabilities": ["oob"]}}
{ "execute": "qmp_capabilities"} 
{"return": {}}
{ "execute": "query-status" }
{"return": {"status": "running", "singlestep": false, "running": true}}

So the VM is in a running state, and the gvproxy port (50810 here) is also listening early on.

netstat -ap tcp | grep -i listen
tcp4       0      0  localhost.50810        *.*                    LISTEN

Is it possible that the condition in qemu/machine.go (line 645), for state != machine.Running || !listening {, cannot hold back the execution of the SSH statement until the machine is fully initialized?

Steps to reproduce the issue

Steps to reproduce the issue (happens on an existing Podman machine)

  1. podman machine start

Describe the results you received

podman machine start 
Starting machine "podman-machine-default"
Waiting for VM ...
Mounting volume... /Users/me:/Users/me
Error: exit status 255

Describe the results you expected

Podman machine should be able to start. Thinking more broadly, maybe there should be additional guarantees that SSH on the QEMU machine is up and running before issuing commands to it. Perhaps the SSH connection could be polled a couple of times with a sensible timeout (a rough sketch of this idea follows the event sample below), or other events from QEMU could be used for the purpose? For instance, the QMP monitor emits a NIC_RX_FILTER_CHANGED event towards the end of initialization (not sure of its purpose, though).

{"timestamp": {"seconds": 1675763180, "microseconds": 921089}, "event": "NIC_RX_FILTER_CHANGED", "data": {"path": "/machine/peripheral-anon/device[0]/virtio-backend"}}

podman info output

The machine (or gvproxy) is not up and running, so I cannot get the info.

podman info   
Error: failed to connect: dial tcp [::1]:50810: connect: connection refused

Some supplemental environment details in place of podman info:

Podman (brew): stable 4.4.0 (bottled), HEAD
OS: macOS Ventura 13.2
Architecture (QEMU): aarch64
Chip: Apple M1 Pro

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

Here are the results of podman machine inspect.

[
  {
    "ConfigPath": {
      "Path": "/Users/me/.config/containers/podman/machine/qemu/podman-machine-default.json"
    },
    "ConnectionInfo": {
      "PodmanSocket": {
        "Path": "/Users/me/.local/share/containers/podman/machine/podman-machine-default/podman.sock"
      }
    },
    "Created": "2022-12-12T12:34:18.127475+02:00",
    "Image": {
      "IgnitionFilePath": {
        "Path": "/Users/me/.config/containers/podman/machine/qemu/podman-machine-default.ign"
      },
      "ImageStream": "testing",
      "ImagePath": {
        "Path": "/Users/me/.local/share/containers/podman/machine/qemu/podman-machine-default_fedora-coreos-37.20221127.2.0-qemu.aarch64.qcow2"
      }
    },
    "LastUp": "2023-01-31T16:41:24.124649+02:00",
    "Name": "podman-machine-default",
    "Resources": {
      "CPUs": 1,
      "DiskSize": 100,
      "Memory": 3072
    },
    "SSHConfig": {
      "IdentityPath": "/Users/me/.ssh/podman-machine-default",
      "Port": 50810,
      "RemoteUsername": "core"
    },
    "State": ""
  }
]

Additional information

It appears that the problem affects Podman machines that have been in use for some time (i.e. a few months). Why fresh out-of-the-box Podman machines work, and what causes the slow decay, remains a mystery.


smakinen commented Feb 8, 2023

I have tried several things so far, without success.

Is recreating the machine the only way forward at this point? I guess there is no way to recover containers from the VM if it cannot be connected to?


jamesmikesell commented Feb 14, 2023

I see this error quite frequently too.

Not sure if this is related, but I feel like I started seeing this error more frequently after starting to use Podman for dev containers, with named volumes that likely have more IO operations than other containers, and after additionally needing to add swap space to the podman machine.

I also always seem to see this error after podman has (inexplicably) crashed. I have no idea why podman crashed; the only symptom is that my containers stop responding. Killing the podman machine and re-running podman machine start occasionally results in success, but other times the only solution is to completely destroy the existing machine by running podman machine rm and then re-initializing a new machine.

jamesmikesell commented

@smakinen I think you are correct about this being a race condition and some bug internal to Podman.

I'm able to get podman to start a machine that was previously failing using the following shell script. It manually pauses the podman and gvproxy processes (no idea what that second one does) and waits until the user verifies that the QEMU VM has finished booting before resuming them.

#!/bin/bash

# Kill any leftover podman/qemu processes from a previous start attempt.
pkill podman
pkill qemu

podman machine start --log-level debug &
PID=$!
sleep 2

# SIGSTOP pauses podman and gvproxy before they can run the mount commands.
pkill -STOP podman
pkill -STOP gvproxy

echo "^^^^"
echo "^^^^"
echo "^^^^"
echo "If the above says the VM already running or started"
echo "then edit the json file located at ~/.config/containers/podman/machine/qemu/"
echo "and change the line"
echo "\"Starting\": true"
echo "to be"
echo "\"Starting\": false"
echo ""
echo "dont forget to save, and rerun this script."
echo ""
echo "Else, continue with instructions below"
echo ""
echo "Qemu will open in another window (likely in the background)"
echo "wait until you see a login prompt on that window"
read -p "then return to THIS terminal and hit enter"

# SIGCONT resumes the paused processes now that the VM has booted.
pkill -CONT podman
pkill -CONT gvproxy

Unfortunately, this is only a band-aid. We still need something that addresses the underlying root cause.

tonyseek commented

It happened on an Intel MacBook (MacBookPro16,1) too, not just the M1. @jamesmikesell's script works for me, as a workaround.


berndlosert commented Feb 15, 2023

I just started having this problem after restarting my M1 Mac. Running with --log-level debug shows that it dies after trying to ssh into the VM.

For the record, I am using:

  • macOS Monterey 12.6.3
  • podman version 4.3.1 (installed with brew)

berndlosert commented

Removing the VM and recreating it fixed it for me.

smakinen commented

A brilliant idea and move, @jamesmikesell! I'm happy to say that with your script I can see my containers again :). Halting the podman and gvproxy processes until QEMU has had time to finish initialization works. I had previously tried to renice the QEMU and Podman processes to make QEMU run faster, but I could not figure out how to halt the other processes altogether. Thanks a lot.

To @berndlosert: also good to hear that recreating the VM works, but I did not want to lose mine since it held a bunch of containers used for development. So I was not keen to remove the existing VM (and the problem could also reappear after a while).

For Podman and the QEMU machine initialization code, I think there should be a function such as isSSHRunning that checks the SSH status before SSH is used (in the 'Waiting for VM...' phase). It's a good question what the best way is to check the availability of SSH in the guest OS if the machine state and the listening state of gvproxy cannot be relied on; a sketch follows below.
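
Editor's sketch of what such an isSSHRunning check could look like, again in Go and again hypothetical. A bare TCP connect is presumably not enough here, because gvproxy itself accepts the connection on the forwarded port before sshd in the guest is up; reading the SSH identification banner (the SSH-2.0-... line an OpenSSH server sends as soon as a connection opens) distinguishes the two.

// is_ssh_running.go — hypothetical helper, not Podman's actual code.
package main

import (
	"bufio"
	"fmt"
	"net"
	"strings"
	"time"
)

// isSSHRunning reports whether a real SSH server answers on addr by
// reading the protocol banner rather than just testing the TCP connect.
func isSSHRunning(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false
	}
	defer conn.Close()
	conn.SetReadDeadline(time.Now().Add(2 * time.Second))
	banner, err := bufio.NewReader(conn).ReadString('\n')
	// Expect something like "SSH-2.0-OpenSSH_8.8"; anything else (or a
	// timeout) means sshd is not ready yet.
	return err == nil && strings.HasPrefix(banner, "SSH-")
}

func main() {
	fmt.Println(isSSHRunning("127.0.0.1:50810")) // port from machine inspect
}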


laurent-martin commented Feb 27, 2023

I observed the same: the podman VM starts successfully only about once every 4 times, but pausing podman until QEMU is stable resolves the situation. Thanks @jamesmikesell.

See the script I use: podmac.sh start.

Example:

$ ./podmac.sh start intel_64

Starting machine "intel_64"
Waiting for VM ...
Pausing podman
Waiting for SSH
SSH-2.0-OpenSSH_8.8
Resuming podman
...
Machine "intel_64" started successfully


benoitf commented Mar 16, 2023

I reproduce it from time to time as well, with the latest version:

$ podman machine start new
Starting machine "new"
Waiting for VM ...
Mounting volume... /Users:/Users
Error: exit status 255

barloff-st commented

Hitting the same issue with the following:

  • A1990 (2019 Intel)
  • podman 4.3.1 and 4.4.4 (upgrade did not help)

The workarounds mentioned here (removing and recreating the VM, and the script) did not resolve the issue. I'm running into the same failure location as berndlosert.

itsthejoker commented

I just ran into this issue as well with podman 4.4.4 on a 2020 MBP. podman machine stop / start a few times let it finish, but I'm seeing the same results as @benoitf.


smakinen commented Apr 9, 2023

For me, James' script has worked well so far. Waiting until QEMU is ready for SSH logins helps. The waiting approach could surely be turned into a PR for Podman, but perhaps it won't work in all cases. I'm a bit curious: what do you @itsthejoker and others see in the QEMU window when you run podman machine start --log-level debug and wait until no more messages are printed to the QEMU screen? Is there some other error message, perhaps related to SSH?


chevdor commented Apr 14, 2023

This Error 255 seems to be a catch-all, so users may have various issues under the hood.

It is also a tricky one, since podman-desktop will simply ignore it and tell you the machine is running.
What it does NOT tell you is that if the SSH connection fails, all your mounts will have been skipped, so mounting volumes will fail all over the place for no apparent reason...
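
Editor's note: one quick way to check whether the volume mounts were actually set up (assuming the QEMU machine uses 9p mounts, as the -virtfs flags later in this thread suggest) is to list them from inside the machine:

podman machine ssh "mount | grep 9p"

If nothing is printed, the mounts were skipped.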

In my case, I spotted the issue using:

podman machine start --log-level debug

The SSH connection failed for some reason.

podman system connection ls

shows where the ssh key is located. A simple ll ~/.ssh | grep podman also works, and things are fine if you see only 2 lines.

I then deleted the keys and the machine, recreated the machine, and everything was back on track:

podman machine stop; podman machine rm
rm ~/.ssh/podman*
podman machine init
podman machine start

My 2 cents: if you have this error 255 issue, first try to get things working with a default podman machine (i.e. no custom memory, CPU, volumes, etc.) and then see if the fancy options keep working once the core problem is solved. I would also advise not using podman-desktop (for the duration of the test) as it gives a false sense of success.


chevdor commented Apr 17, 2023

I kept on having issues, and I am seeing some success thanks to this comment in this issue. I think the title of this issue could be edited and the M1 part removed; I don't think the problems discussed here are M1 specific. I run into the same on an Intel Mac. The issue is, however, likely related to the fact that users here are on a Mac.

The podman troubleshooting guide mentions some extra cleanup, so the following may help:

podman machine stop
podman machine rm -f
rm -rf ~/.local/share/containers/podman
rm -rf ~/.config/containers/

then your regular:

podman machine init
podman machine start

In the end, the fix for me was to add the following:

# to fix podman issues
# see https://github.com/containers/podman/issues/14237
Host localhost
  IdentitiesOnly yes

to my ~/.ssh/config.

I was consistently getting this 255 error even from a cleanly rebooted system and when using a default podman machine.

With the ssh config fix above and yet another cleanup, I was finally able to get things back in order.

samuel-phan commented

For me, the issue was that my SSH agent already had a lot of keys loaded.

My fix:

ssh-add -D  # clear the SSH keys from my SSH agent

podman machine stop
podman machine start


noyez commented Apr 17, 2023

For me, the issue was my SSH agent with a lot of keys already.

Same here, I just ran into this issue. I temporarily unset SSH_AUTH_SOCK as a workaround.

@samuel-phan didn't know about ssh-add -D, thx!


paulftw commented Apr 25, 2023

see the script I use start.sh

Had the same issue and the start script fixed it. I also had to change "Starting" to false in json (as @jamesmikesell mentioned). Without it any attempt to start a machine was returning "VM already running or starting".

laurent-martin commented

I also had to change "Starting" to false in json

I have updated the script here: https://github.com/laurent-martin/podman_x86_64_on_apple_aach64
so that it automatically fixes this condition as well now:

./podmac.sh start intel_64
Checking jq...OK, jq found
Checking podman...OK, podman found
Checking curl...OK, curl found
Resetting stale starting state.         <--------------------
Starting machine "intel_64"
Waiting for VM ...
Pausing podman
SSH available: Resuming podman.
Mounting volume... /Users/laurent:/Users/laurent

This machine is currently configured in rootless mode. If your containers
require root permissions (e.g. ports < 1024), or if you run into compatibility
issues with non-podman clients, you can switch using the following command:

	podman machine set --rootful intel_64

API forwarding listening on: /var/run/docker.sock
Docker API clients default to this address. You do not need to set DOCKER_HOST.

Machine "intel_64" started successfully

@smakinen smakinen changed the title [Bug]: Podman machine fails to start with exit status 255 on Mac M1 Pro [Bug]: Podman machine fails to start with exit status 255 on Mac Apr 26, 2023
vrothberg commented

@ashley-cui @baude PTAL


osalbahr commented May 4, 2023

This happens to me sometimes too, even though the machine starts anyway. I don't know if this is related.

~$ podman machine start
Starting machine "podman-machine-default"
Waiting for VM ...
Mounting volume... /Users:/Users
Error: exit status 255
~$ podman run -it debian
Resolved "debian" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/debian:latest...
Getting image source signatures
Copying blob sha256:918547b9432687b1e1d238e82dc1e0ea0b736aafbf3c402eea98c6db81a9cb65
Copying config sha256:34b4fa67dc04381e908b662ed69b3dbe8015fa723a746c66cc870a5330520981
Writing manifest to image destination
Storing signatures
root@1c153be897f5:/# uname -a
Linux 1c153be897f5 6.2.9-300.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Mar 30 22:32:58 UTC 2023 x86_64 GNU/Linux
root@1c153be897f5:/# exit
exit
~$ 

laurent-martin commented

Mounting volume... /Users:/Users
Error: exit status 255
~$ podman run -it debian

Yes, the VM may have started, but the mount failed. The proper test is rather (/Users represents the volume mounted in the VM):

$ podman run -it -v /Users:/Users debian
root@a1f31cce35f5:/# ls /Users

It should show the contents of /Users on macOS... if the mount did not fail.


osalbahr commented May 4, 2023


I am aware that the mount didn't fail, unlike with the OP. I was just reporting the error message in case it might be related to this issue, since it is part of the same output.


jamesmikesell commented May 5, 2023

I've come up with a slightly improved version of my script (read: hack) above, which waits for the VM to finish booting before letting podman mount directories.

The earlier script ran podman/qemu in debug mode, which slowed down the performance of the running containers.

This improved version runs podman/qemu in normal mode and relies on the user waiting for the CPU utilization of the qemu process to drop and stabilize (which happens once the VM is done loading) before hitting enter, thus allowing podman to mount the shared directories.

#!/bin/bash

# Kill any leftover podman/qemu processes from a previous start attempt.
pkill podman
pkill qemu

podman machine start &
PID=$!
sleep 2

# SIGSTOP pauses podman/gvproxy before the mount commands can run.
pkill -STOP podman
pkill -STOP gvproxy

echo "^^^^"
echo "^^^^"
echo "^^^^"
echo "If the above says the VM already running or started"
echo "then edit the json file located at ~/.config/containers/podman/machine/qemu/"
echo "and change the line"
echo "\"Starting\": true"
echo "to be"
echo "\"Starting\": false"
echo ""
echo "don't forget to save, and rerun this script."
echo ""
echo "Else, continue with instructions below"
echo ""
echo "Wait until the displayed CPU utilization lowers and stabilizes to 1% or less"
echo "Then hit enter"


PID_QEMU=$(pgrep qemu)

# Print qemu's CPU utilization once a second; a sustained drop to ~1%
# indicates the VM has finished booting. Hitting enter breaks the loop.
while true; do
  CPU=$(ps -p $PID_QEMU -o %cpu | awk 'NR>1 {print $1}')
  printf "\rCPU utilization: %s%%             " $CPU

  # read -s -t 1 returns 0 only if the user hit enter within one second.
  read -s -t 1
  if [ $? -eq 0 ]; then
    break
  fi
done


# SIGCONT resumes podman/gvproxy so they can proceed with the mounts.
pkill -CONT podman
pkill -CONT gvproxy

Bluebugs commented

@jamesmikesell thanks for your script. I have the same problem, and using it reliably works.

vrothberg added a commit to vrothberg/libpod that referenced this issue Jul 13, 2023
During the exponential backoff waiting for the machine to be fully up
and running, also make sure that SSH is ready.  The systemd dependencies
of the ready.service include the sshd.service among others but that is
not enough.

Other CoreOS users reported the same issue on IRC, so I feel fairly
confident to use the pragmatic approach of making sure SSH works on the
client side.  containers#17403 is quite old and there are other pressing machine
issues that need attention.

[NO NEW TESTS NEEDED]

Fixes: containers#17403
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
ashley-cui pushed a commit to ashley-cui/podman that referenced this issue Jul 13, 2023
Make sure that starting a qemu machine uses proper exponential backoffs
and that a single variable isn't shared across multiple backoffs.

DO NOT BACKPORT: I want to avoid backporting this PR to the upcoming 4.6
release as it increases the flakiness of machine start (see containers#17403). On
my M2 machine, the flake rate seems to have increased with this change
and I strongly suspect that additional/redundant sleep after waiting for
the machine to be running and listening reduced the flakiness.  My hope
is to have more predictable behavior and find the sources of the flakes
soon.

[NO NEW TESTS NEEDED] - still too flaky to add a test to CI.

Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
ashley-cui pushed a commit to ashley-cui/podman that referenced this issue Jul 13, 2023
(Same commit message as the vrothberg/libpod commit above: during the exponential backoff, also make sure that SSH is ready. Fixes: containers#17403)
romanrev commented

I am still experiencing this issue on podman 4.5.1, using podmac.sh, removing all of my ssh identities, etc. I have to start/stop the machine around 20 times before it actually mounts the directories correctly.

➤ podmac.sh start podman-machine-default
Checking jq...OK, jq found
Checking podman...OK, podman found
Checking curl...OK, curl found
Starting machine "podman-machine-default"
Waiting for VM ...
Pausing podman
Waiting for SSH /...Mounting volume... /Users:/Users
Error: exit status 255
ERROR: podman exited prematurely
roman@Romans-MacBook-Pro:~/K/A/m/d/c/application|main⚡*
➤ podman machine stop
Waiting for VM to exit...
Machine "podman-machine-default" stopped successfully
# above in a loop
# until finally

➤ podmac.sh start podman-machine-default
Checking jq...OK, jq found
Checking podman...OK, podman found
Checking curl...OK, curl found
Starting machine "podman-machine-default"
Waiting for VM ...
Pausing podman
Waiting for SSH /...Mounting volume... /Users:/Users
SSH available: Resuming podman.
Mounting volume... /private:/private
Mounting volume... /var/folders:/var/folders
API forwarding listening on: /var/run/docker.sock
Docker API clients default to this address. You do not need to set DOCKER_HOST.

Stausssi commented

The bug is not fixed in 4.5.1, but will be part of the 4.6.0 release (see changelog)

vrothberg commented

Yes, that's it. The fixes will ship with the upcoming 4.6 release. The plan is to release 4.6 by the end of this week, so the fix will reach you soon.

itsthejoker commented

Awesome, thank you!!


sneko commented Aug 2, 2023

Can someone confirm v4.6 solves the issue? In my case I upgraded 1 week ago and had the issue multiple times despite a full reboot of my macOS machine...

#17403 (comment) is still my only viable solution 😢


benoitf commented Aug 2, 2023

@sneko did you recreate your podman machine? The Ignition script has been changed, so you'll need to recreate the machine.


romanrev commented Aug 2, 2023

Can someone confirm v4.6 solves the issue? In my case I upgraded 1 week ago, had the issue multiple times despite a full reboot of my MacOS...

#17403 (comment) is still my only viable solution 😢

I seem to have stopped experiencing the issue after I upgraded, even without recreating the machine


smakinen commented Aug 8, 2023

Thanks for the changes to the Ignition script and for shipping the fix. For a previously created Podman machine that was failing in machine startup, I still got the exit status 255 error with Podman 4.6.0. As Florent mentioned, taking advantage of the changed Ignition script won't happen without recreating the machine.

% podman --version
podman version 4.6.0
% podman machine start
Starting machine "podman-machine-default"
Waiting for VM ...
Error: machine did not transition into running state: ssh error: exit status 255

I think the situation is fairly good now. If all future machines start ok and at least some of the currently failing legacy machines can be started with the delayed startup scripts found here, Podman should be good to go in most scenarios 😊. Thank you all and have a nice autumn (at least in the northern hemisphere) everyone 🍁.

laurent-martin commented

Starting an x86 VM on an M1 MacBook works now with 4.6.0:

% podman --version
podman version 4.6.0

% podman machine start intel_64
Starting machine "intel_64"
Waiting for VM ...
Mounting volume... /Users/laurent:/Users/laurent

This machine is currently configured in rootless mode. If your containers
require root permissions (e.g. ports < 1024), or if you run into compatibility
issues with non-podman clients, you can switch using the following command:

	podman machine set --rootful intel_64

API forwarding listening on: /var/run/docker.sock
Docker API clients default to this address. You do not need to set DOCKER_HOST.

Machine "intel_64" started successfully


shrishs commented Aug 11, 2023

I am still facing the same issue on Mac Ventura.
podman version
Client: Podman Engine
Version: 4.6.1

bash-3.2$ podman machine start podman-machine-default --log-level debug
INFO[0000] podman filtering at log level debug
DEBU[0000] Using Podman machine with qemu virtualization provider
Starting machine "podman-machine-default"
[/usr/local/opt/podman/libexec/podman/gvproxy -listen-qemu unix:///var/folders/ft/1tqtxfq54v38g9kbzvby4hkm0000gn/T/podman/qmp_podman-machine-default.sock -pid-file /var/folders/ft/1tqtxfq54v38g9kbzvby4hkm0000gn/T/podman/podman-machine-default_proxy.pid -ssh-port 53958 -forward-sock /Users/shrishsrivastava/.local/share/containers/podman/machine/qemu/podman.sock -forward-dest /run/user/501/podman/podman.sock -forward-user core -forward-identity /Users/xxxx/.ssh/podman-machine-default --debug]
DEBU[0000] qemu cmd: [/usr/local/bin/qemu-system-x86_64 -m 2048 -smp 1 -fw_cfg name=opt/com.coreos/config,file=/Users/xxxx/.config/containers/podman/machine/qemu/podman-machine-default.ign -qmp unix:/var/folders/ft/1tqtxfq54v38g9kbzvby4hkm0000gn/T/podman/qmp_podman-machine-default.sock,server=on,wait=off -netdev socket,id=vlan,fd=3 -device virtio-net-pci,netdev=vlan,mac=5a:94:ef:e4:0c:ee -device virtio-serial -chardev socket,path=/var/folders/ft/1tqtxfq54v38g9kbzvby4hkm0000gn/T/podman/podman-machine-default_ready.sock,server=on,wait=off,id=apodman-machine-default_ready -device virtserialport,chardev=apodman-machine-default_ready,name=org.fedoraproject.port.0 -pidfile /var/folders/ft/1tqtxfq54v38g9kbzvby4hkm0000gn/T/podman/podman-machine-default_vm.pid -machine q35,accel=hvf:tcg -cpu host -virtfs local,path=/Users,mount_tag=vol0,security_model=none -virtfs local,path=/private,mount_tag=vol1,security_model=none -virtfs local,path=/var/folders,mount_tag=vol2,security_model=none -drive if=virtio,file=/Users/xxxx/.local/share/containers/podman/machine/qemu/podman-machine-default_fedora-coreos-38.20230806.2.0-qemu.x86_64.qcow2]
Waiting for VM ...
Error: machine did not transition into running state

vrothberg commented

@shrishs thanks for reporting. I don't think it's the same issue, but a different one. Feel free to create a new issue on GitHub.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Nov 10, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 10, 2023