[CI:ALL] Bump to v5.2.0-rc3 #23462
Conversation
Two tests are failing in gating but never in CI; add some debug instrumentation to make it possible to find out what is going on. Signed-off-by: Ed Santiago <santiago@redhat.com>
The test assumes that if there is more than one IP on the host, we should be able to set host.containers.internal. This, however, is not how the logic works in the code. What it actually does is check all IPs in the rootless-netns; it then knows that it cannot use any of those IPs, which includes any podman bridge IPs.

You can reproduce the error when you have only one IPv4 address on the host: run a container as root in the background, then run the test: hack/bats --rootless 505:host.containers.internal

So the failure here was that there was already a podman container running as root on the default bridge, thus the test saw two IPs, but the rootless run also uses the same subnet for its bridge, so the code knew that IP would not work either.

I could have made another special condition in the test, but the better way to work around it is to create a new network. A new network will make sure there are no conflicting subnets assigned, so the test will pass.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
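For context, a minimal Go sketch of the selection logic described above: the host.containers.internal address is chosen by filtering out every host address that falls into a subnet visible inside the rootless-netns (such as podman bridge subnets). The function and variable names here are illustrative assumptions, not Podman's actual code.

```go
package main

import (
	"fmt"
	"net"
)

// pickHostIP returns the first host address that does not fall into any of
// the excluded subnets (e.g. bridge subnets visible inside the rootless-netns).
// Hypothetical helper for illustration only.
func pickHostIP(hostAddrs []net.IP, excluded []*net.IPNet) (net.IP, bool) {
	for _, ip := range hostAddrs {
		conflict := false
		for _, sub := range excluded {
			if sub.Contains(ip) {
				conflict = true
				break
			}
		}
		if !conflict {
			return ip, true
		}
	}
	return nil, false
}

func main() {
	// 10.88.0.0/16 stands in for a podman bridge subnet already in use.
	_, bridge, _ := net.ParseCIDR("10.88.0.0/16")
	hostAddrs := []net.IP{net.ParseIP("10.88.0.1"), net.ParseIP("192.168.1.10")}
	if ip, ok := pickHostIP(hostAddrs, []*net.IPNet{bridge}); ok {
		fmt.Println("host.containers.internal ->", ip) // 192.168.1.10
	} else {
		fmt.Println("no usable address found")
	}
}
```

With only one host IP and that IP's subnet also used by a bridge, no usable address remains, which is why the test needed a fresh, non-conflicting network.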
The tests didn't actually check anything, because default_ifname requires an IP version argument to work; thus pasta_iface was empty. Add new checks to prevent this kind of error again. Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This contains a fix for a gvproxy crash on macOS on fast connections with heavy network load. This should fix containers#23114. Signed-off-by: Christophe Fergeau <cfergeau@redhat.com>
The value of the pointer might be changed while creating the container, causing unexpected side effects. Signed-off-by: Paul Holzinger <pholzing@redhat.com>
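As an illustration of the kind of fix described here, a hedged Go sketch of copying a value before use so that later mutation through the caller's pointer has no effect; the Spec type and deepCopy helper are hypothetical, not Podman's actual types.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Spec stands in for a configuration struct handed to container creation
// by pointer. Illustrative only.
type Spec struct {
	Name  string
	Ports []int
}

// deepCopy makes an independent copy so later changes to the original
// cannot alter what has already been validated or stored.
func deepCopy(in *Spec) (*Spec, error) {
	data, err := json.Marshal(in)
	if err != nil {
		return nil, err
	}
	out := new(Spec)
	if err := json.Unmarshal(data, out); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	orig := &Spec{Name: "ctr", Ports: []int{8080}}
	copied, _ := deepCopy(orig)
	orig.Ports[0] = 9090                // caller mutates the original afterwards
	fmt.Println(copied.Ports[0])        // still 8080: no side effect on the copy
}
```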
We bind ports to ensure there are no conflicts, and we leak them into conmon to keep them open. However, we bound the ports after the network was set up, so it was possible for a second network setup to overwrite the firewall configs of a previous container, as the second setup failed only later when binding the port. As such, we must ensure we bind before the network is set up.

This is not so simple, because we still have to take care of the PostConfigureNetNS bool, in which case the network setup happens after we launch conmon. Thus we end up with two different conditions.

Also, it is possible that we "leak" the ports that are set on the container until the garbage collector closes them. This is not perfect, but the alternative is adding special error handling on each function exit after prepare until we start conmon, which is a lot of work to do correctly.

Fixes https://issues.redhat.com/browse/RHEL-50746

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
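To make the ordering point concrete, a minimal Go sketch of reserving host ports up front so a conflict is detected before any firewall/network setup happens. This is an assumption-laden illustration: the reservePorts helper is hypothetical, and the real Podman code additionally leaks the bound file descriptors into conmon and handles the PostConfigureNetNS case.

```go
package main

import (
	"fmt"
	"net"
)

// reservePorts binds each requested host port before network setup starts.
// The listeners are returned so they can be kept open (or their FDs passed
// to a child process) until the runtime takes over.
func reservePorts(ports []int) ([]net.Listener, error) {
	var listeners []net.Listener
	for _, p := range ports {
		l, err := net.Listen("tcp", fmt.Sprintf(":%d", p))
		if err != nil {
			// Release anything already bound before reporting the conflict,
			// so a failed attempt leaves nothing behind.
			for _, prev := range listeners {
				prev.Close()
			}
			return nil, fmt.Errorf("port %d already in use: %w", p, err)
		}
		listeners = append(listeners, l)
	}
	return listeners, nil
}

func main() {
	ls, err := reservePorts([]int{8080, 8081})
	if err != nil {
		fmt.Println("bind failed before any network setup ran:", err)
		return
	}
	fmt.Println("reserved", len(ls), "ports; safe to run network setup now")
}
```

The design point is simply that the port conflict surfaces before firewall rules are written, so a second container can no longer clobber the first container's network configuration and then fail anyway.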
Signed-off-by: Matt Heon <mheon@redhat.com>
Add the `--compat-volumes` option from Buildah v1.37 into Podman in preparation for Podman v5.2. Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
This commit was automatically cherry-picked by buildah-vendor-treadmill v0.3 from the buildah vendor treadmill PR, containers#13808.

Changes since 2024-05-21:
* document --compat-volumes
* Fix conflict caused by Ed's local-registry PR in buildah

Signed-off-by: Ed Santiago <santiago@redhat.com>
Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
When using service containers and play kube we create a complicated set of dependencies. First, in a pod all conmon/container cgroups are part of one slice; that slice will be removed when the entire pod is stopped, resulting in systemd killing all processes that were part of it.

Now the issue here is around the workings of stopPodIfNeeded() and stopIfOnlyInfraRemains(): once a container is cleaned up, it will check if the pod should be stopped depending on the pod ExitPolicy. If this is the case, it will stop all containers in that pod. However, in our flaky test we called podman pod kill, which logically killed all containers already. Thus the logic now thinks on cleanup it must stop the pod and calls into pod.stopWithTimeout(). There we try to stop, but because all containers are already stopped it just throws errors and never gets to the point where it would call Cleanup(). So the code does not do cleanup and eventually calls removePodCgroup(), which will cause all conmon and other podman cleanup processes of this pod to be killed.

Thus the podman container cleanup process was likely killed while actually trying to do the proper cleanup, which leaves us in a bad state. Following commands such as podman pod rm will try to do the cleanup again as they see it was not completed, but then fail as they are unable to recover from the partial cleanup state.

Long term, network cleanup needs to be more robust and ideally should be idempotent, to handle cases where cleanup was killed in the middle.

Fixes containers#21569

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
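To make the "idempotent cleanup" idea concrete, a minimal Go sketch follows. The netState type and cleanupNetwork function are hypothetical stand-ins, not Podman's actual API: each step checks whether its resource still exists before acting, so re-running cleanup after a killed or partial first attempt succeeds instead of erroring out.

```go
package main

import "fmt"

// netState tracks which network resources still exist for a container.
type netState struct {
	firewallRulesPresent bool
	netnsMounted         bool
}

// cleanupNetwork is idempotent: already-removed resources are skipped,
// so it can safely be called again after an interrupted cleanup.
func cleanupNetwork(s *netState) error {
	if s.firewallRulesPresent {
		// remove firewall rules here ...
		s.firewallRulesPresent = false
	}
	if s.netnsMounted {
		// unmount and remove the network namespace here ...
		s.netnsMounted = false
	}
	return nil
}

func main() {
	s := &netState{firewallRulesPresent: true, netnsMounted: true}
	fmt.Println(cleanupNetwork(s)) // first run does the work
	fmt.Println(cleanupNetwork(s)) // second run is a no-op, not an error
}
```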
Fix up a couple of versions in comments in the pkg/api/server/register_images.go file. Based on comments from containers#23440. Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
Signed-off-by: Matt Heon <mheon@redhat.com>
We were not able to find or create Copr project
Unless the HTTP status code above is >= 500, please check your configuration for:
Eeek it's against the wrong branch
Signed-off-by: Matt Heon <mheon@redhat.com>
Signed-off-by: Matt Heon <mheon@redhat.com>
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: mheon The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment
Alright, fixed, it's against 5.2 now
@containers/podman-maintainers PTAL
@@ -2,6 +2,7 @@
## 5.2.0
### Features
- Podman now supports `libkrun` as a backend for creating virtual machines on MacOS. The `libkrun` backend has the advantage of allowing GPUs to be mounted into the virtual machine to accelerate tasks. The default backend remains `applehv`.
It's a nit as almost everyone writes it "MacOS", but officially, it's "macOS"
LGTM
Restarted one machine flake
assuming a flake ... be sure it isn't /lgtm
You've been missing out. It's ALWAYS a flake. podman-machine is failing ~2 out of 3 runs. Sometimes more.
As the title says
Does this PR introduce a user-facing change?