Create test jobs to validate that debs/rpms can be installed #821
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/remove-lifecycle rotten

@kubernetes/release-engineering -- If someone is interested in taking this, go for it!
@justaugustus This might be an interesting one. Is it okay if I give it a try and ask for some initial guidance?

I want to start working on this, but I need a few pointers to get started.

@xmudrii -- Sorry I didn't get to this this week. Still working through the post-holiday queue. Will write something up for you next week.

@justaugustus Reminder to take a look at this issue if you have some time. 🙂
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/remove-lifecycle stale

Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I discussed this issue with @justaugustus a few months ago and I'll try to recap our conversation. There are two ways to fix this issue: a simple way and a complete way. The simple way means that we move the old script to this repo and make it work. The script can be found here: https://github.com/kubernetes/kubeadm/blob/master/tests/e2e/packages/verify_packages_install_deb.sh

The complete way means that we do something along the lines of:

The complete way is much more complicated and it's unclear how it should look. We should start simple, port the old script, and create a job that would use it.
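For reference, a rough sketch of what the ported deb check could boil down to. This is illustrative only: the repository URL and key location are the historical apt.kubernetes.io ones, and the package list is an assumption based on this thread, not the contents of the actual script.

```bash
#!/usr/bin/env bash
# Illustrative sketch of a deb install check, loosely modelled on the old
# verify_packages_install_deb.sh; the URLs, key location, and package list
# are assumptions, not the real job definition.
set -euo pipefail

# Configure the (historical) Kubernetes apt repository.
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" \
  > /etc/apt/sources.list.d/kubernetes.list
apt-get update

# The job fails if any of the packages cannot be installed.
apt-get install -y kubeadm kubelet kubectl kubernetes-cni cri-tools
```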
So if I understand correctly, the steps we are likely to take are:
I would only add that we might want to slightly revisit the script. I see that it has a part for verifying stable packages. I think that this might be covered by

I'm not sure that we need to update

So looking at the current script, it does not check the versions of the installed packages, and does not check kubeadm, kubelet, kubectl, etc. So would we be better off keeping the check, or maybe adding it to verify-published.sh and then dropping it from the new script?
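If we did keep such a check, it could be something as small as the sketch below. `EXPECTED_VERSION` is a hypothetical input to the job, not part of any existing script, and the exact flags may need adjusting per client version.

```bash
#!/usr/bin/env bash
# Hypothetical version check: compare what the installed binaries report
# against an expected version passed in by the job.
set -euo pipefail
EXPECTED_VERSION="${EXPECTED_VERSION:?set to the expected version, e.g. 1.26.0}"

kubeadm version -o short             | grep -q "v${EXPECTED_VERSION}"
kubelet --version                    | grep -q "v${EXPECTED_VERSION}"
kubectl version --client 2>/dev/null | grep -q "v${EXPECTED_VERSION}"
echo "kubeadm, kubelet, and kubectl all report v${EXPECTED_VERSION}"
```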
We've discussed this issue in the #release-management channel on Slack. I'll try to recap the most important points. The original script from the kubeadm repo doesn't work anymore because we are no longer building debs and rpms for each CI build. Historically, we would build debs/rpms for each CI build and put them in a bucket, from where anyone could grab them. However, we stopped doing this a while ago, so it's not possible to use the original script at all. Instead, we want to use

Because of the reasons stated above, we've concluded that it makes sense to put this issue on hold until we start using packages built with

Also, @puerco proposed that instead of running this job as a periodic, we run it as part of the release process (e.g. when staging the release).

/lifecycle frozen
This job should probably also verify that the installed software (from the packages) has the expected version? Currently the

Failed again, for the 1.1.1 deb. It also contains 0.8.6.

Ugh, this may fix it: #2673
@saschagrunert: would it be possible to release new packages to apt, with the correct contents? kubernetes-cni_0.8.6-00 (this one seems OK, no new release needed)

EDIT: These would be imaginary new packages, to replace the old ones with the wrong content:

```console
$ apt list kubernetes-cni -a
Listing... Done
kubernetes-cni/kubernetes-xenial 1.1.1-00 amd64
kubernetes-cni/kubernetes-xenial 0.8.7-00 amd64
kubernetes-cni/kubernetes-xenial 0.8.6-00 amd64
kubernetes-cni/kubernetes-xenial 0.7.5-00 amd64
kubernetes-cni/kubernetes-xenial 0.6.0-00 amd64
kubernetes-cni/kubernetes-xenial 0.5.1-00 amd64
kubernetes-cni/kubernetes-xenial 0.3.0.1-07a8a2-00 amd64
```
They should be automatically generated with the October patch releases. Not sure if we have to bump the package rev, though.
If you do a "stealth" update, the old ones might be used from the cache (depending on how people set up their mirroring, when the filenames are the same).

EDIT: My bad, the filename would change:

Also, I don't think apt and yum will see it as an update if it has the same EVR? But we can try it, something simple like
@afbjorklund I'm wondering why we are at that revision (line 95 in dbc13bd).

We are at 00; unfortunately the contents are 0.8.6. So I was thinking that 01 was the next revision after? Normally the default Debian revision is "0", so I'm not sure where the extra zero came from to start with...
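For what it's worth, dpkg's version comparison treats a bumped revision as newer, so re-releasing with `-01` should be visible to apt as an upgrade, while re-publishing under the same `-00` version would not be. A quick illustration (version strings taken from the listing above):

```bash
# dpkg sorts 1.1.1-01 above 1.1.1-00, so a revision bump is seen as an
# upgrade; an identical version string leaves nothing to upgrade to.
dpkg --compare-versions 1.1.1-00 lt 1.1.1-01 && echo "1.1.1-01 is newer"
dpkg --compare-versions 1.1.1-00 eq 1.1.1-00 && echo "same version, not an upgrade"
```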
Still happening in Kubernetes 1.26.0:

```console
anders@lima-k8s:~$ apt list | grep kubernetes-xenial

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

cri-tools/kubernetes-xenial,now 1.25.0-00 amd64 [installed]
docker-engine/kubernetes-xenial 1.11.2-0~xenial amd64
kubeadm/kubernetes-xenial,now 1.26.0-00 amd64 [installed]
kubectl/kubernetes-xenial,now 1.26.0-00 amd64 [installed]
kubelet/kubernetes-xenial,now 1.26.0-00 amd64 [installed]
kubernetes-cni/kubernetes-xenial,now 1.1.1-00 amd64 [installed]
rkt/kubernetes-xenial 1.29.0-1 amd64

anders@lima-k8s:~$ /usr/bin/crictl --version
crictl version v1.25.0

anders@lima-k8s:~$ /opt/cni/bin/portmap --version
CNI portmap plugin v0.8.6
```

This is due to the packages not being bumped.
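A small check along these lines in the proposed job could have caught this. The comparison below is an illustrative sketch, not an existing script; the paths and package names are taken from the output above.

```bash
#!/usr/bin/env bash
# Illustrative mismatch check: compare the kubernetes-cni package version
# with the version the bundled portmap plugin actually reports.
set -euo pipefail

pkg_version="$(dpkg-query -W -f='${Version}' kubernetes-cni)"              # e.g. 1.1.1-00
bin_version="$(/opt/cni/bin/portmap --version | grep -o 'v[0-9][0-9.]*')"  # e.g. v0.8.6
echo "package: ${pkg_version}, bundled portmap: ${bin_version}"

case "${pkg_version}" in
  "${bin_version#v}"-*) echo "versions match" ;;
  *) echo "mismatch between package version and bundled plugin" >&2; exit 1 ;;
esac
```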
I'm not working on this at the moment. |
Per @spiffxp's review comment:
creating an issue instead of having this as a code TODO.
/assign
/area release-eng
/milestone v1.16
/sig release
/priority important-soon