
go-runner in 1.24.8 kube-proxy Docker Image is built with 1.17.3 GO #2841

Closed
jhawkins1 opened this issue Nov 16, 2022 · 35 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.
needs-priority
needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.
sig/release Categorizes an issue or PR as relevant to SIG Release.
sig/security Categorizes an issue or PR as relevant to SIG Security.

Comments

@jhawkins1

What happened?

Although 1.24.x moved to Go 1.18, the "go-runner" in the kube-proxy Docker image appears to have been built with Go 1.17.3. Vulnerability scanners are detecting this, so all Go 1.17.3 CVEs are being reported against the kube-proxy image because of "go-runner".

What did you expect to happen?

"go-runner" should be built with the same Go version that the Kubernetes release uses. For 1.24.8 that is Go 1.18.x.

How can we reproduce it (as minimally and precisely as possible)?

A vulnerability scan of the container image will show that Go 1.17.3 was used to build the "go-runner" binary in the kube-proxy image.
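The scanner finding can also be reproduced by hand. A minimal sketch, assuming Docker and a local Go toolchain new enough to run `go version` against arbitrary binaries; the image tag and binary paths are the ones named in this report:

```shell
# Copy the binaries out of the published image and ask the Go toolchain
# which compiler version each one embeds.
IMG=k8s.gcr.io/kube-proxy:v1.24.8
cid=$(docker create "$IMG")
docker cp "$cid":/go-runner ./go-runner
docker cp "$cid":/usr/local/bin/kube-proxy ./kube-proxy
docker rm "$cid" >/dev/null

# `go version <binary>` prints "<path>: goX.Y.Z"; keep just the version.
go version ./go-runner  | awk '{print $NF}'   # per this report: go1.17.3
go version ./kube-proxy | awk '{print $NF}'   # per this report: go1.18.8
```

If the two versions differ, the go-runner binary was built by an older toolchain than the rest of the release, which is exactly what the scanner flags.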

Anything else we need to know?

No response

Kubernetes version

1.24.8

Cloud provider

N/A

OS version

# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here

# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here

Install tools

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, ...) and versions (if applicable)

@jhawkins1 jhawkins1 added the kind/bug Categorizes issue or PR as related to a bug. label Nov 16, 2022
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Nov 16, 2022
@k8s-ci-robot
Contributor

@jhawkins1: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Nov 16, 2022
@dims
Member

dims commented Nov 16, 2022

/remove-sig api-machinery
/sig networking

@k8s-ci-robot k8s-ci-robot removed the sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. label Nov 16, 2022
@k8s-ci-robot
Contributor

@dims: The label(s) sig/networking cannot be applied, because the repository doesn't have them.

In response to this:

/remove-sig api-machinery
/sig networking

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Nov 16, 2022
@jhawkins1
Author

/sig security

@k8s-ci-robot k8s-ci-robot added sig/security Categorizes an issue or PR as relevant to SIG Security. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Nov 16, 2022
@jhawkins1
Author

/sig release

@k8s-ci-robot k8s-ci-robot added the sig/release Categorizes an issue or PR as relevant to SIG Release. label Nov 16, 2022
@aojea
Member

aojea commented Nov 17, 2022

We'll need help from SIG Release here; I tried to find out how the image is built for 1.24, without success.

@jhawkins1
Author

Looks like go-runner was added to kube-proxy in 1.23 via kubernetes/kubernetes#106086. I see @BenTheElder played a role in adding go-runner. Maybe the folks on that issue and set of PRs could help track this down.

@BenTheElder
Member

kube-proxy doesn't use this image in 1.25+.
The image is in the kubernetes/release repo.

@BenTheElder
Member

Unfortunately we probably can't backport the 1.25 improvements since they could break people depending on other things in the image.

@jhawkins1
Author

jhawkins1 commented Nov 17, 2022

@BenTheElder, looking at the build, it appears the image generated and then used by kube-proxy should be built with Go 1.18, so it's confusing to me how it ends up with Go 1.17.3. I'm fuzzy on how this works, though; it's my first time looking at the build mechanisms.

@jhawkins1
Author

@BenTheElder .... Ping... Any progress or further thoughts on this issue? Can the go-runner that kube-proxy uses be built with the same Go version used by Kubernetes 1.24.x, without backporting the other 1.25.x changes you alluded to in your earlier comment? We would like to get the CVE fixes in 1.24.x without needing to move to 1.25.x in the near term -- 1.25.x is in our plans for early/mid 2023, before 1.24 goes EOL in July. We are getting some pressure on the vulnerability-management side to get the CVEs remediated in the nearer term.

@BenTheElder
Member

I'm sorry but I'm not working on this, and I'm out right now.

Can the go-runner that kube-proxy uses be built with the same Go version used by Kubernetes 1.24.x, without backporting the other 1.25.x changes you alluded to in your earlier comment?

Maybe? This is not easy to answer; Go version upgrades often require updating dependencies and otherwise patching to work around behavior and standard-library changes. You'll have to look into what changed between these Go versions.

Focus in the kube-proxy image work has been on eliminating dependencies in the image going forward, for various reasons including avoiding CVE issues. We're also working on better plans for maintaining Go versions, but historically Kubernetes has not done Go minor-version upgrades within patch releases, for various compatibility reasons, so you'd have to build your own patched image. Most distros will maintain this sort of thing for you, and we do the best we can upstream.

cc @kubernetes/sig-release-leads

@liggitt has been working on a patch set to be able to safely upgrade minor Go versions in a patch release for the first time, but I don't think it has included go-runner, and he's also out right now.

@liggitt
Member

liggitt commented Jan 3, 2023

@liggitt has been working on a patch set to be able to safely upgrade minor Go versions in a patch release for the first time, but I don't think it has included go-runner, and he's also out right now.

it did bump the go-runner image for 1.23 and 1.24:

but this issue is saying the go-runner version for kube-proxy is not controlled by that line? https://github.com/kubernetes/kubernetes/blob/v1.24.8/build/common.sh#L94 also referenced go1.18.8, but this issue says the v1.24.8 kube-proxy image uses a go-runner built with go1.17?
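For anyone checking what the release branch actually pins, the Go references in build/common.sh can be inspected directly. A sketch, assuming git, grep, and network access; it greps broadly for `go1.` rather than relying on a specific variable name:

```shell
# Check out the exact tag and look for pinned go1.x references in the
# build script cited above (line 94 of build/common.sh at v1.24.8).
git clone --quiet --depth 1 --branch v1.24.8 \
    https://github.com/kubernetes/kubernetes.git k8s-v1.24.8
grep -n 'go1\.' k8s-v1.24.8/build/common.sh
```

This shows what the kubernetes/kubernetes tree references, but, as discussed below, the go-runner baked into the debian-iptables base image is pinned separately in kubernetes/release.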

@jhawkins1
Author

@liggitt, the vulnerability scanner we are using (Palo Alto Prisma) is detecting Go 1.17.3 for the go-runner application (/go-runner) in the 1.24.8 kube-proxy image (k8s.gcr.io/kube-proxy:v1.24.8). Other Kubernetes Go binaries (e.g. /usr/local/bin/kube-proxy) show Go 1.18.8. We recently upgraded to the 1.24.x family of Kubernetes and assumed 1.18.x was used for everything, then discovered that go-runner appears to have been built with Go 1.17.3. Confusing, thus the issue submission.

@liggitt
Member

liggitt commented Jan 3, 2023

hmm, looks like the version of go-runner included in the kube-proxy iptables image comes from https://github.com/kubernetes/release/blame/master/images/build/debian-iptables/Makefile#L24

that is actually stale on master (I would have expected go1.19.4), but I also don't know if/how that got bumped for the debian-iptables image version used in 1.23 and 1.24

I think this issue probably belongs in the https://github.com/kubernetes/release repo... once that produces a debian-iptables image with modern go-runner images, we can pick that up in 1.23 and 1.24.

/transfer release

@k8s-ci-robot k8s-ci-robot transferred this issue from kubernetes/kubernetes Jan 3, 2023
@liggitt
Member

liggitt commented Jan 3, 2023

https://github.com/kubernetes/release/blob/master/images/build/debian-iptables/variants.yaml says IMAGE_VERSION: 'bullseye-v1.5.1' is for Kubernetes 1.23 and newer, but 1.23 is still on bullseye-v1.1.0 and 1.24 is still on bullseye-v1.3.0... should those update to 1.5.1+?
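The declared image versions can be pulled straight out of that file. A sketch, assuming curl and network access; the path is the variants.yaml referenced above:

```shell
# Fetch variants.yaml for debian-iptables from kubernetes/release and
# list every declared IMAGE_VERSION, with line numbers for context.
curl -sL https://raw.githubusercontent.com/kubernetes/release/master/images/build/debian-iptables/variants.yaml \
  | grep -n 'IMAGE_VERSION'
```

Comparing those values against the debian-iptables version each release branch consumes is how the bullseye-v1.1.0 / bullseye-v1.3.0 / bullseye-v1.5.1 mismatch above was spotted.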

@aojea
Member

aojea commented Jan 3, 2023

I think that this is the change d36a24a

@aojea
Member

aojea commented Jan 3, 2023

dup of #2507 ?

@liggitt
Member

liggitt commented Jan 3, 2023

d36a24a didn't update the debian-iptables IMAGE_VERSION, and didn't produce a new image, did it? (was 1.5.1 before and is still 1.5.1)

@liggitt
Member

liggitt commented Jan 3, 2023

also, the go-runner image is still stale (should be 1.19.4 now)

@liggitt
Member

liggitt commented Jan 3, 2023

should 1.23 and 1.24 update from bullseye-v1.1.0 and bullseye-v1.3.0 to bullseye-v1.5.1 (or whatever 1.5.x is after we update go-runner 1.19.4)?

@aojea
Member

aojea commented Jan 3, 2023

d36a24a didn't update the debian-iptables IMAGE_VERSION, and didn't produce a new image, did it? (was 1.5.1 before and is still 1.5.1)

I see new images pushed with the same tag
https://console.cloud.google.com/gcr/images/k8s-staging-build-image/global/debian-iptables?project=k8s-staging-build-image

@aojea
Member

aojea commented Jan 3, 2023

should 1.23 and 1.24 update from bullseye-v1.1.0 and bullseye-v1.3.0 to bullseye-v1.5.1 (or whatever 1.5.x is after we update go-runner 1.19.4)?

I think so

@liggitt
Member

liggitt commented Jan 3, 2023

I see new images pushed with the same tag
https://console.cloud.google.com/gcr/images/k8s-staging-build-image/global/debian-iptables?project=k8s-staging-build-image

really? 😱 I didn't expect mutable tags there

@BenTheElder
Member

FWIW, we normally don't consume from staging directly, and what we promote to production (registry.k8s.io) is promoted by digest, so the tag mutability isn't as problematic as it sounds.
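The digest-based promotion can be checked by resolving what each tag currently points to. A sketch, assuming `crane` (from go-containerregistry) is installed; the staging repository is the one linked above, and the production path under registry.k8s.io is an assumption:

```shell
# Resolve the immutable digest behind the (mutable) staging tag and the
# promoted production tag, then compare them.
d_staging=$(crane digest gcr.io/k8s-staging-build-image/debian-iptables:bullseye-v1.5.1)
d_prod=$(crane digest registry.k8s.io/build-image/debian-iptables:bullseye-v1.5.1)

if [ "$d_staging" = "$d_prod" ]; then
  echo "tag matches promoted digest"
else
  echo "staging tag moved after promotion"   # the re-push scenario above
fi
```

This is why a re-pushed staging tag is mostly harmless: consumers of the promoted image are pinned to `$d_prod`, not to whatever the staging tag happens to point at today.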

@liggitt
Member

liggitt commented Jan 4, 2023

do we know if the 1.5.1 image that got promoted included the go-runner update or not?

@jhawkins1
Author

@liggitt , in January we updated to 1.24.10, and it is still using the old go-runner. So, it does not look like this has been corrected unless it made it into the 1.24.11 release that just came out -- I could not really tell from the CHANGELOG comments.

@liggitt
Member

liggitt commented Mar 6, 2023

1.23 (EOL):

1.24:

1.25:

1.26:

@liggitt
Member

liggitt commented Mar 6, 2023

cc @cpanato for visibility / gap in dependency tracking / propagating new go versions through image chains

@cpanato
Member

cpanato commented Mar 20, 2023

Looks like in the past we missed updating distroless-iptables, but now we are tracking that and keeping it up to date.

I will review the active branches and check those things.

@cpanato
Member

cpanato commented Mar 20, 2023

@liggitt should we build a new distroless-iptables using Go 1.19 and update that in the active patch branches?

Today we are building that only with Go 1.20.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 18, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 18, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Jan 20, 2024

8 participants