
Allow management of KubeProxy configuration in CAPI #4512

Closed · vrabbi opened this issue Apr 22, 2021 · 23 comments
Labels
area/api · area/control-plane · kind/feature · lifecycle/frozen

Comments

@vrabbi commented Apr 22, 2021

User Story

As a CAPI user, I would like to manage the kube-proxy configuration in a declarative way.

Detailed Description

This is a follow up of #1584

There are many reasons and use cases for changing kube-proxy settings. The main one I have encountered is using IPVS mode, which is a requirement in certain environments due to the performance gains it offers over iptables.

As of today, when initializing a cluster with CABPK, there is no way to supply this information through the standard mechanism of the kubeadm config CRD.
We have a hacky workaround: a file written to the KCP nodes contains a script that appends a KubeProxyConfiguration YAML to the kubeadm-init.yaml configuration file, plus a preKubeadmCommand that runs the script (a sketch follows below). While this works, it is not ideal.
Another use case for kube-proxy settings is enabling the ServiceTopology feature.
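
For reference, a minimal sketch of that workaround (the script path, the file contents, and the rendered kubeadm config location are assumptions from our setup, not a supported CAPI mechanism):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: example-control-plane
spec:
  kubeadmConfigSpec:
    files:
      # Hypothetical helper script written to each control-plane node.
      - path: /etc/kube-proxy-config.sh
        permissions: "0700"
        content: |
          #!/bin/bash
          # Append a KubeProxyConfiguration document to the kubeadm config
          # that CABPK renders on the node (path assumed from CABPK defaults).
          cat >> /run/kubeadm/kubeadm.yaml <<EOF
          ---
          apiVersion: kubeproxy.config.k8s.io/v1alpha1
          kind: KubeProxyConfiguration
          mode: ipvs
          EOF
    preKubeadmCommands:
      # Run the script before kubeadm executes, so the appended
      # configuration is picked up during init.
      - /etc/kube-proxy-config.sh
```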

Related Issue

There is a similar issue for kubelet configuration: #4464.

@enxebre (Member) commented Apr 22, 2021

/kind feature
/area api
Relates to #4444

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. area/api Issues or PRs related to the APIs labels Apr 22, 2021
@vincepri (Member)

@fabriziopandini Could we unify all of these issues into one?

@fabriziopandini (Member)

I prefer to keep them separated because the implications of changing the KubeletConfiguration and changing the KubeProxy configuration are different, and we need to nail down details and actionable items for each.
However, I agree we should have consistency at the UX level.

@vincepri (Member) commented Jul 6, 2021

/milestone Next

@k8s-ci-robot k8s-ci-robot added this to the Next milestone Jul 6, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 4, 2021
@fabriziopandini (Member)

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 4, 2021
@randomvariable (Member)

/area control-plane

@k8s-ci-robot k8s-ci-robot added the area/control-plane Issues or PRs related to control-plane lifecycle management label Nov 2, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 31, 2022
@vincepri (Member)

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 31, 2022
@kopiczko commented Feb 2, 2022

Is there a reason why Patches

> // "kube-apiserver", "kube-controller-manager", "kube-scheduler", "etcd". "patchtype" can be one

don't support kube-proxy?

@sbueringer (Member)

Posting here too (in addition to Slack):

I assume it's because the patches field patches static Pod manifests (which only exist for kube-apiserver, …), whereas kube-proxy is deployed via a DaemonSet by kubeadm.
But I think that's something that could probably be better answered by the kubeadm folks.

/cc @neolit123 @fabriziopandini
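
For context, a minimal sketch of how kubeadm's patches mechanism is used: patches are plain files in a directory, matched to static Pod manifests by component file name, which is why there is no kube-proxy target (this assumes the kubeadm v1beta3 API; the paths and patch content are illustrative):

```yaml
# kubeadm-config.yaml - point kubeadm at a patches directory.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
patches:
  directory: /etc/kubeadm/patches
---
# /etc/kubeadm/patches/kube-apiserver.yaml (a separate file in that
# directory) - a strategic merge patch applied to the kube-apiserver
# static Pod manifest; the file name selects the target component.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  annotations:
    example.com/illustrative-patch: "true"
```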

@neolit123 (Member)

^ yeah pretty much that.

> Is there a reason why Patches don't support kube-proxy?

kube-proxy in kubeadm is deployed as a single, configuration-backed DaemonSet, so having patches does not make sense, because patches are per-node. Also, kube-proxy doesn't really support instance-specific config, and who knows what happens if you configure each kube-proxy instance differently.

In kubeadm, you could at least pass an entire KubeProxyConfiguration during init (a sketch follows below).
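
For example, a minimal sketch of an init config carrying a KubeProxyConfiguration (the version numbers and IPVS mode are illustrative):

```yaml
# kubeadm-init.yaml - kubeadm reads all documents in the file and stores
# the KubeProxyConfiguration in the kube-proxy ConfigMap during init.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.0
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
```

Passed with `kubeadm init --config kubeadm-init.yaml`.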

Also... the future of kube-proxy in kubeadm is unclear. The component as it stands today is full of technical debt, and the KubeProxyConfiguration being stuck in v1alpha1 is only one of the problems. There have been discussions about a kube-proxy v2 of sorts.

@kopiczko commented Feb 2, 2022

Thanks for the answers, guys!

> In kubeadm, you could at least pass an entire KubeProxyConfiguration during init.

Yeah, but not in CAPI, and that is what this issue is about :)

@sathieu (Contributor) commented May 17, 2022

Is there any available workaround?

@kopiczko

This is what we've done: https://github.com/giantswarm/cluster-openstack/pull/41/files#diff-76a685e30df3f53a36ac87ed4b53b7f7a5c8721b75b36633ae9d697eff71090bR41

@fabriziopandini fabriziopandini added the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Jul 29, 2022
@fabriziopandini fabriziopandini removed this from the Next milestone Jul 29, 2022
@fabriziopandini fabriziopandini removed the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Jul 29, 2022
@ruifaling (Contributor)

Is there any progress on this?

@sbueringer (Member) commented Aug 17, 2022

No, nobody is working on this issue.

@fabriziopandini (Member)

/close

Given that the future of component config is not clear, I would avoid extending its usage in CAPI. Let's collect use cases where the current approach does not work before committing to a way forward.

@k8s-ci-robot (Contributor)

@fabriziopandini: Closing this issue.

In response to this:

> /close
>
> Given that the future of component config is not clear, I would avoid extending its usage in CAPI. Let's collect use cases where the current approach does not work before committing to a way forward.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@timoreimann (Contributor)

FWIW, we'd also like to disable/omit kube-proxy long-term, as we plan to use Cilium's kube-proxy-free mode, where all forwarding/routing is supposed to happen through eBPF. Not 100% sure if that's relevant here (or rather to kubeadm), but I thought I'd share it since @fabriziopandini asked for use cases to be collected.

@sbueringer (Member)

Thx for the info. We also have this issue: #3700

As far as I know, the kube-proxy deployment can be disabled via the controlplane.cluster.x-k8s.io/skip-kube-proxy annotation (a sketch follows below).
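
A minimal sketch of that annotation on a KubeadmControlPlane (the resource name is illustrative; as far as I know, the annotation's presence is what matters, not its value):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: example-control-plane
  annotations:
    # Tells KCP to skip deploying/managing the kube-proxy addon.
    controlplane.cluster.x-k8s.io/skip-kube-proxy: ""
spec:
  # ... rest of the control plane spec unchanged ...
```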

@fabriziopandini (Member)

@timoreimann thanks for sharing!
I agree with @sbueringer that #3700 is the most appropriate place to track your use case (this one was about exposing the kube-proxy component config instead).

@timoreimann (Contributor)

Thanks to both of you. The info in #3700 is super useful to us.
