Allow management of Kubelet configuration in CAPI #4464
Comments
If possible, I think it would be really nice to be able to use different KubeletConfigurations for control plane and worker nodes. Our experience is that, depending on the node size, the eviction thresholds should be configured differently. E.g. control plane nodes with 4 CPU / 16 Gi RAM require different thresholds than worker nodes with 16 CPU / 64 Gi RAM. On our internal platform we solved this by deploying KubeletConfigurations and assigning them to nodes after the … This could work similarly to how …
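To make the idea concrete, here is a minimal sketch of two KubeletConfigurations that differ only in their eviction thresholds; the threshold values are purely illustrative, not a recommendation from this thread:

```yaml
# Hypothetical KubeletConfiguration for small control plane nodes (4 CPU / 16 Gi)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "500Mi"
  nodefs.available: "10%"
---
# Hypothetical KubeletConfiguration for larger worker nodes (16 CPU / 64 Gi)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "2Gi"
  nodefs.available: "10%"
```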
I agree with @sbueringer: if we allow this, we should allow configuration at a more granular level than the entire cluster. I would argue for allowing configuration at the KCP, MachinePool, and MachineDeployment/MachineSet level, so that one can get as granular as desired. The one thing I worry about, though: doesn't kubeadm currently manage the KubeletConfiguration at the version level? Are we going to end up having to handle conversions or other automated mutations for users if they specify a KubeletConfiguration? Or are we going to get into odd situations where specifying the KubeletConfiguration interferes with upgrade handling?
I'm not sure about the impact of updates. Just some details about our current solution:
We don't execute commands like …
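For context, one common way such per-node assignment was done at the time was the Dynamic Kubelet Configuration feature discussed further down this thread; a rough sketch with illustrative names (this is only an illustration of the mechanism, not necessarily what the internal platform mentioned above uses):

```yaml
# ConfigMap holding a serialized KubeletConfiguration (name/namespace illustrative)
apiVersion: v1
kind: ConfigMap
metadata:
  name: worker-kubelet-config
  namespace: kube-system
data:
  kubelet: |
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    evictionHard:
      memory.available: "2Gi"
---
# The (since-deprecated) Dynamic Kubelet Configuration feature pointed a Node at that ConfigMap
apiVersion: v1
kind: Node
metadata:
  name: worker-0
spec:
  configSource:
    configMap:
      name: worker-kubelet-config
      namespace: kube-system
      kubeletConfigKey: kubelet
```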
I think the best option is to just allow passing a KubeletConfiguration to the … As far as the kubelet CM naming goes, we really want kubeadm to stop versioning the CM name: …
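For reference, the versioned ConfigMap in question is the one kubeadm writes into kube-system and that kubelets download at join time; roughly (contents abbreviated):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Versioned name, here for a v1.21 control plane; newer kubeadm releases
  # eventually dropped the suffix in favour of a plain "kubelet-config" name.
  name: kubelet-config-1.21
  namespace: kube-system
data:
  kubelet: |
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # ...cluster-wide kubelet settings...
```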
But to share: @fabriziopandini and I had a lengthy discussion about the state of KubeletConfiguration vs kubelet flags vs instance-specific config vs component config upgrades, and overall this space is quite messy and problematic. @fabriziopandini preferred that we continue using the pattern of passing a single config to kubeadm, with users then applying node-specific overrides via kubeletExtraArgs. Of course, this carries the risk that if the kubelet suddenly removes a number of its deprecated flags, users of CAPI will be in trouble.
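For illustration, the single-config-plus-overrides pattern described above looks roughly like this in a CABPK KubeadmConfigTemplate; the API version and flag values are illustrative and depend on the CAPI release in use:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
kind: KubeadmConfigTemplate
metadata:
  name: worker-bootstrap
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          # Node-specific overrides passed to the kubelet as command-line flags;
          # they take precedence over the shared kubelet-config ConfigMap but rely
          # on flags that may be deprecated or removed in future kubelet releases.
          kubeletExtraArgs:
            eviction-hard: "memory.available<2Gi,nodefs.available<10%"
```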
@neolit123
Possible problem:
That was also one of my concerns.
Relates to #4444
TL;DR: this issue focuses on what is supported today by kubeadm, which is already an improvement over the current status. Instance- or group-of-instances-specific KubeletConfigurations would be useful, but that change should be implemented in kubeadm first, and only afterwards in CAPI. However, as @detiber and @neolit123 were pointing out:

Given that, we should be careful about changing the status quo in kubeadm unless SIG-node/SIG-architecture clarify the roadmap in this area.
/milestone Next
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/lifecycle frozen
Apparently DynamicKubeletConfig is deprecated and the plan is to remove it in 1.23. Some more context in kubernetes/enhancements#281 and https://github.com/kubernetes/kubernetes/pull/102966/files:

fs.MarkDeprecated("dynamic-config-dir", "Feature DynamicKubeletConfig is deprecated in 1.22 and will not move to GA. It is planned to be removed from Kubernetes in the version 1.23. Please use alternative ways to update kubelet configuration.")
Yep, I think it is already possible to configure the kubelet via a KubeletConfiguration provided as a cloud-init file. Edit: that is what we will do, and we will configure the flag in the kubelet.service unit.
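A sketch of that approach, assuming CABPK's files field is used to drop the configuration onto the node via cloud-init; the path is arbitrary, and pointing the kubelet.service unit at it is a separate, environment-specific step:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
kind: KubeadmConfig
metadata:
  name: worker-kubelet-override
spec:
  files:
    # Written by cloud-init at boot; the kubelet systemd unit still has to be
    # adjusted to read this file instead of the kubeadm-managed config.
    - path: /etc/kubernetes/kubelet-custom.yaml
      owner: root:root
      permissions: "0644"
      content: |
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        evictionHard:
          memory.available: "2Gi"
```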
Adding a use case: When I bootstrap a Machine, I want to set the Node's Spec.ProviderID field. I could use the flag, but it appears deprecated:
That said, the kubeadm docs say to set node-specific fields using flags, not the configuration file. For reference, kind v0.11 uses the … (As an aside, does the out-of-tree cloud provider controller set this field? If it does, I assume it does so without changing the kubelet's configuration.)
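For what it's worth, a common pattern today is to pass the provider ID through kubeletExtraArgs at join time; the value below is an AWS-style example and both the API version and the value format are provider-specific assumptions:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
kind: KubeadmConfigTemplate
metadata:
  name: worker-provider-id
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            # Populates the Node's .spec.providerID at registration time;
            # the exact value comes from the infrastructure provider.
            provider-id: "aws:///us-east-1a/i-0123456789abcdef0"
```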
FYI: if I understood it correctly, it should be possible to customize/patch the KubeletConfiguration with kubeadm/Kubernetes v1.25:
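A rough sketch of what that looks like with kubeadm's patches mechanism (the kubeletconfiguration patch target arrived around v1.25); the file name follows kubeadm's target[+patchtype] convention, and the directory and values here are illustrative:

```yaml
# Illustrative file: /etc/kubernetes/patches/kubeletconfiguration+merge.yaml
# kubeadm merges this on top of the KubeletConfiguration it would otherwise
# generate, both at init/join time and during upgrades.
evictionHard:
  memory.available: "2Gi"
  nodefs.available: "10%"
```

In CAPI, the kubeadm InitConfiguration/JoinConfiguration patches.directory field can point at such a directory, assuming the CAPI version in use exposes it.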
/close

Given that the future of component config is not clear, I would avoid extending its usage in CAPI. Let's collect use cases where the current approach does not work before committing to a way forward.
@fabriziopandini: Closing this issue. In response to this:
User Story
As a CAPI user I would like to manage the KubeletConfiguration in a declarative way
Detailed Description
This is a follow-up of #1584.
As of today, when initialising a cluster with CABPK a default KubeletConfiguration is used and a `kubelet-config-XX` ConfigMap gets created; the same ConfigMap is then carried over during upgrades. The user can edit the `kubelet-config-XX` ConfigMap between upgrades, but this approach is not declarative and does not provide a good abstraction over this configuration detail in CAPI.

This issue is about discussing options such as making CAPI own the `kubelet-config-XX` ConfigMap (and thus reconcile it, preventing direct modification/drift from the authoritative source).

A few considerations should be addressed as well.
Anything else you would like to add:
/kind feature