Enhance nfd-worker placement #31
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: ArangoGutierrez. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Enable NFD operator to deploy nfd-worker pods on nodes labeled other than "node-role.kubernetes.io/worker". Also: fix the clean-labels make target. Signed-off-by: Carlos Eduardo Arango Gutierrez <carangog@redhat.com>
Force-pushed from e0b1ef1 to 8c7df9f
/kind feature
Hmm, I guess I never totally understood the old nodeSelector 🤔 Do we really need some affinity? By default (in upstream Kubernetes at least) a daemonset should be scheduled on every schedulable node, right(?)
The Makefile fix makes sense in any case.
This PR affects only the nfd-worker daemonset. I know we have #4, but that covers a different case. By using nodeAffinity we can better help kube-scheduler decide where to deploy nfd-worker.
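For illustration, a minimal sketch of such a nodeAffinity built with the k8s.io/api Go types (the package and helper name are hypothetical, not code from this PR): it only admits nodes that carry the "node-role.kubernetes.io/worker" label.

```go
package nfdplacement

import (
	corev1 "k8s.io/api/core/v1"
)

// workerNodeAffinity requires the worker role label to exist on a node,
// so kube-scheduler only places nfd-worker pods on worker-labeled nodes.
func workerNodeAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "node-role.kubernetes.io/worker",
						Operator: corev1.NodeSelectorOpExists,
					}},
				}},
			},
		},
	}
}
```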
Mm, I know how node affinity works. But again, why do we need node affinity for workers? Shouldn't we just let it run on all (schedulable) nodes? In some setups even the master node might be configured to run regular workloads, in which case this new affinity will prevent nfd-worker from running there.
@marquiz Several customers apply taints to their workers. Since we do not know which taints, the nfd-worker daemonset tolerates all taints, which means the pods can also end up scheduled on masters. For now we want to prevent NFD from being scheduled on masters, although I have seen requests for labelling masters as well. So for now: tolerate all taints, and set an anti-affinity on the master, so that nfd-worker is scheduled only on any worker.
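A hedged sketch of that combination, again with the k8s.io/api Go types (the helper names are made up, not the operator's actual code): tolerate every taint so nfd-worker can land on arbitrarily tainted workers, while a required nodeAffinity term keeps it off nodes carrying the master role label.

```go
package nfdplacement

import (
	corev1 "k8s.io/api/core/v1"
)

// tolerateAllTaints returns a toleration that matches any taint,
// regardless of key, value or effect.
func tolerateAllTaints() []corev1.Toleration {
	return []corev1.Toleration{{Operator: corev1.TolerationOpExists}}
}

// avoidMasterNodes requires the master role label to be absent on a node.
func avoidMasterNodes() *corev1.Affinity {
	return &corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "node-role.kubernetes.io/master",
						Operator: corev1.NodeSelectorOpDoesNotExist,
					}},
				}},
			},
		},
	}
}
```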
OK, that explains it. This patch would make sense IF there were some tolerations added, but currently there are none.
Yet another configurability option for #19?
Co-authored-by: Markus Lehtonen <markus.lehtonen@intel.com>
Signed-off-by: Carlos Eduardo Arango Gutierrez <carangog@redhat.com>
/label tide/merge-method-squash
Thanks @ArangoGutierrez! Now this makes sense 😉
/lgtm
After kubernetes-sigs#31, a three-node cluster where all nodes carry both the master and worker (or node, for vanilla clusters) labels only gets the nfd-master daemonset deployed. Since the nodes have the master label, the nfd-worker pods will not be scheduled. Discussion started in a downstream distribution of NFD: openshift/cluster-nfd-operator#109. This patch fixes that by adding support for master:worker type nodes, modifying the nodeAffinity rules on the nfd-worker daemonSet so that any node with the "node" label (independent of whether it is also a master) gets scheduled. Signed-off-by: Carlos Eduardo Arango Gutierrez <carangog@redhat.com>
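To make the described change concrete, here is a rough sketch along the lines of the commit message above (an assumption about its shape, not the actual patch): the two nodeSelectorTerms are ORed by the scheduler, so a node qualifies if it has either the worker or the node role label, and also carrying the master label no longer excludes it.

```go
package nfdplacement

import (
	corev1 "k8s.io/api/core/v1"
)

// workerOrNodeAffinity admits any node that has the worker role label or,
// on vanilla clusters, the node role label; a master label on the same
// node does not disqualify it.
func workerOrNodeAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				// NodeSelectorTerms are ORed: matching either term is enough.
				NodeSelectorTerms: []corev1.NodeSelectorTerm{
					{MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "node-role.kubernetes.io/worker",
						Operator: corev1.NodeSelectorOpExists,
					}}},
					{MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "node-role.kubernetes.io/node",
						Operator: corev1.NodeSelectorOpExists,
					}}},
				},
			},
		},
	}
}
```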
Enable NFD operator to deploy nfd-worker pods on nodes labeled other
than "node-role.kubernetes.io/worker"
Also: Fix clean-labels make target
Closes: #29
Closes: #30