This fixes some issues seen with the new node slice IPAM feature.
Disable some controllers that are not needed. We don't need an informer on NodeSlicePool, since that is an internal CR we manage ourselves. We don't need to reconcile NADs when the resource version has not changed. For nodes, only listen for node add and delete: nodes are updated frequently for Status changes and conditions we don't care about, and we only need to allocate a new slice pool when a node is created and remove its allocation when the node is deleted.
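A minimal sketch of the pared-down event handling, assuming standard client-go informers; the function and helper names (`registerHandlers`, `enqueueNode`, `enqueueNAD`) are illustrative, not the exact whereabouts controller code:

```go
package controller

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// registerHandlers wires up only the events the node-slice controller cares about.
func registerHandlers(clientset kubernetes.Interface, enqueueNode, enqueueNAD func(obj interface{})) {
	factory := informers.NewSharedInformerFactory(clientset, 0)

	// Nodes: only add and delete matter for slice allocation, so no UpdateFunc
	// is registered; frequent Status/condition updates are ignored entirely.
	factory.Core().V1().Nodes().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    enqueueNode,
		DeleteFunc: enqueueNode,
	})

	// NADs: in practice the informer comes from the net-attach-def clientset;
	// the filter itself is the point here. Skip updates where the
	// resourceVersion is unchanged (e.g. periodic resyncs).
	nadHandler := cache.ResourceEventHandlerFuncs{
		AddFunc:    enqueueNAD,
		DeleteFunc: enqueueNAD,
		UpdateFunc: func(oldObj, newObj interface{}) {
			oldMeta, okOld := oldObj.(metav1.Object)
			newMeta, okNew := newObj.(metav1.Object)
			if !okOld || !okNew || oldMeta.GetResourceVersion() == newMeta.GetResourceVersion() {
				return
			}
			enqueueNAD(newObj)
		},
	}
	_ = nadHandler // registered on the NAD informer (omitted here)
}
```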
We have multiple NADs (which map to multiple NICs) with the same CIDR and network_name, because they are really one L2 network. With the node slice pool feature enabled and a Pod requesting multiple networks, the same podRef and containerID appear multiple times in each IPPool, once per ifName (corresponding to each NAD). We also need to match by ifName so we delete the correct entry rather than the first one.
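A small, self-contained sketch of the stricter match, using simplified stand-in fields rather than the real whereabouts reservation type:

```go
package main

import "fmt"

// allocation is a simplified stand-in for an IP pool entry.
type allocation struct {
	IP          string
	PodRef      string
	ContainerID string
	IfName      string
}

// findDeallocation returns the index of the entry to release. Matching on
// podRef and containerID alone is ambiguous when one Pod attaches the same
// pool through several NADs, so ifName is part of the key as well.
func findDeallocation(allocs []allocation, podRef, containerID, ifName string) int {
	for i, a := range allocs {
		if a.PodRef == podRef && a.ContainerID == containerID && a.IfName == ifName {
			return i
		}
	}
	return -1
}

func main() {
	allocs := []allocation{
		{IP: "10.0.5.10", PodRef: "default/pod-a", ContainerID: "abc123", IfName: "net1"},
		{IP: "10.0.5.11", PodRef: "default/pod-a", ContainerID: "abc123", IfName: "net2"},
	}
	// Releasing net2 must not touch the net1 entry.
	fmt.Println(findDeallocation(allocs, "default/pod-a", "abc123", "net2")) // 1
}
```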
When the node slice size or CIDR is reconfigured, the wrong range was being passed: ipam.Range instead of ipamConf.IPRanges[0].Range. ipam.Range is cleared after its value is moved into ipamConf.IPRanges[0].Range, so the call was erroring out with an empty CIDR.
nodeSlice.Spec was not being written back when the node slice size/CIDR was reconfigured, so the new configuration was never persisted.
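A minimal sketch covering both reconfiguration fixes, using simplified stand-in types rather than the real whereabouts structs and clientset:

```go
package main

import "fmt"

// Simplified stand-ins for the IPAM config and NodeSlicePool spec.
type rangeConfig struct{ Range string }

type ipamConfig struct {
	Range         string        // cleared during config parsing
	IPRanges      []rangeConfig // normalized ranges end up here
	NodeSliceSize string
}

type nodeSliceSpec struct {
	Range     string
	SliceSize string
}

func main() {
	conf := ipamConfig{
		Range:         "", // emptied after being folded into IPRanges
		IPRanges:      []rangeConfig{{Range: "10.0.0.0/8"}},
		NodeSliceSize: "/24",
	}

	spec := nodeSliceSpec{Range: "192.168.0.0/16", SliceSize: "/26"} // stale values

	// Use IPRanges[0].Range, not conf.Range (which is empty and caused the
	// empty-CIDR error), and write the recomputed values back into the Spec
	// so the reconfiguration is actually persisted (previously it was not).
	if spec.Range != conf.IPRanges[0].Range || spec.SliceSize != conf.NodeSliceSize {
		spec.Range = conf.IPRanges[0].Range
		spec.SliceSize = conf.NodeSliceSize
	}
	fmt.Printf("updated spec: %+v\n", spec)
}
```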
The subnet mask/CIDR used was incorrect. We should use the CIDR from the NAD rather than the node's range: the NAD's range defines the cluster-wide subnet, whereas the NodeSlicePool's IPRange is only used for grouping IP allocations. Without this fix, each node ended up on a different subnet, so traffic was routed via the default route (primary CNI) instead of going over the NAD/Multus network. For example, suppose the NAD range is 10.0.0.0/8 and the node slice size is /24. If we use the range from the NodeSlicePool, Pods on each node are on a different /24 instead of all sharing the same /8.
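A small illustration of the mask fix with made-up addresses: the host address comes from the node's slice, but the prefix length must come from the NAD-wide CIDR so every Pod lands on the same L2 subnet.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	_, nadNet, _ := net.ParseCIDR("10.0.0.0/8") // cluster-wide range from the NAD
	podIP := net.ParseIP("10.0.5.12")           // allocated from this node's /24 slice

	wrong := &net.IPNet{IP: podIP, Mask: net.CIDRMask(24, 32)} // per-node subnet: 10.0.5.12/24
	right := &net.IPNet{IP: podIP, Mask: nadNet.Mask}          // shared subnet:   10.0.5.12/8

	fmt.Println(wrong.String(), right.String()) // 10.0.5.12/24 10.0.5.12/8

	// A peer Pod at 10.1.7.3 on another node is off-subnet under /24 (traffic
	// falls back to the default route / primary CNI) but on-subnet under /8
	// (traffic stays on the NAD/Multus network).
	peer := net.ParseIP("10.1.7.3")
	fmt.Println(wrong.Contains(peer), right.Contains(peer)) // false true
}
```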