Filter pods that have out of range IP #1963
Conversation
Hi @sawsa307. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @swetharepakula
pkg/neg/syncers/transaction.go
Outdated
if !isNode {
	return false
}
_, podCIDR, err := netset.ParseCIDRSloppy(node.Spec.PodCIDR)
you can omit the custom parsers here and use the std library ones, net.ParseCIDR()
Hi Antonio, I found that the comment for ParseCIDRSloppy says "We're choosing to keep it for compat with potential stored values", where the values refer to CIDRs with leading '0' characters in their numbers. Is there a specific reason we can be sure there is no need to consider CIDRs like this?
Yeah, that is a good question. We had to do it in Kubernetes to keep compatibility: imagine someone created a Service with the IP 10.001.1.1; if they upgrade, that Service will become invalid, and we don't want to break it.
Components that use the Kubernetes API have more freedom to make their own choice and say: since version X, if we find an IP like that we are going to reject it, so please update your manifests.
I prefer the ecosystem to "force" this evolution rather than keep the old parsers; it should be a problem only for kubernetes/kubernetes, not for the others.
Got it. That makes sense. Thank you!
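To illustrate the suggestion above, here is a minimal sketch using the standard library parser instead of the sloppy one. Note that since Go 1.17, net.ParseCIDR rejects IPv4 octets with leading zeros (the 10.001.1.1 case discussed above); strictParseCIDR is a hypothetical wrapper added only for illustration.

```go
package main

import (
	"fmt"
	"net"
)

// strictParseCIDR wraps the standard library parser. Unlike the sloppy
// k8s parser, net.ParseCIDR (Go >= 1.17) rejects IPv4 components that
// have leading zeros, e.g. "10.001.1.0/24".
func strictParseCIDR(s string) (*net.IPNet, error) {
	_, cidr, err := net.ParseCIDR(s)
	return cidr, err
}

func main() {
	if cidr, err := strictParseCIDR("10.0.1.0/24"); err == nil {
		fmt.Println("valid:", cidr)
	}
	if _, err := strictParseCIDR("10.001.1.0/24"); err != nil {
		fmt.Println("rejected leading zeros:", err)
	}
}
```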
pkg/neg/syncers/transaction.go
Outdated
	return false
}
_, podCIDR, err := netset.ParseCIDRSloppy(node.Spec.PodCIDR)
podIP := net.ParseIP(pod.Status.PodIP)
The CNI does not necessarily have to use node.Spec.PodCIDR, which means this relation may not always hold. Also, if the CNI is supposed to use that CIDR, this double check should not be necessary, because it can only fail if the CNI is buggy.
In addition, there is a plan to support multiple Pod CIDRs, so node.Spec.PodCIDRs will contain a slice of networks.
The check is related to the problem of preemptible nodes with the buggy 1.22 node eviction logic. We verify this because it might cause the NEG not to be synced completely; we are operating on the assumption that there might be a bug somewhere else.
Currently, since we only support single-network IPv4, we only check IPv4 addresses. As we add support for the other features, this will be updated.
@swetharepakula
Then it is better if you iterate over the node.Spec.PodCIDRs list too, and return true if there is a match and false if there is no match.
Sounds good. I'll change it. Thank you!
I commented in the other PR you sent with a sketch of a solution
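The iteration suggested above could look like the following sketch. podInNodeRanges is a hypothetical helper name; it takes the node.Spec.PodCIDRs slice as plain strings so the example stays self-contained.

```go
package main

import (
	"fmt"
	"net"
)

// podInNodeRanges reports whether podIPStr falls inside any of the CIDRs
// in podCIDRs (e.g. the node.Spec.PodCIDRs slice): true on a match,
// false when no CIDR contains the IP.
func podInNodeRanges(podCIDRs []string, podIPStr string) bool {
	podIP := net.ParseIP(podIPStr)
	if podIP == nil {
		return false
	}
	for _, c := range podCIDRs {
		_, cidr, err := net.ParseCIDR(c)
		if err != nil {
			continue // skip unparseable entries
		}
		if cidr.Contains(podIP) {
			return true
		}
	}
	return false
}

func main() {
	cidrs := []string{"10.100.0.0/24", "fd00::/64"} // dual-stack node sketch
	fmt.Println(podInNodeRanges(cidrs, "10.100.0.7")) // true
	fmt.Println(podInNodeRanges(cidrs, "10.200.0.7")) // false
	fmt.Println(podInNodeRanges(cidrs, "fd00::1"))    // true
}
```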
/ok-to-test
The pod IP validation needs to consider a few more edge cases
pkg/neg/syncers/utils.go
Outdated
@@ -527,10 +535,13 @@ func validatePod(pod *apiv1.Pod, nodeLister cache.Indexer) error {
	if err != nil || !exists {
		return negtypes.ErrEPNodeNotFound
	}
-	_, isNode := obj.(*apiv1.Node)
+	node, isNode := obj.(*apiv1.Node)
nit: by convention we use ok instead of isNode
pkg/neg/syncers/utils.go
Outdated
// If dual stack is enabled, it would return error if either ipv4 or ipv6 address does not match to the pod IPs
func podContainsEndpointAddress(networkEndpoint negtypes.NetworkEndpoint, pod *apiv1.Pod, enableDualStackNEG bool) error {
	endpointIPs := []string{networkEndpoint.IP}
	if enableDualStackNEG {
check that there is a dual stack endpoint. It is possible dual stack is enabled but this service is not a dual stack service.
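A sketch of the suggested guard: even with the dual-stack feature enabled, only include the IPv6 address when the endpoint actually carries one. The networkEndpoint struct and endpointIPsToVerify helper below are hypothetical local stand-ins for the real negtypes.NetworkEndpoint, used only to keep the example self-contained.

```go
package main

import "fmt"

// networkEndpoint mirrors the relevant fields of negtypes.NetworkEndpoint
// (a hypothetical local stand-in for illustration).
type networkEndpoint struct {
	IP   string
	IPv6 string
}

// endpointIPsToVerify only adds the IPv6 address when the endpoint has
// one: the cluster may be dual stack while this particular Service is not.
func endpointIPsToVerify(ep networkEndpoint, enableDualStackNEG bool) []string {
	ips := []string{ep.IP}
	if enableDualStackNEG && ep.IPv6 != "" {
		ips = append(ips, ep.IPv6)
	}
	return ips
}

func main() {
	// Single-stack endpoint on a dual-stack-enabled cluster: one IP only.
	fmt.Println(endpointIPsToVerify(networkEndpoint{IP: "10.0.0.5"}, true))
	// Genuinely dual-stack endpoint: both IPs are verified.
	fmt.Println(endpointIPsToVerify(networkEndpoint{IP: "10.0.0.5", IPv6: "fd00::5"}, true))
}
```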
pkg/neg/syncers/utils.go
Outdated
for _, podIP := range pod.Status.PodIPs {
	podIPs = append(podIPs, net.ParseIP(podIP.IP))
}
since the pod ips have already been verified, use the network endpoint we formed instead. This might be a dual stack cluster, but it might not be a dual stack service so both endpoints may not need to be verified.
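The comparison suggested above could be sketched as follows: check the already-formed endpoint IPs against the pod's reported IPs, parsing both sides so that textually different but equal representations (e.g. "fd00::0:5" vs "fd00::5") still match. containsAddress is a hypothetical helper name, not code from the PR.

```go
package main

import (
	"fmt"
	"net"
)

// containsAddress reports whether every endpoint IP appears among the
// pod's IPs, comparing parsed addresses rather than raw strings.
func containsAddress(endpointIPs, podIPs []string) bool {
	for _, epIP := range endpointIPs {
		parsedEP := net.ParseIP(epIP)
		found := false
		for _, pIP := range podIPs {
			if parsedEP != nil && parsedEP.Equal(net.ParseIP(pIP)) {
				found = true
				break
			}
		}
		if !found {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(containsAddress([]string{"10.0.0.5"}, []string{"10.0.0.5", "fd00::5"})) // true
	fmt.Println(containsAddress([]string{"10.0.0.9"}, []string{"10.0.0.5"}))           // false
}
```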
A few small comments; otherwise this is ready to merge.
Filter pods that have IPs outside of the corresponding nodes' IP ranges.
/retest
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: sawsa307, swetharepakula
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.