Egress policies don't work for egress traffic outside of the cluster #55
Comments
This is especially pertinent given that you can't disable the creation of the virtual v4 interfaces by the CNI on v6 clusters. On v4 clusters you have the option of not enabling the veths via ENABLE_V6_EGRESS, but there is no such alternative on v6 clusters, meaning there is no reliable way to selectively block public egress. If that were possible, this issue would be somewhat mitigated.
The network policy controller can only operate in either IPv4 or IPv6 mode. The agent relies on the resolved endpoints provided by the NP controller (via the custom CRD) to enforce policies against the Pod's primary interface, i.e., the interface created by the VPC CNI plugin. @vaskozl The same extends to https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/network-policy-faq.md
The issue here is that in this case we're not talking about a cluster that has intentionally added an additional interface with chained plugins or Multus; this is just the base VPC CNI with only the network policies enabled. That surely means anybody using this on a cluster will run into exactly the same issue, without being able to find any documentation explaining this huge caveat to the network policies. Are there any recommended workarounds for this currently, as it cannot be used in its current form?
@ConorPKeegan EKS doesn't support dual-stack clusters. Pods on IPv4 clusters will not have any secondary interfaces created by VPC CNI by default, and IPv6 link-local addresses are not routable.
I can confirm the behavior reported on this thread. The behavior on an IPv6 cluster is a bit different from an IPv4 cluster. The test uses a Network Policy that blocks all outgoing traffic.
Its behavior on an IPv6 cluster:
In the first attempt we curl using the hostname, and since connectivity to kube-dns is blocked by the policy, name resolution fails and the connection fails there. We might think our use case is achieved, but under the hood IPv4 is still allowed. To prove this, we bypass DNS name resolution and hit the IPv4 address directly while the policy is still applied; in that second attempt curl is able to establish a connection with the destination.
Note: 93.184.216.34 is the IPv4 address for example.com. If we put an IPv6 address in the curl request, it is still blocked. A sketch of these checks is shown below.
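The original screenshots aren't reproduced in this thread; the following is a rough sketch of the checks described above, assuming a test pod named `test-pod` with curl available (the pod name and the IPv6 literal are illustrative placeholders):

```sh
# 1) curl by name: fails, because egress to kube-dns (DNS) is blocked by the policy,
#    so the hostname never resolves
kubectl exec test-pod -- curl -v --connect-timeout 5 http://example.com

# 2) curl the IPv4 address of example.com directly: the connection is established,
#    because IPv4 leaves through the secondary egress-only interface that the
#    policy is not enforced on
kubectl exec test-pod -- curl -v --connect-timeout 5 http://93.184.216.34

# 3) curl an IPv6 literal directly: still blocked by the policy
#    (the address below is a placeholder; substitute a real IPv6 destination)
kubectl exec test-pod -- curl -gv --connect-timeout 5 'http://[2001:db8::1]/'
```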
This proves that IPv4 traffic isn't blocked on an IPv6 cluster even with the simplest of network policies that is supposed to block everything.
Its behavior on an IPv4 cluster:
Here the IPv4 traffic is blocked on the IPv4 cluster with the same network policy, even if you hit the IP address directly.
@htyagi-aws As discussed offline, this is expected behavior on EKS IPv6 clusters. The NP solution right now is single stack only, i.e., either IPv4 or IPv6 (EKS doesn't have dual-stack support). VPC CNI chains a lightweight CNI plugin that creates a secondary interface for IPv6 pods to facilitate egress IPv4 communication; our motivation behind this is to help with migration if one of your dependencies is not yet IPv6 ready. The IP addresses assigned to these v4-only interfaces are from the 169.254 range, are not routable beyond the node, and are not advertised back to the kubelet. These interfaces exist purely for egress-only IPv4 support, i.e., no one can reach these pods via an IPv4 address, so they remain purely IPv6 pods.
As called out in the FAQ, NP is only enforced on interfaces created by VPC CNI, i.e., the primary interface of the pod, and is not enforced on any interfaces created by chained plugins. We will provide an option to disable the secondary egress v4 plugin to avoid any IPv4 fallback concerns. Will keep this thread updated with respect to that.
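For anyone wanting to see this on their own cluster, the following is a rough sketch of how to inspect a pod's interfaces, assuming a pod (again called `test-pod` for illustration) whose image provides the `ip` utility:

```sh
# List the pod's interfaces. The primary interface (the one network policies are
# enforced on) carries the pod's global IPv6 address; the secondary egress-only
# interface carries a link-local 169.254.x.x IPv4 address.
kubectl exec test-pod -- ip addr

# The IPv4 default route points out of the egress-only interface, which is why
# IPv4 traffic bypasses the policy enforced on the primary interface.
kubectl exec test-pod -- ip route
```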
@achevuru Thanks for confirming you'll work to provide an option to disable the egress v4 plugin. This should be sufficient for our use case, as we can just use DNS64/NAT64 to provide connectivity to the v4 internet. I think it would be worth explicitly calling out this caveat in the FAQ; the most relevant point in there currently seems to be the note that policies are only enforced on interfaces created by VPC CNI and not on interfaces created by chained plugins.
As someone not familiar with the internals of VPC CNI, it's not obvious to me that using VPC CNI in its default configuration will result in a chained plugin and a second interface that the network policies are not applied to. I think it would be worth adding extra entries to the FAQ along the lines of:
Q) Are Network Policies applied to egress IPv6 traffic in an IPv4 cluster?
Q) Are Network Policies applied to egress IPv4 traffic in an IPv6 cluster?
Having the ability to turn off the v4 egress plugin in a v6 cluster is quite important to our being able to move our project into production (currently aiming for the 16th of October). Is this likely to be completed by then, or is there a date (or estimated date) for when it is likely to land?
@ConorPKeegan we are still determining internally when we can provide a new VPC CNI release that allows disabling the v4 egress plugin.
@ConorPKeegan a new environment variable that allows disabling the v4 egress plugin is planned for the upcoming VPC CNI v1.15.1 release.
Updating this issue because of the dates discussed earlier. The VPC CNI v1.15.1 release has been slightly delayed in order to get more PRs in, but we are still targeting release within the next two weeks. Tagging @ConorPKeegan since you had mentioned a dependency on 10/16.
Closing now that VPC CNI v1.15.1 is released on GitHub: https://github.com/aws/amazon-vpc-cni-k8s/releases/tag/v1.15.1. This release contains the Network Policy agent v1.0.4 image tag.
Thank you. Is the v1.0.4 tag planned to be pushed to this repo soon?
@ConorPKeegan yep, that should be completed today.
For IPv6 clusters, IPv4 egress traffic cannot be filtered with policies at all (and likely vice versa for IPv4 clusters with IPv6 link-local addresses).
Example (on a clean IPv6 EKS cluster):
Using this policy to deny all egress traffic (except DNS):
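The policy manifest from the original report isn't reproduced in this thread; a minimal sketch of a policy along those lines (deny all egress except DNS) might look like the following, with the policy name chosen here for illustration:

```sh
# Hypothetical deny-all-egress-except-DNS policy: applies to every pod in the
# target namespace and only allows port 53 egress to in-cluster DNS.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress-except-dns
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
EOF
```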
Using this pod to test with:
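The original test pod spec is likewise not shown; any long-running pod with curl available works. A sketch, with the pod name and image chosen for illustration:

```sh
# A simple long-running pod with curl available, used to issue the test requests below.
kubectl run test-pod --image=curlimages/curl --restart=Never --command -- sleep 3600
```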
All IPv6 traffic is blocked from accessing external resources, but IPv4 is not:
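The original terminal output isn't reproduced here; the check would look something like the following, using the test pod above (curl's -4/-6 flags force the IP family, and DNS works because the policy allows port 53):

```sh
# IPv6 egress is dropped by the policy above: this times out
kubectl exec test-pod -- curl -6 -v --connect-timeout 5 http://example.com

# IPv4 egress leaves via the egress-only IPv4 interface, which the policy is not
# enforced on, so this succeeds
kubectl exec test-pod -- curl -4 -v --connect-timeout 5 http://example.com
```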