
Add DISABLE_CONTAINER_V6 to disable IPv6 networking in container network namespaces #2499

Merged: 1 commit into aws:master on Aug 18, 2023

Conversation

@jdn5126 (Contributor) commented on Aug 17, 2023

What type of PR is this?
feature

Which issue does this PR fix:
#2483

What does this PR do / Why do we need it:
This PR introduces a new environment variable: DISABLE_CONTAINER_V6. When this environment variable is set, the tuning plugin will be chained in the AWS conflist (/etc/cni/net.d/10-aws.conflist) and configured to disable IPv6 networking in newly created container network namespaces. This is done to provide a solution to #2483.
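For illustration only, here is a rough sketch of what the chained tuning plugin entry could look like in the rendered conflist. The exact plugin fields and sysctl keys shown below are assumptions for this sketch, not the literal output produced by this PR:

```
# Hypothetical inspection of the rendered conflist on a node (assumes jq is installed).
# The "tuning" entry shown here, including its sysctl keys, is an illustrative guess.
$ jq '.plugins[] | select(.type == "tuning")' /etc/cni/net.d/10-aws.conflist
{
  "type": "tuning",
  "sysctl": {
    "net.ipv6.conf.all.disable_ipv6": "1",
    "net.ipv6.conf.default.disable_ipv6": "1",
    "net.ipv6.conf.lo.disable_ipv6": "1"
  }
}
```

When the tuning plugin runs in the chain, it applies the configured sysctls inside the container network namespace rather than on the host, which is why a host-level sysctl alone is not sufficient.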

If an issue # is not available please add repro steps and logs from IPAMD/CNI showing the issue:

  1. Create an IPv4 cluster with containerd as the container-runtime (EKS 1.24+)
  2. Create a pod
  3. See that IPv6 is enabled on the loopback interface within the container network namespace regardless of host sysctls (a verification sketch follows this list).
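A quick way to confirm step 3 (a hedged sketch: `test-pod` is a placeholder pod name, and the container image must provide `cat` and, for the second command, `ip` from iproute2):

```
# Hypothetical check; 0 means IPv6 is still enabled on lo inside the container netns.
$ kubectl exec test-pod -- cat /proc/sys/net/ipv6/conf/lo/disable_ipv6
0
# The loopback interface carries ::1 even though the cluster is IPv4-only.
$ kubectl exec test-pod -- ip -6 addr show dev lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN
    inet6 ::1/128 scope host
```

With DISABLE_CONTAINER_V6 set, the first command would be expected to return 1 and the second to print no inet6 address.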

Testing done on this change:
Manually verified that conflist is rendered correctly and IPv6 is disabled in newly created container network namespaces. Verified that all integration tests pass following this change.

Automation added to e2e:
N/A

Will this PR introduce any new dependencies?:
No

Will this break upgrades or downgrades? Has updating a running cluster been tested?:
No, this will not break upgrades or downgrades. Yes, updating a running cluster has been tested.

Does this change require updates to the CNI daemonset config files to work?:
No

Does this PR introduce any user-facing change?:
Yes

Introduce DISABLE_CONTAINER_V6 to disable IPv6 networking in container namespaces.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@jdn5126 requested a review from a team as a code owner on August 17, 2023 at 21:47.
@jdn5126 force-pushed the disable_container_v6 branch from 7b766df to b02e85a on August 18, 2023 at 16:46.
@jdn5126 force-pushed the disable_container_v6 branch from b02e85a to a88bd9c on August 18, 2023 at 17:00.
@jdn5126 force-pushed the disable_container_v6 branch from a88bd9c to f92eb10 on August 18, 2023 at 17:14.
@jdn5126 merged commit 7ab227e into aws:master on Aug 18, 2023.

Review thread on README.md:

> Type: Boolean as a String
>
> Default: `false`

Contributor: Shouldn't this be default true for v6 clusters?

@jdn5126 (Contributor, author): I don't think so, as we don't want to change the default behavior. There could be some customers that rely on IPv6 being enabled.

Contributor: Never mind, it is not needed; we don't want the plugin.

Contributor: But let's add a line saying this should not be set for v6 clusters.

@jayanthvn (Contributor) left a review comment: lgtm

@jdn5126 deleted the disable_container_v6 branch on September 13, 2023 at 21:36.