
[Release 1.22] NodeIP autodetection in case of dualstack node #5958

Closed
rbrtbnfgl opened this issue Aug 4, 2022 · 5 comments

@rbrtbnfgl
Contributor

Backport for #5918 and #5491

@est-suse
Contributor

Hi @rbrtbnfgl,

When I try to replicate this issue on 1.22, it does not happen; please refer to the attached image:
image

Using commit 2b13d70

This issue is happening:

image

Could you please take a look and let me know?

Thanks

@rbrtbnfgl
Contributor Author

The check for the mismatch between node-ip and cluster-cidr was implemented starting from 1.23 and was not backported to 1.22 (I backported it in this PR), so it is expected that the error does not occur.
The PR also fixes the missing IPv6 autodetection for 1.22: on a dualstack node, when the node IP is not specified, only the IPv4 address is configured as the node's internalIP; with this PR both IPv4 and IPv6 are configured.
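To illustrate the family-mismatch check described above, here is a minimal sketch using Python's ipaddress module. The function name and structure are hypothetical, not k3s code; it only shows the idea that every address family present in cluster-cidr should be covered by a node IP.

```python
import ipaddress

def check_node_ip_families(node_ips, cluster_cidrs):
    """Return the set of IP versions present in cluster_cidrs
    that have no matching address among node_ips."""
    node_families = {ipaddress.ip_address(ip).version for ip in node_ips}
    cidr_families = {ipaddress.ip_network(c).version for c in cluster_cidrs}
    return cidr_families - node_families

# Dual-stack cluster-cidr but only an IPv4 node IP: IPv6 is missing.
missing = check_node_ip_families(
    ["192.168.23.0"],
    ["10.42.0.0/16", "2001:cafe:43::/56"],
)
print(missing)  # {6}
```

With both an IPv4 and an IPv6 node IP, the function returns an empty set, which corresponds to the case where the mismatch check passes.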

@est-suse
Contributor

est-suse commented Aug 11, 2022

Hi @rbrtbnfgl, the VM was created and configured for both IPv4 and IPv6
image

This is the Config.yaml:

write-kubeconfig-mode: 644
token: summer
cluster-cidr: 10.42.0.0/16,2001:cafe:43:0::/56
service-cidr: 10.43.0.0/16,2001:cafe:44:1::/112
flannel-iface: "ens5"
disable-network-policy: true
flannel-ipv6-masq: true
cluster-init: true
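As a sanity check for configs like the one above, a dual-stack cluster-cidr or service-cidr value should contain exactly one IPv4 and one IPv6 range. A rough validator (a hypothetical helper for illustration, not part of k3s):

```python
import ipaddress

def validate_dualstack_cidrs(value):
    """Check that a comma-separated CIDR list holds exactly one
    IPv4 and one IPv6 network, and return the parsed networks."""
    nets = [ipaddress.ip_network(c.strip()) for c in value.split(",")]
    versions = sorted(n.version for n in nets)
    if versions != [4, 6]:
        raise ValueError(f"expected one IPv4 and one IPv6 CIDR, got {value!r}")
    return nets

validate_dualstack_cidrs("10.42.0.0/16,2001:cafe:43:0::/56")   # cluster-cidr: OK
validate_dualstack_cidrs("10.43.0.0/16,2001:cafe:44:1::/112")  # service-cidr: OK
```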

@rbrtbnfgl
Contributor Author

When you run

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.addresses[?(@.type == "InternalIP")].address}{"\n"}{end}'

Before the PR you should get

Node-name IPv4Addr

After the merge you should get

Node-name IPv4Addr IPv6Addr
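A quick way to check that output programmatically: each line of the jsonpath result is the node name followed by its InternalIP addresses, so a node is dual-stack exactly when both an IPv4 and an IPv6 address appear. A small sketch (the helper name is made up for illustration):

```python
import ipaddress

def is_dualstack(internal_ip_line):
    """Given one output line ('<node-name> <ip> [<ip> ...]'), report
    whether the node has both an IPv4 and an IPv6 InternalIP."""
    _, *addrs = internal_ip_line.split()
    versions = {ipaddress.ip_address(a).version for a in addrs}
    return versions == {4, 6}

print(is_dualstack("node-1 192.168.23.0"))              # False (before the PR)
print(is_dualstack("node-1 192.168.23.0 2001:db8::1"))  # True (after the PR)
```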

@est-suse
Contributor

est-suse commented Aug 11, 2022

Validated on 1.22 branch with commit 2b13d70
Environment Details
Infrastructure

Cloud
Hosted
Node(s) CPU architecture, OS, and Version:

Static hostname: ip-192-168-23-0
Icon name: computer-vm
Chassis: vm
Machine ID: ec2b159251fd2faee9b6b668fae4d9b9
Boot ID: 6f780fd9979c470f887974b17f1f1096
Virtualization: kvm
Operating System: Ubuntu 20.04.4 LTS
Kernel: Linux 5.13.0-1029-aws
Architecture: x86-64

Single node
Config.yaml:

write-kubeconfig-mode: 644
token: summer
cluster-cidr: 10.42.0.0/16,2001:cafe:43:0::/56
service-cidr: 10.43.0.0/16,2001:cafe:44:1::/112
flannel-iface: "ens5"
disable-network-policy: true
flannel-ipv6-masq: true
cluster-init: true
Testing Steps
Copy config.yaml
$ sudo mkdir -p /etc/rancher/k3s && sudo cp config.yaml /etc/rancher/k3s
Install k3s
For replication: curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.23.9+k3s1 sh -
For validation: curl -sfL https://get.k3s.io | INSTALL_K3S_COMMIT=d04af60aad053ec94e60c986bf4c70cdbfc9e11c sh -
Ensure k3s cluster is up
Replication Results:

k3s version used for replication:
Per Roberto B's comments, the mismatch check was not backported to 1.22, so the error does not reproduce there.
Validation Results:

k3s version used for validation:
k3s version v1.22.12+k3s-2b13d70a
go version go1.17.5

The cluster comes up successfully and no error is observed
$ journalctl -xeu k3s.service | grep 'cluster-cidr:'

$ kubectl get nodes,pods -A -o wide

image

Additional context / logs:

results of the command kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.addresses[?(@.type == "InternalIP")].address}{"\n"}{end}'
ip-192-168-23-0 192.168.23.0 2600:1f1c:ab4:ee48:4243:210f:54e7:e368

Validated the deployment and ran through dual-stack testing
