
metal/vsphere-cluster: set env var for new K8s creation support #866

Merged
merged 2 commits into bottlerocket-os:develop from eksa-new-k8s-sup-env-var on Sep 12, 2023

Conversation


@etungsten etungsten commented Sep 11, 2023

Issue number:
N/A

Description of changes:
Sets the K8S_{}_SUPPORT env var to enable support for creating clusters on a potential new K8s version in the eksctl-anywhere CLI.

See https://github.com/tatlat/eks-anywhere/blob/abb848382bb52ea53cc0aa7c54f7e7a262b411bf/pkg/features/features.go#L11
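For context, here is a minimal sketch of how a resource agent might export that feature gate before shelling out to the CLI. The `create_cluster` helper, the `.`-to-`_` version formatting, and the argument list are illustrative assumptions, not the exact code in this PR:

```rust
use std::io;
use std::process::{Command, ExitStatus};

/// Illustrative sketch: build the feature-gate variable (e.g. `K8S_1_28_SUPPORT`)
/// from the target Kubernetes version and set it in the environment of the
/// `eksctl anywhere` invocation so that creating a cluster on a not-yet-default
/// version is allowed. The version-to-variable formatting here is an assumption.
fn create_cluster(k8s_version: &str, config_path: &str) -> io::Result<ExitStatus> {
    let support_var = format!("K8S_{}_SUPPORT", k8s_version.replace('.', "_"));
    Command::new("eksctl")
        .args(["anywhere", "create", "cluster", "-f", config_path])
        .env(support_var, "true")
        .status()
}
```

Per the linked features.go, eks-anywhere reads this variable as a feature gate, so setting it to "true" in the CLI's environment is what unlocks creation on the new version.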

Testing done:
The vSphere cluster resource agent creates clusters using a pre-existing management cluster:

$ kubectl --kubeconfig testsys.kubeconfig logs -f x86-64-vmware-k9f558bb3-f42e-4656-b348-cb7b92ccd921-creati5gvwh -n testsys
[2023-09-12T16:24:29Z INFO  resource_agent::agent] Initializing Agent
[2023-09-12T16:24:29Z INFO  vsphere_k8s_cluster_resource_agent::vsphere_k8s_cluster_provider] Getting vSphere secret
[2023-09-12T16:24:29Z INFO  vsphere_k8s_cluster_resource_agent::vsphere_k8s_cluster_provider] Creating working directory
[2023-09-12T16:24:29Z INFO  vsphere_k8s_cluster_resource_agent::vsphere_k8s_cluster_provider] Checking existing cluster
[2023-09-12T16:24:30Z INFO  vsphere_k8s_cluster_resource_agent::vsphere_k8s_cluster_provider] Creation policy is 'IfNotExists' and cluster 'x86-64-vmware-k8s-128' does not exist: creating cluster
[2023-09-12T16:24:30Z INFO  vsphere_k8s_cluster_resource_agent::vsphere_k8s_cluster_provider] Creating cluster
[2023-09-12T16:24:30Z INFO  vsphere_k8s_cluster_resource_agent::vsphere_k8s_cluster_provider] Downloading OVA 'bottlerocket-vmware-k8s-1.28-x86_64-v1.15.0.ova'
[2023-09-12T16:24:33Z INFO  vsphere_k8s_cluster_resource_agent::vsphere_k8s_cluster_provider] Importing OVA and creating a VM template out of it
[2023-09-12T16:24:46Z INFO  vsphere_k8s_cluster_resource_agent::vsphere_k8s_cluster_provider] Tagging VM template
2023-09-12T16:25:43.303Z        V4      Reading bundles manifest        {"url": "https://dev-release-assets.eks-anywhere.model-rocket.aws.dev/bundle-release.yaml"}
2023-09-12T16:25:43.486Z        V4      Using CAPI provider versions    {"Core Cluster API": "v1.5.1+c53e1b9", "Kubeadm Bootstrap": "v1.5.1+36b7efe", "Kubeadm Control Plane": "v1.5.1+731ab4f", "External etcd Bootstrap": "v1.0.9+76b9ed0", "External etcd Controller": "v1.0.14+e504faf", "Cluster API Provider VSphere": "v1.7.0+78171dd"}
2023-09-12T16:25:43.486Z        V4      Relative network path specified, using path /SDDC-Datacenter/network/sddc-cgw-network-2
2023-09-12T16:25:43.486Z        V1      SSHUsername is not set or is empty for VSphereMachineConfig, using default      {"c": "x86-64-vmware-k8s-128-node", "user": "ec2-user"}
2023-09-12T16:25:43.543Z        V2      Pulling docker image    {"image": "public.ecr.aws/l0g8r8j6/eks-anywhere-cli-tools:v0.17.1-eks-a-v0.0.0-dev-build.7550"}
2023-09-12T16:25:57.098Z        V3      Initializing long running container     {"name": "eksa_1694535943543436745", "image": "public.ecr.aws/l0g8r8j6/eks-anywhere-cli-tools:v0.17.1-eks-a-v0.0.0-dev-build.7550"}
2023-09-12T16:26:01.370Z        V4      Task start      {"task_name": "setup-validate"}
2023-09-12T16:26:01.371Z        V0      Performing setup and validations
2023-09-12T16:26:01.388Z        V0      ✅ Connected to server
2023-09-12T16:26:01.824Z        V0      ✅ Authenticated to vSphere
2023-09-12T16:26:02.570Z        V0      ✅ Datacenter validated
2023-09-12T16:26:02.958Z        V0      ✅ Network validated
2023-09-12T16:26:04.093Z        V3      CloneMode not set, defaulting to fullClone      {"VSphereMachineConfig": "x86-64-vmware-k8s-128-node"}
2023-09-12T16:26:04.093Z        V4      Relative datastore path specified, using path /SDDC-Datacenter/datastore/WorkloadDatastore
2023-09-12T16:26:04.405Z        V0      ✅ Datastore validated
2023-09-12T16:26:04.733Z        V0      ✅ Folder validated
2023-09-12T16:26:05.056Z        V0      ✅ Resource pool validated
2023-09-12T16:26:07.506Z        V0      ✅ Machine config tags validated
2023-09-12T16:26:07.506Z        V0      ✅ Control plane and Workload templates validated
2023-09-12T16:26:08.841Z        V0      Provided sshAuthorizedKey is not set or is empty, auto-generating new key pair...       {"vSphereMachineConfig": "x86-64-vmware-k8s-128-node"}
2023-09-12T16:26:09.710Z        V0      Private key saved to x86-64-vmware-k8s-128/eks-a-id_rsa. Use 'ssh -i x86-64-vmware-k8s-128/eks-a-id_rsa <username>@<Node-IP-Address>' to login to your cluster node
2023-09-12T16:26:12.358Z        V0      ✅ cloudadmin@vmc.local user vSphere privileges validated
2023-09-12T16:26:12.358Z        V0      ✅ Vsphere Provider setup is valid
2023-09-12T16:26:12.358Z        V0      ✅ Validate OS is compatible with registry mirror configuration
2023-09-12T16:26:12.358Z        V0      ✅ Validate certificate for registry mirror
2023-09-12T16:26:12.358Z        V0      ✅ Validate authentication for git provider
2023-09-12T16:26:12.358Z        V0      ✅ Validate cluster's eksaVersion matches EKS-A version
2023-09-12T16:26:13.193Z        V0      ✅ Validate cluster name
2023-09-12T16:26:13.193Z        V0      ✅ Validate gitops
2023-09-12T16:26:13.193Z        V0      ✅ Validate identity providers' name
2023-09-12T16:26:13.990Z        V0      ✅ Validate management cluster has eksa crds
2023-09-12T16:26:14.861Z        V0      ✅ Validate management cluster name is valid
2023-09-12T16:26:15.613Z        V0      ✅ Validate management cluster eksaVersion compatibility
2023-09-12T16:26:15.613Z        V4      Task finished   {"task_name": "setup-validate", "duration": "14.242644906s"}
2023-09-12T16:26:15.613Z        V4      ----------------------------------
2023-09-12T16:26:15.613Z        V4      Task start      {"task_name": "bootstrap-cluster-init"}
2023-09-12T16:26:15.613Z        V4      Task finished   {"task_name": "bootstrap-cluster-init", "duration": "1.944µs"}
2023-09-12T16:26:15.613Z        V4      ----------------------------------
2023-09-12T16:26:15.613Z        V4      Task start      {"task_name": "workload-cluster-init"}
2023-09-12T16:26:15.613Z        V0      Creating new workload cluster
2023-09-12T16:26:16.747Z        V3      Waiting for external etcd to be ready   {"cluster": "x86-64-vmware-k8s-128"}
2023-09-12T16:28:19.619Z        V3      External etcd is ready
2023-09-12T16:28:19.619Z        V3      Waiting for control plane to be available
2023-09-12T16:29:51.939Z        V3      Waiting for workload kubeconfig generation      {"cluster": "x86-64-vmware-k8s-128"}
2023-09-12T16:29:52.519Z        V0      Installing networking on workload cluster
2023-09-12T16:29:54.988Z        V4      Installing machine health checks on bootstrap cluster
2023-09-12T16:29:55.640Z        V3      Waiting for controlplane and worker machines to be ready
2023-09-12T16:29:56.799Z        V4      Nodes are not ready yet {"total": 3, "ready": 0, "cluster name": "x86-64-vmware-k8s-128"}
2023-09-12T16:30:10.889Z        V4      Nodes are not ready yet {"total": 3, "ready": 0, "cluster name": "x86-64-vmware-k8s-128"}
...
2023-09-12T16:31:09.938Z        V4      Nodes are not ready yet {"total": 3, "ready": 2, "cluster name": "x86-64-vmware-k8s-128"}
2023-09-12T16:31:18.553Z        V4      Nodes are not ready yet {"total": 3, "ready": 2, "cluster name": "x86-64-vmware-k8s-128"}
2023-09-12T16:31:19.995Z        V4      Nodes are not ready yet {"total": 3, "ready": 2, "cluster name": "x86-64-vmware-k8s-128"}
2023-09-12T16:31:21.397Z        V4      Nodes ready     {"total": 3}
2023-09-12T16:31:21.397Z        V4      Task finished   {"task_name": "workload-cluster-init", "duration": "5m5.783572863s"}
2023-09-12T16:31:21.397Z        V4      ----------------------------------
2023-09-12T16:31:21.397Z        V4      Task start      {"task_name": "install-resources-on-management-cluster"}
2023-09-12T16:31:21.397Z        V4      Task finished   {"task_name": "install-resources-on-management-cluster", "duration": "1.405µs"}
2023-09-12T16:31:21.397Z        V4      ----------------------------------
2023-09-12T16:31:21.397Z        V4      Task start      {"task_name": "capi-management-move"}
2023-09-12T16:31:21.397Z        V4      Task finished   {"task_name": "capi-management-move", "duration": "1.294µs"}
2023-09-12T16:31:21.397Z        V4      ----------------------------------
2023-09-12T16:31:21.397Z        V4      Task start      {"task_name": "eksa-components-install"}
2023-09-12T16:31:21.397Z        V0      Creating EKS-A CRDs instances on workload cluster
2023-09-12T16:31:21.400Z        V4      Applying eksa yaml resources to cluster
2023-09-12T16:31:22.044Z        V1      Applying Bundles to cluster
2023-09-12T16:31:22.705Z        V1      Applying EKSARelease to cluster
2023-09-12T16:31:23.291Z        V4      Applying eksd manifest to cluster
2023-09-12T16:31:23.859Z        V4      Applying eksd manifest to cluster
2023-09-12T16:31:24.389Z        V4      Applying eksd manifest to cluster
2023-09-12T16:31:24.915Z        V4      Applying eksd manifest to cluster
2023-09-12T16:31:25.552Z        V4      Applying eksd manifest to cluster
2023-09-12T16:31:26.083Z        V4      Applying eksd manifest to cluster
2023-09-12T16:31:28.261Z        V4      Task finished   {"task_name": "eksa-components-install", "duration": "6.863512061s"}
2023-09-12T16:31:28.261Z        V4      ----------------------------------
2023-09-12T16:31:28.261Z        V4      Task start      {"task_name": "gitops-manager-install"}
2023-09-12T16:31:28.261Z        V0      Installing GitOps Toolkit on workload cluster
2023-09-12T16:31:28.261Z        V0      GitOps field not specified, bootstrap flux skipped
2023-09-12T16:31:28.261Z        V4      Task finished   {"task_name": "gitops-manager-install", "duration": "25.727µs"}
2023-09-12T16:31:28.261Z        V4      ----------------------------------
2023-09-12T16:31:28.261Z        V4      Task start      {"task_name": "write-cluster-config"}
2023-09-12T16:31:28.261Z        V0      Writing cluster config file
2023-09-12T16:31:28.263Z        V4      Task finished   {"task_name": "write-cluster-config", "duration": "2.310092ms"}
2023-09-12T16:31:28.263Z        V4      ----------------------------------
2023-09-12T16:31:28.263Z        V4      Task start      {"task_name": "delete-kind-cluster"}
2023-09-12T16:31:28.263Z        V0      🎉 Cluster created!
2023-09-12T16:31:28.263Z        V4      Task finished   {"task_name": "delete-kind-cluster", "duration": "14.069µs"}
2023-09-12T16:31:28.263Z        V4      ----------------------------------
2023-09-12T16:31:28.263Z        V4      Task start      {"task_name": "install-curated-packages"}
--------------------------------------------------------------------------------------
The Amazon EKS Anywhere Curated Packages are only available to customers with the
Amazon EKS Anywhere Enterprise Subscription
--------------------------------------------------------------------------------------
2023-09-12T16:31:28.263Z        V0      Enabling curated packages on the cluster
2023-09-12T16:31:28.263Z        V0      Installing helm chart on cluster        {"chart": "eks-anywhere-packages-x86-64-vmware-k8s-128", "version": "0.0.0-1a2dfffbefe7cb9b5196a7a821a950155203fbdb"}
2023-09-12T16:31:29.068Z        V0      ⚠️  Failed to install the optional EKS-A Curated Package Controller. Please try installation again through eksctl after the cluster creation succeeds    {"warning": "WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /local/eksa-work/mgmt.kubeconfig\nWARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /local/eksa-work/mgmt.kubeconfig\nPulled: public.ecr.aws/l0g8r8j6/eks-anywhere-packages:0.0.0-1a2dfffbefe7cb9b5196a7a821a950155203fbdb\nDigest: sha256:9757f69ef2ad77ce857f3bfa34f330aa58fb89cb6f49cf94a1a53c04c4370e1a\nError: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress\n"}
2023-09-12T16:31:29.068Z        V4      Task finished   {"task_name": "install-curated-packages", "duration": "804.972095ms"}
2023-09-12T16:31:29.068Z        V4      ----------------------------------
2023-09-12T16:31:29.068Z        V4      Tasks completed {"duration": "5m27.697723782s"}
2023-09-12T16:31:29.069Z        V3      Logging out from current govc session
2023-09-12T16:31:29.703Z        V3      Cleaning up long running container      {"name": "eksa_1694535943543436745"}
[2023-09-12T16:31:30Z INFO  vsphere_k8s_cluster_resource_agent::vsphere_k8s_cluster_provider] Scaling default NodeGroup machinedeployments replicas to 0
machinedeployment.cluster.x-k8s.io/x86-64-vmware-k8s-128-md-0 scaled
[2023-09-12T16:31:30Z INFO  vsphere_k8s_cluster_resource_agent::vsphere_k8s_cluster_provider] Cluster created
[2023-09-12T16:31:30Z INFO  resource_agent::agent] Resource action succeeded.
[2023-09-12T16:31:30Z INFO  resource_agent::agent] 'keep_running' is true.

Terms of contribution:

By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

Sets the 'K8S_{}_SUPPORT' env var for enabling potential new K8s version
cluster creation support in the eksctl-anywhere CLI.
@etungsten etungsten force-pushed the eksa-new-k8s-sup-env-var branch from e2f0823 to e2a0c1c on September 11, 2023 17:46
The vSphere workload cluster config was missing the managementCluster
field to indicate the management cluster to deploy the cluster in.
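For reference, a hedged sketch of where that field sits when the agent renders the workload cluster spec. The field names follow the public EKS Anywhere Cluster API, but the helper name and surrounding template are illustrative, not the template used by the agent:

```rust
/// Illustrative sketch only: render the portion of the EKS-A workload cluster
/// spec that names the pre-existing management cluster. Without the
/// `managementCluster` block, the config does not indicate which management
/// cluster the workload cluster should be deployed into (the gap this commit fixes).
fn render_cluster_spec(cluster_name: &str, mgmt_cluster_name: &str, k8s_version: &str) -> String {
    format!(
        r#"apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: {cluster_name}
spec:
  kubernetesVersion: "{k8s_version}"
  managementCluster:
    name: {mgmt_cluster_name}
"#
    )
}
```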
@etungsten etungsten force-pushed the eksa-new-k8s-sup-env-var branch from b96541f to 0fdeb93 on September 12, 2023 06:01
@etungsten etungsten marked this pull request as ready for review September 12, 2023 16:34
@etungsten etungsten merged commit 3f7efe2 into bottlerocket-os:develop Sep 12, 2023
@etungsten etungsten deleted the eksa-new-k8s-sup-env-var branch September 12, 2023 16:35