📖 Fill the book content with the current README.md #449

Merged · 4 commits · Mar 6, 2024

Changes from 1 commit:
Add book content
Signed-off-by: Danil Grigorev <danil.grigorev@suse.com>
Danil-Grigorev committed Mar 6, 2024
commit dd1c5f1d402f22c6ad8fc7b4a9bfa1fe776b7533
34 changes: 34 additions & 0 deletions docs/book/src/installation/helm-chart-installation.md
@@ -1 +1,35 @@
# Using Helm Charts

Alternatively, you can install the Cluster API operator using Helm charts:

```bash
helm repo add capi-operator https://kubernetes-sigs.github.io/cluster-api-operator
helm repo update
helm install capi-operator capi-operator/cluster-api-operator --create-namespace -n capi-operator-system
```

#### Installing cert-manager using Helm chart

The CAPI operator Helm chart supports provisioning cert-manager as a dependency. It is disabled by default, but you can enable it with the `--set cert-manager.enabled=true` option of the `helm install` command or inside the `cert-manager` section of the [values.yaml](https://github.com/kubernetes-sigs/cluster-api-operator/blob/main/hack/charts/cluster-api-operator/values.yaml) file. Additionally, you can define other [parameters](https://artifacthub.io/packages/helm/cert-manager/cert-manager#configuration) provided by the cert-manager chart.
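
For example, enabling cert-manager at install time (reusing the install command from above):

```bash
helm install capi-operator capi-operator/cluster-api-operator --create-namespace -n capi-operator-system --set cert-manager.enabled=true
```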

#### Installing providers using Helm chart

The operator Helm chart supports a "quickstart" option for bootstrapping a management cluster. The user experience is relatively similar to [clusterctl init](https://cluster-api.sigs.k8s.io/clusterctl/commands/init.html?highlight=init#clusterctl-init):

```bash
helm install capi-operator capi-operator/cluster-api-operator --create-namespace -n capi-operator-system --set infrastructure=docker:v1.4.2 --wait --timeout 90s # core Cluster API with kubeadm bootstrap and control plane providers will also be installed
```

Multiple infrastructure providers can be installed at once by separating them with `;`:

```bash
helm install capi-operator capi-operator/cluster-api-operator --create-namespace -n capi-operator-system --set infrastructure="docker;azure" --wait --timeout 90s
```

A custom namespace and version can be specified for each provider using the `namespace:provider:version` syntax:

```bash
helm install capi-operator capi-operator/cluster-api-operator --create-namespace -n capi-operator-system --set infrastructure="capd-custom-ns:docker:v1.4.2;capz-custom-ns:azure:v1.10.0" --wait --timeout 90s
```

The core, control plane, bootstrap, and infrastructure providers can also be specified explicitly:

```bash
helm install capi-operator capi-operator/cluster-api-operator --create-namespace -n capi-operator-system --set core=cluster-api:v1.4.2 --set controlPlane=kubeadm:v1.4.2 --set bootstrap=kubeadm:v1.4.2 --set infrastructure=docker:v1.4.2 --wait --timeout 90s
```

For more complex operations, please refer to our API documentation.
14 changes: 14 additions & 0 deletions docs/book/src/installation/manifest-installation.md
@@ -1 +1,15 @@
# Using Manifests from Release Assets

Before installing the Cluster API Operator this way, you must first ensure that cert-manager is installed, as the operator does not manage cert-manager installations. To install cert-manager, run the following command:

```bash
kubectl apply -f https://github.com/jetstack/cert-manager/releases/latest/download/cert-manager.yaml
```

Wait for cert-manager to be ready before proceeding.
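
For example, one way to wait for the cert-manager deployments to become available (names assume a default cert-manager installation):

```bash
kubectl wait --for=condition=Available --timeout=300s -n cert-manager deployment/cert-manager deployment/cert-manager-cainjector deployment/cert-manager-webhook
```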

After cert-manager is successfully installed, you can install the Cluster API operator directly by applying the latest release assets:

```bash
kubectl apply -f https://github.com/kubernetes-sigs/cluster-api-operator/releases/latest/download/operator-components.yaml
```
82 changes: 82 additions & 0 deletions docs/book/src/reference/providers.md
@@ -1 +1,83 @@
# Provider List

The Cluster API Operator introduces new API types: `CoreProvider`, `BootstrapProvider`, `ControlPlaneProvider`, `InfrastructureProvider`, `AddonProvider`, and `IPAMProvider`. These six provider types share common Spec and Status types, `ProviderSpec` and `ProviderStatus`, respectively.

The CRDs are scoped to be namespaced, allowing RBAC restrictions to be enforced if needed. This scoping also enables the installation of multiple versions of controllers (grouped within namespaces) in the same management cluster.

Related Golang structs can be found in the [Cluster API Operator repository](https://github.com/kubernetes-sigs/cluster-api-operator/tree/main/api/v1alpha1).

Below are the new API types being defined. The Core, Bootstrap, ControlPlane, and Infrastructure provider types use the shared `ProviderSpec` and `ProviderStatus` types directly, while the Addon and IPAM provider types use thin wrappers around them:

*CoreProvider*

```golang
type CoreProvider struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

Spec ProviderSpec `json:"spec,omitempty"`
Status ProviderStatus `json:"status,omitempty"`
}
```

*BootstrapProvider*

```golang
type BootstrapProvider struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

Spec ProviderSpec `json:"spec,omitempty"`
Status ProviderStatus `json:"status,omitempty"`
}
```

*ControlPlaneProvider*

```golang
type ControlPlaneProvider struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

Spec ProviderSpec `json:"spec,omitempty"`
Status ProviderStatus `json:"status,omitempty"`
}
```

*InfrastructureProvider*

```golang
type InfrastructureProvider struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

Spec ProviderSpec `json:"spec,omitempty"`
Status ProviderStatus `json:"status,omitempty"`
}
```

*AddonProvider*

```golang
type AddonProvider struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

Spec AddonProviderSpec `json:"spec,omitempty"`
Status AddonProviderStatus `json:"status,omitempty"`
}
```

*IPAMProvider*

```golang
type IPAMProvider struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

Spec IPAMProviderSpec `json:"spec,omitempty"`
Status IPAMProviderStatus `json:"status,omitempty"`
}
```

The following sections provide details about `ProviderSpec` and `ProviderStatus`, which are shared among all the provider types.
117 changes: 117 additions & 0 deletions docs/book/src/topics/air-gapped-environtment.md
@@ -1 +1,118 @@
# Air-gapped Environment

To install Cluster API providers in an air-gapped environment using the operator, address the following issues:

1. Configure the operator for an air-gapped environment:
- Manually fetch and store a helm chart for the operator.
- Provide image overrides for the operator from an accessible image repository.
2. Configure providers for an air-gapped environment:
- Provide fetch configuration for each provider from an accessible location (e.g., an internal GitHub repository) or from pre-created ConfigMaps within the cluster.
- Provide image overrides for each provider to pull images from an accessible image repository.

**Example Usage:**

As an admin, I need to fetch the Azure provider components from within the cluster because I am working in an air-gapped environment.

In this example, there is a ConfigMap in the `capz-system` namespace that defines the components and metadata of the provider.

The Azure InfrastructureProvider is configured with a `fetchConfig` specifying the label selector, allowing the operator to determine the available versions of the Azure provider. Since the provider's version is marked as `v1.9.3`, the operator uses the components information from the ConfigMap with the matching label to install the Azure provider.

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
labels:
provider-components: azure
name: v1.9.3
namespace: capz-system
data:
components: |
# Components for v1.9.3 YAML go here
metadata: |
# Metadata information goes here
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: InfrastructureProvider
metadata:
name: azure
namespace: capz-system
spec:
version: v1.9.3
configSecret:
name: azure-variables
fetchConfig:
selector:
matchLabels:
provider-components: azure
```

### When manifests do not fit into a ConfigMap

There is a limit on the [maximum size](https://kubernetes.io/docs/concepts/configuration/configmap/#motivation) of a ConfigMap: 1MiB. If the manifests do not fit into this size, Kubernetes will generate an error and provider installation will fail. To avoid this, you can compress the manifests before storing them in the ConfigMap.

For example, suppose you have two files: `components.yaml` and `metadata.yaml`. To create a working ConfigMap you need to:

1. Compress `components.yaml` using the `gzip` CLI tool:

```sh
gzip -c components.yaml > components.gz
```

2. Create a ConfigMap manifest from the compressed data:

```sh
kubectl create configmap v1.9.3 --namespace=capz-system --from-file=components=components.gz --from-file=metadata=metadata.yaml --dry-run=client -o yaml > configmap.yaml
```

3. Edit the file by adding the `provider.cluster.x-k8s.io/compressed: "true"` annotation:

```sh
yq eval -i '.metadata.annotations += {"provider.cluster.x-k8s.io/compressed": "true"}' configmap.yaml
```

**Note**: without this annotation, the operator won't be able to determine whether the data is compressed.

4. Add labels that will be used to match the ConfigMap in the `fetchConfig` section of the provider:

```sh
yq eval -i '.metadata.labels += {"my-label": "label-value"}' configmap.yaml
```

5. Create the ConfigMap in your Kubernetes cluster using `kubectl`:

```sh
kubectl create -f configmap.yaml
```
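
After these steps, the resulting `configmap.yaml` should look roughly like the sketch below. Because gzip output is not valid UTF-8, `kubectl` stores it base64-encoded under `binaryData`, while the plain-text metadata stays under `data` (payload truncated here for brevity):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    provider.cluster.x-k8s.io/compressed: "true"
  labels:
    my-label: label-value
  name: v1.9.3
  namespace: capz-system
binaryData:
  components: H4sIAAAAAAAA... # base64-encoded gzip data (truncated)
data:
  metadata: |
    # Metadata information goes here
```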

## Patching provider manifests

Provider manifests can be patched using JSON merge patches. This can be useful when you need to modify the provider manifests that are fetched from the repository. To patch the provider manifests, `spec.resourcePatches` has to be used, where an array of patches can be specified:

```yaml
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: CoreProvider
metadata:
name: cluster-api
namespace: capi-system
spec:
resourcePatches:
- |
apiVersion: v1
kind: Service
metadata:
labels:
test-label: test-value
```

More information about JSON merge patches can be found at <https://datatracker.ietf.org/doc/html/rfc7396>.

There are a couple of rules for a patch to match a manifest:

- The `kind` field must match the target object.
- If `apiVersion` is specified, the patch will only be applied to matching objects.
- If `metadata.name` and `metadata.namespace` are not specified, the patch will be applied to all objects of the specified kind.
- If `metadata.name` is specified, the patch will be applied to the object with the specified name. This applies to cluster-scoped objects.
- If both `metadata.name` and `metadata.namespace` are specified, the patch will be applied to the object with the specified name and namespace.
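
For example, a patch that targets a single object by name and namespace might look like this (a sketch; the Deployment name here is illustrative):

```yaml
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: CoreProvider
metadata:
  name: cluster-api
  namespace: capi-system
spec:
  resourcePatches:
  - |
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: capi-controller-manager # illustrative target name
      namespace: capi-system
      labels:
        test-label: test-value
```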
11 changes: 11 additions & 0 deletions docs/book/src/topics/basic-capi-provider-installation.md
@@ -1 +1,12 @@
# Basic Cluster API Provider Installation

In this section, we will walk you through the basic process of installing Cluster API providers using the operator. The Cluster API operator manages six types of objects:

- CoreProvider
- BootstrapProvider
- ControlPlaneProvider
- InfrastructureProvider
- AddonProvider
- IPAMProvider

Please note that this example provides a basic configuration of the Azure Infrastructure provider for getting started. More detailed examples and CRD descriptions will be provided in subsequent sections of this document.
7 changes: 7 additions & 0 deletions docs/book/src/topics/deleting-provider.md
@@ -1 +1,8 @@
# Deleting a Provider

To remove the installed providers and all related Kubernetes objects, just delete the following CRs:

```bash
kubectl delete infrastructureprovider azure
kubectl delete coreprovider cluster-api
```
11 changes: 11 additions & 0 deletions docs/book/src/topics/deleting-providers.md
@@ -1 +1,12 @@
# Deleting providers

To remove all installed providers and all related Kubernetes objects, just delete the following CRs:

```bash
kubectl delete coreprovider --all --all-namespaces
kubectl delete infrastructureprovider --all --all-namespaces
kubectl delete bootstrapprovider --all --all-namespaces
kubectl delete controlplaneprovider --all --all-namespaces
kubectl delete ipamprovider --all --all-namespaces
kubectl delete addonprovider --all --all-namespaces
```
25 changes: 25 additions & 0 deletions docs/book/src/topics/injecting-additional-manifests.md
@@ -1 +1,26 @@
# Injecting additional manifests

It is possible to inject additional manifests when installing/upgrading a provider. This can be useful when you need to add extra RBAC resources to the provider controller, for example.
The `additionalManifests` field is a reference to a ConfigMap that contains additional manifests, which will be applied together with the provider components. The key for storing these manifests has to be `manifests`.
The manifests are applied only once when a certain release is installed/upgraded. If the namespace is not specified, the namespace of the provider will be used. There is no validation of the YAML content inside the ConfigMap.

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
name: additional-manifests
namespace: capi-system
data:
manifests: |
# Additional manifests go here
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: CoreProvider
metadata:
name: cluster-api
namespace: capi-system
spec:
additionalManifests:
name: additional-manifests
```
30 changes: 30 additions & 0 deletions docs/book/src/topics/installing-capz.md
@@ -1 +1,31 @@
# Installing Azure Infrastructure Provider

Next, install the [Azure Infrastructure Provider](https://capz.sigs.k8s.io/). Before that, ensure that the `capz-system` namespace exists.

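One way to create it is via the CLI:

```bash
kubectl create namespace capz-system
```
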
Since the provider requires variables to be set, create a secret containing them in the same namespace as the provider. It is also recommended to include a `github-token` in the secret. This token is used when fetching the provider repository from GitHub; without it, the operator may exceed the GitHub API rate limit. Like [clusterctl](https://cluster-api.sigs.k8s.io/clusterctl/overview.html?highlight=github_token#avoiding-github-rate-limiting), the token needs only the `repo` scope.

```yaml
---
apiVersion: v1
kind: Secret
metadata:
name: azure-variables
namespace: capz-system
type: Opaque
stringData:
AZURE_CLIENT_ID_B64: Zm9vCg==
AZURE_CLIENT_SECRET_B64: Zm9vCg==
AZURE_SUBSCRIPTION_ID_B64: Zm9vCg==
AZURE_TENANT_ID_B64: Zm9vCg==
github-token: ghp_fff
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: InfrastructureProvider
metadata:
name: azure
namespace: capz-system
spec:
version: v1.9.3
configSecret:
name: azure-variables
```
18 changes: 18 additions & 0 deletions docs/book/src/topics/installing-core-provider.md
@@ -1 +1,19 @@
# Installing the CoreProvider

The first step is to install the CoreProvider, which is responsible for managing the Cluster API CRDs and the Cluster API controller.

You can utilize any existing namespace for providers in your Kubernetes cluster. However, before creating a provider object, make sure the specified namespace has been created. In the example below, we use the `capi-system` namespace. You can create this namespace through the Command Line Interface (CLI) by running `kubectl create namespace capi-system`, or by using the declarative approach described in the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/#create-new-namespaces).
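
For reference, the declarative equivalent is a plain `Namespace` manifest:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: capi-system
```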

*Example:*

```yaml
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: CoreProvider
metadata:
name: cluster-api
namespace: capi-system
spec:
version: v1.4.3
```

**Note:** Only one CoreProvider can be installed at the same time on a single cluster.
24 changes: 24 additions & 0 deletions docs/book/src/topics/installing-provider.md
@@ -1 +1,25 @@
# Installing a Provider

To install a new Cluster API provider with the Cluster API Operator, create a provider object as shown in the earlier examples, which cover creating both the secret with variables and the provider itself.

The operator processes a provider object by applying the following rules:

- The CoreProvider is installed first; other providers will be requeued until the core provider exists.
- Before installing any provider, the following pre-flight checks are executed:
- No other instance of the same provider (same Kind, same name) should exist in any namespace.
- The Cluster API contract (e.g., v1beta1) must match the contract of the core provider.
- The operator sets conditions on the provider object to surface any installation issues, including pre-flight checks and/or order of installation.
- If the FetchConfiguration is not defined, the operator applies the embedded fetch configuration for the given kind and `ObjectMeta.Name` specified in the [Cluster API code](https://github.com/kubernetes-sigs/cluster-api/blob/main/cmd/clusterctl/client/config/providers_client.go).
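
For example, a custom fetch configuration pointing at a URL might look like this (a sketch; the release URL shown is illustrative):

```yaml
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: InfrastructureProvider
metadata:
  name: docker
  namespace: capd-system
spec:
  version: v1.4.2
  fetchConfig:
    url: https://github.com/kubernetes-sigs/cluster-api/releases
```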

The installation process, managed by the operator, aligns with the implementation underlying the `clusterctl init` command and includes these steps:

- Fetching provider artifacts (the `components.yaml` and `metadata.yaml` files).
- Applying image overrides, if any.
- Replacing variables in the infrastructure-components from EnvVar and Secret.
- Applying the resulting YAML to the cluster.

Differences between the operator and `clusterctl init` include:

- The operator installs one provider at a time while `clusterctl init` installs a group of providers in a single operation.
- The operator stores fetched artifacts in a ConfigMap for reuse during subsequent reconciliations.
- The operator uses a Secret, while `clusterctl init` relies on environment variables and a local configuration file.
6 changes: 6 additions & 0 deletions docs/book/src/topics/modifying-provider.md
@@ -1 +1,7 @@
# Modifying a Provider

In addition to changing a provider version (upgrades), the operator supports modifying other provider fields such as controller flags and variables. This can be achieved through `kubectl edit` or `kubectl apply` to the provider object.

The operation works similarly to upgrades: The current provider instance is deleted while preserving CRDs, namespaces, and user objects. Then, a new provider instance with the updated flags/variables is installed.
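
For example, to edit an installed provider in place (using the Azure provider from the earlier examples):

```bash
kubectl edit infrastructureprovider azure -n capz-system
```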

**Note**: `clusterctl` currently does not support this operation.
12 changes: 12 additions & 0 deletions docs/book/src/topics/upgrading-provider.md
@@ -1 +1,13 @@
# Upgrading a Provider

To trigger an upgrade for a Cluster API provider, change the `spec.version` field. All providers must follow the golden rule of respecting the same Cluster API contract supported by the core provider.

The operator performs the upgrade by:

1. Deleting the current provider components, while preserving CRDs, namespaces, and user objects.
2. Installing the new provider components.
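
For example, one way to change the version non-interactively (the target version here is illustrative):

```bash
kubectl patch coreprovider cluster-api -n capi-system --type merge -p '{"spec":{"version":"v1.4.4"}}'
```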

Differences between the operator and `clusterctl upgrade apply` include:

- The operator upgrades one provider at a time while `clusterctl upgrade apply` upgrades a group of providers in a single operation.
- With the declarative approach, users are responsible for manually editing the Provider objects' YAML, while `clusterctl upgrade apply --contract` automatically determines the latest available versions for each provider.