Documentation for ODF Add-on and Worker Replace
review comments added

reverted resource_addons.go
aayushsss1 authored and aayushsss1 committed Jun 18, 2023
1 parent a4b62d9 commit 4a7406d
Showing 16 changed files with 288 additions and 35 deletions.
29 changes: 16 additions & 13 deletions examples/openshift-data-foundation/addon/README.md
@@ -22,20 +22,20 @@ For more information, about
│ │ ├── createcrd.sh
│ │ ├── updatecrd.sh
│ │ ├── updateodf.sh
│ │ ├── deleteaddon.sh
│ │ ├── deletecrd.sh
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ ├── input.tfvars
│ │ ├── schematics.tfvars
```

* `ibm-odf-addon` - This folder is used to deploy a specific version of Openshift-Data-Foundation with the `odfDeploy` parameter set to `false`, i.e. the add-on is installed without the ocscluster, using the IBM-Cloud Terraform Provider.

* `ocscluster` - This folder is used to deploy the `OcsCluster` CRD with the given parameters set in the `input.tfvars` file.

* `ocscluster` - This folder is used to deploy the `OcsCluster` CRD with the given parameters set in the `schematics.tfvars` file.
* `addon` - This folder contains scripts to create the CRD and deploy the ODF add-on on your cluster. The `main.tf` file contains the `null_resource` blocks that internally call the above two folders and perform the required actions (a simplified sketch of this wiring is shown after the note below).

#### Note

You do not have to change anything in the `ibm-odf-addon` and `ocscluster` folders. You just have to input the required parameters in the `input.tfvars` file under the `addon` folder, and run terraform.
You do not have to change anything in the `ibm-odf-addon` and `ocscluster` folders. You just have to input the required parameters in the `schematics.tfvars` file under the `addon` folder, and run terraform.
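The wiring in `main.tf` follows roughly the pattern sketched below. This is a simplified illustration, not the exact file: the create-time commands and the dependency between the two resources are assumptions based on the script names in this folder, while the destroy-time commands mirror `main.tf`.

```hcl
# Simplified sketch of the addon/main.tf pattern: each null_resource shells out to a
# helper script, and the destroy-time provisioners call the corresponding delete scripts.
resource "null_resource" "addOn" {
  provisioner "local-exec" {
    command = "sh ./createaddon.sh" # assumed: enables the ODF add-on via the ibm_odf_addon folder
  }

  provisioner "local-exec" {
    when    = destroy
    command = "sh ./deleteaddon.sh" # disables the add-on on terraform destroy
  }
}

resource "null_resource" "customResourceGroup" {
  depends_on = [null_resource.addOn] # assumed ordering: the add-on must be enabled before the CRD

  provisioner "local-exec" {
    command = "sh ./createcrd.sh" # assumed: creates the OcsCluster custom resource via the ocscluster folder
  }

  provisioner "local-exec" {
    when    = destroy
    command = "sh ./deletecrd.sh" # removes the OcsCluster custom resource on terraform destroy
  }
}
```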

## Usage

@@ -49,23 +49,25 @@ $ cd addon

```bash
$ terraform init
$ terraform plan --var-file input.tfvars
$ terraform apply --var-file input.tfvars
$ terraform plan --var-file schematics.tfvars
$ terraform apply --var-file schematics.tfvars
```

Run `terraform destroy --var-file input.tfvars` when you don't need these resources.
Run `terraform destroy --var-file schematics.tfvars` when you don't need these resources.

### Option 2 - IBM Cloud Schematics

To deploy and manage the Openshift-Data-Foundation add-on using `IBM Cloud Schematics`, follow the documentation below:

https://cloud.ibm.com/docs/schematics?topic=schematics-get-started-terraform

Please note that you have to change the `terraform` keyword in the scripts to `terraform1.x`, where `x` is the Terraform version you use in IBM Cloud Schematics. For example, if you are using Terraform version 1.3 in Schematics, change `terraform` -> `terraform1.3` in the `.sh` files.

## Example usage

### Deployment of ODF

The default input.tfvars is given below, the user should just change the value of the parameters in accorandance to their requirment.
The default `schematics.tfvars` is given below; the user should just change the values of the parameters in accordance with their requirements.

```hcl
ibmcloud_api_key = "" # Enter your API Key
# ...
workerNodes = null
```

### Scale-Up of ODF

The following variables in the `input.tfvars` file can be edited
The following variables in the `schematics.tfvars` file can be edited (an example edit is shown after the list):

* numOfOsd - To scale your storage
* workerNodes - To increase the number of Worker Nodes with ODF
@@ -109,7 +111,7 @@ workerNodes = null -> "worker_1_ID,worker_2_ID"
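For example, a scale-up edit to `schematics.tfvars` could look like the following sketch (the values and worker IDs are placeholders; match the quoting style of your existing file):

```hcl
# Before the scale-up
numOfOsd    = "1"
workerNodes = null

# After the scale-up (placeholder worker IDs; use the IDs of your own worker nodes)
numOfOsd    = "2"
workerNodes = "worker_1_ID,worker_2_ID"
```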

### Upgrade of ODF

The following variables in the `input.tfvars` file should be changed in order to upgrade the ODF add-on and the Ocscluster CRD.
The following variables in the `schematics.tfvars` file should be changed in order to upgrade the ODF add-on and the Ocscluster CRD (an example is shown after the list):

* odfVersion - Specify the version you wish to upgrade to
* ocsUpgrade - Must be set to `true` to upgrade the CRD
@@ -148,7 +150,7 @@ ocsUpgrade = "false" -> "true"
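For illustration, an upgrade edit to `schematics.tfvars` could look like this (the version shown is only an example; use the version you actually want to upgrade to):

```hcl
odfVersion = "4.12.0" # example target version
ocsUpgrade = "true"   # set to "true" only for the upgrade; reset to "false" afterwards (see Notes)
```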
| ibmcloud_api_key | IBM Cloud API Key | `string` | yes | -
| cluster | Name of the cluster. | `string` | yes | -
| region | Region of the cluster | `string` | yes | -
| odfVersion | Version of the ODF add-on | `string` | yes | 4.11
| odfVersion | Version of the ODF add-on | `string` | yes | 4.12.0
| osdSize | Enter the size for the storage devices that you want to provision for the Object Storage Daemon (OSD) pods | `string` | yes | 250Gi
| numOfOsd | The Number of OSD | `string` | yes | 1
| osdStorageClassName | Enter the storage class to be used to provision block volumes for Object Storage Daemon (OSD) pods | `string` | yes | ibmc-vpc-block-metro-10iops-tier
@@ -172,5 +174,6 @@ Refer - https://cloud.ibm.com/docs/openshift?topic=openshift-deploy-odf-vpc&inte

* Users should only change the values of the variables within quotes; variables that are not set should be left untouched with their default values.
* `workerNodes` takes a string containing comma-separated names of the worker nodes you wish to enable ODF on.
* On `terraform apply --var-file=input.tfvars`, the add-on is enabled and the custom resource is created.
* On `terraform apply --var-file=schematics.tfvars`, the add-on is enabled and the custom resource is created.
* During an ODF update, do not tamper with the `ocsUpgrade` variable; just change its value to `true` within the quotation marks, without changing the format of the variable.
* During the `Upgrade of ODF` scenario on IBM Cloud Schematics, make sure to change the value of `ocsUpgrade` back to `false` afterwards. Locally this is handled automatically using `sed`.
4 changes: 2 additions & 2 deletions examples/openshift-data-foundation/addon/createaddon.sh
@@ -5,7 +5,7 @@ set -e
WORKING_DIR=$(pwd)

cp ${WORKING_DIR}/variables.tf ${WORKING_DIR}/ibm_odf_addon/variables.tf
cp ${WORKING_DIR}/input.tfvars ${WORKING_DIR}/ibm_odf_addon/input.tfvars
cp ${WORKING_DIR}/schematics.tfvars ${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
cd ${WORKING_DIR}/ibm_odf_addon
terraform init
terraform apply --auto-approve -var-file ${WORKING_DIR}/ibm_odf_addon/input.tfvars
terraform apply --auto-approve -var-file ${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
4 changes: 2 additions & 2 deletions examples/openshift-data-foundation/addon/createcrd.sh
@@ -5,7 +5,7 @@ set -e
WORKING_DIR=$(pwd)

cp ${WORKING_DIR}/variables.tf ${WORKING_DIR}/ocscluster/variables.tf
cp ${WORKING_DIR}/input.tfvars ${WORKING_DIR}/ocscluster/input.tfvars
cp ${WORKING_DIR}/schematics.tfvars ${WORKING_DIR}/ocscluster/schematics.tfvars
cd ${WORKING_DIR}/ocscluster
terraform init
terraform apply --auto-approve -var-file ${WORKING_DIR}/ocscluster/input.tfvars
terraform apply --auto-approve -var-file ${WORKING_DIR}/ocscluster/schematics.tfvars
17 changes: 17 additions & 0 deletions examples/openshift-data-foundation/addon/deleteaddon.sh
@@ -0,0 +1,17 @@
#!/usr/bin/env bash

set -e

WORKING_DIR=$(pwd)

cp ${WORKING_DIR}/variables.tf ${WORKING_DIR}/ibm_odf_addon/variables.tf
cp ${WORKING_DIR}/schematics.tfvars ${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
cd ${WORKING_DIR}/ibm_odf_addon
terraform init
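# If local state already exists, reuse it; otherwise run an apply first so there is state to destroy from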
if [ -e ${WORKING_DIR}/ibm_odf_addon/terraform.tfstate ]
then
echo "ok"
else
terraform apply --auto-approve -var-file=${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
fi
terraform destroy --auto-approve -var-file=${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
19 changes: 19 additions & 0 deletions examples/openshift-data-foundation/addon/deletecrd.sh
@@ -0,0 +1,19 @@
#!/usr/bin/env bash

set -e

WORKING_DIR=$(pwd)

cp ${WORKING_DIR}/variables.tf ${WORKING_DIR}/ocscluster/variables.tf
cp ${WORKING_DIR}/schematics.tfvars ${WORKING_DIR}/ocscluster/schematics.tfvars
cd ${WORKING_DIR}/ocscluster
terraform init
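# If no local state exists yet, import the existing OcsCluster custom resource (and apply) so it can be destroyed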
if [ -e ${WORKING_DIR}/ocscluster/terraform.tfstate ]
then
echo "ok"
else
terraform import -var-file=${WORKING_DIR}/ocscluster/schematics.tfvars kubernetes_manifest.ocscluster_ocscluster_auto "apiVersion=ocs.ibm.io/v1,kind=OcsCluster,namespace=openshift-storage,name=ocscluster-auto"
terraform apply --auto-approve -var-file ${WORKING_DIR}/ocscluster/schematics.tfvars
fi

terraform destroy --auto-approve -var-file=${WORKING_DIR}/ocscluster/schematics.tfvars
@@ -2,7 +2,7 @@ terraform {
required_providers {
ibm = {
source = "IBM-Cloud/ibm"
version = "1.53.0"
version = "1.55.0-beta0"
}
}
}
7 changes: 5 additions & 2 deletions examples/openshift-data-foundation/addon/main.tf
@@ -10,7 +10,7 @@ resource "null_resource" "customResourceGroup" {
provisioner "local-exec" {

when = destroy
command = "cd ocscluster && terraform destroy --auto-approve -var-file input.tfvars"
command = "sh ./deletecrd.sh"

}

@@ -34,7 +34,7 @@ resource "null_resource" "addOn" {
provisioner "local-exec" {

when = destroy
command = "cd ibm_odf_addon && terraform destroy --auto-approve -var-file input.tfvars"
command = "sh ./deleteaddon.sh"

}

@@ -47,6 +47,7 @@ resource "null_resource" "updateCRD" {
numOfOsd = var.numOfOsd
ocsUpgrade = var.ocsUpgrade
workerNodes = var.workerNodes
osdDevicePaths = var.osdDevicePaths
}


@@ -65,7 +66,9 @@ resource "null_resource" "upgradeODF" {
resource "null_resource" "upgradeODF" {

triggers = {

odfVersion = var.odfVersion

}

provisioner "local-exec" {
4 changes: 2 additions & 2 deletions examples/openshift-data-foundation/addon/ocscluster/main.tf
@@ -1,11 +1,11 @@
terraform {
required_providers {
kubernetes = {
version = "2.18.1"
version = ">= 2.18.1"
}
ibm = {
source = "IBM-Cloud/ibm"
version = ">= 1.12.0"
version = "1.55.0-beta0"
}
}
}
@@ -5,10 +5,8 @@
# To enable ODF AddOn on your cluster
ibmcloud_api_key = ""
cluster = ""
region = "us-south"
odfVersion = "4.12.0"


region = ""
odfVersion = ""


# To create the Ocscluster Custom Resource Definition, with the following specs
22 changes: 15 additions & 7 deletions examples/openshift-data-foundation/addon/updatecrd.sh
@@ -5,13 +5,21 @@ set -e
WORKING_DIR=$(pwd)

cp ${WORKING_DIR}/variables.tf ${WORKING_DIR}/ocscluster/variables.tf
cp ${WORKING_DIR}/input.tfvars ${WORKING_DIR}/ocscluster/input.tfvars
cp ${WORKING_DIR}/schematics.tfvars ${WORKING_DIR}/ocscluster/schematics.tfvars
cd ${WORKING_DIR}/ocscluster
terraform apply --auto-approve -var-file ${WORKING_DIR}/ocscluster/input.tfvars
terraform init
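# If no local state exists yet, import the existing OcsCluster custom resource before applying the update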
if [ -e ${WORKING_DIR}/ocscluster/terraform.tfstate ]
then
echo "ok"
else
terraform import -var-file=${WORKING_DIR}/ocscluster/schematics.tfvars kubernetes_manifest.ocscluster_ocscluster_auto "apiVersion=ocs.ibm.io/v1,kind=OcsCluster,namespace=openshift-storage,name=ocscluster-auto"
fi

sed -i'' -e "s|ocsUpgrade = \"true\"|ocsUpgrade = \"false\"|g" ${WORKING_DIR}/input.tfvars
sed -i'' -e "s|ocsUpgrade = \"true\"|ocsUpgrade = \"false\"|g" ${WORKING_DIR}/ocscluster/input.tfvars
rm -f ${WORKING_DIR}/input.tfvars-e
rm -f ${WORKING_DIR}/ocscluster/input.tfvars-e
terraform apply --auto-approve -var-file ${WORKING_DIR}/ocscluster/schematics.tfvars

terraform apply --auto-approve -var-file ${WORKING_DIR}/ocscluster/input.tfvars
sed -i'' -e "s|ocsUpgrade = \"true\"|ocsUpgrade = \"false\"|g" ${WORKING_DIR}/schematics.tfvars
sed -i'' -e "s|ocsUpgrade = \"true\"|ocsUpgrade = \"false\"|g" ${WORKING_DIR}/ocscluster/schematics.tfvars
rm -f ${WORKING_DIR}/schematics.tfvars-e
rm -f ${WORKING_DIR}/ocscluster/schematics.tfvars-e

terraform apply --auto-approve -var-file ${WORKING_DIR}/ocscluster/schematics.tfvars
5 changes: 3 additions & 2 deletions examples/openshift-data-foundation/addon/updateodf.sh
@@ -5,6 +5,7 @@ set -e
WORKING_DIR=$(pwd)

cp ${WORKING_DIR}/variables.tf ${WORKING_DIR}/ibm_odf_addon/variables.tf
cp ${WORKING_DIR}/input.tfvars ${WORKING_DIR}/ibm_odf_addon/input.tfvars
cp ${WORKING_DIR}/schematics.tfvars ${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
cd ${WORKING_DIR}/ibm_odf_addon
terraform apply --auto-approve -var-file ${WORKING_DIR}/ibm_odf_addon/input.tfvars
terraform init
terraform apply --auto-approve -var-file ${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
89 changes: 89 additions & 0 deletions examples/openshift-data-foundation/vpc-worker-replace/README.md
@@ -0,0 +1,89 @@
# OpenShift-Data-Foundation VPC Worker Replace

This example shows how to replace a Kubernetes VPC Gen-2 worker node installed with Openshift-Data-Foundation and update it to the latest patch in the specified cluster.

For more information about VPC worker updates, see [Updating VPC worker nodes](https://cloud.ibm.com/docs/containers?topic=containers-update&interface=ui#vpc_worker_node).

## Usage

To run this example you need to execute:

```sh
$ terraform init
$ terraform plan -var-file input.tfvars
$ terraform apply -var-file input.tfvars
```

* Run `terraform untaint ibm_container_vpc_worker.<resource_name>[index]` to untaint a failed worker after fixing it manually, so that you can proceed with the next set of workers
* Run `terraform destroy` when you need to provide a new worker list

## Example usage

Perform worker replace:

```terraform
resource "ibm_container_vpc_worker" "worker" {
count = length(var.worker_list)
cluster_name = var.cluster_name
replace_worker = element(var.worker_list, count.index)
resource_group_id = data.ibm_resource_group.group.id
kube_config_path = data.ibm_container_cluster_config.cluster_config.config_file_path
sds = "ODF"
sds_timeout = (var.sds_timeout != null ? var.sds_timeout : null)
timeouts {
create = (var.create_timeout != null ? var.create_timeout : null)
delete = (var.delete_timeout != null ? var.delete_timeout : null)
}
}
```

```terraform
data ibm_resource_group group {
name = var.resource_group
}
data ibm_container_cluster_config cluster_config {
cluster_name_id = var.cluster_name
resource_group_id = data.ibm_resource_group.group.id
}
```
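An input file for this example might look like the sketch below; every value is a placeholder, and `worker_list` must be a user-generated list of worker IDs (see the Note section):

```terraform
# Illustrative input.tfvars for the worker-replace example (all values are placeholders)
ibmcloud_api_key = ""                             # IBM Cloud API key
cluster_name     = "my-vpc-cluster"               # name or ID of the cluster
resource_group   = "Default"                      # resource group the cluster belongs to
worker_list      = ["worker_1_ID", "worker_2_ID"] # user-generated list of worker IDs to replace
create_timeout   = "60m"                          # optional create timeout
delete_timeout   = "30m"                          # optional delete timeout
```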

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->

## Requirements

| Name | Version |
|------|---------|
| terraform | >=1.0.0, <2.0 |

## Providers

| Name | Version |
|------|---------|
| ibm | latest |

## Inputs

| Name | Description | Type | Required |
|------|-------------|------|---------|
| cluster_name | Name of the cluster. | `string` | yes |
| replace_worker | ID of the worker to be replaced. | `string` | yes |
| resource_group_id | ID of the resource group | `string` | no |
| check_ptx_status | Whether to check the Portworx status on replaced workers | `bool` | no |
| kube_config_path | The cluster config with absolute path | `string` | no |
| ptx_timeout | Timeout used while checking the Portworx status | `string` | no |
| sds | Software Defined Storage - Only `ODF` is currently supported | `string` | no |
| sds_timeout | Timeout used while checking the sds status/deployment | `string` | no |

## Note

* This resource is different from all other IBM Cloud resources. Worker replace consists of two operations, i.e. deleting the old worker and creating a new one. On `terraform apply`, the replace operation handles both the deletion and the creation, whereas on `terraform destroy` only the state is cleared and the actual resource is untouched.
* When the worker list is provided as input, the list must be user generated and should not be passed from the `ibm_container_cluster` data source.
* If `terraform apply` fails during worker replace or while checking the sds status, perform any one of the following actions before retrying.
* Resolve the issue manually and perform `terraform untaint` to proceed with the subsequent workers in the list.
* If worker replace is still needed, update the input list by replacing the existing worker id with the new worker id.



* Please note that `ODF` is currently the only supported value for the `sds` input.
31 changes: 31 additions & 0 deletions examples/openshift-data-foundation/vpc-worker-replace/main.tf
@@ -0,0 +1,31 @@
#####################################################
# vpc worker replace/update
# Copyright 2023 IBM
#####################################################

#####################################################
# Read each worker information attached to cluster
#####################################################
data ibm_resource_group group {
name = var.resource_group
}

data ibm_container_cluster_config cluster_config {
cluster_name_id = var.cluster_name
resource_group_id = data.ibm_resource_group.group.id
}

resource "ibm_container_vpc_worker" "worker" {
count = length(var.worker_list)
cluster_name = var.cluster_name
replace_worker = element(var.worker_list, count.index)
resource_group_id = data.ibm_resource_group.group.id
kube_config_path = data.ibm_container_cluster_config.cluster_config.config_file_path
sds = "ODF"
sds_timeout = "30m"

timeouts {
create = (var.create_timeout != null ? var.create_timeout : null)
delete = (var.delete_timeout != null ? var.delete_timeout : null)
}
}
@@ -0,0 +1,3 @@
provider "ibm" {
ibmcloud_api_key = var.ibmcloud_api_key
}