Merge pull request #1 from frankmit11/kvmOcpUpcEnhancements
Kvm ocp upc enhancements
frankmit11 authored Feb 9, 2023
2 parents 294d9f3 + 0afcc45 commit bd43db4
Showing 78 changed files with 3,479 additions and 853 deletions.
4 changes: 4 additions & 0 deletions .gitattributes
@@ -0,0 +1,4 @@
*.yaml linguist-detectable=true
*.yaml linguist-language=YAML
*.yml linguist-detectable=true
*.yml linguist-language=YAML
1 change: 1 addition & 0 deletions README.md
@@ -59,6 +59,7 @@ often repeated and in need of automation.
- [Copy, Sort and Fetch Data Sets on z/OS using Ansible](zos_concepts/data_transfer/copy_sort_fetch)- \[[Playback](https://mediacenter.ibm.com/media/Copy%2C+sort%2C+and+fetch+data+on+z+OS+using+Ansible/1_ah4qhyvu)]
- [Terse Data Set and Fetch](zos_concepts/data_transfer/terse_fetch_data_set)
- [Transfer, Dump and Unpack Data Sets](zos_concepts/data_transfer/dump_pack_ftp_unpack_restore)
- [Grow ZFS aggregates](zos_concepts/zfsadm/zfs_grow_aggr)
- Integrating Existing Automation
- [Job Control Language](zos_concepts/jobs) (JCL)
- [Submit Batch Jobs, Query and Retrieve Job Output](zos_concepts/jobs/submit_query_retrieve)
Expand Down
1 change: 1 addition & 0 deletions collections/requirements.yml
@@ -8,3 +8,4 @@
collections:
  - name: ibm.ibm_zos_core
    version: 1.4.0-beta.1
  - name: community.general
@@ -29,10 +29,16 @@
---
- import_playbook: configure-pre-check.yaml
- import_playbook: configure-installer-client.yaml
  when: use_localreg == false
- import_playbook: configure-installer-client-local.yaml
  when: use_localreg == true
- import_playbook: configure-install-config.yaml
- import_playbook: configure-install-manifests.yaml
- import_playbook: configure-install-ignition.yaml
- import_playbook: configure-installer-rhcos.yaml
  when: use_localreg == false
- import_playbook: configure-installer-rhcos-local.yaml
  when: use_localreg == true
- import_playbook: configure-security-groups.yaml
- import_playbook: configure-network.yaml
- import_playbook: configure-bastion-properties.yaml
@@ -8,7 +8,13 @@
# =================================================================

---
- import_playbook: configure-bootstrap.yaml
- import_playbook: configure-control-plane.yaml
- import_playbook: configure-bootstrap-kvm.yaml
  when: vm_type == "kvm"
- import_playbook: configure-bootstrap-zvm.yaml
  when: vm_type == "zvm"
- import_playbook: configure-control-plane-kvm.yaml
  when: vm_type == "kvm"
- import_playbook: configure-control-plane-zvm.yaml
  when: vm_type == "zvm"
- import_playbook: wait-for-bootstrap-complete.yaml
- import_playbook: destroy-computes.yaml
- import_playbook: destroy-bootstrap.yaml
@@ -8,10 +8,10 @@
# =================================================================

---
- name: Create the compute nodes
  hosts: localhost
  roles:
    - configure-compute-nodes
- import_playbook: configure-compute-nodes-kvm.yaml
  when: vm_type == "kvm"
- import_playbook: configure-compute-nodes-zvm.yaml
  when: vm_type == "zvm"

- import_playbook: approve.yaml
- import_playbook: wait-for-install-complete.yaml
@@ -20,4 +20,8 @@
- import_playbook: destroy-computes.yaml
- import_playbook: destroy-network.yaml
- import_playbook: destroy-security-groups.yaml
- import_playbook: destroy-images-files.yaml
- import_playbook: destroy-volumes.yaml
  when:
    - volume_type_id is defined
    - vm_type == "kvm"
- import_playbook: destroy-files.yaml
71 changes: 42 additions & 29 deletions z_infra_provisioning/cloud_infra_center/ocp_upi/README.md
@@ -34,7 +34,7 @@ The playbook contains the following topics:

3. Requirements pre-check before the installation

**Note**: This playbook supports IBM® Cloud Infrastructure Center version 1.1.4, 1.1.5 and RH OpenShift Container Platform version 4.6 and, 4.7, 4.8, 4.9, 4.10 for z/VM and version 4.7, 4.8, 4.9, 4.10 for KVM.
**Note**: This playbook supports IBM® Cloud Infrastructure Center versions 1.1.4 and 1.1.5, and RH OpenShift Container Platform versions 4.6, 4.7, 4.8, 4.9, 4.10, and 4.11 for z/VM and versions 4.7, 4.8, 4.9, 4.10, and 4.11 for KVM.

# Installing Red Hat OpenShift on the IBM Cloud Infrastructure Center via user-provisioned infrastructure (UPI)

@@ -76,7 +76,7 @@ After you performed the previous steps successfully, you get one ready OpenShift

- **(Required)** A Linux server, the machine that runs Ansible.
- RHEL8 is the operating system version we tested
- Ansible == 2.8
- Ansible == 2.8 or 2.9
- This server **must not** be any of the IBM Cloud Infrastructure Center nodes
- You can use a single LPAR server or virtual machine
- Disk with at least 20 GiB
@@ -121,7 +121,7 @@ sudo subscription-manager repos --enable=ansible-2.8-for-rhel-8-s390x-rpms

Install the packages from the repository in the Linux server:
```sh
sudo dnf install python3 ansible jq wget git firewalld tar gzip -y
sudo dnf install python3 ansible jq wget git firewalld tar gzip redhat-rpm-config gcc libffi-devel python3-devel openssl-devel cargo -y
```
Make sure that `python` points to Python3
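A quick way to check the unversioned command; the `alternatives` invocation in the comment is an assumption based on RHEL 8's usual layout:

```shell
# Show which interpreter `python` currently resolves to, if any
command -v python && python --version || echo "no unversioned python found"

# On RHEL 8 you would repoint it at Python 3 like this (assumed layout):
#   sudo alternatives --set python /usr/bin/python3
python3 --version
```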
Upgrade the pip package:
```sh
sudo -H pip3 install --upgrade pip
```
Install the required package through dnf:
```sh
sudo dnf install redhat-rpm-config gcc libffi-devel python3-devel openssl-devel cargo -y
```

Then create the requirements file and use pip3 to install the python modules:

**Note**: The requirements.txt is tested for python-openstackclient==5.5.0.
@@ -154,13 +151,14 @@ python-keystoneclient==4.0.0
python-cinderclient==7.0.0
python-novaclient==17.0.0
stevedore==1.32.0
dogpile-cache==0.9.0
dogpile-cache
stevedore==1.32.0
netaddr==0.7.19
python-openstackclient==5.2.2
cryptography==3.2.1
EOF

sudo pip3 install -r requirements.txt python-openstackclient --ignore-installed
sudo pip3 install -r requirements.txt --ignore-installed
```

**Verification:**
@@ -192,12 +190,12 @@ The login should now complete without asking for a password.
4. Copy the `icicrc` file from the IBM Cloud Infrastructure Center management node to the `/opt/ibm/icic/` directory on your server:
```sh
mkdir -p /opt/ibm/icic
scp -r user@host:/opt/ibm/icic/icicrc /opt/ibm/icic/icicrc
scp user@host:/opt/ibm/icic/icicrc /opt/ibm/icic/
```

5. Copy the `icic.crt` file from the IBM Cloud Infrastructure Center management node to your certs directory `/etc/pki/tls/certs/`:
```
scp -r user@host:/etc/pki/tls/certs/icic.crt /etc/pki/tls/certs/
scp user@host:/etc/pki/tls/certs/icic.crt /etc/pki/tls/certs/
```

6. Source the `icicrc` file to set the environment variables:
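The collapsed step presumably runs `source /opt/ibm/icic/icicrc` (the path from step 4). As a stand-in illustration of how sourcing an RC file exports variables into the current shell (the file path and `OS_*` values below are purely illustrative):

```shell
# Stand-in RC file; the real icicrc is the one copied from the ICIC management node
cat > /tmp/demo_icicrc <<'EOF'
export OS_AUTH_URL=https://icic.example.com:5000/v3
export OS_USERNAME=demo
EOF

# `source` executes the file in the current shell, so the exported variables persist
source /tmp/demo_icicrc
env | grep '^OS_' | sort
```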
@@ -248,8 +246,8 @@ Update your settings based on the samples. The following properties are **required**:
| `use_network_subnet` | \<subnet id from network name in icic\> |`openstack network list -c Subnets -f value`|
| `vm_type` | kvm| The hypervisor type hosting OpenShift Container Platform, <br>supported: `kvm` or `zvm`| |
| `disk_type` | dasd|The disk storage type of OpenShift Container Platform, <br>supported: `dasd` or `scsi` | |
| `openshift_version` |4.10| The product version of OpenShift Container Platform, <br>such as `4.6` or `4.7` or `4.8`. <br> And the rhcos is not updated for every single minor version. User can get available openshift_version from [here](https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/)| |
| `openshift_minor_version` |3| The minor version of Openshift Container Platform, <br>such as `3`. <br>For openshift_version `4.10` for example, the only rhcos release available is `4.10.3`, and user can inspect what minor releases are available by checking [here](https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.10/) to see whats there |
| `openshift_version` |4.10| The product version of OpenShift Container Platform, <br>such as `4.8`, `4.9`, `4.10`, or `4.11`. <br>Note that RHCOS is not updated for every minor version; available `openshift_version` values are listed [here](https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/)| |
| `openshift_minor_version` |3| The minor version of OpenShift Container Platform, <br>such as `3`. The `latest` tag is supported to install the latest minor version under `openshift_version`. <br>You can inspect which minor releases are available [here](https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/) |
| `auto_allocated_ip` |true|(Boolean) true or false, if false, <br>IPs will be allocated from `allocation_pool_start` and `allocation_pool_end` |
| `os_flavor_bootstrap` | medium| `openstack flavor list`, Minimum flavor disk size >= 35 GiB | |
| `os_flavor_master` | medium| `openstack flavor list`, Minimum flavor disk size >= 35 GiB | |
@@ -284,7 +282,12 @@ Others are **optional**, you can enable them and update value if you need more s
| `http_proxy` |\<http-proxy\>| `http://<username>:<pswd>@<ip>:<port>`, a proxy URL to use for creating HTTP connections outside the cluster. <br>**required** when `use_proxy` is true
| `https_proxy` |\<https-proxy\>| `http://<username>:<pswd>@<ip>:<port>`, a proxy URL to use for creating HTTPS connections outside the cluster <br>**required** when `use_proxy` is true
| `no_proxy` |\<https-proxy\>| A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to include all subdomains of that domain. Use * to bypass proxy for all destinations. <br>Such as: `'127.0.0.1,169.254.169.254,172.26.0.0/17,172.30.0.0/16,10.0.0.0/16,10.128.0.0/14,localhost,.api-int.,.example.com.'`
| `approve_nodes_csr` |10| Default is 10 minutes; the time to wait for node CSRs to be approved
| `use_localreg` |false| (Boolean) true or false; if true, OpenShift Container Platform downloads its packages from local sources
| `localreg_mirror` |\<local-mirror-registry\>| The name of the local mirror registry used to mirror the required container images of OpenShift Container Platform for disconnected installations. Follow this [guide](https://docs.openshift.com/container-platform/4.10/installing/disconnected_install/installing-mirroring-installation-images.html) to set up the mirror registry. We also offer a temporary script to set up the registry and mirror the images; you can get the scripts from [mirror-registry](tools/mirror-registry/). Update the correct `PULL_SECRET` and `VERSION` in the `01-mirror-registry.sh` script before using it.
| `local_openshift_install` |\<local-openshift-install-url\>| The latest installer download is [here](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-install-linux.tar.gz); host the OpenShift installation package on an SSH or HTTP server and put its link here
| `local_openshift_client` |\<local-openshift-client-url\>| The latest client download is [here](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz); host the OpenShift client package on an SSH or HTTP server and put its link here
| `local_rhcos_image` |\<local-rhcos-image-url\>| All RHCOS image downloads are [here](https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/latest/); download the image that corresponds to KVM or z/VM, host it on an SSH or HTTP server, and put its link here
| `additional_certs` |`{{ lookup('file', '/opt/registry/certs/domain.crt') \| indent (width=2) }}`| The local mirror registry additionally needs an SSL certificate to be accessed; the certificate file can be added via the `additional_certs` variable
| `create_server_timeout` |10| Default is 10 minutes; the timeout for creating instances and volumes from the backend storage provider
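Pulling the required properties together, the vars in `inventory.yaml` might look like the following sketch (every value is illustrative; the surrounding inventory structure is assumed, and real IDs come from your `openstack` CLI output):

```yaml
# Illustrative values only -- replace with output from `openstack network list`,
# `openstack flavor list`, etc. in your ICIC environment
vm_type: kvm                    # or zvm
disk_type: dasd                 # or scsi
openshift_version: "4.11"
openshift_minor_version: latest # or a specific minor such as 3
auto_allocated_ip: true
os_flavor_bootstrap: medium     # flavor disk size >= 35 GiB
os_flavor_master: medium
use_localreg: false             # true for disconnected installs using a local mirror
```

When `use_localreg` is true, the `localreg_mirror`, `local_openshift_install`, `local_openshift_client`, and `local_rhcos_image` properties above become relevant as well.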

## Creation of the cluster
@@ -310,6 +313,12 @@ ansible-playbook -i inventory.yaml configure-haproxy.yaml
```sh
ansible-playbook -i inventory.yaml bastion.yaml
```
> If you don't have an existing DNS server or Load Balancer and are using a non-root user, run the command below and enter the password for your user.
```sh
ansible-playbook -i inventory.yaml bastion.yaml -K
```

3. **Step3**:
```sh
@@ -337,34 +346,38 @@ After above steps, you will get one ready OpenShift Container Platform on the IB

## Day 2 Operations

### Add a new compute node
Use this playbook to add new compute node as allocated IP:
### Add compute node
Use this playbook to add a new compute node as allocated IP:
```sh
ansible-playbook -i inventory.yaml add-new-compute-node.yaml
```
Use this playbook to add new compute node as fixed IP:
Use this playbook to add a new compute node as fixed IP:
```sh
ansible-playbook -i inventory.yaml add-new-compute-node.yaml -e ip=x.x.x.xs
ansible-playbook -i inventory.yaml add-new-compute-node.yaml -e ip=x.x.x.x
```
**Please notice:**
The corresponding DNS and Load Balancer records should be updated for the new compute node. If you use your own existing DNS server and Load Balancer for the Red Hat OpenShift installation, you may skip this part.
* If you use our `bastion.yaml` playbook to configure the DNS server and Load Balancer, you can use this playbook to update those two directly.
Use this playbook to add multiple compute nodes as allocated IP, and update bastion info automatically:
```sh
ansible-playbook -i inventory.yaml modify-bastion.yaml
ansible-playbook -i inventory.yaml add-new-compute-node.yaml -e worker_number=3 -e update_bastion=true
```
* If you use our `configure-haproxy.yaml` playbook to configure the Load Balancer, you can use this playbook to update HAProxy too.
Use this playbook to add multiple compute nodes as fixed IP, separate the IP list with commas, and update bastion info automatically:
```sh
ansible-playbook -i inventory.yaml modify-haproxy.yaml
```
* If you use our `configure-dns.yaml` playbook to configure the DNS server, you can use this playbook to update DNS too.
```sh
ansible-playbook -i inventory.yaml modify-dns.yaml
ansible-playbook -i inventory.yaml add-new-compute-node.yaml -e ip=x.x.x.x,x.x.x.x -e worker_number=2 -e update_bastion=true
```
**Please notice:**
> If you use your own bastion server, you can refer to [Add-DNS-HAProxy](docs/add-dns-haproxy.md) to update the bastion info.
## Uninstall Red Hat OpenShift Container Platform

`ansible-playbook -i inventory.yaml 04-destroy.yaml`

## Remove RHCOS images
To save image space, our playbook does not delete the uploaded image automatically; use this standalone playbook to remove it:
`ansible-playbook -i inventory.yaml destroy-images.yaml`

We store the SHA256 value in the image properties to verify downloaded images; the SHA256 comes from the `.gz` packages.
```
| owner_specified.openstack.object | images/rhcos |
| owner_specified.openstack.sha256 | fc265b2d5b6c9f6d175e8b15f672aba78f6e4707875f9accaa2cb74e3d90d27b
```
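You can compute the same checksum locally before uploading; the snippet demonstrates the calculation on a stand-in file (the real input is your downloaded RHCOS `.gz` package, whose name will differ):

```shell
# Stand-in for the downloaded RHCOS .gz package
echo "demo image bytes" > rhcos-demo.gz

# The value stored in owner_specified.openstack.sha256 is the SHA256 of the .gz file
sha256sum rhcos-demo.gz | awk '{print $1}'
```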

## Copyright
© Copyright IBM Corporation 2021