- CodeReady Containers / OpenShift 4.1
- Ansible Agnostic Deployer
- OpenDataHub
- Process Automation Manager
- Change Data Capture with Debezium and Kafka
- Tekton Pipelines
YouTube playlist: https://www.youtube.com/playlist?list=PLg1pvyPzFye2UtQjZTSjoXhFdqkGK6exw
Red Hat CodeReady Containers brings a minimal OpenShift 4.1 or newer cluster to your local computer (see https://code-ready.github.io/crc/)
Ansible Agnostic Deployer, aka AgnosticD, is a fully automated, two-phase deployer for building and deploying everything from basic infrastructure to fully configured, running application environments on either public cloud providers or OpenShift clusters.
AgnosticD is not just an OpenShift deployer; it does deploy a lot of OpenShift clusters and OpenShift workloads, but it deploys plenty of other things too. See https://github.com/redhat-cop/agnosticd
Open Data Hub is a machine-learning-as-a-service platform built on Red Hat's Kubernetes-based OpenShift® Container Platform, Ceph Object Storage, and Kafka/Strimzi, integrating a collection of open source projects. See https://opendatahub.io/
The instructions below cover:
- installing Red Hat CodeReady Containers on a Fedora 30 physical server
- using Ansible Agnostic Deployer to deploy OpenDataHub on OpenShift 4.1
- configuring SSH tunneling / port forwarding to access the OpenShift console, OpenDataHub, etc. from your laptop.
[marc@fedora30 ~]$ cat /etc/fedora-release
Fedora release 30 (Thirty)
If you need to grow the root filesystem first, see https://gist.github.com/181192/cf7eb42a25538ccdb8d0bb7dd57cf236. Example:
sudo lvresize -An -L +10G --resizefs /dev/mapper/fedora_dell--per415--03-root
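Before running a resize like that, it's worth checking how much free space the volume group actually has. A quick check (the volume group and device names will differ on your machine):
sudo vgs    # the VFree column shows unallocated space in the volume group
df -h /     # current root filesystem size and usage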
dnf config-manager --set-enabled fedora
su -c 'dnf -y install git wget tar qemu-kvm libvirt NetworkManager jq libselinux-python'
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
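To confirm libvirt is up before continuing, a quick check:
systemctl is-active libvirtd                # should print "active"
sudo virsh -c qemu:///system list --all     # should connect and list VMs (none yet)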
python3 --version
Python 3.7.4
Create a Python virtual environment; we will use it later when we set up Ansible Agnostic Deployer.
python3 -m venv agnosticd
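Activate it whenever you want its Python and pip on your PATH (the agnosticd directory is the one just created above):
source agnosticd/bin/activate
python --version    # should report Python 3.7.x from the venv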
su demouser
cd /home/demouser
# download the Go 1.12.9 tarball first, e.g. from https://golang.org/dl/
tar -xzf go1.12.9.linux-amd64.tar.gz
Add to .bashrc:
export GOROOT=/home/demouser/go
export GOPATH=/home/demouser/work
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
source ~/.bashrc
Verify the installation:
go version
go version go1.12.9 linux/amd64
su demouser
cd /home/demouser
wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz
tar -xvf crc-linux-amd64.tar.xz
cd crc-linux-1.0.0-beta.5-amd64/
sudo cp ./crc /usr/bin
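Verify the binary is on your PATH:
crc version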
Set the memory available to CRC according to what your physical server has. I'm on a server with around 100G of memory, so I allocate 80G to CRC as follows:
crc config set memory 81920
crc config set cpus <cpus>
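You can confirm the settings before starting; crc config view prints the current values (the exact output format varies by release):
crc config view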
crc setup
crc start
Increase disk size available to CodeReady Containers (see crc-org/crc#127)
sudo dnf -y install libguestfs libguestfs-tools
export CRC_MACHINE_IMAGE=${HOME}/.crc/machines/crc/crc
# example: export CRC_MACHINE_IMAGE=/home/demouser/.crc/machines/crc/crc
crc stop
# adding 500G
sudo qemu-img resize ${CRC_MACHINE_IMAGE} +500G
sudo cp ${CRC_MACHINE_IMAGE} ${CRC_MACHINE_IMAGE}.ORIGINAL
sudo virt-resize --expand /dev/sda3 ${CRC_MACHINE_IMAGE}.ORIGINAL ${CRC_MACHINE_IMAGE}
sudo rm ${CRC_MACHINE_IMAGE}.ORIGINAL
crc setup
crc start
crc status
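If you want to double-check which partition holds the root filesystem before the virt-resize step above, libguestfs can list the partitions inside the image (run it while the VM is stopped; /dev/sda3 was correct for this bundle, but verify on yours):
sudo virt-filesystems --long --parts -h -a ${CRC_MACHINE_IMAGE}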
Expected result:
crc status
CRC VM:          Running
OpenShift:       Running (v4.2.0-0.nightly-2019-09-26-192831)
Disk Usage:      33.88GB of 569.1GB (Inside the CRC VM)
Cache Usage:     15.95GB
Cache Directory: /home/demouser/.crc/cache
Output of crc setup:
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Caching oc binary
INFO Setting up virtualization
INFO Setting up KVM
INFO Installing libvirt service and dependencies
INFO Adding user to libvirt group
INFO Enabling libvirt
INFO Starting libvirt service
INFO Installing crc-driver-libvirt
INFO Removing older system-wide crc-driver-libvirt
INFO Setting up libvirt 'crc' network
INFO Starting libvirt 'crc' network
INFO Writing Network Manager config for crc
INFO Writing dnsmasq config for crc
INFO Unpacking bundle from the CRC binary
You’ll need your pull secret from https://cloud.redhat.com/openshift/install/metal/user-provisioned
crc start
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if oc binary is cached
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if libvirt is enabled
INFO Checking if libvirt daemon is running
INFO Checking if crc-driver-libvirt is installed
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
INFO Checking if CRC bundle is cached in '$HOME/.crc'
? Image pull secret [? for help]
INFO To access the cluster using 'oc', run 'eval $(crc oc-env) && oc login -u kubeadmin -p <password> https://api.crc.testing:6443'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps-crc.testing
INFO Login to the console with user: kubeadmin, password: <password>
sudo cp /home/demouser/.crc/bin/oc /usr/bin
eval $(crc oc-env) && oc login -u kubeadmin -p <password> https://api.crc.testing:6443
If you get "Error from server (InternalError): Internal error occurred: unexpected response: 503" when running oc login, the cluster is probably still starting up; wait a few minutes and try again.
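A quick sanity check once the login succeeds:
oc whoami     # should print kubeadmin
oc version    # client and server versions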
oc get projects
NAME                                                     DISPLAY NAME   STATUS
default                                                                 Active
kube-public                                                             Active
kube-system                                                             Active
openshift                                                               Active
openshift-apiserver                                                     Active
openshift-apiserver-operator                                            Active
openshift-authentication                                                Active
openshift-authentication-operator                                       Active
openshift-cloud-credential-operator                                     Active
openshift-cluster-machine-approver                                      Active
openshift-cluster-node-tuning-operator                                  Active
openshift-cluster-samples-operator                                      Active
openshift-cluster-storage-operator                                      Active
openshift-cluster-version                                               Active
openshift-config                                                        Active
openshift-config-managed                                                Active
openshift-console                                                       Active
openshift-console-operator                                              Active
openshift-controller-manager                                            Active
openshift-controller-manager-operator                                   Active
openshift-dns                                                           Active
openshift-dns-operator                                                  Active
openshift-etcd                                                          Active
openshift-image-registry                                                Active
openshift-infra                                                         Active
openshift-ingress                                                       Active
openshift-ingress-operator                                              Active
openshift-kube-apiserver                                                Active
openshift-kube-apiserver-operator                                       Active
openshift-kube-controller-manager                                       Active
openshift-kube-controller-manager-operator                              Active
openshift-kube-scheduler                                                Active
openshift-kube-scheduler-operator                                       Active
openshift-machine-api                                                   Active
openshift-machine-config-operator                                       Active
openshift-marketplace                                                   Active
openshift-monitoring                                                    Active
openshift-multus                                                        Active
openshift-network-operator                                              Active
openshift-node                                                          Active
openshift-operator-lifecycle-manager                                    Active
openshift-operators                                                     Active
openshift-sdn                                                           Active
openshift-service-ca                                                    Active
openshift-service-ca-operator                                           Active
openshift-service-catalog-apiserver-operator                            Active
openshift-service-catalog-controller-manager-operator                   Active
su demouser
cd /home/demouser
git clone https://github.com/redhat-cop/agnosticd.git
cd agnosticd/ansible
python -m pip install --upgrade --trusted-host files.pythonhosted.org -r requirements.txt
python3 -m pip install --upgrade --trusted-host files.pythonhosted.org -r requirements.txt
pip3 install kubernetes
pip3 install openshift
pip install kubernetes
pip install openshift
cat inventory
127.0.0.1 ansible_connection=local
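If the inventory file does not exist yet, create it with that single localhost entry:
cat > inventory <<'EOF'
127.0.0.1 ansible_connection=local
EOF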
export WORKLOAD="ocp4-workload-open-data-hub"
sudo cp /usr/local/bin/ansible-playbook /usr/bin/ansible-playbook
We are only deploying one Open Data Hub project, so we set user_count to 1.
eval $(crc oc-env) && oc login -u kubeadmin -p 78UVa-zNj5W-YB62Z-ggxGZ https://api.crc.testing:6443
ansible-playbook -i inventory ./configs/ocp-workloads/ocp-workload.yml \
  -e"ocp_workload=${WORKLOAD}" -e"ACTION=create" -e"user_count=1" \
  -e"ocp_username=kubeadmin" -e"ansible_become_pass=<password>" -e"silent=False"
PLAY RECAP ********************************************************************
127.0.0.1 : ok=42 changed=18 unreachable=0 failed=0 skipped=1 rescued=1 ignored=0

Friday 30 August 2019 03:47:30 -0400 (0:00:00.015) 0:04:03.377 *********
===============================================================================
ocp4-workload-open-data-hub : wait for deploy rook-ceph-rgw-my-store ------------------ 153.90s
ocp4-workload-open-data-hub : wait for deploy ------------------------------------------ 34.09s
ocp4-workload-open-data-hub : get route for jupyterhub --------------------------------- 33.67s
ocp4-workload-open-data-hub : Curling ODH resources ------------------------------------- 5.35s
ocp4-workload-open-data-hub : Curling Rook resources ------------------------------------ 4.54s
ocp4-workload-open-data-hub : apply scc.yaml, operator.yaml ----------------------------- 1.47s
ocp4-workload-open-data-hub : apply toolbox.yaml object.yaml ---------------------------- 1.04s
ocp4-workload-open-data-hub : create temp directory ------------------------------------- 0.74s
ocp4-workload-open-data-hub : Apply opendatahub_v1alpha1_opendatahub_crd.yaml ----------- 0.74s
ocp4-workload-open-data-hub : new-obtain {{ item }} secrets ----------------------------- 0.68s
ocp4-workload-open-data-hub : Applying cluster.yaml ------------------------------------- 0.67s
ocp4-workload-open-data-hub : apply role.yaml ------------------------------------------- 0.60s
ocp4-workload-open-data-hub : create the Project ---------------------------------------- 0.58s
ocp4-workload-open-data-hub : create ODH Custom Resource object ------------------------- 0.55s
ocp4-workload-open-data-hub : modify and apply rook object-user.yaml for {{ item }} ----- 0.53s
ocp4-workload-open-data-hub : apply service_account.yaml -------------------------------- 0.53s
ocp4-workload-open-data-hub : apply ODH custom resource object customization ------------ 0.53s
ocp4-workload-open-data-hub : apply operator.yaml --------------------------------------- 0.53s
ocp4-workload-open-data-hub : apply role_binding.yaml ----------------------------------- 0.52s
ocp4-workload-open-data-hub : wait for rook-ceph-mon/osd a to get to status of running -- 0.49s
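When you are done with the environment, the same playbook can tear the workload down again. AgnosticD workloads conventionally take ACTION=remove as the counterpart to ACTION=create (check the workload's README to confirm):
ansible-playbook -i inventory ./configs/ocp-workloads/ocp-workload.yml \
  -e"ocp_workload=${WORKLOAD}" -e"ACTION=remove" -e"user_count=1" \
  -e"ocp_username=kubeadmin" -e"ansible_become_pass=<password>" -e"silent=False"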
oc project open-data-hub-user1
oc get pods
NAME                                                         READY   STATUS      RESTARTS   AGE
jupyterhub-1-7q4zs                                           1/1     Running     0          49m
jupyterhub-1-deploy                                          0/1     Completed   0          49m
jupyterhub-db-1-deploy                                       0/1     Completed   0          49m
jupyterhub-db-1-rttgz                                        1/1     Running     1          49m
jupyterhub-nb-c455c922-2d4e64-2d4d66-2db463-2d066ac236166f   1/1     Running     0          28m
opendatahub-operator-86c5cb8b4b-l5cg6                        1/1     Running     0          50m
spark-operator-6b46b4d97-8mv92                               1/1     Running     0          49m
[marc@dell-r730-019 crc]$ oc get route
NAME         HOST/PORT                                         PATH   SERVICES     PORT       TERMINATION     WILDCARD
jupyterhub   jupyterhub-open-data-hub-user1.apps-crc.testing          jupyterhub   8080-tcp   edge/Redirect   None
On your laptop, add jupyterhub-open-data-hub-user1.apps-crc.testing to your /etc/hosts. Example:
127.0.0.1 localhost marc.rhel8 console-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing mapit-app-management.apps-crc.testing mapit-spring-pipeline-demo.apps-crc.testing jupyterhub-open-data-hub-user1.apps-crc.testing
On your laptop, forward local port 443 to the route over SSH (sudo is needed to bind the privileged port): sudo ssh marc@fedora30 -L 443:jupyterhub-open-data-hub-user1.apps-crc.testing:443
You can now browse to https://jupyterhub-open-data-hub-user1.apps-crc.testing
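To verify the tunnel from the laptop before opening a browser (-k skips certificate verification, since the route uses a self-signed certificate):
curl -k -I https://jupyterhub-open-data-hub-user1.apps-crc.testing/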
Now that we have Open Data Hub ready, let’s deploy something interesting on it.
Here is an example with IBM's Qiskit open-source framework for quantum computing:
Video no. 9 at https://lnkd.in/g9gmhQQ
GitHub repo at https://lnkd.in/gS7Mcc9
For details and videos on how to deploy many more solutions on OpenShift 4.1 / CodeReady Containers, please refer to
https://bit.ly/marcredhat and https://bit.ly/marcredhatplaylist