
Kubernetes-Cluster

The instructions below describe the steps to build Kubernetes version v1.9.3 on Linux on IBM Z for the following distributions (provided for reference only):

  • RHEL 7.3
  • SLES 12 SP2
  • Ubuntu 16.04

General Notes:

  • When following the steps below, use a user with superuser (root) permissions unless otherwise specified.
  • These instructions refer to a directory /<source_root>/; this is a temporary writable directory that you can place anywhere you like.

Prerequisites:

  • Go (refer to the Go recipe)
  • Docker (for RHEL only; refer to the instructions mentioned here). Make sure that your Docker version is supported by this Kubernetes version.
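
A quick way to confirm that the prerequisites are in place (the exact version output will vary with your installation):

go version
docker --version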

Step 1: Building Kubernetes

1.1) Install the following dependencies

  • RHEL 7.3

    sudo yum install git gcc-c++ which iptables make 
  • SLES 12 SP2

    sudo zypper install git gcc-c++ which iptables make docker
  • Ubuntu 16.04

    sudo apt-get update
    sudo apt-get install git make iptables gcc wget tar flex subversion binutils-dev bzip2 build-essential vim docker.io

1.2) Set environment variables

export GOPATH=/<source_root>/

1.3) Download the s390x tar file (you can also download the latest Kubernetes binary release)

cd /<source_root>/
wget https://dl.k8s.io/v1.9.3/kubernetes-server-linux-s390x.tar.gz
tar -xvf kubernetes-server-linux-s390x.tar.gz
export PATH=$GOPATH/kubernetes/server/bin/:$PATH
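
To verify that the extracted binaries are on the PATH, you can check their versions (output will vary with the release downloaded):

kubectl version --client
kubelet --version
kubeadm version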

1.4) Install etcd

Instructions for building etcd can be found here. Make sure the etcd binary is available in the PATH environment variable, then set the following environment variables:

  export ETCD_UNSUPPORTED_ARCH=s390x
  export KUBE_ENABLE_CLUSTER_DNS=true
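
You can confirm that etcd is reachable on the PATH with, for example:

which etcd
etcd --version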

1.5) Install the flannel CNI plugin

The flannel CNI plugin can be found in the CNI plugins repository.

Use the following steps to build the CNI plugins:

git clone https://github.com/containernetworking/cni
cd cni 
git checkout v0.5.2
./build.sh

Binaries will be created in bin; copy them to the CNI binary directory /opt/cni/bin:

cp bin/* /opt/cni/bin
mkdir -p /etc/cni/net.d
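
At this point /opt/cni/bin should contain the plugin binaries built above (the exact set, such as flannel, bridge, host-local and loopback, depends on the CNI release):

ls /opt/cni/bin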

1.6) Set up the kubelet service

  • Use the kubelet binary from the installed Kubernetes.
ln -s /<kubernetes_install_dir>/server/bin/kubelet /usr/bin/kubelet

Create /etc/systemd/system/kubelet.service with the following content (adjust site-specific paths such as --root-dir as needed):

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStart=/usr/bin/kubelet \
--kubeconfig=/etc/kubernetes/kubelet.conf \
--require-kubeconfig=true \
--root-dir=/scratch/ecos0031/snehal \
--pod-manifest-path=/etc/kubernetes/manifests \
--allow-privileged=true \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/cni/bin \
--v=4
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Reload systemd so that the new unit file is picked up:

systemctl daemon-reload
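
The kubelet can then be enabled and started. Note that it may restart repeatedly until kubeadm generates /etc/kubernetes/kubelet.conf in the next step:

systemctl enable kubelet
systemctl start kubelet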

1.7) Create the cluster using kubeadm

Note: Follow the instructions from here.

  • Installing kubeadm

Add kubeadm and kubectl to the PATH variable, for example as shown below.
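
One possible way to do this, assuming the server tarball from step 1.3 was extracted under /<source_root>/kubernetes (the target directory /usr/bin is illustrative):

ln -s /<source_root>/kubernetes/server/bin/kubeadm /usr/bin/kubeadm
ln -s /<source_root>/kubernetes/server/bin/kubectl /usr/bin/kubectl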

  • Initializing master
kubeadm init --pod-network-cidr=10.244.0.0/16

The output should look like:

[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING FileExisting-ebtables]: ebtables not found in system path
        [WARNING FileExisting-socat]: socat not found in system path
        [WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ecos0034 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 9.12.19.117]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 40.512118 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ecos0034 as master by adding a label and a taint
[markmaster] Master ecos0034 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 6f55c3.e9f509d40a2dad6d
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 6f55c3.e9f509d40a2dad6d 9.12.19.117:6443 --discovery-token-ca-cert-hash sha256:d1407bbfa448f6c3f1f1abc85de7f86e8617a060133db380df47af64ac4a0092

  • Installing a pod network (flannel)

Edit the kube-flannel.yml file from https://github.com/coreos/flannel/blob/v0.8.0/Documentation/kube-flannel.yml as follows:

@@ -47,7 +47,7 @@
     spec:
       hostNetwork: true
       nodeSelector:
-        beta.kubernetes.io/arch: amd64
+        beta.kubernetes.io/arch: s390x
       tolerations:
       - key: node-role.kubernetes.io/master
         operator: Exists
@@ -55,7 +55,7 @@
       serviceAccountName: flannel
       containers:
       - name: kube-flannel
-        image: quay.io/coreos/flannel:v0.8.0-amd64
+        image: quay.io/coreos/flannel:v0.8.0-s390x
         command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
         securityContext:
           privileged: true
@@ -74,7 +74,7 @@
         - name: flannel-cfg
           mountPath: /etc/kube-flannel/
       - name: install-cni
-        image: quay.io/coreos/flannel:v0.8.0-amd64
+        image: quay.io/coreos/flannel:v0.8.0-s390x
         command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
         volumeMounts:
         - name: cni
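
Since the only changes are the architecture strings, a possible shortcut once the file has been saved locally as kube-flannel.yml (filename assumed) is:

sed -i 's/amd64/s390x/g' kube-flannel.yml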

To start using your cluster, run the following as a regular user:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Install the flannel network:

kubectl apply -f kube-flannel.yml 
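
If this is a single-node cluster and you also want regular workloads scheduled on the master, the master taint can optionally be removed (this is not required for the steps above):

kubectl taint nodes --all node-role.kubernetes.io/master-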

You can confirm that it is working by checking that the kube-dns pod is in the Running state in the output of kubectl get pods --all-namespaces.
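
For example, once the pod network is up the node should report Ready and kube-dns should be Running:

kubectl get nodes
kubectl get pods --all-namespaces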

Some basic commands:

To check services, run: kubectl get services
To check deployments, run: kubectl get deployments
To delete a service, run: kubectl delete svc <service-name>
To delete a deployment, run: kubectl delete deployment <deployment-name>
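
As a quick end-to-end check of these commands (the nginx image is used purely as an illustration; any image available for s390x will do), you could create and then clean up a small test deployment:

kubectl run nginx --image=nginx --replicas=2
kubectl get deployments
kubectl expose deployment nginx --port=80
kubectl get services
kubectl delete svc nginx
kubectl delete deployment nginx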