- Versions of all the components used
- KVM Setup
- Setting up the VMs before installing the components
- Installing Kubernetes binaries on the Nodes
- Creating Certificates & Authentication to enable secure communication across all the Kubernetes components
- Setting up Etcd
- Setting up Flannel
- Setting up Kubernetes Components
- Testing the cluster
- Troubleshooting
- References
Note: This document describes running a two-node (1 Master & 1 Worker) Kubernetes cluster on the LinuxONE cloud using the LinuxONE-supported Linux distribution SUSE Linux Enterprise Server 12 SP3.
The cluster uses Docker as the container runtime, the components are installed as systemd services, and Flannel provides the overlay network. The versions of the components used are as follows:
Component/Infrastructure | Version
---|---
LinuxONE Instance flavour | LinuxONE-Medium
LinuxONE Instance OS | SLES12SP3
SLES VMs | SLES12SP3
Go | 1.10.2
Kubernetes | 1.9.8
Docker | 17.09.1-ce
Etcd | 3.3.6
Flannel | 0.10.0
Refer to this gist to set up virtual machines using KVM on the LinuxONE Community Cloud.
- Create two virtual machines after setting up KVM as explained above and install SLES 12 SP3 on both the nodes.
- SSH into the nodes and follow the steps below on the node(s) specified for each step.
- Note down the IP addresses of both the nodes. In the context of this document, the IP addresses of the nodes are as follows:

Node | IP
---|---
Master | 192.168.100.230
Worker | 192.168.100.169
- Make sure the firewall is disabled while installing the virtual machines; if it is not, disable it by running `SuSEfirewall2 off`.
- Make sure to set the hostname of both machines to differentiate one from the other. Preferably name the machines k8s-master for the Master node and k8s-worker for the Worker node. The hostname can be set using the command `hostnamectl set-hostname <hostname>`.
- Install Go on the Master node. Go is needed to build etcd in a later stage.
```
wget https://dl.google.com/go/go1.10.2.linux-s390x.tar.gz
tar -C /usr/local -xzf go1.10.2.linux-s390x.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile
source /etc/profile
```
- By default SLES comes with vim and net-tools installed, but git does not, so install git on both the nodes. Vim is needed to edit files (the preinstalled nano can also be used), git is needed to download source code for compilation in later stages, and net-tools is needed to test and troubleshoot flannel.
```
zypper install git-core
```
- Install docker on the Worker node(s). To install docker on SLES, it is necessary to register the machine and add the add-on module Containers Module 12 s390x to the existing list of repositories. There are two ways to add this repository: one is through yast, the other is using SUSEConnect:
```
sudo SUSEConnect -p sle-module-containers/12/s390x -r ''
```
Then run the following commands to install and start docker:
```
sudo zypper install -y docker
sudo systemctl enable docker.service
sudo systemctl start docker.service
```
- Create the below folders in order to save certificates & configuration files later.
- On the Master Node:
```
sudo mkdir -p /srv/kubernetes
sudo mkdir -p /var/lib/{kube-controller-manager,kube-scheduler}
sudo mkdir /var/lib/flanneld/
```
- On the Worker Node(s):
```
sudo mkdir -p /srv/kubernetes
sudo mkdir -p /var/lib/{kube-proxy,kubelet}
sudo mkdir /var/lib/flanneld/
```
- Kubernetes is not currently designed to work with swap memory on. Disable swap on all the nodes by running:
```
swapoff -a
sudo sed -i 's/.*swap/#&/' /etc/fstab
```
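As a quick optional check that swap is really off on each node:
```
free -m          # the Swap line should show 0 total
cat /proc/swaps  # should list no active swap devices
```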
It is important to set `KUBERNETES_SERVER_ARCH` to `s390x` in this step. Run the following on the Master node:
```
cd ~/Downloads
wget https://dl.k8s.io/v1.9.8/kubernetes.tar.gz
tar -xvf kubernetes.tar.gz
cd kubernetes/cluster
export KUBERNETES_SERVER_ARCH=s390x
./get-kube-binaries.sh
cd ../server
tar -xvf kubernetes-server-linux-s390x.tar.gz
sudo cp kubernetes/server/bin/{kubectl,kube-apiserver,kube-controller-manager,kube-scheduler} /usr/local/bin
sudo scp kubernetes/server/bin/{kubelet,kube-proxy} root@192.168.100.169:/usr/local/bin
```
Alternatively, the binaries can be downloaded directly from the official googleapis links. Run the following commands on the Master node:
```
cd ~/Downloads
wget https://storage.googleapis.com/kubernetes-release/release/v1.9.8/bin/linux/s390x/kubectl
wget https://storage.googleapis.com/kubernetes-release/release/v1.9.8/bin/linux/s390x/kube-apiserver
wget https://storage.googleapis.com/kubernetes-release/release/v1.9.8/bin/linux/s390x/kube-controller-manager
wget https://storage.googleapis.com/kubernetes-release/release/v1.9.8/bin/linux/s390x/kube-scheduler
wget https://storage.googleapis.com/kubernetes-release/release/v1.9.8/bin/linux/s390x/kubelet
wget https://storage.googleapis.com/kubernetes-release/release/v1.9.8/bin/linux/s390x/kube-proxy
sudo cp kubectl kube-apiserver kube-controller-manager kube-scheduler /usr/local/bin
sudo scp kubelet kube-proxy root@192.168.100.169:/usr/local/bin
```
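If the binaries are fetched individually with wget as above, they are not marked executable by default, so make them executable before copying them into place:
```
cd ~/Downloads
chmod +x kubectl kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
```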
Creating Certificates & Authentication to enable secure communication across all the Kubernetes components
Run all the following steps on the Master node to generate the files, and then copy the specified certs and config files to the worker node(s).
```
cd /srv/kubernetes
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
```
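Optionally, verify the subject and validity period of the generated CA:
```
openssl x509 -in ca.pem -noout -subject -dates
```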
```
cat > openssl.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 127.0.0.1
IP.2 = 192.168.100.230 # Master IP
EOF
```
```
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out apiserver.pem -days 7200 -extensions v3_req -extfile openssl.cnf
cp apiserver.pem server.crt
cp apiserver-key.pem server.key
```
```
# admin client certificate
openssl genrsa -out admin-key.pem 2048
openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=admin"
openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 7200

# kube-proxy client certificate
openssl genrsa -out kube-proxy-key.pem 2048
openssl req -new -key kube-proxy-key.pem -out kube-proxy.csr -subj "/CN=kube-proxy"
openssl x509 -req -in kube-proxy.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out kube-proxy.pem -days 7200

# kubelet client certificate
openssl genrsa -out kubelet-key.pem 2048
openssl req -new -key kubelet-key.pem -out kubelet.csr -subj "/CN=kubelet"
openssl x509 -req -in kubelet.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out kubelet.pem -days 7200

# kube-controller-manager client certificate
openssl genrsa -out kube-controller-manager-key.pem 2048
openssl req -new -key kube-controller-manager-key.pem -out kube-controller-manager.csr -subj "/CN=kube-controller-manager"
openssl x509 -req -in kube-controller-manager.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out kube-controller-manager.pem -days 7200

# kube-scheduler client certificate
openssl genrsa -out kube-scheduler-key.pem 2048
openssl req -new -key kube-scheduler-key.pem -out kube-scheduler.csr -subj "/CN=kube-scheduler"
openssl x509 -req -in kube-scheduler.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out kube-scheduler.pem -days 7200
```
```
cat > worker-openssl.cnf << EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.100.169
EOF

openssl genrsa -out ubuntu-worker-key.pem 2048
WORKER_IP=192.168.100.169 openssl req -new -key ubuntu-worker-key.pem -out ubuntu-worker.csr \
  -subj "/CN=ubuntu" -config worker-openssl.cnf
WORKER_IP=192.168.100.169 openssl x509 -req -in ubuntu-worker.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out ubuntu-worker.pem -days 7200 -extensions v3_req -extfile worker-openssl.cnf
```
```
cat > etcd-openssl.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth,serverAuth
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.100.230
EOF

openssl genrsa -out etcd.key 2048
openssl req -new -key etcd.key -out etcd.csr -subj "/CN=etcd" -extensions v3_req -config etcd-openssl.cnf -sha256
openssl x509 -req -sha256 -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -in etcd.csr -out etcd.crt -extensions v3_req -extfile etcd-openssl.cnf -days 7200
```
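The Worker node needs its own copies of some of these files later: the flanneld unit references ca.pem and the etcd client certificate and key, and the kubelet unit references server.crt and server.key, all under /srv/kubernetes. A minimal sketch of copying them over, assuming root SSH access between the nodes as used elsewhere in this guide:
```
scp /srv/kubernetes/{ca.pem,etcd.crt,etcd.key,server.crt,server.key} root@192.168.100.169:/srv/kubernetes/
```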
```
TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
kubectl config set-cluster linux1.k8s --certificate-authority=/srv/kubernetes/ca.pem --embed-certs=true --server=https://192.168.100.230:6443
kubectl config set-credentials admin --client-certificate=/srv/kubernetes/admin.pem --client-key=/srv/kubernetes/admin-key.pem --embed-certs=true --token=$TOKEN
kubectl config set-context linux1.k8s --cluster=linux1.k8s --user=admin
kubectl config use-context linux1.k8s
cat ~/.kube/config # Create config file
```
```
TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
kubectl config set-cluster linux1.k8s --certificate-authority=/srv/kubernetes/ca.pem --embed-certs=true --server=https://192.168.100.230:6443 --kubeconfig=/var/lib/kube-controller-manager/kubeconfig
kubectl config set-credentials kube-controller-manager --client-certificate=/srv/kubernetes/kube-controller-manager.pem --client-key=/srv/kubernetes/kube-controller-manager-key.pem --embed-certs=true --token=$TOKEN --kubeconfig=/var/lib/kube-controller-manager/kubeconfig
kubectl config set-context linux1.k8s --cluster=linux1.k8s --user=kube-controller-manager --kubeconfig=/var/lib/kube-controller-manager/kubeconfig
kubectl config use-context linux1.k8s --kubeconfig=/var/lib/kube-controller-manager/kubeconfig
```
```
TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
kubectl config set-cluster linux1.k8s --certificate-authority=/srv/kubernetes/ca.pem --embed-certs=true --server=https://192.168.100.230:6443 --kubeconfig=/var/lib/kube-scheduler/kubeconfig
kubectl config set-credentials kube-scheduler --client-certificate=/srv/kubernetes/kube-scheduler.pem --client-key=/srv/kubernetes/kube-scheduler-key.pem --embed-certs=true --token=$TOKEN --kubeconfig=/var/lib/kube-scheduler/kubeconfig
kubectl config set-context linux1.k8s --cluster=linux1.k8s --user=kube-scheduler --kubeconfig=/var/lib/kube-scheduler/kubeconfig
kubectl config use-context linux1.k8s --kubeconfig=/var/lib/kube-scheduler/kubeconfig
```
```
TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
kubectl config set-cluster linux1.k8s --certificate-authority=/srv/kubernetes/ca.pem --embed-certs=true --server=https://192.168.100.230:6443 --kubeconfig=kubelet.kubeconfig
kubectl config set-credentials kubelet --client-certificate=/srv/kubernetes/kubelet.pem --client-key=/srv/kubernetes/kubelet-key.pem --embed-certs=true --token=$TOKEN --kubeconfig=kubelet.kubeconfig
kubectl config set-context linux1.k8s --cluster=linux1.k8s --user=kubelet --kubeconfig=kubelet.kubeconfig
kubectl config use-context linux1.k8s --kubeconfig=kubelet.kubeconfig
scp kubelet.kubeconfig root@192.168.100.169:/var/lib/kubelet/kubeconfig
```
```
TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
kubectl config set-cluster linux1.k8s --certificate-authority=/srv/kubernetes/ca.pem --embed-certs=true --server=https://192.168.100.230:6443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=/srv/kubernetes/kube-proxy.pem --client-key=/srv/kubernetes/kube-proxy-key.pem --embed-certs=true --token=$TOKEN --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context linux1.k8s --cluster=linux1.k8s --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context linux1.k8s --kubeconfig=kube-proxy.kubeconfig
scp kube-proxy.kubeconfig root@192.168.100.169:/var/lib/kube-proxy/kubeconfig
```
```
# On the Master node, where Go was installed
cd /usr/local/bin/go/src/
mkdir -p github.com/coreos
cd github.com/coreos
git clone git://github.com/coreos/etcd
cd etcd
git checkout v3.3.6
./build
./test # (optional)
cp bin/* /usr/local/bin
```
```
sudo cat > /etc/systemd/system/etcd.service << EOF
[Unit]
Description=etcd key-value store
Documentation=https://github.com/coreos/etcd

[Service]
Environment="ETCD_UNSUPPORTED_ARCH=s390x"
ExecStart=/usr/local/bin/etcd \\
  --name master \\
  --cert-file=/srv/kubernetes/etcd.crt \\
  --key-file=/srv/kubernetes/etcd.key \\
  --peer-cert-file=/srv/kubernetes/etcd.crt \\
  --peer-key-file=/srv/kubernetes/etcd.key \\
  --trusted-ca-file=/srv/kubernetes/ca.pem \\
  --peer-trusted-ca-file=/srv/kubernetes/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://192.168.100.230:2380 \\
  --listen-peer-urls https://192.168.100.230:2380 \\
  --listen-client-urls https://192.168.100.230:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://192.168.100.230:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster master=https://192.168.100.230:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd \\
  --debug
Restart=always
RestartSec=10s
LimitNOFILE=40000
Type=notify

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable etcd
sudo systemctl start etcd
sudo systemctl status etcd --no-pager
```
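As an optional sanity check, the same client certificate can be reused with etcdctl to confirm the cluster is healthy:
```
etcdctl --cert-file /srv/kubernetes/etcd.crt --key-file /srv/kubernetes/etcd.key \
  --ca-file /srv/kubernetes/ca.pem --endpoints https://192.168.100.230:2379 cluster-health
```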
Flannel should be installed on all the nodes:
```
cd ~/Downloads
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flanneld-s390x
chmod +x flanneld-s390x
sudo cp flanneld-s390x /usr/local/bin/flanneld
```
The following writes the flannel network configuration into etcd. It should be run only once, and only on the Master node:
```
etcdctl --cert-file /srv/kubernetes/etcd.crt --key-file /srv/kubernetes/etcd.key --ca-file /srv/kubernetes/ca.pem \
  set /coreos.com/network/config '{ "Network": "100.64.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"} }'
```
Check the interface on which the nodes are connected using `ip a`. Here it is enc1; replace it in the unit below with the correct interface name.
```
sudo cat > /etc/systemd/system/flanneld.service << EOF
[Unit]
Description=Network fabric for containers
Documentation=https://github.com/coreos/flannel
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
Restart=always
RestartSec=5
ExecStart=/usr/local/bin/flanneld \\
  -etcd-endpoints=https://192.168.100.230:2379 \\
  -iface=enc1 \\
  -ip-masq=true \\
  -subnet-file=/var/lib/flanneld/subnet.env \\
  -etcd-cafile=/srv/kubernetes/ca.pem \\
  -etcd-certfile=/srv/kubernetes/etcd.crt \\
  -etcd-keyfile=/srv/kubernetes/etcd.key

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable flanneld
sudo systemctl start flanneld
sudo systemctl status flanneld --no-pager
```
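Once flanneld is running on a node, an optional way to confirm it acquired a subnet (the interface name assumes the vxlan backend configured above) is:
```
cat /var/lib/flanneld/subnet.env   # FLANNEL_NETWORK, FLANNEL_SUBNET and FLANNEL_MTU should be populated
ifconfig flannel.1                 # the vxlan interface created by flanneld (needs net-tools)
```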
On the Worker node(s), add the following line to /usr/lib/systemd/system/docker.service
```
EnvironmentFile=/var/lib/flanneld/subnet.env
```
and change the line
```
ExecStart=/usr/bin/dockerd -H fd://
```
to
```
ExecStart=/usr/bin/dockerd -H fd:// --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} --iptables=false --ip-masq=false --ip-forward=true
```
The file should now look somewhat like this:
```
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target containerd.socket containerd.service lvm2-monitor.service SuSEfirewall2.service
Requires=containerd.socket containerd.service

[Service]
EnvironmentFile=/etc/sysconfig/docker
# FlannelD subnet setup
EnvironmentFile=/var/lib/flanneld/subnet.env

# While Docker has support for socket activation (-H fd://), this is not
# enabled by default because enabling socket activation means that on boot your
# containers won't start until someone tries to administer the Docker daemon.
Type=notify
ExecStart=/usr/bin/dockerd --containerd /run/containerd/containerd.sock --add-runtime oci=/usr/sbin/docker-runc $DOCKER_NETWORK_OPTIONS $DOCKER_OPTS -H fd:// --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} --iptables=false --ip-masq=false --ip-forward=true
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this property.
TasksMax=infinity

# Set delegate yes so that systemd does not reset the cgroups of docker containers
# Only systemd 218 and above support this property.
Delegate=yes

# This is not necessary because of how we set up containerd.
#KillMode=process

[Install]
WantedBy=multi-user.target
```
Then run the following commands:
```
sudo systemctl daemon-reload
sudo systemctl stop docker
sudo systemctl start docker
```
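As an optional check that docker picked up the flannel settings, the docker0 bridge address should now fall inside the FLANNEL_SUBNET written by flanneld:
```
cat /var/lib/flanneld/subnet.env
ip a show docker0
```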
```
sudo cat > /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target etcd.service flanneld.service

[Service]
EnvironmentFile=-/var/lib/flanneld/subnet.env
#User=kube
ExecStart=/usr/local/bin/kube-apiserver \\
  --bind-address=0.0.0.0 \\
  --advertise-address=192.168.100.230 \\
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,ResourceQuota \\
  --allow-privileged=true \\
  --anonymous-auth=false \\
  --apiserver-count=1 \\
  --authorization-mode=RBAC,AlwaysAllow \\
  --authorization-rbac-super-user=admin \\
  --etcd-cafile=/srv/kubernetes/ca.pem \\
  --etcd-certfile=/srv/kubernetes/etcd.crt \\
  --etcd-keyfile=/srv/kubernetes/etcd.key \\
  --etcd-servers=https://192.168.100.230:2379 \\
  --enable-swagger-ui=true \\
  --event-ttl=1h \\
  --insecure-bind-address=0.0.0.0 \\
  --kubelet-certificate-authority=/srv/kubernetes/ca.pem \\
  --kubelet-client-certificate=/srv/kubernetes/kubelet.pem \\
  --kubelet-client-key=/srv/kubernetes/kubelet-key.pem \\
  --kubelet-https=true \\
  --client-ca-file=/srv/kubernetes/ca.pem \\
  --runtime-config=api/all=true,batch/v2alpha1=true,rbac.authorization.k8s.io/v1alpha1=true \\
  --secure-port=6443 \\
  --service-cluster-ip-range=100.65.0.0/24 \\
  --storage-backend=etcd3 \\
  --tls-cert-file=/srv/kubernetes/apiserver.pem \\
  --tls-private-key-file=/srv/kubernetes/apiserver-key.pem \\
  --tls-ca-file=/srv/kubernetes/ca.pem \\
  --logtostderr=true \\
  --v=6
Restart=on-failure
#Type=notify
#LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable kube-apiserver
sudo systemctl start kube-apiserver
sudo systemctl status kube-apiserver --no-pager # Takes time to start receiving requests
```
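Once the service settles, a simple liveness check is to query the healthz endpoint; this assumes the insecure port is left at its default 8080, as in the unit above:
```
curl http://127.0.0.1:8080/healthz   # should print "ok"
```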
```
sudo cat > /etc/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --leader-elect=true \\
  --kubeconfig=/var/lib/kube-scheduler/kubeconfig \\
  --master=https://192.168.100.230:6443 \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable kube-scheduler
sudo systemctl start kube-scheduler
sudo systemctl status kube-scheduler --no-pager
```
```
sudo cat > /etc/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --v=2 \\
  --allocate-node-cidrs=true \\
  --attach-detach-reconcile-sync-period=1m0s \\
  --cluster-cidr=100.64.0.0/16 \\
  --cluster-name=k8s.virtual.local \\
  --leader-elect=true \\
  --root-ca-file=/srv/kubernetes/ca.pem \\
  --service-account-private-key-file=/srv/kubernetes/apiserver-key.pem \\
  --use-service-account-credentials=true \\
  --kubeconfig=/var/lib/kube-controller-manager/kubeconfig \\
  --cluster-signing-cert-file=/srv/kubernetes/ca.pem \\
  --cluster-signing-key-file=/srv/kubernetes/ca-key.pem \\
  --service-cluster-ip-range=100.65.0.0/24 \\
  --configure-cloud-routes=false \\
  --master=https://192.168.100.230:6443
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable kube-controller-manager
sudo systemctl start kube-controller-manager
sudo systemctl status kube-controller-manager --no-pager
```
```
# On the Worker node. Note: --cluster-dns must be an IP inside the
# service cluster IP range (100.65.0.0/24) configured on the API server.
sudo cat > /etc/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --allow-privileged=true \\
  --cluster-dns=100.65.0.10 \\
  --cluster-domain=cluster.local \\
  --container-runtime=docker \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --serialize-image-pulls=false \\
  --register-node=true \\
  --tls-cert-file=/srv/kubernetes/server.crt \\
  --tls-private-key-file=/srv/kubernetes/server.key \\
  --cert-dir=/var/lib/kubelet \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable kubelet
sudo systemctl start kubelet
sudo systemctl status kubelet --no-pager
```
```
# On the Worker node. --cluster-cidr matches the flannel pod network (100.64.0.0/16).
sudo cat > /etc/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --cluster-cidr=100.64.0.0/16 \\
  --masquerade-all=true \\
  --kubeconfig=/var/lib/kube-proxy/kubeconfig \\
  --proxy-mode=iptables \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy
sudo systemctl status kube-proxy --no-pager
```
Now that we have deployed the cluster, let's test it.
Running `kubectl version` should return the version of both kubectl and the kube-apiserver:
```
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.8", GitCommit:"c138b85178156011dc934c2c9f4837476876fb07", GitTreeState:"clean", BuildDate:"2018-05-21T19:01:12Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/s390x"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.8", GitCommit:"c138b85178156011dc934c2c9f4837476876fb07", GitTreeState:"clean", BuildDate:"2018-05-21T18:53:18Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/s390x"}
```
Running `kubectl get componentstatus` should return the status of all the components:
```
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
```
Running `kubectl get nodes` should return the nodes successfully registered with the API server and the status of each node:
```
NAME         STATUS    ROLES     AGE       VERSION
k8s-worker   Ready     <none>    6d        v1.9.8
```
Let's run an Nginx app on the cluster:
```
kubectl run nginx --image=nginx --port=80 --replicas=3
kubectl get pods -o wide
kubectl expose deployment nginx --type NodePort
NODE_PORT=$(kubectl get svc nginx --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
curl http://192.168.100.169:${NODE_PORT} # The IP is that of the Worker node
```
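Once the test succeeds, the demo resources can optionally be cleaned up again:
```
kubectl delete service nginx
kubectl delete deployment nginx
```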
- If any of the Kubernetes components throws an error, check the reason by observing the logs of the service using `journalctl -fu <service name>`.
- To debug a kubectl command, use the flag `-v=<log level>` (see the example below).
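For example, a higher log level prints the underlying API requests and responses made by kubectl:
```
kubectl get nodes -v=8
```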