diff --git a/README.md b/README.md index a0e65f28..89de0fad 100644 --- a/README.md +++ b/README.md @@ -174,9 +174,11 @@ * [源监rocketmq部署控](docs/cloud/kubernetes/kubernetes_rokectmq.md) * [Kubernetes集群包管理解决方案 Helm](docs/cloud/kubernetes/kubernetes_heml.md) * [Kubernetes原生配置管理利器 kustomize](docs/cloud/kubernetes/kubernetes_kustomize.md) - * [源监控](docs/cloud/kubernetes/kubernetes_ui.md) - * [源监控](docs/cloud/kubernetes/kubernetes_ui.md) - * [源监控](docs/cloud/kubernetes/kubernetes_ui.md) + * [kubernetes集群网络解决方案 flannel](docs/cloud/kubernetes/kubernetes_flannel.md) + * [kubernetes集群网络解决方案 calico](docs/cloud/kubernetes/kubernetes_calico.md) + * [kubernetes集群 underlay 网络方案 hybridnet](docs/cloud/kubernetes/kubernetes_hybridnet.md) + * [kubernetes版本双栈协议(IPv4&IPv6)集群部署](docs/cloud/kubernetes/kubernetes_ipv4_and_ipv6.md) + * [源Rancher容器云管理平台监控](docs/cloud/kubernetes/kubernetes_rancher.md) * [源监控](docs/cloud/kubernetes/kubernetes_ui.md) * [情感] * [情商](docs/emotion/eq.md) diff --git a/docs/cloud/kubernetes/kubernetes_calico.md b/docs/cloud/kubernetes/kubernetes_calico.md new file mode 100644 index 00000000..84587397 --- /dev/null +++ b/docs/cloud/kubernetes/kubernetes_calico.md @@ -0,0 +1,1441 @@ + + +# kubernetes网络解决方案 Calico + +# 一、CNI方案 + +学习目标 + +这一节,我们从 基础原理、方案解读、小结 三个方面来学习 + +**基础原理** + +容器访问模式 + +![1633650192071](../../img/kubernetes/kubernetes_calico/1633650192071.png) + +```powershell +方法1: 虚拟网桥 + 虚拟网卡对 +方法2: 多路复用 + 内核级的VLAN模块 +方法3: 硬件交换 + 硬件虚拟化 +``` + + + +CNI简介 + +```powershell + 根据我们对于容器的基本了解,他虽然可以借助于虚拟网桥docker0实现一定程度的网络功能,但是在大范围容器访问层面,其没有最好的网络解决方案,只能借助于一些第三方的网络解决方案来实现容器级别的跨网络通信能力。 + CNI的全称是Container Network Interface,Google和CoreOS联合定制的多容器通信的网络模型。在Kubernetes中通过一个CNI接口来替代docker0,它在宿主机上的默认名字叫cni0。它没有使用Docker的网络模型的原因有两个:1 不想被Docker要挟,2 自有的网络namespace设计有关。 +``` + +![1633651873150](../../img/kubernetes/kubernetes_calico/1633651873150.png) + +```powershell +CNI的设计思想即为:Kubernetes在启动Pod的pause容器之后,直接调用CNI网络插件,从而实现为Pod内部应用容器所在的Network Namespace配置符合预期的网络信息。这里面需要特别关注两个方面: + - Container必须有自己的网络命名空间的环境,也就是endpoint地址。 + - Container所在的网段必须能够注册网络地址信息 + +对容器网络的设置和操作都通过插件(Plugin)进行具体实现,CNI插件包括两种类型:CNIPlugin和IPAM(IP Address Management)Plugin。CNI Plugin负责为容器配置网络资源,IPAM Plugin负责对容器的IP地址进行分配和管理。IPAM Plugin作为CNI Plugin的一部分,与CNI Plugin一起工作。 +``` + +```powershell +在Kubernetes中,CNI对于容器网络的设置主要是以CNI Plugin插件的方式来为容器配置网络资源,它主要有三种模式: + MainPlugin + - 用来创建具体的网络设备的二进制文件 + - 比如bridge、ipvlan、vlan、host-device + IPAM Plugin + - IPAM 就是 IP Address Management + - 负责对容器的IP地址进行分配和管理,作为CNI Plugin的一部分,与CNI Plugin一起工作 + Meta Plugin + - 由CNI社区维护的内部插件功能模块,常见的插件功能模块有以下几种 + - flannel 专门为Flannel项目提供的插件 + - tuning 通过sysctl调整网络设备参数的二进制文件 + - portmap 通过iptables配置端口映射的二进制文件 + - bandwidth 使用 Token Bucket Filter (TBF)来进行限流的二进制文件 + - firewall 通过iptables或者firewalled添加规则控制容器的进出流量 +``` + +```powershell +更多详情查看:https://github.com/containernetworking/cni/blob/main/SPEC.md +``` + +CNI目前被谁管理? + +```powershell + 在 Kubernetes 1.24 之前,CNI 插件也可以由 kubelet 使用命令行参数 cni-bin-dir 和 network-plugin 管理。而在Kubernetes 1.24 移除了这些命令行参数, CNI 的管理不再是 kubelet 的工作。而变成下层的容器引擎需要做的事情了,比如cri-dockerd服务的启动文件 +``` + +```powershell +查看服务文件 /etc/systemd/system/cri-docker.service +ExecStart=/usr/local/bin/cri-dockerd --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin ... 
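# 补充示例(命令仅为示意,服务名以实际安装为准):修改上面的 --cni-conf-dir、--cni-bin-dir 等参数后,
# 需要重载 systemd 配置并重启 cri-dockerd 才会生效,同时可以顺手确认这两个目录的内容
systemctl daemon-reload
systemctl restart cri-docker.service
ls /etc/cni/net.d          # CNI 配置文件,如 10-flannel.conflist、10-calico.conflist
ls /opt/cni/bin            # bridge、host-local、portmap 等插件二进制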
+ +注意: + /opt/cni/bin 目录是部署kubernetes的时候,安装的cni-tools软件包自动创建出来的,这里面包含了很多的网络命令工具 +``` + +```powershell +flannel的CNI配置文件 +# cat /etc/cni/net.d/10-flannel.conflist +{ + "name": "cbr0", + "plugins": [ + { + "type": "flannel", # 为Flannel项目提供的插件,配置容器网络 + "delegate": { + "hairpinMode": true, + "isDefaultGateway": true + } + }, + { + "type": "portmap", # 通过iptables配置端口映射的二进制文件,配置端口映射 + "capabilities": { + "portMappings": true + } + } + ] +} +``` + +```powershell +calico的CNI配置文件 +# cat /etc/cni/net.d/10-calico.conflist +{ + "name": "k8s-pod-network", + "cniVersion": "0.3.1", + "plugins": [ + { + "type": "calico", + "log_level": "info", + "log_file_path": "/var/log/calico/cni/cni.log", + "datastore_type": "kubernetes", + "nodename": "kubernetes-master", + "mtu": 0, + "ipam": { + "type": "host-local", + "subnet": "usePodCidr" + }, + "policy": { + "type": "k8s" + }, + "kubernetes": { + "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" + } + }, + { + "type": "portmap", + "snat": true, + "capabilities": {"portMappings": true} + }, + { + "type": "bandwidth", + "capabilities": {"bandwidth": true} + } + ] +} +``` + +**方案解读** + +```powershell +kubernetes提供了很多的网络解决方案,相关的资料如下: + https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/addons/ +``` + +flannel方案 + +```powershell +项目地址:https://github.com/coreos/flannel + + Flannel是CoreOS 开发的项目,它是容器编排系统中最成熟的网络方案之一,旨在实现更好的容器间和主机间网络,由于其稳定和配置简单,所以它是CNI最早引入的一套方案。常见的 Kubernetes 集群部署工具和许多 Kubernetes 发行版都可以默认安装 Flannel。 + 如果你需要一个稳定而又简单的网络功能的话,不那么在乎安全性的话,Flannel是一个很好的选择。 +``` + +calico方案 + +```powershell +项目地址:https://github.com/projectcalico/cni-plugin + + Calico 是 Kubernetes 生态系统中另一种流行的网络选择。虽然 Flannel 被公认为是最简单的选择,但 Calico 以其性能、灵活性而闻名。Calico 的功能更为全面,除了通用的网络连接,还涉及网络安全和网络策略管理,甚至Calico还可以在微服务的网络治理中进行整合。Calico CNI 插件在 CNI 框架内封装了 Calico 的功能。 + 如果对你的环境而言,支持网络策略是非常重要的一点,而且你对其他性能和功能也有需求,那么 Calico 会是一个理想的选择。 +``` + +canal方案 + +```powershell +项目地址:https://github.com/projectcalico/canal + + Canal 试图将 Flannel 提供的网络功能与 Calico 的网络策略功能集成在一起,只不过在研发的时候,发现Flannel 和 Calico 这两个项目的标准化和灵活性都想当标准,那集成就没必要了,所以目前这个项目“烂尾”了。由于Canal的思路很好,业界习惯性地将 Flannel 和 Calico 的功能组合实施称为“Canal”。 + 对于那些喜欢 Flannel 提供的网络模型,同时喜欢 Calico 的网络策略功能的人来说,Canal是一个选择。 +``` + +weave方案 + +```powershell +项目地址:https://www.weave.works/oss/net/ + + Weave 是由 Weaveworks 提供的一种 Kubernetes CNI 网络选项,它在集群中的每个节点上部署路由组件,从而实现主机之间创建更加灵活的网状 overlay 网络,借助于内核级别的Open vSwitch 配置,从而实现具有一定程度的智能路由功能。除此之外,weave还具有想当的网络策略功能,网络加密传输功能等。 + 对于那些寻求功能丰富的网络、同时希望不要增加大量复杂性或管理难度的人来说,Weave 是一个很好的选择。 +``` + +![image-20220903022845285](../../img/kubernetes/kubernetes_calico/image-20220903022845285.png) + +```powershell +bandwidth - 带宽、consumption - 消耗、encryption - 加密 +资料来源: +https://docs.google.com/spreadsheets/d/12dQqSGI0ZcmuEy48nA0P_bPl7Yp17fNg7De47CYWzaM/edit#gid=404703955 +``` + + + +**小结** + +``` + +``` + + + +# 二、 Calico环境 + +学习目标 + +这一节,我们从 基础知识、原理解读、小结 三个方面来学习 + +**基础知识** + +简介 + +``` + Calico是一个开源的虚拟化网络方案,用于为云原生应用实现互联及策略控制.相较于 Flannel 来说,Calico 的优势是对网络策略(network policy),它允许用户动态定义 ACL 规则控制进出容器的数据报文,实现为 Pod 间的通信按需施加安全策略.不仅如此,Calico 还可以整合进大多数具备编排能力的环境,可以为 虚机和容器提供多主机间通信的功能。 + Calico 本身是一个三层的虚拟网络方案,它将每个节点都当作路由器,将每个节点的容器都当作是节点路由器的一个终端并为其分配一个 IP 地址,各节点路由器通过 BGP(Border Gateway Protocol)学习生成路由规则,从而将不同节点上的容器连接起来.因此,Calico 方案其实是一个纯三层的解决方案,通过每个节点协议栈的三层(网络层)确保容器之间的连通性,这摆脱了 flannel host-gw 类型的所有节点必须位于同一二层网络的限制,从而极大地扩展了网络规模和网络边界。 + +官方地址:https://www.tigera.io/project-calico/ +最新版本:v3.24.1 (20220827)、 v3.20.6 (20220802)、v3.21.6 (20220722)、 v3.23.3 (20220720)、v3.22.4 (20220720) +``` + +网络模型 + +``` +Calico为了实现更广层次的虚拟网络的应用场景,它支持多种网络模型来满足需求。 
+underlay network - BGP(三层虚拟网络解决方案) +overlay network - IPIP(双层IP实现跨网段效果)、VXLAN(数据包标识实现大二层上的跨网段通信) +``` + +设计思想 + +``` +Calico不使用隧道或者NAT来实现转发,而是巧妙的把所有二三层流量转换成三层流量,并通过host上路由配置完成跨host转发。 +``` + +为什么用calico + +![image-20220905120240844](../../img/kubernetes/kubernetes_calico/image-20220905120240844.png) + +**原理解读** + +calico + +```powershell + Calico是一个开源的虚拟化网络方案,用于为云原生应用实现互联及策略控制.相较于 Flannel 来说,Calico 的优势是对网络策略(network policy),它允许用户动态定义 ACL 规则控制进出容器的数据报文,实现为 Pod 间的通信按需施加安全策略.不仅如此,Calico 还可以整合进大多数具备编排能力的环境,可以为 虚机和容器提供多主机间通信的功能。 + 我们平常使用Calico主要用到的是它的网络策略功能 +``` + +软件结构 + +![image-20220722175645610](../../img/kubernetes/kubernetes_calico/image-20220722175645610.png) + +```powershell +Felix + 每个节点都有,负责配置路由、ACL、向etcd宣告状态等 +BIRD + 每个节点都有,负责把 Felix 写入Kernel的路由信息 分发到整个 Calico网络,确保 workload 连通 +etcd + 存储calico自己的状态数据,可以结合kube-apiserver来工作 + 官方推荐; + < 50节点,可以结合 kube-apiserver 来实现数据的存储 + > 50节点,推荐使用独立的ETCD集群来进行处理。 + 参考资料: + https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico +Route Reflector + 路由反射器,用于集中式的动态生成所有主机的路由表,非必须选项 + 超过100个节点推荐使用: + https://projectcalico.docs.tigera.io/getting-started/kubernetes/rancher#concepts +Calico编排系统插件 + 实现更广范围的虚拟网络解决方案。 + +参考资料:https://docs.projectcalico.org/reference/architecture/overview +``` + +工作模式 + +![image-20220722180040443](../../img/kubernetes/kubernetes_calico/image-20220722180040443.png) + +```powershell +对于节点量少的情况下,我们推荐使用左侧默认的模式,当节点量多的时候,我们推荐使用右侧的反射器模式 +``` + +**小结** + +``` + +``` + + + +## 2.1 Calico部署 + +学习目标 + +这一节,我们从 环境解析、简单实践、小结 三个方面来学习 + +**环境解析** + +k8s环境上的calico逻辑 + +![architecture-calico](../../img/kubernetes/kubernetes_calico/architecture-calico.svg) + +```powershell +confd + - 统一管控 Calico 数据存储以及 BGP 配置的轻量级的配置管理工具。 + - 当配置文件发生变化时,动态更新和生成BIRD 配置文件 +Dikastes + - 为 Istio 服务网格实施网络策略。作为 Istio Envoy 的 sidecar 代理在集群上运行。 +CNI plugin + - 为 Kubernetes 集群提供 Calico 网络,必须安装在 Kubernetes 集群中的每个节点上。 +Datastore plugin + - 独立使用etcd作为Calico的数据存储平台,特点在于独立扩展数据存储 + - 结合K8s的apiserver实现数据存储到etcd中,特点在于使用 Kubernetes的存储、RBAC、审计功能为Calico服务 +IPAM plugin + - 使用 Calico 的 IP 池资源来控制 IP 地址如何分配给集群内的 Pod。 +kube-controllers + - 监控 Kubernetes API 并根据集群状态执行操作 + - 主要是策略、命名空间、服务账号、负载、节点等通信策略的控制 +Typha + - 通过减少每个节点对数据存储的影响来增加规模。在数据存储和 Felix 实例之间作为守护进程运行。 + +参考资料: + https://projectcalico.docs.tigera.io/reference/architecture/overview +``` + +基础环境支持 + +```powershell +linux 主机基本要求: + x86-64、arm64、ppc64le 或 s390x 处理器 + 2CPU + 2GB 内存 + 10GB 可用磁盘空间 + RedHat Enterprise Linux 7.x+、CentOS 7.x+、Ubuntu 16.04+ 或 Debian 9.x+ + 确保 Calico 可以在主机上进行管理cali和接口 +其他需求: + kubernetes集群配置了 --pod-network-cidr 属性 +参考资料: + https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart +``` + +部署解析 + +```powershell +对于calico在k8s集群上的部署来说,为了完成上面四个组件的部署,这里会涉及到两个部署组件 +``` + +| 组件名 | 组件作用 | +| ---------------------- | ------------------------------------------------------------ | +| calico-node | 需要部署到所有集群节点上的代理守护进程,提供封装好的Felix和BIRD | +| calico-kube-controller | 专用于k8s上对calico所有节点管理的中央控制器。负责calico与k8s集群的协同及calico核心功能实现。 | + +部署步骤 + +```powershell +1 获取资源配置文件 + 从calico官网获取相关的配置信息 +2 定制CIDR配置 + 定制calico自身对于pod网段的配置信息,并且清理无关的网络其他插件 +3 定制pod的manifest文件分配网络配置 + 默认的k8s集群在启动的时候,会有一个cidr的配置,有可能与calico进行冲突,那么我们需要修改一下 +4 应用资源配置文件 +``` + +```powershell +注意事项: + 对于calico来说,它自己会生成自己的路由表,如果路由表中存在响应的记录,默认情况下会直接使用,而不是覆盖掉当前主机的路由表 + 所以如果我们在部署calico之前,曾经使用过flannel,尤其是flannel的host-gw模式的话,一定要注意,在使用calico之前,将之前所有的路由表信息清空,否则无法看到calico的tunl的封装效果 +``` + +**简单实践** + +环境部署 + +```powershell +1 获取资源清单文件 +mkdir /data/kubernetes/network/calico -p +cd 
/data/kubernetes/network/calico/ +curl https://docs.projectcalico.org/manifests/calico.yaml -O +cp calico.yaml{,.bak} +``` + +```powershell +2 配置资源清单文件 +# vim calico.yaml +---- 官网推荐的修改内容 +4546 - name: CALICO_IPV4POOL_CIDR +4547 value: "10.244.0.0/16" +---- 方便我们的后续实验,新增调小子网段的分配 +4548 - name: CALICO_IPV4POOL_BLOCK_SIZE +4549 value: "24" +配置解析: + 开放默认注释的CALICO_IPV4POOL_CIDR变量,然后定制我们当前的pod的网段范围即可 + 原则上来说,我们修改官方提示的属性即可 +``` + +```powershell +3 定制pod的manifest文件分配网络配置 +# vim calico.yaml +---- 修改下面的内容 + 64 "ipam": { + 65 "type": "calico-ipam" + 66 }, +---- 修改后的内容 + 64 "ipam": { + 65 "type": "host-local", + 66 "subnet": "usePodCidr" + 67 }, +---- 定制calico使用k8s集群节点的地址 +4551 - name: USE_POD_CIDR +4552 value: "true" +配置解析: + Calico默认并不会从Node.Spec.PodCIDR中分配地址,但可通过USE_POD_CIDR变量并结合host-local这一IPAM插件以强制从PodCIDR中分配地址 +``` + +```powershell +4 定制默认的docker镜像 +查看默认的镜像 +# grep docker.io calico.yaml | uniq + image: docker.io/calico/cni:v3.24.1 + image: docker.io/calico/node:v3.24.1 + image: docker.io/calico/kube-controllers:v3.24.1 +获取镜像 +for i in $(grep docker.io calico.yaml | uniq | awk -F'/' '{print $NF}') +do + docker pull calico/$i + docker tag calico/$i kubernetes-register.superopsmsb.com/google_containers/$i + docker push kubernetes-register.superopsmsb.com/google_containers/$i + docker rmi calico/$i +done + +修改为定制的镜像 +sed -i 's#docker.io/calico#kubernetes-register.superopsmsb.com/google_containers#g' calico.yaml + +查看效果 +# grep google calico.yaml + image: kubernetes-register.superopsmsb.com/google_containers/cni:v3.24.1 + image: kubernetes-register.superopsmsb.com/google_containers/cni:v3.24.1 + image: kubernetes-register.superopsmsb.com/google_containers/node:v3.24.1 + image: kubernetes-register.superopsmsb.com/google_containers/node:v3.24.1 + image: kubernetes-register.superopsmsb.com/google_containers/kube-controllers:v3.24.1 +``` + +```powershell +5 应用资源配置文件 +清理之前的flannel插件 +kubectl delete -f kube-flannel.yml +kubectl get pod -n kube-system | grep flannel + +这个时候,先清除旧网卡,然后最好重启一下主机,直接清空所有的路由表信息 +ifconfig flannel.1 +reboot + +重启后,查看网络效果 +注意: + 为了避免后续calico网络测试的异常,我们这里最好只留下一个网卡 eth0 +``` + +```powershell +应用calico插件 +kubectl apply -f calico.yaml + +在calico-node部署的时候,会启动多个进程 +# kubectl get pod -n kube-system | egrep 'NAME|calico' +NAME READY STATUS RESTARTS AGE +calico-kube-controllers-549f7748b5-xqz8j 0/1 ContainerCreating 0 9s +calico-node-74c5w 0/1 Init:0/3 0 9s +... + +环境部署完毕后,查看效果 +# kubectl get pod -n kube-system | egrep 'NAME|calico' +NAME READY STATUS RESTARTS AGE +calico-kube-controllers-549f7748b5-xqz8j 0/1 Running 0 39s +calico-node-74c5w 0/1 Running 0 39s +... +``` + +```powershell +每个calico节点上都有多个进程,分别来处理不同的功能 +]# ps aux | egrep 'NAME | calico' +root 9315 0.5 1.1 1524284 43360 ? Sl 15:29 0:00 calico-node -confd +root 9316 0.2 0.8 1155624 32924 ? Sl 15:29 0:00 calico-node -monitor-token +root 9317 2.8 1.0 1598528 40992 ? Sl 15:29 0:02 calico-node -felix +root 9318 0.3 0.9 1155624 35236 ? Sl 15:29 0:00 calico-node -monitor-addresses +root 9319 0.3 0.8 1155624 33460 ? Sl 15:29 0:00 calico-node -status-reporter +root 9320 0.2 0.7 1155624 30364 ? 
Sl 15:29 0:00 calico-node -allocate-tunnel-addrs +``` + +测试效果 + +```powershell +获取镜像 +docker pull nginx +docket tag nginx kubernetes-register.superopsmsb.com/superopsmsb/nginx:1.23.1 +docker push kubernetes-register.superopsmsb.com/superopsmsb/nginx:1.23.1 +docker rmi nginx +``` + +```powershell +创建一个deployment +kubectl create deployment pod-deployment --image=kubernetes-register.superopsmsb.com/superopsmsb/nginx:1.23.1 --replicas=3 + +查看pod +# kubectl get pod +NAME READY STATUS RESTARTS AGE +pod-deployment-554dff674-267gq 1/1 Running 0 48s +pod-deployment-554dff674-c8cjs 1/1 Running 0 48s +pod-deployment-554dff674-pxrwb 1/1 Running 0 48s + +暴露一个service +kubectl expose deployment pod-deployment --port=80 --target-port=80 + +确认效果 +kubectl get service +curl 10.108.138.97 +``` + + + +**小结** + +``` + +``` + + + +## 2.2 简单实践 + +学习目标 + +这一节,我们从 网络解析、命令完善、小结 三个方面来学习 + +**网络解析** + +calico部署完毕后,会生成一系列的自定义配置属性信息 + +```powershell +自动生成一个api版本信息 +# kubectl api-versions | grep crd +crd.projectcalico.org/v1 + +该api版本信息中有大量的配置属性 +kubectl api-resources | grep crd.pro +``` + +```powershell +这里面的 ippools 里面包含了calico相关的网络属性信息 +# kubectl get ippools +NAME AGE +default-ipv4-ippool 37m + +查看这里配置的calico相关的信息 +# kubectl get ippools default-ipv4-ippool -o yaml +apiVersion: crd.projectcalico.org/v1 +kind: IPPool +... +spec: + allowedUses: + - Workload + - Tunnel + blockSize: 24 + cidr: 10.244.0.0/16 + ipipMode: Always + natOutgoing: true + nodeSelector: all() + vxlanMode: Never +结果显式: + 这里的calico采用的模型就是 ipip模型,分配的网段是使我们定制的 cidr网段,而且子网段也是我们定制的 24位掩码 +``` + +环境创建完毕后,会生成一个tunl0的网卡,所有的流量会走这个tunl0网卡 + +```powershell +确认网卡和路由信息 +]# ifconfig | grep flags +]# ip route list | grep tun +10.244.1.0/24 via 10.0.0.13 dev tunl0 proto bird onlink +10.244.2.0/24 via 10.0.0.14 dev tunl0 proto bird onlink +10.244.3.0/24 via 10.0.0.15 dev tunl0 proto bird onlink +10.244.4.0/24 via 10.0.0.16 dev tunl0 proto bird onlink +10.244.5.0/24 via 10.0.0.17 dev tunl0 proto bird onlink +结果显示: + calico模型中,默认使用的是tunl0虚拟网卡实现数据包的专线转发 +``` + +测试效果 + +```powershell +由于在calico的默认网络模型是 IPIP,所以我们在进行数据包测试的时候,可以通过直接抓取宿主机数据包,来发现双层ip效果 +kubectl get pod -o wide + +在master1上采用ping的方式来测试 node2上的节点pod +[root@kubernetes-master1 /data/kubernetes/network/calico]# ping -c 1 10.244.4.3 +PING 10.244.4.3 (10.244.4.3) 56(84) bytes of data. 
+64 bytes from 10.244.4.3: icmp_seq=1 ttl=63 time=0.794 ms +``` + +```powershell +在node2上检测数据包的效果 +[root@kubernetes-node2 ~]# tcpdump -i eth0 -nn ip host 10.0.0.16 and host 10.0.0.12 +tcpdump: verbose output suppressed, use -v or -vv for full protocol decode +listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes +15:38:52.231680 IP 10.0.0.12 > 10.0.0.16: IP 10.244.0.1 > 10.244.4.3: ICMP echo request, id 19189, seq 1, length 64 (ipip-proto-4) +15:38:52.231989 IP 10.0.0.16 > 10.0.0.12: IP 10.244.4.3 > 10.244.0.1: ICMP echo reply, id 19189, seq 1, length 64 (ipip-proto-4) +15:38:54.666538 IP 10.0.0.16.33992 > 10.0.0.12.179: Flags [P.], seq 4281052787:4281052806, ack 3643645463, win 58, length 19: BGP +15:38:54.666962 IP 10.0.0.12.179 > 10.0.0.16.33992: Flags [.], ack 19, win 58, length 0 +结果显式: + 每个数据包都是基于双层ip嵌套的方式来进行传输,而且协议是 ipip-proto-4 + 结合路由的分发详情,可以看到具体的操作效果。 + 具体效果:10.244.0.1 -> 10.0.0.12 -> 10.0.0.16 -> 10.244.6.3 +``` + + + +**命令完善** + +简介 + +```powershell + calico本身是一个复杂的系统,复杂到它自己提供一个非常重要的Restful接口,结合calicoctl命令来管理自身的相关属性信息,calicoctl可以直接与etcd进行操作,也可以通过kube-apiserver的方式与etcd来进行操作。默认情况下,它与kube-apiserver通信的认证方式与kubectl的命令使用同一个context。但是我们还是推荐,使用手工定制的一个配置文件。 + + calicoctl 是运行在集群之外的,用于管理集群功能的一个重要的组件。calicoctl 的安装方式很多,常见的方式有:单主机方式、kubectl命令插件方式、pod方式、主机容器方式。我们需要自己选择一种适合自己的方式 + 参考资料:https://projectcalico.docs.tigera.io/getting-started/kubernetes/hardway/the-calico-datastore#install +``` + +```powershell +获取专用命令 +cd /usr/local/bin/ +curl -L https://github.com/projectcalico/calico/releases/download/v3.24.1/calicoctl-linux-amd64 -o calicoctl +chmod +x calicoctl + +查看帮助 +# calicoctl --help +Usage: + calicoctl [options] [...] +``` + +命令的基本演示 + +```powershell +查看ip的管理 +calicoctl ipam --help + +查看ip的信息 +# calicoctl ipam show ++----------+---------------+-----------+------------+--------------+ +| GROUPING | CIDR | IPS TOTAL | IPS IN USE | IPS FREE | ++----------+---------------+-----------+------------+--------------+ +| IP Pool | 10.244.0.0/16 | 65536 | 0 (0%) | 65536 (100%) | ++----------+---------------+-----------+------------+--------------+ +``` + +```powershell +查看信息的显式效果 +calicoctl ipam show --help + +显式相关的配置属性 +# calicoctl ipam show --show-configuration ++--------------------+-------+ +| PROPERTY | VALUE | ++--------------------+-------+ +| StrictAffinity | false | +| AutoAllocateBlocks | true | +| MaxBlocksPerHost | 0 | ++--------------------+-------+ +``` + +将calico整合到kubectl里面 + +```powershell +定制kubectl 插件子命令 +# cd /usr/local/bin/ +# cp -p calicoctl kubectl-calico + +测试效果 +# kubectl calico --help +Usage: + kubectl-calico [options] [...] + +后续的操作基本上都一样了,比如获取网络节点效果 +[root@kubernetes-master1 /usr/local/bin]# kubectl calico node status +Calico process is running. + +IPv4 BGP status ++--------------+-------------------+-------+----------+-------------+ +| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO | ++--------------+-------------------+-------+----------+-------------+ +| 10.0.0.15 | node-to-node mesh | up | 07:30:48 | Established | +| 10.0.0.17 | node-to-node mesh | up | 07:30:48 | Established | +| 10.0.0.13 | node-to-node mesh | up | 07:30:48 | Established | +| 10.0.0.14 | node-to-node mesh | up | 07:30:51 | Established | +| 10.0.0.16 | node-to-node mesh | up | 07:31:41 | Established | ++--------------+-------------------+-------+----------+-------------+ + +IPv6 BGP status +No IPv6 peers found. 
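# 补充说明:上面的 node-to-node mesh 表示默认的全互联模式,每个节点都会与其余节点两两建立 BGP 会话;
# 该命令只显示本机视角的会话状态,想确认其他节点的会话情况,需要登录到对应节点执行同样的命令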
+注意: + 这里查看的是除了自己网段之外的其他节点的路由信息 +``` + +**小结** + +``` + +``` + +## 2.3 BGP实践 + +学习目标 + +这一节,我们从 bgp改造、反射器实践、小结 三个方面来学习 + +**bgp改造** + +简介 + +```powershell + 对于现有的calico环境,我们如果需要使用BGP环境,我们可以直接使用一个配置清单来进行修改calico环境即可。我们这里先来演示一下如何使用calicoctl修改配置属性。 +``` + +```powershell +获取当前的配置属性 +# kubectl calico get ipPools +NAME CIDR SELECTOR +default-ipv4-ippool 10.244.0.0/16 all() + +# kubectl calico get ipPools default-ipv4-ippool -o yaml +apiVersion: projectcalico.org/v3 +kind: IPPool +... +spec: + blockSize: 24 + cidr: 10.244.0.0/16 + ipipMode: Always + natOutgoing: true + nodeSelector: all() + vxlanMode: Never +``` + +```powershell +定制资源配置文件 +kubectl calico get ipPools default-ipv4-ippool -o yaml > default-ipv4-ippool.yaml + +修改配置文件 +# cat default-ipv4-ippool.yaml +apiVersion: projectcalico.org/v3 +kind: IPPool +metadata: + name: default-ipv4-ippool +spec: + blockSize: 24 + cidr: 10.244.0.0/16 + ipipMode: CrossSubnet + natOutgoing: true + nodeSelector: all() + vxlanMode: Never + 配置解析: + 仅仅将原来的Always 更换成 CrossSubnet(跨节点子网) 模式即可 + vxlanMode 的两个值可以实现所谓的 BGP with vxlan的效果 + +应用资源配置文件 +# kubectl calico apply -f default-ipv4-ippool.yaml +Successfully applied 1 'IPPool' resource(s) +``` + +检查效果 + +```powershell +查看路由信息 +[root@kubernetes-master1 ~]# ip route list | grep via +default via 10.0.0.2 dev eth0 +10.244.1.0/24 via 10.0.0.13 dev eth0 proto bird +10.244.2.0/24 via 10.0.0.14 dev eth0 proto bird +10.244.3.0/24 via 10.0.0.15 dev eth0 proto bird +10.244.4.0/24 via 10.0.0.16 dev eth0 proto bird +10.244.5.0/24 via 10.0.0.17 dev eth0 proto bird + +结果显式: + 更新完毕配置后,动态路由的信息就发生改变了,不再完全是 tunl0 网卡了,而是变成了通过具体的物理网卡eth0 转发出去了 +``` + +```powershell +在master1上ping在节点node2上的pod +[root@kubernetes-master1 ~]# ping -c 1 10.244.4.3 +PING 10.244.4.3 (10.244.4.3) 56(84) bytes of data. +64 bytes from 10.244.4.3: icmp_seq=1 ttl=63 time=0.671 ms + +由于这次是直接通过节点进行转发的,所以我们在node2节点上抓包的时候,直接通过内层ip抓取即可。 +[root@kubernetes-node2 ~]# tcpdump -i eth0 -nn ip host 10.244.4.3 +tcpdump: verbose output suppressed, use -v or -vv for full protocol decode +listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes +15:47:32.906462 IP 10.0.0.12 > 10.244.4.3: ICMP echo request, id 28248, seq 1, length 64 +15:47:32.906689 IP 10.244.4.3 > 10.0.0.12: ICMP echo reply, id 28248, seq 1, length 64 +结果显示: + 他们实现了直接的连通,无需进行数据包的转换了,效率更高一点 +``` + + + +**反射器实践** + +当前节点的网络效果 + +```powershell +查看当前节点的网络效果 +[root@kubernetes-master1 ~]# kubectl calico node status +Calico process is running. 
+ +IPv4 BGP status ++--------------+-------------------+-------+----------+-------------+ +| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO | ++--------------+-------------------+-------+----------+-------------+ +| 10.0.0.15 | node-to-node mesh | up | 07:30:48 | Established | +| 10.0.0.17 | node-to-node mesh | up | 07:30:48 | Established | +| 10.0.0.13 | node-to-node mesh | up | 07:30:48 | Established | +| 10.0.0.14 | node-to-node mesh | up | 07:30:51 | Established | +| 10.0.0.16 | node-to-node mesh | up | 07:31:41 | Established | ++--------------+-------------------+-------+----------+-------------+ +``` + +需求 + +```powershell + 当前的calico节点的网络状态是 BGP peer 的模型效果,也就是说 每个节点上都要维护 n-1 个路由配置信息,整个集群中需要维护 n*(n-1) 个路由效果,这在节点量非常多的场景中,不是我们想要的。 + 所以我们需要实现一种 BGP reflecter 的效果。 +``` + +```powershell + 如果我们要做 BGP reflecter 效果的话,需要对反射器角色做冗余,如果我们的集群是一个多主集群的话,可以将集群的master节点作为bgp的reflecter角色,从而实现反射器的冗余。 + 1 定制反射器角色 + 2 后端节点使用反射器 + 3 关闭默认的网格效果 +``` + +1 创建反射器角色 + +```powershell +定制资源定义文件 01-calico-reflector-master.yaml +apiVersion: projectcalico.org/v3 +kind: Node +metadata: + labels: + route-reflector: true + name: kubernetes-master1 +spec: + bgp: + ipv4Address: 10.0.0.12/16 + ipv4IPIPTunnelAddr: 10.244.0.1 + routeReflectorClusterID: 1.1.1.1 +属性解析; + metadata.labels 是非常重要的,因为它需要被后面的节点来进行获取 + metadata.name 的属性,必须是通过 calicoctl get nodes 获取到的节点名称。 + spec.bgp.ipv4Address必须是 指定节点的ip地址 + spec.bgp.ipv4IPIPTunnelAddr必须是 指定节点上tunl0的地址 + spec.bgp.routeReflectorClusterID 是BGP网络中的唯一标识,所以这里的集群标识只要唯一就可以了 + +应用资源配置文件 +kubectl calico apply -f 01-calico-reflector-master.yaml +``` + +2 更改节点的网络模型为 reflecter模型 + +```powershell +定制资源定义文件 02-calico-reflector-bgppeer.yaml +kind: BGPPeer +apiVersion: projectcalico.org/v3 +metadata: + name: bgppeer-demo +spec: + nodeSelector: all() + peerSelector: route-reflector=="true" +属性解析; + spec.nodeSelector 指定的所有后端节点 + spec.peerSelector 指定的是反射器的标签,标识所有的后端节点与这个反射器进行数据交流 + +应用资源配置文件 +kubectl calico apply -f 02-calico-reflector-bgppeer.yaml +``` + +3 关闭默认的网格效果 + +```powershell +定制资源定义文件 03-calico-reflector-defaultconfig.yaml +apiVersion: projectcalico.org/v3 +kind: BGPConfiguration +metadata: + name: default +spec: + logSeverityScreen: Info + nodeToNodeMeshEnabled: false + asNumber: 63400 + +属性解析; + metadata.name 在这里最好是default,因为我们要对BGP默认的网络效果进行关闭 + spec.nodeToNodeMeshEnabled 关闭后端节点的BGP peer默认状态 -- 即点对点通信关闭 + spec.asNumber 指定的是后端节点间使用反射器的时候,我们要在一个标志号内,这里随意写一个 + +应用资源配置文件 +kubectl calico apply -f 03-calico-reflector-defaultconfig.yaml +``` + +```powershell +查看效果 +[root@kubernetes-master1 ~/txt]# kubectl calico node status +... 
+IPv4 BGP status ++--------------+---------------+-------+----------+-------------+ +| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO | ++--------------+---------------+-------+----------+-------------+ +| 10.0.0.13 | node specific | up | 07:51:47 | Established | +| 10.0.0.14 | node specific | up | 07:51:49 | Established | +| 10.0.0.15 | node specific | up | 07:51:49 | Established | +| 10.0.0.16 | node specific | up | 07:51:47 | Established | +| 10.0.0.17 | node specific | up | 07:51:47 | Established | ++--------------+---------------+-------+----------+-------------+ +结果显式: + 默认的点对点通信方式就已经丢失了,剩下了反射器模式 +``` + + + +**小结** + +``` + +``` + + + +## 2.4 策略实践 + +学习目标 + +这一节,我们从 属性解读、基本控制、小结 三个方面来学习。 + +**属性解读** + +策略简介 + +```powershell + 为了使用Network Policy,Kubernetes引入了一个新的资源对象Network Policy,供用户设置Pod间网络访问的策略。策略控制器用于监控指定区域创建对象(pod)时所生成的新API端点,并按需为其附加网络策略。 + + 对于Pod对象来说,网络流量分为 流入(Ingress)和流出(Egress)两个方向,每个方向包含允许和禁止两种控制策略,默认情况下,所有的策略都是允许的,应用策略后,所有未经明确允许的流量都将拒绝。 +``` + +![image-20220722193530812](../../img/kubernetes/kubernetes_calico/image-20220722193530812.png) + +```powershell +注意: + 网络策略的控制,可以是多个级别: + 集群级别、namespace级别、Pod级别、ip级别、端口级别 +``` + +资源对象属性 + +```powershell +apiVersion: networking.k8s.io/v1 # 资源隶属的API群组及版本号 +kind: NetworkPolicy # 资源类型的名称,名称空间级别的资源; +metadata: # 资源元数据 + name # 资源名称标识 + namespace # NetworkPolicy是名称空间级别的资源 +spec: # 期望的状态 + podSelector # 当前规则生效的一组目标Pod对象,必选字段;空值表示当前名称空间中的所有Pod资源 + policyTypes <[]string> # Ingress表示生效ingress字段;Egress表示生效egress字段,同时提供表示二者均有效 + ingress <[]Object> # 入站流量源端点对象列表,白名单,空值表示“所有” + - from <[]Object> # 具体的端点对象列表,空值表示所有合法端点 + - ipBlock # IP地址块范围内的端点,不能与另外两个字段同时使用 + - namespaceSelector # 匹配的名称空间内的端点 + podSelector # 由Pod标签选择器匹配到的端点,空值表示 + ports <[]Object> # 具体的端口对象列表,空值表示所有合法端口 + egress <[]Object> # 出站流量目标端点对象列表,白名单,空值表示“所有” + - to <[]Object> # 具体的端点对象列表,空值表示所有合法端点,格式同ingres.from; + ports <[]Object> # 具体的端口对象列表,空值表示所有合法端口 + +注意: + 入栈和出栈哪个策略生效,由 policyTypes 来决定。 + 如果仅配置了podSelector,表明,当前限制仅限于当前的命名空间 +``` + +配置示例 + +```powershell +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: test-network-policy + namespace: default +spec: + podSelector: + matchLabels: + role: db + policyTypes: + - Ingress + - Egress + ingress: + - from: + - ipBlock: + cidr: 10.244.0.0/16 + except: + - 10.244.1.0/24 + - namespacesSelector: + matchLabels: + project: develop + - podSelector: + matchLabels: + arch: frontend + ports: + - protocol: TCP + port: 6379 + egress: + - to: + - ipBlock: + cidr: 10.244.0.0/24 + ports: + - protocol: TCP + port: 3306 +``` + +准备工作 + +```powershell +在default命名空间创建应用 +[root@kubernetes-master1 ~]# kubectl create deployment nginx-web --image=kubernetes-register.superopsmsb.com/superopsmsb/nginx_web:v0.1 +[root@kubernetes-master1 ~]# kubectl expose deployment nginx-web --port=80 + +在superopsmsb命名空间创建应用 +[root@kubernetes-master1 ~]# kubectl create deployment nginx-web --image=kubernetes-register.superopsmsb.com/superopsmsb/nginx_web:v0.1 --namespace=superopsmsb +[root@kubernetes-master1 ~]# kubectl expose deployment nginx-web --port=80 --namespace=superopsmsb + +确认效果 +[root@kubernetes-master1 ~]# kubectl get pod -o wide +NAME READY... AGE IP ... +nginx-web-5865bb968d-759lc 1/1 ... 10s 10.244.1.4 ... + +[root@kubernetes-master1 ~]# kubectl get pod -o wide -n superopsmsb +NAME READY... AGE IP ... +nginx-web-65d688fd6-h6sbpp 1/1 ... 10s 10.244.1.5 ... 
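# 补充示例(Pod 名称以实际环境生成的为准):也可以用 jsonpath 直接取出 Pod 名称和 IP,方便后面做网络策略测试时引用
[root@kubernetes-master1 ~]# kubectl get pod -n superopsmsb -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'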
+``` + +```powershell +开启superopsmsb的测试容器 +[root@kubernetes-master1 ~]# kubectl exec -it nginx-web-65d688fd6-h6sbp -n superopsmsb -- /bin/bash +root@nginx-web-65d688fd6-h6sbp:/# apt update; apt install net-tools iputils-ping dnsutils curl -y +``` + +```powershell +进入到容器里面查看效果 +[root@kubernetes-master1 ~]# kubectl exec -it nginx-web-5865bb968d-759lc -- /bin/bash +root@nginx-web-5865bb968d-759lc:/# apt update; apt install net-tools iputils-ping dnsutils curl -y + +域名检测 +root@nginx-web-5865bb968d-759lc:/# nslookup nginx-web +Server: 10.96.0.10 +Address: 10.96.0.10#53 +Name: nginx-web.default.svc.cluster.local +Address: 10.101.181.105 + +root@nginx-web-5865bb968d-759lc:/# nslookup nginx-web.superopsmsb.svc.cluster.local +Server: 10.96.0.10 +Address: 10.96.0.10#53 +Name: nginx-web.superopsmsb.svc.cluster.local +Address: 10.105.110.175 + +ping检测 +root@nginx-web-5865bb968d-759lc:/# ifconfig | grep 244 + inet 10.244.1.4 netmask 255.255.255.255 broadcast 0.0.0.0 + RX packets 22442 bytes 20956160 (19.9 MiB) +root@nginx-web-5865bb968d-759lc:/# ping -c 1 10.244.1.5 +PING 10.244.1.5 (10.244.1.5) 56(84) bytes of data. +64 bytes from 10.244.1.5: icmp_seq=1 ttl=63 time=0.357 ms +``` + + + +**基本控制** + +默认拒绝规则 + +```powershell +查看资源清单文件 +# cat 01_kubernetes_secure_networkpolicy_refuse.yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: deny-all-ingress + namespace: superopsmsb +spec: + podSelector: {} + policyTypes: ["Ingress", "Egress"] + +应用资源清单文件 +# kubectl apply -f 01_kubernetes_secure_networkpolicy_refuse.yaml +networkpolicy.networking.k8s.io/deny-all-ingress created +``` + +```powershell + +``` + + + +```powershell +尝试default空间资源访问superopsmsb空间资源 +root@nginx-web-5865bb968d-759lc:/# ping -c 1 10.244.1.5 +PING 10.244.1.5 (10.244.1.5) 56(84) bytes of data. + +--- 10.244.1.5 ping statistics --- +1 packets transmitted, 0 received, 100% packet loss, time 0ms + +root@nginx-web-5865bb968d-759lc:/# curl nginx-web.superopsmsb.svc.cluster.local +curl: (28) Failed to connect to nginx-web.superopsmsb.svc.cluster.local port 80: Connection timed out +结果显示: + 访问失败 + +default空间资源访问非superopsmsb空间资源正常。 +root@nginx-web-5865bb968d-759lc:/# curl nginx-web +Hello Nginx, nginx-web-5865bb968d-759lc-1.23.0 +``` + +默认启用规则 + +```powershell +查看资源清单文件 +# cat 02_kubernetes_secure_networkpolicy_allow.yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: allow-all-ingress + namespace: superopsmsb +spec: + podSelector: {} + policyTypes: ["Ingress", "Egress"] + egress: + - {} + ingress: + - {} + +应用资源清单文件 +# kubectl apply -f 02_kubernetes_secure_networkpolicy_allow.yaml +networkpolicy.networking.k8s.io/allow-all-ingress created +``` + +```powershell +在default空间访问superopsmsb空间资源 +root@nginx-web-5865bb968d-759lc:/# curl nginx-web.superopsmsb.svc.cluster.local +Hello Nginx, nginx-web-65d688fd6-h6sbp-1.23.0 +root@nginx-web-5865bb968d-759lc:/# ping -c 1 10.244.1.5 +PING 10.244.1.5 (10.244.1.5) 56(84) bytes of data. 
+64 bytes from 10.244.1.5: icmp_seq=1 ttl=63 time=0.181 ms +结果显示: + 网络策略成功了,而且多个网络策略可以叠加 + 注意:仅仅同名策略或者同类型策略可以覆盖 + +清理网络策略 +# kubectl delete -f 02_kubernetes_secure_networkpolicy_allow.yaml +``` + +当前namespace的流量限制 + +```powershell +查看资源清单文件 +# cat 03_kubernetes_secure_networkpolicy_ns.yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: allow-current-ns + namespace: superopsmsb +spec: + podSelector: {} + policyTypes: ["Ingress", "Egress"] + egress: + - to: + - podSelector: {} + ingress: + - from: + - podSelector: {} +配置解析: + 虽然设置了egress和ingress属性,但是下面的podSelector没有选择节点,表示只有当前命名空间所有节点不受限制 + +应用资源清单文件 +# kubectl apply -f 03_kubernetes_secure_networkpolicy_ns.yaml +networkpolicy.networking.k8s.io/allow-current-ns created +``` + +```powershell +default资源访问superopsmsb资源 +root@nginx-web-5865bb968d-759lc:/# curl nginx-web.superopsmsb.svc.cluster.local +curl: (28) Failed to connect to nginx-web.superopsmsb.svc.cluster.local port 80: Connection timed out +root@nginx-web-5865bb968d-759lc:/# ping -c 1 10.244.1.5 +PING 10.244.1.5 (10.244.1.5) 56(84) bytes of data. +结果显示: + 访问失败 +``` + +```powershell +superopsmsb资源访问同命名空间的其他资源 +root@nginx-web-65d688fd6-h6sbp:/# ping 10.244.1.2 +PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data. +64 bytes from 10.244.1.2: icmp_seq=1 ttl=63 time=0.206 ms +结果显示: + 同命名空间可以正常访问 +``` + + + +**小结** + +```powershell + +``` + + + +## 2.5 流量管控 + +学习目标 + +这一节,我们从 ip限制实践、pod限制实践、小结 三个方面来学习。 + +**ip限制知识** + +准备工作 + +```powershell +准备同名空间的两个测试pod +终端1 +kubectl run superopsmsb-client1 --image=kubernetes-register.superopsmsb.com/superopsmsb/busybox -n superopsmsb --rm -it -- /bin/sh + +终端2 +kubectl run superopsmsb-client2 --image=kubernetes-register.superopsmsb.com/superopsmsb/busybox -n superopsmsb --rm -it -- /bin/sh +``` + + + +简介 + +```powershell +查看资源清单文件 +# cat 04_kubernetes_secure_networkpolicy_ns_pod.yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: deny-ns-pod + namespace: superopsmsb +spec: + podSelector: {} + policyTypes: ["Ingress", "Egress"] + egress: + - to: + - podSelector: {} + ingress: + - from: + - ipBlock: + cidr: 10.244.0.0/16 + except: + - 10.244.2.0/24 + ports: + - protocol: TCP + port: 80 + 配置解析: + 虽然设置了egress和ingress属性,但是下面的podSelector没有选择节点,表示只有当前命名空间所有节点不受限制 + +应用资源对象文件 +# kubectl apply -f 04_kubernetes_secure_networkpolicy_ns_pod.yaml +networkpolicy.networking.k8s.io/deny-ns-pod created +``` + +```powershell +3网段测试容器查看效果 +/ # ifconfig | grep 244 + inet addr:10.244.3.7 Bcast:0.0.0.0 Mask:255.255.255.255 +/ # wget 10.244.1.5 +Connecting to 10.244.1.5 (10.244.1.5:80) +index.html 100% |*****************************************************| 46 0:00:00 ETA +/ +``` + +```powershell +2网段测试容器查看效果 +/ # ifconfig | grep 244 + inet addr:10.244.2.6 Bcast:0.0.0.0 Mask:255.255.255.255 +/ # wget 10.244.1.5 +Connecting to 10.244.1.5 (10.244.1.5:80) +wget: can't connect to remote host (10.244.1.5): Connection timed out +/ +结果显示: + 同namespace可以进行ip级别的访问测试 +``` + + + +**pod限制实践** + +实践 + +```powershell +查看在ip限制的基础上,扩充pod限制资源清单文件 +# cat 05_kubernetes_secure_networkpolicy_ns_port.yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: egress-controller + namespace: superopsmsb +spec: + podSelector: {} + policyTypes: ["Egress"] + egress: + - to: + ports: + - protocol: UDP + port: 53 + - to: + ports: + - protocol: TCP + port: 80 + 配置解析: + 虽然设置了egress和ingress属性,但是下面的podSelector没有选择节点,表示只有当前命名空间所有节点不受限制 + +应用资源对象文件 +# kubectl apply -f 05_kubernetes_secure_networkpolicy_ns_port.yaml 
+networkpolicy.networking.k8s.io/egress-controller created +``` + +```powershell +在所有pod测试容器中 +/ # nslookup nginx-web.default.svc.cluster.local +Server: 10.96.0.10 +Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local + +Name: nginx-web.default.svc.cluster.local +Address 1: 10.101.181.105 nginx-web.default.svc.cluster.local +/ # nslookup nginx-web +Server: 10.96.0.10 +Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local + +Name: nginx-web +Address 1: 10.105.110.175 nginx-web.superopsmsb.svc.cluster.local +结果显示: + 在不影响之前的策略的前提下,扩充了dns的解析功能 +``` + +**小结** + +```powershell + +``` + +## 2.6 基准测试 + +学习目标 + +这一节,我们从 软件简介、简单实践、小结 三个方面来学习 + +**软件简介** + +简介 + +```powershell + knb 全称 Kubernetes Network Benchmark,它是一个 bash 脚本,它将在目标 Kubernetes 集群上启动网络基准测试。 + +参考资料: + https://github.com/InfraBuilder/k8s-bench-suite +``` + +```powershell +它的主要特点如下: + 依赖很少的普通 bash 脚本 + 完成基准测试仅需 2 分钟 + 能够仅选择要运行的基准测试的子集 + 测试TCP 和 UDP带宽 + 自动检测 CNI MTU + 在基准报告中包括主机 cpu 和 ram 监控 + 能够使用 plotly/orca 基于结果数据创建静态图形图像(参见下面的示例) + 无需 ssh 访问,只需通过标准 kubectl访问目标集群 + 不需要高权限,脚本只会在两个节点上启动非常轻量级的 Pod。 + 它提供了数据的图形生成镜像和配套的服务镜像 +``` + +准备工作 + +```powershell +master节点下载核心服务镜像 +docker pull olegeech/k8s-bench-suite + +特定node节点下载镜像 +docker pull infrabuilder/bench-custom-monitor +docker pull infrabuilder/bench-iperf3 +docker pull quay.io/plotly/orca +``` + +注意事项 + +```powershell +由于需要测试k8s节点间的网络通信质量并且将结果输出,所以它在操作的过程中需要提前做一些准备 + 1 准备k8s集群的认证config文件 + 2 准备数据文件的映射目录 +``` + +**简单实践** + +命令解析 + +```powershell +对于容器内部的knb命令来说,它的基本格式如下: + knb --verbose --client-node 客户端节点 --server-node 服务端节点 + +注意: + docker run时候,可以通过NODE_AUTOSELECT从集群中自动选择几个节点来运行测试 + - 如果master节点的污点无法容忍,从而导致失败 +``` + +```powershell +查看命令帮助 +docker run -it --hostname knb --name knb --rm -v /tmp/my-graphs:/my-graphs -v /root/.kube/config:/.kube/config olegeech/k8s-bench-suite --help + +注意: + 如果网络效果不是很好的话,会因为镜像无法获取而导致失败,需要重试几次即可 + 所以,如果可以的话,提前下载到特定的工作节点上 +``` + +测试效果 + +```powershell +生成检测数据 +docker run -it --hostname knb --name knb --rm -v /tmp/my-graphs:/my-graphs -v /root/.kube/config:/.kube/config olegeech/k8s-bench-suite --verbose --server-node kubernetes-node1 --client-node kubernetes-node2 -o data -f /my-graphs/mybench.knbdata +``` + +```powershell +以json方式显示数据 +docker run -it --hostname knb --name knb --rm -v /tmp/my-graphs:/my-graphs -v /root/.kube/config:/.kube/config olegeech/k8s-bench-suite -fd /my-graphs/mybench.knbdata -o json +``` + +```powershell +根据数据进行绘图展示 + docker run -it --hostname knb --name knb --rm -v /tmp/my-graphs:/my-graphs -v /root/.kube/config:/.kube/config olegeech/k8s-bench-suite -fd /my-graphs/mybench.knbdata --plot --plot-dir /my-graphs/ +``` + +```powershell +设定数据的图形样式 +docker run -it --hostname knb --name knb --rm -v /tmp/my-graphs:/my-graphs -v /root/.kube/config:/.kube/config olegeech/k8s-bench-suite -fd /my-graphs/mybench.knbdata --plot --plot-args '--width 900 --height 600' --plot-dir /my-graphs +``` + + + +**小结** \ No newline at end of file diff --git a/docs/cloud/kubernetes/kubernetes_flannel.md b/docs/cloud/kubernetes/kubernetes_flannel.md new file mode 100644 index 00000000..8cfb2a43 --- /dev/null +++ b/docs/cloud/kubernetes/kubernetes_flannel.md @@ -0,0 +1,5674 @@ +# 1 集群环境实践 + +## 1.1 单主集群 + +### 1.1.1 基础环境部署 + +学习目标 + +这一节,我们从 集群规划、主机认证、小结 三个方面来学习。 + +**集群规划** + +简介 + +```powershell +在这里,我们以单主分布式的主机节点效果来演示kubernetes的最新版本的集群环境部署。 +``` + +![image-20220715092558666](../../img/kubernetes/kubernetes_flannel/image-20220715092558666.png) + +节点集群组件规划 + 
+![image-20220715093007357](../../img/kubernetes/kubernetes_flannel/image-20220715093007357.png) + +```powershell +master节点 + kubeadm(集群环境)、kubectl(集群管理)、kubelet(节点状态) + kube-apiserver、kube-controller-manager、kube-scheduler、etcd + containerd(docker方式部署)、flannel(插件部署) +node节点 + kubeadm(集群环境)、kubelet(节点状态) + kube-proxy、containerd(docker方式部署)、flannel(插件部署) +``` + +主机名规划 + +| 序号 | 主机ip | 主机名规划 | +| ---- | --------- | -------------------------------------------------------- | +| 1 | 10.0.0.12 | kubernetes-master.superopsmsb.com kubernetes-master | +| 2 | 10.0.0.15 | kubernetes-node1.superopsmsb.com kubernetes-node1 | +| 3 | 10.0.0.16 | kubernetes-node2.superopsmsb.com kubernetes-node2 | +| 4 | 10.0.0.17 | kubernetes-node3.superopsmsb.com kubernetes-node3 | +| 5 | 10.0.0.20 | kubernetes-register.superopsmsb.com kubernetes-register | + +```powershell +修改master节点主机的hosts文件 +[root@localhost ~]# cat /etc/hosts +10.0.0.12 kubernetes-master.superopsmsb.com kubernetes-master +10.0.0.15 kubernetes-node1.superopsmsb.com kubernetes-node1 +10.0.0.16 kubernetes-node2.superopsmsb.com kubernetes-node2 +10.0.0.17 kubernetes-node3.superopsmsb.com kubernetes-node3 +10.0.0.20 kubernetes-register.superopsmsb.com kubernetes-register +``` + + + +**主机认证** + +简介 + +```powershell + 因为整个集群节点中的很多文件配置都是一样的,所以我们需要配置跨主机免密码认证方式来定制集群的认证通信机制,这样后续在批量操作命令的时候,就非常轻松了。 +``` + +```powershell +为了让ssh通信速度更快一些,我们需要对ssh的配置文件进行一些基本的改造 +[root@localhost ~]# egrep 'APIA|DNS' /etc/ssh/sshd_config +GSSAPIAuthentication no +UseDNS no +[root@localhost ~]# systemctl restart sshd + +注意: + 这部分的操作,应该在所有集群主机环境初始化的时候进行设定 +``` + +脚本内容 + +```powershell +[root@localhost ~]# cat /data/scripts/01_remote_host_auth.sh +#!/bin/bash +# 功能: 批量设定远程主机免密码认证 +# 版本: v0.2 +# 作者: 书记 +# 联系: superopsmsb.com + +# 准备工作 +user_dir='/root' +host_file='/etc/hosts' +login_user='root' +login_pass='123456' +target_type=(部署 免密 同步 主机名 退出) + +# 菜单 +menu(){ + echo -e "\e[31m批量设定远程主机免密码认证管理界面\e[0m" + echo "=====================================================" + echo -e "\e[32m 1: 部署环境 2: 免密认证 3: 同步hosts \e[0m" + echo -e "\e[32m 4: 设定主机名 5:退出操作 \e[0m" + echo "=====================================================" +} +# expect环境 +expect_install(){ + if [ -f /usr/bin/expect ] + then + echo -e "\e[33mexpect环境已经部署完毕\e[0m" + else + yum install expect -y >> /dev/null 2>&1 && echo -e "\e[33mexpect软件安装完毕\e[0m" || (echo -e "\e[33mexpect软件安装失败\e[0m" && exit) + fi +} +# 秘钥文件生成环境 +create_authkey(){ + # 保证历史文件清空 + [ -d ${user_dir}/.ssh ] && rm -rf ${user_dir}/.ssh/* + # 构建秘钥文件对 + /usr/bin/ssh-keygen -t rsa -P "" -f ${user_dir}/.ssh/id_rsa + echo -e "\e[33m秘钥文件已经创建完毕\e[0m" +} +# expect自动匹配逻辑 +expect_autoauth_func(){ + # 接收外部参数 + command="$@" + expect -c " + spawn ${command} + expect { + \"yes/no\" {send \"yes\r\"; exp_continue} + \"*password*\" {send \"${login_pass}\r\"; exp_continue} + \"*password*\" {send \"${login_pass}\r\"} + }" +} +# 跨主机传输文件认证 +sshkey_auth_func(){ + # 接收外部的参数 + local host_list="$*" + for ip in ${host_list} + do + # /usr/bin/ssh-copy-id -i ${user_dir}/.ssh/id_rsa.pub root@10.0.0.12 + cmd="/usr/bin/ssh-copy-id -i ${user_dir}/.ssh/id_rsa.pub" + remote_host="${login_user}@${ip}" + expect_autoauth_func ${cmd} ${remote_host} + done +} + +# 跨主机同步hosts文件 +scp_hosts_func(){ + # 接收外部的参数 + local host_list="$*" + for ip in ${host_list} + do + remote_host="${login_user}@${ip}" + scp ${host_file} ${remote_host}:${host_file} + done +} + +# 跨主机设定主机名规划 +set_hostname_func(){ + # 接收外部的参数 + local host_list="$*" + for ip in ${host_list} + do + host_name=$(grep ${ip} ${host_file}|awk '{print 
$NF}') + remote_host="${login_user}@${ip}" + ssh ${remote_host} "hostnamectl set-hostname ${host_name}" + done +} +# 帮助信息逻辑 +Usage(){ + echo "请输入有效的操作id" +} +# 逻辑入口 +while true +do + menu + read -p "请输入有效的操作id: " target_id + if [ ${#target_type[@]} -ge ${target_id} ] + then + if [ ${target_type[${target_id}-1]} == "部署" ] + then + echo "开始部署环境操作..." + expect_install + create_authkey + elif [ ${target_type[${target_id}-1]} == "免密" ] + then + read -p "请输入需要批量远程主机认证的主机列表范围(示例: {12..19}): " num_list + ip_list=$(eval echo 10.0.0.$num_list) + echo "开始执行免密认证操作..." + sshkey_auth_func ${ip_list} + elif [ ${target_type[${target_id}-1]} == "同步" ] + then + read -p "请输入需要批量远程主机同步hosts的主机列表范围(示例: {12..19}): " num_list + ip_list=$(eval echo 10.0.0.$num_list) + echo "开始执行同步hosts文件操作..." + scp_hosts_func ${ip_list} + elif [ ${target_type[${target_id}-1]} == "主机名" ] + then + read -p "请输入需要批量设定远程主机主机名的主机列表范围(示例: {12..19}): " num_list + ip_list=$(eval echo 10.0.0.$num_list) + echo "开始执行设定主机名操作..." + set_hostname_func ${ip_list} + elif [ ${target_type[${target_id}-1]} == "退出" ] + then + echo "开始退出管理界面..." + exit + fi + else + Usage + fi +done +``` + +```powershell +为了更好的把环境部署成功,最好提前更新一下软件源信息 +[root@localhost ~]# yum makecache +``` + +```powershell +查看脚本执行效果 +[root@localhost ~]# /bin/bash /data/scripts/01_remote_host_auth.sh +批量设定远程主机免密码认证管理界面 +===================================================== + 1: 部署环境 2: 免密认证 3: 同步hosts + 4: 设定主机名 5:退出操作 +===================================================== +请输入有效的操作id: 1 +开始部署环境操作... +expect环境已经部署完毕 +Generating public/private rsa key pair. +Your identification has been saved in /root/.ssh/id_rsa. +Your public key has been saved in /root/.ssh/id_rsa.pub. +The key fingerprint is: +SHA256:u/Tzk0d9sNtG6r9Kx+6xPaENNqT3Lw178XWXQhX1yMw root@kubernetes-master +The key's randomart image is: ++---[RSA 2048]----+ +| .+| +| + o.| +| E .| +| o. | +| S + +.| +| . . *=+B| +| o o+B%O| +| . o. +.O+X| +| . .o.=+XB| ++----[SHA256]-----+ +秘钥文件已经创建完毕 +批量设定远程主机免密码认证管理界面 +===================================================== + 1: 部署环境 2: 免密认证 3: 同步hosts + 4: 设定主机名 5:退出操作 +===================================================== +请输入有效的操作id: 2 +请输入需要批量远程主机认证的主机列表范围(示例: {12..19}): 12 +开始执行免密认证操作... +spawn /usr/bin/ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.0.0.12 +... +Now try logging into the machine, with: "ssh 'root@10.0.0.12'" +and check to make sure that only the key(s) you wanted were added. + +批量设定远程主机免密码认证管理界面 +===================================================== + 1: 部署环境 2: 免密认证 3: 同步hosts + 4: 设定主机名 5:退出操作 +===================================================== +请输入有效的操作id: 2 +请输入需要批量远程主机认证的主机列表范围(示例: {12..19}): {15..17} +开始执行免密认证操作... +spawn /usr/bin/ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.0.0.15 +... +Now try logging into the machine, with: "ssh 'root@10.0.0.15'" +and check to make sure that only the key(s) you wanted were added. + +spawn /usr/bin/ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.0.0.16 +... +Now try logging into the machine, with: "ssh 'root@10.0.0.16'" +and check to make sure that only the key(s) you wanted were added. + +spawn /usr/bin/ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.0.0.17 +... +Now try logging into the machine, with: "ssh 'root@10.0.0.17'" +and check to make sure that only the key(s) you wanted were added. + +批量设定远程主机免密码认证管理界面 +===================================================== + 1: 部署环境 2: 免密认证 3: 同步hosts + 4: 设定主机名 5:退出操作 +===================================================== +请输入有效的操作id: 2 +请输入需要批量远程主机认证的主机列表范围(示例: {12..19}): 20 +开始执行免密认证操作... 
+spawn /usr/bin/ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.0.0.20 +... +Now try logging into the machine, with: "ssh 'root@10.0.0.20'" +and check to make sure that only the key(s) you wanted were added. + +批量设定远程主机免密码认证管理界面 +===================================================== + 1: 部署环境 2: 免密认证 3: 同步hosts + 4: 设定主机名 5:退出操作 +===================================================== +请输入有效的操作id: 3 +请输入需要批量远程主机同步hosts的主机列表范围(示例: {12..19}): 12 +开始执行同步hosts文件操作... +hosts 100% 470 1.2MB/s 00:00 +批量设定远程主机免密码认证管理界面 +===================================================== + 1: 部署环境 2: 免密认证 3: 同步hosts + 4: 设定主机名 5:退出操作 +===================================================== +请输入有效的操作id: 3 +请输入需要批量远程主机同步hosts的主机列表范围(示例: {12..19}): {15..17} +开始执行同步hosts文件操作... +hosts 100% 470 226.5KB/s 00:00 +hosts 100% 470 458.8KB/s 00:00 +hosts 100% 470 533.1KB/s 00:00 +批量设定远程主机免密码认证管理界面 +===================================================== + 1: 部署环境 2: 免密认证 3: 同步hosts + 4: 设定主机名 5:退出操作 +===================================================== +请输入有效的操作id: 3 +请输入需要批量远程主机同步hosts的主机列表范围(示例: {12..19}): 20 +开始执行同步hosts文件操作... +hosts 100% 470 287.8KB/s 00:00 +批量设定远程主机免密码认证管理界面 +===================================================== + 1: 部署环境 2: 免密认证 3: 同步hosts + 4: 设定主机名 5:退出操作 +===================================================== +请输入有效的操作id: 4 +请输入需要批量设定远程主机主机名的主机列表范围(示例: {12..19}): 12 +开始执行设定主机名操作... +批量设定远程主机免密码认证管理界面 +===================================================== + 1: 部署环境 2: 免密认证 3: 同步hosts + 4: 设定主机名 5:退出操作 +===================================================== +请输入有效的操作id: 4 +请输入需要批量设定远程主机主机名的主机列表范围(示例: {12..19}): {15..17} +开始执行设定主机名操作... +批量设定远程主机免密码认证管理界面 +===================================================== + 1: 部署环境 2: 免密认证 3: 同步hosts + 4: 设定主机名 5:退出操作 +===================================================== +请输入有效的操作id: 4 +请输入需要批量设定远程主机主机名的主机列表范围(示例: {12..19}): 20 +开始执行设定主机名操作... +批量设定远程主机免密码认证管理界面 +===================================================== + 1: 部署环境 2: 免密认证 3: 同步hosts + 4: 设定主机名 5:退出操作 +===================================================== +请输入有效的操作id: 5 +开始退出管理界面... 
+``` + +检查效果 + +```powershell +[root@localhost ~]# exec /bin/bash +[root@kubernetes-master ~]# for i in 12 {15..17} 20 +> do +> name=$(ssh root@10.0.0.$i "hostname") +> echo 10.0.0.$i $name +> done +10.0.0.12 kubernetes-master +10.0.0.15 kubernetes-node1 +10.0.0.16 kubernetes-node2 +10.0.0.17 kubernetes-node3 +10.0.0.20 kubernetes-register +``` + + + +**小结** + +``` + +``` + + + +### 1.1.2 集群基础环境 + +学习目标 + +这一节,我们从 内核调整、软件源配置、小结 三个方面来学习。 + +**内核调整** + +简介 + +```powershell +根据kubernetes官方资料的相关信息,我们需要对kubernetes集群是所有主机进行内核参数的调整。 +``` + +禁用swap + +```powershell + 部署集群时,kubeadm默认会预先检查当前主机是否禁用了Swap设备,并在未禁用时强制终止部署过程。因此,在主机内存资源充裕的条件下,需要禁用所有的Swap设备,否则,就需要在后文的kubeadm init及kubeadm join命令执行时额外使用相关的选项忽略检查错误 + +关闭Swap设备,需要分两步完成。 +首先是关闭当前已启用的所有Swap设备: +swapoff -a + +而后编辑/etc/fstab配置文件,注释用于挂载Swap设备的所有行。 +方法一:手工编辑 +vim /etc/fstab +# UUID=0a55fdb5-a9d8-4215-80f7-f42f75644f69 none swap sw 0 0 + +方法二: +sed -i 's/.*swap.*/#&/' /etc/fstab + 替换后位置的&代表前面匹配的整行内容 +注意: + 只需要注释掉自动挂载SWAP分区项即可,防止机子重启后swap启用 + +内核(禁用swap)参数 +cat >> /etc/sysctl.d/k8s.conf << EOF +vm.swappiness=0 +EOF +sysctl -p /etc/sysctl.d/k8s.conf +``` + +网络参数 + +```powershell +配置iptables参数,使得流经网桥的流量也经过iptables/netfilter防火墙 +cat >> /etc/sysctl.d/k8s.conf << EOF +net.bridge.bridge-nf-call-ip6tables = 1 +net.bridge.bridge-nf-call-iptables = 1 +net.ipv4.ip_forward = 1 +EOF + +配置生效 +modprobe br_netfilter +modprobe overlay +sysctl -p /etc/sysctl.d/k8s.conf +``` + +```powershell +脚本方式 +[root@localhost ~]# cat /data/scripts/02_kubernetes_kernel_conf.sh +#!/bin/bash +# 功能: 批量设定kubernetes的内核参数调整 +# 版本: v0.1 +# 作者: 书记 +# 联系: superopsmsb.com + +# 禁用swap +swapoff -a +sed -i 's/.*swap.*/#&/' /etc/fstab +cat >> /etc/sysctl.d/k8s.conf << EOF +vm.swappiness=0 +EOF +sysctl -p /etc/sysctl.d/k8s.conf + +# 打开网络转发 +cat >> /etc/sysctl.d/k8s.conf << EOF +net.bridge.bridge-nf-call-ip6tables = 1 +net.bridge.bridge-nf-call-iptables = 1 +net.ipv4.ip_forward = 1 +EOF + +# 加载相应的模块 +modprobe br_netfilter +modprobe overlay +sysctl -p /etc/sysctl.d/k8s.conf +``` + +脚本执行 + +```powershell +master主机执行效果 +[root@kubernetes-master ~]# /bin/bash /data/scripts/02_kubernetes_kernel_conf.sh +vm.swappiness = 0 +vm.swappiness = 0 +net.bridge.bridge-nf-call-ip6tables = 1 +net.bridge.bridge-nf-call-iptables = 1 +net.ipv4.ip_forward = 1 +``` + +```powershell +node主机执行效果 +[root@kubernetes-master ~]# for i in {15..17};do ssh root@10.0.0.$i mkdir /data/scripts -p; scp /data/scripts/02_kubernetes_kernel_conf.sh root@10.0.0.$i:/data/scripts/02_kubernetes_kernel_conf.sh;ssh root@10.0.0.$i "/bin/bash /data/scripts/02_kubernetes_kernel_conf.sh";done +02_kubernetes_kernel_conf.sh 100% 537 160.6KB/s 00:00 +vm.swappiness = 0 +vm.swappiness = 0 +net.bridge.bridge-nf-call-ip6tables = 1 +net.bridge.bridge-nf-call-iptables = 1 +net.ipv4.ip_forward = 1 +02_kubernetes_kernel_conf.sh 100% 537 374.4KB/s 00:00 +vm.swappiness = 0 +vm.swappiness = 0 +net.bridge.bridge-nf-call-ip6tables = 1 +net.bridge.bridge-nf-call-iptables = 1 +net.ipv4.ip_forward = 1 +02_kubernetes_kernel_conf.sh 100% 537 154.5KB/s 00:00 +vm.swappiness = 0 +vm.swappiness = 0 +net.bridge.bridge-nf-call-ip6tables = 1 +net.bridge.bridge-nf-call-iptables = 1 +net.ipv4.ip_forward = 1 +``` + + + +**软件源配置** + +简介 + +```powershell + 由于我们需要在多台主机上初始化kubernetes主机环境,所以我们需要在多台主机上配置kubernetes的软件源,以最简便的方式部署kubernetes环境。 +``` + +定制阿里云软件源 + +```powershell +定制阿里云的关于kubernetes的软件源 +[root@kubernetes-master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF +[kubernetes] +name=Kubernetes +baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 
+enabled=1 +gpgcheck=0 +repo_gpgcheck=0 +gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg +EOF + +更新软件源 +[root@kubernetes-master ~]# yum makecache fast +``` + +其他节点主机同步软件源 + +```powershell +[root@kubernetes-master ~]# for i in {15..17};do scp /etc/yum.repos.d/kubernetes.repo root@10.0.0.$i:/etc/yum.repos.d/kubernetes.repo;ssh root@10.0.0.$i "yum makecache fast";done +``` + + + +**小结** + +``` + +``` + + + +### 1.1.3 容器环境 + +学习目标 + +这一节,我们从 Docker环境、环境配置、小结 三个方面来学习。 + +**Docker环境** + +简介 + +```powershell + 由于kubernetes1.24版本才开始将默认支持的容器引擎转换为了Containerd了,所以这里我们还是以Docker软件作为后端容器的引擎,因为目前的CKA考试环境是以kubernetes1.23版本为基准的。 +``` + +软件源配置 + +```powershell +安装基础依赖软件 +[root@kubernetes-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 + +定制专属的软件源 +[root@kubernetes-master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo +``` + +安装软件 + +```powershell +确定最新版本的docker +[root@kubernetes-master ~]# yum list docker-ce --showduplicates | sort -r + +安装最新版本的docker +[root@kubernetes-master ~]# yum install -y docker-ce +``` + +检查效果 + +```powershell +启动docker服务 +[root@kubernetes-master ~]# systemctl restart docker + +检查效果 +[root@kubernetes-master ~]# docker version +Client: Docker Engine - Community + Version: 20.10.17 + API version: 1.41 + Go version: go1.17.11 + Git commit: 100c701 + Built: Mon Jun 6 23:05:12 2022 + OS/Arch: linux/amd64 + Context: default + Experimental: true + +Server: Docker Engine - Community + Engine: + Version: 20.10.17 + API version: 1.41 (minimum version 1.12) + Go version: go1.17.11 + Git commit: a89b842 + Built: Mon Jun 6 23:03:33 2022 + OS/Arch: linux/amd64 + Experimental: false + containerd: + Version: 1.6.6 + GitCommit: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 + runc: + Version: 1.1.2 + GitCommit: v1.1.2-0-ga916309 + docker-init: + Version: 0.19.0 + GitCommit: de40ad0 +``` + + + +**环境配置** + +需求 + +```powershell +1 镜像仓库 + 默认安装的docker会从官方网站上获取docker镜像,有时候会因为网络因素无法获取,所以我们需要配置国内镜像仓库的加速器 +2 kubernetes的改造 + kubernetes的创建容器,需要借助于kubelet来管理Docker,而默认的Docker服务进程的管理方式不是kubelet的类型,所以需要改造Docker的服务启动类型为systemd方式。 + +注意: + 默认情况下,Docker的服务修改有两种方式: + 1 Docker服务 - 需要修改启动服务文件,需要重载服务文件,比较繁琐。 + 2 daemon.json文件 - 修改文件后,只需要重启docker服务即可,该文件默认情况下不存在。 +``` + +定制docker配置文件 + +```powershell +定制配置文件 +[root@kubernetes-master ~]# cat >> /etc/docker/daemon.json <<-EOF +{ + "registry-mirrors": [ + "http://74f21445.m.daocloud.io", + "https://registry.docker-cn.com", + "http://hub-mirror.c.163.com", + "https://docker.mirrors.ustc.edu.cn" + ], + "insecure-registries": ["10.0.0.20:80"], + "exec-opts": ["native.cgroupdriver=systemd"] +} +EOF + +重启docker服务 +[root@kubernetes-master ~]# systemctl restart docker +``` + +检查效果 + +```powershell +查看配置效果 +[root@kubernetes-master ~]# docker info +Client: + ... + +Server: + ... + Cgroup Driver: systemd + ... 
+ Insecure Registries: + 10.0.0.20:80 + 127.0.0.0/8 + Registry Mirrors: + http://74f21445.m.daocloud.io/ + https://registry.docker-cn.com/ + http://hub-mirror.c.163.com/ + https://docker.mirrors.ustc.edu.cn/ + Live Restore Enabled: false +``` + +docker环境定制脚本 + +```powershell +查看脚本内容 +[root@localhost ~]# cat /data/scripts/03_kubernetes_docker_install.sh +#!/bin/bash +# 功能: 安装部署Docker容器环境 +# 版本: v0.1 +# 作者: 书记 +# 联系: superopsmsb.com + +# 准备工作 + +# 软件源配置 +softs_base(){ + # 安装基础软件 + yum install -y yum-utils device-mapper-persistent-data lvm2 + # 定制软件仓库源 + yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo + # 更新软件源 + systemctl restart network + yum makecache fast +} + +# 软件安装 +soft_install(){ + # 安装最新版的docker软件 + yum install -y docker-ce + # 重启docker服务 + systemctl restart docker +} + +# 加速器配置 +speed_config(){ + # 定制加速器配置 +cat > /etc/docker/daemon.json <<-EOF +{ + "registry-mirrors": [ + "http://74f21445.m.daocloud.io", + "https://registry.docker-cn.com", + "http://hub-mirror.c.163.com", + "https://docker.mirrors.ustc.edu.cn" + ], + "insecure-registries": ["10.0.0.20:80"], + "exec-opts": ["native.cgroupdriver=systemd"] +} +EOF + # 重启docker服务 + systemctl restart docker +} + +# 环境监测 +docker_check(){ + process_name=$(docker info | grep 'p D' | awk '{print $NF}') + [ "${process_name}" == "systemd" ] && echo "Docker软件部署完毕" || (echo "Docker软件部署失败" && exit) +} + +# 软件部署 +main(){ + softs_base + soft_install + speed_config + docker_check +} + +# 调用主函数 +main +``` + +```powershell +其他主机环境部署docker +[root@kubernetes-master ~]# for i in {15..17} 20; do ssh root@10.0.0.$i "mkdir -p /data/scripts"; scp /data/scripts/03_kubernetes_docker_install.sh root@10.0.0.$i:/data/scripts/03_kubernetes_docker_install.sh; done + +其他主机各自执行下面的脚本 +/bin/bash /data/scripts/03_kubernetes_docker_install.sh +``` + + + +**小结** + +``` + +``` + + + + + +### 1.1.4 仓库环境 + +学习目标 + +这一节,我们从 容器仓库、Harbor实践、小结 三个方面来学习。 + +**容器仓库** + +简介 + +```powershell + Harbor是一个用于存储和分发Docker镜像的企业级Registry服务器,虽然Docker官方也提供了公共的镜像仓库,但是从安全和效率等方面考虑,部署企业内部的私有环境Registry是非常必要的,Harbor除了存储和分发镜像外还具有用户管理,项目管理,配置管理和日志查询,高可用部署等主要功能。 + 在本地搭建一个Harbor服务,其他在同一局域网的机器可以使用Harbor进行镜像提交和拉取,搭建前需要本地安装docker服务和docker-compose服务。 + 最新的软件版本是 2.5.3,我们这里采用 2.5.0版本。 +``` + +compose部署 + +```powershell +安装docker-compose +[root@kubernetes-register ~]# yum install -y docker-compose +``` + +harbor部署 + +```powershell +下载软件 +[root@kubernetes-register ~]# mkdir /data/{softs,server} -p +[root@kubernetes-register ~]# cd /data/softs +[root@kubernetes-register ~]# wget https://github.com/goharbor/harbor/releases/download/v2.5.0/harbor-offline-installer-v2.5.0.tgz + +解压软件 +[root@kubernetes-register ~]# tar -zxvf harbor-offline-installer-v2.5.0.tgz -C /data/server/ +[root@kubernetes-register ~]# cd /data/server/harbor/ + +加载镜像 +[root@kubernetes-register /data/server/harbor]# docker load < harbor.v2.5.0.tar.gz +[root@kubernetes-register /data/server/harbor]# docker images +REPOSITORY TAG IMAGE ID CREATED SIZE +goharbor/harbor-exporter v2.5.0 36396f138dfb 3 months ago 86.7MB +goharbor/chartmuseum-photon v2.5.0 eaedcf1f700b 3 months ago 225MB +goharbor/redis-photon v2.5.0 1e00fcc9ae63 3 months ago 156MB +goharbor/trivy-adapter-photon v2.5.0 4e24a6327c97 3 months ago 164MB +goharbor/notary-server-photon v2.5.0 6d5fe726af7f 3 months ago 112MB +goharbor/notary-signer-photon v2.5.0 932eed8b6e8d 3 months ago 109MB +goharbor/harbor-registryctl v2.5.0 90ef6b10ab31 3 months ago 136MB +goharbor/registry-photon v2.5.0 30e130148067 3 months ago 77.5MB 
+goharbor/nginx-photon v2.5.0 5041274b8b8a 3 months ago 44MB +goharbor/harbor-log v2.5.0 89fd73f9714d 3 months ago 160MB +goharbor/harbor-jobservice v2.5.0 1d097e877be4 3 months ago 226MB +goharbor/harbor-core v2.5.0 42a54bc05b02 3 months ago 202MB +goharbor/harbor-portal v2.5.0 c206e936f4f9 3 months ago 52.3MB +goharbor/harbor-db v2.5.0 d40a1ae87646 3 months ago 223MB +goharbor/prepare v2.5.0 36539574668f 3 months ago 268MB +``` + +```powershell +备份配置 +[root@kubernetes-register /data/server/harbor]# cp harbor.yml.tmpl harbor.yml + +修改配置 +[root@kubernetes-register /data/server/harbor]# vim harbor.yml.tmpl + # 修改主机名 + hostname: kubernetes-register.superopsmsb.com + http: + port: 80 + #https: 注释ssl相关的部分 + # port: 443 + # certificate: /your/certificate/path + # private_key: /your/private/key/path + # 修改harbor的登录密码 + harbor_admin_password: 123456 + # 设定harbor的数据存储目录 + data_volume: /data/server/harbor/data +``` + +```powershell +配置harbor +[root@kubernetes-register /data/server/harbor]# ./prepare +prepare base dir is set to /data/server/harbor +WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. Please make sure to upgrade to https +... +Generated configuration file: /compose_location/docker-compose.yml +Clean up the input dir + +启动harbor +[root@kubernetes-register /data/server/harbor]# ./install.sh +[Step 0]: checking if docker is installed ... +... +[Step 1]: checking docker-compose is installed ... +... +[Step 2]: loading Harbor images ... +... +Loaded image: goharbor/harbor-exporter:v2.5.0 +[Step 3]: preparing environment ... +... +[Step 4]: preparing harbor configs ... +... +[Step 5]: starting Harbor ... +... +✔ ----Harbor has been installed and started successfully.---- +``` + +```powershell +检查效果 +[root@kubernetes-register /data/server/harbor]# docker-compose ps + Name Command State Ports +------------------------------------------------------------------------------------------------- +harbor-core /harbor/entrypoint.sh Up +harbor-db /docker-entrypoint.sh 96 13 Up +harbor-jobservice /harbor/entrypoint.sh Up +harbor-log /bin/sh -c /usr/local/bin/ ... 
Up 127.0.0.1:1514->10514/tcp +harbor-portal nginx -g daemon off; Up +nginx nginx -g daemon off; Up 0.0.0.0:80->8080/tcp,:::80->8080/tcp +redis redis-server /etc/redis.conf Up +registry /home/harbor/entrypoint.sh Up +registryctl /home/harbor/start.sh Up +``` + +定制服务启动脚本 + +```powershell +定制服务启动脚本 +[root@kubernetes-register /data/server/harbor]# cat /lib/systemd/system/harbor.service +[Unit] +Description=Harbor +After=docker.service systemd-networkd.service systemd-resolved.service +Requires=docker.service +Documentation=http://github.com/vmware/harbor + +[Service] +Type=simple +Restart=on-failure +RestartSec=5 +#需要注意harbor的安装位置 +ExecStart=/usr/bin/docker-compose --file /data/server/harbor/docker-compose.yml up +ExecStop=/usr/bin/docker-compose --file /data/server/harbor/docker-compose.yml down + +[Install] +WantedBy=multi-user.target +``` + +```powershell +加载服务配置文件 +[root@kubernetes-register /data/server/harbor]# systemctl daemon-reload + +启动服务 +[root@kubernetes-register /data/server/harbor]# systemctl start harbor + +检查状态 +[root@kubernetes-register /data/server/harbor]# systemctl status harbor + +设置开机自启动 +[root@kubernetes-register /data/server/harbor]# systemctl enable harbor +``` + + + +**Harbor实践** + +windows定制harbor的访问域名 + +```powershell +10.0.0.20 kubernetes-register.superopsmsb.com +``` + +浏览器访问域名,用户名: admin, 密码:123456 + +![image-20220715125357778](../../img/kubernetes/kubernetes_flannel/image-20220715125357778.png) + +```powershell +输入用户名和密码后,点击登录,查看harbor的首页效果 +``` + +![image-20220715125531167](../../img/kubernetes/kubernetes_flannel/image-20220715125531167.png) + +创建工作账号 + +```powershell +点击用户管理,进入用户创建界面 +``` + +![image-20220715125606669](../../img/kubernetes/kubernetes_flannel/image-20220715125606669.png) + +```powershell +点击创建用户,进入用户创建界面 +``` + +![image-20220715125758524](../../img/kubernetes/kubernetes_flannel/image-20220715125758524.png) + +```powershell +点击确定后,查看创建用户效果 +``` + +![image-20220715125834969](../../img/kubernetes/kubernetes_flannel/image-20220715125834969.png) + +```powershell +点击左上角的管理员名称,退出终端页面 +``` + +仓库管理 + +```powershell +采用普通用户登录到harbor中 +``` + +![image-20220715130019232](../../img/kubernetes/kubernetes_flannel/image-20220715130019232.png) + +```powershell +创建shuji用户专用的项目仓库,名称为 superopsmsb,权限为公开的 +``` + +![image-20220715130129241](../../img/kubernetes/kubernetes_flannel/image-20220715130129241.png) + +```powershell +点击确定后,查看效果 +``` + +![image-20220715130201235](../../img/kubernetes/kubernetes_flannel/image-20220715130201235.png) + +提交镜像 + +```powershell +准备docker的配置文件 +[root@kubernetes-master ~]# grep insecure /etc/docker/daemon.json + "insecure-registries": ["kubernetes-register.superopsmsb.com"], +[root@kubernetes-master ~]# systemctl restart docker +[root@kubernetes-master ~]# docker info | grep -A2 Insecure + Insecure Registries: + kubernetes-register.superopsmsb.com + 127.0.0.0/8 +``` + +```powershell +登录仓库 +[root@kubernetes-master ~]# docker login kubernetes-register.superopsmsb.com -u shuji +Password: # 输入登录密码 A12345678a +WARNING! Your password will be stored unencrypted in /root/.docker/config.json. +Configure a credential helper to remove this warning. 
See +https://docs.docker.com/engine/reference/commandline/login/#credentials-store + +Login Succeeded +``` + +```powershell +下载镜像 +[root@kubernetes-master ~]# docker pull busybox + +定制镜像标签 +[root@kubernetes-master ~]# docker tag busybox kubernetes-register.superopsmsb.com/superopsmsb/busybox:v0.1 + +推送镜像 +[root@kubernetes-master ~]# docker push kubernetes-register.superopsmsb.com/superopsmsb/busybox:v0.1 +The push refers to repository [kubernetes-register.superopsmsb.com/superopsmsb/busybox] +7ad00cd55506: Pushed +v0.1: digest: sha256:dcdf379c574e1773d703f0c0d56d67594e7a91d6b84d11ff46799f60fb081c52 size: 527 +``` + +```powershell +删除镜像 +[root@kubernetes-master ~]# docker rmi busybox kubernetes-register.superopsmsb.com/superopsmsb/busybox:v0.1 + +下载镜像 +[root@kubernetes-master ~]# docker pull kubernetes-register.superopsmsb.com/superopsmsb/busybox:v0.1 +v0.1: Pulling from superopsmsb/busybox +19d511225f94: Pull complete +Digest: sha256:dcdf379c574e1773d703f0c0d56d67594e7a91d6b84d11ff46799f60fb081c52 +Status: Downloaded newer image for kubernetes-register.superopsmsb.com/superopsmsb/busybox:v0.1 +kubernetes-register.superopsmsb.com/superopsmsb/busybox:v0.1 +结果显示: + 我们的harbor私有仓库就构建好了 +``` + +同步所有的docker配置 + +```powershell +同步所有主机的docker配置 +[root@kubernetes-master ~]# for i in 15 16 17;do scp /etc/docker/daemon.json root@10.0.0.$i:/etc/docker/daemon.json; ssh root@10.0.0.$i "systemctl restart docker"; done +daemon.json 100% 299 250.0KB/s 00:00 +daemon.json 100% 299 249.6KB/s 00:00 +daemon.json 100% 299 243.5KB/s 00:00 +``` + +**小结** + +``` + +``` + + + +### 1.1.5 master环境部署 + +学习目标 + +这一节,我们从 软件安装、环境初始化、小结 三个方面来学习。 + +**软件安装** + +简介 + +```powershell + 我们已经把kubernetes集群所有主机的软件源配置完毕了,所以接下来,我们需要做的就是如何部署kubernetes环境 +``` + +软件安装 + +```powershell +查看默认的最新版本 +[root@kubernetes-master ~]# yum list kubeadm +已加载插件:fastestmirror +Loading mirror speeds from cached hostfile +可安装的软件包 +kubeadm.x86_64 1.24.3-0 kubernetes + +查看软件的最近版本 +[root@kubernetes-master ~]# yum list kubeadm --showduplicates | sort -r | grep 1.2 +kubeadm.x86_64 1.24.3-0 kubernetes +kubeadm.x86_64 1.24.2-0 kubernetes +kubeadm.x86_64 1.24.1-0 kubernetes +kubeadm.x86_64 1.24.0-0 kubernetes +kubeadm.x86_64 1.23.9-0 kubernetes +kubeadm.x86_64 1.23.8-0 kubernetes +... 
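# 说明: yum 默认会安装最新版(上面可安装的软件包为 1.24.3),而本文后续统一使用 1.23.9 版本,
# 因此下一步需要按照 "软件名-版本号-发布号" 的格式显式指定版本进行安装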
+``` + +```powershell +安装制定版本 +[root@kubernetes-master ~]# yum install kubeadm-1.23.9-0 kubectl-1.23.9-0 kubelet-1.23.9-0 -y + +注意: + +``` + +![image-20220715160251133](../../img/kubernetes/kubernetes_flannel/image-20220715160251133.png) + +```powershell +核心软件解析 + kubeadm 主要是对k8s集群来进行管理的,所以在master角色主机上安装 + kubelet 是以服务的方式来进行启动,主要用于收集节点主机的信息 + kubectl 主要是用来对集群中的资源对象进行管控,一半情况下,node角色的节点是不需要安装的。 +依赖软件解析 + libnetfilter_xxx是Linux系统下网络数据包过滤的配置工具 + kubernetes-cni是容器网络通信的软件 + socat是kubelet的依赖 + cri-tools是CRI容器运行时接口的命令行工具 +``` + +命令解读 + +```powershell +查看集群初始化命令 +[root@kubernetes-master ~]# kubeadm version +kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.9", GitCommit:"c1de2d70269039fe55efb98e737d9a29f9155246", GitTreeState:"clean", BuildDate:"2022-07-13T14:25:37Z", GoVersion:"go1.17.11", Compiler:"gc", Platform:"linux/amd64"} +[root@kubernetes-master ~]# kubeadm --help + + + ┌──────────────────────────────────────────────────────────┐ + │ KUBEADM │ + │ Easily bootstrap a secure Kubernetes cluster │ + │ │ + │ Please give us feedback at: │ + │ https://github.com/kubernetes/kubeadm/issues │ + └──────────────────────────────────────────────────────────┘ + +Example usage: + + Create a two-machine cluster with one control-plane node + (which controls the cluster), and one worker node + (where your workloads, like Pods and Deployments run). + + ┌──────────────────────────────────────────────────────────┐ + │ On the first machine: │ + ├──────────────────────────────────────────────────────────┤ + │ control-plane# kubeadm init │ + └──────────────────────────────────────────────────────────┘ + + ┌──────────────────────────────────────────────────────────┐ + │ On the second machine: │ + ├──────────────────────────────────────────────────────────┤ + │ worker# kubeadm join │ + └──────────────────────────────────────────────────────────┘ + + You can then repeat the second step on as many other machines as you like. + +Usage: + kubeadm [command] + +Available Commands: + certs Commands related to handling kubernetes certificates + completion Output shell completion code for the specified shell (bash or zsh) + config Manage configuration for a kubeadm cluster persisted in a ConfigMap in the cluster + help Help about any command + init Run this command in order to set up the Kubernetes control plane + join Run this on any machine you wish to join an existing cluster + kubeconfig Kubeconfig file utilities + reset Performs a best effort revert of changes made to this host by 'kubeadm init' or 'kubeadm join' + token Manage bootstrap tokens + upgrade Upgrade your cluster smoothly to a newer version with this command + version Print the version of kubeadm + +Flags: + --add-dir-header If true, adds the file directory to the header of the log messages + -h, --help help for kubeadm + --log-file string If non-empty, use this log file + --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800) + --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level) + --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. 
+ --skip-headers If true, avoid header prefixes in the log messages + --skip-log-headers If true, avoid headers when opening log files + -v, --v Level number for the log level verbosity + +Additional help topics: + kubeadm alpha Kubeadm experimental sub-commands + +Use "kubeadm [command] --help" for more information about a command. +``` + +信息查看 + +```powershell +查看集群初始化时候的默认配置 +[root@kubernetes-master ~]# kubeadm config print init-defaults +apiVersion: kubeadm.k8s.io/v1beta3 +bootstrapTokens: +- groups: + - system:bootstrappers:kubeadm:default-node-token + token: abcdef.0123456789abcdef + ttl: 24h0m0s + usages: + - signing + - authentication +kind: InitConfiguration +localAPIEndpoint: + advertiseAddress: 1.2.3.4 + bindPort: 6443 +nodeRegistration: + criSocket: /var/run/dockershim.sock + imagePullPolicy: IfNotPresent + name: node + taints: null +--- +apiServer: + timeoutForControlPlane: 4m0s +apiVersion: kubeadm.k8s.io/v1beta3 +certificatesDir: /etc/kubernetes/pki +clusterName: kubernetes +controllerManager: {} +dns: {} +etcd: + local: + dataDir: /var/lib/etcd +imageRepository: k8s.gcr.io +kind: ClusterConfiguration +kubernetesVersion: 1.23.0 +networking: + dnsDomain: cluster.local + serviceSubnet: 10.96.0.0/12 +scheduler: {} +注意: + 可以重点关注,master和node的注释信息、镜像仓库和子网信息 + 这条命令可以生成定制的kubeadm.conf认证文件 +``` + + + +```powershell +检查当前版本的kubeadm所依赖的镜像版本 +[root@kubernetes-master ~]# kubeadm config images list +I0715 16:08:07.269149 7605 version.go:255] remote version is much newer: v1.24.3; falling back to: stable-1.23 +k8s.gcr.io/kube-apiserver:v1.23.9 +k8s.gcr.io/kube-controller-manager:v1.23.9 +k8s.gcr.io/kube-scheduler:v1.23.9 +k8s.gcr.io/kube-proxy:v1.23.9 +k8s.gcr.io/pause:3.6 +k8s.gcr.io/etcd:3.5.1-0 +k8s.gcr.io/coredns/coredns:v1.8.6 + +检查指定版本的kubeadm所依赖的镜像版本 +[root@kubernetes-master ~]# kubeadm config images list --kubernetes-version=v1.23.6 +k8s.gcr.io/kube-apiserver:v1.23.6 +k8s.gcr.io/kube-controller-manager:v1.23.6 +k8s.gcr.io/kube-scheduler:v1.23.6 +k8s.gcr.io/kube-proxy:v1.23.6 +k8s.gcr.io/pause:3.6 +k8s.gcr.io/etcd:3.5.1-0 +k8s.gcr.io/coredns/coredns:v1.8.6 +``` + +拉取环境依赖的镜像 + +```powershell +预拉取镜像文件 +[root@kubernetes-master ~]# kubeadm config images pull +I0715 16:09:24.185598 7624 version.go:255] remote version is much newer: v1.24.3; falling back to: stable-1.23 +failed to pull image "k8s.gcr.io/kube-apiserver:v1.23.9": output: Error response from daemon: Get "https://k8s.gcr.io/v2/": dial tcp 142.250.157.82:443: connect: connection timed out +, error: exit status 1 +To see the stack trace of this error execute with --v=5 or higher +注意: + 由于默认情况下,这些镜像是从一个我们访问不到的网站上拉取的,所以这一步在没有实现科学上网的前提下,不要执行。 + 推荐在初始化之前,更换一下镜像仓库,提前获取文件,比如我们可以从 + registry.aliyuncs.com/google_containers/镜像名称 获取镜像文件 +``` + +```powershell +镜像获取脚本内容 +[root@localhost ~]# cat /data/scripts/04_kubernetes_get_images.sh +#!/bin/bash +# 功能: 获取kubernetes依赖的Docker镜像文件 +# 版本: v0.1 +# 作者: 书记 +# 联系: superopsmsb.com +# 定制普通环境变量 +ali_mirror='registry.aliyuncs.com' +harbor_mirror='kubernetes-register.superopsmsb.com' +harbor_repo='google_containers' + +# 环境定制 +kubernetes_image_get(){ + # 获取脚本参数 + kubernetes_version="$1" + # 获取制定kubernetes版本所需镜像 + images=$(kubeadm config images list --kubernetes-version=${kubernetes_version} | awk -F "/" '{print $NF}') + + # 获取依赖镜像 + for i in ${images} + do + docker pull ${ali_mirror}/${harbor_repo}/$i + docker tag ${ali_mirror}/${harbor_repo}/$i ${harbor_mirror}/${harbor_repo}/$i + docker rmi ${ali_mirror}/${harbor_repo}/$i + done +} + +# 脚本的帮助 +Usage(){ + echo "/bin/bash $0 " +} + +# 脚本主流程 +if [ $# 
-eq 0 ] +then + read -p "请输入要获取kubernetes镜像的版本(示例: v1.23.9): " kubernetes_version + kubernetes_image_get ${kubernetes_version} +else + Usage +fi +``` + +```powershell +脚本执行效果 +[root@kubernetes-master ~]# /bin/bash /data/scripts/04_kubernetes_get_images.sh +请输入要获取kubernetes镜像的版本(示例: v1.23.9): v1.23.9 +... +``` + +```powershell +查看镜像 +[root@kubernetes-master ~]# docker images +[root@kubernetes-master ~]# docker images | awk '{print $1,$2}' +REPOSITORY TAG +kubernetes-register.superopsmsb.com/google_containers/kube-apiserver v1.23.9 +kubernetes-register.superopsmsb.com/google_containers/kube-controller-manager v1.23.9 +kubernetes-register.superopsmsb.com/google_containers/kube-scheduler v1.23.9 +kubernetes-register.superopsmsb.com/google_containers/kube-proxy v1.23.9 +kubernetes-register.superopsmsb.com/google_containers/etcd 3.5.1-0 +kubernetes-register.superopsmsb.com/google_containers/coredns v1.8.6 +kubernetes-register.superopsmsb.com/google_containers/pause 3.6 + +harbor创建仓库 + 登录harbor仓库,创建一个google_containers的公开仓库 + +登录仓库 +[root@kubernetes-master ~]# docker login kubernetes-register.superopsmsb.com -u shuji +Password: # 输入A12345678a + +提交镜像 +[root@kubernetes-master ~]# for i in $(docker images | grep -v TAG | awk '{print $1":"$2}');do docker push $i;done +``` + +**环境初始化** + +master主机环境初始化 + +```powershell +环境初始化命令 +kubeadm init --kubernetes-version=1.23.9 \ +--apiserver-advertise-address=10.0.0.12 \ +--image-repository kubernetes-register.superopsmsb.com/google_containers \ +--service-cidr=10.96.0.0/12 \ +--pod-network-cidr=10.244.0.0/16 \ +--ignore-preflight-errors=Swap +``` + +```powershell +参数解析: + --apiserver-advertise-address 要设定为当前集群的master地址,而且必须为ipv4|ipv6地址 +由于kubeadm init命令默认去外网获取镜像,这里我们使用--image-repository来指定使用国内镜像 + --kubernetes-version选项的版本号用于指定要部署的Kubenretes程序版本,它需要与当前的kubeadm支持的版本保持一致;该参数是必须的 + --pod-network-cidr选项用于指定分Pod分配使用的网络地址,它通常应该与要部署使用的网络插件(例如flannel、calico等)的默认设定保持一致,10.244.0.0/16是flannel默认使用的网络; + --service-cidr用于指定为Service分配使用的网络地址,它由kubernetes管理,默认即为10.96.0.0/12; + --ignore-preflight-errors=Swap 如果没有该项,必须保证系统禁用Swap设备的状态。一般最好加上 + --image-repository 用于指定我们在安装kubernetes环境的时候,从哪个镜像里面下载相关的docker镜像,如果需要用本地的仓库,那么就用本地的仓库地址即可 +``` + +环境初始化过程 + +```powershell +环境初始化命令 +[root@kubernetes-master ~]# kubeadm init --kubernetes-version=1.23.9 \ +> --apiserver-advertise-address=10.0.0.12 \ +> --image-repository kubernetes-register.superopsmsb.com/google_containers \ +> --service-cidr=10.96.0.0/12 \ +> --pod-network-cidr=10.244.0.0/16 \ +> --ignore-preflight-errors=Swap + +# 环境初始化过程 +[init] Using Kubernetes version: v1.23.9 +[preflight] Running pre-flight checks + [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service' + [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' +[preflight] Pulling images required for setting up a Kubernetes cluster +[preflight] This might take a minute or two, depending on the speed of your internet connection +[preflight] You can also perform this action in beforehand using 'kubeadm config images pull' +[certs] Using certificateDir folder "/etc/kubernetes/pki" +[certs] Generating "ca" certificate and key +[certs] Generating "apiserver" certificate and key +[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes-master kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.12] +[certs] Generating "apiserver-kubelet-client" certificate and key +[certs] Generating "front-proxy-ca" 
certificate and key +[certs] Generating "front-proxy-client" certificate and key +[certs] Generating "etcd/ca" certificate and key +[certs] Generating "etcd/server" certificate and key +[certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [10.0.0.12 127.0.0.1 ::1] +[certs] Generating "etcd/peer" certificate and key +[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [10.0.0.12 127.0.0.1 ::1] +[certs] Generating "etcd/healthcheck-client" certificate and key +[certs] Generating "apiserver-etcd-client" certificate and key +[certs] Generating "sa" key and public key +[kubeconfig] Using kubeconfig folder "/etc/kubernetes" +[kubeconfig] Writing "admin.conf" kubeconfig file +[kubeconfig] Writing "kubelet.conf" kubeconfig file +[kubeconfig] Writing "controller-manager.conf" kubeconfig file +[kubeconfig] Writing "scheduler.conf" kubeconfig file +[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" +[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" +[kubelet-start] Starting the kubelet +[control-plane] Using manifest folder "/etc/kubernetes/manifests" +[control-plane] Creating static Pod manifest for "kube-apiserver" +[control-plane] Creating static Pod manifest for "kube-controller-manager" +[control-plane] Creating static Pod manifest for "kube-scheduler" +[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" +[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s +[apiclient] All control plane components are healthy after 11.006830 seconds +[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace +[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster +NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently. +[upload-certs] Skipping phase. 
Please see --upload-certs +[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers] +[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] +[bootstrap-token] Using token: vudfvt.fwpohpbb7yw2qy49 +[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles +[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes +[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials +[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token +[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster +[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace +[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key +[addons] Applied essential addon: CoreDNS +[addons] Applied essential addon: kube-proxy +# 基本初始化完毕后,需要做的一些事情 +Your Kubernetes control-plane has initialized successfully! + +To start using your cluster, you need to run the following as a regular user: + # 定制kubernetes的登录权限 + mkdir -p $HOME/.kube + sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config + sudo chown $(id -u):$(id -g) $HOME/.kube/config + +Alternatively, if you are the root user, you can run: + + export KUBECONFIG=/etc/kubernetes/admin.conf +# 定制kubernetes的网络配置 +You should now deploy a pod network to the cluster. +Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: + https://kubernetes.io/docs/concepts/cluster-administration/addons/ + +Then you can join any number of worker nodes by running the following on each as root: +# node节点注册到master节点 +kubeadm join 10.0.0.12:6443 --token vudfvt.fwpohpbb7yw2qy49 \ + --discovery-token-ca-cert-hash sha256:110b1efec63971fda17154782dc1179fa93ef90a8804be381e5a83a8a7748545 +``` + +确认效果 + +```powershell +未设定权限前操作 +[root@kubernetes-master ~]# kubectl get nodes +The connection to the server localhost:8080 was refused - did you specify the right host or port? 
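# 说明: 此时 kubectl 还没有可用的 kubeconfig,默认会去连接 localhost:8080,因此报错,
# 按照上面初始化输出中的提示,将 /etc/kubernetes/admin.conf 复制为 $HOME/.kube/config 即可解决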
+ +设定kubernetes的认证权限 +[root@kubernetes-master ~]# mkdir -p $HOME/.kube +[root@kubernetes-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config +[root@kubernetes-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config + +再次检测 +[root@kubernetes-master ~]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +kubernetes-master NotReady control-plane,master 4m10s v1.23.9 +``` + + + +**小结** + +``` + +``` + + + +### 1.1.6 node环境部署 + +学习目标 + +这一节,我们从 节点初始化、网络环境配置、小结 三个方面来学习。 + +**节点初始化** + +简介 + +```powershell + 对于node节点来说,我们无需对集群环境进行管理,所以不需要安装和部署kubectl软件,其他的正常安装,然后根据master节点的认证通信,我们可以进行节点加入集群的配置。 +``` + +安装软件 + +```powershell +所有node节点都执行如下步骤: +[root@kubernetes-master ~]# for i in {15..17}; do ssh root@10.0.0.$i "yum install kubeadm-1.23.9-0 kubelet-1.23.9-0 -y";done +``` + +节点初始化(以node1为例) + +```powershell +节点1进行环境初始化 +[root@kubernetes-node1 ~]# kubeadm join 10.0.0.12:6443 --token vudfvt.fwpohpbb7yw2qy49 \ +> --discovery-token-ca-cert-hash sha256:110b1efec63971fda17154782dc1179fa93ef90a8804be381e5a83a8a7748545 +[preflight] Running pre-flight checks + [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service' + [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' +[preflight] Reading configuration from the cluster... +[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' +[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" +[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" +[kubelet-start] Starting the kubelet +[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... + +This node has joined the cluster: +* Certificate signing request was sent to apiserver and a response was received. +* The Kubelet was informed of the new secure connection details. + +Run 'kubectl get nodes' on the control-plane to see this node join the cluster. 
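# 补充: 加入命令中的 token 默认 24 小时过期,若之后再加入节点时 token 已失效,
# 可以在 master 节点重新生成完整的加入命令(示例): kubeadm token create --print-join-command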
+``` + +```powershell +回到master节点主机查看节点效果 +[root@kubernetes-master ~]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +kubernetes-master NotReady control-plane,master 17m v1.23.9 +kubernetes-node2 NotReady 2m10s v1.23.9 + +所有节点都做完后,再次查看master的节点效果 +[root@kubernetes-master ~]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +kubernetes-master NotReady control-plane,master 21m v1.23.9 +kubernetes-node1 NotReady 110s v1.23.9 +kubernetes-node2 NotReady 6m17s v1.23.9 +kubernetes-node3 NotReady 17s v1.23.9 +``` + + + +**网络环境配置** + +简介 + +```powershell +根据master节点初始化的效果,我们这里需要单独将网络插件的功能实现 +``` + +插件环境部署 + +```powershell +创建基本目录 +mkdir /data/kubernetes/flannel -p +cd /data/kubernetes/flannel + +获取配置文件 +wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml +``` + +```powershell +获取相关镜像 +[root@kubernetes-master /data/kubernetes/flannel]# grep image kube-flannel.yml | grep -v '#' + image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0 + image: rancher/mirrored-flannelcni-flannel:v0.18.1 + image: rancher/mirrored-flannelcni-flannel:v0.18.1 + +定制镜像标签 +for i in $(grep image kube-flannel.yml | grep -v '#' | awk -F '/' '{print $NF}') +do + docker pull rancher/$i + docker tag rancher/$i kubernetes-register.superopsmsb.com/google_containers/$i + docker push kubernetes-register.superopsmsb.com/google_containers/$i + docker rmi rancher/$i +done +``` + +```powershell +备份配置文件 +[root@kubernetes-master /data/kubernetes/flannel]# cp kube-flannel.yml{,.bak} + +修改配置文件 +[root@kubernetes-master /data/kubernetes/flannel]# sed -i '/ image:/s/rancher/kubernetes-register.superopsmsb.com\/google_containers/' kube-flannel.yml +[root@kubernetes-master /data/kubernetes/flannel]# sed -n '/ image:/p' kube-flannel.yml + image: kubernetes-register.superopsmsb.com/google_containers/mirrored-flannelcni-flannel-cni-plugin:v1.1.0 + image: kubernetes-register.superopsmsb.com/google_containers/mirrored-flannelcni-flannel:v0.18.1 + image: kubernetes-register.superopsmsb.com/google_containers/mirrored-flannelcni-flannel:v0.18.1 + +应用配置文件 +[root@kubernetes-master /data/kubernetes/flannel]# kubectl apply -f kube-flannel.yml +namespace/kube-flannel created +clusterrole.rbac.authorization.k8s.io/flannel created +clusterrolebinding.rbac.authorization.k8s.io/flannel created +serviceaccount/flannel created +configmap/kube-flannel-cfg created +daemonset.apps/kube-flannel-ds created +``` + +检查效果 + +```powershell +查看集群节点效果 +[root@kubernetes-master /data/kubernetes/flannel]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +kubernetes-master Ready control-plane,master 62m v1.23.9 +kubernetes-node1 Ready 42m v1.23.9 +kubernetes-node2 Ready 47m v1.23.9 +kubernetes-node3 Ready 41m v1.23.9 + +查看集群pod效果 +[root@kubernetes-master /data/kubernetes/flannel]# kubectl get pod -n kube-system +NAME READY STATUS RESTARTS AGE +coredns-5d555c984-gt4w9 1/1 Running 0 62m +coredns-5d555c984-t4gps 1/1 Running 0 62m +etcd-kubernetes-master 1/1 Running 0 62m +kube-apiserver-kubernetes-master 1/1 Running 0 62m +kube-controller-manager-kubernetes-master 1/1 Running 0 62m +kube-proxy-48txz 1/1 Running 0 43m +kube-proxy-5vdhv 1/1 Running 0 41m +kube-proxy-cblk7 1/1 Running 0 47m +kube-proxy-hglfm 1/1 Running 0 62m +kube-scheduler-kubernetes-master 1/1 Running 0 62m +``` + +**小结** + +``` + +``` + + + +### 1.1.7 集群环境实践 + +学习目标 + +这一节,我们从 基础功能、节点管理、小结 三个方面来学习。 + +**基础功能** + +简介 + +```powershell + 目前kubernetes的集群环境已经部署完毕了,但是有些基础功能配置还是需要来梳理一下的。默认情况下,我们在master上执行命令的时候,没有办法直接使用tab方式补全命令,我们可以采取下面方式来实现。 +``` + +命令补全 + 
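提示: kubectl 生成的补全脚本依赖系统的 bash-completion 功能,如果按下面的步骤操作后 Tab 补全仍然不生效,可以先补装该软件包再重新加载(以下仅为示例做法,软件包名以实际软件源为准):

```powershell
# 安装并加载 bash-completion(假设为 CentOS 7 默认软件源环境)
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
```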
+```powershell +获取相关环境配置 +[root@kubernetes-master ~]# kubectl completion bash + +加载这些配置 +[root@kubernetes-master ~]# source <(kubectl completion bash) + 注意: "<(" 两个符号之间没有空格 + +放到当前用户的环境文件中 +[root@kubernetes-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc +[root@kubernetes-master ~]# echo "source <(kubeadm completion bash)" >> ~/.bashrc +[root@kubernetes-master ~]# source ~/.bashrc + +测试效果 +[root@kubernetes-master ~]# kubectl get n +namespaces networkpolicies.networking.k8s.io nodes +[root@kubernetes-master ~]# kubeadm co +completion config +``` + + + +**命令简介** + +简介 + +```powershell +kubectl是kubernetes集群内部管理各种资源对象的核心命令,一般情况下,只有master主机才有 +``` + +命令帮助 + +```powershell +注意:虽然kubernetes的kubectl命令的一些参数发生了变化,甚至是移除,但是不影响旧有命令的正常使用。 +[root@kubernetes-master ~]# kubectl --help +kubectl controls the Kubernetes cluster manager. + + Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/ + +Basic Commands (Beginner): 资源对象核心的基础命令 + create Create a resource from a file or from stdin + expose Take a replication controller, service, deployment or pod and expose it as a new +Kubernetes service + run 在集群中运行一个指定的镜像 + set 为 objects 设置一个指定的特征 + +Basic Commands (Intermediate): 资源对象的一些基本操作命令 + explain Get documentation for a resource + get 显示一个或更多 resources + edit 在服务器上编辑一个资源 + delete Delete resources by file names, stdin, resources and names, or by resources and +label selector + +Deploy Commands: 应用部署相关的命令 + rollout Manage the rollout of a resource + scale Set a new size for a deployment, replica set, or replication controller + autoscale Auto-scale a deployment, replica set, stateful set, or replication controller + +Cluster Management Commands: 集群管理相关的命令 + certificate 修改 certificate 资源. + cluster-info Display cluster information + top Display resource (CPU/memory) usage + cordon 标记 node 为 unschedulable + uncordon 标记 node 为 schedulable + drain Drain node in preparation for maintenance + taint 更新一个或者多个 node 上的 taints + +Troubleshooting and Debugging Commands: 集群故障管理相关的命令 + describe 显示一个指定 resource 或者 group 的 resources 详情 + logs 输出容器在 pod 中的日志 + attach Attach 到一个运行中的 container + exec 在一个 container 中执行一个命令 + port-forward Forward one or more local ports to a pod + proxy 运行一个 proxy 到 Kubernetes API server + cp Copy files and directories to and from containers + auth Inspect authorization + debug Create debugging sessions for troubleshooting workloads and nodes + +Advanced Commands: 集群管理高阶命令,一般用一个apply即可 + diff Diff the live version against a would-be applied version + apply Apply a configuration to a resource by file name or stdin + patch Update fields of a resource + replace Replace a resource by file name or stdin + wait Experimental: Wait for a specific condition on one or many resources + kustomize Build a kustomization target from a directory or URL. + +Settings Commands: 集群的一些设置性命令 + label 更新在这个资源上的 labels + annotate 更新一个资源的注解 + completion Output shell completion code for the specified shell (bash, zsh or fish) + +Other Commands: 其他的相关命令 + alpha Commands for features in alpha + api-resources Print the supported API resources on the server + api-versions Print the supported API versions on the server, in the form of "group/version" + config 修改 kubeconfig 文件 + plugin Provides utilities for interacting with plugins + version 输出 client 和 server 的版本信息 + +Usage: + kubectl [flags] [options] + +Use "kubectl --help" for more information about a given command. +Use "kubectl options" for a list of global command-line options (applies to all commands). 
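# 补充: 上面帮助信息中的 explain 子命令可以查看资源对象各字段的说明,编写 yaml 时很有用(示例):
# kubectl explain pod.spec.containers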
+``` + +命令的帮助 + +```powershell +查看子命令的帮助信息 +样式1:kubectl 子命令 --help +样式2:kubectl help 子命令 +``` + +```powershell +样式1实践 +[root@kubernetes-master ~]# kubectl version --help +Print the client and server version information for the current context. + +Examples: + # Print the client and server versions for the current context + kubectl version + ... + +样式2实践 +[root@kubernetes-master ~]# kubectl help version +Print the client and server version information for the current context. + +Examples: + # Print the client and server versions for the current context + kubectl version +``` + +简单实践 + +```powershell +查看当前节点效果 +[root@kubernetes-master ~]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +kubernetes-master Ready control-plane,master 77m v1.23.9 +kubernetes-node1 Ready 57m v1.23.9 +kubernetes-node2 Ready 62m v1.23.9 +kubernetes-node3 Ready 56m v1.23.9 + +移除节点3 +[root@kubernetes-master ~]# kubectl delete node kubernetes-node3 +node "kubernetes-node3" deleted + +查看效果 +[root@kubernetes-master ~]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +kubernetes-master Ready control-plane,master 77m v1.23.9 +kubernetes-node1 Ready 57m v1.23.9 +kubernetes-node2 Ready 62m v1.23.9 +``` + +```powershell +节点环境重置 +[root@kubernetes-node3 ~]# kubeadm reset +[root@kubernetes-node3 ~]# systemctl restart kubelet docker +[root@kubernetes-node3 ~]# kubeadm join 10.0.0.12:6443 --token vudfvt.fwpohpbb7yw2qy49 --discovery-token-ca-cert-hash sha256:110b1efec63971fda17154782dc1179fa93ef90a8804be381e5a83a8a7748545 +``` + +```powershell +master节点查看效果 +[root@kubernetes-master ~]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +kubernetes-master Ready control-plane,master 82m v1.23.9 +kubernetes-node1 Ready 62m v1.23.9 +kubernetes-node2 Ready 67m v1.23.9 +kubernetes-node3 Ready 55s v1.23.9 +``` + + + +**小结** + +``` + +``` + + + +### 1.1.8 资源对象解读 + +学习目标 + +这一节,我们从 资源对象、命令解读、小结 三个方面来学习。 + +**资源对象** + +简介 + +```powershell +根据之前的对于kubernetes的简介,我们知道kubernetes之所以在大规模的容器场景下的管理效率非常好,原因在于它将我们人工对于容器的手工操作全部整合到的各种资源对象里面。 +在进行资源对象解读之前,我们需要给大家灌输一个基本的认识: + kubernetes的资源对象有两种: + 默认的资源对象 - 默认的有五六十种,但是常用的也就那么一二十个。 + 自定义的资源对象 - 用户想怎么定义就怎么定义,数量无法统计,仅需要了解想要了解的即可 +``` + +![image-20220715012035435](../../img/kubernetes/kubernetes_flannel/image-20220715012035435.png) + +常见资源缩写 + +| 最基础资源对象 | | | | +| ----------------------- | ------ | -------------------------- | ---- | +| 资源对象全称 | 缩写 | 资源对象全称 | 缩写 | +| pod/pods | po | node/nodes | no | +| **最常见资源对象** | | | | +| 资源对象全称 | 缩写 | 资源对象全称 | 缩写 | +| replication controllers | rc | horizontal pod autoscalers | hpa | +| replica sets | rs | persistent volume | pv | +| deployment | deploy | persistent volume claims | pvc | +| services | svc | | | +| **其他资源对象** | | | | +| namespaces | ns | storage classes | sc | +| config maps | cm | clusters | | +| daemon sets | ds | stateful sets | | +| endpoints | ep | secrets | | +| events | ev | jobs | | +| ingresses | ing | | | + + + +**命令解读** + +语法解读 + +```powershell +命令格式: + kubectl [子命令] [资源类型] [资源名称] [其他标识-可选] +参数解析: + 子命令:操作Kubernetes资源对象的子命令,常见的有create、delete、describe、get等 + create 创建资源对象 describe 查找资源的详细信息 + delete 删除资源对象 get 获取资源基本信息 + 资源类型:Kubernetes资源类型,举例:结点的资源类型是nodes,缩写no + 资源名称: Kubernetes资源对象的名称,可以省略。 + 其他标识: 可选参数,一般用于信息的扩展信息展示 +``` + +资源对象基本信息 + +```powershell +查看资源类型基本信息 +[root@kubernetes-master ~]# kubectl api-resources | head -2 +NAME SHORTNAMES APIVERSION NAMESPACED KIND +bindings v1 true Binding +[root@kubernetes-master ~]# kubectl api-resources | grep -v NAME | wc -l +56 + +查看资源类型的版本信息 +[root@kubernetes-master ~]# kubectl api-versions | head -n2 
+admissionregistration.k8s.io/v1 +apiextensions.k8s.io/v1 +``` + +常见资源对象查看 + +```powershell +获取资源所在命名空间 +[root@kubernetes-master ~]# kubectl get ns +NAME STATUS AGE +default Active 94m +kube-flannel Active 32m +kube-node-lease Active 94m +kube-public Active 94m +kube-system Active 94m + +获取命名空间的资源对象 +[root@kubernetes-master ~]# kubectl get pod +No resources found in default namespace. +[root@kubernetes-master ~]# kubectl get pod -n kube-system +NAME READY STATUS RESTARTS AGE +coredns-5d555c984-gt4w9 1/1 Running 0 149m +coredns-5d555c984-t4gps 1/1 Running 0 149m +etcd-kubernetes-master 1/1 Running 0 149m +kube-apiserver-kubernetes-master 1/1 Running 0 149m +kube-controller-manager-kubernetes-master 1/1 Running 0 149m +kube-proxy-48txz 1/1 Running 0 129m +kube-proxy-cblk7 1/1 Running 0 134m +kube-proxy-ds8x5 1/1 Running 2 (69m ago) 70m +kube-proxy-hglfm 1/1 Running 0 149m +kube-scheduler-kubernetes-master 1/1 Running 0 149m +``` + +```powershell +cs 获取集群组件相关资源 +[root@kubernetes-master ~]# kubectl get cs +Warning: v1 ComponentStatus is deprecated in v1.19+ +NAME STATUS MESSAGE ERROR +controller-manager Healthy ok +scheduler Healthy ok +etcd-0 Healthy {"health":"true","reason":""} +``` + +```powershell +sa 和 secrets 获取集群相关的用户相关信息 +[root@kubernetes-master ~]# kubectl get sa +NAME SECRETS AGE +default 1 150m + +[root@kubernetes-master ~]# kubectl get secrets +NAME TYPE DATA AGE +default-token-nc8rg kubernetes.io/service-account-token 3 150m +``` + +信息查看命令 + +```powershell +查看资源对象 + kubectl get 资源类型 资源名称 <-o yaml/json/wide | -w> + 参数解析: + -w 是实时查看资源的状态。 + -o 是以多种格式查看资源的属性信息 + --raw 从api地址中获取相关资源信息 + +描述资源对象 + kubectl describe 资源类型 资源名称 + 注意: + 这个命令非常重要,一般我们应用部署排错时候,就用它。 + +查看资源应用的访问日志 + kubectl logs 资源类型 资源名称 + 注意: + 这个命令非常重要,一般我们服务排错时候,就用它。 +``` + +查看信息基本信息 + +```powershell +基本的资源对象简要信息 +[root@kubernetes-master ~]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +kubernetes-master Ready control-plane,master 157m v1.23.9 +kubernetes-node1 Ready 137m v1.23.9 +kubernetes-node2 Ready 142m v1.23.9 +kubernetes-node3 Ready 75m v1.23.9 + +[root@kubernetes-master ~]# kubectl get nodes kubernetes-node1 +NAME STATUS ROLES AGE VERSION +kubernetes-node1 Ready 137m v1.23.9 +``` + +```powershell +查看资源对象的属性信息 +[root@kubernetes-master ~]# kubectl get nodes kubernetes-node1 -o yaml +apiVersion: v1 +kind: Node +metadata: + annotations: + ... +``` + +```powershell +查看资源对象的扩展信息 +[root@kubernetes-master ~]# kubectl get nodes kubernetes-node1 -o wide +NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME +kubernetes-node1 Ready 139m v1.23.9 10.0.0.15 CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.17 +``` + +查看资源的描述信息 + +```powershell +describe 查看资源对象的描述信息 +[root@kubernetes-master ~]# kubectl describe namespaces default +Name: default +Labels: kubernetes.io/metadata.name=default +Annotations: +Status: Active + +No resource quota. + +No LimitRange resource. 
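# 补充: describe 同样适用于其他资源类型,例如查看节点的详细状态、污点以及资源分配情况(输出较长,此处从略):
# kubectl describe node kubernetes-node1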
+``` + +```powershell +logs 查看对象的应用访问日志 +[root@kubernetes-master ~]# kubectl -n kube-system logs kube-proxy-hglfm +I0715 08:35:01.883435 1 node.go:163] Successfully retrieved node IP: 10.0.0.12 +I0715 08:35:01.883512 1 server_others.go:138] "Detected node IP" address="10.0.0.12" +I0715 08:35:01.883536 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" +``` + +**小结** + +``` + +``` + + + +### 1.1.9 资源对象实践 + +学习目标 + +这一节,我们从 命令实践、应用实践、小结 三个方面来学习。 + +**命令实践** + +准备资源 + +```powershell +从网上获取镜像 +[root@kubernetes-master ~]# docker pull nginx + +镜像打标签 +[root@kubernetes-master ~]# docker tag nginx:latest kubernetes-register.superopsmsb.com/superopsmsb/nginx:1.23.0 + +提交镜像到仓库 +[root@kubernetes-master ~]# docker push kubernetes-register.superopsmsb.com/superopsmsb/nginx:1.23.0 + +删除镜像 +[root@kubernetes-master ~]# docker rmi nginx +``` + +创建资源对象 + +```powershell +创建一个应用 +[root@kubernetes-master ~]# kubectl create deployment nginx --image=kubernetes-register.superopsmsb.com/superopsmsb/nginx:1.23.0 + +查看效果 +[root@kubernetes-master ~]# kubectl get pod -o wide +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +nginx-f44f65dc-99dvt 1/1 Running 0 13s 10.244.5.2 kubernetes-node3 + +访问效果 +[root@kubernetes-master ~]# curl 10.244.5.2 -I +HTTP/1.1 200 OK +Server: nginx/1.23.0 +``` + +```powershell +为应用暴露流量入口 +[root@kubernetes-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort + +查看基本信息 +[root@kubernetes-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort +service/nginx exposed +[root@kubernetes-master ~]# kubectl get service nginx +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +nginx NodePort 10.105.31.160 80:32505/TCP 8s +注意: + 这里的 NodePort 代表在所有的节点主机上都开启一个能够被外网访问的端口 32505 + +访问效果 +[root@kubernetes-master ~]# curl 10.105.31.160 -I +HTTP/1.1 200 OK +Server: nginx/1.23.0 + +[root@kubernetes-master ~]# curl 10.0.0.12:32505 -I +HTTP/1.1 200 OK +Server: nginx/1.23.0 +``` + +查看容器基本信息 + +```powershell +查看资源的访问日志 +[root@kubernetes-master ~]# kubectl get pod +NAME READY STATUS RESTARTS AGE +nginx-f44f65dc-99dvt 1/1 Running 0 9m47s +[root@kubernetes-master ~]# kubectl logs nginx-f44f65dc-99dvt +... 
+10.244.0.0 - - [15/Jul/2022:12:45:57 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0" "-" +``` + +```powershell +查看资源的详情信息 +[root@kubernetes-master ~]# kubectl describe pod nginx-f44f65dc-99dvt +# 资源对象的基本属性信息 +Name: nginx-f44f65dc-99dvt +Namespace: default +Priority: 0 +Node: kubernetes-node3/10.0.0.17 +Start Time: Fri, 15 Jul 2022 20:44:17 +0800 +Labels: app=nginx + pod-template-hash=f44f65dc +Annotations: +Status: Running +IP: 10.244.5.2 +IPs: + IP: 10.244.5.2 +Controlled By: ReplicaSet/nginx-f44f65dc +# pod内部的容器相关信息 +Containers: + nginx: # 容器的名称 + Container ID: docker://8c0d89c8ab48e02495a2db4a2b2133c86811bd8064f800a16739f9532670d854 + Image: kubernetes-register.superopsmsb.com/superopsmsb/nginx:1.23.0 + Image ID: docker-pullable://kubernetes-register.superopsmsb.com/superopsmsb/nginx@sha256:33cef86aae4e8487ff23a6ca16012fac28ff9e7a5e9759d291a7da06e36ac958 + Port: + Host Port: + State: Running + Started: Fri, 15 Jul 2022 20:44:24 +0800 + Ready: True + Restart Count: 0 + Environment: + Mounts: + /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7sxtx (ro) +Conditions: + Type Status + Initialized True + Ready True + ContainersReady True + PodScheduled True + +# pod内部数据相关的信息 +Volumes: + kube-api-access-7sxtx: + Type: Projected (a volume that contains injected data from multiple sources) + TokenExpirationSeconds: 3607 + ConfigMapName: kube-root-ca.crt + ConfigMapOptional: + DownwardAPI: true +QoS Class: BestEffort +Node-Selectors: +Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s + node.kubernetes.io/unreachable:NoExecute op=Exists for 300s +# 资源对象在创建过程中遇到的各种信息,这段信息是非常重要的 +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Scheduled 10m default-scheduler Successfully assigned default/nginx-f44f65dc-99dvt to kubernetes-node3 + Normal Pulling 10m kubelet Pulling image "kubernetes-register.superopsmsb.com/superopsmsb/nginx:1.23.0" + Normal Pulled 10m kubelet Successfully pulled image "kubernetes-register.superopsmsb.com/superopsmsb/nginx:1.23.0" in 4.335869479s + Normal Created 10m kubelet Created container nginx + Normal Started 10m kubelet Started container nginx +``` + +```powershell +资源对象内部的容器信息 +[root@kubernetes-master ~]# kubectl exec -it nginx-f44f65dc-99dvt -- /bin/bash +root@nginx-f44f65dc-99dvt:/# env +KUBERNETES_SERVICE_PORT_HTTPS=443 +KUBERNETES_SERVICE_PORT=443 +HOSTNAME=nginx-f44f65dc-99dvt +... 
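# 补充: 也可以不进入交互式 shell,直接让容器执行单条命令并返回结果(示例):
# kubectl exec nginx-f44f65dc-99dvt -- env | grep HOSTNAME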
+ +修改nginx的页面信息 +root@nginx-f44f65dc-99dvt:/# grep -A1 'location /' /etc/nginx/conf.d/default.conf + location / { + root /usr/share/nginx/html; +root@nginx-f44f65dc-99dvt:/# echo $HOSTNAME > /usr/share/nginx/html/index.html +root@nginx-f44f65dc-99dvt:/# exit +exit + +访问应用效果 +[root@kubernetes-master ~]# curl 10.244.5.2 +nginx-f44f65dc-99dvt +[root@kubernetes-master ~]# curl 10.0.0.12:32505 +nginx-f44f65dc-99dvt +``` + + + +**应用实践** + +资源的扩容缩容 + +```powershell +pod的容量扩充 +[root@kubernetes-master ~]# kubectl help scale + +调整pod数量为3 +[root@kubernetes-master ~]# kubectl scale --replicas=3 deployment nginx +deployment.apps/nginx scaled + +查看效果 +[root@kubernetes-master ~]# kubectl get deployment +NAME READY UP-TO-DATE AVAILABLE AGE +nginx 1/3 3 1 20m +[root@kubernetes-master ~]# kubectl get pod -o wide +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +nginx-f44f65dc-99dvt 1/1 Running 0 20m 10.244.5.2 kubernetes-node3 +nginx-f44f65dc-rskvx 1/1 Running 0 16s 10.244.2.4 kubernetes-node1 +nginx-f44f65dc-xpkgq 1/1 Running 0 16s 10.244.1.2 kubernetes-node2 +``` + +```powershell +pod的容量收缩 +[root@kubernetes-master ~]# kubectl scale --replicas=1 deployment nginx +deployment.apps/nginx scaled +[root@kubernetes-master ~]# kubectl get deployment +NAME READY UP-TO-DATE AVAILABLE AGE +nginx 1/1 1 1 20m +``` + +资源的删除 + +```powershell +删除资源有两种方式 +方法1: + kubectl delete 资源类型 资源1 资源2 ... 资源n + 因为限制了资源类型,所以这种方法只能删除一种资源 +方法2: + kubectl delete 资源类型/资源 + 因为删除对象的时候,指定了资源类型,所以我们可以通过这种资源类型限制的方式同时删除多种类型资源 +``` + +```powershell +删除deployment资源 +[root@kubernetes-master ~]# kubectl delete deployments nginx +deployment.apps "nginx" deleted +[root@kubernetes-master ~]# kubectl get deployment +No resources found in default namespace. + +删除svc资源 +[root@kubernetes-master ~]# kubectl delete svc nginx +service "nginx" deleted +[root@kubernetes-master ~]# kubectl get svc nginx +Error from server (NotFound): services "nginx" not found +``` + + + +**小结** + +``` + +``` + + + +## 1.2 多主集群 + +### 1.2.1 集群环境解读 + +学习目标 + +这一节,我们从 集群规划、环境解读、小结 三个方面来学习。 + +**集群规划** + +简介 + +```powershell + 在生产中,因为kubernetes的主角色主机对于整个集群的重要性不可估量,我们的kubernetes集群一般都会采用多主分布式效果。 + 另外因为大规模环境下,涉及到的资源对象过于繁多,所以,kubernetes集群环境部署的时候,一般会采用属性高度定制的方式来实现。 + 为了方便后续的集群环境升级的管理操作,我们在高可用的时候,部署 1.23.8的软件版本(当然只需要1.22+版本都可以,不允许出现大跨版本的出现。) +``` + +实验环境的效果图 + +![image-20220715213302865](../../img/kubernetes/kubernetes_flannel/image-20220715213302865.png) + +```powershell +修改master节点主机的hosts文件 +[root@localhost ~]# cat /etc/hosts +10.0.0.12 kubernetes-master1.superopsmsb.com kubernetes-master1 +10.0.0.13 kubernetes-master2.superopsmsb.com kubernetes-master2 +10.0.0.14 kubernetes-master3.superopsmsb.com kubernetes-master3 +10.0.0.15 kubernetes-node1.superopsmsb.com kubernetes-node1 +10.0.0.16 kubernetes-node2.superopsmsb.com kubernetes-node2 +10.0.0.17 kubernetes-node3.superopsmsb.com kubernetes-node3 +10.0.0.18 kubernetes-ha1.superopsmsb.com kubernetes-ha1 +10.0.0.19 kubernetes-ha2.superopsmsb.com kubernetes-ha2 +10.0.0.20 kubernetes-register.superopsmsb.com kubernetes-register +``` + +```powershell +脚本执行实现跨主机免密码认证和hosts文件同步 +[root@localhost ~]# /bin/bash /data/scripts/01_remote_host_auth.sh +批量设定远程主机免密码认证管理界面 +===================================================== + 1: 部署环境 2: 免密认证 3: 同步hosts + 4: 设定主机名 5:退出操作 +===================================================== +请输入有效的操作id: 1 +``` + +另外两台master主机的基本配置 + +```powershell +kubernetes的内核参数调整 +/bin/bash /data/scripts/02_kubernetes_kernel_conf.sh + +底层docker环境的部署 +/bin/bash /data/scripts/03_kubernetes_docker_install.sh 
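# 注意: 以上两个脚本需要先同步到 10.0.0.13、10.0.0.14 两台新增的 master 主机上再分别执行,
# 同步方式可以参照前文的做法(示例):
# for i in 13 14; do ssh root@10.0.0.$i "mkdir -p /data/scripts"; scp /data/scripts/0[23]_*.sh root@10.0.0.$i:/data/scripts/; done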
+``` + +```powershell +同步docker环境的基本配置 +[root@kubernetes-master ~]# for i in 13 14 +> do +> scp /etc/docker/daemon.json root@10.0.0.$i:/etc/docker/daemon.json +> ssh root@10.0.0.$i "systemctl restart docker" +> done +daemon.json 100% 299 86.3KB/s 00:00 +daemon.json 100% 299 86.3KB/s 00:00 +``` + +**环境解读** + +环境组成 + +```powershell + 多主分布式集群的效果是在单主分布式的基础上,将master主机扩充到3台,作为一个主角色集群来存在,这个时候我们需要考虑数据的接入和输出。 + 数据接入: + 三台master主机需要一个专属的控制平面来统一处理数据来源 + - 在k8s中,这入口一般是以port方式存在 + 数据流出: + 在kubernetes集群中,所有的数据都是以etcd的方式来单独存储的,所以这里无序过度干预。 +``` + +高可用梳理 + +```powershell + 为了保证所有数据都能够访问master主机,同时避免单台master出现异常,我们一般会通过高可用的方式将流量转交个后端,这里采用常见的 keepalived 和 haproxy的方式来实现。 +``` + +![1627541947686](../../img/kubernetes/kubernetes_flannel/1627541947686.png) + +其他功能 + +```powershell + 一般来说,对于大型集群来说,其内部的时间同步、域名解析、配置管理、持续交付等功能,都需要单独的主机来进行实现。由于我们这里主要来演示核心的多主集群环境,所以其他的功能,大家可以自由的向当前环境中补充。 +``` + +**小结** + +``` + +``` + + + +### 1.2.2 高可用环境实践 + +学习目标 + +这一节,我们从 高可用、高负载、高可用升级、小结 四个方面来学习。 + +**高可用** + +简介 + +```powershell + 所谓的高可用,将核心业务使用多台(一般是2台)主机共同工作,支撑并保障核心业务的正常运行,尤其是业务的对外不间断的对外提供服务。核心特点就是"冗余",它存在的目的就是为了解决单点故障(Single Point of Failure)问题的。 + 高可用集群是基于高扩展基础上的一个更高层次的网站稳定性解决方案。网站的稳定性体现在两个方面:网站可用性和恢复能力 +``` + +keepalived简介 + +![1627542432899](../../img/kubernetes/kubernetes_flannel/1627542432899.png) + +软件安装 + +```powershell +我们在 10.0.0.18和10.0.0.19主机上部署keepalived软件 +[root@kubernetes-ha1 ~]# yum install keepalived -y + +查看配置文件模板 +[root@kubernetes-ha1 ~]# rpm -ql keepalived | grep sample +/usr/share/doc/keepalived-1.3.5/samples +``` + +软件配置 + +```powershell +10.0.0.19主机配置高可用从节点 +[root@kubernetes-ha2 ~]# cp /etc/keepalived/keepalived.conf{,.bak} +[root@kubernetes-ha2 ~]# cat /etc/keepalived/keepalived.conf +global_defs { + router_id kubernetes_ha2 +} + +vrrp_instance VI_1 { + state BACKUP # 当前节点为高可用从角色 + interface eth0 + virtual_router_id 51 + priority 90 + advert_int 1 # 主备通讯时间间隔 + authentication { + auth_type PASS + auth_pass 1111 + } + virtual_ipaddress { + 10.0.0.200 dev eth0 label eth0:1 + } +} + +重启服务后查看效果 +[root@kubernetes-ha2 ~]# systemctl restart keepalived.service +[root@kubernetes-ha2 ~]# ifconfig eth0:1 +eth0:1: flags=4163 mtu 1500 + inet 10.0.0.200 netmask 255.255.255.255 broadcast 0.0.0.0 + ether 00:50:56:24:cd:0e txqueuelen 1000 (Ethernet) +结果显示: + 高可用功能已经开启了 +``` + +```powershell +10.0.0.18主机配置高可用主节点 +[root@kubernetes-ha1 ~]# cp /etc/keepalived/keepalived.conf{,.bak} +从10.0.0.19主机拉取配置文件 +[root@kubernetes-ha1 ~]# scp root@10.0.0.19:/etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf + +修改配置文件 +[root@kubernetes-ha1 ~]# cat /etc/keepalived/keepalived.conf +global_defs { + router_id kubernetes_ha1 +} + +vrrp_instance VI_1 { + state MASTER # 当前节点为高可用主角色 + interface eth0 + virtual_router_id 51 + priority 100 + advert_int 1 + authentication { + auth_type PASS + auth_pass 1111 + } + virtual_ipaddress { + 10.0.0.200 dev eth0 label eth0:1 + } +} + +启动服务 +[root@kubernetes-ha1 ~]# systemctl start keepalived.service +[root@kubernetes-ha1 ~]# ifconfig eth0:1 +eth0:1: flags=4163 mtu 1500 + inet 10.0.0.200 netmask 255.255.255.255 broadcast 0.0.0.0 + ether 00:50:56:2d:d9:0a txqueuelen 1000 (Ethernet) +[root@kubernetes-ha2 ~]# ifconfig eth0:1 +eth0:1: flags=4163 mtu 1500 + ether 00:50:56:24:cd:0e txqueuelen 1000 (Ethernet) +结果显示: + 高可用的主节点把从节点的ip地址给夺过来了,实现了节点的漂移 +``` + +```powershell +主角色关闭服务 +[root@kubernetes-ha1 ~]# systemctl stop keepalived.service +[root@kubernetes-ha1 ~]# ifconfig eth0:1 +eth0:1: flags=4163 mtu 1500 + ether 00:50:56:2d:d9:0a txqueuelen 1000 (Ethernet) + +从节点查看vip效果 +[root@kubernetes-ha2 ~]# 
ifconfig eth0:1 +eth0:1: flags=4163 mtu 1500 + inet 10.0.0.200 netmask 255.255.255.255 broadcast 0.0.0.0 + ether 00:50:56:24:cd:0e txqueuelen 1000 (Ethernet) + +主角色把vip抢过来 +[root@kubernetes-ha1 ~]# systemctl start keepalived.service +eth0:1: flags=4163 mtu 1500 + inet 10.0.0.200 netmask 255.255.255.255 broadcast 0.0.0.0 + ether 00:50:56:2d:d9:0a txqueuelen 1000 (Ethernet) +``` + +```powershell +注意: + 在演示实验环境的时候,如果你的主机资源不足,可以只开一台keepalived +``` + +**高负载** + +简介 + +```powershell + 所谓的高负载集群,指的是在当前业务环境集群中,所有的主机节点都处于正常的工作活动状态,它们共同承担起用户的请求带来的工作负载压力,保证用户的正常访问。支持高可用的软件很多,比如nginx、lvs、haproxy、等,我们这里用的是haproxy。 + HAProxy是法国开发者 威利塔罗(Willy Tarreau) 在2000年使用C语言开发的一个开源软件,是一款具备高并发(一万以上)、高性能的TCP和HTTP负载均衡器,支持基于cookie的持久性,自动故障切换,支持正则表达式及web状态统计、它也支持基于数据库的反向代理。 +``` + +软件安装 + +```powershell +我们在 10.0.0.18和10.0.0.19主机上部署haproxy软件 +[root@kubernetes-ha1 ~]# yum install haproxy -y +``` + +软件配置 + +```powershell +10.0.0.18主机配置高负载 +[root@kubernetes-ha1 ~]# cp /etc/haproxy/haproxy.cfg{,.bak} +[root@kubernetes-ha1 ~]# cat /etc/haproxy/haproxy.cfg +... +listen status + bind 10.0.0.200:9999 + mode http + log global + stats enable + stats uri /haproxy-status + stats auth superopsmsb:123456 + +listen kubernetes-api-6443 + bind 10.0.0.200:6443 + mode tcp + server kubernetes-master1 10.0.0.12:6443 check inter 3s fall 3 rise 5 + server kubernetes-master2 10.0.0.13:6443 check inter 3s fall 3 rise 5 + server kubernetes-master3 10.0.0.14:6443 check inter 3s fall 3 rise 5 + +重启服务后查看效果 +[root@kubernetes-ha1 ~]# systemctl start haproxy.service +[root@kubernetes-ha1 ~]# netstat -tnulp | head -n4 +Active Internet connections (only servers) +Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name +tcp 0 0 10.0.0.200:6443 0.0.0.0:* LISTEN 2790/haproxy +tcp 0 0 10.0.0.200:9999 0.0.0.0:* LISTEN 2790/haproxy +``` + +```powershell +浏览器查看haproxy的页面效果 10.0.0.200:9999/haproxy-status,输入用户名和密码后效果如下 +``` + +![image-20220716075320996](../../img/kubernetes/kubernetes_flannel/image-20220716075320996.png) + +```powershell +10.0.0.19主机配置高可用主节点 +[root@kubernetes-ha2 ~]# cp /etc/haproxy/haproxy.cfg{,.bak} +把10.0.0.18主机配置传递到10.0.0.19主机 +[root@kubernetes-ha1 ~]# scp /etc/haproxy/haproxy.cfg root@10.0.0.19:/etc/haproxy/haproxy.cfg + +默认情况下,没有vip的节点是无法启动haproxy的 +[root@kubernetes-ha2 ~]# systemctl start haproxy +[root@kubernetes-ha2 ~]# systemctl status haproxy.service +● haproxy.service - HAProxy Load Balancer + Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled) + Active: failed (Result: exit-code) + ... 
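# 说明: 启动失败的原因是 VIP 10.0.0.200 此刻不在本机,haproxy 无法绑定该监听地址,
# 可以通过日志进一步确认(示例): journalctl -u haproxy --no-pager | tail -n 20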
+ +通过修改内核的方式,让haproxy绑定一个不存在的ip地址从而启动成功 +[root@kubernetes-ha2 ~]# sysctl -a | grep nonlocal +net.ipv4.ip_nonlocal_bind = 0 +net.ipv6.ip_nonlocal_bind = 0 + +开启nonlocal的内核参数 +[root@kubernetes-ha2 ~]# echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf +[root@kubernetes-ha2 ~]# sysctl -p +net.ipv4.ip_nonlocal_bind = 1 +注意: + 这一步最好在ha1上也做一下 + +再次启动haproxy服务 +[root@kubernetes-ha2 ~]# systemctl start haproxy +[root@kubernetes-ha2 ~]# systemctl status haproxy | grep Active + Active: active (running) since 六 2062-07-16 08:05:00 CST; 14s ago +[root@kubernetes-ha1 ~]# netstat -tnulp | head -n4 +Active Internet connections (only servers) +Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name +tcp 0 0 10.0.0.200:6443 0.0.0.0:* LISTEN 3062/haproxy +tcp 0 0 10.0.0.200:9999 0.0.0.0:* LISTEN 3062/haproxy +``` + +```powershell +主角色关闭服务 +[root@kubernetes-ha1 ~]# systemctl stop keepalived.service +[root@kubernetes-ha1 ~]# ifconfig eth0:1 +eth0:1: flags=4163 mtu 1500 + ether 00:50:56:2d:d9:0a txqueuelen 1000 (Ethernet) + +从节点查看vip效果 +[root@kubernetes-ha2 ~]# ifconfig eth0:1 +eth0:1: flags=4163 mtu 1500 + inet 10.0.0.200 netmask 255.255.255.255 broadcast 0.0.0.0 + ether 00:50:56:24:cd:0e txqueuelen 1000 (Ethernet) +``` + +**高可用升级** + +问题 + +```powershell + 目前我们实现了高可用的环境,无论keepalived是否在哪台主机存活,都有haproxy能够提供服务,但是在后续处理的时候,会出现一种问题,haproxy一旦故障,而keepalived没有同时关闭的话,会导致服务无法访问。效果如下 +``` + +![image-20220716081105875](../../img/kubernetes/kubernetes_flannel/image-20220716081105875.png) + +```powershell + 所以我们有必要对keeaplived进行升级,需要借助于其内部的脚本探测机制实现对后端haproxy进行探测,如果后端haproxy异常就直接把当前的keepalived服务关闭。 +``` + +制作keepalived的脚本文件 + +```powershell +[root@localhost ~]# cat /etc/keepalived/check_haproxy.sh +#!/bin/bash +# 功能: keepalived检测后端haproxy的状态 +# 版本: v0.1 +# 作者: 书记 +# 联系: superopsmsb.com + +# 检测后端haproxy的状态 +haproxy_status=$(ps -C haproxy --no-header | wc -l) +# 如果后端没有haproxy服务,则尝试启动一次haproxy +if [ $haproxy_status -eq 0 ];then + systemctl start haproxy >> /dev/null 2>&1 + sleep 3 + # 如果重启haproxy还不成功的话,则关闭keepalived服务 + if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ] + then + systemctl stop keepalived + fi +fi + +脚本赋予执行权限 +[root@kubernetes-ha1 ~]# chmod +x /etc/keepalived/check_haproxy.sh +``` + +改造keepalived的配置文件 + +```powershell +查看10.0.0.18主机的高可用配置修改 +[root@kubernetes-ha1 ~]# cat /etc/keepalived/keepalived.conf +global_defs { + router_id kubernetes_ha1 +} +# 定制keepalive检测后端haproxy服务 +vrrp_script chk_haproxy { + script "/bin/bash /etc/keepalived/check_haproxy.sh" + interval 2 + weight -20 # 每次检测失败都降低权重值 +} +vrrp_instance VI_1 { + state MASTER + interface eth0 + virtual_router_id 51 + priority 100 + advert_int 1 + authentication { + auth_type PASS + auth_pass 1111 + } + # 应用内部的检测机制 + track_script { + chk_haproxy + } + virtual_ipaddress { + 10.0.0.200 dev eth0 label eth0:1 + } +} +注意: + keepalived所有配置后面不允许有任何空格,否则启动有可能出现异常 + +重启keepalived服务 +[root@kubernetes-ha1 ~]# systemctl restart keepalived +``` + +```powershell +传递检测脚本给10.0.0.19主机 +[root@kubernetes-ha1 ~]# scp /etc/keepalived/check_haproxy.sh root@10.0.0.19:/etc/keepalived/check_haproxy.sh + +10.0.0.19主机,也将这两部分修改配置添加上,并重启keepalived服务 +[root@kubernetes-ha2 ~]# systemctl restart keepalived +``` + +服务检测 + +```powershell +查看当前vip效果 +[root@kubernetes-ha1 ~]# ifconfig eth0:1 +eth0:1: flags=4163 mtu 1500 + inet 10.0.0.200 netmask 255.255.255.255 broadcast 0.0.0.0 + ether 00:50:56:2d:d9:0a txqueuelen 1000 (Ethernet) +``` + +```powershell +因为haproxy内置的强制保证服务启动机制,无法通过stop的方式关闭,通过移除配置文件方式强制关闭haproxy +[root@kubernetes-ha1 ~]# mv /etc/haproxy/haproxy.cfg ./ 
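# 说明: 前面 keepalived 的检测脚本会自动尝试拉起 haproxy,
# 因此先把配置文件移走,让 haproxy 无法再被成功启动,从而模拟出持续性的故障场景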
+[root@kubernetes-ha1 ~]# systemctl stop haproxy.service +[root@kubernetes-ha1 ~]# systemctl status haproxy.service | grep Active + Active: failed (Result: exit-code) + +10.0.0.18主机检查vip效果 +[root@kubernetes-ha1 ~]# ifconfig eth0:1 +eth0:1: flags=4163 mtu 1500 + ether 00:50:56:2d:d9:0a txqueuelen 1000 (Ethernet) + +10.0.0.19主机检测vip效果 +[root@kubernetes-ha2 ~]# ifconfig eth0:1 +eth0:1: flags=4163 mtu 1500 + inet 10.0.0.200 netmask 255.255.255.255 broadcast 0.0.0.0 + ether 00:50:56:24:cd:0e txqueuelen 1000 (Ethernet) +结果显示: + 到此为止,我们所有的高可用环境都梳理完毕了 +``` + + + +**小结** + +``` + +``` + + + +### 1.2.3 集群初始化实践 + +学习目标 + +这一节,我们从 配置解读、文件实践、小结 三个方面来学习。 + +**配置解读** + +简介 + +```powershell + 对于kubernetes来说,它提供了高度定制配置的方式来设定集群环境,为了方便后续集群的升级等操作,我们这里安装 1.23.8版本。 +``` + +环境准备 + +```powershell +所有kubernetes主机节点上都安装1.23.8版本的软件: + yum remove kubeadm kubectl kubelet -y + yum install kubeadm-1.23.8-0 kubectl-1.23.8-0 kubelet-1.23.8-0 -y +``` + +```powershell +执行脚本获取制定版本的kubernetes依赖的docker镜像 +[root@kubernetes-master ~]# /bin/bash /data/scripts/04_kubernetes_get_images.sh +请输入要获取kubernetes镜像的版本(示例: v1.23.9): v1.23.8 +... + +提交镜像到远程仓库 +[root@kubernetes-master ~]# for i in $(docker images | grep -v TAG | awk '{print $1":"$2}');do docker push $i;done +``` + +配置文件解读 + +```powershell +kubernetes的集群初始化两种方式: + 方法1:命令方式灵活,但是后期配置不可追溯 + 方法2:文件方式繁琐,但是所有配置都可以控制 +``` + +```powershell +查看集群初始化时候的默认配置 +[root@kubernetes-master ~]# kubeadm config print init-defaults +apiVersion: kubeadm.k8s.io/v1beta3 # 不同版本的api版本不一样 +# 认证授权相关的基本信息 +bootstrapTokens: +- groups: + - system:bootstrappers:kubeadm:default-node-token + token: abcdef.0123456789abcdef + ttl: 24h0m0s + usages: + - signing + - authentication +kind: InitConfiguration +# 第一个master节点配置的入口 +localAPIEndpoint: + advertiseAddress: 1.2.3.4 + bindPort: 6443 +# node节点注册到master集群的通信方式 +nodeRegistration: + criSocket: /var/run/dockershim.sock + imagePullPolicy: IfNotPresent + name: node + taints: null +--- +# 基本的集群属性信息 +apiServer: + timeoutForControlPlane: 4m0s +apiVersion: kubeadm.k8s.io/v1beta3 +certificatesDir: /etc/kubernetes/pki +clusterName: kubernetes +controllerManager: {} +dns: {} +# kubernetes的数据管理方式 +etcd: + local: + dataDir: /var/lib/etcd +# 镜像仓库的配置 +imageRepository: k8s.gcr.io +kind: ClusterConfiguration +# kubernetes版本的定制 +kubernetesVersion: 1.23.0 +# kubernetes的网络基本信息 +networking: + dnsDomain: cluster.local + serviceSubnet: 10.96.0.0/12 +scheduler: {} +``` + +```powershell +修改配置文件,让其支持新master的内容 + bootstrapTokens.groups.ttl 修改为48h或者更大 + localAPIEndpoint.advertiseAddress 修改为当前主机的真实ip地址 + controlPlanelEndpoint 如果涉及到haproxy的方式,需要额外添加基于VIP的Endpoint地址 + 注意:该项一般是在 dns{} 的上一行 + imageRepository 修改为我们能够访问到镜像的仓库地址 + kubernetesVersion 修改为当前的 k8s 的版本信息 + networking.serviceSubnet 改为自己的规划中的service地址信息,最好是大一点的私网地址段 +``` + +配置文件变动 + +```powershell +创建专属目录 +[root@kubernetes-master ~]# mkdir -p /data/kubernetes/cluster_init + +在集群初始化节点上生成通信认证的配置信息 +[root@kubernetes-master ~]# kubeadm config print init-defaults > /data/kubernetes/cluster_init/kubeadm-init-1-23-8.yml +``` + +```powershell +修改配置文件 +[root@kubernetes-master ~]# cat /data/kubernetes/cluster_init/kubeadm-init-1-23-8.yml +apiVersion: kubeadm.k8s.io/v1beta3 +bootstrapTokens: +- groups: + - system:bootstrappers:kubeadm:default-node-token + token: abcdef.0123456789abcdef + ttl: 48h0m0s + usages: + - signing + - authentication +kind: InitConfiguration +localAPIEndpoint: + advertiseAddress: 10.0.0.12 + bindPort: 6443 +nodeRegistration: + criSocket: /var/run/dockershim.sock + imagePullPolicy: IfNotPresent + name: kubernetes-master1 + 
taints: + - effect: NoSchedule + key: node-role.kubernetes.io/master +--- +apiServer: + timeoutForControlPlane: 4m0s +apiVersion: kubeadm.k8s.io/v1beta3 +certificatesDir: /etc/kubernetes/pki +clusterName: kubernetes +controllerManager: {} +controlPlaneEndpoint: 10.0.0.200:6443 +dns: {} +etcd: + local: + dataDir: /var/lib/etcd +imageRepository: kubernetes-register.superopsmsb.com/google_containers +kind: ClusterConfiguration +kubernetesVersion: 1.23.8 +networking: + dnsDomain: cluster.local + serviceSubnet: 10.96.0.0/12 + podSubnet: 10.244.0.0/16 +scheduler: {} +``` + + + +**文件实践** + +环境初始化 + +```powershell +初始化集群环境 +[root@kubernetes-master ~]# kubeadm init --config /data/kubernetes/cluster_init/kubeadm-init-1-23-8.yml +[root@kubernetes-master ~]# kubeadm init --config /data/kubernetes/cluster_init/kubeadm-init-1-23-8.yml +[init] Using Kubernetes version: v1.23.8 +[preflight] Running pre-flight checks + [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service' + [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' +[preflight] Pulling images required for setting up a Kubernetes cluster +[preflight] This might take a minute or two, depending on the speed of your internet connection +[preflight] You can also perform this action in beforehand using 'kubeadm config images pull' +[certs] Using certificateDir folder "/etc/kubernetes/pki" +[certs] Generating "ca" certificate and key +[certs] Generating "apiserver" certificate and key +[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes-master1 kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.12 10.0.0.200] +[certs] Generating "apiserver-kubelet-client" certificate and key +[certs] Generating "front-proxy-ca" certificate and key +[certs] Generating "front-proxy-client" certificate and key +[certs] Generating "etcd/ca" certificate and key +[certs] Generating "etcd/server" certificate and key +[certs] etcd/server serving cert is signed for DNS names [kubernetes-master1 localhost] and IPs [10.0.0.12 127.0.0.1 ::1] +[certs] Generating "etcd/peer" certificate and key +[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master1 localhost] and IPs [10.0.0.12 127.0.0.1 ::1] +[certs] Generating "etcd/healthcheck-client" certificate and key +[certs] Generating "apiserver-etcd-client" certificate and key +[certs] Generating "sa" key and public key +[kubeconfig] Using kubeconfig folder "/etc/kubernetes" +[kubeconfig] Writing "admin.conf" kubeconfig file +[kubeconfig] Writing "kubelet.conf" kubeconfig file +[kubeconfig] Writing "controller-manager.conf" kubeconfig file +[kubeconfig] Writing "scheduler.conf" kubeconfig file +[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" +[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" +[kubelet-start] Starting the kubelet +[control-plane] Using manifest folder "/etc/kubernetes/manifests" +[control-plane] Creating static Pod manifest for "kube-apiserver" +[control-plane] Creating static Pod manifest for "kube-controller-manager" +[control-plane] Creating static Pod manifest for "kube-scheduler" +[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" +[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0s +[apiclient] All control plane components are healthy after 15.554614 seconds +[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace +[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster +NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently. +[upload-certs] Skipping phase. Please see --upload-certs +[mark-control-plane] Marking the node kubernetes-master1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers] +[mark-control-plane] Marking the node kubernetes-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] +[bootstrap-token] Using token: abcdef.0123456789abcdef +[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles +[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes +[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials +[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token +[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster +[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace +[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key +[addons] Applied essential addon: CoreDNS +[addons] Applied essential addon: kube-proxy + +Your Kubernetes control-plane has initialized successfully! + +To start using your cluster, you need to run the following as a regular user: + + mkdir -p $HOME/.kube + sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config + sudo chown $(id -u):$(id -g) $HOME/.kube/config + +Alternatively, if you are the root user, you can run: + + export KUBECONFIG=/etc/kubernetes/admin.conf + +You should now deploy a pod network to the cluster. 
+Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: + https://kubernetes.io/docs/concepts/cluster-administration/addons/ + +# 这里多了一个主角色节点加入集群的步骤 +You can now join any number of control-plane nodes by copying certificate authorities +and service account keys on each node and then running the following as root: + + kubeadm join 10.0.0.200:6443 --token abcdef.0123456789abcdef \ + --discovery-token-ca-cert-hash sha256:bfaed502274988b5615006e91337db9252707f2dd033e9d32670175bf8445d67 \ + --control-plane + +Then you can join any number of worker nodes by running the following on each as root: + +kubeadm join 10.0.0.200:6443 --token abcdef.0123456789abcdef \ + --discovery-token-ca-cert-hash sha256:bfaed502274988b5615006e91337db9252707f2dd033e9d32670175bf8445d67 +``` + +```powershell +设定主角色主机的权限文件 +[root@kubernetes-master1 ~]# mkdir -p $HOME/.kube +[root@kubernetes-master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config +[root@kubernetes-master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config +``` + +master节点的集群加入 + +```powershell +获取master集群节点添加到控制平面的基本认证信息 +[root@kubernetes-master1 ~]# kubeadm init phase upload-certs --upload-certs +I0716 10:13:59.447562 119631 version.go:255] remote version is much newer: v1.24.3; falling back to: stable-1.23 +[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace +[upload-certs] Using certificate key: +120fe6a5baad0ec32da4ae185728f5273e415aeb28e36de3b92ad4e1ecd7d69a +``` + +```powershell +构造master节点添加到控制平面的命令,在剩余的maste2和master3上执行 +[root@kubernetes-master2 ~]# kubeadm join 10.0.0.200:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:bfaed502274988b5615006e91337db9252707f2dd033e9d32670175bf8445d67 --control-plane --certificate-key 120fe6a5baad0ec32da4ae185728f5273e415aeb28e36de3b92ad4e1ecd7d69a +``` + +```powershell +所有master节点执行命令完毕后查看效果 +[root@kubernetes-master3 ~]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +kubernetes-master1 Ready control-plane,master 16m v1.23.8 +kubernetes-master2 NotReady control-plane,master 2m13s v1.23.8 +kubernetes-master3 NotReady control-plane,master 68s v1.23.8 +``` + + + +**小结** + +``` + +``` + + + +### 1.2.4 工作节点实践 + +学习目标 + +这一节,我们从 节点实践、dashboard实践、小结 三个方面来学习。 + +**节点实践** + +将三个工作节点添加到kubernetes集群中 + +```powershell +三个节点执行的命令一致 +kubeadm join 10.0.0.200:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:bfaed502274988b5615006e91337db9252707f2dd033e9d32670175bf8445d67 +``` + +```powershell +节点执行命令完毕后,回到任意一台master上查看效果 +[root@kubernetes-master3 ~]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +kubernetes-master1 Ready control-plane,master 16m v1.23.8 +kubernetes-master2 NotReady control-plane,master 2m13s v1.23.8 +kubernetes-master3 NotReady control-plane,master 68s v1.23.8 +kubernetes-node1 Ready 44s v1.23.8 +kubernetes-node2 Ready 25s v1.23.8 +kubernetes-node3 Ready 35s v1.23.8 +结果显示: + 有的节点状态是Ready,有的不是,原因是我们的flannel环境还没有定制 +``` + +定制网络环境 + +```powershell +master1节点上执行网络配置文件 +[root@kubernetes-master1 ~]# kubectl apply -f /data/kubernetes/flannel/kube-flannel.yml +namespace/kube-flannel created +clusterrole.rbac.authorization.k8s.io/flannel created +clusterrolebinding.rbac.authorization.k8s.io/flannel created +serviceaccount/flannel created +configmap/kube-flannel-cfg created +daemonset.apps/kube-flannel-ds created +``` + +```powershell +再次查看节点效果 +[root@kubernetes-master1 ~]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +kubernetes-master1 Ready control-plane,master 18m v1.23.8 
+kubernetes-master2 Ready control-plane,master 3m48s v1.23.8 +kubernetes-master3 Ready control-plane,master 2m43s v1.23.8 +kubernetes-node1 Ready 2m19s v1.23.8 +kubernetes-node2 Ready 2m v1.23.8 +kubernetes-node3 Ready 2m10s v1.23.8 +``` + +**dashboard实践** + +简介 + +```powershell + 目前来说,kubernetes的dashboard插件版本没有通用的,只能找到适合当前kubernetes版本的软件。另外,默认的dashboard有可能会受到各种因素的限制,我们还是希望去安装专门的kubernetes的可视化界面软件。 + +参考资料: https://kubernetes.io/docs/concepts/cluster-administration/addons/ +``` + +![image-20220716103544284](../../img/kubernetes/kubernetes_flannel/image-20220716103544284.png) + +```powershell +在这里,我们选择v2.5.1版本,这个版本对于1.23.x版本支持没有太大的问题。 +[root@kubernetes-master1 ~]# mkdir /data/kubernetes/dashboard -p +[root@kubernetes-master1 ~]# cd /data/kubernetes/dashboard/ +[root@kubernetes-master1 /data/kubernetes/dashboard]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml +[root@kubernetes-master1 /data/kubernetes/dashboard]# mv recommended.yaml dashboard-2.5.1.yaml +[root@kubernetes-master1 /data/kubernetes/dashboard]# cp dashboard-2.5.1.yaml{,.bak} +``` + +修改配置文件 + +```powershell +添加相关配置 +[root@kubernetes-master1 /data/kubernetes/dashboard]# vim dashboard-2.5.1.yaml +kind: Service +... +spec: + ports: + - port: 443 + targetPort: 8443 + nodePort: 30443 # 增加此行 + type: NodePort # 增加此行 + selector: + k8s-app: kubernetes-dashboard +``` + +```powershell +定制依赖镜像 +for i in $(grep 'image:' dashboard-2.5.1.yaml | grep -v '#' | awk -F '/' '{print $NF}') +do + docker pull kubernetesui/$i + docker tag kubernetesui/$i kubernetes-register.superopsmsb.com/google_containers/$i + docker push kubernetes-register.superopsmsb.com/google_containers/$i + docker rmi kubernetesui/$i +done + +确认镜像 +[root@kubernetes-master1 /data/kubernetes/dashboard]# docker images | egrep 'dashboard|scr' | awk '{print $1,$2}' +kubernetes-register.superopsmsb.com/google_containers/dashboard v2.5.1 +kubernetes-register.superopsmsb.com/google_containers/metrics-scraper v1.0.7 +``` + +```powershell +修改配置文件 +[root@kubernetes-master1 /data/kubernetes/dashboard]# sed -i '/ image:/s/kubernetesui/kubernetes-register.superopsmsb.com\/google_containers/' dashboard-2.5.1.yaml +[root@kubernetes-master1 /data/kubernetes/dashboard]# sed -n '/ image:/p' dashboard-2.5.1.yaml + image: kubernetes-register.superopsmsb.com/google_containers/dashboard:v2.5.1 + image: kubernetes-register.superopsmsb.com/google_containers/metrics-scraper:v1.0.7 +``` + +应用配置文件 + +```powershell +[root@kubernetes-master1 /data/kubernetes/dashboard]# kubectl apply -f dashboard-2.5.1.yaml +namespace/kubernetes-dashboard created +serviceaccount/kubernetes-dashboard created +service/kubernetes-dashboard created +secret/kubernetes-dashboard-certs created +secret/kubernetes-dashboard-csrf created +secret/kubernetes-dashboard-key-holder created +configmap/kubernetes-dashboard-settings created +role.rbac.authorization.k8s.io/kubernetes-dashboard created +clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created +rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created +clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created +deployment.apps/kubernetes-dashboard created +service/dashboard-metrics-scraper created +deployment.apps/dashboard-metrics-scraper created + +查看环境的准备效果 +[root@kubernetes-master1 ~]# kubectl get all -n kubernetes-dashboard +``` + +![image-20220716105728014](../../img/kubernetes/kubernetes_flannel/image-20220716105728014.png) + +浏览器访问效果 + 
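```powershell
访问入口示例(以 master1 节点为例,端口为上文 Service 中定制的 30443):
    https://10.0.0.12:30443
注意:
    dashboard 默认以自签名证书提供 HTTPS 服务,浏览器提示证书不受信任时,选择继续访问即可
```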
+![image-20220716105728014](../../img/kubernetes/kubernetes_flannel/image-20220716105844575.png) + +dashboard界面登录 + +```powershell +由于涉及到后续我们还没有整理的资料,所以大家执行执行一条命令即可,之余涉及到具体知识点,我们后续再说。 +kubectl create serviceaccount dashboard-admin -n kube-system +kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin +kubernetes_user=$(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}') +kubectl describe secrets -n kube-system ${kubernetes_user} +``` + +![image-20220716110204077](../../img/kubernetes/kubernetes_flannel/image-20220716110204077.png) + +```powershell +拷贝token信息,然后粘贴到浏览器界面的输入框中,然后点击登录,查看效果 +``` + +![image-20220716110306390](../../img/kubernetes/kubernetes_flannel/image-20220716110306390.png) + +**小结** + +``` + +``` + + + +### 1.2.5 集群初始化解读 + +学习目标 + +这一节,我们从 流程简介、流程解读、小结 三个方面来学习。 + +```powershell +参考资料: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow +``` + +**流程简介** + +参考资料 + +```powershell +https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow +``` + +流程简介 + +```powershell +master节点启动: + 当前环境检查,确保当前主机可以部署kubernetes + 生成kubernetes对外提供服务所需要的各种私钥以及数字证书 + 生成kubernetes控制组件的kubeconfig文件 + 生成kubernetes控制组件的pod对象需要的manifest文件 + 为集群控制节点添加相关的标识,不让主节点参与node角色工作 + 生成集群的统一认证的token信息,方便其他节点加入到当前的集群 + 进行基于TLS的安全引导相关的配置、角色策略、签名请求、自动配置策略 + 为集群安装DNS和kube-porxy插件 +``` + +```powershell +node节点加入 + 当前环境检查,读取相关集群配置信息 + 获取集群相关数据后启动kubelet服务 + 获取认证信息后,基于证书方式进行通信 +``` + +**流程解读** + +master节点初始化流程 + +```powershell +1 当前环境检查,确保当前主机可以部署kubernetes +[init] Using Kubernetes version: v1.23.8 +[preflight] Running pre-flight checks + [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.s ervice' + [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubele t.service' +[preflight] Pulling images required for setting up a Kubernetes cluster +[preflight] This might take a minute or two, depending on the speed of your internet connection +[preflight] You can also perform this action in beforehand using 'kubeadm config images pull' +``` + +```powershell +2 生成kubernetes对外提供服务所需要的各种私钥以及数字证书 +[certs] Using certificateDir folder "/etc/kubernetes/pki" +[certs] Generating "ca" certificate and key +[certs] Generating "apiserver" certificate and key +[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes-master1 kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.12 10.0.0.200] +[certs] Generating "apiserver-kubelet-client" certificate and key +[certs] Generating "front-proxy-ca" certificate and key +[certs] Generating "front-proxy-client" certificate and key +[certs] Generating "etcd/ca" certificate and key +[certs] Generating "etcd/server" certificate and key +[certs] etcd/server serving cert is signed for DNS names [kubernetes-master1 localhost] and IPs [10.0.0.12 127.0.0.1 ::1] +[certs] Generating "etcd/peer" certificate and key +[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master1 localhost] and IPs [10.0.0.12 127.0.0.1 ::1] +[certs] Generating "etcd/healthcheck-client" certificate and key +[certs] Generating "apiserver-etcd-client" certificate and key +[certs] Generating "sa" key and public key +``` + +```powershell +3 生成kubernetes控制组件的kubeconfig文件及相关的启动配置文件 +[kubeconfig] Using kubeconfig folder "/etc/kubernetes" +[kubeconfig] Writing "admin.conf" kubeconfig file +[kubeconfig] Writing "kubelet.conf" 
kubeconfig file +[kubeconfig] Writing "controller-manager.conf" kubeconfig file +[kubeconfig] Writing "scheduler.conf" kubeconfig file +[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" +[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" +[kubelet-start] Starting the kubelet +``` + +```powershell +4 生成kubernetes控制组件的pod对象需要的manifest文件 +[control-plane] Using manifest folder "/etc/kubernetes/manifests" +[control-plane] Creating static Pod manifest for "kube-apiserver" +[control-plane] Creating static Pod manifest for "kube-controller-manager" +[control-plane] Creating static Pod manifest for "kube-scheduler" +[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" +[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s +[apiclient] All control plane components are healthy after 15.554614 seconds +[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace +[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster +NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently. +[upload-certs] Skipping phase. Please see --upload-certs +``` + +```powershell +5 为集群控制节点添加相关的标识,不让主节点参与node角色工作 +[mark-control-plane] Marking the node kubernetes-master1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers] +[mark-control-plane] Marking the node kubernetes-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] +``` + +```powershell +6 生成集群的统一认证的token信息,方便其他节点加入到当前的集群 +[bootstrap-token] Using token: abcdef.0123456789abcdef +[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles +[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes +[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials +[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token +[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster +[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace +``` + +```powershell +7 进行基于TLS的安全引导相关的配置、角色策略、签名请求、自动配置策略 +[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key +``` + +```powershell +8 为集群安装DNS和kube-porxy插件 +[addons] Applied essential addon: CoreDNS +[addons] Applied essential addon: kube-proxy +``` + +node节点初始化流程 + +```powershell +1 当前环境检查,读取相关集群配置信息 +[preflight] Running pre-flight checks +[preflight] Reading configuration from the cluster... 
+[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' +``` + +```powershell +2 获取集群相关数据后启动kubelet服务 +[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" +[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" +[kubelet-start] Starting the kubelet +``` + +```powershell +3 获取认证信息后,基于证书方式进行通信 +[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... + +This node has joined the cluster: +* Certificate signing request was sent to apiserver and a response was received. +* The Kubelet was informed of the new secure connection details. + +Run 'kubectl get nodes' on the control-plane to see this node join the cluster. +``` + + + +**小结** + +``` + +``` + + + +## 1.3 综合实践 + +### 1.3.1 应用案例解读 + +学习目标 + +这一节,我们从 环境规划、实践解读、小结 三个方面来学习。 + +**环境规划** + +简介 + +```powershell + 为了让大家对于kubernetes在生产中如何工作,我们这里从0创建容器环境,然后迁移到kubernetes环境上,因为还没有涉及到kubernetes本身的功能学习,所以涉及到部分的资源清单文件,我们会直接粘贴复制,后面的资源清单信息,我们会依次进行手工编写处理。 +``` + +环境规划 + +![image-20220716135622610](../../img/kubernetes/kubernetes_flannel/image-20220716135622610.png) + +```powershell + 这里定制一个nginx代理后端的一个简单应用,后面所有的知识点应用,基本上都是基于这个架构进行上下延伸的。 +``` + +访问分析 + +```powershell +用户访问 10.0.0.201, + 如果后面url关键字是 /,则将流量转交给 Nginx,应用返回 Hello Nginx 容器名-版本号 + 如果后面url关键字是 /django/,则将流量转交给 Django,应用返回 Hello Django 容器名-版本号 + 如果后面url关键字是 /tomcat/,则将流量转交给 Tomcat,应用返回 Hello Tomcat 容器名-版本号 +``` + +基本思路 + +```powershell +1 定制各个专用的Docker镜像文件 +2 定制kubernetes的专属资源清单文件 +3 调试kubernetes的资源清单文件 +4 开放外部高可用数据入口 +``` + + + +**实践解读** + +获取基本镜像 + +```powershell +获取依赖的基准镜像 +[root@kubernetes-master1 ~]# for i in nginx django tomcat +do + docker pull $i + docker tag $i kubernetes-register.superopsmsb.com/superopsmsb/$i + docker push kubernetes-register.superopsmsb.com/superopsmsb/$i + docker rmi $i +done +查看效果 +[root@kubernetes-master1 ~]# docker images | egrep 'nginx|djan|tom' | awk '{print $1,$2}' +kubernetes-register.superopsmsb.com/superopsmsb/nginx latest +kubernetes-register.superopsmsb.com/superopsmsb/tomcat latest +kubernetes-register.superopsmsb.com/superopsmsb/django latest +``` + +nginx镜像基本环境 + +```powershell +启动容器 +[root@kubernetes-master1 ~]# docker run -d -P --name nginx-test kubernetes-register.superopsmsb.com/superopsmsb/nginx +538acda8cef9d834915b0953fc353d24f922d592d72a26595c0c6604377c6ab9 + +进入容器查看基本信息 +[root@kubernetes-master1 ~]# docker exec -it nginx-test /bin/bash +root@538acda8cef9:/# env | egrep 'HOSTNAME|NGINX_VERSION' +HOSTNAME=538acda8cef9 +NGINX_VERSION=1.23.0 +root@538acda8cef9:/# grep root /etc/nginx/conf.d/default.conf + root /usr/share/nginx/html; +root@fc943556ab2b:/# echo "Hello Nginx, $HOSTNAME-$NGINX_VERSION" > /usr/share/nginx/html/index.html +root@fc943556ab2b:/# curl localhost +Hello Nginx, fc943556ab2b-1.23.0 + +移除容器 +[root@kubernetes-master1 ~]# docker rm -f nginx-test +nginx-test +``` + +django基本环境 + +```powershell +准备外置目录,方便文件留存 +[root@kubernetes-master1 ~]# mkdir /data/code -p + +启动容器 +[root@kubernetes-master1 ~]# docker run -it -p 8000:8000 -v /data/code:/data/code --name django-test kubernetes-register.superopsmsb.com/superopsmsb/django /bin/bash +root@9db574d98e95:/# cd /data/code/ + +创建项目和应用 +root@9db574d98e95:/data/code# django-admin startproject blog +root@9db574d98e95:/data/code# cd blog/ +root@9db574d98e95:/data/code/blog# python manage.py startapp app1 +``` + +```powershell +由于容器内部环境不足,我们在宿主机环境下编辑文件 +[root@kubernetes-master1 ~]# cat /data/code/blog/app1/views.py +from django.http import 
HttpResponse +import os +def index(request): + # 获取系统环境变量 + hostname=os.environ.get("HOSTNAME") + version=os.environ.get("DJANGO_VERSION") + # 定制专属的信息输出 + message="{}, {}-{}\n".format("Hello Django", hostname, version) + return HttpResponse(message) + +定制应用的注册效果 +[root@kubernetes-master1 ~]# cat /data/code/blog/blog/settings.py +... +ALLOWED_HOSTS = ['*'] +... +INSTALLED_APPS = [ + ... + 'app1', +] + +定制数据流转效果 +[root@kubernetes-master1 ~]# cat /data/code/blog/blog/urls.py +... +from app1.views import * +urlpatterns = [ + url(r'^admin/', admin.site.urls), + url(r'', index), +] +``` + +```powershell +容器内部启动服务 +root@9db574d98e95:/data/code/blog# python manage.py runserver 0.0.0.0:8000 +... +Starting development server at http://0.0.0.0:8000/ +Quit the server with CONTROL-C. + +容器外部访问效果 +[root@kubernetes-master1 ~]# curl 10.0.0.12:8000 +Hello Django, 2894f12ebd0c 1.10.4 + +移除容器 +[root@kubernetes-master1 ~]# docker rm -f django-test +django-test +``` + +tomcat基本环境 + +```powershell +启动容器 +[root@kubernetes-master1 ~]# docker run -d -P --name tomcat-test kubernetes-register.superopsmsb.com/superopsmsb/tomcat +8190d3afaa448142e573e826593258316e3480e38c575c864ec5c2708d132a33 + +进入容器查看基本信息 +[root@kubernetes-master1 ~]# docker exec -it tomcat-test /bin/bash +root@8190d3afaa44:/usr/local/tomcat# env | egrep 'HOSTNAME|TOMCAT_VERSION' +HOSTNAME=8190d3afaa44 +TOMCAT_VERSION=10.0.22 +root@8190d3afaa44:/usr/local/tomcat# mv webapps webapps-bak +root@8190d3afaa44:/usr/local/tomcat# mv webapps.dist webapps +root@8190d3afaa44:/usr/local/tomcat# echo "Hello Tomcat, $HOSTNAME-$TOMCAT_VERSION" > webapps/ROOT/index.jsp +root@8190d3afaa44:/usr/local/tomcat# curl localhost:8080 +Hello Tomcat, 8190d3afaa44-10.0.22 + +移除容器 +[root@kubernetes-master1 ~]# docker rm -f tomcat-test +tomcat-test +``` + + + +**小结** + +``` + +``` + + + +### 1.3.2 应用环境定制 + +学习目标 + +这一节,我们从 Nginx构建、Django构建、Tomcat构建、小结 四个方面来学习。 + +**Nginx构建** + +创建镜像构建文件 + +```powershell +创建基准目录 +[root@kubernetes-master1 ~]# mkdir /data/images/web/{nginx,tomcat,django} -p +``` + +定制Nginx镜像 + +创建准备文件 + +```powershell +准备基准代码文件 +[root@kubernetes-master1 ~]# mkdir /data/images/web/nginx/scripts -p + +创建服务启动文件 +[root@kubernetes-master1 ~]# cat /data/images/web/nginx/scripts/startup.sh +#!/bin/bash +# 定制容器里面的nginx服务启动脚本 + +# 定制tomcat的首页内容 +echo "Hello Nginx, $HOSTNAME-$NGINX_VERSION" > /usr/share/nginx/html/index.html + +# 启动nginx +nginx -g "daemon off;" +``` + +```powershell +定制Dockerfile文件 +[root@kubernetes-master1 ~]# cat /data/images/web/nginx/Dockerfile +# 构建一个基于nginx的定制镜像 +# 基础镜像 +FROM kubernetes-register.superopsmsb.com/superopsmsb/nginx +# 镜像作者 +MAINTAINER shuji@superopsmsb.com + +# 添加文件 +ADD scripts/startup.sh /data/scripts/startup.sh + +# 执行命令 +CMD ["/bin/bash", "/data/scripts/startup.sh"] +``` + +```powershell +定制构造镜像 +[root@kubernetes-master1 ~]# docker build -t kubernetes-register.superopsmsb.com/superopsmsb/nginx_web:v0.1 /data/images/web/nginx/ + +测试构造镜像 +[root@kubernetes-master1 ~]# docker run -d --name nginx-test -p 666:80 kubernetes-register.superopsmsb.com/superopsmsb/nginx_web:v0.1 +58b84726ff29b87e3c8e6a6489e4bead4d298bdd17ac08d90a05a8ad8674906e +[root@kubernetes-master1 ~]# curl 10.0.0.12:666 +Hello Nginx, 58b84726ff29-1.23.0 +[root@kubernetes-master1 ~]# docker rm -f nginx-test +nginx-test + +提交镜像到远程仓库 +[root@kubernetes-master1 ~]# docker push kubernetes-register.superopsmsb.com/superopsmsb/nginx_web:v0.1 +``` + + + +**Django构建** + +创建准备文件 + +```powershell +准备基准代码文件 +[root@kubernetes-master1 ~]# mv /data/code/blog /data/images/web/django/ 
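# 创建 django 镜像的脚本目录(后文的 startup.sh 需放置于此)
[root@kubernetes-master1 ~]# mkdir /data/images/web/django/scripts -p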
+[root@kubernetes-master1 ~]# mkdir /data/images/web/nginx/scripts -p + +创建服务启动文件 +[root@kubernetes-master1 ~]# cat /data/images/web/django/scripts/startup.sh +#!/bin/bash +# 定制容器里面的django服务启动脚本 + +# 定制服务启动命令 +python /data/code/blog/manage.py runserver 0.0.0.0:8000 +``` + +定制Dockerfile + +```powershell +定制Dockerfile文件 +[root@kubernetes-master1 ~]# cat /data/images/web/django/Dockerfile +# 构建一个基于django的定制镜像 +# 基础镜像 +FROM kubernetes-register.superopsmsb.com/superopsmsb/django +# 镜像作者 +MAINTAINER shuji@superopsmsb.com + +# 拷贝文件 +ADD blog /data/code/blog +ADD scripts/startup.sh /data/scripts/startup.sh + +# 暴露django端口 +EXPOSE 8000 + +# 定制容器的启动命令 +CMD ["/bin/bash", "/data/scripts/startup.sh"] +``` + +```powershell +定制构造镜像 +[root@kubernetes-master1 ~]# docker build -t kubernetes-register.superopsmsb.com/superopsmsb/django_web:v0.1 /data/images/web/django/ + +测试构造镜像 +[root@kubernetes-master1 ~]# docker run -d --name django-test -p 666:8000 kubernetes-register.superopsmsb.com/superopsmsb/django_web:v0.1 +d8f6a1a237f9276917ffc6e233315d02940437f91c64f43894fb3fab8fd50783 +[root@kubernetes-master1 ~]# curl 10.0.0.12:666 +Hello Django, d8f6a1a237f9 1.10.4 +[root@kubernetes-master1 ~]# docker rm -f django-test +nginx-test + +提交镜像到远程仓库 +[root@kubernetes-master1 ~]# docker push kubernetes-register.superopsmsb.com/superopsmsb/django_web:v0.1 +``` + +**Tomcat构建** + +创建准备文件 + +```powershell +准备基准代码文件 +[root@kubernetes-master1 ~]# mkdir /data/images/web/tomcat/scripts -p + +创建服务启动文件 +[root@kubernetes-master1 ~]# cat /data/images/web/tomcat/scripts/startup.sh +#!/bin/bash +# 定制容器里面的tomcat服务启动脚本 + +# 定制tomcat的首页内容 +echo "Hello Tomcat, $HOSTNAME-$TOMCAT_VERSION" > /usr/local/tomcat/webapps/ROOT/index.jsp +# 启动tomcat +catalina.sh run +``` + +定制Dockerfile + +```powershell +定制Dockerfile文件 +[root@kubernetes-master1 ~]# cat /data/images/web/tomcat/Dockerfile +# 构建一个基于tomcat的定制镜像 +# 基础镜像 +FROM kubernetes-register.superopsmsb.com/superopsmsb/tomcat +# 镜像作者 +MAINTAINER shuji@superopsmsb.com + +# 拷贝文件 +RUN mv webapps.dist/* webapps/ + +# 添加文件 +ADD scripts/startup.sh /data/scripts/startup.sh + +# 执行命令 +CMD ["/bin/bash", "/data/scripts/startup.sh"] +``` + +```powershell +定制构造镜像 +[root@kubernetes-master1 ~]# docker build -t kubernetes-register.superopsmsb.com/superopsmsb/tomcat_web:v0.1 /data/images/web/tomcat/ + +测试构造镜像 +[root@kubernetes-master1 ~]# docker run -d --name tomcat-test -p 666:8080 kubernetes-register.superopsmsb.com/superopsmsb/tomcat_web:v0.1 +e46eb26c49ab873351219e98d6e236fd0445aa39edb8edb0bf86560a808614fb +[root@kubernetes-master1 ~]# curl 10.0.0.12:666 +Hello Tomcat, e46eb26c49ab-10.0.22 +[root@kubernetes-master1 ~]# docker rm -f tomcat-test +tomcat-test + +提交镜像到远程仓库 +[root@kubernetes-master1 ~]# docker push kubernetes-register.superopsmsb.com/superopsmsb/tomcat_web:v0.1 +``` + + + +**小结** + +``` + +``` + + + +### 1.3.3 应用环境实践 + +学习目标 + +这一节,我们从 资源清单解读、应用环境实践、小结 三个方面来学习。 + +**资源清单解读** + +创建基准配置文件 + +```powershell +创建基准目录 +[root@kubernetes-master1 ~]# mkdir /data/kubernetes/cluster_test -p +``` + +nginx入口资源清单 + +```powershell +[root@kubernetes-master1 ~]# cat /data/kubernetes/cluster_test/01_kubernetes-nginx-proxy.yml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: superopsmsb-nginx-proxy + labels: + app: nginx +spec: + replicas: 1 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: kubernetes-register.superopsmsb.com/superopsmsb/nginx + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + 
name: superopsmsb-nginx-proxy + labels: + app: superopsmsb-nginx-proxy +spec: + type: NodePort + selector: + app: nginx + ports: + - protocol: TCP + name: http + port: 80 + targetPort: 80 + nodePort: 30080 +``` + +nginx静态web资源清单 + +```powershell +[root@kubernetes-master1 ~]# cat /data/kubernetes/cluster_test/02_kubernetes-nginx-web.yml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: superopsmsb-nginx-web + labels: + app: nginx-web +spec: + replicas: 1 + selector: + matchLabels: + app: nginx-web + template: + metadata: + labels: + app: nginx-web + spec: + containers: + - name: nginx + image: kubernetes-register.superopsmsb.com/superopsmsb/nginx_web:v0.1 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: superopsmsb-nginx-web + labels: + app: nginx-web +spec: + type: NodePort + selector: + app: nginx-web + ports: + - protocol: TCP + name: http + port: 80 + targetPort: 80 + nodePort: 31080 +``` + +Django动态web资源清单 + +```powershell +[root@kubernetes-master1 ~]# cat /data/kubernetes/cluster_test/03_kubernetes-django-web.yml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: superopsmsb-django-web + labels: + app: django +spec: + replicas: 1 + selector: + matchLabels: + app: django + template: + metadata: + labels: + app: django + spec: + containers: + - name: django + image: kubernetes-register.superopsmsb.com/superopsmsb/django_web:v0.1 + ports: + - containerPort: 8000 +--- +apiVersion: v1 +kind: Service +metadata: + name: superopsmsb-django-web + labels: + app: django-web +spec: + type: NodePort + selector: + app: django + ports: + - protocol: TCP + name: http + port: 8000 + targetPort: 8000 + nodePort: 31800 +``` + +Tomcat动态web资源清单 + +```powershell +[root@kubernetes-master1 ~]# cat /data/kubernetes/cluster_test/04_kubernetes-tomcat-web.yml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: superopsmsb-tomcat-web + labels: + app: tomcat +spec: + replicas: 1 + selector: + matchLabels: + app: tomcat + template: + metadata: + labels: + app: tomcat + spec: + containers: + - name: django + image: kubernetes-register.superopsmsb.com/superopsmsb/tomcat_web:v0.1 + ports: + - containerPort: 8080 +--- +apiVersion: v1 +kind: Service +metadata: + name: superopsmsb-tomcat-web + labels: + app: tomcat-web +spec: + type: NodePort + selector: + app: tomcat + ports: + - protocol: TCP + name: http + port: 8080 + targetPort: 8080 + nodePort: 31880 +``` + + + +**应用环境实践** + +创建环境应用 + +```powershell +应用所有的资源清单文件 +[root@kubernetes-master1 ~]# kubectl apply -f /data/kubernetes/cluster_test +deployment.apps/superopsmsb-nginx-proxy created +service/superopsmsb-nginx-proxy created +deployment.apps/superopsmsb-nginx-web created +service/superopsmsb-nginx-web created +deployment.apps/superopsmsb-django-web created +service/superopsmsb-django-web created +deployment.apps/superopsmsb-tomcat-web created +service/superopsmsb-tomcat-web created +``` + +```powershell +检查资源对象 +[root@kubernetes-master1 ~]# kubectl get svc,deployment,pod +``` + +![image-20220716142721779](../../img/kubernetes/kubernetes_flannel/image-20220716142721779.png) + +检查效果 + +```powershell +[root@kubernetes-master1 ~]# curl 10.0.0.12:31800 +Hello Django, superopsmsb-django-web-5bbcb646df-44hcz-1.10.4 +[root@kubernetes-master1 ~]# curl 10.0.0.12:31080 +Hello Nginx, 00cfe7d5a6e8-1.23.0 +[root@kubernetes-master1 ~]# curl 10.0.0.12:31880 +Hello Tomcat, 1734569458ff-10.0.22 +[root@kubernetes-master1 ~]# curl -I -s 10.0.0.12:30080 | head -n2 +HTTP/1.1 200 OK +Server: nginx/1.23.0 +``` + + + +**小结** + +``` 
+ +``` + + + +### 1.3.4 应用环境升级 + +学习目标 + +这一节,我们从 需求定制、清单升级、小结 三个方面来学习。 + +**需求定制** + +简介 + +```powershell +需求解析: + 1 nginx需要实现反向代理的功能 + 2 nginx、tomcat和django的web应用不对外暴露端口 +``` + +命令简介 + +```powershell + 我们需要在nginx-proxy的pod中定制各个反向代理的基本配置,这就需要连接到pod内部进行基本的操作。我们可以借助于 exec命令来实现。 + +命令格式: + kubectl exec -it 资源对象 -- 容器命令 +``` + +进入到容器内容进行操作 + +```powershell +进入nginx代理容器 +[root@kubernetes-master1 ~]# kubectl exec -it superopsmsb-nginx-proxy-7dcd57d844-z89qr -- /bin/bash +root@superopsmsb-nginx-proxy-7dcd57d844-z89qr:/# + +安装基本测试环境命令 +root@superopsmsb-nginx-proxy-7dcd57d844-z89qr:/# apt update +root@superopsmsb-nginx-proxy-7dcd57d844-z89qr:/# apt install vim net-tools iputils-ping dnsutils curl -y + +在pod内部可以看到所有的后端应用的service地址 +root@superopsmsb-nginx-proxy-7dcd57d844-z89qr:/# env | grep '_HOST' +SUPEROPSMSB_TOMCAT_WEB_SERVICE_HOST=10.111.147.205 +SUPEROPSMSB_NGINX_WEB_SERVICE_HOST=10.111.115.222 +KUBERNETES_SERVICE_HOST=10.96.0.1 +SUPEROPSMSB_NGINX_PROXY_SERVICE_HOST=10.102.253.91 +SUPEROPSMSB_DJANGO_WEB_SERVICE_HOST=10.108.200.190 +``` + +```powershell +kubernetes内部提供了dns服务的能力 +root@superopsmsb-nginx-proxy-7dcd57d844-z89qr:/# nslookup superopsmsb-django-web +Server: 10.96.0.10 +Address: 10.96.0.10#53 +Name: superopsmsb-django-web.default.svc.cluster.local +Address: 10.108.200.190 + +root@superopsmsb-nginx-proxy-7dcd57d844-z89qr:/# nslookup superopsmsb-nginx-web +Server: 10.96.0.10 +Address: 10.96.0.10#53 +Name: superopsmsb-nginx-web.default.svc.cluster.local +Address: 10.111.115.222 + +root@superopsmsb-nginx-proxy-7dcd57d844-z89qr:/# nslookup superopsmsb-tomcat-web +Server: 10.96.0.10 +Address: 10.96.0.10#53 +Name: superopsmsb-tomcat-web.default.svc.cluster.local +Address: 10.111.147.205 + +后端服务可以直接通过域名来访问 +root@superopsmsb-nginx-proxy-7dcd57d844-z89qr:/# curl superopsmsb-django-web:8000 +Hello Django, superopsmsb-django-web-5bbcb646df-44hcz-1.10.4 +root@superopsmsb-nginx-proxy-7dcd57d844-z89qr:/# curl superopsmsb-tomcat-web:8080 +Hello Tomcat, 1734569458ff-10.0.22 +root@superopsmsb-nginx-proxy-7dcd57d844-z89qr:/# curl superopsmsb-nginx-web +Hello Nginx, 00cfe7d5a6e8-1.23.0 +``` + +定制反向配置 + +```powershell +定制nginx的代理配置文件 +root@superopsmsb-nginx-proxy-7dcd57d844-z89qr:/# grep -Env '#|^$' /etc/nginx/conf.d/default.conf +1:server { +2: listen 80; +3: listen [::]:80; +4: server_name localhost; +8: location / { +9: proxy_pass http://superopsmsb-nginx-web/; +10: } +11: location /django/ { +12: proxy_pass http://superopsmsb-django-web:8000/; +13: } +14: location /tomcat/ { +15: proxy_pass http://superopsmsb-tomcat-web:8080/; +16: } +21: error_page 500 502 503 504 /50x.html; +22: location = /50x.html { +23: root /usr/share/nginx/html; +24: } +48:} + +检测配置文件 +root@superopsmsb-nginx-proxy-7dcd57d844-z89qr:/# nginx -t +nginx: the configuration file /etc/nginx/nginx.conf syntax is ok +nginx: configuration file /etc/nginx/nginx.conf test is successful + +重载配置文件 +root@superopsmsb-nginx-proxy-7dcd57d844-z89qr:/# nginx -s reload +2022/07/16 07:08:46 [notice] 634#634: signal process started +``` + +外部主机测试效果 + +```powershell +测试nginx代理的数据跳转效果 +root@superopsmsb-nginx-proxy-7dcd57d844-z89qr:/# curl superopsmsb-nginx-proxy +Hello Nginx, 00cfe7d5a6e8-1.23.0 +root@superopsmsb-nginx-proxy-7dcd57d844-z89qr:/# curl superopsmsb-nginx-proxy/django/ +Hello Django, superopsmsb-django-web-5bbcb646df-44hcz-1.10.4 +root@superopsmsb-nginx-proxy-7dcd57d844-z89qr:/# curl superopsmsb-nginx-proxy/tomcat/ +Hello Tomcat, 1734569458ff-10.0.22 +``` + + + +**清单升级** + +镜像构造 + +```powershell +定制nginx的配置文件 +[root@kubernetes-master1 ~]# 
mkdir /data/images/proxy/nginx/conf -p +[root@kubernetes-master1 ~]# vim /data/images/proxy/nginx/conf/default.conf +server { + listen 80; + listen [::]:80; + server_name localhost; + location / { + proxy_pass http://superopsmsb-nginx-web/; + } + location /django/ { + proxy_pass http://superopsmsb-django-web:8000/; + } + location /tomcat/ { + proxy_pass http://superopsmsb-tomcat-web:8080/; + } + error_page 500 502 503 504 /50x.html; + location = /50x.html { + root /usr/share/nginx/html; + } +} +``` + +```powershell +定制镜像的Dockerfile文件 +[root@kubernetes-master1 ~]# cat /data/images/proxy/nginx/Dockerfile +# 构建一个基于nginx的定制镜像 +# 基础镜像 +FROM kubernetes-register.superopsmsb.com/superopsmsb/nginx +# 镜像作者 +MAINTAINER shuji@superopsmsb.com + +# 增加相关文件 +ADD conf/default.conf /etc/nginx/conf.d/default.conf +``` + +```powershell +镜像的构建 +[root@kubernetes-master1 ~]# docker build -t kubernetes-register.superopsmsb.com/superopsmsb/nginx_proxy:v0.1 /data/images/proxy/nginx/ + +镜像提交 +[root@kubernetes-master1 ~]# docker push register.superopsmsb.com/superopsmsb/nginx_proxy:v0.1 +``` + +改造资源清单文件 + +```powershell +[root@kubernetes-master1 ~]# cat /data/kubernetes/cluster_test/01_kubernetes-nginx-proxy-update.yml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: superopsmsb-nginx-proxy + labels: + app: nginx +spec: + replicas: 1 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: kubernetes-register.superopsmsb.com/superopsmsb/nginx_proxy:v0.1 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: superopsmsb-nginx-proxy + labels: + app: superopsmsb-nginx-proxy +spec: + type: NodePort + selector: + app: nginx + ports: + - protocol: TCP + name: http + port: 80 + targetPort: 80 + nodePort: 30080 +``` + +测试效果 + +```powershell +应用资源清单文件 +[root@kubernetes-master1 /data/kubernetes/cluster_test]# kubectl delete -f 01_kubernetes-nginx-proxy.yml +[root@kubernetes-master1 /data/kubernetes/cluster_test]# kubectl apply -f 01_kubernetes-nginx-proxy-update.yml +``` + +```powershell +测试效果 +[root@kubernetes-master1 /data/kubernetes/cluster_test]# curl 10.0.0.12:30080 +Hello Nginx, 00cfe7d5a6e8-1.23.0 +[root@kubernetes-master1 /data/kubernetes/cluster_test]# curl 10.0.0.12:30080/django/ +Hello Django, superopsmsb-django-web-5bbcb646df-44hcz-1.10.4 +[root@kubernetes-master1 /data/kubernetes/cluster_test]# curl 10.0.0.12:30080/tomcat/ +Hello Tomcat, 1734569458ff-10.0.22 +``` + +**高可用** + +改造keepalived配置 + +```powershell +10.0.0.18主机修改配置 +[root@kubernetes-ha1 ~]# cat /etc/keepalived/keepalived.conf +... +vrrp_instance VI_2 { + state BACKUP + interface eth0 + virtual_router_id 52 + priority 100 + advert_int 1 + authentication { + auth_type PASS + auth_pass 2222 + } + track_script { + chk_haproxy + } + virtual_ipaddress { + 10.0.0.201 dev eth0 label eth0:2 + } +} + +10.0.0.18重启服务 +[root@kubernetes-ha1 ~]# systemctl restart keepalived.service +``` + +```powershell +10.0.0.19主机修改配置 +[root@kubernetes-ha2 ~]# cat /etc/keepalived/keepalived.conf +... 
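# 与 10.0.0.18 的配置相反,本机的 VI_2 为 MASTER,两台主机分别持有 10.0.0.200 和 10.0.0.201,互为主备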
+vrrp_instance VI_2 { + state MASTER + interface eth0 + virtual_router_id 52 + priority 100 + advert_int 1 + authentication { + auth_type PASS + auth_pass 2222 + } + track_script { + chk_haproxy + } + virtual_ipaddress { + 10.0.0.201 dev eth0 label eth0:2 + } +} + +10.0.0.19重启服务 +[root@kubernetes-ha2 ~]# systemctl restart keepalived.service +``` + +```powershell +查看效果 +[root@kubernetes-ha1 ~]# ifconfig eth0:1 +eth0:1: flags=4163 mtu 1500 + inet 10.0.0.200 netmask 255.255.255.255 broadcast 0.0.0.0 + ether 00:50:56:2d:d9:0a txqueuelen 1000 (Ethernet) + +[root@kubernetes-ha2 ~]# ifconfig eth0:2 +eth0:2: flags=4163 mtu 1500 + inet 10.0.0.201 netmask 255.255.255.255 broadcast 0.0.0.0 + ether 00:50:56:24:cd:0e txqueuelen 1000 (Ethernet) +``` + +改造haproxy配置 - 两台主机的配置内容一样 + +```powershell +以10.0.0.18为例,修改配置文件 +[root@kubernetes-ha1 ~]# cat /etc/haproxy/haproxy.cfg +... +listen kubernetes-nginx-30080 + bind 10.0.0.201:80 + mode tcp + server kubernetes-master1 10.0.0.12:30080 check inter 3s fall 3 rise 5 + server kubernetes-master2 10.0.0.13:30080 check inter 3s fall 3 rise 5 + server kubernetes-master3 10.0.0.14:30080 check inter 3s fall 3 rise 5 + +重启haproxy服务 +[root@kubernetes-ha1 ~]# systemctl restart haproxy.service +``` + +检查效果 + +```powershell +haproxy检查效果 +[root@kubernetes-ha1 ~]# netstat -tnulp +Active Internet connections (only servers) +Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name +tcp 0 0 10.0.0.200:6443 0.0.0.0:* LISTEN 71198/haproxy +tcp 0 0 10.0.0.200:9999 0.0.0.0:* LISTEN 71198/haproxy +tcp 0 0 10.0.0.201:80 0.0.0.0:* LISTEN 71198/haproxy +``` + +```powershell +浏览器访问haproxy的状态页面效果 +``` + +![image-20220716153757411](../../img/kubernetes/kubernetes_flannel/image-20220716153757411.png) + +```powershell +找一个客户端访问vip +[root@kubernetes-master1 ~]# curl 10.0.0.201 +Hello Nginx, superopsmsb-nginx-web-757bcb8fc9-lz6fh-1.23.0 +[root@kubernetes-master1 ~]# curl 10.0.0.201/tomcat/ +Hello Tomcat, superopsmsb-tomcat-web-66c86cb7bc-l2ml2-10.0.22 +[root@kubernetes-master1 ~]# curl 10.0.0.201/django/ +Hello Django, superopsmsb-django-web-5bbcb646df-xm8jf-1.10.4 +``` + +# 1 网络管理 + +## 1.1Service + +### 1.1.1 网络体系 + +学习目标 + +这一节,我们从 应用流程、细节解读、小结 三个方面来学习。 + +**应用流程** + +资源对象体系 + +![image-20220715012035435](../../img/kubernetes/kubernetes_flannel/image-20220715012035435.png) + +简介 + +```powershell + 通过对Pod及其管理资源RC和Deployment的实践,我们知道,我们所有的应用服务都是工作在pod资源中,由于每个Pod都有独立的ip地址,而大量的动态创建和销毁操作后,虽然pod资源的数量是控制住了,但是由于pod重新启动,导致他的IP很有可能发生了变化,假设我们这里有前段应用的pod和后端应用的pod,那么再剧烈变动的场景中,两个应用该如何自由通信呢?难道是让我们以类似nginx负载均衡的方式手工定制pod ip然后进行一一管理么?但是这是做不到的。 + + Kubernetes集群就为我们提供了这样的一个对象--Service,它定义了一组Pod的逻辑集合和一个用于访问它们的策略,它可以基于标签的方式自动找到对应的pod应用,而无需关心pod的ip地址变化与否,从而实现了类似负载均衡的效果. 
+ 这个资源在master端的Controller组件中,由Service Controller 来进行统一管理。 +``` + +service + +```powershell + service是Kubernetes里最核心的资源对象之一,每一个Service都是一个完整的业务服务,我们之前学到的Pod、RC、Deployment等资源对象都是为Service服务的。他们之间的关系如下图: +``` + +![image-20210922135912555](../../img/kubernetes/kubernetes_flannel/image-20210922135912555.png) + +解析 + +```powershell + Kubernetes 的 Service定义了一个服务的访问入口地址,前端的应用Pod通过Service访问其背后一组有Pod副本组成的集群示例,Service通过Label Selector访问指定的后端Pod,RC保证Service的服务能力和服务质量处于预期状态。 + + Service是Kubernetes中最高一级的抽象资源对象,每个Service提供一个独立的服务,集群Service彼此间使用TCP/IP进行通信,将不同的服务组合在一起运行起来,就行了我们所谓的"系统",效果如下图 +``` + +![image-20220721090648605](../../img/kubernetes/kubernetes_flannel/image-20220721090648605.png) + +**细节解读** + +Pod入口 + +```powershell + 我们知道每个Pod都有一个专用的IP地址,加上Pod内部容器的Port端口,就组成了一个访问Pod专用的EndPoint(Pod IP+Container Port),从而实现了用户外部资源访问Pod内部应用的效果。这个EndPoint资源在master端的Controller组件中,由EndPoint Controller 来进行统一管理。 +``` + +kube-proxy + +```powershell + Pod是工作在不同的Node节点上,而Node节点上有一个kube-proxy组件,它本身就是一个软件负载均衡器,在内部有一套专有的负载均衡与会话保持机制,可以达到,接收到所有对Service请求,进而转发到后端的某个具体的Pod实例上,相应该请求。 + -- kube-proxy 其实就是 Service Controller位于各节点上的agent。 +``` + +service表现 + +```powershell + Kubernetes给Service分配一个全局唯一的虚拟ip地址--cluster IP,它不存在任何网络设备上,Service通过内容的标签选择器,指定相应该Service的Pod资源,这样以来,请求发给cluster IP,后端的Pod资源收到请求后,就会响应请求。 + +这种情况下,每个Service都有一个全局唯一通信地址,整个系统的内部服务间调用就变成了最基础的TCP/IP网络通信问题。如果我们的集群内部的服务想要和外部的网络进行通信,方法很多,比如: + NodePort类型,通过在所有结点上增加一个对外的端口,用于接入集群外部请求 + ingress类型,通过集群附加服务功能,将外部的域名流量转交到集群内部。 +``` + +service vs endpoint + +```powershell +1 当创建 Service资源的时候,最重要的就是为Service指定能够提供服务的标签选择器, +2 Service Controller就会根据标签选择器创建一个同名的Endpoint资源对象。 +3 Endpoint Controller开始介入,使用Endpoint的标签选择器(继承自Service标签选择器),筛选符合条件的pod资源 +4 Endpoint Controller 将符合要求的pod资源绑定到Endpoint上,并告知给Service资源,谁可以正常提供服务。 +5 Service 根据自身的cluster IP向外提供由Endpoint提供的服务资源。 + +-- 所以Service 其实就是 为动态的一组pod资源对象 提供一个固定的访问入口。 +``` + +**小结** + +``` + +``` + + + +### 1.1.2 工作模型 + +学习目标 + +这一节,我们从 模型解读、类型解读、小结 三个方面来学习。 + +**模型解读** + +简介 + +```powershell + Service对象,对于当前集群的节点来说,本质上就是工作节点的一些iptables或ipvs规则,这些规则由kube-proxy进行实时维护,站在kubernetes的发展脉络上来说,kube-proxy将请求代理至相应端点的方式有三种:userspace/iptables/ipvs。目前我们主要用的是 iptables/ipvs 两种。 +``` + +模式解析 + +![image-20220721093742953](../../img/kubernetes/kubernetes_flannel/image-20220721093742953.png) + +```powershell + userspace模型是k8s(1.1-1.2)最早的一种工作模型,作用就是将service的策略转换成iptables规则,这些规则仅仅做请求的拦截,而不对请求进行调度处理。 + 该模型中,请求流量到达内核空间后,由套接字送往用户空间的kube-proxy,再由它送回内核空间,并调度至后端Pod。因为涉及到来回转发,效率不高,另外用户空间的转发,默认开启了会话粘滞,会导致流量转发给无效的pod上。 +``` + +```powershell + iptables模式是k8s(1.2-至今)默认的一种模式,作用是将service的策略转换成iptables规则,不仅仅包括拦截,还包括调度,捕获到达ClusterIP和Port的流量,并重定向至当前Service的代理的后端Pod资源。性能比userspace更加高效和可靠 +缺点: + 不会在后端Pod无响应时自动重定向,而userspace可以 + 中量级k8s集群(service有几百个)能够承受,但是大量级k8s集群(service有几千个)维护达几万条规则,难度较大 +``` + +```powershell + ipvs是自1.8版本引入,1.11版本起为默认设置,通过内核的Netlink接口创建相应的ipvs规则 +请求流量的转发和调度功能由ipvs实现,余下的其他功能仍由iptables完成。ipvs流量转发速度快,规则同步性能好,且支持众多调度算法,如rr/lc/dh/sh/sed/nq等。 + +注意: + 对于我们kubeadm方式安装k8s集群来说,他会首先检测当前主机上是否已经包含了ipvs模块,如果加载了,就直接用ipvs模式,如果没有加载ipvs模块的话,会自动使用iptables模式。 +``` + + + +**类型解读** + +service类型 + +```powershell + 对于k8s来说,内部服务的自由通信可以满足我们环境的稳定运行,但是我们作为一个平台,其核心功能还是将平台内部的服务发布到外部环境,那么在k8s环境平台上,Service主要有四种样式来满足我们的需求,种类如下: +``` + +```powershell +ClusterIP + 这是service默认的服务暴露模式,主要针对的对象是集群内部。 +NodePort + 在ClusterIP的基础上,以:方式对外提供服务,默认端口范围沿用Docker初期的随机端口范围 30000~32767,但是NodePort设定的时候,会在集群所有节点上实现相同的端口。 + +LoadBalancer + 基于NodePort之上,使用运营商负载均衡器方式实现对外提供服务底层是基于IaaS云创建一个k8s云,同时该平台也支持LBaaS产品服务。 + +ExternalName + 当前k8s集群依赖集群外部的服务,那么通过externalName将外部主机引入到k8s集群内部外部主机名以 DNS方式解析为一个 
CNAME记录给k8s集群的其他主机来使用这种Service既不会有ClusterIP,也不会有NodePort.而且依赖于内部的CoreDNS功能 +``` + +**小结** + +``` + +``` + +### 1.1.3 SVC实践 + +学习目标 + +这一节,我们从 资源实践、NodePort实践、小结 三个方面来学习。 + +**资源实践** + +资源属性 + +```powershell +apiVersion: v1 +kind: Service +metadata: + name: … + namespace: … + labels: + key1: value1 + key2: value2 +spec: + type # Service类型,默认为ClusterIP + selector # 等值类型的标签选择器,内含“与”逻辑 + ports: # Service的端口对象列表 + - name # 端口名称 + protocol # 协议,目前仅支持TCP、UDP和SCTP,默认为TCP + port # Service的端口号 + targetPort # 后端目标进程的端口号或名称,名称需由Pod规范定义 + nodePort # 节点端口号,仅适用于NodePort和LoadBalancer类型 + clusterIP # Service的集群IP,建议由系统自动分配 + externalTrafficPolicy # 外部流量策略处理方式,Local表示由当前节点处理,Cluster表示向集群范围调度 + loadBalancerIP # 外部负载均衡器使用的IP地址,仅适用于LoadBlancer + externalName # 外部服务名称,该名称将作为Service的DNS CNAME值 +``` + +手工方法 + +```powershell +创建一个应用 +[root@kubernetes-master ~]# kubectl create deployment nginx --image=kubernetes-register.superopsmsb.com/superopsmsb/nginx_web:v0.1 + +创建多种类型svc +[root@kubernetes-master ~]# kubectl expose deployment nginx --port=80 +[root@kubernetes-master ~]# kubectl expose deployment nginx --name=svc-default --port=80 +[root@kubernetes-master ~]# kubectl expose deployment nginx --name=svc-nodeport --port=80 --type=NodePort +[root@kubernetes-master ~]# kubectl expose deployment nginx --name=svc-loadblancer --port=80 --type=LoadBalancer + +查看效果 +[root@kubernetes-master1 ~]# kubectl get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +nginx ClusterIP 10.99.57.240 80/TCP 4m31s +svc-default ClusterIP 10.103.54.139 80/TCP 3m2s +svc-loadblancer LoadBalancer 10.104.39.177 80:30778/TCP 2m44s +svc-nodeport NodePort 10.100.104.140 80:32335/TCP 2m54s +``` + +资源对象方式 + +```powershell +查看对象标签 +[root@kubernetes-master1 ~]# kubectl get pod --show-labels +NAME READY STATUS RESTARTS AGE LABELS +nginx-f44f65dc-pbht6 1/1 Running 0 8h app=nginx,pod-template-hash=f44f65dc +``` + +简单实践 + +```powershell +定制资源清单文件 +[root@kubernetes-master1 ~]# mkdir /data/kubernetes/service -p ; cd /data/kubernetes/service +[root@kubernetes-master1 /data/kubernetes/service]# +[root@kubernetes-master1 /data/kubernetes/service]# vim 01_kubernetes-service_test.yml +apiVersion: v1 +kind: Service +metadata: + name: superopsmsb-nginx-service +spec: + selector: + app: nginx + ports: + - name: http + port: 80 + +应用资源清单文件 +[root@kubernetes-master1 /data/kubernetes/service]# kubectl apply -f 01_kubernetes-service_test.yml +service/superopsmsb-nginx-service created +``` + +```powershell +查看service效果 +[root@kubernetes-master1 /data/kubernetes/service]# kubectl describe svc superopsmsb-nginx-service +Name: superopsmsb-nginx-service +... +IP: 10.109.240.56 +IPs: 10.109.240.56 +Port: http 8000/TCP +TargetPort: 8000/TCP +Endpoints: 10.244.3.63:8000 +... 
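# 说明: Endpoints 中的地址就是标签选择器匹配到的后端 Pod IP,由 Endpoint Controller 自动维护,Pod 重建后会自动更新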
+ +访问service +[root@kubernetes-master1 /data/kubernetes/service]# curl -s 10.101.25.116 -I | head -n1 +HTTP/1.1 200 OK +``` + + + +**NodePort实践** + +属性解读 + +```powershell +NodePort会在所有的节点主机上,暴露一个指定或者随机的端口,供外部的服务能够正常的访问pod内部的资源。 +``` + +简单实践 + +```powershell +[root@kubernetes-master1 /data/kubernetes/service]# vim 02_kubernetes-service_nodePort.yml +apiVersion: v1 +kind: Service +metadata: + name: superopsmsb-nginx-nodeport +spec: + selector: + app: nginx + ports: + - name: http + port: 80 + nodePort: 30080 + +应用资源清单文件 +[root@kubernetes-master1 /data/kubernetes/service]# kubectl apply -f 02_kubernetes-service_nodePort.yml +service/superopsmsb-nginx-nodeport created + +``` + +```powershell +检查效果 +[root@kubernetes-master1 /data/kubernetes/service]# kubectl describe svc superopsmsb-nginx-nodeport +Name: superopsmsb-nginx-nodeport +... +Type: NodePort +IP Family Policy: SingleStack +IP Families: IPv4 +IP: 10.102.1.177 +IPs: 10.102.1.177 +Port: http 80/TCP +TargetPort: 80/TCP +NodePort: http 30080/TCP +... + +访问效果 +[root@kubernetes-master1 /data/kubernetes/service]# curl 10.0.0.12:30080 Hello Nginx, nginx-6944855df5-8zjdn-1.23.0 +``` + + + +**小结** + +``` + +``` + + + +### 1.1.4 IPVS实践 + +学习目标 + +这一节,我们从 基础知识、简单实践、小结 三个方面来学习。 + +**基础知识** + +简介 + +![image-20220721093742953](../../img/kubernetes/kubernetes_flannel/image-20220721093742953.png) + +```powershell +关键点: + ipvs会在每个节点上创建一个名为kube-ipvs0的虚拟接口,并将集群所有Service对象的ClusterIP和ExternalIP都配置在该接口; + - 所以每增加一个ClusterIP 或者 EternalIP,就相当于为 kube-ipvs0 关联了一个地址罢了。 + kube-proxy为每个service生成一个虚拟服务器( IPVS Virtual Server)的定义。 +``` + +```powershell +基本流程: + 所以当前节点接收到外部流量后,如果该数据包是交给当前节点上的clusterIP,则会直接将数据包交给kube-ipvs0,而这个接口是内核虚拟出来的,而kube-proxy定义的VS直接关联到kube-ipvs0上。 + 如果是本地节点pod发送的请求,基本上属于本地通信,效率是非常高的。 + 默认情况下,这里的ipvs使用的是nat转发模型,而且支持更多的后端调度算法。仅仅在涉及到源地址转换的场景中,会涉及到极少量的iptables规则(应该不会超过20条) +``` + +```powershell +前提:当前操作系统需要提前加载ipvs模块 + yum install ipvsadm -y +``` + +kube-proxy + +```powershell + 对于k8s来说,默认情况下,支持的规则是 iptables,我们可以通过多种方式对我们的代理模式进行更改,因为这些规则都是基于kube-proxy来定制的,所以,我们如果要更改代理模式的话,就需要调整kube-proxy的属性。 +``` + +```powershell +在k8s集群中,关于kube-proxy的所有属性信息,我们可以通过一个 configmap 的资源对象来了解一下 + +[root@kubernetes-master1 ~]# kubectl describe configmap kube-proxy -n kube-system +Name: kube-proxy +Namespace: kube-system +Labels: app=kube-proxy +... +iptables: + masqueradeAll: false # 这个属性打开的话,会对所有的请求都进行源地址转换 + ... +ipvs: + excludeCIDRs: null + minSyncPeriod: 0s + scheduler: "" # 调度算法,默认是randomrobin + ... +kind: KubeProxyConfiguration +metricsBindAddress: "" +mode: "" # 默认没有指定,就是使用 iptables 规则 +... +``` + +查看默认模式 + +```powershell +通过kube-proxy-b8dpc的pod日志查看模式 +[root@kubernetes-master1 /data/kubernetes/service]# kubectl logs kube-proxy-b8dpc -n kube-system +... +I0719 08:39:44.410078 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode="" +I0719 08:39:44.462438 1 server_others.go:206] "Using iptables Proxier" +... +``` + + + +**简单实践** + +准备 + +```powershell +清理所有svc +[root@kubernetes-master1 /data/kubernetes/service]# for i in $(kubectl get svc | egrep -v 'NAME|kubernetes' | awk '{print $1}') +> do +> kubectl delete svc $i +> done +``` + +修改kube-proxy模式 + +```powershell +我们在测试环境中,临时修改一下configmap中proxy的基本属性 - 临时环境推荐 +[root@kubernetes-master1 /data/kubernetes/service]# kubectl edit configmap kube-proxy -n kube-system + ... + mode: "ipvs" + ... 
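# 补充示例: 保存退出后可先确认 mode 字段已经写入(预期输出包含 mode: "ipvs")
[root@kubernetes-master1 /data/kubernetes/service]# kubectl get configmap kube-proxy -n kube-system -o yaml | grep 'mode:'
# 注意: configmap 的修改只有在 kube-proxy Pod 重建之后才会生效,因此还需要执行下面的删除重建操作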
+ +重启所有的kube-proxy pod对象 +[root@kubernetes-master1 /data/kubernetes/service]# kubectl delete pod -n kube-system -l k8s-app=kube-proxy +``` + +```powershell +通过kube-proxy的pod日志查看模式 +[root@kubernetes-master1 /data/kubernetes/service]# kubectl logs kube-proxy-fb9pz -n kube-system +... +I0721 10:31:40.408116 1 server_others.go:269] "Using ipvs Proxier" +I0721 10:31:40.408155 1 server_others.go:271] "Creating dualStackProxier for ipvs" +... +``` + +测试效果 + +```powershell +安装参考命令 +[root@kubernetes-master1 /data/kubernetes/service]# yum install ipvsadm -y + +查看规则效果 +[root@kubernetes-master1 /data/kubernetes/service]# ipvsadm -Ln +IP Virtual Server version 1.2.1 (size=4096) +Prot LocalAddress:Port Scheduler Flags + -> RemoteAddress:Port Forward Weight ActiveConn InActConn +TCP 172.17.0.1:30443 rr + -> 10.244.3.2:8443 Masq 1 0 0 + ... +``` + +```powershell +创建一个service +[root@kubernetes-master1 /data/kubernetes/service]# kubectl apply -f 02_kubernetes-service_nodePort.yml +service/superopsmsb-nginx-nodeport created + +查看svc的ip +[root@kubernetes-master1 /data/kubernetes/service]# kubectl get svc superopsmsb-nginx-nodeport -o wide +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR +superopsmsb-nginx-nodeport NodePort 10.106.138.242 80:30080/TCP 23s app=nginx + +查看ipvsadm规则 +[root@kubernetes-master1 /data/kubernetes/service]# ipvsadm -Ln | grep -A1 10.106.138.242 +TCP 10.106.138.242:80 rr + -> 10.244.3.64:80 Masq 1 0 0 + +查看防火墙规则 +[root@kubernetes-master1 /data/kubernetes/service]# iptables -t nat -S KUBE-NODE-PORT +-N KUBE-NODE-PORT +-A KUBE-NODE-PORT -p tcp -m comment --comment "Kubernetes nodeport TCP port for masquerade purpose" -m set --match-set KUBE-NODE-PORT-TCP dst -j KUBE-MARK-MASQ +结果显示: + 没有生成对应的防火墙规则 +``` + + + +**小结** + +``` + +``` + + + +## 1.2 其他资源 + +### 1.2.1 域名服务 + +学习目标 + +这一节,我们从 场景需求、域名测试、小结 三个方面来学习。 + +**场景需求** + +简介 + +```powershell + 在传统的系统部署中,服务运行在一个固定的已知的 IP 和端口上,如果一个服务需要调用另外一个服务,可以通过地址直接调用,但是,在虚拟化或容器话的环境中,以我们的k8s集群为例,如果存在个位数个service我们可以很快的找到对应的clusterip地址,进而找到指定的资源,虽然ip地址不容易记住,因为service在创建的时候会为每个clusterip分配一个名称,我们同样可以根据这个名称找到对应的服务。但是,如果我们的集群中有1000个Service,我们如何找到指定的service呢? +``` + +```powershell + 虽然我们可以借助于传统的DNS机制来实现,但是在k8s集群中,服务实例的启动和销毁是很频繁的,服务地址在动态的变化,所以传统的方式配置DNS解析记录就不太好实现了。所以针对于这种场景,我们如果需要将请求发送到动态变化的服务实例上,可以通过一下两个步骤来实现: + 服务注册 — 创建服务实例后,主动将当前服务实例的信息,存储到一个集中式的服务管理中心。 + 服务发现 — 当A服务需要找未知的B服务时,先去服务管理中心查找B服务地址,然后根据该地址找到B服务 +``` + +DNS方案 + +```powershell + 专用于kubernetes集群中的服务注册和发现的解决方案就是KubeDNS。kubeDNS自从k8s诞生以来,其方案的具体实现样式前后经历了三代,分别是 SkyDNS、KubeDNS、CoreDNS(目前默认的)。 +``` + +![image-20220721185041526](../../img/kubernetes/kubernetes_flannel/image-20220721185041526.png) + +域名解析记录 + +```powershell + Kubelet会为创建的每一个容器于/etc/resolv.conf配置文件中生成DNS查询客户端依赖到的必要配置,相关的配置信息源自于kubelet的配置参数,容器的DNS服务器由clusterDNS参数的值设定,它的取值为kube-system名称空间中的Service对象kube-dns的ClusterIP,默认为10.96.0.10. 
+ DNS搜索域的值由clusterDomain参数的值设定,若部署Kubernetes集群时未特别指定,其值将为cluster.local、svc.cluster.local和NAMESPACENAME.svc.cluster.local +``` + +```powershell +kubeadm 1.23.8 环境初始化配置文件中与dns相关的检索信息 +[root@kubernetes-master1 ~]# grep -A1 networking /data/kubernetes/cluster_init/kubeadm_init_1.23.8.yml +networking: + dnsDomain: cluster.local +``` + +资源对象的dns记录 + +```powershell + 对于kubernetes的内部资源对象来说,为了更好的绕过变化频率更高的ip地址的限制,它可以在内部以dns记录的方式进行对象发现,dns记录具有标准的名称格式: + 资源对象名.命名空间名.svc.cluster.local +``` + +**域名测试** + +创建一个svc记录 + +```pow +[root@kubernetes-master1 /data/kubernetes/service]# kubectl apply -f 01_kubernetes-service_test.yml +service/superopsmsb-nginx-service created +[root@kubernetes-master1 /data/kubernetes/service]# kubectl get svc superopsmsb-nginx-service -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR +superopsmsb-nginx-service ClusterIP 10.97.135.126 80/TCP 23s app=nginx +``` + +资源对象的名称记录 + +```powershell +查看本地的pod效果 +[root@kubernetes-master1 ~]# kubectl get pod +NAME READY STATUS RESTARTS AGE +nginx-6944855df5-8zjdn 1/1 Running 0 38m + +查看pod内部的resolv.conf文件 +[root@kubernetes-master1 ~]# kubectl exec -it nginx-6944855df5-8zjdn -- cat /etc/resolv.conf +nameserver 10.96.0.10 +search default.svc.cluster.local svc.cluster.local cluster.local localhost +options ndots:5 +可以看到: + 资源对象的查看dns的后缀主要有四种: + default.svc.cluster.local + svc.cluster.local + cluster.local + localhost +``` + +```powershell +查看内部的资源对象完整域名 +[root@kubernetes-master1 /data/kubernetes/service]# kubectl exec -it nginx-6944855df5-8zjdn -- /bin/bash +root@nginx-6944855df5-8zjdn:/# curl +curl: try 'curl --help' or 'curl --manual' for more information +root@nginx-6944855df5-8zjdn:/# curl superopsmsb-nginx-service +Hello Nginx, nginx-6944855df5-8zjdn-1.23.0 +root@nginx-6944855df5-8zjdn:/# curl superopsmsb-nginx-service.default.svc.cluster.local +Hello Nginx, nginx-6944855df5-8zjdn-1.23.0 +root@nginx-6944855df5-8zjdn:/# curl superopsmsb-nginx-service.default.svc.cluster.local. +Hello Nginx, nginx-6944855df5-8zjdn-1.23.0 +``` + +内部dns测试效果 + +```powershell +安装dns测试工具 +root@nginx-6944855df5-8zjdn:/# apt update +root@nginx-6944855df5-8zjdn:/# apt install dnsutils -y +``` + +```powershell +资源对象的查看效果 +root@nginx-6944855df5-8zjdn:/# nslookup superopsmsb-nginx-service +Server: 10.96.0.10 +Address: 10.96.0.10#53 + +Name: superopsmsb-nginx-service.default.svc.cluster.local +Address: 10.97.135.126 + +root@nginx-6944855df5-8zjdn:/# nslookup 10.97.135.126 +126.135.97.10.in-addr.arpa name = superopsmsb-nginx-service.default.svc.cluster.local. + +root@nginx-6944855df5-8zjdn:/# nslookup kubernetes +Server: 10.96.0.10 +Address: 10.96.0.10#53 + +Name: kubernetes.default.svc.cluster.local +Address: 10.96.0.1 +``` + +```powershell +查看跨命名空间的资源对象 +[root@kubernetes-master1 ~]# kubectl get svc -n kube-system +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP,9153/TCP 2d2h + +回到pod终端查看效果 +root@nginx-6944855df5-8zjdn:/# nslookup kube-dns.kube-system.svc.cluster.local +Server: 10.96.0.10 +Address: 10.96.0.10#53 + +Name: kube-dns.kube-system.svc.cluster.local +Address: 10.96.0.10 + +root@nginx-6944855df5-8zjdn:/# nslookup 10.96.0.10 +10.0.96.10.in-addr.arpa name = kube-dns.kube-system.svc.cluster.local. + +root@nginx-6944855df5-8zjdn:/# nslookup kube-dns +Server: 10.96.0.10 +Address: 10.96.0.10#53 + +** server can't find kube-dns: NXDOMAIN +结果显示: + 对于跨命名空间的资源对象必须使用完整的名称格式 +``` + +pod资源对象解析 + +```powershell +查看pod资源对象 +[root@kubernetes-master1 ~]# kubectl get pod -o wide +NAME READY STATUS RESTARTS AGE IP ... 
+nginx-6944855df5-8zjdn 1/1 Running 0 56m 10.244.3.64 ... + +构造资源对象名,pod的ip名称转换 +root@nginx-6944855df5-8zjdn:/# nslookup 10-244-3-64.superopsmsb-nginx-service.default.svc.cluster.local. Server: 10.96.0.10 +Address: 10.96.0.10#53 + +Name: 10-244-3-64.superopsmsb-nginx-service.default.svc.cluster.local +Address: 10.244.3.64 +``` + + + +**小结** + +``` + +``` + + + +### 1.2.2 CoreDNS + +学习目标 + +这一节,我们从 基础知识、简单实践、小结 三个方面来学习。 + +**基础知识** + +简介 + +```powershell + coredns是一个用go语言编写的开源的DNS服务,coredns是首批加入CNCF组织的云原生开源项目,并且作为已经在CNCF毕业的项目,coredns还是目前kubernetes中默认的dns服务。同时,由于coredns可以集成插件,它还能够实现服务发现的功能。 + + coredns和其他的诸如bind、knot、powerdns、unbound等DNS服务不同的是:coredns非常的灵活,并且几乎把所有的核心功能实现都外包给了插件。如果你想要在coredns中加入Prometheus的监控支持,那么只需要安装对应的prometheus插件并且启用即可。 +``` + +配置解析 + +```powershell +coredns的配置依然是存放在 configmap中 +[root@kubernetes-master1 ~]# kubectl get cm coredns -n kube-system +NAME DATA AGE +coredns 1 2d2h +``` + +```powershell +查看配置详情 +[root@kubernetes-master1 ~]# kubectl describe cm coredns -n kube-system +Name: coredns +Namespace: kube-system +Labels: +Annotations: + +Data +==== +Corefile: +---- +.:53 { + errors + health { # 健康检测 + lameduck 5s + } + ready + kubernetes cluster.local in-addr.arpa ip6.arpa { # 解析配置 + pods insecure + fallthrough in-addr.arpa ip6.arpa + ttl 30 + } + prometheus :9153 + forward . /etc/resolv.conf { # 转发配置 + max_concurrent 1000 + } + cache 30 + loop + reload # 自动加载 + loadbalance +} + +BinaryData +==== + +Events: + +``` + +```powershell +其他属性和示例: + except domain 排除的域名 + +添加dns解析 + hosts { + 192.168.8.100 www.example.com + fallthrough # 在CoreDNS里面表示如果自己无法处理,则交由下个插件处理。 + } + +``` + +**简单实践** + +修改实践 + +```powershell +修改配置文件 +[root@kubernetes-master1 ~]# kubectl edit cm coredns -n kube-system +... + forward . /etc/resolv.conf { + max_concurrent 1000 + except www.baidu.com. + } + hosts { + 10.0.0.20 harbor.superopsmsb.com + fallthrough + } + ... + 注意: + 多个dns地址间用空格隔开 + 排除的域名最好在末尾添加 “.”,对于之前的旧版本来说可能会出现无法保存的现象 +``` + +测试效果 + +```powershell +同步dns的配置信息 +[root@kubernetes-master1 ~]# kubectl delete pod -l k8s-app=kube-dns -n kube-system +pod "coredns-5d555c984-fmrht" deleted +pod "coredns-5d555c984-scvjh" deleted + +删除旧pod,使用新pod测试 (可以忽略) +[root@kubernetes-master1 /data/kubernetes/service]# kubectl get pod +NAME READY STATUS RESTARTS AGE +nginx-6944855df5-8zjdn 1/1 Running 0 70m +[root@kubernetes-master1 /data/kubernetes/service]# kubectl delete pod nginx-6944855df5-8zjdn +pod "nginx-6944855df5-8zjdn" deleted +``` + +```powershell +pod中测试效果 +[root@kubernetes-master1 /data/kubernetes/service]# kubectl exec -it nginx-6944855df5-9s4bd -- /bin/bash +root@nginx-6944855df5-9s4bd:/# apt update ; apt install dnsutils -y + +测试效果 +root@nginx-6944855df5-9s4bd:/# nslookup www.baidu.com +Server: 10.96.0.10 +Address: 10.96.0.10#53 + +** server can't find www.baidu.com: SERVFAIL + +root@nginx-6944855df5-9s4bd:/# nslookup harbor.superopsmsb.com +Server: 10.96.0.10 +Address: 10.96.0.10#53 + +Name: harbor.superopsmsb.com +Address: 10.0.0.20 +``` + +清理环境 + +```powershell +将之前的配置清理掉 +``` + + + +``` + +``` + +### 1.2.3 无头服务 + +学习目标 + +这一节,我们从 基础知识、简单实践、小结 三个方面来学习。 + +**基础知识** + +简介 + +```powershell + 在kubernetes中,有一种特殊的无头service,它本身是Service,但是没有ClusterIP,这种svc在一些有状态的场景中非常重要。 +``` + +```powershell + 无头服务场景下,k8s会将一个集群内部的所有成员提供唯一的DNS域名来作为每个成员的网络标识,集群内部成员之间使用域名通信,这个时候,就特别依赖service的selector属性配置了。 + 无头服务管理的域名是如下的格式:$(service_name).$(k8s_namespace).svc.cluster.local。 +``` + +pod资源对象的解析记录 + +```powershell +dns解析记录 + A记录 + ---...svc. 
A PodIP +关键点: + svc_name的解析结果从常规Service的ClusterIP,转为各个Pod的IP地址; + 反解,则从常规的clusterip解析为service name,转为从podip到hostname, ---...svc. + 指的是a-b-c-d格式,而非Pod自己的主机名; +``` + +**简单实践** + +准备工作 + +```powershell +扩展pod对象数量 +[root@kubernetes-master1 ~]# kubectl scale deployment nginx --replicas=3 +deployment.apps/nginx scaled +[root@kubernetes-master1 ~]# kubectl get pod -o wide +NAME READY STATUS RESTARTS AGE IP ... +nginx-6944855df5-9m44b 1/1 Running 0 5s 10.244.2.4 ... +nginx-6944855df5-9s4bd 1/1 Running 0 14m 10.244.3.2 kubernetes-node3 +nginx-6944855df5-mswl4 1/1 Running 0 5s 10.244.1.3 kubernetes-node2 +``` + +定制无头服务 + +```powershell +手工定制无头服务 +[root@kubernetes-master1 ~]# kubectl create service clusterip service-headless --clusterip="None" +service/service-headless-cmd created + +查看无头服务 +[root@kubernetes-master1 ~]# kubectl get svc service-headless +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service-headless ClusterIP None 6s +``` + +```powershell +资源清单文件定制无头服务 +[root@kubernetes-master1 /data/kubernetes/service]# cat 03_kubernetes-service_headless.yml +apiVersion: v1 +kind: Service +metadata: + name: superopsmsb-nginx-headless +spec: + selector: + app: nginx + ports: + - name: http + port: 80 + clusterIP: "None" + +应用资源对象 +[root@kubernetes-master1 /data/kubernetes/service]# kubectl apply -f 03_kubernetes-service_headless.yml +service/superopsmsb-nginx-headless created +[root@kubernetes-master1 /data/kubernetes/service]# kubectl get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +kubernetes ClusterIP 10.96.0.1 443/TCP 6m33s +service-headless ClusterIP None 61s +superopsmsb-nginx-headless ClusterIP None 80/TCP 7s +``` + +测试效果 + +```powershell +进入测试容器 +[root@kubernetes-master1 /data/kubernetes/service]# kubectl exec -it nginx-fd669dcb-h5d9d -- /bin/bash +root@nginx-fd669dcb-h5d9d:/# nslookup superopsmsb-nginx-headless +Server: 10.96.0.10 +Address: 10.96.0.10#53 + +Name: superopsmsb-nginx-headless.default.svc.cluster.local +Address: 10.244.2.4 +Name: superopsmsb-nginx-headless.default.svc.cluster.local +Address: 10.244.1.3 +Name: superopsmsb-nginx-headless.default.svc.cluster.local +Address: 10.244.3.2 +结果显式: + 由于我们没有对域名做定向解析,那么找请求的时候,就像无头苍蝇似的,到处乱串 +``` + +**小结** + +``` + +``` + + + +## 1.3 flannel方案 + +### 1.3.1 网络方案 + +学习目标 + +这一节,我们从 存储原理、方案解读、小结 三个方面来学习。 + +**存储原理** + +容器实现网络访问样式 + +![1633650192071](../../img/kubernetes/kubernetes_flannel/1633650192071.png) + +```powershell +1 虚拟网桥: + brdige,用纯软件的方式实现一个虚拟网络,用一个虚拟网卡接入到我们虚拟网桥上去。这样就能保证每一个容器和每一个pod都能有一个专用的网络接口,从而实现每一主机组件有网络接口。 + +2 多路复用: + MacVLAN,借助于linux内核级的VLAN模块,在同一个物理网卡上配置多个 MAC 地址,每个 MAC 给一个Pod使用,然后借助物理网卡中的MacVLAN机制进行跨节点之间进行通信了。 + +3 硬件交换: + 很多网卡都已经支持"单根IOV的虚拟化"了,借助于单根IOV(SR-IOV)的方式,直接在物理主机虚拟出多个接口来,通信性能很高,然后每个虚拟网卡能够分配给容器使用。 +``` + +容器网络方案 + +```powershell +任何一种能够让容器自由通信的网络解决方案,必须包含三个功能: + 1 构建一个网络 + 2 将容器接入到这个网络中 + 3 实时维护所有节点上的路由信息,实现容器的通信 +``` + +CNI方案 + +```powershell +1 所有节点的内核都启用了VXLAN的功能模块 + 每个节点都启动一个cni网卡,并维护所有节点所在的网段的路由列表 +2 node上的pod发出请求到达cni0 + 根据内核的路由列表判断对端网段的节点位置 + 经由 隧道设备 对数据包进行封装标识,对端节点的隧道设备解封标识数据包, + 当前数据包一看当前节点的路由表发现有自身的ip地址,这直接交给本地的pod +3 多个节点上的路由表信息维护,就是各种网络解决方案的工作位置 + +注意: + 我们可以部署多个网络解决方案,但是对于CNI来说,只会有一个生效。如果网络信息过多会导致冲突 +``` + +![1633651873150](../../img/kubernetes/kubernetes_flannel/1633651873150.png) + +**方案解读** + +CNI插件 + +```powershell + 根据我们刚才对pod通信的回顾,多节点内的pod通信,k8s是通过CNI接口来实现网络通信的。CNI基本思想:创建容器时,先创建好网络名称空间,然后调用CNI插件配置这个网络,而后启动容器内的进程 + +CNI插件类别:main、meta、ipam + main,实现某种特定的网络功能,如loopback、bridge、macvlan、ipvlan + meta,自身不提供任何网络实现,而是调用其他插件,如flannel + ipam,仅用于分配IP地址,不提供网络实现 +``` + +常见方案 + +```powershell +Flannel + 
提供叠加网络,基于linux TUN/TAP,使用UDP封装IP报文来创建叠加网络,并借助etcd维护网络分配情况 + +Calico + 基于BGP的三层网络,支持网络策略实现网络的访问控制。在每台机器上运行一个vRouter,利用内核转发数据包,并借助iptables实现防火墙等功能 + +kube-router + K8s网络一体化解决方案,可取代kube-proxy实现基于ipvs的Service,支持网络策略、完美兼容BGP的高级特性 + +其他网络解决方案: + Canal: 由Flannel和Calico联合发布的一个统一网络插件,支持网络策略 + Weave Net: 多主机容器的网络方案,支持去中心化的控制平面 + Contiv:思科方案,直接提供多租户网络,支持L2(VLAN)、L3(BGP)、Overlay(VXLAN) + +更多的解决方案,大家可以参考: + https://kubernetes.io/docs/concepts/cluster-administration/addons/ +``` + +**小结** + +``` + +``` + + + +### 1.3.2 flannel + +学习目标 + +这一节,我们从 信息查看、原理解读、小结 三个方面来学习。 + +**信息查看** + +查看网络配置 + +```powershell +[root@kubernetes-master1 ~]# cat /etc/kubernetes/manifests/kube-controller-manager.yaml +apiVersion: v1 +... +spec: + containers: + - command: + - kube-controller-manager + - --allocate-node-cidrs=true + ... + - --cluster-cidr=10.244.0.0/16 + 配置解析: + allocate-node-cidrs属性表示,每增加一个新的节点,都从cluster-cidr子网中切分一个新的子网网段分配给对应的节点上。 + 这些相关的网络状态属性信息,会经过 kube-apiserver 存储到etcd中。 +``` + +CNI配置 + +```powershell + 使用CNI插件编排网络,Pod初始化或删除时,kubelet会调用默认CNI插件,创建虚拟设备接口附加到相关的底层网络,设置IP、路由并映射到Pod对象网络名称空间. + kubelet在/etc/cni/net.d目录查找cni json配置文件,基于type属性到/opt/cni/bin中查找相关插件的二进制文件,然后调用相应插件设置网络 +``` + +网段配置 + +```powershell +查看网段配置 +[root@kubernetes-master1 ~]# cat /run/flannel/subnet.env +FLANNEL_NETWORK=10.244.0.0/16 +FLANNEL_SUBNET=10.244.0.1/24 +FLANNEL_MTU=1450 +FLANNEL_IPMASQ=true +``` + +网段分配原理 + +```powershell +分配原理解读 + 集群的 kube-controller-manager 负责控制每个节点的网段分配 + 集群的 etcd 负责存储所有节点的网络配置存储 + 集群的 flannel 负责各个节点的路由表定制及其数据包的拆分和封装 + -- 所以flannel各个节点是平等的,仅负责数据平面的操作。网络功能相对来说比较简单。 + 另外一种插件 calico相对于flannel来说,多了一个控制节点,来管控所有的网络节点的服务进程。 +``` + +flannel网络 + +```powershell +pod效果 +[root@kubernetes-master1 ~]# kubectl get pod -n kube-flannel -o wide +NAME READY STATUS ... IP NODE ... +kube-flannel-ds-b6hxm 1/1 Running ... 10.0.0.15 kubernetes-node1 ... +kube-flannel-ds-bx7rq 1/1 Running ... 10.0.0.12 kubernetes-master1 ... +kube-flannel-ds-hqwrk 1/1 Running ... 10.0.0.13 kubernetes-master2 ... +kube-flannel-ds-npcw6 1/1 Running ... 10.0.0.17 kubernetes-node3 ... +kube-flannel-ds-sx427 1/1 Running ... 10.0.0.14 kubernetes-master3 ... +kube-flannel-ds-v5f4p 1/1 Running ... 10.0.0.16 kubernetes-node2 ... 
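结果显示:
    kube-flannel-ds 是一个 DaemonSet,集群 6 个节点上各运行一个 flannel pod,分别负责维护本节点的路由信息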
+``` + +```powershell +网卡效果 +[root@kubernetes-master1 ~]# for i in {12..17};do ssh root@10.0.0.$i ifconfig | grep -A1 flannel ;done +flannel.1: flags=4163 mtu 1450 + inet 10.244.0.0 netmask 255.255.255.255 broadcast 0.0.0.0 +flannel.1: flags=4163 mtu 1450 + inet 10.244.4.0 netmask 255.255.255.255 broadcast 0.0.0.0 +flannel.1: flags=4163 mtu 1450 + inet 10.244.5.0 netmask 255.255.255.255 broadcast 0.0.0.0 +flannel.1: flags=4163 mtu 1450 + inet 10.244.1.0 netmask 255.255.255.255 broadcast 0.0.0.0 +flannel.1: flags=4163 mtu 1450 + inet 10.244.2.0 netmask 255.255.255.255 broadcast 0.0.0.0 +flannel.1: flags=4163 mtu 1450 + inet 10.244.3.0 netmask 255.255.255.255 broadcast 0.0.0.0 +注意: + flannel.1 后面的.1 就是 vxlan的网络标识。便于隧道正常通信。 +``` + +```powershell +路由效果 +[root@kubernetes-master1 ~]# ip route list | grep flannel +10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink +10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink +10.244.3.0/24 via 10.244.3.0 dev flannel.1 onlink +10.244.4.0/24 via 10.244.4.0 dev flannel.1 onlink +10.244.5.0/24 via 10.244.5.0 dev flannel.1 onlink +``` + + + +**原理解读** + +flannel模型 + +```powershell +vxlan模型 + pod与Pod经由隧道封装后通信,各节点彼此间能通信就行,不要求在同一个二层网络 + 这是默认采用的方式 +host-gw模型 + Pod与Pod不经隧道封装而直接通信,要求各节点位于同一个二层网络 + +vxlan directrouting模型 + 它是vxlan和host-gw自由组合的一种模型 + 位于同一个二层网络上的、但不同节点上的Pod间通信,无须隧道封装;但非同一个二层网络上的节点上的Pod间通信,仍须隧道封装 +``` + +![image-20220722073310706](../../img/kubernetes/kubernetes_flannel/image-20220722073310706.png) + +vxlan原理 + +```powershell +1 节点上的pod通过虚拟网卡对,连接到cni0的虚拟网络交换机上 + 当有外部网络通信的时候,借助于 flannel.1网卡向外发出数据包 +2 经过 flannel.1 网卡的数据包,借助于flanneld实现数据包的封装和解封 + 最后送给宿主机的物理接口,发送出去 +3 对于pod来说,它以为是通过 flannel.x -> vxlan tunnel -> flannel.x 实现数据通信 + 因为它们的隧道标识都是".1",所以认为是一个vxlan,直接路由过去了,没有意识到底层的通信机制。 +注意: + 由于这种方式,是对数据报文进行了多次的封装,降低了当个数据包的有效载荷。所以效率降低了 +``` + +host-gw原理 + +```powershell +1 节点上的pod通过虚拟网卡对,连接到cni0的虚拟网络交换机上。 +2 pod向外通信的时候,到达CNI0的时候,不再直接交给flannel.1由flanneld来进行打包处理了。 +3 cni0直接借助于内核中的路由表,通过宿主机的网卡交给同网段的其他主机节点 +4 对端节点查看内核中的路由表,发现目标就是当前节点,所以交给对应的cni0,进而找到对应的pod。 +``` + +**小结** + +``` + +``` + + + +### 1.3.3 主机网络 + +学习目标 + +这一节,我们从 配置解读、简单实践、小结 三个方面来学习。 + +**配置解读** + +配置文件 + +```powershell +我们在部署flannel的时候,有一个配置文件,在这个配置文件中的configmap中就定义了虚拟网络的接入功能。 +[root@kubernetes-master1 /data/kubernetes/flannel]# cat kube-flannel.yml +kind: ConfigMap +... 
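# data 中包含两段配置:cni-conf.json 是交给 CNI 使用的插件链配置,net-conf.json 是 flanneld 使用的网段及后端类型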
+data: + cni-conf.json: | cni插件的功能配置 + { + "name": "cbr0", + "cniVersion": "0.3.1", + "plugins": [ + { + "type": "flannel", 基于flannel实现网络通信 + "delegate": { + "hairpinMode": true, + "isDefaultGateway": true + } + }, + { + "type": "portmap", 来实现端口映射的功能 + "capabilities": { + "portMappings": true + } + } + ] + } + net-conf.json: | flannel的网址分配 + { + "Network": "10.244.0.0/16", + "Backend": { + "Type": "vxlan" 来新节点的时候,基于vxlan从network中获取子网 + } + } +``` + +路由表信息 + +```powershell +[root@kubernetes-master1 ~]# ip route list | grep flannel +10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink +10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink +10.244.3.0/24 via 10.244.3.0 dev flannel.1 onlink +10.244.4.0/24 via 10.244.4.0 dev flannel.1 onlink +10.244.5.0/24 via 10.244.5.0 dev flannel.1 onlink + +结果显示: + 如果数据包的目标是当前节点,这直接通过cni来进行处理 + 如果数据包的目标是其他节点,这根据路由配置,交给对应节点上的flannel.1网卡来进行处理 + 然后交给配套的flanneld对数据包进行封装 +``` + +数据包转发解析 + +```powershell +发现目标地址 +[root@kubernetes-master1 /data/kubernetes/flannel]# ip neigh | grep flannel +10.244.2.0 dev flannel.1 lladdr ca:d2:1b:08:40:c2 PERMANENT +10.244.5.0 dev flannel.1 lladdr 92:d8:04:76:cf:af PERMANENT +10.244.1.0 dev flannel.1 lladdr ae:08:68:7d:fa:31 PERMANENT +10.244.4.0 dev flannel.1 lladdr c2:ef:c2:a6:aa:04 PERMANENT +10.244.3.0 dev flannel.1 lladdr ee:dd:92:60:41:8a PERMANENT + +转发给指定主机 +[root@kubernetes-master1 /data/kubernetes/flannel]# bridge fdb show flannel.1 | grep flannel.1 +ee:dd:92:60:41:8a dev flannel.1 dst 10.0.0.17 self permanent +ae:08:68:7d:fa:31 dev flannel.1 dst 10.0.0.15 self permanent +c2:ef:c2:a6:aa:04 dev flannel.1 dst 10.0.0.13 self permanent +ca:d2:1b:08:40:c2 dev flannel.1 dst 10.0.0.16 self permanent +92:d8:04:76:cf:af dev flannel.1 dst 10.0.0.14 self permanent +``` + +测试效果 + +```powershell +准备工作 +[root@kubernetes-master1 ~]# kubectl scale deployment nginx --replicas=1 +deployment.apps/nginx scaled + +开启测试容器 +[root@kubernetes-master1 ~]# kubectl run flannel-test --image="kubernetes-register.superopsmsb.com/superopsmsb/busybox:1.28" -it --rm --command -- /bin/sh + +pod现状 +[root@kubernetes-master1 ~]# kubectl get pod -o wide +NAME ... IP NODE ... +flannel-test ... 10.244.1.4 kubernetes-node1 ... +nginx-fd669dcb-h5d9d ... 10.244.3.2 kubernetes-node3 ... +``` + +```powershell +测试容器发送测试数据 +[root@kubernetes-master1 ~]# kubectl run flannel-test --image="kubernetes-register.superopsmsb.com/superopsmsb/busybox:1.28" -it --rm --command -- /bin/sh +If you don't see a command prompt, try pressing enter. +/ # ping -c 1 10.244.3.2 +PING 10.244.3.2 (10.244.3.2): 56 data bytes +64 bytes from 10.244.3.2: seq=0 ttl=62 time=1.479 ms +... + +Flannel默认使用8285端口作为UDP封装报文的端口,VxLan使用8472端口,我们在node3上抓取node1的数据包 +[root@kubernetes-node3 ~]# tcpdump -i eth0 -en host 10.0.0.15 and udp port 8472 +... +07:46:40.104526 00:50:56:37:14:42 > 00:50:56:37:ed:73, ethertype IPv4 (0x0800), length 148: 10.0.0.17.53162 > 10.0.0.15.otv: OTV, flags [I] (0x08), overlay 0, instance 1 +ee:dd:92:60:41:8a > ae:08:68:7d:fa:31, ethertype IPv4 (0x0800), length 98: 10.244.3.2 > 10.244.1.4: ICMP echo reply, id 1792, seq 0, length 64 + +结果显示: + 这里面每一条数据,都包括了两层ip数据包 +``` + +host-gw实践 + +```powershell +修改flannel的配置文件,将其转换为 host-gw 模型。 +[root@kubernetes-master1 ~]# kubectl get cm -n kube-flannel +NAME DATA AGE +kube-flannel-cfg 2 82m + +修改资源配置文件 +[root@kubernetes-master1 ~]# kubectl edit cm kube-flannel-cfg -n kube-flannel +... 
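只需将 net-conf.json 中 Backend 的 Type 由 vxlan 改为 host-gw,Network 网段保持不变: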
+ net-conf.json: | + { + "Network": "10.244.0.0/16", + "Backend": { + "Type": "host-gw" + } + } + +重启pod +[root@kubernetes-master1 ~]# kubectl delete pod -n kube-flannel -l app +[root@kubernetes-master1 ~]# kubectl get pod -n kube-flannel +NAME READY STATUS RESTARTS AGE +kube-flannel-ds-5dw2r 1/1 Running 0 21s +kube-flannel-ds-d9bh2 1/1 Running 0 21s +kube-flannel-ds-jvn2f 1/1 Running 0 21s +kube-flannel-ds-kcrb4 1/1 Running 0 22s +kube-flannel-ds-ptlx8 1/1 Running 0 21s +kube-flannel-ds-wxqd7 1/1 Running 0 22s +``` + +检查信息 + +```powershell +查看路由信息 +[root@kubernetes-master1 ~]# ip route list | grep 244 +10.244.1.0/24 via 10.0.0.15 dev eth0 +10.244.2.0/24 via 10.0.0.16 dev eth0 +10.244.3.0/24 via 10.0.0.17 dev eth0 +10.244.4.0/24 via 10.0.0.13 dev eth0 +10.244.5.0/24 via 10.0.0.14 dev eth0 + +在flannel-test中继续wget测试 +/ # wget 10.244.3.2 +Connecting to 10.244.3.2 (10.244.3.2:80) +index.html 100% |***| 41 0:00:00 ETA +``` + +```powershell +在node3中继续抓包 +[root@kubernetes-node3 ~]# tcpdump -i eth0 -nn host 10.244.1.4 and tcp port 80 +... +08:01:46.930340 IP 10.244.3.2.80 > 10.244.1.4.59026: Flags [FP.], seq 232:273, ack 74, win 56, length 41: HTTP +08:01:46.930481 IP 10.244.1.4.59026 > 10.244.3.2.80: Flags [.], ack 232, win 58, length 0 +08:01:46.931619 IP 10.244.1.4.59026 > 10.244.3.2.80: Flags [F.], seq 74, ack 274, win 58, length 0 +08:01:46.931685 IP 10.244.3.2.80 > 10.244.1.4.59026: Flags [.], ack 75, win 56, length 0 +结果显示: + 两个pod形成了直接的通信效果。 +``` + + + +**小结** + +``` + +``` + diff --git a/docs/cloud/kubernetes/kubernetes_hybridnet.md b/docs/cloud/kubernetes/kubernetes_hybridnet.md new file mode 100644 index 00000000..3588781c --- /dev/null +++ b/docs/cloud/kubernetes/kubernetes_hybridnet.md @@ -0,0 +1,1219 @@ +# k8s集群 underlay 网络方案 hybridnet + +# 零、容器网络方案介绍 + +## 0.1 overlay 网络方案 + +基于VXLAN、 NVGRE等封装技术实现overlay叠加网络: + +1、叠加网络/覆盖网络, 在物理网络的基础之上叠加实现新的虚拟网络, 即可使网络的中的容器可以相互通信。 + +2、优点是对物理网络的兼容性比较好, 可以实现pod的夸宿主机子网通信。 + +3、calico与flannel等网络插件都支持overlay网络。 + +4、缺点是有额外的封装与解封性能开销。 + +5、目前私有云使用比较多。 + + + +## 0.2 underlay网络方案 + +Underlay网络就是传统IT基础设施网络, 由交换机和路由器等设备组成, 借助以太网协议、 路由协议和VLAN协议等驱动, 它还是Overlay网络的底层网络, 为Overlay网络提供数据通信服务。 容器网络中的Underlay网络是指借助驱动程序将宿主机的底层网络接口直接暴露给容器使用的一种网络构建技术,较为常见的解决方案有MAC VLAN、 IP VLAN和直接路由等。 + +Underlay依赖于物理网络进行跨主机通信。 + +1、Mac Vlan模式: + +MAC VLAN: 支持在同一个以太网接口上虚拟出多个网络接口(子接口), 每个虚拟接口都拥有唯一的MAC地址并可配置网卡子接口IP,基于Docker宿主机物理网卡的不同子接口实现多个虚拟vlan,一个子接口就是一个虚拟vlan,容器通过宿主机的路由功能和外网保持通信。 + + + +2、IP VLAN模式: + +IP VLAN类似于MAC VLAN, 它同样创建新的虚拟网络接口并为每个接口分配唯一的IP地址, 不同之处在于, 每个虚拟接口将共享使用物理接口的MAC地址。 + + + + + +![image-20230404151314708](../../img/kubernetes/kubernetes_hybridnet/image-20230404151314708.png) + + + + + +## 0.3 K8S Pod通信简介 + + + +![image-20230404151837989](../../img/kubernetes/kubernetes_hybridnet/image-20230404151837989.png) + + + +- Overlay网络: + + +Flannel Vxlan、 Calico BGP、 Calico Vxlan +将pod 地址信息封装在宿主机地址信息以内, 实现跨主机且可跨node子网的通信报文。 + + + +- 直接路由: + + +Flannel Host-gw、 Flannel VXLAN Directrouting、 Calico Directrouting +基于主机路由, 实现报文从源主机到目的主机的直接转发, 不需要进行报文的叠加封装, 性能比overlay更好。 + + + +- Underlay: + + +不需要为pod启用单独的虚拟机网络, 而是直接使用宿主机物理网络, pod甚至可以在k8s环境之外的节点直接访问(与node节点的网络被打通), 相当于把pod当桥接模式的虚拟机使用, 比较方便k8s环境以外的访问访问k8s环境中的pod中的服务, 而且由于主机使用的宿主机网络, 其性能最好。 + + + + + +# 一、K8S集群部署 + +> 基于kubeadm部署K8S 1.26集群,pod network cidr为10.244.0.0/16,service为默认10.96.0.0/12 + + + +~~~powershell +方案1: +pod可以选择overlay或者underlay, SVC使用overlay, 如果是underlay需要配置SVC使用宿主机的子网比如以下场景是overlay网络、 后期会用于overlay场景的pod, service会用于overlay的svc场景。 +kubeadm init --apiserver-advertise-address=192.168.10.160 \ +--apiserver-bind-port=6443 \ 
+--kubernetes-version=v1.26.3 \ +--pod-network-cidr=10.244.0.0/16 \ +--service-cidr=10.96.0.0/12 \ +--service-dns-domain=cluster.local \ +--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \ +--cri-socket unix:///var/run/cri-dockerd.sock + +--image-repository=“”为空则使用默认的google容器镜像仓库 +--cri-socket 可以使用unix:///var/run/containerd/containerd.sock或unix:///var/run/cri-dockerd.sock +~~~ + + + +~~~powershell +方案2: +pod可以选择overlay或者underlay, SVC使用underlay初始化 +--pod-network-cidr=10.244.0.0/16会用于后期overlay的场景, underlay的网络CIDR后期单独指定, overlay会与underlay并存 +--service-cidr=192.168.200.0/24用于后期的underlay svc, 通过SVC可以直接访问pod。 +kubeadm init --apiserver-advertise-address=192.168.10.160 \ +--apiserver-bind-port=6443 \ +--kubernetes-version=v1.26.3 \ +--pod-network-cidr=10.244.0.0/16 \ +--service-cidr=192.168.200.0/24 \ +--service-dns-domain=cluster.local \ +--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \ +--cri-socket unix:///var/run/cri-dockerd.sock + +--service-cidr= 与已存在的网段不能冲突 +--image-repository=“”为空则使用默认的google容器镜像仓库 +--cri-socket 可以使用unix:///var/run/containerd/containerd.sock或unix:///var/run/cri-dockerd.sock +~~~ + + + + + +# 二、Helm部署 + +> github链接:https://github.com/helm/helm/releases + + + +![image-20230404120724930](../../img/kubernetes/kubernetes_hybridnet/image-20230404120724930.png) + + + + + +~~~powershell +# wget https://get.helm.sh/helm-v3.11.2-linux-amd64.tar.gz +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# ls +helm-v3.11.2-linux-amd64.tar.gz +[root@k8s-master01 ~]# tar xf helm-v3.11.2-linux-amd64.tar.gz +[root@k8s-master01 ~]# ls + helm-v3.11.2-linux-amd64.tar.gz linux-amd64 + +[root@k8s-master01 ~]# ls linux-amd64/ +helm LICENSE README.md +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# mv linux-amd64/helm /usr/local/bin/helm +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# helm version +version.BuildInfo{Version:"v3.11.2", GitCommit:"912ebc1cd10d38d340f048efaf0abda047c3468e", GitTreeState:"clean", GoVersion:"go1.18.10"} +~~~ + + + +# 三、部署hybridnet + + + +~~~powershell +[root@k8s-master01 ~]# helm repo add hybridnet https://alibaba.github.io/hybridnet/ +"hybridnet" has been added to your repositories +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# helm repo list +NAME URL +hybridnet https://alibaba.github.io/hybridnet/ +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# helm repo update +Hang tight while we grab the latest from your chart repositories... +...Successfully got an update from the "hybridnet" chart repository +Update Complete. 
⎈Happy Helming!⎈ +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# helm install hybridnet hybridnet/hybridnet -n kube-system --set init.cidr=10.244.0.0/16 +~~~ + + + +~~~powershell +输出: +W0404 12:14:47.796075 111159 warnings.go:70] spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead +W0404 12:14:47.796100 111159 warnings.go:70] spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead +NAME: hybridnet +LAST DEPLOYED: Tue Apr 4 12:14:47 2023 +NAMESPACE: kube-system +STATUS: deployed +REVISION: 1 +TEST SUITE: None +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# helm list -n kube-system +NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION +hybridnet kube-system 1 2023-04-04 12:14:47.158751157 +0800 CST deployed hybridnet-0.6.0 0.8.0 +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get pods -n kube-system +NAME READY STATUS RESTARTS AGE +calico-typha-c856d6bfd-7qnkk 1/1 Running 0 107s +calico-typha-c856d6bfd-l8nhw 1/1 Running 0 107s +calico-typha-c856d6bfd-slppp 1/1 Running 0 109s +coredns-787d4945fb-lfk42 1/1 Running 0 15h +coredns-787d4945fb-t8x2t 1/1 Running 0 15h +etcd-k8s-master01 1/1 Running 0 15h +hybridnet-daemon-ls2rh 1/2 Running 1 (19s ago) 114s +hybridnet-daemon-lxcb6 1/2 Running 1 (70s ago) 114s +hybridnet-daemon-xp7t4 1/2 Running 1 (30s ago) 114s +hybridnet-manager-55f5488b46-2x5qw 0/1 Pending 0 114s +hybridnet-manager-55f5488b46-ddpjw 0/1 Pending 0 109s +hybridnet-manager-55f5488b46-tx78h 0/1 Pending 0 109s +hybridnet-webhook-55d848f89c-8zrs2 0/1 Pending 0 114s +hybridnet-webhook-55d848f89c-9f9rf 0/1 Pending 0 114s +hybridnet-webhook-55d848f89c-q9xgn 0/1 Pending 0 114s +kube-apiserver-k8s-master01 1/1 Running 0 15h +kube-controller-manager-k8s-master01 1/1 Running 0 15h +kube-proxy-5v642 1/1 Running 0 15h +kube-proxy-vnwhh 1/1 Running 0 15h +kube-proxy-zgrj6 1/1 Running 0 15h +kube-scheduler-k8s-master01 1/1 Running 0 15h +~~~ + + + +~~~powershell +此时hybridnet-manager、hybridnet-webhook pod Pending,通过describe查看发现集群没有节点打上master标签 +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl describe pods hybridnet-manager-55f5488b46-2x5qw -n kube-system +Name: hybridnet-manager-55f5488b46-2x5qw +Namespace: kube-system +Priority: 2000000000 +Priority Class Name: system-cluster-critical +Service Account: hybridnet +Node: +Labels: app=hybridnet + component=manager + pod-template-hash=55f5488b46 +Annotations: +Status: Pending +IP: +IPs: +Controlled By: ReplicaSet/hybridnet-manager-55f5488b46 +Containers: + hybridnet-manager: + Image: docker.io/hybridnetdev/hybridnet:v0.8.0 + Port: 9899/TCP + Host Port: 9899/TCP + Command: + /hybridnet/hybridnet-manager + --default-ip-retain=true + --feature-gates=MultiCluster=false,VMIPRetain=false + --controller-concurrency=Pod=1,IPAM=1,IPInstance=1 + --kube-client-qps=300 + --kube-client-burst=600 + --metrics-port=9899 + Environment: + DEFAULT_NETWORK_TYPE: Overlay + DEFAULT_IP_FAMILY: IPv4 + NAMESPACE: kube-system (v1:metadata.namespace) + Mounts: + /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ctkkr (ro) +Conditions: + Type Status + PodScheduled False +Volumes: + kube-api-access-ctkkr: + Type: Projected (a volume that contains injected data from multiple sources) + TokenExpirationSeconds: 3607 + ConfigMapName: kube-root-ca.crt + ConfigMapOptional: + DownwardAPI: true +QoS Class: BestEffort +Node-Selectors: node-role.kubernetes.io/master= +Tolerations: :NoSchedule 
op=Exists + node.kubernetes.io/not-ready:NoExecute op=Exists for 300s + node.kubernetes.io/unreachable:NoExecute op=Exists for 300s +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Warning FailedScheduling 3m32s (x2 over 3m34s) default-scheduler 0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get node --show-labels +NAME STATUS ROLES AGE VERSION LABELS +k8s-master01 Ready control-plane 15h v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers= +k8s-worker01 Ready 15h v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker01,kubernetes.io/os=linux +k8s-worker02 Ready 15h v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker02,kubernetes.io/os=linux +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl label node k8s-master01 node-role.kubernetes.io/master= +node/k8s-master01 labeled +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +k8s-master01 Ready control-plane,master 15h v1.26.3 +k8s-worker01 Ready 15h v1.26.3 +k8s-worker02 Ready 15h v1.26.3 +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get node --show-labels +NAME STATUS ROLES AGE VERSION LABELS +k8s-master01 Ready control-plane,master 15h v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,networking.alibaba.com/overlay-network-attachment=true,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers= +k8s-worker01 Ready 15h v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker01,kubernetes.io/os=linux,networking.alibaba.com/overlay-network-attachment=true +k8s-worker02 Ready 15h v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker02,kubernetes.io/os=linux,networking.alibaba.com/overlay-network-attachment=true +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +calico-typha-c856d6bfd-7qnkk 1/1 Running 0 9m17s 192.168.10.160 k8s-master01 +calico-typha-c856d6bfd-l8nhw 1/1 Running 0 9m17s 192.168.10.161 k8s-worker01 +calico-typha-c856d6bfd-slppp 1/1 Running 0 9m19s 192.168.10.162 k8s-worker02 +coredns-787d4945fb-lfk42 1/1 Running 0 15h 10.88.0.3 k8s-master01 +coredns-787d4945fb-t8x2t 1/1 Running 0 15h 10.88.0.2 k8s-master01 +etcd-k8s-master01 1/1 Running 0 15h 192.168.10.160 k8s-master01 +hybridnet-daemon-ls2rh 1/2 Running 1 (7m49s ago) 9m24s 192.168.10.161 k8s-worker01 +hybridnet-daemon-lxcb6 1/2 Running 1 (8m40s ago) 9m24s 192.168.10.162 k8s-worker02 +hybridnet-daemon-xp7t4 1/2 Running 1 (8m ago) 9m24s 192.168.10.160 k8s-master01 +hybridnet-manager-55f5488b46-2x5qw 1/1 Running 0 9m24s 192.168.10.160 k8s-master01 +hybridnet-manager-55f5488b46-ddpjw 0/1 Pending 0 9m19s +hybridnet-manager-55f5488b46-tx78h 0/1 Pending 0 9m19s 
+hybridnet-webhook-55d848f89c-8zrs2 0/1 Pending 0 9m24s +hybridnet-webhook-55d848f89c-9f9rf 1/1 Running 0 9m24s 192.168.10.160 k8s-master01 +hybridnet-webhook-55d848f89c-q9xgn 0/1 Pending 0 9m24s +kube-apiserver-k8s-master01 1/1 Running 0 15h 192.168.10.160 k8s-master01 +kube-controller-manager-k8s-master01 1/1 Running 0 15h 192.168.10.160 k8s-master01 +kube-proxy-5v642 1/1 Running 0 15h 192.168.10.160 k8s-master01 +kube-proxy-vnwhh 1/1 Running 0 15h 192.168.10.161 k8s-worker01 +kube-proxy-zgrj6 1/1 Running 0 15h 192.168.10.162 k8s-worker02 +kube-scheduler-k8s-master01 1/1 Running 0 15h 192.168.10.160 k8s-master01 +~~~ + + + +# 四、创建hybridnet网络 + + + +~~~powershell +[root@k8s-master01 ~]# mkdir /root/hybridnet +[root@k8s-master01 ~]# cd hybridnet/ +[root@k8s-master01 hybridnet]# +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl label node k8s-master01 network=underlay-nethost +node/k8s-master01 labeled +[root@k8s-master01 hybridnet]# kubectl label node k8s-worker01 network=underlay-nethost +node/k8s-worker01 labeled +[root@k8s-master01 hybridnet]# kubectl label node k8s-worker02 network=underlay-nethost +node/k8s-worker02 labeled +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl get node --show-labels +NAME STATUS ROLES AGE VERSION LABELS +k8s-master01 Ready control-plane,master 16h v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,network=underlay-nethost,networking.alibaba.com/overlay-network-attachment=true,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers= +k8s-worker01 Ready 15h v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker01,kubernetes.io/os=linux,network=underlay-nethost,networking.alibaba.com/overlay-network-attachment=true +k8s-worker02 Ready 15h v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker02,kubernetes.io/os=linux,network=underlay-nethost,networking.alibaba.com/overlay-network-attachment=true +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# vim 01-create-underlay-network.yaml +[root@k8s-master01 hybridnet]# cat 01-create-underlay-network.yaml +--- +apiVersion: networking.alibaba.com/v1 +kind: Network +metadata: + name: underlay-network1 +spec: + netID: 0 + type: Underlay + nodeSelector: + network: "underlay-nethost" + +--- +apiVersion: networking.alibaba.com/v1 +kind: Subnet +metadata: + name: underlay-network1 +spec: + network: underlay-network1 + netID: 0 + range: + version: "4" + cidr: "192.168.10.0/24" + gateway: "192.168.10.2" + start: "192.168.10.10" + end: "192.168.10.20" +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl create -f 01-create-underlay-network.yaml +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl get network +NAME NETID TYPE MODE V4TOTAL V4USED V4AVAILABLE LASTALLOCATEDV4SUBNET V6TOTAL V6USED V6AVAILABLE LASTALLOCATEDV6SUBNET +init 4 Overlay 65534 2 65532 init 0 0 0 +underlay-network1 0 Underlay 11 0 11 underlay-network1 0 0 0 +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl get subnet +NAME VERSION CIDR START END GATEWAY TOTAL USED AVAILABLE NETID NETWORK +init 4 10.244.0.0/16 65534 2 65532 init +underlay-network1 4 192.168.10.0/24 192.168.10.10 192.168.10.20 192.168.10.2 11 11 0 underlay-network1 +~~~ + + + +# 五、查看节点Labels信息 + + + 
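> 创建 underlay 网络后,hybridnet 会为节点追加 networking.alibaba.com/underlay-network-attachment、networking.alibaba.com/ipv4-address-quota 等标签,用来标识节点的网络接入情况与地址配额。可以按这些标签筛选节点,下面是一个参考命令(仅为示例,标签取值以实际集群为准):

~~~powershell
# 示例:列出已接入 underlay 网络的节点
[root@k8s-master01 hybridnet]# kubectl get nodes -l networking.alibaba.com/underlay-network-attachment=true
~~~

节点上完整的标签信息如下: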
+~~~powershell +[root@k8s-master01 hybridnet]# kubectl get nodes --show-labels +NAME STATUS ROLES AGE VERSION LABELS +k8s-master01 Ready control-plane,master 16h v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,network=underlay-nethost,networking.alibaba.com/dualstack-address-quota=empty,networking.alibaba.com/ipv4-address-quota=nonempty,networking.alibaba.com/ipv6-address-quota=empty,networking.alibaba.com/overlay-network-attachment=true,networking.alibaba.com/underlay-network-attachment=true,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers= +k8s-worker01 Ready 16h v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker01,kubernetes.io/os=linux,network=underlay-nethost,networking.alibaba.com/dualstack-address-quota=empty,networking.alibaba.com/ipv4-address-quota=nonempty,networking.alibaba.com/ipv6-address-quota=empty,networking.alibaba.com/overlay-network-attachment=true,networking.alibaba.com/underlay-network-attachment=true +k8s-worker02 Ready 16h v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker02,kubernetes.io/os=linux,network=underlay-nethost,networking.alibaba.com/dualstack-address-quota=empty,networking.alibaba.com/ipv4-address-quota=nonempty,networking.alibaba.com/ipv6-address-quota=empty,networking.alibaba.com/overlay-network-attachment=true,networking.alibaba.com/underlay-network-attachment=true +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl describe nodes k8s-master01 +Name: k8s-master01 +Roles: control-plane,master +Labels: beta.kubernetes.io/arch=amd64 + beta.kubernetes.io/os=linux + kubernetes.io/arch=amd64 + kubernetes.io/hostname=k8s-master01 + kubernetes.io/os=linux + network=underlay-nethost + networking.alibaba.com/dualstack-address-quota=empty + networking.alibaba.com/ipv4-address-quota=nonempty + networking.alibaba.com/ipv6-address-quota=empty + networking.alibaba.com/overlay-network-attachment=true + networking.alibaba.com/underlay-network-attachment=true + node-role.kubernetes.io/control-plane= + node-role.kubernetes.io/master= + node.kubernetes.io/exclude-from-external-load-balancers= +Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock + node.alpha.kubernetes.io/ttl: 0 + projectcalico.org/IPv4Address: 192.168.10.160/24 + projectcalico.org/IPv4VXLANTunnelAddr: 10.244.32.128 + volumes.kubernetes.io/controller-managed-attach-detach: true +CreationTimestamp: Mon, 03 Apr 2023 20:26:44 +0800 +Taints: node-role.kubernetes.io/control-plane:NoSchedule +Unschedulable: false +Lease: + HolderIdentity: k8s-master01 + AcquireTime: + RenewTime: Tue, 04 Apr 2023 12:35:31 +0800 +Conditions: + Type Status LastHeartbeatTime LastTransitionTime Reason Message + ---- ------ ----------------- ------------------ ------ ------- + NetworkUnavailable False Tue, 04 Apr 2023 11:59:24 +0800 Tue, 04 Apr 2023 11:59:24 +0800 CalicoIsUp Calico is running on this node + MemoryPressure False Tue, 04 Apr 2023 12:31:18 +0800 Mon, 03 Apr 2023 20:26:39 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available + DiskPressure False Tue, 04 Apr 2023 12:31:18 +0800 Mon, 03 Apr 2023 20:26:39 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure + PIDPressure False Tue, 04 Apr 2023 12:31:18 +0800 Mon, 03 Apr 
2023 20:26:39 +0800 KubeletHasSufficientPID kubelet has sufficient PID available + Ready True Tue, 04 Apr 2023 12:31:18 +0800 Mon, 03 Apr 2023 20:26:47 +0800 KubeletReady kubelet is posting ready status +Addresses: + InternalIP: 192.168.10.160 + Hostname: k8s-master01 +Capacity: + cpu: 4 + ephemeral-storage: 51175Mi + hugepages-1Gi: 0 + hugepages-2Mi: 0 + memory: 4026120Ki + pods: 110 +Allocatable: + cpu: 4 + ephemeral-storage: 48294789041 + hugepages-1Gi: 0 + hugepages-2Mi: 0 + memory: 3923720Ki + pods: 110 +System Info: + Machine ID: f618107e5de3464bbfc77620a718fdd5 + System UUID: B55A4D56-8EBB-7F7D-F774-2CAFA717C713 + Boot ID: 289d296b-02fd-4135-a427-81143def0ed9 + Kernel Version: 3.10.0-1160.76.1.el7.x86_64 + OS Image: CentOS Linux 7 (Core) + Operating System: linux + Architecture: amd64 + Container Runtime Version: containerd://1.7.0 + Kubelet Version: v1.26.3 + Kube-Proxy Version: v1.26.3 +PodCIDR: 10.244.0.0/24 +PodCIDRs: 10.244.0.0/24 +Non-terminated Pods: (11 in total) + Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age + --------- ---- ------------ ---------- --------------- ------------- --- + kube-system calico-typha-c856d6bfd-7qnkk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20m + kube-system coredns-787d4945fb-lfk42 100m (2%) 0 (0%) 70Mi (1%) 170Mi (4%) 16h + kube-system coredns-787d4945fb-t8x2t 100m (2%) 0 (0%) 70Mi (1%) 170Mi (4%) 16h + kube-system etcd-k8s-master01 100m (2%) 0 (0%) 100Mi (2%) 0 (0%) 16h + kube-system hybridnet-daemon-xp7t4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20m + kube-system hybridnet-manager-55f5488b46-2x5qw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20m + kube-system hybridnet-webhook-55d848f89c-9f9rf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20m + kube-system kube-apiserver-k8s-master01 250m (6%) 0 (0%) 0 (0%) 0 (0%) 16h + kube-system kube-controller-manager-k8s-master01 200m (5%) 0 (0%) 0 (0%) 0 (0%) 16h + kube-system kube-proxy-5v642 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16h + kube-system kube-scheduler-k8s-master01 100m (2%) 0 (0%) 0 (0%) 0 (0%) 16h +Allocated resources: + (Total limits may be over 100 percent, i.e., overcommitted.) 
+ Resource Requests Limits + -------- -------- ------ + cpu 850m (21%) 0 (0%) + memory 240Mi (6%) 340Mi (8%) + ephemeral-storage 0 (0%) 0 (0%) + hugepages-1Gi 0 (0%) 0 (0%) + hugepages-2Mi 0 (0%) 0 (0%) +Events: + +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl describe nodes k8s-worker01 +Name: k8s-worker01 +Roles: +Labels: beta.kubernetes.io/arch=amd64 + beta.kubernetes.io/os=linux + kubernetes.io/arch=amd64 + kubernetes.io/hostname=k8s-worker01 + kubernetes.io/os=linux + network=underlay-nethost + networking.alibaba.com/dualstack-address-quota=empty + networking.alibaba.com/ipv4-address-quota=nonempty + networking.alibaba.com/ipv6-address-quota=empty + networking.alibaba.com/overlay-network-attachment=true + networking.alibaba.com/underlay-network-attachment=true +Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock + node.alpha.kubernetes.io/ttl: 0 + projectcalico.org/IPv4Address: 192.168.10.161/24 + projectcalico.org/IPv4VXLANTunnelAddr: 10.244.79.64 + volumes.kubernetes.io/controller-managed-attach-detach: true +CreationTimestamp: Mon, 03 Apr 2023 20:28:44 +0800 +Taints: +Unschedulable: false +Lease: + HolderIdentity: k8s-worker01 + AcquireTime: + RenewTime: Tue, 04 Apr 2023 12:39:22 +0800 +Conditions: + Type Status LastHeartbeatTime LastTransitionTime Reason Message + ---- ------ ----------------- ------------------ ------ ------- + NetworkUnavailable False Tue, 04 Apr 2023 11:59:42 +0800 Tue, 04 Apr 2023 11:59:42 +0800 CalicoIsUp Calico is running on this node + MemoryPressure False Tue, 04 Apr 2023 12:36:30 +0800 Mon, 03 Apr 2023 20:28:43 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available + DiskPressure False Tue, 04 Apr 2023 12:36:30 +0800 Mon, 03 Apr 2023 20:28:43 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure + PIDPressure False Tue, 04 Apr 2023 12:36:30 +0800 Mon, 03 Apr 2023 20:28:43 +0800 KubeletHasSufficientPID kubelet has sufficient PID available + Ready True Tue, 04 Apr 2023 12:36:30 +0800 Mon, 03 Apr 2023 20:28:47 +0800 KubeletReady kubelet is posting ready status +Addresses: + InternalIP: 192.168.10.161 + Hostname: k8s-worker01 +Capacity: + cpu: 4 + ephemeral-storage: 51175Mi + hugepages-1Gi: 0 + hugepages-2Mi: 0 + memory: 4026128Ki + pods: 110 +Allocatable: + cpu: 4 + ephemeral-storage: 48294789041 + hugepages-1Gi: 0 + hugepages-2Mi: 0 + memory: 3923728Ki + pods: 110 +System Info: + Machine ID: f618107e5de3464bbfc77620a718fdd5 + System UUID: 7DD24D56-096D-7853-6E3E-0ED5FAB35AC3 + Boot ID: 8f8f964d-a639-4a11-972d-043dca2f718e + Kernel Version: 3.10.0-1160.76.1.el7.x86_64 + OS Image: CentOS Linux 7 (Core) + Operating System: linux + Architecture: amd64 + Container Runtime Version: containerd://1.7.0 + Kubelet Version: v1.26.3 + Kube-Proxy Version: v1.26.3 +PodCIDR: 10.244.2.0/24 +PodCIDRs: 10.244.2.0/24 +Non-terminated Pods: (3 in total) + Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age + --------- ---- ------------ ---------- --------------- ------------- --- + kube-system calico-typha-c856d6bfd-l8nhw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24m + kube-system hybridnet-daemon-ls2rh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24m + kube-system kube-proxy-vnwhh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16h +Allocated resources: + (Total limits may be over 100 percent, i.e., overcommitted.) 
+ Resource Requests Limits + -------- -------- ------ + cpu 0 (0%) 0 (0%) + memory 0 (0%) 0 (0%) + ephemeral-storage 0 (0%) 0 (0%) + hugepages-1Gi 0 (0%) 0 (0%) + hugepages-2Mi 0 (0%) 0 (0%) +Events: + +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl describe nodes k8s-worker02 +Name: k8s-worker02 +Roles: +Labels: beta.kubernetes.io/arch=amd64 + beta.kubernetes.io/os=linux + kubernetes.io/arch=amd64 + kubernetes.io/hostname=k8s-worker02 + kubernetes.io/os=linux + network=underlay-nethost + networking.alibaba.com/dualstack-address-quota=empty + networking.alibaba.com/ipv4-address-quota=nonempty + networking.alibaba.com/ipv6-address-quota=empty + networking.alibaba.com/overlay-network-attachment=true + networking.alibaba.com/underlay-network-attachment=true +Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock + node.alpha.kubernetes.io/ttl: 0 + projectcalico.org/IPv4Address: 192.168.10.162/24 + projectcalico.org/IPv4VXLANTunnelAddr: 10.244.69.192 + volumes.kubernetes.io/controller-managed-attach-detach: true +CreationTimestamp: Mon, 03 Apr 2023 20:28:39 +0800 +Taints: +Unschedulable: false +Lease: + HolderIdentity: k8s-worker02 + AcquireTime: + RenewTime: Tue, 04 Apr 2023 12:39:49 +0800 +Conditions: + Type Status LastHeartbeatTime LastTransitionTime Reason Message + ---- ------ ----------------- ------------------ ------ ------- + NetworkUnavailable False Tue, 04 Apr 2023 12:00:08 +0800 Tue, 04 Apr 2023 12:00:08 +0800 CalicoIsUp Calico is running on this node + MemoryPressure False Tue, 04 Apr 2023 12:36:02 +0800 Mon, 03 Apr 2023 20:28:39 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available + DiskPressure False Tue, 04 Apr 2023 12:36:02 +0800 Mon, 03 Apr 2023 20:28:39 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure + PIDPressure False Tue, 04 Apr 2023 12:36:02 +0800 Mon, 03 Apr 2023 20:28:39 +0800 KubeletHasSufficientPID kubelet has sufficient PID available + Ready True Tue, 04 Apr 2023 12:36:02 +0800 Mon, 03 Apr 2023 20:28:46 +0800 KubeletReady kubelet is posting ready status +Addresses: + InternalIP: 192.168.10.162 + Hostname: k8s-worker02 +Capacity: + cpu: 4 + ephemeral-storage: 51175Mi + hugepages-1Gi: 0 + hugepages-2Mi: 0 + memory: 4026120Ki + pods: 110 +Allocatable: + cpu: 4 + ephemeral-storage: 48294789041 + hugepages-1Gi: 0 + hugepages-2Mi: 0 + memory: 3923720Ki + pods: 110 +System Info: + Machine ID: f618107e5de3464bbfc77620a718fdd5 + System UUID: 43224D56-1D27-9147-1CC6-B7FDBED96000 + Boot ID: 86cbf2e5-324d-4457-8585-f599ddc38c7c + Kernel Version: 3.10.0-1160.76.1.el7.x86_64 + OS Image: CentOS Linux 7 (Core) + Operating System: linux + Architecture: amd64 + Container Runtime Version: containerd://1.7.0 + Kubelet Version: v1.26.3 + Kube-Proxy Version: v1.26.3 +PodCIDR: 10.244.1.0/24 +PodCIDRs: 10.244.1.0/24 +Non-terminated Pods: (3 in total) + Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age + --------- ---- ------------ ---------- --------------- ------------- --- + kube-system calico-typha-c856d6bfd-slppp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24m + kube-system hybridnet-daemon-lxcb6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25m + kube-system kube-proxy-zgrj6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16h +Allocated resources: + (Total limits may be over 100 percent, i.e., overcommitted.) 
+ Resource Requests Limits + -------- -------- ------ + cpu 0 (0%) 0 (0%) + memory 0 (0%) 0 (0%) + ephemeral-storage 0 (0%) 0 (0%) + hugepages-1Gi 0 (0%) 0 (0%) + hugepages-2Mi 0 (0%) 0 (0%) +Events: +~~~ + + + +# 六、创建pod使用overlay网络 + + + +~~~powershell +[root@k8s-master01 hybridnet]# vim 02-tomcat-app1-overlay.yaml +[root@k8s-master01 hybridnet]# cat 02-tomcat-app1-overlay.yaml +kind: Deployment +apiVersion: apps/v1 +metadata: + labels: + app: myserver-tomcat-app1-deployment-overlay-label + name: myserver-tomcat-app1-deployment-overlay + namespace: myserver +spec: + replicas: 1 + selector: + matchLabels: + app: myserver-tomcat-app1-overlay-selector + template: + metadata: + labels: + app: myserver-tomcat-app1-overlay-selector + spec: + containers: + - name: myserver-tomcat-app1-container + image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1 + imagePullPolicy: IfNotPresent + + ports: + - containerPort: 8080 + protocol: TCP + name: http + env: + - name: "password" + value: "123456" + - name: "age" + value: "18" + +--- +kind: Service +apiVersion: v1 +metadata: + labels: + app: myserver-tomcat-app1-service-overlay-label + name: myserver-tomcat-app1-service-overlay + namespace: myserver +spec: + type: NodePort + ports: + - name: http + port: 80 + protocol: TCP + targetPort: 8080 + nodePort: 30003 + selector: + app: myserver-tomcat-app1-overlay-selector +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl create ns myserver +namespace/myserver created +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl create -f 02-tomcat-app1-overlay.yaml +deployment.apps/myserver-tomcat-app1-deployment-overlay created +service/myserver-tomcat-app1-service-overlay created +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl get pods -n myserver -o wide +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +myserver-tomcat-app1-deployment-overlay-5bb44b6bf6-lnp68 1/1 Running 0 2m1s 10.244.0.17 k8s-worker02 + +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl get svc -n myserver +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +myserver-tomcat-app1-service-overlay NodePort 10.111.170.113 80:30003/TCP 40s +~~~ + + + +![image-20230404131326907](../../img/kubernetes/kubernetes_hybridnet/image-20230404131326907.png) + + + +# 七、创建pod并使用underlay网络 + + + +~~~powershell +[root@k8s-master01 hybridnet]# vim 03-tomcat-app1-underlay.yaml +[root@k8s-master01 hybridnet]# cat 03-tomcat-app1-underlay.yaml +kind: Deployment +apiVersion: apps/v1 +metadata: + labels: + app: myserver-tomcat-app1-deployment-underlay-label + name: myserver-tomcat-app1-deployment-underlay + namespace: myserver +spec: + replicas: 1 + selector: + matchLabels: + app: myserver-tomcat-app1-underlay-selector + template: + metadata: + labels: + app: myserver-tomcat-app1-underlay-selector + annotations: + networking.alibaba.com/network-type: Underlay + spec: + containers: + - name: myserver-tomcat-app1-container + + image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v2 + imagePullPolicy: IfNotPresent + + ports: + - containerPort: 8080 + protocol: TCP + name: http + env: + - name: "password" + value: "123456" + - name: "age" + value: "18" + +--- +kind: Service +apiVersion: v1 +metadata: + labels: + app: myserver-tomcat-app1-service-underlay-label + name: myserver-tomcat-app1-service-underlay + namespace: myserver +spec: + ports: + - name: http + port: 80 + protocol: TCP + targetPort: 8080 + selector: + app: myserver-tomcat-app1-underlay-selector +~~~ + + + 
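> 与第六节的 overlay 示例相比,这份 Deployment 的结构基本一致,决定 Pod 使用 underlay 网络的关键在于 Pod 模板 metadata 中的 annotation,下面单独摘出该片段便于对照(摘自上面的 03-tomcat-app1-underlay.yaml):

~~~powershell
  template:
    metadata:
      labels:
        app: myserver-tomcat-app1-underlay-selector
      annotations:
        networking.alibaba.com/network-type: Underlay    # 指定该 Pod 使用 Underlay 网络
~~~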
+~~~powershell +[root@k8s-master01 hybridnet]# kubectl create -f 03-tomcat-app1-underlay.yaml +deployment.apps/myserver-tomcat-app1-deployment-underlay created +service/myserver-tomcat-app1-service-underlay created +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl get pods -n myserver -o wide +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +myserver-tomcat-app1-deployment-overlay-5bb44b6bf6-lnp68 1/1 Running 0 5m13s 10.244.0.17 k8s-worker02 +myserver-tomcat-app1-deployment-underlay-7f65f45449-mvkj7 1/1 Running 0 10m 192.168.10.10 k8s-worker01 +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl get svc -n myserver +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +myserver-tomcat-app1-service-overlay NodePort 10.111.170.113 80:30003/TCP 5m56s +myserver-tomcat-app1-service-underlay ClusterIP 10.99.144.184 80/TCP 100s +~~~ + + + +![image-20230404131724427](../../img/kubernetes/kubernetes_hybridnet/image-20230404131724427.png) + + + +~~~powershell +[root@k8s-master01 hybridnet]# curl http://192.168.10.10:8080/myapp/ +tomcat app1 v2 +~~~ + + + +# 八、创建service并使用underlay网络 + +## 8.1 K8S集群初始化注意事项 + +> 初始化完成后,把所有worker节点加入 + +~~~powershell +kubeadm init --apiserver-advertise-address=192.168.10.160 \ +--apiserver-bind-port=6443 \ +--kubernetes-version=v1.26.3 \ +--pod-network-cidr=10.244.0.0/16 \ +--service-cidr=192.168.200.0/24 \ +--service-dns-domain=cluster.local \ +--cri-socket unix:///var/run/containerd/containerd.sock +~~~ + + + +~~~powershell +--service-cidr=192.168.200.0/24 此位置指定的网段不能与本地已有网络冲突 +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +kubernetes ClusterIP 192.168.200.1 443/TCP 82m +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get svc -n kube-system +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +calico-typha ClusterIP 192.168.200.249 5473/TCP 79m +hybridnet-webhook ClusterIP 192.168.200.149 443/TCP 79m +kube-dns ClusterIP 192.168.200.10 53/UDP,53/TCP,9153/TCP 83m +~~~ + + + +> k8s集群初始化成功后,必须先安装helm,再通过helm部署hybridnet后创建underlay网络。 + + + +## 8.2 创建underlay网络 + +~~~powershell +[root@k8s-master01 ~]# mkdir /root/hybridnet +[root@k8s-master01 ~]# cd hybridnet/ +[root@k8s-master01 hybridnet]# +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl label node k8s-master01 network=underlay-nethost +node/k8s-master01 labeled +[root@k8s-master01 hybridnet]# kubectl label node k8s-worker01 network=underlay-nethost +node/k8s-worker01 labeled +[root@k8s-master01 hybridnet]# kubectl label node k8s-worker02 network=underlay-nethost +node/k8s-worker02 labeled +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# vim 01-underlay-network.yaml +[root@k8s-master01 hybridnet]# cat 01-underlay-network.yaml +--- +apiVersion: networking.alibaba.com/v1 +kind: Network +metadata: + name: underlay-network1 +spec: + netID: 0 + type: Underlay + nodeSelector: + network: "underlay-nethost" + +--- +apiVersion: networking.alibaba.com/v1 +kind: Subnet +metadata: + name: underlay-network1 +spec: + network: underlay-network1 + netID: 0 + range: + version: "4" + cidr: "192.168.200.0/24" + gateway: "192.168.200.254" + start: "192.168.200.50" + end: "192.168.200.200" +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl create -f 01-underlay-network.yaml +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl get network +NAME NETID TYPE MODE V4TOTAL V4USED V4AVAILABLE LASTALLOCATEDV4SUBNET V6TOTAL V6USED V6AVAILABLE LASTALLOCATEDV6SUBNET +init 4 Overlay 65534 3 
65531 init 0 0 0 +underlay-network1 0 Underlay 151 0 151 underlay-network1 0 0 0 +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl get subnet +NAME VERSION CIDR START END GATEWAY TOTAL USED AVAILABLE NETID NETWORK +init 4 10.244.0.0/16 65534 3 65531 init +underlay-network1 4 192.168.200.0/24 192.168.200.50 192.168.200.200 192.168.200.254 151 151 0 underlay-network1 +~~~ + + + + + +## 8.3 创建service使用underlay网络 + + + +~~~powershell +[root@k8s-master01 hybridnet]# vim 02-service-underlay.yaml +[root@k8s-master01 hybridnet]# cat 02-service-underlay.yaml +kind: Deployment +apiVersion: apps/v1 +metadata: + labels: + app: myserver-tomcat-app1-deployment-underlay-label + name: myserver-tomcat-app1-deployment-underlay + namespace: myserver +spec: + replicas: 1 + selector: + matchLabels: + app: myserver-tomcat-app1-underlay-selector + template: + metadata: + labels: + app: myserver-tomcat-app1-underlay-selector + spec: + containers: + - name: myserver-tomcat-app1-container + + image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v2 + imagePullPolicy: IfNotPresent + + ports: + - containerPort: 8080 + protocol: TCP + name: http + env: + - name: "password" + value: "123456" + - name: "age" + value: "18" + +--- +kind: Service +apiVersion: v1 +metadata: + labels: + app: myserver-tomcat-app1-service-underlay-label + name: myserver-tomcat-app1-service-underlay + namespace: myserver + annotations: + networking.alibaba.com/network-type: Underlay 重点注意这里 +spec: + ports: + - name: http + port: 80 + protocol: TCP + targetPort: 8080 + selector: + app: myserver-tomcat-app1-underlay-selector +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl create ns myserver +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl create -f 02-service-underlay.yaml +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl get pods -n myserver -o wide +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +myserver-tomcat-app1-deployment-underlay-849f7f9cf5-x48x8 1/1 Running 0 13m 10.244.0.3 k8s-worker02 + +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# kubectl get svc -n myserver -o wide +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR +myserver-tomcat-app1-service-underlay ClusterIP 192.168.200.234 80/TCP 61m app=myserver-tomcat-app1-underlay-selector +~~~ + + + +## 8.4 访问 + +### 8.4.1 在K8S集群内节点上访问 + + + +~~~powershell +[root@k8s-master01 hybridnet]# curl http://10.244.0.3:8080/myapp/ +tomcat app1 v2 +~~~ + + + +~~~powershell +[root@k8s-master01 hybridnet]# curl http://192.168.200.234/myapp/ +tomcat app1 v2 +~~~ + + + + + +### 8.4.2 在K8S集群外windows主机上访问 + + + +![image-20230406143957394](../../img/kubernetes/kubernetes_hybridnet/image-20230406143957394.png) + + + + + +~~~powershell +C:\WINDOWS\system32>route add 192.168.200.0 mask 255.255.255.0 -p 192.168.10.160 +说明: +去往192.168.200.0/24网段通过192.168.10.160,这个192.168.10.160为k8s集群节点IP地址。 +~~~ + + + +![image-20230406144105191](../../img/kubernetes/kubernetes_hybridnet/image-20230406144105191.png) + + + +![image-20230406144215187](../../img/kubernetes/kubernetes_hybridnet/image-20230406144215187.png) + + + +![image-20230406144245003](../../img/kubernetes/kubernetes_hybridnet/image-20230406144245003.png) + + + +![image-20230406144319092](../../img/kubernetes/kubernetes_hybridnet/image-20230406144319092.png) \ No newline at end of file diff --git a/docs/cloud/kubernetes/kubernetes_ipv4_and_ipv6.md b/docs/cloud/kubernetes/kubernetes_ipv4_and_ipv6.md new file mode 100644 index 00000000..9bbcdb56 --- 
/dev/null +++ b/docs/cloud/kubernetes/kubernetes_ipv4_and_ipv6.md @@ -0,0 +1,1482 @@ +# K8S 1.22版本双栈协议(IPv4&IPv6)集群部署 + +# 一、部署说明 + +> 此笔记主要针对 k8s 1.22,其它版本请自行测试使用。 +> +> 必须使用Open vSwitch功能。 + + + +# 二、主机准备 + +## 2.1 主机名配置 + +由于本次使用3台主机完成kubernetes集群部署,其中1台为master节点,名称为k8s-master01;其中2台为worker节点,名称分别为:k8s-worker01及k8s-worker02 + +~~~powershell +master节点 +# hostnamectl set-hostname k8s-master01 +~~~ + + + +~~~powershell +worker01节点 +# hostnamectl set-hostname k8s-worker01 +~~~ + + + +~~~powershell +worker02节点 +# hostnamectl set-hostname k8s-worker02 +~~~ + + + +## 2.2 主机IP地址配置 + +### 2.2.1 k8s-master节点 + +~~~powershell +[root@k8s-master01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33 +[root@k8s-master01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33 +TYPE="Ethernet" +PROXY_METHOD="none" +BROWSER_ONLY="no" +BOOTPROTO="none" +DEFROUTE="yes" +IPV4_FAILURE_FATAL="no" +NAME="ens33" +UUID="063bfc1c-c7c2-4c62-89d0-35ae869e44e7" +DEVICE="ens33" +ONBOOT="yes" +IPADDR="192.168.10.160" +PREFIX="24" +GATEWAY="192.168.10.2" +DNS1="119.29.29.29" +IPV6INIT="yes" +IPV6_AUTOCONF="no" +IPV6_DEFROUTE="yes" +IPV6_FAILURE_FATAL="no" +IPV6_ADDR_GEN_MODE="stable-privacy" +IPV6ADDR=2003::11/64 +IPV6_DEFAULTGW=2003::1 +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# systemctl restart network + +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# ip a s +~~~ + + + +### 2.2.2 k8s-worker01节点 + + + +~~~powershell +[root@k8s-worker01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33 +[root@k8s-worker01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33 +TYPE="Ethernet" +PROXY_METHOD="none" +BROWSER_ONLY="no" +BOOTPROTO="none" +DEFROUTE="yes" +IPV4_FAILURE_FATAL="no" +NAME="ens33" +UUID="063bfc1c-c7c2-4c62-89d0-35ae869e44e7" +DEVICE="ens33" +ONBOOT="yes" +IPADDR="192.168.10.161" +PREFIX="24" +GATEWAY="192.168.10.2" +DNS1="119.29.29.29" +IPV6INIT="yes" +IPV6_AUTOCONF="no" +IPV6_DEFROUTE="yes" +IPV6_FAILURE_FATAL="no" +IPV6_ADDR_GEN_MODE="stable-privacy" +IPV6ADDR=2003::12/64 +IPV6_DEFAULTGW=2003::1 +~~~ + + + +~~~powershell +[root@k8s-worker01 ~]# systemctl restart network + +~~~ + + + +~~~powershell +[root@k8s-worker01 ~]# ip a s +~~~ + + + + + +### 2.2.3 k8s-worker02节点 + + + +~~~powershell +[root@k8s-worker02 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33 +[root@k8s-worker02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33 +TYPE="Ethernet" +PROXY_METHOD="none" +BROWSER_ONLY="no" +BOOTPROTO="none" +DEFROUTE="yes" +IPV4_FAILURE_FATAL="no" +NAME="ens33" +UUID="063bfc1c-c7c2-4c62-89d0-35ae869e44e7" +DEVICE="ens33" +ONBOOT="yes" +IPADDR="192.168.10.162" +PREFIX="24" +GATEWAY="192.168.10.2" +DNS1="119.29.29.29" +IPV6INIT="yes" +IPV6_AUTOCONF="no" +IPV6_DEFROUTE="yes" +IPV6_FAILURE_FATAL="no" +IPV6_ADDR_GEN_MODE="stable-privacy" +IPV6ADDR=2003::13/64 +IPV6_DEFAULTGW=2003::1 +~~~ + + + +~~~powershell +[root@k8s-worker02 ~]# systemctl restart network + +~~~ + + + +~~~powershell +[root@k8s-worker02 ~]# ip a s +~~~ + + + +### 2.2.4 在k8s-master上ping通ipv6地址 + + + +~~~powershell +[root@k8s-master01 ~]# ping6 -c 4 2003::11 +PING 2003::11(2003::11) 56 data bytes +64 bytes from 2003::11: icmp_seq=1 ttl=64 time=0.030 ms +64 bytes from 2003::11: icmp_seq=2 ttl=64 time=0.065 ms +64 bytes from 2003::11: icmp_seq=3 ttl=64 time=0.037 ms +64 bytes from 2003::11: icmp_seq=4 ttl=64 time=0.030 ms + +--- 2003::11 ping statistics --- +4 packets transmitted, 4 received, 0% packet loss, time 3083ms +rtt min/avg/max/mdev = 0.030/0.040/0.065/0.015 ms +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# ping6 -c 4 2003::12 +PING 
2003::12(2003::12) 56 data bytes +64 bytes from 2003::12: icmp_seq=1 ttl=64 time=0.323 ms +64 bytes from 2003::12: icmp_seq=2 ttl=64 time=0.557 ms +64 bytes from 2003::12: icmp_seq=3 ttl=64 time=0.552 ms +64 bytes from 2003::12: icmp_seq=4 ttl=64 time=1.30 ms + +--- 2003::12 ping statistics --- +4 packets transmitted, 4 received, 0% packet loss, time 3050ms +rtt min/avg/max/mdev = 0.323/0.685/1.308/0.371 ms +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# ping6 -c 4 2003::13 +PING 2003::13(2003::13) 56 data bytes +64 bytes from 2003::13: icmp_seq=1 ttl=64 time=0.370 ms +64 bytes from 2003::13: icmp_seq=2 ttl=64 time=0.348 ms +64 bytes from 2003::13: icmp_seq=3 ttl=64 time=0.491 ms +64 bytes from 2003::13: icmp_seq=4 ttl=64 time=0.497 ms + +--- 2003::13 ping statistics --- +4 packets transmitted, 4 received, 0% packet loss, time 3059ms +rtt min/avg/max/mdev = 0.348/0.426/0.497/0.071 ms +~~~ + + + + + +## 2.3 主机名解析 + + + +~~~powershell +# vim /etc/hosts +# cat /etc/hosts +127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 +::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 +192.168.10.160 k8s-master01 +192.168.10.161 k8s-worker01 +192.168.10.162 k8s-worker02 +2003::11 k8s-master01 +2003::12 k8s-worker01 +2003::13 k8s-worker02 +~~~ + + + +## 2.4 主机安全设置 + +> 所有主机均需操作。 + + + +~~~powershell +# systemctl stop firewalld && systemctl disable firewalld +~~~ + + + +~~~powershell +# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config +~~~ + + + +## 2.5 升级操作系统内核 + +> 所有主机均需要操作。 + + + +~~~powershell +导入elrepo gpg key +# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org +~~~ + + + +~~~powershell +安装elrepo YUM源仓库 +# yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm +~~~ + + + +~~~powershell +安装kernel-ml版本,ml为长期稳定版本,lt为长期维护版本 +# yum --enablerepo="elrepo-kernel" -y install kernel-lt.x86_64 +~~~ + + + +~~~powershell +设置grub2默认引导为0 +# grub2-set-default 0 +~~~ + + + +~~~powershell +重新生成grub2引导文件 +# grub2-mkconfig -o /boot/grub2/grub.cfg +~~~ + + + +~~~powershell +更新后,需要重启,使用升级的内核生效。 +# reboot +~~~ + + + +~~~powershell +重启后,需要验证内核是否为更新对应的版本 +# uname -r +~~~ + + + +#### 2.6 配置内核转发及网桥过滤 + +>所有主机均需要操作。 + + + +~~~powershell +添加网桥过滤及内核转发配置文件 +# cat /etc/sysctl.d/k8s.conf +net.bridge.bridge-nf-call-ip6tables = 1 +net.bridge.bridge-nf-call-iptables = 1 +vm.swappiness = 0 +~~~ + + + +~~~powershell +加载br_netfilter模块 +# modprobe br_netfilter +~~~ + + + +~~~powershell +查看是否加载 +# lsmod | grep br_netfilter +br_netfilter 22256 0 +bridge 151336 1 br_netfilter +~~~ + + + +## 2.7 调整主机内核参数 + + + +~~~powershell +# vim /etc/sysctl.conf +添加如下内容: +net.ipv4.ip_forward = 1 +net.ipv6.conf.all.forwarding = 1 +net.ipv4.ip_nonlocal_bind = 1 +~~~ + + + +~~~powershell +使其生效 +# sysctl -p /etc/sysctl.conf +~~~ + + + +## 2.8 主机时钟同步设置 + + + +~~~powershell +# crontab -e +0 */1 * * * ntpdate time1.aliyun.com +~~~ + + + +## 2.9 主机swap分区设置 + + + +~~~powershell +临时关闭 +# swapoff -a +~~~ + + + +~~~powershell +永远关闭swap分区,需要重启操作系统 +# cat /etc/fstab +...... 
+ +# /dev/mapper/centos-swap swap swap defaults 0 0 + +在上一行中行首添加# +~~~ + + + +## 2.10 Open vSwitch + +> 下载地址链接:http://www.openvswitch.org/download/ +> +> OVS + + + + + +~~~powershell +安装依赖 +# yum -y install openssl openssl-devel +~~~ + + + +~~~powershell +查看是否存在python3 +# yum list python3 +~~~ + + + +~~~powershell +安装依赖 +# yum install python3 python3-devel -y +~~~ + + + +~~~powershell +下载软件源码包 +# wget https://www.openvswitch.org/releases/openvswitch-2.16.2.tar.gz +~~~ + + + +~~~powershell +解压 +# tar -xf openvswitch-2.16.2.tar.gz +~~~ + + + +~~~powershell +# cd openvswitch-2.16.2/ +~~~ + + + +~~~powershell +[root@k8s-XXX openvswitch-2.16.2]# ./configure +~~~ + + + + + +~~~powershell +[root@k8s-XXX openvswitch-2.16.2]# make +~~~ + + + +~~~powershell +[root@k8s-XXX openvswitch-2.16.2]# make install +~~~ + + + +~~~powershell +加载模块至内核 +# modprobe openvswitch +# lsmod | grep openvswitch +openvswitch 139264 2 +nsh 16384 1 openvswitch +nf_conncount 24576 1 openvswitch +nf_nat 45056 5 ip6table_nat,xt_nat,openvswitch,iptable_nat,xt_MASQUERADE +nf_conntrack 147456 8 xt_conntrack,nf_nat,nfnetlink_cttimeout,xt_nat,openvswitch,nf_conntrack_netlink,nf_conncount,xt_MASQUERADE +nf_defrag_ipv6 24576 2 nf_conntrack,openvswitch +libcrc32c 16384 4 nf_conntrack,nf_nat,openvswitch,xfs +~~~ + + + +# 三、Docker安装 + +## 3.1 Docker安装YUM源准备 + +>使用阿里云开源软件镜像站。 + + + +~~~powershell +# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo +~~~ + + + +## 3.2 Docker安装 + + + +~~~powershell +# yum -y install docker-ce-20.10.17 docke-ce-cli-20.10.17 +~~~ + + + +## 3.3 启动Docker服务 + + + +~~~powershell +# systemctl enable --now docker +~~~ + + + +## 3.4 修改cgroup方式 + +>/etc/docker/daemon.json 默认没有此文件,需要单独创建 + +~~~powershell +在/etc/docker/daemon.json添加如下内容 + +# cat /etc/docker/daemon.json +{ + "exec-opts": ["native.cgroupdriver=systemd"] +} +~~~ + + + +~~~powershell +# systemctl restart docker +~~~ + + + + + +# 四、k8s软件安装 + +## 4.1 YUM源准备 + +~~~powershell +# vim /etc/yum.repos.d/k8s.repo +# cat /etc/yum.repos.d/k8s.repo +[kubernetes] +name=Kubernetes +baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/ +enabled=1 +gpgcheck=0 +repo_gpgcheck=0 +gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg +~~~ + + + + + +## 4.2 K8S软件安装及kubelet配置 + +### 4.2.1 软件安装 + +~~~powershell +# yum -y install kubeadm-1.22.11-0 kubelet-1.22.11-0 kubectl-1.22.11-0 +~~~ + + + +### 4.2.2 kubelet配置 + +>为了实现docker使用的cgroupdriver与kubelet使用的cgroup的一致性,建议修改如下文件内容。 + + + +~~~powershell +# vim /etc/sysconfig/kubelet +KUBELET_EXTRA_ARGS="--cgroup-driver=systemd" +~~~ + + + +~~~powershell +设置kubelet为开机自启动即可,由于没有生成配置文件,集群初始化后自动启动 +# systemctl enable kubelet +~~~ + + + +## 4.3 K8S集群初始化配置文件准备 + + + +>默认 Kubernetes 下每个 node 使用 /24 掩码为 Pod 分配 IPv4 地址,使用 /64 掩码为 Pod 分配 IPv6 地址。可以通过 Controller-manager 参数调整掩码大小,本文会将 IPv4 掩码调整为 /25,IPv6 掩码调整为 /80。 +> +>(k8s 强制限制 node 掩码不能比 CIDR 掩码小 16 以上,因此当 IPv6 CIDR 使用 /64 得掩码时,node 掩码不能大于 /80 + + + +~~~powershell +[root@k8s-master01 ~]# vim kubeadm-config.yaml +[root@k8s-master01 ~]# cat kubeadm-config.yaml +apiVersion: kubeadm.k8s.io/v1beta3 +kind: ClusterConfiguration +networking: + podSubnet: 10.244.0.0/16,2004::/64 + serviceSubnet: 10.96.0.0/16,2005::/110 +controllerManager: + extraArgs: + "node-cidr-mask-size-ipv4": "25" + "node-cidr-mask-size-ipv6": "80" +imageRepository: "" +clusterName: "smartgo-cluster" +kubernetesVersion: "v1.22.11" +--- +apiVersion: kubeadm.k8s.io/v1beta3 
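# 补充注释,仅作说明:上面 ClusterConfiguration 中 podSubnet、serviceSubnet 均采用 "IPv4网段,IPv6网段" 的双栈写法,
# node-cidr-mask-size-ipv4/ipv6 控制每个 node 获得的子网掩码;
# 下面 InitConfiguration 的 nodeRegistration.kubeletExtraArgs.node-ip 同样需要以 "IPv4,IPv6" 形式写出本节点双栈地址,
# kubelet 才会以双栈地址注册该节点。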
+kind: InitConfiguration +localAPIEndpoint: + advertiseAddress: "192.168.10.160" + bindPort: 6443 +nodeRegistration: + kubeletExtraArgs: + node-ip: 192.168.10.160,2003::11 +~~~ + + + +## 4.4 K8S集群初始化 + + + +~~~powershell +[root@k8s-master01 ~]# kubeadm init --config=kubeadm-config.yaml +~~~ + + + +~~~powershell +输出信息: +[init] Using Kubernetes version: v1.22.11 +[preflight] Running pre-flight checks + [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' +[preflight] Pulling images required for setting up a Kubernetes cluster +[preflight] This might take a minute or two, depending on the speed of your internet connection +[preflight] You can also perform this action in beforehand using 'kubeadm config images pull' +[certs] Using certificateDir folder "/etc/kubernetes/pki" +[certs] Generating "ca" certificate and key +[certs] Generating "apiserver" certificate and key +[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.160] +[certs] Generating "apiserver-kubelet-client" certificate and key +[certs] Generating "front-proxy-ca" certificate and key +[certs] Generating "front-proxy-client" certificate and key +[certs] Generating "etcd/ca" certificate and key +[certs] Generating "etcd/server" certificate and key +[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.10.160 127.0.0.1 ::1] +[certs] Generating "etcd/peer" certificate and key +[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.10.160 127.0.0.1 ::1] +[certs] Generating "etcd/healthcheck-client" certificate and key +[certs] Generating "apiserver-etcd-client" certificate and key +[certs] Generating "sa" key and public key +[kubeconfig] Using kubeconfig folder "/etc/kubernetes" +[kubeconfig] Writing "admin.conf" kubeconfig file +[kubeconfig] Writing "kubelet.conf" kubeconfig file +[kubeconfig] Writing "controller-manager.conf" kubeconfig file +[kubeconfig] Writing "scheduler.conf" kubeconfig file +[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" +[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" +[kubelet-start] Starting the kubelet +[control-plane] Using manifest folder "/etc/kubernetes/manifests" +[control-plane] Creating static Pod manifest for "kube-apiserver" +[control-plane] Creating static Pod manifest for "kube-controller-manager" +[control-plane] Creating static Pod manifest for "kube-scheduler" +[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" +[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s +[apiclient] All control plane components are healthy after 5.501915 seconds +[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace +[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster +[upload-certs] Skipping phase. 
Please see --upload-certs +[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers] +[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule] +[bootstrap-token] Using token: tf68fl.pj4xsh62osypb4bj +[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles +[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes +[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials +[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token +[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster +[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace +[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key +[addons] Applied essential addon: CoreDNS +[addons] Applied essential addon: kube-proxy + +Your Kubernetes control-plane has initialized successfully! + +To start using your cluster, you need to run the following as a regular user: + + mkdir -p $HOME/.kube + sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config + sudo chown $(id -u):$(id -g) $HOME/.kube/config + +Alternatively, if you are the root user, you can run: + + export KUBECONFIG=/etc/kubernetes/admin.conf + +You should now deploy a pod network to the cluster. +Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: + https://kubernetes.io/docs/concepts/cluster-administration/addons/ + +Then you can join any number of worker nodes by running the following on each as root: + +kubeadm join 192.168.10.160:6443 --token tf68fl.pj4xsh62osypb4bj \ + --discovery-token-ca-cert-hash sha256:e4960afef684bbee72ae904356321997a6eef5bb0394a8d74b72ebaa0b638ecd + +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# mkdir -p $HOME/.kube +[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config +[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config + +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +k8s-master01 NotReady control-plane 2m5s v1.22.11 +~~~ + + + +## 4.5 把工作节点添加到K8S集群 + +### 4.5.1 创建kubeadm-config.yaml文件 + +> 用于添加worker节点 + + + +~~~powershell +[root@k8s-worker01 ~]# vim kubeadm-config.yaml + +[root@k8s-worker01 ~]# cat kubeadm-config.yaml +apiVersion: kubeadm.k8s.io/v1beta3 +kind: JoinConfiguration +discovery: + bootstrapToken: + apiServerEndpoint: 192.168.10.160:6443 + token: "tf68fl.pj4xsh62osypb4bj" + caCertHashes: + - "sha256:e4960afef684bbee72ae904356321997a6eef5bb0394a8d74b72ebaa0b638ecd" +nodeRegistration: + kubeletExtraArgs: + node-ip: 192.168.10.161,2003::12 +~~~ + + + +~~~powershell +[root@k8s-worker01 ~]# kubeadm join --config=kubeadm-config.yaml +[preflight] Running pre-flight checks + [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' +[preflight] Reading configuration from the cluster... 
+[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' +[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" +[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" +[kubelet-start] Starting the kubelet +[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... + +This node has joined the cluster: +* Certificate signing request was sent to apiserver and a response was received. +* The Kubelet was informed of the new secure connection details. + +Run 'kubectl get nodes' on the control-plane to see this node join the cluster. +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +k8s-master01 NotReady control-plane 16m v1.22.11 +k8s-worker01 NotReady 88s v1.22.11 +~~~ + + + +~~~powershell +[root@k8s-worker02 ~]# vim kubeadm-config.yaml +[root@k8s-worker02 ~]# cat kubeadm-config.yaml +apiVersion: kubeadm.k8s.io/v1beta3 +kind: JoinConfiguration +discovery: + bootstrapToken: + apiServerEndpoint: 192.168.10.160:6443 + token: "tf68fl.pj4xsh62osypb4bj" + caCertHashes: + - "sha256:e4960afef684bbee72ae904356321997a6eef5bb0394a8d74b72ebaa0b638ecd" +nodeRegistration: + kubeletExtraArgs: + node-ip: 192.168.10.162,2003::13 +~~~ + + + +~~~powershell +[root@k8s-worker02 ~]# kubeadm join --config=kubeadm-config.yaml +[preflight] Running pre-flight checks + [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' +[preflight] Reading configuration from the cluster... +[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' +[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" +[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" +[kubelet-start] Starting the kubelet +[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... + +This node has joined the cluster: +* Certificate signing request was sent to apiserver and a response was received. +* The Kubelet was informed of the new secure connection details. + +Run 'kubectl get nodes' on the control-plane to see this node join the cluster. 
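# 提示:若上面使用的 token 已过期,可在 master 节点执行 kubeadm token create --print-join-command 重新生成加入命令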
+~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +k8s-master01 NotReady control-plane 19m v1.22.11 +k8s-worker01 NotReady 4m44s v1.22.11 +k8s-worker02 NotReady 31s v1.22.11 +~~~ + + + +# 五、网络工具Antrea部署 + +## 5.0 认识Antrea + + + +Antrea 是一个旨在成为 Kubernetes 原生的[Kubernetes网络解决方案。](https://kubernetes.io/)[它在第 3/4 层运行,利用Open vSwitch](https://www.openvswitch.org/)作为网络数据平面,为 Kubernetes 集群提供网络和安全服务 。 + + + + + +![image-20230324174008962](../../img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324174008962.png) + + + +Open vSwitch 是一种广泛采用的高性能可编程虚拟交换机;Antrea 利用它来实现 Pod 网络和安全功能。例如,Open vSwitch 使 Antrea 能够以非常有效的方式实施 Kubernetes 网络策略。 + + + +## 5.1 获取antrea部署文件 + + + +![image-20230324173843525](../../img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324173843525.png) + + + +![image-20230324173901871](../../img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324173901871.png) + + + + + +![image-20230324174249721](../../img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324174249721.png) + + + +![image-20230324174316268](../../img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324174316268.png) + + + +~~~powershell +[root@k8s-master01 ~]# wget https://github.com/antrea-io/antrea/releases/download/v1.11.0/antrea.yml +~~~ + + + +## 5.2 修改antrea部署文件 + +> 1、禁用overlay封装模式及SNAT模式 +> +> 2、配置Service的IPv4及IPv6地址段。 + + + +~~~powershell +[root@k8s-master01 ~]# vim antrea.yml +~~~ + + + + + +~~~powershell +3022 trafficEncapMode: "encap" +3023 +3024 # Whether or not to SNAT (using the Node IP) the egress traffic from a Pod to the external network. +3025 # This option is for the noEncap traffic mode only, and the default value is false. In the noEncap +3026 # mode, if the cluster's Pod CIDR is reachable from the external network, then the Pod traffic to +3027 # the external network needs not be SNAT'd. In the networkPolicyOnly mode, antrea-agent never +3028 # performs SNAT and this option will be ignored; for other modes it must be set to false. +3029 noSNAT: false + + +3022 trafficEncapMode: "noencap" +3023 +3024 # Whether or not to SNAT (using the Node IP) the egress traffic from a Pod to the external network. +3025 # This option is for the noEncap traffic mode only, and the default value is false. In the noEncap +3026 # mode, if the cluster's Pod CIDR is reachable from the external network, then the Pod traffic to +3027 # the external network needs not be SNAT'd. In the networkPolicyOnly mode, antrea-agent never +3028 # performs SNAT and this option will be ignored; for other modes it must be set to false. +3029 noSNAT: true + +~~~ + + + +~~~powershell +3097 serviceCIDR: "" +3098 +3099 # ClusterIP CIDR range for IPv6 Services. It's required when using kube-proxy to provide IPv6 Service in a Dual-Stack +3100 # cluster or an IPv6 only cluster. The value should be the same as the configuration for kube-apiserver specified by +3101 # --service-cluster-ip-range. When AntreaProxy is enabled, this parameter is not needed. +3102 # No default value for this field. +3103 serviceCIDRv6: "" + + +3097 serviceCIDR: "10.96.0.0/16" +3098 +3099 # ClusterIP CIDR range for IPv6 Services. It's required when using kube-proxy to provide IPv6 Service in a Dual-Stack +3100 # cluster or an IPv6 only cluster. The value should be the same as the configuration for kube-apiserver specified by +3101 # --service-cluster-ip-range. When AntreaProxy is enabled, this parameter is not needed. +3102 # No default value for this field. 
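# 注:下面的 serviceCIDRv6 需与前文 kubeadm-config.yaml 中 serviceSubnet 的 IPv6 网段(2005::/110)保持一致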
+3103 serviceCIDRv6: "2005::/110" + +~~~ + + + + + +## 5.3 应用antrea部署文件 + + + +~~~powershell +[root@k8s-master01 ~]# kubectl create -f antrea.yml +~~~ + + + +~~~powershell +输出内容: +customresourcedefinition.apiextensions.k8s.io/antreaagentinfos.crd.antrea.io created +customresourcedefinition.apiextensions.k8s.io/antreacontrollerinfos.crd.antrea.io created +customresourcedefinition.apiextensions.k8s.io/clustergroups.crd.antrea.io created +customresourcedefinition.apiextensions.k8s.io/clusternetworkpolicies.crd.antrea.io created +customresourcedefinition.apiextensions.k8s.io/egresses.crd.antrea.io created +customresourcedefinition.apiextensions.k8s.io/externalentities.crd.antrea.io created +customresourcedefinition.apiextensions.k8s.io/externalippools.crd.antrea.io created +customresourcedefinition.apiextensions.k8s.io/externalnodes.crd.antrea.io created +customresourcedefinition.apiextensions.k8s.io/ippools.crd.antrea.io created +customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.antrea.io created +customresourcedefinition.apiextensions.k8s.io/supportbundlecollections.crd.antrea.io created +customresourcedefinition.apiextensions.k8s.io/tiers.crd.antrea.io created +customresourcedefinition.apiextensions.k8s.io/traceflows.crd.antrea.io created +customresourcedefinition.apiextensions.k8s.io/trafficcontrols.crd.antrea.io created +serviceaccount/antrea-agent created +serviceaccount/antctl created +serviceaccount/antrea-controller created +secret/antrea-agent-service-account-token created +secret/antctl-service-account-token created +configmap/antrea-config created +customresourcedefinition.apiextensions.k8s.io/groups.crd.antrea.io created +clusterrole.rbac.authorization.k8s.io/antrea-agent created +clusterrole.rbac.authorization.k8s.io/antctl created +clusterrole.rbac.authorization.k8s.io/antrea-cluster-identity-reader created +clusterrole.rbac.authorization.k8s.io/antrea-controller created +clusterrole.rbac.authorization.k8s.io/aggregate-antrea-policies-edit created +clusterrole.rbac.authorization.k8s.io/aggregate-antrea-policies-view created +clusterrole.rbac.authorization.k8s.io/aggregate-traceflows-edit created +clusterrole.rbac.authorization.k8s.io/aggregate-traceflows-view created +clusterrole.rbac.authorization.k8s.io/aggregate-antrea-clustergroups-edit created +clusterrole.rbac.authorization.k8s.io/aggregate-antrea-clustergroups-view created +clusterrolebinding.rbac.authorization.k8s.io/antrea-agent created +clusterrolebinding.rbac.authorization.k8s.io/antctl created +clusterrolebinding.rbac.authorization.k8s.io/antrea-controller created +service/antrea created +daemonset.apps/antrea-agent created +deployment.apps/antrea-controller created +apiservice.apiregistration.k8s.io/v1beta2.controlplane.antrea.io created +apiservice.apiregistration.k8s.io/v1beta1.system.antrea.io created +apiservice.apiregistration.k8s.io/v1alpha1.stats.antrea.io created +mutatingwebhookconfiguration.admissionregistration.k8s.io/crdmutator.antrea.io created +validatingwebhookconfiguration.admissionregistration.k8s.io/crdvalidator.antrea.io created +~~~ + + + + + +## 5.4 验证antrea部署是否成功 + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get pods -n kube-system +NAME READY STATUS RESTARTS AGE +antrea-agent-2fzm4 2/2 Running 0 99s +antrea-agent-9jp7g 2/2 Running 0 99s +antrea-agent-vkmk4 2/2 Running 0 99s +antrea-controller-789587f966-j62zq 1/1 Running 0 99s +coredns-787d4945fb-82tmg 1/1 Running 0 37m +coredns-787d4945fb-vdsln 1/1 Running 0 37m +etcd-k8s-master01 1/1 Running 0 38m 
+kube-apiserver-k8s-master01 1/1 Running 0 38m +kube-controller-manager-k8s-master01 1/1 Running 0 38m +kube-proxy-4pvpv 1/1 Running 0 37m +kube-proxy-4szqs 1/1 Running 0 18m +kube-proxy-sl8h5 1/1 Running 0 23m +kube-scheduler-k8s-master01 1/1 Running 0 38m +~~~ + + + +## 5.5 查看K8S集群节点主机路由信息 + + + +~~~powershell +[root@k8s-master01 ~]# route +Kernel IP routing table +Destination Gateway Genmask Flags Metric Ref Use Iface +default gateway 0.0.0.0 UG 100 0 0 ens33 +10.244.0.0 0.0.0.0 255.255.255.128 U 0 0 0 antrea-gw0 +10.244.0.128 k8s-worker01 255.255.255.128 UG 0 0 0 ens33 +10.244.1.0 k8s-worker02 255.255.255.128 UG 0 0 0 ens33 +172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0 +192.168.10.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33 +192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0 +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# route -6 +Kernel IPv6 routing table +Destination Next Hop Flag Met Ref Use If +[::]/96 [::] !n 1024 2 0 lo +0.0.0.0/96 [::] !n 1024 2 0 lo +2002:a00::/24 [::] !n 1024 1 0 lo +2002:7f00::/24 [::] !n 1024 2 0 lo +2002:a9fe::/32 [::] !n 1024 1 0 lo +2002:ac10::/28 [::] !n 1024 2 0 lo +2002:c0a8::/32 [::] !n 1024 1 0 lo +2002:e000::/19 [::] !n 1024 4 0 lo +2003::/64 [::] U 100 7 0 ens33 +2004::/80 [::] U 256 3 0 antrea-gw0 +2004::1:0:0:0/80 k8s-worker01 UG 1024 1 0 ens33 +2004::2:0:0:0/80 k8s-worker02 UG 1024 1 0 ens33 +3ffe:ffff::/32 [::] !n 1024 1 0 lo +fe80::/64 [::] U 100 1 0 ens33 +[::]/0 gateway UG 100 5 0 ens33 +localhost/128 [::] Un 0 7 0 lo +2003::/128 [::] Un 0 3 0 ens33 +k8s-master01/128 [::] Un 0 8 0 ens33 +2004::/128 [::] Un 0 3 0 antrea-gw0 +k8s-master01/128 [::] Un 0 3 0 antrea-gw0 +fe80::/128 [::] Un 0 3 0 ens33 +fe80::/128 [::] Un 0 3 0 antrea-gw0 +k8s-master01/128 [::] Un 0 4 0 ens33 +k8s-master01/128 [::] Un 0 3 0 antrea-gw0 +ff00::/8 [::] U 256 3 0 ens33 +ff00::/8 [::] U 256 2 0 antrea-gw0 +[::]/0 [::] !n -1 1 0 lo +~~~ + + + +> 需要在外部设备上设置静态路由使得pod可路由 + + + +# 六、测试双栈协议可用性 + +## 6.1 IPv6访问测试 + + + +~~~powershell +[root@k8s-master01 ~]# vim deployment.yaml +[root@k8s-master01 ~]# cat deployment.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: nginxweb + name: nginxweb +spec: + replicas: 2 + selector: + matchLabels: + app: nginxweb + template: + metadata: + labels: + app: nginxweb + spec: + containers: + - name: nginxweb + image: nginx:latest + imagePullPolicy: IfNotPresent + ports: + - containerPort: 80 + name: http + protocol: TCP +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl apply -f deployment.yaml +deployment.apps/nginxweb created +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get pods +NAME READY STATUS RESTARTS AGE +nginxweb-5c77c86f4d-9fdgh 1/1 Running 0 4s +nginxweb-5c77c86f4d-hzk6x 1/1 Running 0 4s +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get pods -o yaml | grep ip + - ip: 10.244.0.130 + - ip: 2004::1:0:0:2 + - ip: 10.244.1.2 + - ip: 2004::2:0:0:2 +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl describe pods nginxweb-5c77c86f4d-9fdgh +Name: nginxweb-5c77c86f4d-9fdgh +Namespace: default +Priority: 0 +Node: k8s-worker01/192.168.10.161 +Start Time: Fri, 24 Mar 2023 19:56:16 +0800 +Labels: app=nginxweb + pod-template-hash=5c77c86f4d +Annotations: +Status: Running +IP: 10.244.0.130 +IPs: + IP: 10.244.0.130 + IP: 2004::1:0:0:2 +Controlled By: ReplicaSet/nginxweb-5c77c86f4d +Containers: + nginxweb: + Container ID: docker://68139623df054e7eb47f8a0fdb3891dc36c926ef36edc5b4c4dc25e81ffe3d01 + Image: nginx:latest + Image ID: 
docker-pullable://nginx@sha256:617661ae7846a63de069a85333bb4778a822f853df67fe46a688b3f0e4d9cb87 + Port: 80/TCP + Host Port: 0/TCP + State: Running + Started: Fri, 24 Mar 2023 19:56:18 +0800 + Ready: True + Restart Count: 0 + Environment: + Mounts: + /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dgjq4 (ro) +Conditions: + Type Status + Initialized True + Ready True + ContainersReady True + PodScheduled True +Volumes: + kube-api-access-dgjq4: + Type: Projected (a volume that contains injected data from multiple sources) + TokenExpirationSeconds: 3607 + ConfigMapName: kube-root-ca.crt + ConfigMapOptional: + DownwardAPI: true +QoS Class: BestEffort +Node-Selectors: +Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s + node.kubernetes.io/unreachable:NoExecute op=Exists for 300s +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Scheduled 75s default-scheduler Successfully assigned default/nginxweb-5c77c86f4d-9fdgh to k8s-worker01 + Normal Pulled 74s kubelet Container image "nginx:latest" already present on machine + Normal Created 74s kubelet Created container nginxweb + Normal Started 73s kubelet Started container nginxweb +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# ping6 2004::1:0:0:2 +PING 2004::1:0:0:2(2004::1:0:0:2) 56 data bytes +64 bytes from 2004::1:0:0:2: icmp_seq=8 ttl=62 time=2049 ms +64 bytes from 2004::1:0:0:2: icmp_seq=9 ttl=62 time=1024 ms +64 bytes from 2004::1:0:0:2: icmp_seq=10 ttl=62 time=2.42 ms +64 bytes from 2004::1:0:0:2: icmp_seq=11 ttl=62 time=0.370 ms +64 bytes from 2004::1:0:0:2: icmp_seq=12 ttl=62 time=0.651 ms +64 bytes from 2004::1:0:0:2: icmp_seq=13 ttl=62 time=1.30 ms +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# ping6 2004::2:0:0:2 +PING 2004::2:0:0:2(2004::2:0:0:2) 56 data bytes +64 bytes from 2004::2:0:0:2: icmp_seq=1 ttl=62 time=1.35 ms +64 bytes from 2004::2:0:0:2: icmp_seq=2 ttl=62 time=1.24 ms +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# curl -g -6 [2004::1:0:0:2] + + + +Welcome to nginx! + + + +

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

Thank you for using nginx.

+ + +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# curl -g -6 [2004::2:0:0:2] + + + +Welcome to nginx! + + + +

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

+ + +~~~ + + + +## 6.2 双栈service测试 + +Kubernetes Service 的 IPv4 family 支持下列选项,默认为 IPv4 SingleStack: + +- SingleStack +- PreferDualStack +- RequireDualStack + + + +~~~powershell +[root@k8s-master01 ~]# vim service.yaml +[root@k8s-master01 ~]# cat service.yaml +apiVersion: v1 +kind: Service +metadata: + name: nginxweb-v6 +spec: + selector: + app: nginxweb + ports: + - protocol: TCP + port: 80 + targetPort: 80 + type: NodePort + ipFamilyPolicy: RequireDualStack +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl create -f service.yaml +service/nginxweb-v6 created +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +kubernetes ClusterIP 10.96.0.1 443/TCP 63m +nginxweb-v6 NodePort 10.96.53.221 80:32697/TCP 22s +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get svc -o yaml +apiVersion: v1 +items: +- apiVersion: v1 + kind: Service + metadata: + creationTimestamp: "2023-03-24T11:12:51Z" + labels: + component: apiserver + provider: kubernetes + name: kubernetes + namespace: default + resourceVersion: "205" + uid: d1884296-7060-4dab-961b-3537de7490d0 + spec: + clusterIP: 10.96.0.1 + clusterIPs: + - 10.96.0.1 + internalTrafficPolicy: Cluster + ipFamilies: + - IPv4 + ipFamilyPolicy: SingleStack + ports: + - name: https + port: 443 + protocol: TCP + targetPort: 6443 + sessionAffinity: None + type: ClusterIP + status: + loadBalancer: {} +- apiVersion: v1 + kind: Service + metadata: + creationTimestamp: "2023-03-24T12:16:26Z" + name: nginxweb-v6 + namespace: default + resourceVersion: "6905" + uid: c2baa4f5-1ec8-43d3-9fc9-77f9236a4eba + spec: + clusterIP: 10.96.53.221 + clusterIPs: + - 10.96.53.221 + - 2005::2c47 + externalTrafficPolicy: Cluster + internalTrafficPolicy: Cluster + ipFamilies: + - IPv4 + - IPv6 + ipFamilyPolicy: RequireDualStack + ports: + - nodePort: 32697 + port: 80 + protocol: TCP + targetPort: 80 + selector: + app: nginxweb + sessionAffinity: None + type: NodePort + status: + loadBalancer: {} +kind: List +metadata: + resourceVersion: "" + selfLink: "" +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl describe svc nginxweb-v6 +Name: nginxweb-v6 +Namespace: default +Labels: +Annotations: +Selector: app=nginxweb +Type: NodePort +IP Family Policy: RequireDualStack +IP Families: IPv4,IPv6 +IP: 10.96.53.221 +IPs: 10.96.53.221,2005::2c47 +Port: 80/TCP +TargetPort: 80/TCP +NodePort: 32697/TCP +Endpoints: 10.244.0.130:80,10.244.1.2:80 +Session Affinity: None +External Traffic Policy: Cluster +Events: +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# curl -g -6 [2005::2c47] + + + +Welcome to nginx! + + + +

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

+ + +~~~ + + + + + + + +![image-20230324202324615](../../img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324202324615.png) + + + + + +![image-20230324202133124](../../img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324202133124.png) + + + diff --git a/docs/cloud/kubernetes/kubernetes_rancher.md b/docs/cloud/kubernetes/kubernetes_rancher.md new file mode 100644 index 00000000..82ec2238 --- /dev/null +++ b/docs/cloud/kubernetes/kubernetes_rancher.md @@ -0,0 +1,1137 @@ +# Rancher容器云管理平台 + +# 一、主机硬件说明 + +| 序号 | 硬件 | 操作及内核 | +| ---- | ------------------------- | ---------- | +| 1 | CPU 4 Memory 4G Disk 100G | CentOS7 | +| 2 | CPU 4 Memory 4G Disk 100G | CentOS7 | +| 3 | CPU 4 Memory 4G Disk 100G | CentOS7 | +| 4 | CPU 4 Memory 4G Disk 100G | CentOS7 | + + + +# 二、主机配置 + +## 2.1 主机名 + +~~~powershell +# hostnamectl set-hostname rancherserver +~~~ + + + +~~~powershell +# hostnamectl set-hostname k8s-master01 +~~~ + + + +~~~powershell +# hostnamectl set-hostname k8s-worker01 +~~~ + + + +~~~powershell +# hostnamectl set-hostname k8s-worker02 +~~~ + + + + + + + +## 2.2 IP地址 + +~~~powershell +[root@rancherserver ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33 +# cat /etc/sysconfig/network-scripts/ifcfg-ens33 +TYPE="Ethernet" +PROXY_METHOD="none" +BROWSER_ONLY="no" +BOOTPROTO="none" +DEFROUTE="yes" +IPV4_FAILURE_FATAL="no" +IPV6INIT="yes" +IPV6_AUTOCONF="yes" +IPV6_DEFROUTE="yes" +IPV6_FAILURE_FATAL="no" +IPV6_ADDR_GEN_MODE="stable-privacy" +NAME="ens33" +UUID="ec87533a-8151-4aa0-9d0f-1e970affcdc6" +DEVICE="ens33" +ONBOOT="yes" +IPADDR="192.168.10.130" +PREFIX="24" +GATEWAY="192.168.10.2" +DNS1="119.29.29.29" +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33 +[root@k8s-master01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33 +TYPE="Ethernet" +PROXY_METHOD="none" +BROWSER_ONLY="no" +BOOTPROTO="none" +DEFROUTE="yes" +IPV4_FAILURE_FATAL="no" +IPV6INIT="yes" +IPV6_AUTOCONF="yes" +IPV6_DEFROUTE="yes" +IPV6_FAILURE_FATAL="no" +IPV6_ADDR_GEN_MODE="stable-privacy" +NAME="ens33" +UUID="ec87533a-8151-4aa0-9d0f-1e970affcdc6" +DEVICE="ens33" +ONBOOT="yes" +IPADDR="192.168.10.131" +PREFIX="24" +GATEWAY="192.168.10.2" +DNS1="119.29.29.29" +~~~ + + + +~~~powershell +[root@k8s-worker01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33 +[root@k8s-worker01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33 +TYPE="Ethernet" +PROXY_METHOD="none" +BROWSER_ONLY="no" +BOOTPROTO="none" +DEFROUTE="yes" +IPV4_FAILURE_FATAL="no" +IPV6INIT="yes" +IPV6_AUTOCONF="yes" +IPV6_DEFROUTE="yes" +IPV6_FAILURE_FATAL="no" +IPV6_ADDR_GEN_MODE="stable-privacy" +NAME="ens33" +UUID="ec87533a-8151-4aa0-9d0f-1e970affcdc6" +DEVICE="ens33" +ONBOOT="yes" +IPADDR="192.168.10.132" +PREFIX="24" +GATEWAY="192.168.10.2" +DNS1="119.29.29.29" +~~~ + + + +~~~powershell +[root@k8s-worker02 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33 +[root@k8s-worker02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33 +TYPE="Ethernet" +PROXY_METHOD="none" +BROWSER_ONLY="no" +BOOTPROTO="none" +DEFROUTE="yes" +IPV4_FAILURE_FATAL="no" +IPV6INIT="yes" +IPV6_AUTOCONF="yes" +IPV6_DEFROUTE="yes" +IPV6_FAILURE_FATAL="no" +IPV6_ADDR_GEN_MODE="stable-privacy" +NAME="ens33" +UUID="ec87533a-8151-4aa0-9d0f-1e970affcdc6" +DEVICE="ens33" +ONBOOT="yes" +IPADDR="192.168.10.133" +PREFIX="24" +GATEWAY="192.168.10.2" +DNS1="119.29.29.29" +~~~ + + + + + +## 2.3 主机名与IP地址解析 + + + +~~~powershell +# cat /etc/hosts +127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 +::1 localhost localhost.localdomain 
localhost6 localhost6.localdomain6 +192.168.10.130 rancherserver +192.168.10.131 k8s-master01 +192.168.10.132 k8s-worker01 +192.168.10.133 k8s-worker02 +~~~ + + + + + +## 2.4 主机安全设置 + +~~~powershell +# systemctl stop firewalld;systemctl disable firewalld + +# firewall-cmd --state +not running +~~~ + + + +~~~powershell +# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config +~~~ + + + + + +## 2.5 主机时钟同步 + + + +~~~powershell +# crontab -l +0 */1 * * * ntpdate time1.aliyun.com +~~~ + + + +## 2.6 关闭swap + +> 关闭k8s集群节点swap + + + +~~~powershell +# cat /etc/fstab + +默认开启,修改后关闭 +#/dev/mapper/centos-swap swap swap defaults 0 0 +~~~ + + + +~~~powershell +临时关闭所有 +# swapoff -a +~~~ + + + +## 2.7 配置内核路由转发 + + + +~~~powershell +# vim /etc/sysctl.conf +# cat /etc/sysctl.conf +... +net.ipv4.ip_forward=1 +~~~ + + + +~~~powershell +# sysctl -p +net.ipv4.ip_forward = 1 +~~~ + + + +# 三、docker-ce安装 + +> 所有主机安装docker-ce + + + +~~~powershell +# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo +~~~ + + + +~~~powershell +# yum -y install docker-ce +~~~ + + + +~~~powershell +# systemctl enable --now docker +~~~ + + + +# 四、rancher安装 + +![image-20220816144830488](../../img/kubernetes/kubernetes_rancher/image-20220816144830488.png) + + + + + +![image-20220816144953656](../../img/kubernetes/kubernetes_rancher/image-20220816144953656.png) + + + + + +~~~powershell +[root@rancherserver ~]# docker pull rancher/rancher:v2.5.15 +~~~ + + + +~~~powershell +[root@rancherserver ~]# mkdir -p /opt/data/rancher_data +~~~ + + + +~~~powershell +[root@rancherserver ~]# docker run -d --privileged -p 80:80 -p 443:443 -v /opt/data/rancher_data:/var/lib/rancher --restart=always --name rancher-2.5.15 rancher/rancher:v2.5.15 +~~~ + + + +~~~powershell +[root@rancherserver ~]# docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +99e367eb35a3 rancher/rancher:v2.5.15 "entrypoint.sh" 26 seconds ago Up 26 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp rancher-2.5.15 +~~~ + + + +# 五、通过Rancher部署kubernetes集群 + +## 5.1 Rancher访问 + + + +![image-20220816150151634](../../img/kubernetes/kubernetes_rancher/image-20220816150151634.png) + + + +![image-20220816150209913](../../img/kubernetes/kubernetes_rancher/image-20220816150209913.png) + + + + + +![image-20220816150231704](../../img/kubernetes/kubernetes_rancher/image-20220816150231704.png) + + + +![image-20220816150305037](../../img/kubernetes/kubernetes_rancher/image-20220816150305037.png) + +> 本次密码为Kubemsb123 + + + +![image-20220816150322211](../../img/kubernetes/kubernetes_rancher/image-20220816150322211.png) + + + +![image-20220816150450409](../../img/kubernetes/kubernetes_rancher/image-20220816150450409.png) + + + + + +## 5.2 通过Rancher创建Kubernetes集群 + + + +![image-20220816150706088](../../img/kubernetes/kubernetes_rancher/image-20220816150706088.png) + + + +![image-20220816150736103](../../img/kubernetes/kubernetes_rancher/image-20220816150736103.png) + + + + + +![image-20220816150900071](../../img/kubernetes/kubernetes_rancher/image-20220816150900071.png) + + + + + +![image-20220816151001001](../../img/kubernetes/kubernetes_rancher/image-20220816151001001.png) + + + +![image-20220816151039880](../../img/kubernetes/kubernetes_rancher/image-20220816151039880.png) + + + + + +![image-20220816151117519](../../img/kubernetes/kubernetes_rancher/image-20220816151117519.png) + + + +![image-20220816151131738](../../img/kubernetes/kubernetes_rancher/image-20220816151131738.png) + + + 
+![image-20220816151144464](../../img/kubernetes/kubernetes_rancher/image-20220816151144464.png) + + + + + +![image-20220816151157176](../../img/kubernetes/kubernetes_rancher/image-20220816151157176.png) + + + +![image-20220816151225471](../../img/kubernetes/kubernetes_rancher/image-20220816151225471.png) + + + +![image-20220816151259299](../../img/kubernetes/kubernetes_rancher/image-20220816151259299.png) + + + + + +添加master节点主机 + + + +![image-20220816151352377](../../img/kubernetes/kubernetes_rancher/image-20220816151352377.png) + + + +~~~powershell +[root@k8s-master01 ~]# docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.5.15 --server https://192.168.10.130 --token 7985nkpc48854kwmgh6pnfb7hcrkwhlcmx6wxq8tb4vszxn2qv9xdd --ca-checksum f6d5f24fcd41aa057a205d4f6922dfd309001126d040431222bfba7aa93517b7 --etcd --controlplane --worker +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +8e7e73b477dc rancher/rancher-agent:v2.5.15 "run.sh --server htt…" 20 seconds ago Up 18 seconds brave_ishizaka +~~~ + + + + + + + +添加 worker节点 + + + +![image-20220816151709536](../../img/kubernetes/kubernetes_rancher/image-20220816151709536.png) + + + + + +![image-20220816151731937](../../img/kubernetes/kubernetes_rancher/image-20220816151731937.png) + + + +![image-20220816151746413](../../img/kubernetes/kubernetes_rancher/image-20220816151746413.png) + + + + + +~~~powershell +[root@k8s-worker01 ~]# docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.5.15 --server https://192.168.10.130 --token 7985nkpc48854kwmgh6pnfb7hcrkwhlcmx6wxq8tb4vszxn2qv9xdd --ca-checksum f6d5f24fcd41aa057a205d4f6922dfd309001126d040431222bfba7aa93517b7 --worker +~~~ + + + +~~~powershell +[root@k8s-worker02 ~]# docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.5.15 --server https://192.168.10.130 --token 7985nkpc48854kwmgh6pnfb7hcrkwhlcmx6wxq8tb4vszxn2qv9xdd --ca-checksum f6d5f24fcd41aa057a205d4f6922dfd309001126d040431222bfba7aa93517b7 --worker +~~~ + + + +![image-20220816152002056](../../img/kubernetes/kubernetes_rancher/image-20220816152002056.png) + +所有节点激活后状态 + + + +![image-20220816152858892](../../img/kubernetes/kubernetes_rancher/image-20220816152858892.png) + +![image-20220816152917840](../../img/kubernetes/kubernetes_rancher/image-20220816152917840.png) + + + +![image-20220816153009799](../../img/kubernetes/kubernetes_rancher/image-20220816153009799.png) + + + + + +# 六、配置通过命令行访问Kubernetes集群 + + + +![image-20220816153241393](../../img/kubernetes/kubernetes_rancher/image-20220816153241393.png) + + + + + +![image-20220816153301099](../../img/kubernetes/kubernetes_rancher/image-20220816153301099.png) + + + + + +![image-20220816153357410](../../img/kubernetes/kubernetes_rancher/image-20220816153357410.png) + + + + + +在集群节点命令行,如果访问呢? 
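> 思路(概括,具体步骤见下文):在集群节点上安装 kubectl,并把 Rancher 页面提供的 kubeconfig 内容保存为 ~/.kube/config,即可在命令行访问集群。下面是一个操作示意,命令与后文步骤一致:

~~~powershell
# 安装 kubectl(后文使用阿里云 kubernetes 软件源)
yum -y install kubectl

# 将 Rancher 界面 "Kubeconfig 文件" 中的内容保存为 ~/.kube/config
mkdir -p ~/.kube
vim ~/.kube/config    # 粘贴 Rancher 提供的 kubeconfig 内容

# 验证
kubectl get nodes
~~~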
+ + + +![image-20220816153917499](../../img/kubernetes/kubernetes_rancher/image-20220816153917499.png) + + + + + +![image-20220816154006380](../../img/kubernetes/kubernetes_rancher/image-20220816154006380.png) + + + + + + + +~~~powershell +[root@k8s-master01 ~]# cat < /etc/yum.repos.d/kubernetes.repo +[kubernetes] +name=Kubernetes +baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/ +enabled=1 +gpgcheck=1 +repo_gpgcheck=1 +gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg +EOF +~~~ + + + +修改gpgcheck=0及修改repo_gpgcheck=0 + + + +~~~powershell +[root@k8s-master01 ~]# vim /etc/yum.repos.d/kubernetes.repo +[root@k8s-master01 ~]# cat /etc/yum.repos.d/kubernetes.repo +[kubernetes] +name=Kubernetes +baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/ +enabled=1 +gpgcheck=0 +repo_gpgcheck=0 +gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# yum -y install kubectl +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# mkdir ~/.kube +~~~ + + + +![image-20220816155110578](../../img/kubernetes/kubernetes_rancher/image-20220816155110578.png) + + + +![image-20220816155140075](../../img/kubernetes/kubernetes_rancher/image-20220816155140075.png) + + + +~~~powershell +[root@k8s-master01 ~]# vim ~/.kube/config +[root@k8s-master01 ~]# cat ~/.kube/config +apiVersion: v1 +kind: Config +clusters: +- name: "kubemsb-smart-1" + cluster: + server: "https://192.168.10.130/k8s/clusters/c-5jtsf" + certificate-authority-data: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJwekNDQ\ + VUyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQTdNUnd3R2dZRFZRUUtFeE5rZVc1aGJXbGoKY\ + kdsemRHVnVaWEl0YjNKbk1Sc3dHUVlEVlFRREV4SmtlVzVoYldsamJHbHpkR1Z1WlhJdFkyRXdIa\ + GNOTWpJdwpPREUyTURZMU9UVTBXaGNOTXpJd09ERXpNRFkxT1RVMFdqQTdNUnd3R2dZRFZRUUtFe\ + E5rZVc1aGJXbGpiR2x6CmRHVnVaWEl0YjNKbk1Sc3dHUVlEVlFRREV4SmtlVzVoYldsamJHbHpkR\ + 1Z1WlhJdFkyRXdXVEFUQmdjcWhrak8KUFFJQkJnZ3Foa2pPUFFNQkJ3TkNBQVJMbDZKOWcxMlJQT\ + G93dnVHZkM0YnR3ZmhHUDBpR295N1U2cjBJK0JZeAozZCtuVDBEc3ZWOVJWV1BDOGZCdGhPZmJQN\ + GNYckx5YzJsR081RHkrSXRlM28wSXdRREFPQmdOVkhROEJBZjhFCkJBTUNBcVF3RHdZRFZSMFRBU\ + UgvQkFVd0F3RUIvekFkQmdOVkhRNEVGZ1FVNnZYWXBRYm9IdXF0UlBuS1FrS3gKMjBSZzJqMHdDZ\ + 1lJS29aSXpqMEVBd0lEU0FBd1JRSWdMTUJ6YXNDREU4T2tCUk40TWRuZWNRU0xjMFVXQmNseApGO\ + UFCem1MQWQwb0NJUUNlRWFnRkdBa1ZsUnV1czllSE1VRUx6ODl6VlY5L096b3hvY1ROYnA5amlBP\ + T0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ==" +- name: "kubemsb-smart-1-k8s-master01" + cluster: + server: "https://192.168.10.131:6443" + certificate-authority-data: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM0VENDQ\ + WNtZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkcmRXSmwKT\ + FdOaE1CNFhEVEl5TURneE5qQTNNVFV3TmxvWERUTXlNRGd4TXpBM01UVXdObG93RWpFUU1BNEdBM\ + VVFQXhNSAphM1ZpWlMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ\ + 0VCQUt4Qkh3S0RFZE5WCm1tU2JFZDZXaTZzRFNXcklPZEZ5dFN5Z1puVmk2cXFkYmxXODZRQ1Y1U\ + WdEckI5QU43aDJ1RHRZMlFiNGZOTmEKQWZkSVVhS2tjZ0taNnplS1Z3eEdRYkptcEovSk5yYWw2N\ + ENldng5QTU1UUFBL29FSzBVbkdackliSjQ5dEl4awp1TnMwNFVIRWxVVUZpWjlmckdBQU9sK3lVa\ + GxXQUlLQzhmMUZSeVhpaVZEN2FTcTVodHNWeC9qczBBUUo3R2dFCjMxQUdRcmF4S2s0STVCN1hBY\ + 1pybHdrS1ljaGFPZnBlNkV6Ly9HZXVFekR5VnN3a09uK2h0ZGNIMlJVSHozUlcKWWVTMGw2ZzZpO\ + HcyUXRUakNwTUtId1FRTmQwSjdFM0k1aS9CRVA0azhXSHZIYjBkQk8ydVkyTml1cmNMWWw4dgpHO\ + DUyZ2ZibWt2OENBd0VBQWFOQ01FQXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93U\ + 
UZNQU1CCkFmOHdIUVlEVlIwT0JCWUVGQkg4VzVBbmxKYVNrVXowSzNobms1Vm55MVNnTUEwR0NTc\ + UdTSWIzRFFFQkN3VUEKQTRJQkFRQmE0WmtsWmRINUFCOTNWaXhYOUhnMEYwYXdVZWduNkVSRGtRQ\ + VBlcHZNaG5ON1lyVGlFN3lUSGxvWApLNS9ROTJ5Y2FnRGVlNjlEbHpvWEppTlNzdEZWYmtaSVN0O\ + HVRZFhCYjFoSUtzbXBVYWlSeXFoRmVjbnRaSi85CmhCWmZMRjZnNitBNUlvVGxYOThqMERVU21IV\ + is2Q29raXhPV3ZESmJ6dkI2S3VXdnhQbTF5WFgveVpBTDd1U1gKcUNnTC84UjJjSm53dUZhTnFvS\ + 3I3STE5bDBRNi9VWWQ0bWhralpmUTdqdGlraEpmQXpWRUFtWlVza0hZSkRtdwp6bzJKMUJLL0Jxb\ + m8rSFplbThFTExpK1ZhRXVlR280blF4ZmpaSlF2MWFXZHhCMnRrUWovdWNUa1QxRU1kUFBsCm0rd\ + kh2MWtYNW5BaHl5eWR0dG12UGRlOWFHOUwKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=" + +users: +- name: "kubemsb-smart-1" + user: + token: "kubeconfig-user-9cn9x.c-5jtsf:x57644qvmbfqpmh78fb4cbdnm8zbbxk9hmjb2bjggl5j2hvwnvj4c9" + + +contexts: +- name: "kubemsb-smart-1" + context: + user: "kubemsb-smart-1" + cluster: "kubemsb-smart-1" +- name: "kubemsb-smart-1-k8s-master01" + context: + user: "kubemsb-smart-1" + cluster: "kubemsb-smart-1-k8s-master01" + +current-context: "kubemsb-smart-1" +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +k8s-master01 Ready controlplane,etcd,worker 35m v1.20.15 +k8s-worker01 Ready worker 31m v1.20.15 +k8s-worker02 Ready worker 27m v1.20.15 +~~~ + + + +# 七、通过Rancher部署Nginx应用 + + + +![image-20220816155537732](../../img/kubernetes/kubernetes_rancher/image-20220816155537732.png) + + + + + +![image-20220816155741848](../../img/kubernetes/kubernetes_rancher/image-20220816155741848.png) + + + +![image-20220816155932470](../../img/kubernetes/kubernetes_rancher/image-20220816155932470.png) + + + +![image-20220816160056177](../../img/kubernetes/kubernetes_rancher/image-20220816160056177.png) + + + +![image-20220816160200746](../../img/kubernetes/kubernetes_rancher/image-20220816160200746.png) + + + + + +![image-20220816160254842](../../img/kubernetes/kubernetes_rancher/image-20220816160254842.png) + + + + + +![image-20220816160423255](../../img/kubernetes/kubernetes_rancher/image-20220816160423255.png) + + + + + +![image-20220816160443666](../../img/kubernetes/kubernetes_rancher/image-20220816160443666.png) + + + +![image-20220816160733605](../../img/kubernetes/kubernetes_rancher/image-20220816160733605.png) + + + +![image-20220816160825424](../../img/kubernetes/kubernetes_rancher/image-20220816160825424.png) + + + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get svc -n kubemsbf-1 +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +myapp-1 ClusterIP 10.43.15.240 80/TCP 4m35s +myapp-1-nodeport NodePort 10.43.214.118 80:32406/TCP 4m35s +~~~ + + + + + +![image-20220816162243693](../../img/kubernetes/kubernetes_rancher/image-20220816162243693.png) + + + +# 八、通过Rancher部署mysql数据库 + +## 8.1 持久化存储类准备 + +### 8.1.1 NFS服务 + +~~~powershell +[root@nfsserver ~]# lsblk +NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT +sda 8:0 0 100G 0 disk +├─sda1 8:1 0 1G 0 part /boot +└─sda2 8:2 0 99G 0 part + ├─centos-root 253:0 0 50G 0 lvm / + ├─centos-swap 253:1 0 2G 0 lvm + └─centos-home 253:2 0 47G 0 lvm /home +sdb 8:16 0 100G 0 disk /sdb +~~~ + + + +~~~powershell +[root@nfsserver ~]# mkdir /sdb +~~~ + + + +~~~powershell +[root@nfsserver ~]# mkfs.xfs /dev/sdb +~~~ + + + +~~~powershell +[root@nfsserver ~]#vim /etc/fstab +[root@nfsserver ~]# cat /etc/fstab +...... 
+/dev/sdb /sdb xfs defaults 0 0 +~~~ + + + +~~~powershell +[root@nfsserver ~]# mount -a +~~~ + + + +~~~powershell +[root@nfsserver ~]# vim /etc/exports +[root@nfsserver ~]# cat /etc/exports +/sdb *(rw,sync,no_root_squash) +~~~ + + + +~~~powershell +[root@nfsserver ~]# systemctl enable --now nfs-server +~~~ + + + +~~~powershell +[root@nfsserver ~]# showmount -e +Export list for nfs-server: +/sdb * +~~~ + + + +### 8.1.2 存储卷 + + + +~~~powershell +[root@k8s-master01 ~]# for file in class.yaml deployment.yaml rbac.yaml ; do wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/$file ; done +~~~ + +> 需要修改class.yaml中资源对象名称为nfs-client +> +> 需要修改deployment.yaml中nfs server及其共享的目录、容器对应的镜像。 + + + +~~~powershell + [root@k8s-master01 ~]# kubectl apply -f class.yaml +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl apply -f rbac.yaml +~~~ + + + +~~~powershell +[root@k8s-master01 nfsdir]# vim deployment.yaml +[root@k8s-master01 nfsdir]# cat deployment.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nfs-client-provisioner + labels: + app: nfs-client-provisioner + # replace with namespace where provisioner is deployed + namespace: default +spec: + replicas: 1 + strategy: + type: Recreate + selector: + matchLabels: + app: nfs-client-provisioner + template: + metadata: + labels: + app: nfs-client-provisioner + spec: + serviceAccountName: nfs-client-provisioner + containers: + - name: nfs-client-provisioner + image: registry.cn-beijing.aliyuncs.com/pylixm/nfs-subdir-external-provisioner:v4.0.0 + volumeMounts: + - name: nfs-client-root + mountPath: /persistentvolumes + env: + - name: PROVISIONER_NAME + value: fuseim.pri/ifs + - name: NFS_SERVER + value: 192.168.10.133 + - name: NFS_PATH + value: /sdb + volumes: + - name: nfs-client-root + nfs: + server: 192.168.10.133 + path: /sdb +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl apply -f deployment.yaml +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get sc +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +nfs-client fuseim.pri/ifs Delete Immediate false 109m +~~~ + + + + + +![image-20220816185710948](../../img/kubernetes/kubernetes_rancher/image-20220816185710948.png) + + + + + +## 8.2 MySQL数据库部署 + +### 8.2.1 PVC准备 + + + +![image-20220816202438365](../../img/kubernetes/kubernetes_rancher/image-20220816202438365.png) + + + +![image-20220816202505925](../../img/kubernetes/kubernetes_rancher/image-20220816202505925.png) + + + + + +![image-20220816202814643](../../img/kubernetes/kubernetes_rancher/image-20220816202814643.png) + + + +![image-20220816202832741](../../img/kubernetes/kubernetes_rancher/image-20220816202832741.png) + + + + + +![image-20220816202919580](../../img/kubernetes/kubernetes_rancher/image-20220816202919580.png) + + + +![image-20220816202953810](../../img/kubernetes/kubernetes_rancher/image-20220816202953810.png) + +![image-20220816203153592](../../img/kubernetes/kubernetes_rancher/image-20220816203153592.png) + + + +![image-20220816203212769](../../img/kubernetes/kubernetes_rancher/image-20220816203212769.png) + + + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get pvc -n kubemsbdata +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +myvolume Bound pvc-52460d7f-db89-40ab-b09e-ab9d0cfcaa17 5Gi RWO nfs-client 80s +~~~ + + + +~~~powershell +[root@k8s-master01 ~]# kubectl get pv +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-52460d7f-db89-40ab-b09e-ab9d0cfcaa17 5Gi RWO Delete Bound 
kubemsbdata/myvolume nfs-client 84s +~~~ + + + +~~~powershell +[root@nfsserver ~]# ls /sdb +kubemsbdata-myvolume-pvc-52460d7f-db89-40ab-b09e-ab9d0cfcaa17 +~~~ + + + + + +### 8.2.2 MySQL部署 + + + +![image-20220816203527686](../../img/kubernetes/kubernetes_rancher/image-20220816203527686.png) + + + + + +![image-20220816205034201](../../img/kubernetes/kubernetes_rancher/image-20220816205034201.png) + + + + + +![image-20220816204138975](../../img/kubernetes/kubernetes_rancher/image-20220816204138975.png) + + + + + +![image-20220816204242066](../../img/kubernetes/kubernetes_rancher/image-20220816204242066.png) + + + +![image-20220816204323541](../../img/kubernetes/kubernetes_rancher/image-20220816204323541.png) + + + +![image-20220816204356332](../../img/kubernetes/kubernetes_rancher/image-20220816204356332.png) + + + +![image-20220816204448683](../../img/kubernetes/kubernetes_rancher/image-20220816204448683.png) + + + +![image-20220816205248456](../../img/kubernetes/kubernetes_rancher/image-20220816205248456.png) + + + + + +### 8.2.2 MySQL访问 + +#### 8.2.2.1 方案一 通过Rancher web界面访问 + + + +![image-20220816201739371](../../img/kubernetes/kubernetes_rancher/image-20220816201739371.png) + + + +![image-20220816201826018](../../img/kubernetes/kubernetes_rancher/image-20220816201826018.png) + + + + + +#### 8.2.2.2 方案二 通过主机访问 + +~~~powershell +[root@k8s-master01 ~]# ss -anput | grep ":32666" +tcp LISTEN 0 128 *:32666 *:* users:(("kube-proxy",pid=7654,fd=3)) +~~~ + + + +~~~powershell +[root@k8s-master01 nfsdir]# mysql -h 192.168.10.131 -uroot -p123456 -P 32666 +...... +MySQL [(none)]> show databases; ++--------------------+ +| Database | ++--------------------+ +| information_schema | +| kubemsb | +| mysql | +| performance_schema | +| sys | ++--------------------+ +5 rows in set (0.01 sec) +~~~ + + + + + +# 九、部署wordpress + + + +![image-20220816210148864](../../img/kubernetes/kubernetes_rancher/image-20220816210148864.png) + + + +![image-20220816210220210](../../img/kubernetes/kubernetes_rancher/image-20220816210220210.png) + + + +![image-20220816210402733](../../img/kubernetes/kubernetes_rancher/image-20220816210402733.png) + + + +![image-20220816210440574](../../img/kubernetes/kubernetes_rancher/image-20220816210440574.png) + + + +![image-20220816211312775](../../img/kubernetes/kubernetes_rancher/image-20220816211312775.png) + + + + + +![image-20220816211412434](../../img/kubernetes/kubernetes_rancher/image-20220816211412434.png) + + + + + +~~~powershell +[root@k8s-master01 ~]# dig -t a mysqldata1-0.mysqldata1.kubemsbdata.svc.cluster.local @10.43.0.10 + +; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.8 <<>> -t a mysqldata1-0.mysqldata1.kubemsbdata.svc.cluster.local @10.43.0.10 +;; global options: +cmd +;; Got answer: +;; WARNING: .local is reserved for Multicast DNS +;; You are currently testing what happens when an mDNS query is leaked to DNS +;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 63314 +;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 +;; WARNING: recursion requested but not available + +;; OPT PSEUDOSECTION: +; EDNS: version: 0, flags:; udp: 4096 +;; QUESTION SECTION: +;mysqldata1-0.mysqldata1.kubemsbdata.svc.cluster.local. IN A + +;; ANSWER SECTION: +mysqldata1-0.mysqldata1.kubemsbdata.svc.cluster.local. 
5 IN A 10.42.1.4 + +;; Query time: 0 msec +;; SERVER: 10.43.0.10#53(10.43.0.10) +;; WHEN: 二 8月 16 21:20:18 CST 2022 +;; MSG SIZE rcvd: 151 +~~~ + + + + + +![image-20220816212308034](../../img/kubernetes/kubernetes_rancher/image-20220816212308034.png) + + + + + +![image-20220816212409188](../../img/kubernetes/kubernetes_rancher/image-20220816212409188.png) + + + + + +![image-20220816212543225](../../img/kubernetes/kubernetes_rancher/image-20220816212543225.png) + + + + + +![image-20220816212701687](../../img/kubernetes/kubernetes_rancher/image-20220816212701687.png) + + + + + +![image-20220816212725300](../../img/kubernetes/kubernetes_rancher/image-20220816212725300.png) + + + +![image-20220816212928917](../../img/kubernetes/kubernetes_rancher/image-20220816212928917.png) + + + +![image-20220816212956908](../../img/kubernetes/kubernetes_rancher/image-20220816212956908.png) + + + + + +![image-20220816213049117](../../img/kubernetes/kubernetes_rancher/image-20220816213049117.png) + + + +![image-20220816213120382](../../img/kubernetes/kubernetes_rancher/image-20220816213120382.png) \ No newline at end of file diff --git a/docs/img/kubernetes/kubernetes_calico/1633650192071.png b/docs/img/kubernetes/kubernetes_calico/1633650192071.png new file mode 100644 index 00000000..b3abceb5 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_calico/1633650192071.png differ diff --git a/docs/img/kubernetes/kubernetes_calico/1633651873150.png b/docs/img/kubernetes/kubernetes_calico/1633651873150.png new file mode 100644 index 00000000..d52533f0 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_calico/1633651873150.png differ diff --git a/docs/img/kubernetes/kubernetes_calico/architecture-calico.svg b/docs/img/kubernetes/kubernetes_calico/architecture-calico.svg new file mode 100644 index 00000000..d95f16ca --- /dev/null +++ b/docs/img/kubernetes/kubernetes_calico/architecture-calico.svg @@ -0,0 +1,157 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/docs/img/kubernetes/kubernetes_calico/image-20220722175645610.png b/docs/img/kubernetes/kubernetes_calico/image-20220722175645610.png new file mode 100644 index 00000000..edc02a88 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_calico/image-20220722175645610.png differ diff --git a/docs/img/kubernetes/kubernetes_calico/image-20220722180040443.png b/docs/img/kubernetes/kubernetes_calico/image-20220722180040443.png new file mode 100644 index 00000000..ae5a139e Binary files /dev/null and b/docs/img/kubernetes/kubernetes_calico/image-20220722180040443.png differ diff --git a/docs/img/kubernetes/kubernetes_calico/image-20220722193530812.png b/docs/img/kubernetes/kubernetes_calico/image-20220722193530812.png new file mode 100644 index 00000000..e06efd27 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_calico/image-20220722193530812.png differ diff --git a/docs/img/kubernetes/kubernetes_calico/image-20220902221015457.png b/docs/img/kubernetes/kubernetes_calico/image-20220902221015457.png new file mode 100644 index 00000000..e159a181 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_calico/image-20220902221015457.png differ diff --git a/docs/img/kubernetes/kubernetes_calico/image-20220902222000384.png 
b/docs/img/kubernetes/kubernetes_calico/image-20220902222000384.png new file mode 100644 index 00000000..08a2ed2a Binary files /dev/null and b/docs/img/kubernetes/kubernetes_calico/image-20220902222000384.png differ diff --git a/docs/img/kubernetes/kubernetes_calico/image-20220902222609464.png b/docs/img/kubernetes/kubernetes_calico/image-20220902222609464.png new file mode 100644 index 00000000..a0e30520 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_calico/image-20220902222609464.png differ diff --git a/docs/img/kubernetes/kubernetes_calico/image-20220902224059528.png b/docs/img/kubernetes/kubernetes_calico/image-20220902224059528.png new file mode 100644 index 00000000..d5a0c50d Binary files /dev/null and b/docs/img/kubernetes/kubernetes_calico/image-20220902224059528.png differ diff --git a/docs/img/kubernetes/kubernetes_calico/image-20220902225359861.png b/docs/img/kubernetes/kubernetes_calico/image-20220902225359861.png new file mode 100644 index 00000000..812964d5 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_calico/image-20220902225359861.png differ diff --git a/docs/img/kubernetes/kubernetes_calico/image-20220903002830742.png b/docs/img/kubernetes/kubernetes_calico/image-20220903002830742.png new file mode 100644 index 00000000..a71f0a6c Binary files /dev/null and b/docs/img/kubernetes/kubernetes_calico/image-20220903002830742.png differ diff --git a/docs/img/kubernetes/kubernetes_calico/image-20220903022845285.png b/docs/img/kubernetes/kubernetes_calico/image-20220903022845285.png new file mode 100644 index 00000000..62cecef2 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_calico/image-20220903022845285.png differ diff --git a/docs/img/kubernetes/kubernetes_calico/image-20220905120240844.png b/docs/img/kubernetes/kubernetes_calico/image-20220905120240844.png new file mode 100644 index 00000000..d15418aa Binary files /dev/null and b/docs/img/kubernetes/kubernetes_calico/image-20220905120240844.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/1627541947686.png b/docs/img/kubernetes/kubernetes_flannel/1627541947686.png new file mode 100644 index 00000000..c96b697c Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/1627541947686.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/1627542432899.png b/docs/img/kubernetes/kubernetes_flannel/1627542432899.png new file mode 100644 index 00000000..9a3cd43a Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/1627542432899.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/1633650192071.png b/docs/img/kubernetes/kubernetes_flannel/1633650192071.png new file mode 100644 index 00000000..b3abceb5 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/1633650192071.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/1633651873150.png b/docs/img/kubernetes/kubernetes_flannel/1633651873150.png new file mode 100644 index 00000000..d52533f0 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/1633651873150.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20210922135912555.png b/docs/img/kubernetes/kubernetes_flannel/image-20210922135912555.png new file mode 100644 index 00000000..bbb1552c Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20210922135912555.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220715012035435.png b/docs/img/kubernetes/kubernetes_flannel/image-20220715012035435.png new file mode 100644 index 00000000..73a0c008 
Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220715012035435.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220715092558666.png b/docs/img/kubernetes/kubernetes_flannel/image-20220715092558666.png new file mode 100644 index 00000000..f5618db2 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220715092558666.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220715093007357.png b/docs/img/kubernetes/kubernetes_flannel/image-20220715093007357.png new file mode 100644 index 00000000..b0266932 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220715093007357.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220715125357778.png b/docs/img/kubernetes/kubernetes_flannel/image-20220715125357778.png new file mode 100644 index 00000000..47a1c48c Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220715125357778.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220715125531167.png b/docs/img/kubernetes/kubernetes_flannel/image-20220715125531167.png new file mode 100644 index 00000000..9f2941c5 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220715125531167.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220715125606669.png b/docs/img/kubernetes/kubernetes_flannel/image-20220715125606669.png new file mode 100644 index 00000000..8453f84e Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220715125606669.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220715125758524.png b/docs/img/kubernetes/kubernetes_flannel/image-20220715125758524.png new file mode 100644 index 00000000..5ed93ee9 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220715125758524.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220715125834969.png b/docs/img/kubernetes/kubernetes_flannel/image-20220715125834969.png new file mode 100644 index 00000000..0bf55c9d Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220715125834969.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220715130019232.png b/docs/img/kubernetes/kubernetes_flannel/image-20220715130019232.png new file mode 100644 index 00000000..919345fd Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220715130019232.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220715130129241.png b/docs/img/kubernetes/kubernetes_flannel/image-20220715130129241.png new file mode 100644 index 00000000..7414dbd4 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220715130129241.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220715130201235.png b/docs/img/kubernetes/kubernetes_flannel/image-20220715130201235.png new file mode 100644 index 00000000..aeb087c0 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220715130201235.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220715160251133.png b/docs/img/kubernetes/kubernetes_flannel/image-20220715160251133.png new file mode 100644 index 00000000..d540bc25 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220715160251133.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220715213302865.png b/docs/img/kubernetes/kubernetes_flannel/image-20220715213302865.png new file mode 100644 
index 00000000..5c2c7f48 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220715213302865.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220716075320996.png b/docs/img/kubernetes/kubernetes_flannel/image-20220716075320996.png new file mode 100644 index 00000000..49c3ce5f Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220716075320996.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220716081105875.png b/docs/img/kubernetes/kubernetes_flannel/image-20220716081105875.png new file mode 100644 index 00000000..a16e7279 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220716081105875.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220716103544284.png b/docs/img/kubernetes/kubernetes_flannel/image-20220716103544284.png new file mode 100644 index 00000000..9d73ffca Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220716103544284.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220716105728014.png b/docs/img/kubernetes/kubernetes_flannel/image-20220716105728014.png new file mode 100644 index 00000000..2943d4a3 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220716105728014.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220716105844575.png b/docs/img/kubernetes/kubernetes_flannel/image-20220716105844575.png new file mode 100644 index 00000000..2ee6e285 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220716105844575.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220716110204077.png b/docs/img/kubernetes/kubernetes_flannel/image-20220716110204077.png new file mode 100644 index 00000000..12507eb6 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220716110204077.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220716110306390.png b/docs/img/kubernetes/kubernetes_flannel/image-20220716110306390.png new file mode 100644 index 00000000..aa5761ba Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220716110306390.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220716135622610.png b/docs/img/kubernetes/kubernetes_flannel/image-20220716135622610.png new file mode 100644 index 00000000..19fdc36b Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220716135622610.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220716142721779.png b/docs/img/kubernetes/kubernetes_flannel/image-20220716142721779.png new file mode 100644 index 00000000..28641431 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220716142721779.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220716153757411.png b/docs/img/kubernetes/kubernetes_flannel/image-20220716153757411.png new file mode 100644 index 00000000..8987faf4 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220716153757411.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220721090648605.png b/docs/img/kubernetes/kubernetes_flannel/image-20220721090648605.png new file mode 100644 index 00000000..250a1800 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220721090648605.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220721093742953.png b/docs/img/kubernetes/kubernetes_flannel/image-20220721093742953.png 
new file mode 100644 index 00000000..124a231c Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220721093742953.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220721185041526.png b/docs/img/kubernetes/kubernetes_flannel/image-20220721185041526.png new file mode 100644 index 00000000..dcf40d35 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220721185041526.png differ diff --git a/docs/img/kubernetes/kubernetes_flannel/image-20220722073310706.png b/docs/img/kubernetes/kubernetes_flannel/image-20220722073310706.png new file mode 100644 index 00000000..010540b3 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_flannel/image-20220722073310706.png differ diff --git a/docs/img/kubernetes/kubernetes_hybridnet/image-20230404120724930.png b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404120724930.png new file mode 100644 index 00000000..5394b1cf Binary files /dev/null and b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404120724930.png differ diff --git a/docs/img/kubernetes/kubernetes_hybridnet/image-20230404131326907.png b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404131326907.png new file mode 100644 index 00000000..29bb547d Binary files /dev/null and b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404131326907.png differ diff --git a/docs/img/kubernetes/kubernetes_hybridnet/image-20230404131724427.png b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404131724427.png new file mode 100644 index 00000000..bfb7ec1c Binary files /dev/null and b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404131724427.png differ diff --git a/docs/img/kubernetes/kubernetes_hybridnet/image-20230404150812646.png b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404150812646.png new file mode 100644 index 00000000..302b4364 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404150812646.png differ diff --git a/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151024046.png b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151024046.png new file mode 100644 index 00000000..0bb31df5 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151024046.png differ diff --git a/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151104712.png b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151104712.png new file mode 100644 index 00000000..30e381f6 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151104712.png differ diff --git a/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151314708.png b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151314708.png new file mode 100644 index 00000000..b28ed47b Binary files /dev/null and b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151314708.png differ diff --git a/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151505472.png b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151505472.png new file mode 100644 index 00000000..a6051837 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151505472.png differ diff --git a/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151537521.png b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151537521.png new file mode 100644 index 00000000..d8d94b34 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151537521.png differ diff --git 
a/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151701002.png b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151701002.png new file mode 100644 index 00000000..690131fa Binary files /dev/null and b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151701002.png differ diff --git a/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151837989.png b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151837989.png new file mode 100644 index 00000000..177960ff Binary files /dev/null and b/docs/img/kubernetes/kubernetes_hybridnet/image-20230404151837989.png differ diff --git a/docs/img/kubernetes/kubernetes_hybridnet/image-20230406143957394.png b/docs/img/kubernetes/kubernetes_hybridnet/image-20230406143957394.png new file mode 100644 index 00000000..23f57908 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_hybridnet/image-20230406143957394.png differ diff --git a/docs/img/kubernetes/kubernetes_hybridnet/image-20230406144105191.png b/docs/img/kubernetes/kubernetes_hybridnet/image-20230406144105191.png new file mode 100644 index 00000000..e0274bfd Binary files /dev/null and b/docs/img/kubernetes/kubernetes_hybridnet/image-20230406144105191.png differ diff --git a/docs/img/kubernetes/kubernetes_hybridnet/image-20230406144215187.png b/docs/img/kubernetes/kubernetes_hybridnet/image-20230406144215187.png new file mode 100644 index 00000000..8db7e656 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_hybridnet/image-20230406144215187.png differ diff --git a/docs/img/kubernetes/kubernetes_hybridnet/image-20230406144245003.png b/docs/img/kubernetes/kubernetes_hybridnet/image-20230406144245003.png new file mode 100644 index 00000000..50b65e3a Binary files /dev/null and b/docs/img/kubernetes/kubernetes_hybridnet/image-20230406144245003.png differ diff --git a/docs/img/kubernetes/kubernetes_hybridnet/image-20230406144319092.png b/docs/img/kubernetes/kubernetes_hybridnet/image-20230406144319092.png new file mode 100644 index 00000000..011d8b41 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_hybridnet/image-20230406144319092.png differ diff --git a/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20220507120653090.png b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20220507120653090.png new file mode 100644 index 00000000..83f9cc7b Binary files /dev/null and b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20220507120653090.png differ diff --git a/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20220507120725815.png b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20220507120725815.png new file mode 100644 index 00000000..2557c9f9 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20220507120725815.png differ diff --git a/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230220180035176.png b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230220180035176.png new file mode 100644 index 00000000..61e96878 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230220180035176.png differ diff --git a/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230220180130670.png b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230220180130670.png new file mode 100644 index 00000000..572c4f10 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230220180130670.png differ diff --git a/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324173843525.png b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324173843525.png new file 
mode 100644 index 00000000..af560a2d Binary files /dev/null and b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324173843525.png differ diff --git a/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324173901871.png b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324173901871.png new file mode 100644 index 00000000..b01281ca Binary files /dev/null and b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324173901871.png differ diff --git a/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324174008962.png b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324174008962.png new file mode 100644 index 00000000..2dce773d Binary files /dev/null and b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324174008962.png differ diff --git a/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324174249721.png b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324174249721.png new file mode 100644 index 00000000..6cf30dc5 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324174249721.png differ diff --git a/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324174316268.png b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324174316268.png new file mode 100644 index 00000000..a81a9705 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324174316268.png differ diff --git a/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324202133124.png b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324202133124.png new file mode 100644 index 00000000..e159cd3e Binary files /dev/null and b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324202133124.png differ diff --git a/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324202324615.png b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324202324615.png new file mode 100644 index 00000000..e0c5b5b6 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_ipv4_and_ipv6/image-20230324202324615.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816144713112.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816144713112.png new file mode 100644 index 00000000..dfb9d567 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816144713112.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816144830488.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816144830488.png new file mode 100644 index 00000000..2512f306 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816144830488.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816144953656.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816144953656.png new file mode 100644 index 00000000..acf45b69 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816144953656.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816150151634.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816150151634.png new file mode 100644 index 00000000..593cb74b Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816150151634.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816150209913.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816150209913.png new file mode 100644 index 00000000..912547c0 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816150209913.png differ diff --git 
a/docs/img/kubernetes/kubernetes_rancher/image-20220816150231704.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816150231704.png new file mode 100644 index 00000000..ff9f10bb Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816150231704.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816150305037.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816150305037.png new file mode 100644 index 00000000..289dde7d Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816150305037.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816150322211.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816150322211.png new file mode 100644 index 00000000..fcfa01c7 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816150322211.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816150450409.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816150450409.png new file mode 100644 index 00000000..37de7296 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816150450409.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816150706088.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816150706088.png new file mode 100644 index 00000000..3715f016 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816150706088.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816150736103.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816150736103.png new file mode 100644 index 00000000..12aa958c Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816150736103.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816150900071.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816150900071.png new file mode 100644 index 00000000..d1c9b71a Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816150900071.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816151001001.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816151001001.png new file mode 100644 index 00000000..7f4002ef Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816151001001.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816151039880.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816151039880.png new file mode 100644 index 00000000..cf625d6e Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816151039880.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816151117519.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816151117519.png new file mode 100644 index 00000000..8df6bb19 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816151117519.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816151131738.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816151131738.png new file mode 100644 index 00000000..87c21e94 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816151131738.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816151144464.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816151144464.png new file mode 100644 index 00000000..2db08a32 Binary files /dev/null and 
b/docs/img/kubernetes/kubernetes_rancher/image-20220816151144464.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816151157176.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816151157176.png new file mode 100644 index 00000000..b6d4f446 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816151157176.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816151225471.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816151225471.png new file mode 100644 index 00000000..c4a5356e Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816151225471.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816151259299.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816151259299.png new file mode 100644 index 00000000..483d7002 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816151259299.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816151352377.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816151352377.png new file mode 100644 index 00000000..99e50706 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816151352377.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816151709536.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816151709536.png new file mode 100644 index 00000000..1801b596 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816151709536.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816151731937.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816151731937.png new file mode 100644 index 00000000..aac292df Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816151731937.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816151746413.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816151746413.png new file mode 100644 index 00000000..98fe3f7b Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816151746413.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816152002056.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816152002056.png new file mode 100644 index 00000000..9135531b Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816152002056.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816152858892.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816152858892.png new file mode 100644 index 00000000..ccac7c7e Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816152858892.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816152917840.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816152917840.png new file mode 100644 index 00000000..4a1fcf4e Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816152917840.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816153009799.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816153009799.png new file mode 100644 index 00000000..4f4d47b5 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816153009799.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816153241393.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816153241393.png new file mode 100644 index 00000000..6d02a602 
Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816153241393.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816153301099.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816153301099.png new file mode 100644 index 00000000..fc71a89a Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816153301099.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816153357410.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816153357410.png new file mode 100644 index 00000000..58dad32a Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816153357410.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816153917499.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816153917499.png new file mode 100644 index 00000000..7fa9e42a Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816153917499.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816154006380.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816154006380.png new file mode 100644 index 00000000..3004e491 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816154006380.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816155110578.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816155110578.png new file mode 100644 index 00000000..54122f86 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816155110578.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816155140075.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816155140075.png new file mode 100644 index 00000000..228b4d3f Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816155140075.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816155537732.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816155537732.png new file mode 100644 index 00000000..4c98860a Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816155537732.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816155741848.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816155741848.png new file mode 100644 index 00000000..1a2bd00e Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816155741848.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816155932470.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816155932470.png new file mode 100644 index 00000000..10ebb5d1 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816155932470.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816160056177.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816160056177.png new file mode 100644 index 00000000..498e7ec6 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816160056177.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816160200746.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816160200746.png new file mode 100644 index 00000000..a460d459 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816160200746.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816160254842.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816160254842.png new file mode 100644 
index 00000000..32ca6cc3 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816160254842.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816160423255.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816160423255.png new file mode 100644 index 00000000..f9376347 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816160423255.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816160443666.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816160443666.png new file mode 100644 index 00000000..b264e321 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816160443666.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816160733605.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816160733605.png new file mode 100644 index 00000000..3e6c63e2 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816160733605.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816160825424.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816160825424.png new file mode 100644 index 00000000..e4ecdda4 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816160825424.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816162243693.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816162243693.png new file mode 100644 index 00000000..b81735a3 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816162243693.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816185710948.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816185710948.png new file mode 100644 index 00000000..11e7e9c3 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816185710948.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816201739371.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816201739371.png new file mode 100644 index 00000000..54112eda Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816201739371.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816201826018.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816201826018.png new file mode 100644 index 00000000..45610aee Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816201826018.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816202438365.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816202438365.png new file mode 100644 index 00000000..2b7aa57a Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816202438365.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816202505925.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816202505925.png new file mode 100644 index 00000000..7e52b04d Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816202505925.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816202814643.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816202814643.png new file mode 100644 index 00000000..79f5e5c4 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816202814643.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816202832741.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816202832741.png 
new file mode 100644 index 00000000..078f253d Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816202832741.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816202919580.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816202919580.png new file mode 100644 index 00000000..b8fdbf42 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816202919580.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816202953810.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816202953810.png new file mode 100644 index 00000000..1aa16b7d Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816202953810.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816203153592.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816203153592.png new file mode 100644 index 00000000..5a80e320 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816203153592.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816203212769.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816203212769.png new file mode 100644 index 00000000..4c2aae44 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816203212769.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816203527686.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816203527686.png new file mode 100644 index 00000000..40bfef53 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816203527686.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816203823596.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816203823596.png new file mode 100644 index 00000000..227c440a Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816203823596.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816204138975.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816204138975.png new file mode 100644 index 00000000..bf3c05fe Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816204138975.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816204242066.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816204242066.png new file mode 100644 index 00000000..f42d0478 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816204242066.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816204323541.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816204323541.png new file mode 100644 index 00000000..664cb35e Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816204323541.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816204356332.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816204356332.png new file mode 100644 index 00000000..5964134f Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816204356332.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816204448683.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816204448683.png new file mode 100644 index 00000000..f1bdb429 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816204448683.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816204711617.png 
b/docs/img/kubernetes/kubernetes_rancher/image-20220816204711617.png new file mode 100644 index 00000000..7673201d Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816204711617.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816204738783.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816204738783.png new file mode 100644 index 00000000..7673201d Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816204738783.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816205034201.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816205034201.png new file mode 100644 index 00000000..3f5fca4c Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816205034201.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816205248456.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816205248456.png new file mode 100644 index 00000000..ed20971a Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816205248456.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816210148864.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816210148864.png new file mode 100644 index 00000000..9b25f1a1 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816210148864.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816210220210.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816210220210.png new file mode 100644 index 00000000..932e95bf Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816210220210.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816210402733.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816210402733.png new file mode 100644 index 00000000..ec144aaf Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816210402733.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816210440574.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816210440574.png new file mode 100644 index 00000000..25d2b94b Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816210440574.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816211312775.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816211312775.png new file mode 100644 index 00000000..e7bb6d4a Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816211312775.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816211412434.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816211412434.png new file mode 100644 index 00000000..a6bdfa13 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816211412434.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816212308034.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816212308034.png new file mode 100644 index 00000000..59e104b7 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816212308034.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816212409188.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816212409188.png new file mode 100644 index 00000000..b3d0d9eb Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816212409188.png differ diff --git 
a/docs/img/kubernetes/kubernetes_rancher/image-20220816212543225.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816212543225.png new file mode 100644 index 00000000..728724d9 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816212543225.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816212701687.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816212701687.png new file mode 100644 index 00000000..da443e68 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816212701687.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816212725300.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816212725300.png new file mode 100644 index 00000000..4a0e9842 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816212725300.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816212928917.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816212928917.png new file mode 100644 index 00000000..82b260c2 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816212928917.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816212956908.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816212956908.png new file mode 100644 index 00000000..e1c9b01f Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816212956908.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816213049117.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816213049117.png new file mode 100644 index 00000000..25f97018 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816213049117.png differ diff --git a/docs/img/kubernetes/kubernetes_rancher/image-20220816213120382.png b/docs/img/kubernetes/kubernetes_rancher/image-20220816213120382.png new file mode 100644 index 00000000..f1e4e7b1 Binary files /dev/null and b/docs/img/kubernetes/kubernetes_rancher/image-20220816213120382.png differ diff --git a/mkdocs.yml b/mkdocs.yml index 97017a55..4220f49b 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -148,9 +148,11 @@ nav: - rocketmq部署: cloud/kubernetes/kubernetes_rokectmq.md - Kubernetes集群包管理解决方案 Helm: cloud/kubernetes/kubernetes_helm.md - Kubernetes原生配置管理利器 kustomize: cloud/kubernetes/kubernetes_kustomize.md - - 源监控: cloud/kubernetes/kubernetes_ui.md - - 源监控: cloud/kubernetes/kubernetes_ui.md - - 源监控: cloud/kubernetes/kubernetes_ui.md + - kubernetes集群网络解决方案 flannel: cloud/kubernetes/kubernetes_flannel.md + - kubernetes集群网络解决方案 calico: cloud/kubernetes/kubernetes_calico.md + - kubernetes集群 underlay 网络方案 hybridnet: cloud/kubernetes/kubernetes_hybridnet.md + - kubernetes版本双栈协议(IPv4&IPv6)集群部署: cloud/kubernetes/kubernetes_ipv4_and_ipv6.md + - Rancher容器云管理平台: cloud/kubernetes/kubernetes_rancher.md - 源监控: cloud/kubernetes/kubernetes_ui.md - 源监控: cloud/kubernetes/kubernetes_ui.md - 微服务:
