1. Environment Setup

(1) Pre-installation environment setup

// Across nodes: hostname, MAC address, and product_uuid must all be unique

$ hostname             // check the hostname
$ ip link              // check MAC addresses
$ nmcli device show    // check MAC addresses (alternative)
$ cat /sys/class/dmi/id/product_uuid       // check the product_uuid

Generally speaking, hardware devices have unique addresses, but some virtual machines may carry duplicate values. Kubernetes uses these values to uniquely identify the nodes in the cluster. If they are not unique on every node, the installation may fail.
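
To spot duplicates quickly across a small set of machines, a minimal check over SSH can help (a sketch assuming passwordless root SSH; substitute your own node names):

$ for n in c810224.xiodi.cn c810214.xiodi.cn; do
      ssh root@"$n" cat /sys/class/dmi/id/product_uuid
  done | sort | uniq -d       // any output here means two nodes share a product_uuid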

// Configure hostname resolution

$ cat <<EOF >> /etc/hosts
192.168.10.224 c810224.xiodi.cn k8sinternal.xiodi.cn
192.168.10.214 c810214.xiodi.cn
EOF

// Configure the base and epel repositories

$ source <(curl -sL https://gitee.com/jack_zang/public-scripts/raw/master/shell/repo/centos8_use_aliyun_base_epel.sh)

Disable the firewall, SELinux, and swap; raise the system file-handle limits; tune kernel parameters; enable IPVS; and configure time synchronization:

$ source <(curl -sL https://gitee.com/jack_zang/kubernetes/raw/k8s-v1.26/install/kubeadm_1.26/prepare_env.sh)
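
For reference, a minimal sketch of what a preparation script like this typically does (my own summary, not the script's exact contents):

$ systemctl disable --now firewalld                        // stop and disable the firewall
$ setenforce 0 && sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
$ swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab        // disable swap now and across reboots
$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
$ modprobe br_netfilter && sysctl --system                 // load the bridge module and apply the parameters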

Note: you must set the hostname yourself, as well as the DNS resolution for it.
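
For example, on the master (use c810214.xiodi.cn on the worker):

$ hostnamectl set-hostname c810224.xiodi.cn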

(2) Upgrade the kernel (optional)

$ source <(curl -sL https://gitee.com/jack_zang/kubernetes/raw/k8s-v1.26/install/kubeadm_1.26/prepare_env_update_kernel.sh)

$ reboot
$ uname -r
6.0.0-1.el8.elrepo.x86_64

(3) Install Kubernetes

$ source <(curl -sL https://gitee.com/jack_zang/kubernetes/raw/k8s-v1.26/install/kubeadm_1.26/kubernetes_install.sh)

The script installs four main packages (a sketch of the equivalent manual steps follows the list):

  • containerd: the container runtime
  • kubeadm: the command used to bootstrap the cluster
  • kubelet: runs on every node in the cluster; starts Pods and containers
  • kubectl: the command-line tool for communicating with the cluster (the Kubernetes client)
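
A minimal sketch of the equivalent manual installation on CentOS 8 (the repository URLs and pinned versions below are assumptions; the script may differ):

// container runtime: containerd.io is published in the docker-ce repository
$ dnf config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ dnf install -y containerd.io
$ containerd config default > /etc/containerd/config.toml
$ sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml    // kubelet is configured with the systemd cgroup driver
$ systemctl enable --now containerd

// kubeadm / kubelet / kubectl from a Kubernetes yum mirror
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
$ dnf install -y kubeadm-1.26.0 kubelet-1.26.0 kubectl-1.26.0
$ systemctl enable kubelet        // kubeadm init/join starts it later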

2. Configuring a single-master cluster with kubeadm

(1) Initialize the cluster on the master node

// Generate the default configuration (optional; you can skip this and use the configuration below directly)

$ kubeadm config print init-defaults --component-configs KubeletConfiguration > /etc/kubernetes/kubeadm-config.yaml

If no component is specified, only the InitConfiguration and ClusterConfiguration sections are printed by default. The supported component configs are KubeProxyConfiguration and KubeletConfiguration.
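
For example, to print both component configs at once:

$ kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration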

// The kubeadm configuration file

$ vi /etc/kubernetes/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: 1.26.0
controlPlaneEndpoint: "k8sinternal.xiodi.cn:6443"
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

These are essentially the default values; the main settings to pay attention to are:

imageRepository: registry.aliyuncs.com/google_containers
controlPlaneEndpoint: "k8sinternal.xiodi.cn:6443"   // used later for the high-availability upgrade
podSubnet: 10.244.0.0/16
cgroupDriver: systemd
mode: ipvs

If you plan to upgrade this single control-plane kubeadm cluster to high availability, you should specify --control-plane-endpoint to set a shared endpoint for all control-plane nodes. The endpoint can be the DNS name or the IP address of a load balancer.
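
This guide uses a DNS name for exactly that reason: k8sinternal.xiodi.cn resolves to the first master for now, and can later be repointed at a load balancer without reissuing certificates. A sketch of the /etc/hosts change (the load-balancer address is hypothetical):

// today: the shared endpoint resolves to the first master
192.168.10.224 k8sinternal.xiodi.cn
// later, after a load balancer fronts several masters, only this mapping changes
192.168.10.100 k8sinternal.xiodi.cn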

Unless otherwise specified, kubeadm uses the network interface associated with the default gateway to set the advertise address of this control-plane node's API server. To use a different network interface, pass --apiserver-advertise-address=<ip-address> to kubeadm init. To deploy a Kubernetes cluster using IPv6 addresses, you must specify an IPv6 address, for example --apiserver-advertise-address=2001:db8::101.
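
When initializing from a config file, as this guide does, the equivalent setting lives in an InitConfiguration section (a sketch; add it as a separate document above ClusterConfiguration in kubeadm-config.yaml):

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.224
  bindPort: 6443
---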

// Pull the images in advance, then initialize the cluster

$ kubeadm config images pull --config /etc/kubernetes/kubeadm-config.yaml
$ kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [c810224.xiodi.cn k8sinternal.xiodi.cn kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.224]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [c810224.xiodi.cn localhost] and IPs [192.168.10.224 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [c810224.xiodi.cn localhost] and IPs [192.168.10.224 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.000994 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node c810224.xiodi.cn as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node c810224.xiodi.cn as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: zfssa4.fu9qdw1crrxf9si7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8sinternal.xiodi.cn:6443 --token zfssa4.fu9qdw1crrxf9si7 \
    --discovery-token-ca-cert-hash sha256:dc8dadefc369872c395a5d5fe6ac20a71e15be7cce9a01a9fa542d0226111719 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8sinternal.xiodi.cn:6443 --token zfssa4.fu9qdw1crrxf9si7 \
    --discovery-token-ca-cert-hash sha256:dc8dadefc369872c395a5d5fe6ac20a71e15be7cce9a01a9fa542d0226111719

// Configure the user's kubeconfig credentials

$ mkdir -p $HOME/.kube
$ cp -i /etc/kubernetes/admin.conf  $HOME/.kube/config
$ chown  $(id -u):$(id -g)  $HOME/.kube/config

kubeadm init flag reference: https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/
kubeadm init with a configuration file: https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file

(2) Install Helm

// Install helm

$ source <(curl -sL https://gitee.com/jack_zang/public-scripts/raw/master/shell/helm/helm-cmd-install.sh)

(3) Install the network add-on Calico

// Install the Pod network add-on Calico

$ curl -LO https://github.com/projectcalico/calico/releases/download/v3.25.0/tigera-operator-v3.25.0.tgz
$ helm show values tigera-operator-v3.25.0.tgz > calico-value.yaml  // inspect the values configurable in the chart

$ helm install calico tigera-operator-v3.25.0.tgz -n kube-system -f calico-value.yaml
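
At a minimum, make sure the Pod CIDR Calico uses matches podSubnet from the kubeadm configuration. A minimal calico-value.yaml sketch (assuming the tigera-operator chart's installation key; field names can vary between chart versions):

installation:
  calicoNetwork:
    ipPools:
    - cidr: 10.244.0.0/16      # must match podSubnet in kubeadm-config.yaml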

// Verify Calico. The key Pod is tigera-operator, the operator that manages Calico

$ kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
coredns-5bbd96d687-cmd94                   1/1     Running   0          93m
coredns-5bbd96d687-l2vjs                   1/1     Running   0          93m
etcd-c810224.xiodi.cn                      1/1     Running   0          94m
kube-apiserver-c810224.xiodi.cn            1/1     Running   0          94m
kube-controller-manager-c810224.xiodi.cn   1/1     Running   0          94m
kube-proxy-qv68f                           1/1     Running   0          93m
kube-scheduler-c810224.xiodi.cn            1/1     Running   0          94m
tigera-operator-54b47459dd-tntnx           1/1     Running   0          13m

// Verify Calico: check the Calico components

$ kubectl get pods -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6b7b9c649d-k6667   1/1     Running   0          15m
calico-node-c9fzx                          1/1     Running   0          15m
calico-typha-7cf7b9cd86-tmmdl              1/1     Running   0          15m
csi-node-driver-2n62h                      2/2     Running   0          14m

// List the API resources Calico adds to Kubernetes

$ kubectl api-resources | grep calico
bgpconfigurations                              crd.projectcalico.org/v1               false        BGPConfiguration
bgppeers                                       crd.projectcalico.org/v1               false        BGPPeer
blockaffinities                                crd.projectcalico.org/v1               false        BlockAffinity
caliconodestatuses                             crd.projectcalico.org/v1               false        CalicoNodeStatus
clusterinformations                            crd.projectcalico.org/v1               false        ClusterInformation
felixconfigurations                            crd.projectcalico.org/v1               false        FelixConfiguration
globalnetworkpolicies                          crd.projectcalico.org/v1               false        GlobalNetworkPolicy
globalnetworksets                              crd.projectcalico.org/v1               false        GlobalNetworkSet
hostendpoints                                  crd.projectcalico.org/v1               false        HostEndpoint
ipamblocks                                     crd.projectcalico.org/v1               false        IPAMBlock
ipamconfigs                                    crd.projectcalico.org/v1               false        IPAMConfig
ipamhandles                                    crd.projectcalico.org/v1               false        IPAMHandle
ippools                                        crd.projectcalico.org/v1               false        IPPool
ipreservations                                 crd.projectcalico.org/v1               false        IPReservation
kubecontrollersconfigurations                  crd.projectcalico.org/v1               false        KubeControllersConfiguration
networkpolicies                                crd.projectcalico.org/v1               true         NetworkPolicy
networksets                                    crd.projectcalico.org/v1               true         NetworkSet

// Verify the cluster

$ kubectl get nodes
NAME               STATUS   ROLES           AGE    VERSION
c810224.xiodi.cn   Ready    control-plane   101m   v1.26.1

$ kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
coredns-5bbd96d687-cmd94                   1/1     Running   0          101m
coredns-5bbd96d687-l2vjs                   1/1     Running   0          101m
etcd-c810224.xiodi.cn                      1/1     Running   0          101m
kube-apiserver-c810224.xiodi.cn            1/1     Running   0          101m
kube-controller-manager-c810224.xiodi.cn   1/1     Running   0          101m
kube-proxy-qv68f                           1/1     Running   0          101m
kube-scheduler-c810224.xiodi.cn            1/1     Running   0          101m
tigera-operator-54b47459dd-tntnx           1/1     Running   0          20m

(4) Join worker nodes to the cluster

// Join the cluster

$ kubeadm join k8sinternal.xiodi.cn:6443 --token zfssa4.fu9qdw1crrxf9si7 --discovery-token-ca-cert-hash sha256:dc8dadefc369872c395a5d5fe6ac20a71e15be7cce9a01a9fa542d0226111719

// If you no longer have the join command, create a new token (it expires after 24 hours by default)

$ kubeadm token create --print-join-command
kubeadm join k8sinternal.xiodi.cn:6443 --token 7llcyj.b9hc0xg8k2nnkxok --discovery-token-ca-cert-hash sha256:dc8dadefc369872c395a5d5fe6ac20a71e15be7cce9a01a9fa542d0226111719

// Verify the nodes

$ kubectl get pods -n kube-system -o wide
$ kubectl get nodes
NAME                  STATUS   ROLES           AGE    VERSION
c810214.xiodi.cn      Ready    <none>          13m    v1.26.1
c810224.xiodi.cn      Ready    control-plane   161m   v1.26.1

// To join an additional master node (for reference only at this point; as the init output above notes, a control-plane join also requires the cluster certificates, see --upload-certs)

kubeadm join k8sinternal.xiodi.cn:6443 --token zfssa4.fu9qdw1crrxf9si7 --discovery-token-ca-cert-hash sha256:dc8dadefc369872c395a5d5fe6ac20a71e15be7cce9a01a9fa542d0226111719 --control-plane

3. Testing

// Create the nginx Deployment and Service

$ kubectl apply -f https://gitee.com/jack_zang/kubernetes/raw/k8s-v1.26/install/kubeadm_1.26/nginx-test.yaml

View in a browser: http://NODEIP:32222
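
Or verify from the command line (assuming the manifest exposes NodePort 32222, as the URL above implies):

$ curl -I http://192.168.10.214:32222      // an HTTP/1.1 200 OK confirms the Service is reachable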

4. Extras

(1) Join tokens in detail

$ kubeadm token list        // list tokens
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
01rbno.71wjsyykart1q3ne   23h         2023-02-17T02:13:19Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
...

$ kubeadm token create         // create a token; valid for 24 hours by default
ff4lq6.vez20elpkylwmipw
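
The lifetime can be set at creation time; for example, a token valid for one hour that also prints the full join command:

$ kubeadm token create --ttl 1h --print-join-command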

// Compute the value for `--discovery-token-ca-cert-hash`
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
6270d1e714a0c084619f2fc9e3560856253fb914d972259604862b9b4c593581

(2) Removing a node

Talking to the control-plane node with the appropriate credentials, run:

kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets

Before removing the node, reset the state installed by kubeadm:

kubeadm reset

The reset process does not reset or clean up iptables rules or IPVS tables. If you want to reset iptables, you must do it manually:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If you want to reset the IPVS tables, run:

ipvsadm -C

Now remove the node:

kubectl delete node <node name>

(3) Cleaning up the control plane

You can use kubeadm reset on the control-plane host to trigger a best-effort cleanup.
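
kubeadm reset itself leaves a few things behind; a sketch of the usual follow-up on this host (paths are the defaults used in this guide):

$ rm -rf /etc/cni/net.d              // CNI configuration written by the network add-on
$ rm -f $HOME/.kube/config           // the admin kubeconfig copied earlier
$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
$ ipvsadm -C                         // this guide runs kube-proxy in ipvs mode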
