1. Cluster Information

This guide builds a Kubernetes cluster using containerd as the container runtime.
By purpose, the nodes used to deploy the k8s cluster fall into the following two roles:

  • master: the cluster's master node, where the cluster is initialized; minimum configuration 2 CPUs / 4 GB RAM
  • node: the cluster's worker (slave) nodes, of which there can be several; minimum configuration 2 CPUs / 4 GB RAM

To demonstrate adding node machines, this example deploys one master plus two nodes, planned as follows:

Hostname   IP                Role
master1    192.168.247.147   master
node1      192.168.247.148   node
node2      192.168.247.149   node

2. Pre-installation Preparation

  • Target nodes: all nodes (master and node) must perform these steps

Set the hostname
A hostname may contain only lowercase letters, digits, "." and "-", and must begin and end with a lowercase letter or digit.

# On the master1 node
 ~]# hostnamectl set-hostname master1 # set the hostname of the master1 node

# On the node1 node
 ~]# hostnamectl set-hostname node1 # set the hostname of the node1 node

# On the node2 node
 ~]# hostnamectl set-hostname node2 # set the hostname of the node2 node

Add hosts entries

 ~]# cat >>/etc/hosts<<EOF
192.168.247.147 master1
192.168.247.148 node1
192.168.247.149 node2
EOF
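
To confirm that the new entries resolve on each node, a quick optional check is:

 ~]# for h in master1 node1 node2; do getent hosts $h; done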

Disable SELinux and the firewall

sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld && systemctl stop firewalld

Disable swap

swapoff -a
# prevent the swap partition from being mounted automatically at boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
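
You can verify that swap is now off: swapon -s should list no swap devices and free should report 0 for swap.

swapon -s
free -h | grep -i swap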

The bridge-related kernel parameters configured below (the net.bridge.bridge-nf-call-* settings) require the br_netfilter module, so load it first (modprobe br_netfilter):

 ~]# modprobe br_netfilter

It is best to make this happen at boot as well, because the module is not loaded after a reboot. One way to auto-load the module at startup: first create the file /etc/rc.sysinit with the following content:

~]# cat /etc/rc.sysinit
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done

Then create the following file under /etc/sysconfig/modules/ and make it executable:

~]#  cat /etc/sysconfig/modules/br_netfilter.modules
modprobe br_netfilter

~]# chmod 755 /etc/sysconfig/modules/br_netfilter.modules

After the next reboot the module will then be loaded automatically:

~]#  lsmod |grep br_netfilter
br_netfilter           22256  0 
bridge                146976  1 br_netfilter
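
On systemd-based systems such as CentOS 7, an alternative sketch is to let systemd-modules-load handle this instead, by dropping the module name into /etc/modules-load.d/:

 ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf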

Create the file /etc/sysctl.d/k8s.conf with the following content:

 ~]# cat /etc/sysctl.d/k8s.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# the kernel parameters below work around idle timeouts on long-lived connections in ipvs mode
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.tcp_keepalive_time = 600
vm.swappiness=0

 ~]# sysctl -p /etc/sysctl.d/k8s.conf 

Running sysctl -p /etc/sysctl.d/k8s.conf, as shown above, applies the changes.
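
You can confirm the key values took effect, for example:

 ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward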

Install ipvs

 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after the node reboots. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the required kernel modules have been loaded correctly.

Install the ipset and ipvsadm packages

 ~]# yum install -y ipset ipvsadm

The ipvsadm management tool is installed so that the ipvs proxy rules can be inspected easily.
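
Once the cluster is up and kube-proxy is running in ipvs mode, the virtual server table can be listed with, for example:

 ~]# ipvsadm -Ln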

Synchronize server time

 ~]# yum install chrony -y
 ~]# systemctl enable chronyd
 ~]# systemctl start chronyd
 ~]# chronyc sources
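
chronyc tracking can additionally confirm that the clock is synchronized with an upstream source:

 ~]# chronyc tracking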

3. Install containerd

  • Target nodes: all nodes (master and node) must perform these steps

First, make sure the libseccomp dependency is installed on each node:

 ~]# rpm -qa |grep libseccomp
libseccomp-2.3.1-4.el7.x86_64

# if the libseccomp package is not installed, install it with the following commands
 ~]# yum install wget -y
 ~]# wget http://mirror.centos.org/centos/7/os/x86_64/Packages/libseccomp-2.3.1-4.el7.x86_64.rpm
 ~]# yum install libseccomp-2.3.1-4.el7.x86_64.rpm -y

containerd calls runc, so runc must be installed as well; conveniently, the containerd project publishes a release tarball (cri-containerd-cni) that bundles runc and the other related dependencies.

 ~]# wget https://github.com/containerd/containerd/releases/download/v1.5.5/cri-containerd-cni-1.5.5-linux-amd64.tar.gz
# if the download is restricted, the following mirror URL can be used to speed it up
 ~]# wget https://download.fastgit.org/containerd/containerd/releases/download/v1.5.5/cri-containerd-cni-1.5.5-linux-amd64.tar.gz

Extract the tarball directly into the corresponding system directories:

 ~]# tar -C / -xzf cri-containerd-cni-1.5.5-linux-amd64.tar.gz

Then append /usr/local/bin and /usr/local/sbin to the PATH environment variable in ~/.bashrc:

 ~]# echo 'export PATH=$PATH:/usr/local/bin:/usr/local/sbin' >> ~/.bashrc

Then run the following command so it takes effect immediately:

 ~]# source ~/.bashrc

containerd's default configuration file is /etc/containerd/config.toml; a default configuration can be generated with the commands below:

 ~]# mkdir -p /etc/containerd
 ~]# containerd config default > /etc/containerd/config.toml

On Linux distributions that use systemd as the init system, using systemd as the container cgroup driver keeps the node more stable under resource pressure, so it is recommended to configure containerd's cgroup driver as systemd.

Edit the configuration file /etc/containerd/config.toml generated above:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
    ....
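
If you prefer not to edit the file by hand, a one-liner such as the following achieves the same change (a sketch, assuming the generated default still contains SystemdCgroup = false):

 ~]# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml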

Then configure a registry mirror (accelerator) for the image registries; this goes under registry.mirrors inside the registry block of the cri plugin configuration:

[plugins."io.containerd.grpc.v1.cri"]
  ...
  # sandbox_image = "k8s.gcr.io/pause:3.5"
  sandbox_image = "registry.aliyuncs.com/k8sxio/pause:3.5"
  ...
  [plugins."io.containerd.grpc.v1.cri".registry]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["https://bqr1dr1n.mirror.aliyuncs.com"]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
        endpoint = ["https://registry.aliyuncs.com/k8sxio"]

Reload the unit files, start containerd, and enable it at boot:

 ~]# systemctl daemon-reload
 ~]# systemctl enable containerd --now

Once containerd has started, its local CLI tools ctr and crictl can be used, for example to check the version:

[root@master1 ~]# ctr version
Client:
  Version:  v1.5.5
  Revision: 72cec4be58a9eb6b2910f5d10f1c01ca47d231c0
  Go version: go1.16.6

Server:
  Version:  v1.5.5
  Revision: 72cec4be58a9eb6b2910f5d10f1c01ca47d231c0
  UUID: 6d8466fe-59f2-457b-b16a-e0251e934e7d
[root@master1 ~]# crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.5.5
RuntimeApiVersion:  v1alpha2
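
As an optional sanity check that the registry settings work, you can pull an image through containerd, for example the sandbox image we pointed at the Aliyun repository:

[root@master1 ~]# crictl pull registry.aliyuncs.com/k8sxio/pause:3.5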

4. Deploy Kubernetes with kubeadm

  • Target nodes: all nodes (master and node) must perform these steps

Configure the yum repository

~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

If you cannot reach the Google repositories, the Alibaba Cloud mirror can be used instead:

~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

然后安装 kubeadm、kubelet、kubectl:

# --disableexcludes=kubernetes ignores any package excludes configured for the kubernetes repo
~]# yum makecache fast
~]# yum install -y kubelet-1.22.2 kubeadm-1.22.2 kubectl-1.22.2 --disableexcludes=kubernetes
~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:37:34Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}

You can see that v1.22.2 is installed here. Then enable kubelet to start on boot:

~]# systemctl enable --now kubelet
Note: everything up to this point must be executed on all nodes.

5. Initialize the Cluster

  • Target node: master

Generate the initial configuration file

~]# kubeadm config print init-defaults --component-configs KubeletConfiguration > kubeadm.yaml

Edit the configuration file:

[root@master1 ~]# cat kubeadm.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.247.147 # IP address of the master host
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock  # use containerd's Unix socket address
  imagePullPolicy: IfNotPresent
  name: master1
  taints: # taint the master so that application Pods are not scheduled onto it
  - effect: "NoSchedule"
    key: "node-role.kubernetes.io/master"

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs  # kube-proxy mode

---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/k8sxio # Alibaba Cloud container image repository
kind: ClusterConfiguration
kubernetesVersion: 1.22.2
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16  # specify the pod subnet
scheduler: {}

---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
cgroupDriver: systemd  # configure the cgroup driver
logging: {}
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

Before initializing the cluster, kubeadm config images pull --config kubeadm.yaml can be used to pre-pull the container images Kubernetes needs on each server node.

[root@master1 ~]# kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-apiserver:v1.22.2
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-controller-manager:v1.22.2
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-scheduler:v1.22.2
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-proxy:v1.22.2
[config/images] Pulled registry.aliyuncs.com/k8sxio/pause:3.5
[config/images] Pulled registry.aliyuncs.com/k8sxio/etcd:3.5.0-0
failed to pull image "registry.aliyuncs.com/k8sxio/coredns:v1.8.4": output: time="2021-10-25T17:34:48+08:00" level=fatal msg="pulling image: rpc error: code = NotFound desc = failed to pull and unpack image \"registry.aliyuncs.com/k8sxio/coredns:v1.8.4\": failed to resolve reference \"registry.aliyuncs.com/k8sxio/coredns:v1.8.4\": registry.aliyuncs.com/k8sxio/coredns:v1.8.4: not found"
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher

Pulling the coredns image failed above because the image was not found under that name; we can pull it manually and then re-tag it to the expected address:

[root@master1 ~]# ctr -n k8s.io i pull docker.io/coredns/coredns:1.8.4
docker.io/coredns/coredns:1.8.4:                                                  resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890:    done           |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4: done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:bc38a22c706b427217bcbd1a7ac7c8873e75efdd0e59d6b9f069b4b243db4b4b:    done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44:   done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:c6568d217a0023041ef9f729e8836b19f863bcdb612bb3a329ebc165539f5a80:    exists         |++++++++++++++++++++++++++++++++++++++|
elapsed: 12.4s                                                                    total:  12.0 M (991.3 KiB/s)
unpacking linux/amd64 sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890...
done: 410.185888ms
[root@master1 ~]# ctr -n k8s.io i tag docker.io/coredns/coredns:1.8.4 registry.aliyuncs.com/k8sxio/coredns:v1.8.4

The configuration file above can now be used to initialize the cluster on the master node:

[root@master1 ~]# kubeadm init --config kubeadm.yaml

If initialization succeeds, a message like the following is printed at the end:

I0610 17:11:12.665792    2197 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0610 17:11:12.667763    2197 kubeletfinalize.go:134] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.247.147:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:9783feb0e10f5bdb2123f038e0b8d7c8b3d8acce056b67220a0fea526da326fe

Next, follow the hints above to configure authentication for the kubectl client:

[root@master1 ~]# mkdir -p $HOME/.kube
[root@master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl can then be used to confirm that the master node has been initialized successfully:

[root@master1 ~]#  kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   74s   v1.22.2

⚠️ Note: at this point kubectl get nodes would normally show the node as NotReady, because no network plugin has been configured yet (it already shows Ready here most likely because the cri-containerd-cni tarball installed a default CNI config, /etc/cni/net.d/10-containerd-net.conflist, which is dealt with in section 8).

If an error occurs during initialization, adjust according to the error message, run kubeadm reset, and then run the init step again.

If you did not save the join information above, it can be regenerated with kubeadm token create --print-join-command.

6. Add the Node Machines to the Cluster

  • Target nodes: run on all node machines
 ~]# kubeadm join 192.168.247.147:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:9783feb0e10f5bdb2123f038e0b8d7c8b3d8acce056b67220a0fea526da326fe 
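
After the join completes on each node, you can verify from the master that they registered; newly joined nodes typically show NotReady until the network plugin from the next section is installed:

[root@master1 ~]# kubectl get nodes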

7. Install the flannel Plugin

  • Target node: master

At this point the cluster still cannot be used normally because no network plugin has been installed. Next we install one; you can choose your own network plugin from https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/, and here we install flannel:

 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# if any node has multiple network interfaces, specify the internal NIC in the manifest
# find the DaemonSet named kube-flannel-ds and edit the args of the kube-flannel container
[root@master1 ~]# vi kube-flannel.yml
......
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.15.0
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=ens33  # with multiple NICs, specify the name of the internal NIC
......
[root@master1 ~]# kubectl apply -f kube-flannel.yml  # install the flannel network plugin

After a short while, check the Pod status again:

[root@master1 ~]# kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-7568f67dbd-l2dq9          1/1     Running   0          13m
coredns-7568f67dbd-nkfnf          1/1     Running   0          13m
etcd-master1                      1/1     Running   0          13m
kube-apiserver-master1            1/1     Running   0          13m
kube-controller-manager-master1   1/1     Running   0          13m
kube-flannel-ds-fmkdw             1/1     Running   0          4m38s
kube-flannel-ds-l5xxq             1/1     Running   0          4m38s
kube-proxy-htv79                  1/1     Running   0          11m
kube-proxy-l92mv                  1/1     Running   0          13m
kube-scheduler-master1            1/1     Running   0          13m

8. Deploy the Dashboard

  • Target node: master
# the approach below is recommended
[root@master1 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
[root@master1 ~]# vi recommended.yaml
# change the Service to type NodePort
......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort  # add type: NodePort to turn this into a NodePort Service
......

Create it directly:

[root@master1 ~]# kubectl apply -f recommended.yaml
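
If you apply the upstream manifest without editing it first, the Service type can also be switched afterwards with a patch (an alternative sketch, not the approach used above):

[root@master1 ~]# kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'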

Newer versions of the Dashboard are installed in the kubernetes-dashboard namespace by default:

[root@master1 ~]#  kubectl get pods -n kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP          NODE    NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-856586f554-f4psz   1/1     Running   0          15m   10.88.0.2   node2   <none>           <none>
kubernetes-dashboard-67484c44f6-snqdp        1/1     Running   0          15m   10.88.0.2   node1   <none>           <none>

We configured podSubnet as 10.244.0.0/16 earlier, yet the Pods above were assigned IPs from the 10.88.x.x range; the automatically installed CoreDNS Pods were also given 10.88.x.x addresses:

[root@master1 net.d]# kubectl get pods -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP                NODE      NOMINATED NODE   READINESS GATES
coredns-7568f67dbd-l2dq9          1/1     Running   0          49m   10.88.0.3         master1   <none>           <none>
coredns-7568f67dbd-nkfnf          1/1     Running   0          49m   10.88.0.2         master1   <none>           <none>
etcd-master1                      1/1     Running   0          49m   192.168.247.147   master1   <none>           <none>
kube-apiserver-master1            1/1     Running   0          49m   192.168.247.147   master1   <none>           <none>
kube-controller-manager-master1   1/1     Running   0          49m   192.168.247.147   master1   <none>           <none>
kube-flannel-ds-fmkdw             1/1     Running   0          40m   192.168.247.147   master1   <none>           <none>
kube-flannel-ds-l5xxq             1/1     Running   0          40m   192.168.247.149   node2     <none>           <none>
kube-flannel-ds-vmwwk             1/1     Running   0          12m   192.168.247.148   node1     <none>           <none>
kube-proxy-htv79                  1/1     Running   0          47m   192.168.247.149   node2     <none>           <none>
kube-proxy-l8n8n                  1/1     Running   0          12m   192.168.247.148   node1     <none>           <none>
kube-proxy-l92mv                  1/1     Running   0          49m   192.168.247.147   master1   <none>           <none>
kube-scheduler-master1            1/1     Running   0          49m   192.168.247.147   master1   <none>           <none>
  • Solution: the leftover CNI config from the cri-containerd-cni tarball, /etc/cni/net.d/10-containerd-net.conflist, sorts before flannel's config and takes precedence, so move it out of the way.
### perform on master1, node1, and node2
[root@master1 ~]# ls /etc/cni/net.d/
10-containerd-net.conflist  10-flannel.conflist
[root@master1 ~]#  mv /etc/cni/net.d/10-containerd-net.conflist /etc/cni/net.d/10-containerd-net.conflist.bak
[root@master1 ~]# ifconfig cni0 down && ip link delete cni0
[root@master1 ~]# systemctl daemon-reload
[root@master1 ~]# systemctl restart containerd kubelet

Then remember to recreate the coredns and dashboard Pods; after they are recreated, their IP addresses are correct:

[root@master1 net.d]# kubectl delete pod coredns-7568f67dbd-l2dq9 coredns-7568f67dbd-nkfnf -n kube-system
pod "coredns-7568f67dbd-l2dq9" deleted
pod "coredns-7568f67dbd-nkfnf" deleted

[root@master1 net.d]# kubectl delete pod dashboard-metrics-scraper-856586f554-f4psz kubernetes-dashboard-67484c44f6-snqdp  -n kubernetes-dashboard
pod "dashboard-metrics-scraper-856586f554-f4psz" deleted
pod "kubernetes-dashboard-67484c44f6-snqdp" deleted

Check the Dashboard's NodePort:

[root@master1 net.d]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.101.152.102   <none>        8000/TCP        36m
kubernetes-dashboard        NodePort    10.97.91.52      <none>        443:32569/TCP   36m

Open https://192.168.247.147:32569 in a browser, where 192.168.247.147 is the externally reachable IP of the master node. Chrome currently cannot open it in our tests because of its security restrictions; Firefox can access it.

Create a user with full cluster-wide permissions to log in to the Dashboard (admin.yaml):

[root@master1 ~]# cat admin.yaml 
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admin
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard

Create it directly:

[root@master1 ~]# kubectl apply -f admin.yaml 
clusterrolebinding.rbac.authorization.k8s.io/admin created
serviceaccount/admin created
[root@master1 ~]# kubectl get secret -n kubernetes-dashboard|grep admin-token
admin-token-ldwxb                  kubernetes.io/service-account-token   3      10s
[root@master1 ~]#  kubectl get secret admin-token-ldwxb -o jsonpath={.data.token} -n kubernetes-dashboard |base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6InZaeW9HMGdPTmExOHp2M00wc3pZYlV1a2Z0TjIzNFUwRnloWjNmbHd0VTAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1sZHd4YiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjQyOThhOTQxLTM5MTYtNDk1Yy1iZDllLTQ1NDFhOWI5YTAxNCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbiJ9.J9_-N1DLg9sVvN8TJ0EPweXTtfEk2ZM_bBlSWRYOYCyoH3LAq1zA3JVciLitiz7mMMBVJUSRh3FEh0f66M1sqFwGQoKcJvQAOAHlsFyYZGBIiIEalwOcFioZZqGCeSc9D_PcHkcEZbJB_rF4iKC5ujAvLiFI8YKhjNygELD55TLPcSStT2_xMXGG5pGmQZsyuS_I74ysVi35UsC4rJp9qbxrX9ZTpDJvAo94tvXwYFaopi31Jli3Os5bIGTS7juEfMhqD_g7i3SalYjCm_X6kJ1FnLbKJz0oO2gnNQ6BCl0CzIuD4fc6vJ-Jn0lrXyFql_i_mJ6cmCstCjhOdxp34w

9. Clean Up the Environment

 ~]# kubeadm reset
 ~]# ifconfig cni0 down && ip link delete cni0
 ~]# ifconfig flannel.1 down && ip link delete flannel.1
 ~]# rm -rf /var/lib/cni/
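
kubeadm reset does not clean up CNI configuration, kubeconfig files, or ipvs/iptables rules; to leave the node fully clean you may also want something along these lines (a sketch):

 ~]# rm -rf /etc/cni/net.d $HOME/.kube/config
 ~]# ipvsadm --clear
 ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X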