kubernetes安装及使用(docker)
初始化安装k8s集群的实验环境
- Kubernetes 版本1.31.2
- centos7.9机器三台
Kubernetes版本
- Kubernetes 版本以 x.y.z 表示,其中 x 是主要版本,y 是次要版本,z 是补丁版本,遵循语义版本控制术语。Kubernetes 社区大约每隔四个月发布一个次要版本,每个版本的支持周期约为12个月。此处我们选择较新的1.31.x版本来进行部署演示。
容器运行时
在1.24.x版本中,Kubernetes正式移除了dockershim,这也标志着其与Docker的分离。从这个版本开始,Kubernetes无法再直接使用Docker作为容器运行时,必须借助适配器。
对此,本文仍然采用Docker方案:通过cri-dockerd适配器把Docker接入Kubernetes作为容器运行时(另一种主流做法是直接使用Containerd,本文不采用)。
主机环境
在机器配置上,Master节点至少要2 Core和4GB内存,推荐配置是4 Core和16GB。而工作节点则应根据需要运行的业务容器情况进行配置,但通常不少于4 core和16GB内存。
操作系统方面支持Ubuntu、CentOS、RedHat、Debian等常规系统,内核版本要求不低于3.10。
本次演示采用一台Master加两台Worker共三台主机,信息如下:

| IP地址 | 主机名 | 角色 | 系统版本 |
| --- | --- | --- | --- |
| 192.168.1.110 | k8s-master1 | Master | centos7.9 |
| 192.168.1.111 | k8s-node1 | Worker | centos7.9 |
| 192.168.1.112 | k8s-node2 | Worker | centos7.9 |
卸载旧版本
卸载kubernetes
# 卸载k8s以及依赖
systemctl stop kubelet
systemctl stop etcd
systemctl stop docker
kubeadm reset --cri-socket unix:///var/run/cri-dockerd.sock
yum list installed | grep kube
yum remove kubeadm kubelet kubectl kubernetes-cni
yum list installed | grep kube
yum -y remove cri-tools
# 删除数据
rm -rvf $HOME/.kube
rm -rvf /etc/kubernetes/
rm -rvf /etc/systemd/system/kubelet.service.d
rm -rvf /etc/systemd/system/kubelet.service
rm -rvf /usr/bin/kube*
rm -rvf /etc/cni
rm -rvf /opt/cni
rm -rvf /var/lib/etcd
rm -rvf /var/etcd
卸载docker
yum -y remove docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-ce-rootless-extras
rm -rf /var/lib/docker
rm -rf /var/lib/containerd
更改UUID
三台机器是通过Proxmox VE克隆的虚拟机,网卡配置中的UUID都相同。
kubeadm要求各节点具有唯一标识(MAC地址、product_uuid等),克隆出来的机器网卡配置UUID相同时,需要修改为各自唯一的值。
[zhangtq@k8s-master1 ~]$ uuidgen
c0957cdb-760c-4ebd-8ecf-f649b3f9661a
[zhangtq@k8s-master1 ~]$ vi /etc/sysconfig/network-scripts/ifcfg-eth0   # 替换UUID
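也可以用脚本直接替换(示意写法,假设网卡配置文件为 /etc/sysconfig/network-scripts/ifcfg-eth0,请按实际网卡名调整):
NEW_UUID=$(uuidgen)                      # 生成一个新的UUID
sed -i "s/^UUID=.*/UUID=${NEW_UUID}/" /etc/sysconfig/network-scripts/ifcfg-eth0
grep ^UUID /etc/sysconfig/network-scripts/ifcfg-eth0   # 确认已替换
systemctl restart network                # 重启网络使配置生效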
升级内核版本
注意
- 这一步非必须,先要查看内核版本
[root@k8s-master1 ~]# cat /proc/version
Linux version 5.4.278-1.el7.elrepo.x86_64 (mockbuild@Build64R7) (gcc version 9.3.1 20200408 (Red Hat 9.3.1-2) (GCC)) #1 SMP Sun Jun 16 15:37:11 EDT 2024
#只查看内核版本
[root@k8s-master1 ~]# uname -r
5.4.278-1.el7.elrepo.x86_64
我这里内核版本>=3.10,可以不用升级;如果版本小于3.10,则需要升级。
升级步骤移步至centos7升级内核,所有机器都要执行。
重启后记得确认内核版本是否升级成功。
配置阿里云yum源
所有机器执行,具体参考更换阿里源。
修改hostname
- 在对应的机器上分别执行
# 在192.168.1.110执行
hostnamectl set-hostname k8s-master1
# 在192.168.1.111执行
hostnamectl set-hostname k8s-node1
# 在192.168.1.112执行
hostnamectl set-hostname k8s-node2
- 配置hosts文件,所有机器都执行
cat >> /etc/hosts<<EOF
192.168.1.110 k8s-master1
192.168.1.111 k8s-node1
192.168.1.112 k8s-node2
EOF
配置SSH互信
方式一
所有机器输入以下命令,创建一组公钥和私钥:
# 直接一直回车就行
ssh-keygen
一路回车,生成密钥。
所有机器上执行公钥拷贝命令:
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-master1
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node1
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node2
方式二
所有机器执行:
ssh-keygen -t rsa
for i in 192.168.1.110 192.168.1.111 192.168.1.112 ; do ssh-copy-id $i; done
关闭防火墙和SELinux
#临时加永久
systemctl stop firewalld
systemctl disable firewalld
#临时
setenforce 0
#永久
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
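可以顺手确认一下关闭结果(示意):
getenforce                         # 输出 Permissive 或 Disabled 说明SELinux已关闭
systemctl is-active firewalld      # 输出 inactive 说明防火墙已停止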
时间同步
centos8
所有机器
都安装chrony,设置为开机自动启动,并立即开启服务
yum install chrony -y
systemctl enable chronyd --now
- 同步时间
# 用东八区,北京,上海的时间
cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# 重启服务
systemctl restart chronyd
# 立即同步时间
chronyc sources && chronyc -a makestep
centos7
所有机器
执行
yum -y install ntp
#安装ntp软件包。
systemctl enable ntpd
#设置ntp服务开机自启。
systemctl start ntpd
#启动ntp服务。
ntpdate -u cn.pool.ntp.org
#从公共ntp服务器同步一次时间。
hwclock --systohc
#将系统时间同步到硬件时钟。
timedatectl set-timezone Asia/Shanghai
#设置系统时区为上海。
关闭swap分区
Swap是交换分区,如果机器内存不够,会使用swap分区,但是swap分区的性能较低,k8s设计的时候为了能提升性能,默认是不允许使用交换分区的。Kubeadm初始化的时候会检测swap是否关闭,如果没关闭,那就初始化失败。如果不想要关闭交换分区,安装k8s的时候可以指定--ignore-preflight-errors=Swap来解决
所有机器
执行
# 临时关闭swap
swapoff -a
# 永久关闭swap
sed -ri 's/.*swap.*/#&/' /etc/fstab
free -m
# 查看swap状态,若Swap一行都显示 0 则表示关闭成功
加载ipvs
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
lsmod | grep nf_conntrack
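上面的 modprobe 只是临时加载,重启后会失效;如需开机自动加载,可以参考下面的写法(示意,文件名可自定义):
cat <<EOF | tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF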
修改内核参数
所有机器执行
modprobe br_netfilter   # 立即生效,重启失效,流量控制的内核模块
lsmod | grep br_netfilter
# 配置永久生效
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
echo "modprobe br_netfilter" >> /etc/profile
cat >/etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
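写入配置后需要让内核参数生效:
sysctl -p /etc/sysctl.d/kubernetes.conf
# 或者使用 sysctl --system 加载 /etc/sysctl.d/ 下的全部配置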
# 关于为什么要执行上面的操作
#问题1:为什么要执行modprobe br_netfilter?
修改/etc/sysctl.d/kubernetes.conf文件,增加如下三行参数:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
执行 sysctl -p /etc/sysctl.d/kubernetes.conf 时出现报错:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
解决方法:
[root@pengfei-master1 ~]# modprobe br_netfilter
#问题2:为什么开启net.bridge.bridge-nf-call-iptables内核参数?
在centos下安装docker,执行docker info出现如下警告:
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
解决办法:
[root@pengfei-master1 ~]# vim /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
#问题3:为什么要开启net.ipv4.ip_forward = 1参数?
kubeadm初始化k8s时如果报ip_forward相关的错误,就表示没有开启ip_forward,需要开启。
net.ipv4.ip_forward是数据包转发:
出于安全考虑,Linux系统默认是禁止数据包转发的。
所谓转发即当主机拥有多于一块的网卡时,其中一块收到数据包,
根据数据包的目的ip地址将数据包发往本机另一块网卡,
该网卡根据路由表继续发送数据包。这通常是路由器所要实现的功能。
要让Linux系统具有路由转发功能,需要配置一个Linux的内核参数net.ipv4.ip_forward。
这个参数指定了Linux系统当前对路由转发功能的支持情况;
其值为0时表示禁止进行IP转发;如果是1,则说明IP转发功能已经打开。
安装其他工具
yum install -y ipvsadm ipset
安装依赖
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet
配置国内docker源
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
安装docker
所有机器都执行
注
docker也要安装,docker跟containerd不冲突,安装docker是为了能基于dockerfile构建镜像。
yum install docker-ce -y
systemctl enable docker --now   #启动docker并设置开机自动启动
配置docker镜像加速
所有机器都执行
cat >/etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": [
"https://docker.udayun.com",
"https://581ltx2c.mirror.aliyuncs.com",
"https://registry.docker-cn.com"
],
"max-concurrent-downloads": 10,
"log-driver": "json-file",
"log-level": "warn",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"data-root": "/var/lib/docker"
}
EOF
# 注意上面https://581ltx2c.mirror.aliyuncs.com这个地址换成自己的阿里云镜像加速地址
# 目前阿里云镜像加速只适用于阿里云公网,其他网络环境需要更换其他的镜像加速站
systemctl daemon-reload
systemctl restart docker
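重启后可以确认一下docker的Cgroup Driver是否已经是systemd(与kubelet保持一致):
docker info | grep -i 'cgroup driver'
# 期望输出:Cgroup Driver: systemd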
安装cri-dockerd
所有机器都执行
注
由于1.24及更高版本移除了dockershim、不再内置支持Docker,所以需要安装cri-dockerd作为Docker与Kubernetes之间的适配器。
警告
cri-dockerd最新的v0.3.15已经移除了对centos7的构建,且本机内核为5.4,所以这里安装cri-dockerd-0.3.13版本。
下面先演示安装cri-dockerd-0.3.14时的报错,之后再替换为对应的版本。
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.14/cri-dockerd-0.3.14-3.el8.x86_64.rpm
rpm -ivh cri-dockerd-0.3.14-3.el8.x86_64.rpm
错误:依赖检测失败:
(iptables or nftables) 被 cri-dockerd-3:0.3.14-3.el8.x86_64 需要
rpmlib(RichDependencies) <= 4.12.0-1 被 cri-dockerd-3:0.3.14-3.el8.x86_64 需要
注
- 不同系统/内核对应的版本不同,这里是CentOS 7(内核5.4),可用的最新预编译包是v0.3.13
- cri-dockerd-0.3.14起已不再直接支持centos7,没有预编译包,想使用更新版本需要自己编译安装
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.13/cri-dockerd-0.3.13-3.el7.x86_64.rpm
rpm -ivh cri-dockerd-0.3.13-3.el7.x86_64.rpm
修改配置文件
vi /usr/lib/systemd/system/cri-docker.service
----
修改第10行内容
#ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.10
----
systemctl daemon-reload
systemctl start cri-docker
systemctl enable cri-docker
systemctl status cri-docker
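可以简单验证一下cri-dockerd是否正常工作(示意):
systemctl is-active cri-docker          # 期望输出 active
ls -l /var/run/cri-dockerd.sock         # 确认socket文件已生成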
安装kubernetes
所有机器都执行
配置k8s国内源
- 所有机器都执行
# 这里是1.31的源,如果需要其他版本的,按照版本更改就可以
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/rpm/repodata/repomd.xml.key
EOF
安装k8s三件套
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
注:每个软件包的作用
kubeadm:用来初始化k8s集群的工具。
kubelet:安装在集群所有节点上,负责启动Pod。kubeadm安装的k8s中,控制节点和工作节点的组件都是以Pod方式运行的,只要有Pod需要启动,就需要kubelet。
kubectl:命令行客户端,通过kubectl可以部署和管理应用,查看各种资源,创建、删除和更新各种组件。
systemctl daemon-reload
systemctl start kubelet.service
systemctl enable kubelet.service
systemctl status kubelet.service
# 如果kubelet的status如下所示(activating/auto-restart反复重启),可以暂时先不用理会,集群初始化完成后会恢复正常
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since 日 2024-11-03 01:32:49 CST; 8s ago
Docs: https://kubernetes.io/docs/
Process: 7786 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 7786 (code=exited, status=1/FAILURE)
- 设置kubelet的cgroup驱动为systemd
- master会在初始化的配置文件里设置,所以可以不用手动设置
- 其他node主机需要手动设置
为了实现docker使用的cgroupdriver与kubelet使用的cgroup的一致性,建议修改如下文件内容
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
设置kubelet为开机自启动即可,由于没有生成配置文件,集群初始化后自动启动
systemctl enable kubelet
kubeadm初始化k8s集群(控制端执行)
- 只在master执行
使用kubeadm生成k8s集群文件
#注意:只在k8s-master1控制端执行
kubeadm config print init-defaults > kubeadm.yaml
修改配置文件
vi kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.110   #修改控制节点的ip
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock   #修改此处,指定cri-dockerd容器运行时
  imagePullPolicy: IfNotPresent
  name: k8s-master1   #修改控制节点主机名
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers   #修改镜像仓库为阿里云镜像仓库地址
kind: ClusterConfiguration
kubernetesVersion: 1.31.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16   #修改指定pod网段,需要新增加这个
  serviceSubnet: 10.96.0.0/12   #修改指定Service网段
scheduler: {}
#插入以下内容,(复制时,要带着---):
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
#mode: ipvs 表示kube-proxy代理模式是ipvs,如果不指定ipvs,会默认使用iptables,但是iptables效率低,所以我们生产环境建议开启ipvs,阿里云和华为云托管的K8s,也提供ipvs模式
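集群初始化完成后,可以大致这样确认kube-proxy是否运行在ipvs模式(日志关键字随版本可能略有差异,仅作示意):
# 查看kube-proxy日志中是否提到ipvs
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs
# 查看ipvs转发规则(需要已安装ipvsadm)
ipvsadm -Ln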
初始化集群
- 主节点执行
# 最新的1.31.2版本在存在多个容器运行时的时候,需要用--cri-socket明确指定容器运行时才可以
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --cri-socket unix:///var/run/cri-dockerd.sock
kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
- 打印如下内容说明初始化成功
[init] Using Kubernetes version: v1.31.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W1102 21:01:09.893644 6091 checks.go:846] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.9" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.10" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.110]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master1 localhost] and IPs [192.168.1.110 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master1 localhost] and IPs [192.168.1.110 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.039180914s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 32.002622812s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.110:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:71c049dd7266a0916ff383e5e7b0a33b985754c5a687a1b16b63119d1856ed1f
配置kubectl的配置文件config,相当于对kubectl进行授权,这样kubectl命令就可以使用这个证书对k8s集群进行管理:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
NAME          STATUS     ROLES           AGE     VERSION
k8s-master1   NotReady   control-plane   3m37s   v1.31.2
扩容k8s集群-添加工作节点
master1节点查看加入节点的命令(注意:在工作节点上执行时需要手动追加--cri-socket参数):
kubeadm token create --print-join-command
kubeadm join 192.168.1.110:6443 --token 67b8cu.a815vaz3x3uqaeer --discovery-token-ca-cert-hash sha256:71c049dd7266a0916ff383e5e7b0a33b985754c5a687a1b16b63119d1856ed1f --cri-socket unix:///var/run/cri-dockerd.sock
在工作节点执行
kubeadm join 192.168.1.110:6443 --token 7xr5wj.u7r7gu94bbq59qx8 --discovery-token-ca-cert-hash sha256:4a8dc13f8752e26705222186578c501f47afb35a3478990e0093142c449135dd --cri-socket unix:///var/run/cri-dockerd.sock
查看集群节点,在控制节点执行
kubectl get nodes
NAME          STATUS     ROLES           AGE   VERSION
k8s-master1   NotReady   control-plane   51m   v1.31.2
k8s-node1     NotReady   <none>          18s   v1.31.2
k8s-node2     NotReady   <none>          5s    v1.31.2
可以看的出,此时集群状态都是NotReady,这是因为没有安装网络插件
安装网络插件calico
Calico网络模型主要工作组件:
1、Felix:运行在每一台 Host 的 agent 进程,主要负责网络接口管理和监听、路由、ARP 管理、ACL 管理和同步、状态上报等。保证跨主机容器网络互通。
2、etcd:分布式键值存储,相当于k8s集群中的数据库,存储着Calico网络模型中IP地址等相关信息。主要负责网络元数据一致性,确保 Calico 网络状态的准确性;
3、BGP Client(BIRD):Calico 为每一台 Host 部署一个 BGP Client,即每台host上部署一个BIRD。主要负责把 Felix 写入 Kernel 的路由信息分发到当前 Calico 网络,确保 Workload 间的通信的有效性;
4、BGP Route Reflector:在大型网络规模中,如果仅仅使用 BGP Client 形成 mesh 全网互联,BGP 连接数会随节点数急剧增长,因此引入 Route Reflector 做集中式的路由分发,各节点只需与 Reflector 建立连接即可。
安装calico
仅在master执行。先找到和k8s版本对应的calico版本,参考 Calico Documentation (tigera.io):
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/tigera-operator.yaml
上述如果网络连接不了,可以选择下面通过calico配置文件的方式部署
获取配置文件
wget https://docs.projectcalico.org/manifests/calico.yaml
修改calico配置
1、Daemonset配置

……
containers:
  # Runs calico-node container on each Kubernetes node. This
  # container programs network policy and routes on each
  # host.
  - name: calico-node
    image: docker.io/calico/node:v3.18.0
    ……
    env:
      # Use Kubernetes API as the backing datastore.
      - name: DATASTORE_TYPE
        value: "kubernetes"
      # Cluster type to identify the deployment type
      - name: CLUSTER_TYPE
        value: "k8s,bgp"
      # Auto-detect the BGP IP address.
      - name: IP
        value: "autodetect"
      # Enable IPIP
      - name: CALICO_IPV4POOL_IPIP
        value: "Always"
      # pod网段
      - name: CALICO_IPV4POOL_CIDR
        value: "10.244.0.0/16"

calico-node服务的主要参数如下:
- CALICO_IPV4POOL_IPIP:是否启用IPIP模式。启用IPIP模式时,Calico将在Node上创建一个名为tunl0的虚拟隧道。IP Pool可以使用两种模式:BGP或IPIP。使用IPIP模式时,设置CALICO_IPV4POOL_IPIP="Always";不使用IPIP模式时,设置CALICO_IPV4POOL_IPIP="Off",此时将使用BGP模式。
- IP_AUTODETECTION_METHOD:获取Node IP地址的方式,默认使用第1个网络接口的IP地址。对于安装了多块网卡的Node,可以使用正则表达式选择正确的网卡,例如"interface=eth.*"表示选择名称以eth开头的网卡的IP地址。

      - name: IP_AUTODETECTION_METHOD
        value: "interface=ens33"

扩展:calico的IPIP模式和BGP模式对比分析
1)IPIP
把一个IP数据包又套在一个IP包里,即把IP层封装到IP层的一个tunnel。它的作用基本上相当于一个基于IP层的网桥。一般来说,普通的网桥是基于mac层的,根本不需要IP;而ipip则是通过两端的路由做一个tunnel,把两个本来不通的网络通过点对点连接起来。
calico以ipip模式部署完毕后,node上会有一个tunl0的网卡设备,这是ipip做隧道封装用的,也是一种overlay模式的网络。当我们把节点下线、calico容器都停止后,这个设备依然还在,执行 rmmod ipip 命令可以将它删除。
2)BGP
BGP模式直接使用物理机作为虚拟路由器(vRouter),不再创建额外的tunnel。
边界网关协议(Border Gateway Protocol, BGP)是互联网上一个核心的去中心化的自治路由协议。它通过维护IP路由表或"前缀"表来实现自治系统(AS)之间的可达性,属于矢量路由协议。BGP不使用传统的内部网关协议(IGP)的指标,而是基于路径、网络策略或规则集来决定路由,因此更适合被称为矢量性协议而不是路由协议。通俗地说,就是将接入到机房的多条线路(如电信、联通、移动等)融合为一体,实现多线单IP。
BGP机房的优点:服务器只需要设置一个IP地址,最佳访问路由是由网络上的骨干路由器根据路由跳数与其它技术指标来确定的,不会占用服务器的任何系统资源。
官方提供的calico.yaml模板里,默认打开了ip-ip功能,该功能会在node上创建一个设备tunl0,容器的网络数据会经过该设备被封装一个ip头再转发。calico.yaml中通过修改calico-node的环境变量CALICO_IPV4POOL_IPIP来实现ipip功能的开关:默认是Always,表示开启;Off表示关闭ipip。

      - name: CLUSTER_TYPE
        value: "k8s,bgp"
      # Auto-detect the BGP IP address.
      - name: IP
        value: "autodetect"
      # Enable IPIP
      - name: CALICO_IPV4POOL_IPIP
        value: "Always"

总结:
calico BGP通信是基于TCP协议的,所以只要节点间三层互通即可完成,即三层互通的环境下bird就能生成与邻居有关的路由。但是这些路由和flannel host-gateway模式一样,需要二层互通才能访问得通。因此如果在实际环境中配置了BGP模式、生成了路由,但不同节点间pod访问不通,可能需要再确认下节点间是否二层互通。
为了解决节点间二层不通场景下的跨节点通信问题,calico也有自己的解决方案,即IPIP模式。
kubectl apply -f calico.yaml
kubectl get pod -n kube-system
# 打印如下内容
NAME                                      READY   STATUS                   RESTARTS      AGE
calico-kube-controllers-7c587b8bb-kdx8c   0/1     Pending                  0             17s
calico-node-brbfl                         0/1     Init:ErrImageNeverPull   0             17s
calico-node-f2m25                         0/1     Init:ErrImageNeverPull   0             17s
calico-node-zs86g                         0/1     Init:ErrImageNeverPull   0             17s
coredns-fcd6c9c4-hcvdj                    0/1     Pending                  0             9h
coredns-fcd6c9c4-wqtwc                    0/1     Pending                  0             9h
etcd-k8s-master1                          1/1     Running                  0             9h
kube-apiserver-k8s-master1                1/1     Running                  0             9h
kube-controller-manager-k8s-master1       1/1     Running                  1 (8h ago)    9h
kube-proxy-69kps                          1/1     Running                  0             8h
kube-proxy-7nhzp                          1/1     Running                  0             8h
kube-proxy-zj6cj                          1/1     Running                  0             9h
kube-scheduler-k8s-master1                1/1     Running                  1 (8h ago)    9h
calico pull 失败
上述方法安装完calico插件后发现:
calico-node-8b8tr   0/1   Init:ImagePullBackOff
kubectl describe pod calico-node-brbfl -n kube-system
发现是拉取镜像时报错。由于这个时间段 docker hub 被封禁,镜像直接拉取不下来;本文的容器运行时是 cri-dockerd,配置好镜像加速后直接用 docker 拉取即可(如果容器运行时用的是 containerd,则还需要在 docker 拉取后再通过 ctr 命令导入)。
# 添加如下内容,目前 https://docker.m.daocloud.io 可以pull到该镜像
vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://docker.m.daocloud.io", "https://581ltx2c.mirror.aliyuncs.com"]
}
# 找到calico中需要下载的镜像
grep image calico.yaml
          image: calico/cni:v3.25.0
          imagePullPolicy: IfNotPresent
          image: calico/cni:v3.25.0
          imagePullPolicy: IfNotPresent
          image: calico/node:v3.25.0
          imagePullPolicy: IfNotPresent
          image: calico/node:v3.25.0
          imagePullPolicy: IfNotPresent
          image: calico/kube-controllers:v3.25.0
          imagePullPolicy: IfNotPresent
# 通过docker下载镜像
for i in calico/cni:v3.25.0 calico/node:v3.25.0 calico/kube-controllers:v3.25.0; do docker pull $i ; done
# 或者在可以pull镜像的机器上pull,用docker导出镜像,上传到本地,然后导入
docker save calico/kube-controllers:v3.25.0 > kube-controllers-3.25.0.tar
docker save calico/cni:v3.25.0 > cni-3.25.0.tar
docker save calico/node:v3.25.0 > node-3.25.0.tar
# 删除失败的pod并且重新创建calico pod
kubectl delete -f calico.yaml
kubectl apply -f calico.yaml
# 查看k8s集群的情况
[root@k8s-master1 zhangtq]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS      AGE
kube-system   calico-kube-controllers-6d668dcdd6-b94sz   1/1     Running   0             90m
kube-system   calico-node-9f2w9                          1/1     Running   0             90m
kube-system   calico-node-l9mjs                          1/1     Running   0             90m
kube-system   calico-node-xqdrp                          1/1     Running   0             90m
kube-system   coredns-6554b8b87f-bdq6h                   1/1     Running   0             18h
kube-system   coredns-6554b8b87f-nw6kl                   1/1     Running   0             18h
kube-system   etcd-k8s-master1                           1/1     Running   0             18h
kube-system   kube-apiserver-k8s-master1                 1/1     Running   0             18h
kube-system   kube-controller-manager-k8s-master1        1/1     Running   3 (89m ago)   18h
kube-system   kube-proxy-nc8tw                           1/1     Running   0             17h
kube-system   kube-proxy-wh7dm                           1/1     Running   0             17h
kube-system   kube-proxy-zv48l                           1/1     Running   0             18h
kube-system   kube-scheduler-k8s-master1                 1/1     Running   4 (89m ago)   18h
给node节点打上标签,让其ROLES显示为work
kubectl label nodes k8s-node1 node-role.kubernetes.io/work=work
kubectl label nodes k8s-node2 node-role.kubernetes.io/work=work
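打完标签后可以再查看一下节点角色(示意输出,AGE等以实际为准):
kubectl get nodes
# NAME          STATUS   ROLES           AGE   VERSION
# k8s-master1   Ready    control-plane   1h    v1.31.2
# k8s-node1     Ready    work            1h    v1.31.2
# k8s-node2     Ready    work            1h    v1.31.2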
测试网络
# 首次安装并且登录busybox
[root@k8s-master1 ~]# kubectl run busybox --image docker.io/library/busybox:1.28 --image-pull-policy=IfNotPresent --restart=Never --rm -it -- sh
# 如果是安装过busybox 直接登录测试
[root@k8s-master1 ~]# kubectl exec -it busybox -- sh
# 登录后执行下面两条命令测试网络
/ # ping www.baidu.com
PING www.baidu.com (39.156.66.18): 56 data bytes
64 bytes from 39.156.66.18: seq=0 ttl=54 time=6.991 ms
64 bytes from 39.156.66.18: seq=1 ttl=54 time=7.303 ms
64 bytes from 39.156.66.18: seq=2 ttl=54 time=29.566 ms
^C
--- www.baidu.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 6.991/14.620/29.566 ms
/ # nslookup kubernetes.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
#10.96.0.10 就是我们coreDNS的clusterIP,说明coreDNS配置好了。
#解析内部Service的名称,是通过coreDNS去解析的。
#注意:
#busybox要用指定的1.28版本,不能用最新版本;最新版本的busybox中,nslookup会解析不到dns和ip
kubernetes-dashboard安装(旧版本)
警告
新版本dashboard已经采用helm安装了,此种安装方式已被废弃
安装
- 获取配置文件
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# 如果下载失败的话,可以到对应的仓库下载文件
- 编辑配置文件
# 编辑配置文件
cp recommended.yaml recommended-secret.yaml
vim recommended-secret.yaml
- 指定 type,及 nodePort
----
spec:
  type: NodePort   # 增加type=NodePort,为了外部可以访问dashboard应用
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001   # 指定端口,若此处不指定,可通过命令查看:kubectl get svc -n kubernetes-dashboard
  selector:
    k8s-app: kubernetes-dashboard
----
- 删除dashboard-certs,为后面使用自签证书作准备,即recommended-secret.yaml文件中找到下面内容进行删除
- 2.7.0版本已经不需要删除了,可以直接使用https访问
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
- 创建用户的配置文件
cat > dashboard-adminuser.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
# 初始化dashboard
kubectl apply -f recommended-secret.yaml
# 创建用户
kubectl apply -f dashboard-adminuser.yaml
- 如若要删除dashboard pod,命令如下:
kubectl delete -f recommended-secret.yaml
- 查看dashboard状态,确保所有节点是Running状态:
kubectl get pods,svc -n kubernetes-dashboard
kubectl get pods --all-namespaces
配置证书
Kubernetes Dashboard使用默认的自生成证书时,目前Edge/Chrome等浏览器都无法访问,只有Firefox可以访问。这是浏览器自带的安全机制决定的,给Dashboard签发证书后就可以正常访问了。
签发证书在指定目录下进行:
mkdir tls && cd tls
- 创建自签名CA:
# 生成私钥
openssl genrsa -out ca.key 2048
# 生成自签名证书
# -subj 用于指定证书的主题信息(国家/省/市/组织等)
openssl req -new -x509 \
-key ca.key \
-out ca.crt \
-days 3650 \
-subj "/C=CN/ST=HB/L=WH/O=DM/OU=YPT/CN=CA"
# 查看CA内容
openssl x509 -in ca.crt -noout -text
- 签发Dashboard证书
# 生成私钥
openssl genrsa -out dashboard.key 2048
# 申请签名请求,注意,IP为安装Dashboard服务器IP
openssl req -new -sha256 \
-key dashboard.key \
-out dashboard.csr \
-subj "/C=CN/ST=Shanghai/L=Shanghai/O=k8s/OU=System/CN=10.104.10.201"
# 配置文件,注意,IP为安装Dashboard服务器IP
# 通过这条命令看ip kubectl get pod -n kubernetes-dashboard -o wide
cat > dashboard.cnf << EOF
extensions = san
[san]
keyUsage = digitalSignature
extendedKeyUsage = clientAuth,serverAuth
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
subjectAltName = IP:10.106.155.127,IP:127.0.0.1,DNS:10.106.155.127,DNS:localhost
EOF
# 签发证书
openssl x509 -req -sha256 \
-days 3650 \
-in dashboard.csr \
-out dashboard.crt \
-CA ca.crt \
-CAkey ca.key \
-CAcreateserial \
-extfile dashboard.cnf
# 查看证书
openssl x509 -in dashboard.crt -noout -text
cd ..
kubectl create secret generic kubernetes-dashboard-certs \
--from-file=tls/dashboard.key \
--from-file=tls/dashboard.crt \
-n kubernetes-dashboard
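创建新的secret后,一般需要让dashboard的Pod重建才会加载新证书(示意命令,label以实际部署为准):
kubectl -n kubernetes-dashboard delete pod -l k8s-app=kubernetes-dashboard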
- 如若删除证书,命令如下:
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
- 创建token
kubectl -n kubernetes-dashboard create token admin-user
- 记得将token保存,浏览器访问 dashboard时,需要用到 token
eyJhbGciOiJSUzI1NiIsImtpZCI6IkxuUVpjazNvWHE1SGZMMllGWWg3czRvRHpES3FvWkl0aVZhTjhzeGhpQVUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjcyNjU0ODI5LCJpYXQiOjE2NzI2NTEyMjksImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiM2ZmZjM3NDItNjQ5Zi00N2ZlLWEzYjgtOTA2MGY5OWIwN2E0In19LCJuYmYiOjE2NzI2NTEyMjksInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.iYUHgwOx1ho8VKvlxnMjEBZJJf87O95El5mYDj8cVL3AgA2-kp67BaGQ4SSuT6KbwqiK6dAJuQeHp5TV_-dNxkyHBGXW7yWrx14ZBu42ydaXS7Ku1K-GFekMLsz7Q8OoaFP8uCrP_6o14IvYqdXHkw18GOeOJ6D_KCRyK_uDcnvNNmavM97BzfApV37bnC7MDx1zPkvx9WiM8NTijf91iUKyHY_Q4JSSNpgVKGovy2RYK11SHQGjUF6rZ5pTTbT-zwvcwX7wRX1Vck6fy2L4hJmWbHJrk_95mHS5mb9u9WPhoXBqwBaPDlcfIRMSaJ3CVyPOSc1nfciLW1GO1BTq_g
找到dashboard pod service端口
kubectl get svc -n kubernetes-dashboard
这时只能通过上面的ClusterIP在集群内访问,外部不能直接访问。
配置外部访问
Dashboard的登录访问方式有以下几种:本机的http方式访问,以及外部机器的https方式访问。
https://<domain_name>/…
非以上连接登录,页面都会出现以下提示,而无法登录。
本机访问配置
master机器输入命令,此时命令为挂起状态:
kubectl proxy
在本机浏览器输入(注意必须是 http),对,没错就是这么长的连接:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
外部机器访问
方法一:端口转发模式:
监听所有IP地址,并将8080转发至443https端口访问。
kubectl port-forward -n kubernetes-dashboard --address 0.0.0.0 service/kubernetes-dashboard 8080:443
这时在外部机器浏览器输入,(注意必须是 https),对,没错就是这么短的连接即可访问:
https://192.168.0.30:8080/ # 这里的ip是执行port-forward的主机ip,端口是上面转发出来的8080
方法二:NodePort:
编辑命令空间kubernetes-dashboard中的kubernetes-dashboard服务
kubectl -n kubernetes-dashboard edit service kubernetes-dashboard
打开后,将type: ClusterIP 改为 type: NodePort
apiVersion: v1
kind: Service
......
ports:
- nodePort: 30169
  port: 443
  protocol: TCP
  targetPort: 8443
selector:
  k8s-app: kubernetes-dashboard
sessionAffinity: None
type: NodePort   #修改这一行即可,原为type: ClusterIP
status:
  loadBalancer: {}
重新查看命令空间kubernetes-dashboard中的kubernetes-dashboard服务的端口地址。
[root@k8s-master1 ~]# kubectl -n kubernetes-dashboard get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.106.155.127   <none>        443:30001/TCP   26h
显示如上,外部暴露的端口为30001。
这时在外部机器浏览器输入IP加30001,(注意必须是 https)即可访问:
方法三:API Server:
注:这种方法仅适用于在浏览器中安装用户证书时才可用,可自行研究,这里不深究了。
如果没有安装证书,页面会显示"检测到不安全的访问。无法登录。通过 HTTPS 或使用 localhost 安全访问 Dashboard"。设置API server接收所有主机的请求:
kubectl proxy --address='0.0.0.0' --accept-hosts='^*$'
- 浏览器访问地址为:
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
kubernete-dashboard安装(新版本)
Kubernetes Dashboard 是一个通用的、基于 Web 的 UI,适用于 Kubernetes 集群。它允许用户管理集群中运行的应用程序并对其进行故障排除,以及管理集群本身。
从版本 7.0.0 开始,官方放弃了对基于清单的安装的支持。目前仅支持基于 Helm 的安装。由于多容器设置和对 Kong 网关 API 代理的硬依赖,轻松支持基于清单的安装是不可行的
安装helm
wget https://get.helm.sh/helm-v3.16.1-linux-amd64.tar.gz
tar zxf helm-v3.16.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm && rm -rf linux-amd64
helm version
[root@k8s-master1 dashboard-chart]# helm version
version.BuildInfo{Version:"v3.16.1", GitCommit:"5a5449dc42be07001fd5771d56429132984ab3ab", GitTreeState:"clean", GoVersion:"go1.22.7"}
helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo bitnami
helm安装dashboard
# helm直接安装
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
# 由于网络受限,这里我是先下载了dashboard包
# https://github.com/kubernetes/dashboard/releases/download/
helm upgrade --install kubernetes-dashboard kubernetes-dashboard-7.10.0.tgz --create-namespace --namespace kubernetes-dashboard
检查是否启动
kubectl get pod -A
- 发现拉取镜像失败
- 解决方法:
- 通过查看pod确定拉取失败的镜像标签,然后在可以联网的机器上用docker拉取、导出,再导入到各节点(本文容器运行时为cri-dockerd,直接docker load即可;若是containerd则需要用ctr导入)
确定需要拉取的镜像
- 查看每一个dashboard pod,确定拉取失败的镜像,一共有五个
kubectl describe pod kubernetes-dashboard-api-575495cdc9-rlfhz -n kubernetes-dashboard
kubectl describe pod kubernetes-dashboard-auth-7f9956566f-tkcg8 -n kubernetes-dashboard
kubectl describe pod kubernetes-dashboard-metrics-scraper-df869c886-8x26z -n kubernetes-dashboard
kubectl describe pod kubernetes-dashboard-web-6ccf8d967-nlw2h -n kubernetes-dashboard
kubectl describe pod kubernetes-dashboard-kong-57d45c4f69-s9jm2 -n kubernetes-dashboard
拉取镜像
docker pull kubernetesui/dashboard-auth:1.2.2
docker pull docker.io/kubernetesui/dashboard-api:1.10.1
docker pull docker.io/library/kong:3.6
docker pull docker.io/kubernetesui/dashboard-metrics-scraper:1.2.1
docker pull docker.io/kubernetesui/dashboard-web:1.6.0
导出docker镜像
docker save kubernetesui/dashboard-web:1.6.0 > dashboard-web1.6.0.tar
docker save kubernetesui/dashboard-api:1.10.1 > dashboard-api1.10.1.tar
docker save kubernetesui/dashboard-auth:1.2.2 > dashboard-auth1.2.2.tar
docker save kubernetesui/dashboard-metrics-scraper:1.2.1 > dashboard-metrics-scraper.1.2.1.tar
docker save kong:3.6 > kong3.6.tar
上传至每个节点
scp dashboard-* root@k8s-node1:/home/zhangtq
scp dashboard-* root@k8s-node2:/home/zhangtq
scp kong3.6.tar root@k8s-node1:/home/zhangtq
scp kong3.6.tar root@k8s-node2:/home/zhangtq
导入镜像
注
每个节点都需要导入
docker load -i dashboard-api1.10.1.tar
docker load -i dashboard-auth1.2.2.tar
docker load -i dashboard-metrics-scraper.1.2.1.tar
docker load -i dashboard-web1.6.0.tar
docker load -i kong3.6.tar
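导入后可以在每个节点上确认一下镜像是否都已存在(示意):
docker images | grep -E 'kubernetesui|kong'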
重新安装
helm delete kubernetes-dashboard --namespace kubernetes-dashboard
helm upgrade --install kubernetes-dashboard kubernetes-dashboard-7.10.0.tgz --create-namespace --namespace kubernetes-dashboard
kubectl get pod -A
- 发现全部启动正常
kubernetes-dashboard kubernetes-dashboard-api-b79767647-trwnm 1/1 Running 0 58m
kubernetes-dashboard kubernetes-dashboard-auth-5f9d968665-59984 1/1 Running 0 58m
kubernetes-dashboard kubernetes-dashboard-kong-57d45c4f69-lqswk 1/1 Running 0 58m
kubernetes-dashboard kubernetes-dashboard-metrics-scraper-df869c886-kjvnk 1/1 Running 0 58m
kubernetes-dashboard kubernetes-dashboard-web-6ccf8d967-mrt9w 1/1 Running 0 58m
访问dashboard
参考dashboard官方文档:kubernetes/dashboard 仓库中的 docs/user/accessing-dashboard/README.md
方式一:修改NodePort
[root@k8s-master1 zhangtq]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard-api ClusterIP 10.111.55.149 <none> 8000/TCP 4h38m
kubernetes-dashboard-auth ClusterIP 10.102.2.131 <none> 8000/TCP 4h38m
kubernetes-dashboard-kong-proxy ClusterIP 10.104.23.171 <none> 443/TCP 4h38m
kubernetes-dashboard-metrics-scraper ClusterIP 10.103.76.60 <none> 8000/TCP 4h38m
kubernetes-dashboard-web ClusterIP 10.97.198.189 <none> 8000/TCP 4h38m
# 找到kubernetes-dashboard-kong-proxy,可以看到是由kong-proxy对外提供web服务的
# 编辑kubernetes-dashboard-kong-proxy, 将type 改为NodePort
[root@k8s-master1 zhangtq]# kubectl edit svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard
spec:
  clusterIP: 10.104.23.171
  clusterIPs:
  - 10.104.23.171
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: kong-proxy-tls
    nodePort: 32759
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: kubernetes-dashboard
    app.kubernetes.io/name: kong
  sessionAffinity: None
  type: ClusterIP   # 将ClusterIP改为NodePort
status:
  loadBalancer: {}
# 查看端口,由下面的输出可知端口为32759
[root@k8s-master1 zhangtq]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard-api ClusterIP 10.111.55.149 <none> 8000/TCP 4h41m
kubernetes-dashboard-auth ClusterIP 10.102.2.131 <none> 8000/TCP 4h41m
kubernetes-dashboard-kong-proxy NodePort 10.104.23.171 <none> 443:32759/TCP 4h41m
kubernetes-dashboard-metrics-scraper ClusterIP 10.103.76.60 <none> 8000/TCP 4h41m
kubernetes-dashboard-web ClusterIP 10.97.198.189 <none> 8000/TCP 4h41m
- 访问 https://节点IP:32759,需要token,请参考创建token章节
创建token
cat > dashboard-user.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
kubectl apply -f dashboard-user.yaml
# 创建token
kubectl -n kube-system create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6Inc1SldjZlVjaFpTQTBWNWtxRDZHamxTdTd6em5NZWN2aFBWMDR1bVE4OEUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzMwNzc2MjY3LCJpYXQiOjE3MzA3NzI2NjcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMDA4NTI3NTAtYzc5My00MjY2LTgzNjQtMmE1OGM5OWY2YTkwIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZDIwZTRmZGEtNWE2NC00NTFhLWIzYjAtMDE2ZmEwNDQ0ZjY5In19LCJuYmYiOjE3MzA3NzI2NjcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.fOsizY1MTPoQtWQDSgLrEZIay1cb3V5MVfNK14gDwXek7lgZFfoPPEGVgqgnSExtHhiRgyI9mSIHUnJ0JZZrdZ7RFJACSCwjNRlNEi_Md2_GKGmHhZrGp7Y2r5mjm1nlKVYICELqVx3d06yeC2bFXfI4gquuG_BA2ESg1HZ8Zgfo6ZYAT2y7-2_E6cAN9NGjU7YDbXo0VbzeEEVqp6zdvo4-0i1lB2yKcHe6tUEq-NYS-oThcbuI741Af1kSgJW7qAA7W9XsYpAU8diQ3Xy6rKRJAulh0S04bS4HX5_-017c98Bu8gX3ilzstFDmjsCba7Apigk4VCmwnn-Rp4D-8A
访问
网址为 https://masterip:32759

- 将上面的token填入就可以登录