This walkthrough builds a Kubernetes 1.28.2 cluster with one control-plane node and two worker nodes. First, prepare the machines:
| host | hostname | os | role | hardware |
| --- | --- | --- | --- | --- |
| 192.168.31.200 | master01 | centos7.9 | control-plane | 2 CPUs, 3 GB RAM, disk 1: 50 GB |
| 192.168.31.201 | node01 | centos7.9 | worker | 2 CPUs, 3 GB RAM, disk 1: 50 GB, disk 2: 50 GB |
| 192.168.31.202 | node02 | centos7.9 | worker | 2 CPUs, 3 GB RAM, disk 1: 50 GB, disk 2: 50 GB |
# Stop and disable firewalld
systemctl stop firewalld
systemctl disable firewalld
systemctl is-enabled firewalld
# Sync the clock now, then hourly via cron
ntpdate ntp1.aliyun.com
vi /etc/crontab
1 * * * * root /usr/sbin/ntpdate ntp1.aliyun.com && /sbin/hwclock -w
# Disable SELinux permanently, then for the current boot
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0
# Disable swap (kubelet requires it off)
sed -i '/swap/s/^/#/g' /etc/fstab
swapoff -a
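As a sanity check, the swap-disabling sed can be replayed on a throwaway copy of fstab (the sample content below is made up):

```shell
# Demonstrate the fstab edit on a throwaway copy (sample content is invented)
tmpfstab=$(mktemp)
cat > "$tmpfstab" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Same edit as above: prefix every line containing "swap" with '#'
sed -i '/swap/s/^/#/g' "$tmpfstab"

commented=$(grep -c '^#' "$tmpfstab")
grep swap "$tmpfstab"
```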
[root@master01 ~]# cat >>/etc/hosts <<EOF
192.168.31.200 master01
192.168.31.201 node01
192.168.31.202 node02
EOF
Overview: upgrading the CentOS 7 kernel online with yum.
[root@master01 ~]# uname -a
Linux localhost.localdomain 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
# Import the ELRepo key and repo, then install the long-term-support kernel
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum -y --enablerepo=elrepo-kernel install kernel-lt
# List the boot menu entries, then make the new kernel the default
sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
grub2-set-default 0
Reboot:
reboot
Check the kernel version:
[root@master01 ~]# uname -a
Linux localhost.localdomain 4.4.244-1.el7.elrepo.x86_64 #1 SMP Tue Nov 17 18:57:10 EST 2020 x86_64 x86_64 x86_64 GNU/Linux
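If you script this step, the "is the kernel new enough?" check can be sketched with version sort; the threshold and sample string below are illustrative, not from the article:

```shell
# Check that a kernel version string meets a minimum using sort -V (version sort)
min_ver="4.4"                       # example threshold
kver="4.4.244-1.el7.elrepo"         # sample value; on a real host use: uname -r

# The older of the two versions sorts first; if that's min_ver, we're new enough
oldest=$(printf '%s\n%s\n' "$min_ver" "$kver" | sort -V | head -n1)
if [ "$oldest" = "$min_ver" ]; then
  result="ok"
else
  result="too-old"
fi
echo "$result"
```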
cat > /etc/sysctl.d/Kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
# Load the new settings
sysctl --system
What these settings mean:

- `net.bridge.bridge-nf-call-ip6tables = 1`: IPv6 packets crossing a bridge are passed to the ip6tables rules.
- `net.bridge.bridge-nf-call-iptables = 1`: IPv4 packets crossing a bridge are passed to the iptables rules.
- `net.ipv4.ip_forward = 1`: allow forwarding of IPv4 packets even when this host is not their destination.
- `vm.swappiness = 0`: controls how aggressively the kernel swaps physical memory out. It is a percentage from 0 to 100 and defaults to 60; 0 means swap as little as possible, 100 means aggressively swap out inactive pages.
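A quick way to confirm that all four keys made it into the file; the sketch writes to a temp path so it runs anywhere, whereas the real file lives in /etc/sysctl.d/:

```shell
# Write the same four settings to a temp file and verify each key is present
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF

missing=0
for key in net.bridge.bridge-nf-call-ip6tables \
           net.bridge.bridge-nf-call-iptables \
           net.ipv4.ip_forward \
           vm.swappiness; do
  grep -q "^$key" "$conf" || { echo "missing: $key"; missing=1; }
done
echo "missing=$missing"
```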
Kubernetes relies on iptables for service discovery, traffic routing, and pod-to-pod communication, so this step matters: without these settings, cluster networking breaks (e.g. pods cannot reach each other).
# Install IPVS tooling and other dependencies
yum -y install conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
# Load the required kernel modules at boot
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
`ip_vs`, `ip_vs_rr`, `ip_vs_wrr`, and `ip_vs_sh` are the IPVS kernel modules; they provide the load-balancing algorithms (round-robin, weighted round-robin, and source hashing).
`nf_conntrack` and `nf_conntrack_ipv4` are the connection-tracking modules, which are essential for firewalling and NAT.
[root@master01 ~]# reboot
# Check that the modules loaded
lsmod |egrep "ip_vs|nf_conntrack_ipv4"
nf_conntrack_ipv4 15053 26
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145458 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 139264 10 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_nat_masquerade_ipv6,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
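The same check can be scripted; here the captured `lsmod` sample above is replayed through a heredoc so the sketch runs without root:

```shell
# Verify the required modules against lsmod output (replayed sample, no root needed)
lsmod_out=$(cat <<'EOF'
nf_conntrack_ipv4      15053  26
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264 10 ip_vs,nf_nat
EOF
)

loaded=0
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
  # First column of each lsmod line is the module name; require an exact match
  echo "$lsmod_out" | awk '{print $1}' | grep -qx "$m" && loaded=$((loaded+1))
done
echo "loaded=$loaded"
```

On a real host, replace the heredoc with `lsmod_out=$(lsmod)`.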
A bit of history: Docker dominated early on, but it never implemented CRI, so Kubernetes had to ship the dockershim adapter to plug Docker into CRI; dockershim was dropped in Kubernetes 1.24. containerd is an open-source project spun out of Docker that emphasizes simplicity, robustness, and portability. It is responsible for:
Managing the container lifecycle (from creation to destruction)
Pulling/pushing container images
Storage management (image and container data)
Invoking runc to run containers (it drives OCI runtimes such as runc; the OCI spec covers the namespace and cgroup setup, root-filesystem mounting, and the other steps needed to create a container)
Managing container network interfaces and networking
yum -y install yum-utils device-mapper-persistent-data lvm2
# Add the Alibaba Cloud repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Kernel modules for containerd
cat >>/etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
# Load them immediately
modprobe overlay
modprobe br_netfilter
`overlay` is a filesystem type that records changes in a separate layer without modifying the underlying files (copy-on-write); Docker and other container runtimes use it to build container filesystems.
`br_netfilter` is a networking kernel module that lets iptables and other tools filter bridged traffic. This matters for Kubernetes networking, especially with overlay networks such as flannel or Calico.
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# Use systemd to manage cgroups
sed -i '/SystemdCgroup/s/false/true/g' /etc/containerd/config.toml
# Pull the sandbox (pause) image from Alibaba Cloud instead of registry.k8s.io
sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
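To see exactly what the substitution does, replay it on a one-line config fragment:

```shell
# Replay the sandbox_image substitution on a minimal config.toml fragment
toml=$(mktemp)
echo 'sandbox_image = "registry.k8s.io/pause:3.6"' > "$toml"

# Same sed as above, pointed at the fragment
sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' "$toml"

cat "$toml"
```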
# Enable and start containerd
systemctl enable containerd
systemctl start containerd
cat >/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# List available versions
yum list kubelet --showduplicates |grep 1.28
# Install; 1.28.2 was the latest at the time of writing, so that is what I installed
yum -y install kubectl-1.28.2 kubelet-1.28.2 kubeadm-1.28.2
# Enable and start kubelet
systemctl enable kubelet
systemctl start kubelet
# List the required images
[root@master01 ~]# kubeadm config images list --kubernetes-version=v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/coredns/coredns:v1.10.1
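The `--image-repository` flag used below effectively remaps these names onto the mirror. The rewrite can be sketched as a prefix substitution; note the assumption that coredns drops its extra path level on the Alibaba mirror:

```shell
# Sketch how --image-repository remaps image names to a mirror registry
mirror="registry.aliyuncs.com/google_containers"
images="registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/coredns/coredns:v1.10.1"

# Flatten the coredns path first, then swap the registry prefix
mirrored=$(echo "$images" \
  | sed "s#^registry.k8s.io/coredns/#registry.k8s.io/#" \
  | sed "s#^registry.k8s.io/#$mirror/#")
echo "$mirrored"
```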
# Initialize the control plane
[root@master01 ~]# kubeadm init --kubernetes-version=1.28.2 \
--apiserver-advertise-address=192.168.31.200 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=172.16.0.0/16
`apiserver-advertise-address`: the control-plane node's IP
`pod-network-cidr`: any CIDR that does not conflict with your existing networks
`image-repository`: pull images from Alibaba Cloud
The command prints a long block of output; the important part is at the end:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.31.200:6443 --token l906wz.0fydt3hcfbogwlo9 \
--discovery-token-ca-cert-hash sha256:2604d3aab372a483b26bcbdafdb54d7746226975c3a317db07d94eccdfca51be
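The `--discovery-token-ca-cert-hash` value is the SHA-256 digest of the CA certificate's public key in DER form. A sketch of the derivation using a throwaway self-signed certificate; on the control plane you would point at /etc/kubernetes/pki/ca.crt instead:

```shell
# Derive a discovery-token-ca-cert-hash style value from a demo CA cert
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null

# Extract the public key, convert to DER, hash it
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```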
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
[root@master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
control-plane01 NotReady control-plane 50s v1.28.2
[root@master01 ~]# kubectl get pods -A
[root@master01 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7bdc4cb885-fs2tz 0/1 Pending 0 13d
coredns-7bdc4cb885-wk7c9 0/1 Pending 0 13d
etcd-control-plane01 1/1 Running 0 13d
kube-apiserver-control-plane01 1/1 Running 0 13d
kube-controller-manager-control-plane01 1/1 Running 0 13d
kube-proxy-mfzmq 1/1 Running 3 (25h ago) 13d
kube-scheduler-control-plane01 1/1 Running 0 13d
# If the token has expired, regenerate the join command:
kubeadm token create --print-join-command
wget https://docs.projectcalico.org/manifests/calico.yaml
In calico.yaml, set CALICO_IPV4POOL_CIDR to the pod network CIDR passed to kubeadm init (172.16.0.0/16 above).
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Add the following below
- name: IP_AUTODETECTION_METHOD
value: "interface=eth0"
# eth0 is the local NIC name
If the CNI is broken or its credentials are stale, pod creation fails with sandbox errors like this one:
Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "5d6557ac061d164d494042e7e9b6cc38c95688a358275a78f5bbb7dd3883c063" network for pod "ingress-nginx-admission-create-b9q9w": networkPlugin cni failed to set up pod "ingress-nginx-admission-create-b9q9w_ingress-nginx" network: error getting ClusterInformation: connection is unauthorized: Unauthorized, failed to clean up sandbox container "5d6557ac061d164d494042e7e9b6cc38c95688a358275a78f5bbb7dd3883c063" network for pod "ingress-nginx-admission-create-b9q9w": networkPlugin cni failed to teardown pod "ingress-nginx-admission-create-b9q9w_ingress-nginx" network: error getting ClusterInformation: connection is unauthorized: Unauthorized]
kubectl apply -f calico.yaml
# Verify
[root@master01 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6849cf9bcf-gv6xx 1/1 Running 0 13d
calico-node-2d7xx 1/1 Running 0 13d
coredns-7bdc4cb885-fs2tz 1/1 Running 0 13d
coredns-7bdc4cb885-wk7c9 1/1 Running 0 13d
etcd-control-plane01 1/1 Running 0 13d
kube-apiserver-control-plane01 1/1 Running 0 13d
kube-controller-manager-control-plane01 1/1 Running 0 13d
kube-proxy-mfzmq 1/1 Running 3 (25h ago) 13d
kube-scheduler-control-plane01 1/1 Running 0 13d
# Run on every worker node
kubeadm join 192.168.31.200:6443 --token l906wz.0fydt3hcfbogwlo9 \
--discovery-token-ca-cert-hash sha256:2604d3aab372a483b26bcbdafdb54d7746226975c3a317db07d94eccdfca51be
# Check node status
[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
control-plane01 Ready control-plane 13d v1.28.2
node01 Ready <none> 13d v1.28.2
node02 Ready <none> 13d v1.28.2
# kubectl command completion
yum -y install bash-completion
echo "source <(kubectl completion bash)" >> /etc/profile
source /etc/profile
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v3.0.0-alpha0/charts/kubernetes-dashboard.yaml
Edit the kubernetes-dashboard Service to expose it as a NodePort:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000 # added (valid range 30000-32767)
  selector:
    k8s-app: kubernetes-dashboard
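nodePort values must fall inside the apiserver's service-node-port-range (30000-32767 by default); a trivial pre-apply sanity check:

```shell
# Validate a nodePort against the default service-node-port-range
nodePort=30000
if [ "$nodePort" -ge 30000 ] && [ "$nodePort" -le 32767 ]; then
  valid=yes
else
  valid=no
fi
echo "$valid"
```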
# Install
kubectl apply -f recommended.yaml
# Check progress
[root@master01 ~]# kubectl get all -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-5cb4f4bb9c-h549p 1/1 Running 3 (26h ago) 13d
pod/kubernetes-dashboard-6967859bff-cm4tl 1/1 Running 4 (26h ago) 13d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.108.31.72 <none> 8000/TCP 13d
service/kubernetes-dashboard NodePort 10.102.47.173 <none> 443:30000/TCP 13d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dashboard-metrics-scraper 1/1 1 1 13d
deployment.apps/kubernetes-dashboard 1/1 1 1 13d
NAME DESIRED CURRENT READY AGE
replicaset.apps/dashboard-metrics-scraper-5cb4f4bb9c 1 1 1 13d
replicaset.apps/kubernetes-dashboard-6967859bff 1 1 1 13d
[root@master01 ~]# vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin
    namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  name: kubernetes-dashboard-admin
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin"
type: kubernetes.io/service-account-token
# Create a token for the admin user
kubectl -n kubernetes-dashboard create token admin
# Or read the token from the Secret
Token=$(kubectl -n kubernetes-dashboard get secret |awk '/kubernetes-dashboard-admin/ {print $1}')
kubectl describe secrets -n kubernetes-dashboard ${Token} |grep token |awk 'NR==NF {print $2}'
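The token is a JWT: header, payload, and signature, each base64url-encoded and joined with dots. Decoding the payload can be sketched on a synthetic token, so no cluster is needed:

```shell
# Build a synthetic JWT-shaped token and decode its payload segment
payload='{"iss":"kubernetes/serviceaccount"}'
# base64url-encode: standard base64, strip padding/newlines, swap +/ for -_
body=$(printf '%s' "$payload" | base64 | tr -d '=\n' | tr '+/' '-_')
token="header.${body}.signature"

# Extract the middle segment, restore padding, reverse the url-safe mapping
seg=$(printf '%s' "$token" | cut -d. -f2)
pad=$(( (4 - ${#seg} % 4) % 4 ))
padding=$(printf '%*s' "$pad" '' | tr ' ' '=')
decoded=$(printf '%s%s' "$seg" "$padding" | tr '_-' '/+' | base64 -d)
echo "$decoded"
```

The same pipeline applied to a real service-account token reveals its issuer, namespace, and service-account name.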
# Download the metrics-server manifest
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  # ...
  template:
    spec:
      containers:
        - args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --kubelet-use-node-status-port
            - --metric-resolution=15s
            - --kubelet-insecure-tls # line to add
          image: registry.cn-hangzhou.aliyuncs.com/rainux/metrics-server:v0.6.4
kubectl apply -f metrics-server.yaml
# Check that it is running
kubectl get pods -n kube-system | grep metrics
# Query the cluster metrics API
kubectl get --raw /apis/metrics.k8s.io/v1beta1 | python3 -m json.tool
As the output shows, the cluster exposes resource metrics for nodes and pods.
{
    "kind": "APIResourceList",
    "apiVersion": "v1",
    "groupVersion": "metrics.k8s.io/v1beta1",
    "resources": [
        {
            "name": "nodes",
            "singularName": "",
            "namespaced": false,
            "kind": "NodeMetrics",
            "verbs": [
                "get",
                "list"
            ]
        },
        {
            "name": "pods",
            "singularName": "",
            "namespaced": true,
            "kind": "PodMetrics",
            "verbs": [
                "get",
                "list"
            ]
        }
    ]
}
# Check the result after a minute or two
[root@master01 ~]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master 256m 12% 2002Mi 52%
k8s-node1 103m 5% 1334Mi 34%
k8s-node2 144m 7% 1321Mi 34%
# Show help for the top command
kubectl top --help
# Node resource usage
kubectl top node
# Pod resource usage
kubectl top pod
# Pod resource usage across all namespaces
kubectl top pod -A
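The tabular output is easy to post-process, for example to flag nodes above a CPU% threshold. The sample output from above is replayed via heredoc here, and the threshold is arbitrary:

```shell
# Flag nodes whose CPU% exceeds a threshold, parsing `kubectl top nodes` output
threshold=10
hot=$(awk -v t="$threshold" 'NR>1 { sub(/%/,"",$3); if ($3+0 > t) print $1 }' <<'EOF'
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master   256m         12%    2002Mi          52%
k8s-node1    103m         5%     1334Mi          34%
k8s-node2    144m         7%     1321Mi          34%
EOF
)
echo "$hot"
```

On a live cluster, pipe `kubectl top nodes` into the same awk filter.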
[root@k8s-master01 dashboard]# vim /root/.kube/config # append the token field
- name: admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxekNDQXIrZ0F3SUJBZ0lVTFFhcXpaaitVc0tRU1BiWVlMRmxDWnhDZVBNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SUJjTk1qQXdOREU1TURVeE1UQXdXaGdQTWpBM01EQTBNRGN3TlRFeE1EQmFNR2N4Q3pBSkJnTlYKQkFZVEFrTk9NUkV3RHdZRFZRUUlFd2hJWVc1bldtaHZkVEVMTUFrR0ExVUVCeE1DV0ZNeEZ6QVZCZ05WQkFvVApEbk41YzNSbGJUcHRZWE4wWlhKek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweERqQU1CZ05WQkFNVEJXRmtiV2x1Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBeG1MWWxNQXFEeGVreXljWWlvQXUKU2p5VzhiUCtxTzF5bUhDWHVxSjQ3UW9Vd0lSVEFZdVAyTklQeFBza04xL3ZUeDBlTjFteURTRjdYd3dvTjR5cApacFpvRjNaVnV1NFNGcTNyTUFXT1d4VU93REZNZFZaSkJBSGFjZkdMemdOS01FZzRDVDhkUmZBUGxrYVdxNkROCmJKV3JYYW41WGRDUnE2NlpTdU9lNXZXTWhENzNhZ3UzWnBVZWtHQmpqTEdjNElTL2c2VzVvci9LeDdBa0JuVW0KSlE3M2IyWUl3QnI5S1ZxTUFUNnkyRlhsRFBpaWN1S0RFK2tGNm9leG04QTljZ1pKaDloOFZpS0trdnV3bVh5cwpNREtIUzJEektFaTNHeDVPUzdZR1ZoNFJGTGp0VXJuc1h4TVBtYWttRFV1NkZGSkJsWlpkUTRGN2pmSU9idldmCjlRSURBUUFCbzM4d2ZUQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUcKQ0NzR0FRVUZCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGS1pCcWpKRldWejZoV1l1ZkZGdApHaGJnQ05MU01COEdBMVVkSXdRWU1CYUFGQWJLKzBqanh6YUp3R1lGYWtpWVJjZzZENkpmTUEwR0NTcUdTSWIzCkRRRUJDd1VBQTRJQkFRQ05Ra3pueDBlSDU3R2NKZTF5WUJqNkY4YmVzM2VQNGRWcUtqQVZzSkh6S3dRWnpnUjIKcnVpMmdZYTZjdWNMNGRWVllHb05mRzRvdWI0ekJDTUIzZkRyN2FPRFhpcGcrdWx3OFpRZGRaN3RIYnZRTlIyMApTTHhnWnlFYU9MSFdmRVNYNFVJZk1mL3pDaGZ0Yzdhb1NpcUNhMGo2NmY2S3VVUnl6SSsxMThqYnpqK1gwb1d1ClVmdVV3dk5xWHR5ZjlyUTVWQW40bjhiU25nZDBGOXgzNFlyeUNMQ0REOWdBaWR3SDlVM3I3eVVGQ1Rkbm9leEgKSTgyYjRLdHZzT2NGMk5Dd21WZDFBWDNJSEFmMENRMEZSQ21YWjF3aFNxd1lFeVAxTStMMEcxN29CTmU5cmttMwo4U0NyWjczaWtiN0k1NXlVOWRrMjdXbVByb1hXMjAvcXhHeDYKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBeG1MWWxNQXFEeGVreXljWWlvQXVTanlXOGJQK3FPMXltSENYdXFKNDdRb1V3SVJUCkFZdVAyTklQeFBza04xL3ZUeDBlTjFteURTRjdYd3dvTjR5cFpwWm9GM1pWdXU0U0ZxM3JNQVdPV3hVT3dERk0KZFZaSkJBSGFjZkdMemdOS01FZzRDVDhkUmZBUGxrYVdxNkROYkpXclhhbjVYZENScTY2WlN1T2U1dldNaEQ3MwphZ3UzWnBVZWtHQmpqTEdjNElTL2c2VzVvci9LeDdBa0JuVW1KUTczYjJZSXdCcjlLVnFNQVQ2eTJGWGxEUGlpCmN1S0RFK2tGNm9leG04QTljZ1pKaDloOFZpS0trdnV3bVh5c01ES0hTMkR6S0VpM0d4NU9TN1lHVmg0UkZManQKVXJuc1h4TVBtYWttRFV1NkZGSkJsWlpkUTRGN2pmSU9idldmOVFJREFRQUJBb0lCQVFDdkRPRld3QWxjcjl3MQpkaFh0Z0JWWVpBWTgyRHBKRE53bExwUnpscEZsZDVQQUhBS3lSbGR6VmtlYjVJNmNYZ1pucEtYWTZVaDIxYWhxCndldHF1Szl4V2g0WE5jK0gxaklYMlBiQnRPVmI4VVRHeWJsUmdBV0ZoNjBkQmFuNjZtUTRIa0Z6eDBFcFNSNDMKMTZselg3eGpwOTFDRkkxNC9tVExQSkQreDhLYXYxcDVPU1BYQkxhdzR6V1JycmFVSnFrVUtZcmRJUVlkNC9XQQpLNVp3WGpRdklpZzlGclArb2Fnb1kyelFzODFXMmlVd1pXanhkQnV0dXZiQW5mVEc0ZkQvUjc3MnNzUU44dkFvCldDUGpTcTlLckJZQzJYaWd5L2JkSHFFT3lpSmxUQVpaazZLQXlBN0ExbCs5WDFSOWxyUTFPTkpOS1k5WWRybTIKajFudW1WSXhBb0dCQU5sS3B4MW9tQVBQK0RaOGNMdjkwZDlIWm1tTDJZYkppUUtNdEwrUTJLKzdxZHNwemtOaQorb1J2R0NOR0R1U3JZbDZwWjFDMk0xajkxNXJwbWFrZmJmV2NDRWtKenlVRjhSMzUyb2haMUdYeWQzcmkxMWxqCndpcnlmcHl2QnF1SWlKYWR4Rk1UdGRoTmFuRTNFeURrSVJ0UW03YXcyZHppUnNobHkxVXFGMEYvQW9HQkFPbTYKQjFvbnplb2pmS0hjNnNpa0hpTVdDTnhXK2htc1I4akMxSjVtTDFob3NhbmRwMGN3ekJVR05hTDBHTFNjbFRJbwo4WmNNeWdXZU1XbmowTFA3R0syVUwranlyK01xVnFkMk1PRndLanpDOHNXQzhTUEovcC96ZWZkL2ZSUE1PamJyCm8rMExvblUrcXFjTGw1K1JXQ2dJNlA1dFo2VGR5eTlWekFYVUV2Q0xBb0dBQjJndURpaVVsZnl1MzF5YWt5M3gKeTRTcGp3dC9YTUxkOHNKTkh3S1hBRmFMVWJjNUdyN3kvelN5US9HTmJHb1RMbHJqOUxKaFNiVk5kakJrVm9tRgp2QXVYbExYSzQ5NHgrKzJhYjI5d2VCRXQxWGlLRXJmOTFHenp0KytYY0oxMDJuMkNSYnEwUmkxTlpaS1ZDbGY4CmNPdnNndXZBWVhFdExJT2J6TWxraFkwQ2dZRUEyUnFmOGJLL3B4bkhqMkx5QStYT3lMQ1RFbmtJWUFpVHRYeWsKbTI0MzFGdUxqRW9FTkRDem9XUGZOcnFlcUVZNm9CbEFNQnNGSFNyUW81ZW1LVWk0cDZQYXpQdUJQZlg2QUJ2ZApVOHNvc01BMVdocERmQWNKcWZJei9SNURSTHlUNXFnRDRSREptemJXdGN3aXoybm5CV2toWkJTa0RaU29SQlBpCkxCZk9iL2tDZ1lFQXk1ZS9MaXgzSzVvdHhGRC8xVVV0cGc2dEJqMksxVkg5bDJJWFBidmFKMjZQYnZWYkEwQTUKM0Z5UmZnSTlZTTc3T3QxbTY0ZlRTV21YdTJKU0JpM3FFQ2xic3FRT2taZXZ1V2VsSVY5WnhWblc5NVMzMHVuUwp0ZEk3ZDVTUm1OSUpWK0l1Mk9IRGxkYXN4TzJmcVFoTnVxSFRiVStvNW42ZCtUUlpXVTdpN0drPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
    token: JSUzI1NiIsImtpZCI6Ikg5dThGMmc0c1ZBOTVkajVjMGRlb2poZjJMaExDSFp1T1NJWTdobkYtWmsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTRsYzkyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiNjc2MGRkZi1kN2FhLTRlZjctYWZkOS05YzA0ZThlMWE5NTQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.XCA6-Wo7q8tJY8td1PRGkruvuNOmtHenjzyToRq5fJjGmWjdLspMDRvDul7YjMeY5eNuhcMG1cJgnyTZZW4gypIiVK1cAtvNR-U4oS0Vv8PqknZdc5-U1ftjIUeayH33tPCAgj-rui31CTwg26s0Z0B312XHF6tLOZZYxkavd1zYVt7DJaJcJpVsC1yaagoLBTjrfpV42N2s49QxnXMaQwYJGy2vowbLcxekdOV2h-7Hv63DxqBRoFYNx_DawN2m3JFfIyQMP7lwENXvNK76wnY2boO8asbIS92V4poLnc9v0r4gtV80dFp3558_XYBWhnZq-_klFHsfxJ0Opt_iEA
# Export the kubeconfig
cp /root/.kube/config /data/dashboard/k8s-dashboard.kubeconfig
sz k8s-dashboard.kubeconfig
Kuboard is a free Kubernetes management tool with a rich feature set. Combined with your existing (or new) code repositories, image registries, and CI/CD tooling, it makes it easy to build a production-ready Kubernetes container platform and run cloud-native applications. You can also install Kuboard into an existing cluster and use its Kubernetes RBAC management UI to expose cluster capabilities to your development team.
Basic Kubernetes management:
- Node management
- Namespace management
- StorageClass / volume management
- Workload (Deployment/StatefulSet/DaemonSet/CronJob/Job/ReplicaSet) management
- Service/Ingress management
- ConfigMap/Secret management
- CustomResourceDefinition management
Kubernetes troubleshooting:
- Top Nodes / Top Pods
- Event list and notifications
- Container logs and terminal
- KuboardProxy (an in-browser kubectl proxy)
- PortForward (a shortcut for kubectl port-forward)
- File copy (an in-browser kubectl cp)
Authentication and authorization:
- GitHub/GitLab single sign-on
- KeyCloak authentication
- LDAP authentication
- Full RBAC permission management
Kuboard extras:
- Grafana + Prometheus resource monitoring
- Grafana + Loki + Promtail log aggregation
- The official Kuboard addon suite
- Custom namespace layouts
- Chinese and English language packs
Kuboard docs: https://kuboard.cn/install/v3/install-in-k8s.html#%E5%AE%89%E8%A3%85 and https://kuboard.cn/install/v3/install.html
The install command they provide (supports 1.27):
kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
Troubleshooting: pods never become ready because the cluster has no node with the master role.
Situations where the master role may be missing:
- In managed Kubernetes clusters (Alibaba Cloud, Tencent Cloud, and others), kubectl get nodes does not show the master nodes;
- In clusters installed from binaries, the master role may be absent, or it was removed from the master node.
If no node carries the master role, you can instead label one or three worker nodes to host the kuboard-etcd instances:
kubectl label nodes your-node-name k8s.kuboard.cn/role=etcd
Open http://172.23.70.235:30080 in a browser and log in with the initial username and password.
After deployment, the page on port 30080 shows the following command; run it:
curl -k 'http://172.23.70.235:30080/kuboard-api/cluster/default/kind/KubernetesCluster/default/resource/installAgentToKubernetes?token=VJr7EYvO0Dvh7eoB8JlYcN7S0GQhnPZE' > kuboard-agent.yaml
kubectl apply -f ./kuboard-agent.yaml
The cluster information then appears in the UI.
At this point the cluster deployment is complete.