kubernetes dashboard

Revisiting the k8s dashboard installation from scratch.
GitHub repo: https://github.com/kubernetes/dashboard

Deploy the dashboard v2.1.0 manifest straight from the online URL:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
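
A quick sanity check after applying: the v2.x manifest creates two deployments, kubernetes-dashboard and dashboard-metrics-scraper, and both pods should reach Running:

kubectl get pods -n kubernetes-dashboard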

The Service defaults to type ClusterIP, which is only reachable from inside the cluster, so we need to cross the cluster boundary to let external clients in.
Options include Ingress, NodePort, an external LoadBalancer, pod hostPort forwarding, and pod hostNetwork; here we take the simplest route, NodePort.

Edit the Service section of recommended.yaml:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - nodePort: 30443
      port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
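
After editing, apply the manifest and confirm the Service exposes the NodePort:

kubectl apply -f recommended.yaml
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard   # expect TYPE NodePort, PORT(S) 443:30443/TCP

The UI is then reachable at https://<any-node-ip>:30443; the dashboard serves a self-signed certificate by default, so expect a browser warning.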

Alternatively, change svc/kubernetes-dashboard directly once it is running:

kubectl edit svc/kubernetes-dashboard -n kubernetes-dashboard  # change spec.type to NodePort
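
If you prefer a non-interactive change, a one-line patch does the same; note that Kubernetes then auto-assigns a random nodePort unless you also set one explicitly:

kubectl patch svc/kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'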

Create a dedicated namespace if you want one:

root@k8s-m:/data/dashboard# kubectl create namespace admin-ns --dry-run=client -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: admin-ns
spec: {}
status: {}
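
The --dry-run=client -o yaml flags only print the object; drop them to actually create the namespace:

kubectl create namespace admin-ns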

Create a ServiceAccount for logging in to the dashboard:

root@k8s-m:/data/dashboard# kubectl create sa superadmin -n admin-ns
serviceaccount/superadmin created
root@k8s-m:/data/dashboard# kubectl create sa superadmin -n admin-ns --dry-run=client -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: superadmin
  namespace: admin-ns

Create a ClusterRoleBinding that binds the built-in cluster administrator role (--clusterrole=cluster-admin) to the ServiceAccount we just created:

root@k8s-m:/data/dashboard# kubectl create clusterrolebinding superadmin --clusterrole=cluster-admin --serviceaccount=admin-ns:superadmin --dry-run=client -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: superadmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: superadmin
  namespace: admin-ns
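
As above, the dry-run only previews the object; drop the flags to actually create the binding:

kubectl create clusterrolebinding superadmin --clusterrole=cluster-admin --serviceaccount=admin-ns:superadmin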

Get the token and log in:

root@k8s-m:/data/dashboard# kubectl describe sa/superadmin -n admin-ns
Name:                superadmin
Namespace:           admin-ns
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   superadmin-token-gtrj5
Tokens:              superadmin-token-gtrj5
Events:              <none>
root@k8s-m:/data/dashboard# kubectl describe secret/superadmin-token-gtrj5 -n admin-ns
Name:         superadmin-token-gtrj5
Namespace:    admin-ns
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: superadmin
              kubernetes.io/service-account.uid: 339d914d-3ae8-440a-a590-4a304400ef17

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InZk--xxx # the token to paste into the login page
ca.crt:     1066 bytes
namespace:  8 bytes
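
To pull the token out in one go instead of eyeballing describe output, a small sketch, assuming the cluster still auto-creates the ServiceAccount token secret (this auto-creation was removed in Kubernetes 1.24; on newer clusters use kubectl create token superadmin -n admin-ns instead):

SECRET=$(kubectl -n admin-ns get sa superadmin -o jsonpath='{.secrets[0].name}')
kubectl -n admin-ns get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d

Paste the decoded token into the dashboard login page.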

The UI ships with CPU and memory charts, but the data comes from Metrics Server, so you must deploy Metrics Server before they render. If you plan to deploy kube-prometheus later, you do not need a standalone Metrics Server, because kube-prometheus already bundles that functionality.

metrics-server GitHub repo: https://github.com/kubernetes-sigs/metrics-server

Deploy from the online manifest:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
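
Watch the rollout to see whether the pod stabilizes (the k8s-app=metrics-server label comes from the upstream manifest):

kubectl get pods -n kube-system -l k8s-app=metrics-server -w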

The pod kept restarting endlessly, with events blaming the readiness and liveness probes. Swapping the image and adding two arguments fixed it for me.
The root cause is really HTTPS/TLS verification against the kubelets.

spec:
  hostNetwork: true
  serviceAccountName: metrics-server
  containers:
  - name: metrics-server
    image: bitnami/metrics-server:0.4.1 # swapped-in mirror; the official image is commented out below
    #image: k8s.gcr.io/metrics-server/metrics-server:v0.4.1
    imagePullPolicy: IfNotPresent
    args:
    - --cert-dir=/tmp
    - --secure-port=4443
    - --kubelet-insecure-tls # skip TLS verification of the kubelets' serving certs
    - --kubelet-use-node-status-port
    - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname # address types to try when contacting kubelets
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /livez
        port: https
        scheme: HTTPS
      periodSeconds: 10
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /readyz
        port: https
        scheme: HTTPS
      periodSeconds: 10
    ports:
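
Once the pod reports Ready, verify that the metrics API actually answers:

kubectl top nodes
kubectl top pods -n kube-system

If these return numbers, the dashboard's CPU/memory charts will populate shortly afterwards.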

Summary

  • The dashboard only provides a web GUI; it does not do authentication or authorization itself, it merely presents the account you give it to the Kubernetes API for authentication
  • The dashboard runs in a pod, so the account the pod uses to authenticate against the Kubernetes API must be a ServiceAccount; it cannot be a human User account
  • A ClusterRoleBinding can only bind a ClusterRole; a RoleBinding can bind either a Role or a ClusterRole. Rule of thumb: a small (namespaced) binding of a big role scopes its permissions down to that namespace, as sketched below
  • Metrics Server collects CPU and memory usage from pods; the kube-prometheus project bundles this functionality, so deploy it standalone only if you need it
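
To make that "small binding of a big role" rule concrete, a minimal sketch: a namespaced RoleBinding that references the built-in view ClusterRole grants read access only inside admin-ns (the binding name view-only is illustrative, not from the post):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-only        # hypothetical name, for illustration
  namespace: admin-ns    # the binding's namespace caps the grant
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole      # a cluster-wide role...
  name: view
subjects:
- kind: ServiceAccount
  name: superadmin
  namespace: admin-ns    # ...granted only within admin-ns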