"Scale Up or Down" in the title refers to scaling. Scaling a Deployment mainly means scaling its pods, to achieve high availability and spread the load.
Run a few nginx instances from YAML
Write the manifest
[root@node1 ~]# vim nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
        ports:
        - containerPort: 80
Apply nginx-deployment
[root@node1 ~]# kubectl apply -f nginx.yml
deployment.apps/nginx-deployment created
Check the running pods
[root@node1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deployment-5f8c6846ff-446sl 1/1 Running 0 9m46s 10.244.2.2 node2
nginx-deployment-5f8c6846ff-4qhvm 1/1 Running 0 9m46s 10.244.1.2 node3
Scaling the pods
This is actually very simple: just change the value of the replicas field in the YAML file.
[root@node1 ~]# vim nginx.yml
# change replicas
replicas: 5
Grow the pods from 2 to 5
[root@node1 ~]# kubectl apply -f nginx.yml
deployment.apps/nginx-deployment configured
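Editing replicas in the file and re-applying is the declarative approach. As a sketch, the same scaling can also be done imperatively with kubectl scale; note that this bypasses nginx.yml, so the live object and the file can drift apart until the file is updated too:

```shell
# Scale the Deployment to 5 replicas without editing nginx.yml.
kubectl scale deployment nginx-deployment --replicas=5

# Verify the desired/ready replica counts.
kubectl get deployment nginx-deployment
```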
Check again
[root@node1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5f8c6846ff-446sl 1/1 Running 0 13m 10.244.2.2 node2
nginx-deployment-5f8c6846ff-4qhvm 1/1 Running 0 13m 10.244.1.2 node3
nginx-deployment-5f8c6846ff-5srk7 1/1 Running 0 27s 10.244.1.3 node3
nginx-deployment-5f8c6846ff-h5cgv 1/1 Running 0 27s 10.244.2.3 node2
nginx-deployment-5f8c6846ff-stgdn 1/1 Running 0 27s 10.244.2.4 node2
That's it; the scaling is complete. The existing 2 pods are not stopped; new pods are simply added alongside them until the total reaches 5.
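Under the hood, the Deployment manages a ReplicaSet that keeps the pod count at the desired number; this reconciliation can be observed directly (a sketch, using a placeholder for the pod name):

```shell
# The ReplicaSet (named after the Deployment plus a hash) shows DESIRED/CURRENT/READY = 5.
kubectl get rs -l app=nginx

# Deleting one pod demonstrates self-healing: the ReplicaSet creates a replacement
# to restore the count. Substitute a real pod name for the placeholder.
kubectl delete pod <one-of-the-pod-names>
kubectl get pod -o wide
```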
Letting the control-plane node take part in scheduling
Notice that all the pods are running on node2 and node3. For the safety of the control-plane (master) node, Kubernetes does not schedule pods onto it by default.
To let the control-plane node take part in scheduling, run kubectl taint node <control-plane hostname> node-role.kubernetes.io/master-
To restore it afterwards, run kubectl taint node <control-plane hostname> node-role.kubernetes.io/master="":NoSchedule
[root@node1 ~]# kubectl taint node node1 node-role.kubernetes.io/master-
node/node1 untainted
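To confirm the taint state before and after removing it, the node's taints can be inspected (a sketch; the exact output format varies by kubectl version):

```shell
# Show the Taints field of node1; after removal it should print "Taints: <none>".
kubectl describe node node1 | grep -i taint

# Alternatively, query the taint list directly via jsonpath.
kubectl get node node1 -o jsonpath='{.spec.taints}'
```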
Scale again and check whether pods run on the control-plane node
[root@node1 ~]# vim nginx.yml
# change replicas
replicas: 9
Apply the scaling change
[root@node1 ~]# kubectl apply -f nginx.yml
deployment.apps/nginx-deployment configured
Check which nodes the pods run on
From my testing, even though the control-plane node can now take part in scheduling, the scheduler still tends to avoid it while the pod count is small. Only after I raised the count to 15 did 2 pods land on the control-plane node, and only at 27 pods did it receive noticeably more.
[root@node1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deployment-5f8c6846ff-4qhvm 1/1 Running 0 81m 10.244.1.2 node3
nginx-deployment-5f8c6846ff-5kk2c 1/1 Running 0 2s 10.244.0.10 node1
nginx-deployment-5f8c6846ff-8s5mk 1/1 Running 0 2s 10.244.2.37 node2
nginx-deployment-5f8c6846ff-b9rdj 1/1 Running 0 19s 10.244.2.34 node2
nginx-deployment-5f8c6846ff-c8dv8 1/1 Running 0 19s 10.244.1.21 node3
nginx-deployment-5f8c6846ff-fk994 1/1 Running 0 2s 10.244.2.36 node2
nginx-deployment-5f8c6846ff-g2sqz 1/1 Running 0 19s 10.244.2.33 node2
nginx-deployment-5f8c6846ff-hks7l 1/1 Running 0 19s 10.244.1.22 node3
nginx-deployment-5f8c6846ff-lhxl4 1/1 Running 0 19s 10.244.2.35 node2
nginx-deployment-5f8c6846ff-lzct7 1/1 Running 0 2s 10.244.0.11 node1
nginx-deployment-5f8c6846ff-m5vs9 1/1 Running 0 19s 10.244.1.23 node3
nginx-deployment-5f8c6846ff-nwbwk 1/1 Running 0 19s 10.244.2.32 node2
nginx-deployment-5f8c6846ff-rcjlg 1/1 Running 0 19s 10.244.2.31 node2
nginx-deployment-5f8c6846ff-sfq2d 1/1 Running 0 2s 10.244.1.24 node3
nginx-deployment-5f8c6846ff-tzh8s 1/1 Running 0 2s 10.244.1.25 node3
Restoring the control-plane node's taint
Set the effect to NoSchedule so that the scheduler no longer places pods there.
[root@node1 ~]# kubectl taint node node1 node-role.kubernetes.io/master="":NoSchedule
node/node1 tainted
Scale the pods down to 3 and check again: it turns out the pods on node1 are still running
[root@node1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deployment-5f8c6846ff-4qhvm 1/1 Running 0 88m 10.244.1.2 node3
nginx-deployment-5f8c6846ff-5kk2c 1/1 Running 0 7m35s 10.244.0.10 node1
nginx-deployment-5f8c6846ff-lzct7 1/1 Running 0 7m35s 10.244.0.11 node1
I thought it was just slow to react, but repeated attempts gave the same result. Finally I scaled down to 1 pod
[root@node1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deployment-5f8c6846ff-4qhvm 1/1 Running 0 88m 10.244.1.2 node3
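This behavior matches the semantics of NoSchedule: it only stops the scheduler from placing new pods on the node, and pods already running there are left alone. If existing pods should also be removed, the NoExecute effect can be used instead (a sketch, assuming evicting the node1 pods is actually desired):

```shell
# NoExecute both blocks new scheduling and evicts pods already on the node
# (pods without a matching toleration are removed).
kubectl taint node node1 node-role.kubernetes.io/master="":NoExecute

# Afterwards, remove it and revert to the plain NoSchedule taint
# if only future scheduling should be blocked.
kubectl taint node node1 node-role.kubernetes.io/master:NoExecute-
kubectl taint node node1 node-role.kubernetes.io/master="":NoSchedule
```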
Then I scaled back up to multiple pods; the new pods land only on node2 and node3
[root@node1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deployment-5f8c6846ff-4qhvm 1/1 Running 0 90m 10.244.1.2 node3
nginx-deployment-5f8c6846ff-5j87g 1/1 Running 0 3s 10.244.1.26 node3
nginx-deployment-5f8c6846ff-6kd6h 1/1 Running 0 3s 10.244.2.38 node2
That covers simple pod scaling in Kubernetes.