Note: to create an NFS StorageClass, skip ahead to Kubernetes–StorageClass (Part 2) — the NFS project in the repository used in this article has a bug that cannot be worked around on newer Kubernetes versions. Try it yourself if you don't believe me.

In an earlier article, Kubernetes Data Management, I covered volumes and the use of PVs and PVCs. That approach is also called static PV provisioning, because a PVC depends on a PV already existing: you must create the PV by hand first, then create a PVC that matches it in order to obtain storage.

For deployments with many pods, the approach above carries a heavy maintenance burden and is unfriendly to operators. So Kubernetes supports dynamic PV provisioning via StorageClass: once a StorageClass is in place, you no longer create PVs individually — you only create a PVC, and a matching PV is created automatically to satisfy it. This is known as dynamic PV provisioning.

Creating a StorageClass backed by NFS

This section uses NFS as the example backend for the StorageClass.

Set up NFS

yum -y install nfs-utils rpcbind

Create the shared directory

mkdir /nfsdata

Add the directory to the export list

$ vim /etc/exports
/nfsdata *(rw,no_root_squash,sync)
$ exportfs -r
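For reference, the export options used above do the following (summarized from the exports(5) man page; the note about root access reflects the fact that the provisioner container typically runs as root):

```
# /etc/exports format: <directory> <client>(<options>)
#   rw              allow both reads and writes
#   sync            reply to requests only after changes are committed to disk
#   no_root_squash  do not map client root to an anonymous user; the provisioner
#                   pod needs this to create per-PV directories in the share
/nfsdata *(rw,no_root_squash,sync)
```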

Start NFS

systemctl enable --now rpcbind nfs-server

Verify the export

$ showmount -e 192.168.1.12
Export list for 192.168.1.12:
/nfsdata *
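Optionally, before involving Kubernetes at all, you can mount the export by hand from a worker node to confirm read-write access (a quick sanity check; /mnt is just a scratch mount point here):

```shell
# one-off sanity check from a node that will run pods (requires nfs-utils)
mount -t nfs 192.168.1.12:/nfsdata /mnt
touch /mnt/.rw-test && rm /mnt/.rw-test   # verify the client can write
umount /mnt
```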

Deploy the external storage provisioner

For background on provisioners, see the official documentation.

Kubernetes has built-in provisioners for some external storage backends, which need no separate deployment; others, such as NFS, require deploying one yourself.

# note: this repository has since been archived; the nfs-client provisioner's
# successor lives at https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
git clone https://github.com/kubernetes-incubator/external-storage
cd external-storage/nfs-client/deploy/

Edit the deployment, filling in the NFS server IP and share path

$ vim deployment.yaml
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent   # add this line; the default is Always
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.1.12   # NFS server IP
            - name: NFS_PATH
              value: /nfsdata  # share path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.12   # same server IP as above
            path: /nfsdata  # same share path as above

Configure the StorageClass and its behavior on PVC deletion

$ vim class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, but it must match the deployment's PROVISIONER_NAME env value
parameters:
  archiveOnDelete: "true"   # whether data is archived when the PVC is deleted: "false" discards it, "true" keeps it
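As a side note, if you want PVCs that omit storageClassName to be served by this class automatically, you can mark it as the cluster default with the standard annotation (optional; not required for this article's example):

```yaml
# optional: make managed-nfs-storage the cluster's default StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
```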

Deploy the nfs-client

kubectl apply -f rbac.yaml        # grant the provisioner access to the apiserver
kubectl apply -f deployment.yaml  # deploy the NFS provisioner
kubectl apply -f class.yaml       # create the StorageClass

View the StorageClass

$ kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  10s

Using the StorageClass

Download my own nginx example

wget https://www.feiyiblog.com/files/practise/nginx.yaml

Its content is as follows

---
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-cm
  namespace: nginx
data:
  index.html: |
    <html>
      <body>
        <p>&nbsp;</p>
        <p>&nbsp;</p>
        <p>&nbsp;</p>
        <p>&nbsp;</p>
        <p>&nbsp;</p>
        <p>&nbsp;</p>
        <h1 style="text-align:center;color:#1E90FF">Welcome to FeiYi's&copy; Blog</h1>
      </body>
    </html>
  healthz.html: |
    <html>
      <body>
        <p>&nbsp;</p>
        <p>&nbsp;</p>
        <p>&nbsp;</p>
        <p>&nbsp;</p>
        <p>&nbsp;</p>
        <p>&nbsp;</p>
        <h1 style="text-align:center;color:#1E90FF">Healthz Check</h1>
      </body>
    </html>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:
      - key: "node-role.kubernetes.io/master"
        effect: "NoSchedule"
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /healthz
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/index.html
          readOnly: True
          subPath: index.html
        - name: html
          mountPath: /usr/share/nginx/html/healthz/index.html
          readOnly: True
          subPath: healthz.html
      volumes:
      - name: html
        configMap:
          name: nginx-cm
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
      nodePort: 30000

Now we want the pods to request and use storage without a PV having been created first.

# add the PVC mount to the Deployment section
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/index.html
          readOnly: True
          subPath: index.html
        - name: html
          mountPath: /usr/share/nginx/html/healthz/index.html
          readOnly: True
          subPath: healthz.html
        # add the following
        - name: auto-pv
          mountPath: /usr/share/nginx/html/test/
          readOnly: True
      volumes:
      - name: html
        configMap:
          name: nginx-cm
      # add the following
      - name: auto-pv
        persistentVolumeClaim:
          claimName: auto-pv

Apply the manifest

kubectl apply -f nginx.yaml

Check the result: the pods are stuck in Pending

$ kubectl get pods -n nginx 
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-c94f957b8-5kcnr   0/1     Pending   0          4s
nginx-deployment-c94f957b8-kwglv   0/1     Pending   0          4s

They cannot be scheduled because the PVC they reference does not exist yet.

Create the PVC

$ vim nginx-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auto-pv
  namespace: nginx
spec:
  accessModes:
    - ReadWriteMany
  # the name of the StorageClass created earlier (see kubectl get sc)
  storageClassName: "managed-nfs-storage"
  resources:
    requests:
      storage: 2Gi

Apply the manifest

kubectl apply -f nginx-pvc.yaml

Check the PVC's status

$ kubectl get pvc -n nginx 
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS          AGE
auto-pv   Pending                                      managed-nfs-storage   12s

Troubleshooting

The PVC is also stuck in Pending. kubectl describe shows only waiting for a volume to be created, either by external provisioner "fuseim.pri/ifs" or manually created by system administrator, which gives no concrete cause — it simply will not provision.

Since the StorageClass is backed by the nfs-client provisioner, check that pod's logs first

$ kubectl logs nfs-client-provisioner-65f9b778c4-w9crr 
I0607 14:45:09.633044       1 leaderelection.go:185] attempting to acquire leader lease  default/fuseim.pri-ifs...
E0607 14:45:09.640224       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"fuseim.pri-ifs", GenerateName:"", Namespace:"default", SelfLink:"", UID:"c85d0b6a-3692-4973-95cb-c287be09f54c", ResourceVersion:"62574", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63758673909, loc:(*time.Location)(0x1956800)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"nfs-client-provisioner-65f9b778c4-w9crr_f521c6b4-c79e-11eb-829b-c6baccbd563e\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-06-07T14:45:09Z\",\"renewTime\":\"2021-06-07T14:45:09Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'nfs-client-provisioner-65f9b778c4-w9crr_f521c6b4-c79e-11eb-829b-c6baccbd563e became leader'
I0607 14:45:09.640271       1 leaderelection.go:194] successfully acquired lease default/fuseim.pri-ifs
I0607 14:45:09.640302       1 controller.go:631] Starting provisioner controller fuseim.pri/ifs_nfs-client-provisioner-65f9b778c4-w9crr_f521c6b4-c79e-11eb-829b-c6baccbd563e!
I0607 14:45:09.741176       1 controller.go:680] Started provisioner controller fuseim.pri/ifs_nfs-client-provisioner-65f9b778c4-w9crr_f521c6b4-c79e-11eb-829b-c6baccbd563e!
I0607 15:01:34.198884       1 controller.go:987] provision "nginx/auto-pv" class "managed-nfs-storage": started
E0607 15:01:34.202296       1 controller.go:1004] provision "nginx/auto-pv" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference

The last line shows unexpected error getting claim reference: selfLink was empty, can't make reference — selfLink is empty. After some digging: since v1.20 the RemoveSelfLink feature gate defaults to true, which strips selfLink and breaks this provisioner's PV/PVC binding. I'm running 1.21.1 and still need the workaround below. Note that the gate was removed entirely in v1.24, so this workaround only applies to older clusters; on newer ones, use the updated provisioner mentioned at the top of this article instead.

I never fully worked out what selfLink was for — roughly, it was a legacy read-only field holding an object's own API URL, which this old provisioner relies on to build references — but in any case the workaround below fixes the problem.
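For the curious, here is what the PVC's selfLink would have looked like, constructed by hand as a sketch (the path layout for a namespaced core-group resource is a Kubernetes API convention):

```shell
ns=nginx
name=auto-pv
# selfLink layout: /api/v1/namespaces/<namespace>/<resource-plural>/<name>
selfLink="/api/v1/namespaces/${ns}/persistentvolumeclaims/${name}"
echo "$selfLink"   # → /api/v1/namespaces/nginx/persistentvolumeclaims/auto-pv
```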

Fix the problem by changing the apiserver's startup flags

$ vim /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.1.12
    - --allow-privileged=true
...
    - --feature-gates=RemoveSelfLink=false  # add this flag

Because kube-apiserver runs as a static pod, the kubelet restarts it automatically once the manifest changes; after about 30 seconds the PVC binds successfully

$ kubectl get pvc -n nginx
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
auto-pv   Bound    pvc-7ec7b0a5-b388-4550-987a-30838ecc2220   2Gi        RWX            managed-nfs-storage   15m
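Behind the scenes the provisioner created a matching PV. Running kubectl get pv -o yaml would show something like the following (an abridged sketch using the names from the output above, not a verbatim dump):

```yaml
# sketch of the dynamically created PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-7ec7b0a5-b388-4550-987a-30838ecc2220
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: managed-nfs-storage
  claimRef:                 # binds this PV to the PVC that requested it
    namespace: nginx
    name: auto-pv
  nfs:
    server: 192.168.1.12
    path: /nfsdata/nginx-auto-pv-pvc-7ec7b0a5-b388-4550-987a-30838ecc2220
```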

At this point the NFS share contains a directory whose name includes the PVC's name

$ ls /nfsdata/
nginx-auto-pv-pvc-7ec7b0a5-b388-4550-987a-30838ecc2220

This directory backs the /usr/share/nginx/html/test mount inside the pods.
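The nfs-client provisioner derives that directory name as ${namespace}-${pvcName}-${pvName}; and with archiveOnDelete set to "true", deleting the PVC renames the directory with an archived- prefix rather than deleting it. A quick sketch using this article's names:

```shell
namespace=nginx
pvcName=auto-pv
pvName=pvc-7ec7b0a5-b388-4550-987a-30838ecc2220

# directory created in the share while the PVC exists
dir="${namespace}-${pvcName}-${pvName}"
echo "$dir"            # → nginx-auto-pv-pvc-7ec7b0a5-b388-4550-987a-30838ecc2220
# name after the PVC is deleted, when archiveOnDelete is "true"
echo "archived-$dir"
```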

Check that the pods are now running

$ kubectl get pods -n nginx 
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-c94f957b8-5kcnr   1/1     Running   0          21m
nginx-deployment-c94f957b8-kwglv   1/1     Running   0          21m

Verify the mount

In the experiment above, the NFS share was mounted at /usr/share/nginx/html/test

Create a file in the shared directory

echo StorageClass_NFS_TEST > /nfsdata/nginx-auto-pv-pvc-7ec7b0a5-b388-4550-987a-30838ecc2220/index.html

Check it from inside the container

$ kubectl exec -it -n nginx nginx-deployment-c94f957b8-5kcnr -- cat  /usr/share/nginx/html/test/index.html
StorageClass_NFS_TEST

Finally, access it over HTTP; the address is the NodePort Service defined in my nginx example above

$ curl 192.168.1.12:30000/test/
StorageClass_NFS_TEST
