K8s Persistent Storage - NFS


It seems the original NFS Provisioner is no longer maintained; the NFS Subdir External Provisioner can be used as its replacement. This post records the setup.

K8s Persistent Storage

Volumes

The two main use cases for volumes are data persistence and data sharing:

  • Data persistence: if a container's data is not persisted, everything written before a crash is lost when the container is rebuilt; with a volume, the rebuilt container can still use the data from before the crash
  • Data sharing: when an application runs multiple replicas, the replicas may need to share the same storage files

Volume Types

Kubernetes currently supports more than 20 volume types; see the official Kubernetes Volumes documentation for the full list. The most commonly used ones are listed below, followed by an example Pod manifest:

  • emptyDir
    • Data is not shared between Pods, but containers within the same Pod can share it
    • If a container in the Pod crashes, the kubelet does not delete the Pod but restarts the container, so the data still exists after the restart
    • When the Pod is deleted and recreated, the emptyDir data is permanently deleted
  • hostPath
    • Mounts a file or directory from the node's filesystem into the Pod's containers
    • In Linux everything is a file, so a Socket, CharDevice or BlockDevice can also be mounted
  • configMap
    • Mounted into containers as a volume
    • Each key maps to a file name, and its value to the file content
    • If the mount point specifies a subPath inside the volume, the mounted content does not change after the ConfigMap is updated
  • secret
    • Mounted into containers as a volume
    • If the mount point specifies a subPath inside the volume, the mounted content does not change after the Secret is updated
  • persistentVolumeClaim
    • Used to mount a PersistentVolume; a PersistentVolume is a piece of storage in the cluster, managed by the cluster administrator or provisioned automatically by a StorageClass
  • nfs
    • Data can be shared between Pods and between containers within a Pod
    • Multiple Pods can read and write the data simultaneously
  • cephfs
    • Data can be shared between Pods and between containers within a Pod
    • Multiple Pods can read and write the data simultaneously

PersistentVolume

A PersistentVolume is a piece of storage in the cluster, managed by the cluster administrator or provisioned automatically by a StorageClass. A PersistentVolumeClaim declares a storage request and represents the storage a user needs.

PersistentVolume Provisioning

  • Static: an administrator must create PVs manually before they can be bound and used (see the sketch after this list)
  • Dynamic: a PVC is associated with a StorageClass, and PVs are created automatically according to the PVC's request
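
A minimal sketch of the static case, reusing the NFS export configured later in this post; the names pv-nfs-static and pvc-nfs-static are illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-static
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.174.135
    path: /nfsdata
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-static
spec:
  ## Empty storageClassName: bind only to manually created PVs with no class
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi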

PersistentVolume Reclaiming

  • Retain
  • Delete
  • Recycle (deprecated; a PV's reclaim policy can also be changed afterwards, see the patch example below)
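
As a small hedged example: a dynamically provisioned PV defaults to the Delete policy, but it can be switched to Retain with kubectl patch. The PV name below is the one provisioned later in this post, used purely for illustration.

# Switch an existing PV from Delete to Retain
kubectl patch pv pvc-918b16b3-d233-40ba-acd7-66fe14c9f748 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'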

PersistentVolume Access Modes

  • ReadWriteOnce: can be mounted read-write by a single node
  • ReadOnlyMany: can be mounted read-only by many nodes
  • ReadWriteMany: can be mounted read-write by many nodes

PersistentVolume Status

  • Available: the PV is free and not yet bound to a PVC
  • Bound: the PV is bound to a PVC
  • Released: the PVC has been deleted, but the resource has not yet been reclaimed by the cluster
  • Failed: automatic reclamation failed

NFS Server Deployment

# Install and configure the NFS server
[root@node-nfs ~]# yum install -y rpcbind nfs-utils
[root@node-nfs ~]# mkdir /nfsdata
[root@node-nfs ~]# cat >/etc/exports<<'EOF'
/nfsdata 192.168.174.0/24(rw,sync,insecure,no_root_squash)
EOF
[root@node-nfs ~]# systemctl enable --now rpcbind
[root@node-nfs ~]# systemctl enable --now nfs-server
[root@node-nfs ~]# exportfs
/nfsdata      	192.168.174.0/24

# Install and configure the NFS client (on the K8s nodes)
[root@node1 nfs-subpath]# yum install -y nfs-utils
[root@node1 nfs-subpath]# showmount -e 192.168.174.135
Export list for 192.168.174.135:
/nfsdata 192.168.174.0/24
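
Before wiring the export into Kubernetes, it is worth a quick manual check that it is mountable and writable from a node; a minimal sketch (the /mnt mount point and the test file name are arbitrary):

# Mount the export on a client node, write a test file, then unmount
mount -t nfs 192.168.174.135:/nfsdata /mnt
touch /mnt/nfs-write-test && ls -l /mnt/nfs-write-test
umount /mnt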

NFS Subdir External Provisioner Deployment

Create the RBAC authorization

[root@node1 nfs-subpath]# cat >rbac.yaml<<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF

[root@node1 nfs-subpath]# kubectl apply -f rbac.yaml 
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

Create the nfs-client-provisioner

[root@node1 nfs-subpath]# cat >nfs-provisisoner-deploy.yaml<<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: kube-system
  labels:
    app: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    ## Update strategy: delete the old pod before creating the new one (the default is rolling update)
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        #image: gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0
        image: swr.cn-east-2.myhuaweicloud.com/kuboard-dependency/nfs-subdir-external-provisioner:v4.0.2
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        ## Provisioner name; the StorageClass created later must use the same value
        - name: PROVISIONER_NAME
          value: nfs-client
        ## NFS server address; must match the one configured in the volumes section
        - name: NFS_SERVER
          value: 192.168.174.135
        ## NFS server data directory; must match the path configured in the volumes section
        - name: NFS_PATH
          value: /nfsdata
        - name: ENABLE_LEADER_ELECTION
          value: "true"
      volumes:
      - name: nfs-client-root
        nfs:
          ## NFS server address
          server: 192.168.174.135
          ## NFS server data directory
          path: /nfsdata
EOF

[root@node1 nfs-subpath]# kubectl get -f nfs-provisisoner-deploy.yaml -n kube-system
# Since this is a single-master cluster, the pod was initially not scheduled
[root@node1 nfs-subpath]# kubectl describe pod nfs-client-provisioner-75d98ccc6b-lsgtw -n kube-system
......
Events:
  Type     Reason            Age              From               Message
  ----     ------            ----             ----               -------
  Warning  FailedScheduling  4m24s            default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
  Warning  FailedScheduling  4m24s            default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.

......

# Remove the master taint so workloads can be scheduled on the single node
[root@node1 nfs-subpath]# kubectl taint nodes --all node-role.kubernetes.io/master-
[root@node1 nfs-subpath]# kubectl get -f nfs-provisisoner-deploy.yaml -n kube-system
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           30m
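
As an optional sanity check, you can confirm that the provisioner pod actually sees the NFS export at its /persistentvolumes mount point (a hedged example; exec'ing against a deploy/ target requires a reasonably recent kubectl):

# List the NFS export from inside the provisioner pod
kubectl exec -n kube-system deploy/nfs-client-provisioner -- ls /persistentvolumes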

Create the NFS StorageClass

[root@node1 nfs-subpath]# cat >nfs-storageclass.yaml<<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    ## Whether to make this the default StorageClass
    storageclass.kubernetes.io/is-default-class: "false"
## Name of the dynamic provisioner; must match the PROVISIONER_NAME set in the deployment above
provisioner: nfs-client
parameters:
  ## "false": data is not kept when the PVC is deleted; "true": data is archived and kept
  archiveOnDelete: "true"
mountOptions:
  ## Use a hard mount
  - hard
  ## NFS protocol version; set this according to the NFS server version
  - nfsvers=4
EOF

[root@node1 nfs-subpath]# kubectl apply -f nfs-storageclass.yaml 
storageclass.storage.k8s.io/nfs-storage created

[root@node1 nfs-subpath]# kubectl get storageclass
NAME          PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage   nfs-client    Delete          Immediate           false                  8s
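
The class is created with is-default-class set to "false". If you later want PVCs without an explicit storageClassName to use it, it can be promoted to the cluster default with a patch like the following (a hedged example, not part of the original setup):

kubectl patch storageclass nfs-storage \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'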

Testing and Verification

Create a test PVC

[root@node1 nfs-subpath]# cat >demo-pvc.yaml<<'EOF' 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
EOF

[root@node1 nfs-subpath]# kubectl apply -f demo-pvc.yaml 
persistentvolumeclaim/test-pvc created

Check the PVC and PV

[root@node1 nfs-subpath]# kubectl get pvc 
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    pvc-918b16b3-d233-40ba-acd7-66fe14c9f748   100Mi      RWO            nfs-storage    5s
[root@node1 nfs-subpath]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
pvc-918b16b3-d233-40ba-acd7-66fe14c9f748   100Mi      RWO            Delete           Bound    default/test-pvc   nfs-storage             13s
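
To confirm the claim is actually usable, a throwaway pod can mount it and write a file; the pod name pvc-writer and the test file are illustrative, not part of the original walkthrough:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-writer
spec:
  containers:
  - name: writer
    image: busybox:1.35
    command: ["sh", "-c", "echo hello-from-pvc > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    ## Mount the dynamically provisioned claim created above
    persistentVolumeClaim:
      claimName: test-pvc

If everything works, hello.txt should show up inside the corresponding default-test-pvc-... directory on the NFS server (see the directory listing further down).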

Check the nfs-client-provisioner logs

[root@node1 nfs-subpath]# docker ps -a | grep nfs
e6ad97d666cd        932b0bface75                             "/nfs-subdir-externa…"   13 minutes ago      Up 13 minutes                                      k8s_nfs-client-provisioner_nfs-client-provisioner-7d6bdc5c84-rpcpt_kube-system_183cb76e-cde7-4ef8-bbbc-1d8fe53303d9_0
78ab0c4ce3d0        registry.aliyuncs.com/k8sxio/pause:3.2   "/pause"                 13 minutes ago      Up 13 minutes                                      k8s_POD_nfs-client-provisioner-7d6bdc5c84-rpcpt_kube-system_183cb76e-cde7-4ef8-bbbc-1d8fe53303d9_0
[root@node1 nfs-subpath]# docker logs e6ad97d666cd
I0506 04:02:13.249191       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/nfs-client...
I0506 04:02:13.258480       1 leaderelection.go:252] successfully acquired lease kube-system/nfs-client
I0506 04:02:13.258627       1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"nfs-client", UID:"d99c998e-6ff1-4fb7-9d61-8e81d6c5216b", APIVersion:"v1", ResourceVersion:"24264", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' nfs-client-provisioner-7d6bdc5c84-rpcpt_f6837cb6-f8a9-4689-ad63-4af65358039f became leader
I0506 04:02:13.258964       1 controller.go:820] Starting provisioner controller nfs-client_nfs-client-provisioner-7d6bdc5c84-rpcpt_f6837cb6-f8a9-4689-ad63-4af65358039f!
I0506 04:02:13.359205       1 controller.go:869] Started provisioner controller nfs-client_nfs-client-provisioner-7d6bdc5c84-rpcpt_f6837cb6-f8a9-4689-ad63-4af65358039f!
I0506 04:15:12.930860       1 controller.go:1317] provision "default/test-pvc" class "nfs-storage": started
I0506 04:15:12.934887       1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-pvc", UID:"918b16b3-d233-40ba-acd7-66fe14c9f748", APIVersion:"v1", ResourceVersion:"26361", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/test-pvc"
I0506 04:15:12.940375       1 controller.go:1420] provision "default/test-pvc" class "nfs-storage": volume "pvc-918b16b3-d233-40ba-acd7-66fe14c9f748" provisioned
I0506 04:15:12.940549       1 controller.go:1437] provision "default/test-pvc" class "nfs-storage": succeeded
I0506 04:15:12.940575       1 volume_store.go:212] Trying to save persistentvolume "pvc-918b16b3-d233-40ba-acd7-66fe14c9f748"
I0506 04:15:12.955484       1 volume_store.go:219] persistentvolume "pvc-918b16b3-d233-40ba-acd7-66fe14c9f748" saved
I0506 04:15:12.956006       1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-pvc", UID:"918b16b3-d233-40ba-acd7-66fe14c9f748", APIVersion:"v1", ResourceVersion:"26361", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-918b16b3-d233-40ba-acd7-66fe14c9f748

Check the NFS server directory

On the NFS server you can see that, by default, each provisioned volume is placed in a directory named ${namespace}-${pvcName}-${pvName}:

[root@node-nfs ~]# ll /nfsdata/
total 0
drwx------. 6 1000 root 68 Aug  2 15:42 default-cassandra-data-cassandra-0-pvc-3d54879a-187c-4f6d-9116-5ecb39ef4804
drwx------. 6 1000 root 68 Aug  2 16:08 default-cassandra-data-cassandra-1-pvc-01f25b40-6084-4043-b6ae-db96b57f4fcf
drwx------. 6 1000 root 68 Aug  2 16:10 default-cassandra-data-cassandra-2-pvc-3d02ba22-1302-457c-a26d-c23f4751c576
drwxrwxrwx. 2 root root  6 Aug  6 12:15 default-test-pvc-pvc-918b16b3-d233-40ba-acd7-66fe14c9f748
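
The layout can be customized: v4.x of the provisioner also accepts a pathPattern parameter on the StorageClass, which templates the subdirectory name from PVC metadata. A hedged sketch; check the project README for the exact template fields supported by your version:

parameters:
  archiveOnDelete: "true"
  ## Place each volume under <namespace>/<pvc name> instead of the default naming
  pathPattern: "${.PVC.namespace}/${.PVC.name}"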

Create a StatefulSet to test PVC provisioning

[root@node1 nfs-subpath]# cat >statusful-nginx.yaml<<'EOF'
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        ## Declare the StorageClass name here
        volume.beta.kubernetes.io/storage-class: "nfs-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi
EOF
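
Note that the StatefulSet references serviceName: "nginx", which expects a headless Service of that name for stable per-pod DNS. The original post does not show one; a minimal sketch would be:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ## Headless service: no cluster IP, only per-pod DNS records
  clusterIP: None
  selector:
    app: nginx
  ports:
  - port: 80
    name: web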

[root@node1 nfs-subpath]# kubectl apply -f statusful-nginx.yaml 
statefulset.apps/web created
[root@node1 nfs-subpath]# kubectl get -f statusful-nginx.yaml 
NAME   READY   AGE
web    5/5     64s
[root@node1 nfs-subpath]# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc    Bound    pvc-918b16b3-d233-40ba-acd7-66fe14c9f748   100Mi      RWO            nfs-storage    77m
www-web-0   Bound    pvc-4cf0a4c9-244b-4c9d-a436-8a59510db08f   100Mi      RWO            nfs-storage    3m4s
www-web-1   Bound    pvc-cb096dc0-ee04-45d9-bbc2-96f81fe0810a   100Mi      RWO            nfs-storage    2m54s
www-web-2   Bound    pvc-f5b968f7-4c74-484a-8e98-7d077cd94933   100Mi      RWO            nfs-storage    2m43s
www-web-3   Bound    pvc-bfd0d983-09c5-4f7a-ab6d-e38c1cd509f5   100Mi      RWO            nfs-storage    2m29s
www-web-4   Bound    pvc-34069799-30d7-4a8d-8926-1c88afcd1581   100Mi      RWO            nfs-storage    2m17s
[root@node1 nfs-subpath]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-34069799-30d7-4a8d-8926-1c88afcd1581   100Mi      RWO            Delete           Bound    default/www-web-4   nfs-storage             2m34s
pvc-4cf0a4c9-244b-4c9d-a436-8a59510db08f   100Mi      RWO            Delete           Bound    default/www-web-0   nfs-storage             3m22s
pvc-918b16b3-d233-40ba-acd7-66fe14c9f748   100Mi      RWO            Delete           Bound    default/test-pvc    nfs-storage             77m
pvc-bfd0d983-09c5-4f7a-ab6d-e38c1cd509f5   100Mi      RWO            Delete           Bound    default/www-web-3   nfs-storage             2m47s
pvc-cb096dc0-ee04-45d9-bbc2-96f81fe0810a   100Mi      RWO            Delete           Bound    default/www-web-1   nfs-storage             3m12s
pvc-f5b968f7-4c74-484a-8e98-7d077cd94933   100Mi      RWO            Delete           Bound    default/www-web-2   nfs-storage             3m1s
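
Since each replica gets its own PVC, a simple hedged check (not part of the original transcript) is to write a distinct page into each pod's volume and read one back:

# Write a per-pod index page into each replica's own volume, then read one back
for i in 0 1 2 3 4; do
  kubectl exec web-$i -- sh -c "echo web-$i > /usr/share/nginx/html/index.html"
done
kubectl exec web-0 -- cat /usr/share/nginx/html/index.html   # should print web-0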

NFS Subdir External Provisioner Cluster Mode

Just make sure ENABLE_LEADER_ELECTION is set to "true", then increase the number of replicas. The Deployment manifest is the same as before:

[root@node1 nfs-subpath]# cat >nfs-provisisoner-deploy.yaml<<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: kube-system
  labels:
    app: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    ## Update strategy: delete the old pod before creating the new one (the default is rolling update)
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        #image: gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0
        image: swr.cn-east-2.myhuaweicloud.com/kuboard-dependency/nfs-subdir-external-provisioner:v4.0.2
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        ## Provisioner name; the StorageClass created earlier must use the same value
        - name: PROVISIONER_NAME
          value: nfs-client
        ## NFS server address; must match the one configured in the volumes section
        - name: NFS_SERVER
          value: 192.168.174.135
        ## NFS server data directory; must match the path configured in the volumes section
        - name: NFS_PATH
          value: /nfsdata
        ## Enable leader election so multiple replicas can run
        - name: ENABLE_LEADER_ELECTION
          value: "true"
      volumes:
      - name: nfs-client-root
        nfs:
          ## NFS server address
          server: 192.168.174.135
          ## NFS server data directory
          path: /nfsdata
EOF

# Scale out to 3 replicas
[root@node1 nfs-subpath]# kubectl scale  deploy nfs-client-provisioner -n kube-system  --replicas=3
deployment.apps/nfs-client-provisioner scaled

[root@node1 nfs-subpath]# kubectl get deploy nfs-client-provisioner -n kube-system
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   3/3     3            3           128m

[root@node1 nfs-subpath]# kubectl get pods -n kube-system -l app=nfs-client-provisioner
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7d6bdc5c84-68npl   1/1     Running   0          4m53s
nfs-client-provisioner-7d6bdc5c84-rpcpt   1/1     Running   0          102m
nfs-client-provisioner-7d6bdc5c84-v2rjv   1/1     Running   0          4m53s

Check the nfs-client-provisioner logs

# The two newly created pods are trying to acquire the leader lease
[root@node1 nfs-subpath]# kubectl logs nfs-client-provisioner-7d6bdc5c84-68npl -n kube-system
I0506 05:40:02.065682       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/nfs-client...
[root@node1 nfs-subpath]# kubectl logs nfs-client-provisioner-7d6bdc5c84-v2rjv -n kube-system
I0506 05:40:02.033307       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/nfs-client...
# The original pod already had leader election enabled, so the election messages can also be found at the beginning of its logs
[root@node1 nfs-subpath]# kubectl logs nfs-client-provisioner-7d6bdc5c84-rpcpt -n kube-system
I0506 04:02:13.249191       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/nfs-client...
I0506 04:02:13.258480       1 leaderelection.go:252] successfully acquired lease kube-system/nfs-client
I0506 04:02:13.258627       1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"nfs-client", UID:"d99c998e-6ff1-4fb7-9d61-8e81d6c5216b", APIVersion:"v1", ResourceVersion:"24264", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' nfs-client-provisioner-7d6bdc5c84-rpcpt_f6837cb6-f8a9-4689-ad63-4af65358039f became leader
I0506 04:02:13.258964       1 controller.go:820] Starting provisioner controller nfs-client_nfs-client-provisioner-7d6bdc5c84-rpcpt_f6837cb6-f8a9-4689-ad63-4af65358039f!
I0506 04:02:13.359205       1 controller.go:869] Started provisioner controller nfs-client_nfs-client-provisioner-7d6bdc5c84-rpcpt_f6837cb6-f8a9-4689-ad63-4af65358039f!
I0506 04:15:12.930860       1 controller.go:1317] provision "default/test-pvc" class "nfs-storage": started
.....

Simulate deleting one of the pods

# Delete the pod that currently holds the leader lease; another pod is then elected leader, and previously created PVs/PVCs are not affected
[root@node1 nfs-subpath]# kubectl delete pod nfs-client-provisioner-7d6bdc5c84-rpcpt -n kube-system
pod "nfs-client-provisioner-7d6bdc5c84-rpcpt" deleted

 
[root@node1 nfs-subpath]# kubectl get pods -n kube-system -l app=nfs-client-provisioner
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7d6bdc5c84-68mvk   1/1     Running   0          31s
nfs-client-provisioner-7d6bdc5c84-68npl   1/1     Running   0          13m
nfs-client-provisioner-7d6bdc5c84-v2rjv   1/1     Running   0          13m

# Check the logs of each pod
[root@node1 nfs-subpath]# kubectl logs nfs-client-provisioner-7d6bdc5c84-v2rjv -n kube-system
I0506 05:40:02.033307       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/nfs-client...
E0506 05:52:50.518880       1 leaderelection.go:356] Failed to update lock: Operation cannot be fulfilled on endpoints "nfs-client": the object has been modified; please apply your changes to the latest version and try again

[root@node1 nfs-subpath]# kubectl logs nfs-client-provisioner-7d6bdc5c84-68npl -n kube-system
I0506 05:40:02.065682       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/nfs-client...
I0506 05:52:50.517199       1 leaderelection.go:252] successfully acquired lease kube-system/nfs-client
I0506 05:52:50.517673       1 controller.go:820] Starting provisioner controller nfs-client_nfs-client-provisioner-7d6bdc5c84-68npl_e74447ac-2d03-4991-9168-23e735ba8a8c!
I0506 05:52:50.517568       1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"nfs-client", UID:"d99c998e-6ff1-4fb7-9d61-8e81d6c5216b", APIVersion:"v1", ResourceVersion:"42271", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' nfs-client-provisioner-7d6bdc5c84-68npl_e74447ac-2d03-4991-9168-23e735ba8a8c became leader
I0506 05:52:50.618280       1 controller.go:869] Started provisioner controller nfs-client_nfs-client-provisioner-7d6bdc5c84-68npl_e74447ac-2d03-4991-9168-23e735ba8a8c!

[root@node1 nfs-subpath]# kubectl logs nfs-client-provisioner-7d6bdc5c84-68mvk -n kube-system
I0506 05:52:32.990781       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/nfs-client...
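
The logs show that the lock is an Endpoints object named nfs-client in kube-system. To see which pod currently holds the lease, the holder identity is recorded in the leader-election annotation on that object; a hedged way to check:

kubectl get endpoints nfs-client -n kube-system -o yaml | grep holderIdentity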
