Restarting a Kubernetes petset wipes the persistent volumes

Problem description:

I am running a petset of 3 zookeepers whose volumes use glusterfs persistent volumes. The first time the petset is started, everything works fine. But after restarting the Kubernetes petset, the persistent volumes are wiped.

One of my requirements is that if the petset is killed, the pods still use the same persistent volumes after I restart it.

The problem I am facing is that after restarting the petset, the original data in the persistent volumes is cleared. How can I solve this other than manually copying the files out of the volumes beforehand? I have tried reclaimPolicy Retain and Delete; both of them still end up with the volumes being cleared. Thanks.
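For reference, the reclaim policy of an already-created PV can also be changed in place with kubectl patch (the PV name here is taken from the manifests below); note that the reclaim policy only takes effect when the bound claim is deleted, not when the petset's pods restart:

kubectl patch pv glusterfsvol-zookeeper-0 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'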

The configuration files are below.

PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-0
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-0
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-0
    namespace: default
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-1
    namespace: default
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-2
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-2
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-2
    namespace: default

PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-0
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-2
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

petset

apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: zookeeper
spec:
  serviceName: "zookeeper"
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: zookeeper
        securityContext:
          privileged: true
          capabilities:
            add:
              - IPC_LOCK
        image: kuanghaochina/zookeeper-3.5.2-alpine-jdk:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
        - containerPort: 2181
          name: client
        env:
        - name: ZOOKEEPER_LOG_LEVEL
          value: INFO
        volumeMounts:
        - name: glusterfsvol
          mountPath: /opt/zookeeper/data
          subPath: data
        - name: glusterfsvol
          mountPath: /opt/zookeeper/dataLog
          subPath: dataLog
  volumeClaimTemplates:
  - metadata:
      name: glusterfsvol
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi

The cause I found is that I am using zkServer-initialize.sh to force zookeeper to use a specific id, but inside that script it cleans out the dataDir.


Welcome to StackOverflow - you should also share your configuration so that people can reproduce your setup more easily and answer your question. – pagid


Thanks for your help. The configuration files have been added. – HAO

The cause found was that I was using zkServer-initialize.sh to force zookeeper to use an id, but in that script it clears the dataDir.
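In case it helps others, here is a minimal sketch of the kind of workaround that avoids the wipe: instead of calling zkServer-initialize.sh, the container entrypoint writes the myid file itself from the pet's ordinal, so the rest of dataDir is left untouched. The install path, ordinal-to-id mapping, and start command below are assumptions based on the mounts above, not taken from the actual image:

#!/bin/sh
# Hypothetical entrypoint sketch: write myid directly instead of running
# zkServer-initialize.sh, so existing contents of dataDir are preserved.
ZK_DATA_DIR=/opt/zookeeper/data          # matches the volumeMounts above
ORDINAL=${HOSTNAME##*-}                  # petset pods are named zookeeper-0, zookeeper-1, ...
mkdir -p "$ZK_DATA_DIR"
# Only create myid if it does not exist yet; never touch anything else in dataDir.
if [ ! -f "$ZK_DATA_DIR/myid" ]; then
  echo "$((ORDINAL + 1))" > "$ZK_DATA_DIR/myid"
fi
exec /opt/zookeeper/bin/zkServer.sh start-foreground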