"Device already mounted at /var/lib/kubelet/pods" with a shared=yes ZFS dataset #497

Open
etlfg opened this issue Jan 17, 2024 · 1 comment

etlfg commented Jan 17, 2024

What steps did you take and what happened:

I can't get two pods running on the same node to access the same dataset.

I've been reading all the issues and docs I can find about sharing a dataset between pods.

But I can't get past this error:

MountVolume.SetUp failed for volume "consume-pv" : rpc error: code = Internal desc = rpc error: code = Internal desc = verifyMount: device already mounted at [/var/lib/k0s/kubelet/pods/7f3fc9cd-5e94-4d32-9e59-3ae0caa41fc4/volumes/kubernetes.io~csi/import-pv/mount /host/var/lib/k0s/kubelet/pods/7f3fc9cd-5e94-4d32-9e59-3ae0caa41fc4/volumes/kubernetes.io~csi/import-pv/mount]
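A quick way to see where the dataset is currently mounted on the node (a sketch; the dataset path data/import is assumed from the pool and volume names):

# run on the node itself; filter for the dataset backing the volume
findmnt -t zfs -o TARGET,SOURCE | grep import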

I also don't see the shared: yes parameter in my ZFSVolume, as stated in #152 (comment).
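For reference, my understanding from that comment is that the ZFSVolume CR should end up carrying the shared property in its spec once the StorageClass parameter is applied, roughly like this (a sketch based on #152, not actual cluster output):

apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: import
  namespace: openebs
spec:
  capacity: "1073741824000"
  fsType: zfs
  ownerNodeID: main
  poolName: data
  shared: "yes"        # the field that is missing from the actual CR shown below
  volumeType: DATASET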

Here are my curated resources:

kubectl get sc -n openebs zfs-import -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-import
parameters:
  compression: "off"
  dedup: "off"
  fstype: zfs
  poolname: data/import
  recordsize: 16k
  shared: "yes"
  thinprovision: "no"
provisioner: zfs.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: Immediate

kubectl get zv -n openebs import -o yaml
apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: import
  namespace: openebs
spec:
  capacity: "1073741824000"
  fsType: zfs
  ownerNodeID: main
  poolName: data
  volumeType: DATASET
status:
  state: Ready

kubectl get pv import-pv -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  finalizers:
  - kubernetes.io/pv-protection
  name: import-pv
spec:
  accessModes:
  - ReadWriteOnce # Tried ReadWriteMany just in case, doesn't work as expected
  capacity:
    storage: 1Ti
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: import-pvc
    namespace: default
  csi:
    driver: zfs.csi.openebs.io
    fsType: zfs
    volumeAttributes:
      openebs.io/poolname: data
    volumeHandle: import
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - main
  persistentVolumeReclaimPolicy: Retain
  storageClassName: zfs-import
  volumeMode: Filesystem
status:
  phase: Bound

kubectl get pvc import-pvc -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers:
  - kubernetes.io/pvc-protection
  name: import-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Ti
  storageClassName: zfs-import
  volumeMode: Filesystem
  volumeName: import-pv
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Ti
  phase: Bound

What did you expect to happen:

I'm expecting the two pods to share the same ZFS dataset so that there is only one destination to put my files in (the two applications have different concerns depending on the files placed in it).
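To make the intent concrete, here is a minimal sketch of the two consumers (pod names and image are placeholders; both are pinned to node main and reference the same claim):

apiVersion: v1
kind: Pod
metadata:
  name: consumer-a            # placeholder name
spec:
  nodeSelector:
    kubernetes.io/hostname: main
  containers:
  - name: app
    image: busybox            # placeholder image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: import-pvc
---
apiVersion: v1
kind: Pod
metadata:
  name: consumer-b            # placeholder name, otherwise identical to consumer-a
spec:
  nodeSelector:
    kubernetes.io/hostname: main
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: import-pvc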

Environment:

  • ZFS-LocalPV version
    zfs-driver:2.4.0
  • Kubernetes version (use kubectl version):
Client Version: v1.28.5
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.4+k0s
  • Kubernetes installer & version:
    k0s v1.28.4+k0s.0
  • Cloud provider or hardware configuration:
    Odroid HC4
  • OS (e.g. from /etc/os-release):
    Armbian 23.11.1 bookworm
    6.1.63-current-meson64

Thanks in advance for any clue about this.
etlfg changed the title from "Device already mounted at for two pods on the same shared=yes ZFS dataset" to the current one on Jan 17, 2024
Abhinandan-Purkait added the bug label on Jun 6, 2024
w3aman (Contributor) commented Oct 2, 2024

Hi @etlfg, I was also trying to use a shared mount and was able to do it successfully. So I thought I'd give it a try with the YAMLs you provided, and it worked for me here as well. I can see shared: yes in the -o yaml output of the ZFSVolume CR. Does this issue still persist for you? I would suggest giving it another try.
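A quick way to check that on your side (assuming the field lands at spec.shared on the ZFSVolume CR):

# should print "yes" when sharing is enabled on the volume
kubectl get zv -n openebs import -o jsonpath='{.spec.shared}'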

One point I want to check:

  1. In your StorageClass YAML I see poolname: data/import, but in your ZFSVolume and PV YAMLs it is only poolName: data. Can you confirm whether, by any chance, your StorageClass YAML was different while provisioning the volume? (A quick way to compare the two is sketched below.)
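For illustration, the two values can be compared directly (field paths taken from the YAMLs you posted):

# dataset parameter on the StorageClass
kubectl get sc zfs-import -o jsonpath='{.parameters.poolname}'
# pool/dataset recorded on the provisioned ZFSVolume; the two would be expected to match
kubectl get zv -n openebs import -o jsonpath='{.spec.poolName}'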
