PVC Creation stuck with v3.13.0 #5073

Open
appcoders opened this issue Jan 10, 2025 · 0 comments

I have been trying to get ceph-csi running for three days now. I am using k3s v1.31.4+k3s1 and ceph-csi v3.13.0.
I am following https://docs.ceph.com/en/latest/rbd/rbd-kubernetes/ and replaced the canary image tag with v3.13.0.
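The tag change is just a substitution in the plugin and provisioner manifests, roughly like this (assuming the stock quay.io/cephcsi/cephcsi:canary image references from the upstream example manifests):

sed -i 's|quay.io/cephcsi/cephcsi:canary|quay.io/cephcsi/cephcsi:v3.13.0|g' csi-rbdplugin-provisioner.yaml csi-rbdplugin.yaml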

All pods come up, and I can reach the Ceph cluster without issues from the provisioner pod:

❯ kubectl exec -it csi-rbdplugin-provisioner-869fd747-46qf5 -c csi-rbdplugin -- /bin/sh
sh-5.1# ceph -s --id=kubernetes --key=***REDACTED**** -m 192.168.72.1
  cluster:
    id:     5f2e2b61-4b0c-4e90-aff7-281147b312a8
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum fpha01,fpha02,fpha03,fpha06,fpha08 (age 3d)
    mgr: fpha01(active, since 6d), standbys: fpha08, fpha06
    mds: 1/1 daemons up, 2 standby
    osd: 20 osds: 20 up (since 3d), 20 in (since 3d)

  data:
    volumes: 1/1 healthy
    pools:   5 pools, 225 pgs
    objects: 2.09M objects, 7.6 TiB
    usage:   23 TiB used, 47 TiB / 70 TiB avail
    pgs:     225 active+clean

  io:
    client:   6.0 KiB/s rd, 8.9 MiB/s wr, 1 op/s rd, 118 op/s wr

There are no stale rbd commands on the nodes or in the pods, and there is no unusual output or any errors in dmesg on the nodes.
It looks like simply nothing happens after "setting image options":

csi-rbdplugin-provisioner-869fd747-46qf5 csi-rbdplugin I0110 18:11:31.932145       1 utils.go:266] ID: 23 Req-ID: pvc-99e9948e-bbc5-4ade-be96-90b92cdd0e4c GRPC call: /csi.v1.Controller/CreateVolume
csi-rbdplugin-provisioner-869fd747-46qf5 csi-rbdplugin I0110 18:11:31.932320       1 utils.go:267] ID: 23 Req-ID: pvc-99e9948e-bbc5-4ade-be96-90b92cdd0e4c GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-99e9948e-bbc5-4ade-be96-90b92cdd0e4c","parameters":{"clusterID":"5f2e2b61-4b0c-4e90-aff7-281147b312a8","csi.storage.k8s.io/pv/name":"pvc-99e9948e-bbc5-4ade-be96-90b92cdd0e4c","csi.storage.k8s.io/pvc/name":"raw-block-pvc5","csi.storage.k8s.io/pvc/namespace":"ceph-csi-rbd","imageFeatures":"layering","pool":"kubernetes"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Block":{}},"access_mode":{"mode":1}}]}
csi-rbdplugin-provisioner-869fd747-46qf5 csi-rbdplugin I0110 18:11:31.932455       1 rbd_util.go:1387] ID: 23 Req-ID: pvc-99e9948e-bbc5-4ade-be96-90b92cdd0e4c setting disableInUseChecks: false image features: [layering] mounter: rbd
csi-rbdplugin-provisioner-869fd747-46qf5 csi-rbdplugin I0110 18:11:31.945711       1 omap.go:89] ID: 23 Req-ID: pvc-99e9948e-bbc5-4ade-be96-90b92cdd0e4c got omap values: (pool="kubernetes", namespace="", name="csi.volumes.default"): map[]
csi-rbdplugin-provisioner-869fd747-46qf5 csi-rbdplugin I0110 18:11:31.949091       1 omap.go:159] ID: 23 Req-ID: pvc-99e9948e-bbc5-4ade-be96-90b92cdd0e4c set omap keys (pool="kubernetes", namespace="", name="csi.volumes.default"): map[csi.volume.pvc-99e9948e-bbc5-4ade-be96-90b92cdd0e4c:fdf82e72-0193-4ef9-9edb-c76ca7d72135])
csi-rbdplugin-provisioner-869fd747-46qf5 csi-rbdplugin I0110 18:11:31.950313       1 omap.go:159] ID: 23 Req-ID: pvc-99e9948e-bbc5-4ade-be96-90b92cdd0e4c set omap keys (pool="kubernetes", namespace="", name="csi.volume.fdf82e72-0193-4ef9-9edb-c76ca7d72135"): map[csi.imagename:csi-vol-fdf82e72-0193-4ef9-9edb-c76ca7d72135 csi.volname:pvc-99e9948e-bbc5-4ade-be96-90b92cdd0e4c csi.volume.owner:ceph-csi-rbd])
csi-rbdplugin-provisioner-869fd747-46qf5 csi-rbdplugin I0110 18:11:31.950329       1 rbd_journal.go:515] ID: 23 Req-ID: pvc-99e9948e-bbc5-4ade-be96-90b92cdd0e4c generated Volume ID (0001-0024-5f2e2b61-4b0c-4e90-aff7-281147b312a8-0000000000000009-fdf82e72-0193-4ef9-9edb-c76ca7d72135) and image name (csi-vol-fdf82e72-0193-4ef9-9edb-c76ca7d72135) for request name (pvc-99e9948e-bbc5-4ade-be96-90b92cdd0e4c)
csi-rbdplugin-provisioner-869fd747-46qf5 csi-rbdplugin I0110 18:11:31.950365       1 rbd_util.go:437] ID: 23 Req-ID: pvc-99e9948e-bbc5-4ade-be96-90b92cdd0e4c rbd: create kubernetes/csi-vol-fdf82e72-0193-4ef9-9edb-c76ca7d72135 size 1024M (features: [layering]) using mon 192.168.72.1:6789,192.168.72.2:6789,192.168.72.3:6789,192.168.72.6:6789,192.168.72.8:6789
csi-rbdplugin-provisioner-869fd747-46qf5 csi-rbdplugin I0110 18:11:31.950379       1 rbd_util.go:1641] ID: 23 Req-ID: pvc-99e9948e-bbc5-4ade-be96-90b92cdd0e4c setting image options on kubernetes/csi-vol-fdf82e72-0193-4ef9-9edb-c76ca7d72135
csi-rbdplugin-provisioner-869fd747-46qf5 csi-provisioner I0110 18:11:31.930207       1 event.go:389] "Event occurred" object="ceph-csi-rbd/raw-block-pvc5" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="Provisioning" message="External provisioner is provisioning volume for claim \"ceph-csi-rbd/raw-block-pvc5\""
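The last csi-rbdplugin line above is the image create that never completes. The CLI equivalent of what the provisioner is attempting via librbd would be roughly the following, run from the provisioner pod (the image name test-img is only a placeholder for comparison):

sh-5.1# rbd create kubernetes/test-img --size 1024 --image-feature layering --id=kubernetes --key=***REDACTED**** -m 192.168.72.1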

The install is done this way in the namespace ceph-csi-rbd:

kubectl create -f csidriver.yaml
kubectl create -f ceph-config-map.yaml
kubectl create -f csi-config-map.yaml
kubectl create -f csi-kms-config-map.yaml
kubectl create -f csi-rbd-secret.yaml
kubectl create -f csi-nodeplugin-rbac.yaml
kubectl create -f csi-provisioner-rbac.yaml
kubectl create -f csi-rbd-sc.yaml
kubectl create -f csi-rbdplugin-provisioner.yaml
kubectl create -f csi-rbdplugin.yaml
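Whether everything came up can be checked with, for example (the storage class name csi-rbd-sc is the one from the example manifests):

kubectl -n ceph-csi-rbd get pods
kubectl get sc csi-rbd-sc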

And after everything is up:

kubectl create -f raw-block-pvc.yaml
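The claim is a raw block PVC; reconstructed from the CreateVolume request in the log above, it is roughly equivalent to the following (the storage class name csi-rbd-sc is assumed; the actual file is attached below):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc5
  namespace: ceph-csi-rbd
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc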

All YAMLs and log files are attached.

log.txt
csidriver.yaml.txt
ceph-config-map.yaml.txt
csi-config-map.yaml.txt
csi-kms-config-map.yaml.txt
csi-rbd-secret.yaml.txt
csi-nodeplugin-rbac.yaml.txt
csi-provisioner-rbac.yaml.txt
csi-rbd-sc.yaml.txt
csi-rbdplugin-provisioner.yaml.txt
csi-rbdplugin.yaml.txt
raw-block-pvc.yaml.txt
