
cephfs: use userid and keys for provisioning #4988

Merged — 4 commits merged into ceph:devel from single-set-keys on Jan 8, 2025

Conversation

@black-dragon74 (Member) commented Nov 27, 2024

This patch modifies the code to use userID and userKey for provisioning of both static and dynamic PVs.

In case user credentials are not found, admin credentials are used as a fallback, for backwards compatibility.

Fixes: #4935
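The fallback described above can be sketched roughly as follows. This is a minimal illustration only: getCredentials and the map-based secret are hypothetical, not the actual ceph-csi API.

```go
package main

import (
	"errors"
	"fmt"
)

// getCredentials prefers userID/userKey from the secret and falls back to
// adminID/adminKey for backwards compatibility with older secrets.
func getCredentials(secret map[string]string) (string, string, error) {
	if id, ok := secret["userID"]; ok {
		key, ok := secret["userKey"]
		if !ok {
			return "", "", errors.New("userID is set but userKey is missing")
		}
		return id, key, nil
	}
	// Fallback for backwards compatibility.
	id, idOK := secret["adminID"]
	key, keyOK := secret["adminKey"]
	if !idOK || !keyOK {
		return "", "", errors.New("neither user nor admin credentials found")
	}
	return id, key, nil
}

func main() {
	id, key, err := getCredentials(map[string]string{
		"userID":  "nick2",
		"userKey": "secret",
	})
	fmt.Println(id, key, err) // nick2 secret <nil>
}
```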

@black-dragon74 black-dragon74 marked this pull request as draft November 27, 2024 13:42
@mergify mergify bot added the component/cephfs Issues related to CephFS label Nov 27, 2024
@black-dragon74 (Member, Author):

Cluster Config

Ceph user

$ ceph auth get client.nick2
[client.nick2]
        key = AQCJHUdnHeDrGBAAd9/9Qc1orCwKwlRZLgsDeQ==
        caps mds = "allow r fsname=myfs path=/volumes, allow rws fsname=myfs path=/volumes/csi"
        caps mgr = "allow rw"
        caps mon = "allow r fsname=myfs"
        caps osd = "allow rw tag cephfs metadata=myfs, allow rw tag cephfs data=myfs"
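A user with the caps shown above could be created with ceph auth get-or-create; this requires a live cluster and is shown here only to illustrate the capability set:

```shell
ceph auth get-or-create client.nick2 \
  mds 'allow r fsname=myfs path=/volumes, allow rws fsname=myfs path=/volumes/csi' \
  mgr 'allow rw' \
  mon 'allow r fsname=myfs' \
  osd 'allow rw tag cephfs metadata=myfs, allow rw tag cephfs data=myfs'
```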

Provisioner secret

# oc get secrets/rook-csi-cephfs-provisioner-user2 -o yaml
apiVersion: v1
data:
  userID: bmljazI=
  userKey: QVFDSkhVZG5IZURyR0JBQWQ5LzlRYzFvckN3S3dsUlpMZ3NEZVE9PQ==
kind: Secret
metadata:
  creationTimestamp: "2024-11-27T13:27:03Z"
  name: rook-csi-cephfs-provisioner-user2
  namespace: rook-ceph
  resourceVersion: "1722753"
  uid: 88222761-54a2-4eb0-9d2d-9c11326979a8
type: kubernetes.io/rook

Nodestage secret

# oc get secrets/rook-csi-cephfs-node-user2 -o yaml
apiVersion: v1
data:
  userID: bmljazI=
  userKey: QVFDSkhVZG5IZURyR0JBQWQ5LzlRYzFvckN3S3dsUlpMZ3NEZVE9PQ==
kind: Secret
metadata:
  creationTimestamp: "2024-11-27T13:27:03Z"
  name: rook-csi-cephfs-node-user2
  namespace: rook-ceph
  resourceVersion: "1722754"
  uid: 4e9525bd-4854-4cce-9007-58fd261c6c1a
type: kubernetes.io/rook
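The base64-encoded fields can be decoded to confirm that both secrets carry the Ceph user created above:

```shell
# userID decodes to the Ceph user name
echo 'bmljazI=' | base64 -d; echo
# userKey decodes to the key from `ceph auth get client.nick2`
echo 'QVFDSkhVZG5IZURyR0JBQWQ5LzlRYzFvckN3S3dsUlpMZ3NEZVE9PQ==' | base64 -d; echo
```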

1. Dynamic PVCs

Resources

oc get sc
NAME          PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-cephfs   rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   17m

oc get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
cephfs-pvc          Bound    pvc-39a11e4c-2ddd-46c6-9b5a-6b004bd4eced   1Gi        RWO            rook-cephfs    <unset>                 18m

Logs

I1127 13:29:09.069933       1 utils.go:266] ID: 108 Req-ID: pvc-39a11e4c-2ddd-46c6-9b5a-6b004bd4eced GRPC call: /csi.v1.Controller/CreateVolume
I1127 13:29:09.077837       1 utils.go:267] ID: 108 Req-ID: pvc-39a11e4c-2ddd-46c6-9b5a-6b004bd4eced GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-39a11e4c-2ddd-46c6-9b5a-6b004bd4eced","parameters":{"clusterID":"rook-ceph","csi.storage.k8s.io/pv/name":"pvc-39a11e4c-2ddd-46c6-9b5a-6b004bd4eced","csi.storage.k8s.io/pvc/name":"cephfs-pvc","csi.storage.k8s.io/pvc/namespace":"rook-ceph","fsName":"myfs","pool":"myfs-replicated"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
I1127 13:29:09.170334       1 omap.go:89] ID: 108 Req-ID: pvc-39a11e4c-2ddd-46c6-9b5a-6b004bd4eced got omap values: (pool="myfs-metadata", namespace="csi", name="csi.volumes.default"): map[]
I1127 13:29:09.185399       1 omap.go:159] ID: 108 Req-ID: pvc-39a11e4c-2ddd-46c6-9b5a-6b004bd4eced set omap keys (pool="myfs-metadata", namespace="csi", name="csi.volumes.default"): map[csi.volume.pvc-39a11e4c-2ddd-46c6-9b5a-6b004bd4eced:595c630d-6e17-4c00-a66e-91785fb01c6d])
I1127 13:29:09.190423       1 omap.go:159] ID: 108 Req-ID: pvc-39a11e4c-2ddd-46c6-9b5a-6b004bd4eced set omap keys (pool="myfs-metadata", namespace="csi", name="csi.volume.595c630d-6e17-4c00-a66e-91785fb01c6d"): map[csi.imagename:csi-vol-595c630d-6e17-4c00-a66e-91785fb01c6d csi.volname:pvc-39a11e4c-2ddd-46c6-9b5a-6b004bd4eced csi.volume.owner:rook-ceph])
I1127 13:29:09.191264       1 fsjournal.go:318] ID: 108 Req-ID: pvc-39a11e4c-2ddd-46c6-9b5a-6b004bd4eced Generated Volume ID (0001-0009-rook-ceph-0000000000000001-595c630d-6e17-4c00-a66e-91785fb01c6d) and subvolume name (csi-vol-595c630d-6e17-4c00-a66e-91785fb01c6d) for request name (pvc-39a11e4c-2ddd-46c6-9b5a-6b004bd4eced)
I1127 13:29:09.470449       1 controllerserver.go:475] ID: 108 Req-ID: pvc-39a11e4c-2ddd-46c6-9b5a-6b004bd4eced cephfs: successfully created backing volume named csi-vol-595c630d-6e17-4c00-a66e-91785fb01c6d for request name pvc-39a11e4c-2ddd-46c6-9b5a-6b004bd4eced
I1127 13:29:09.472306       1 utils.go:273] ID: 108 Req-ID: pvc-39a11e4c-2ddd-46c6-9b5a-6b004bd4eced GRPC response: {"volume":{"capacity_bytes":1073741824,"volume_context":{"clusterID":"rook-ceph","fsName":"myfs","pool":"myfs-replicated","subvolumeName":"csi-vol-595c630d-6e17-4c00-a66e-91785fb01c6d","subvolumePath":"/volumes/csi/csi-vol-595c630d-6e17-4c00-a66e-91785fb01c6d/19ea74a6-2409-4220-b930-55deb650dc2a"},"volume_id":"0001-0009-rook-ceph-0000000000000001-595c630d-6e17-4c00-a66e-91785fb01c6d"}}

2. Static PVCs

Resources

oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
cephfs-static-pv                           1Gi        RWX            Retain           Bound    rook-ceph/cephfs-static-pvc                  <unset>                          10m

oc get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
cephfs-static-pvc   Bound    cephfs-static-pv                           1Gi        RWX                           <unset>                 10m
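A minimal sketch of what a static PV like cephfs-static-pv might look like with the new node-stage secret; the rootPath and volumeHandle values are illustrative, not taken from this PR:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-static-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: rook-ceph.cephfs.csi.ceph.com
    # With this PR, the node-stage secret only needs userID/userKey.
    nodeStageSecretRef:
      name: rook-csi-cephfs-node-user2
      namespace: rook-ceph
    volumeAttributes:
      clusterID: rook-ceph
      fsName: myfs
      staticVolume: "true"
      rootPath: /volumes/csi/my-static-subvolume   # illustrative path
    volumeHandle: cephfs-static-pv
```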

@black-dragon74 black-dragon74 force-pushed the single-set-keys branch 2 times, most recently from 3b74c01 to b48a45a Compare December 3, 2024 08:57
@black-dragon74 black-dragon74 marked this pull request as ready for review December 3, 2024 10:26
@black-dragon74 black-dragon74 requested a review from a team December 9, 2024 08:40
@@ -26,23 +26,23 @@ To install the Chart into your Kubernetes cluster

Collaborator:
It looks like there are more changes than expected in this file related to formatting. Do we need this change?

Member (Author):

I could revert the formatting changes. The Markdown files inside charts use outdated syntax; Prettier auto-formatted them and I decided to stick with it.

What would you suggest?

Collaborator:

I would suggest keeping the changes minimal and relevant to the PR, as different developers might use different Prettier configurations.

Member (Author):

Done!

@black-dragon74 black-dragon74 force-pushed the single-set-keys branch 2 times, most recently from 8de5147 to b737872 Compare December 9, 2024 12:32
@black-dragon74 black-dragon74 requested a review from a team December 19, 2024 09:53
@Madhu-1 (Collaborator) left a comment:

small nit, LGTM

charts/ceph-csi-cephfs/values.yaml (outdated; resolved)
@iPraveenParihar (Contributor) left a comment:

LGTM

Madhu-1 previously approved these changes Jan 6, 2025
iPraveenParihar previously approved these changes Jan 6, 2025
@nixpanic (Member) commented Jan 6, 2025

@Mergifyio queue

mergify bot (Contributor) commented Jan 6, 2025

queue

🛑 The pull request has been removed from the queue default

The merge conditions cannot be satisfied due to failing checks.

You can take a look at the "Queue: Embarked in merge queue" check runs for more details.

In case of a failure due to a flaky test, you should first retrigger the CI.
Then, re-embark the pull request into the merge queue by posting the comment
@mergifyio refresh on the pull request.

@mergify mergify bot added the ok-to-test Label to trigger E2E tests label Jan 6, 2025
@ceph-csi-bot (Collaborator):

/test ci/centos/k8s-e2e-external-storage/1.30

/test ci/centos/mini-e2e-helm/k8s-1.30

/test ci/centos/mini-e2e/k8s-1.30

/test ci/centos/k8s-e2e-external-storage/1.31

/test ci/centos/k8s-e2e-external-storage/1.32

/test ci/centos/mini-e2e-helm/k8s-1.31

/test ci/centos/upgrade-tests-cephfs

/test ci/centos/mini-e2e-helm/k8s-1.32

@Madhu-1 (Collaborator) commented Jan 7, 2025

/test ci/centos/upgrade-tests-cephfs

@black-dragon74 (Member, Author):

@Mergifyio requeue

mergify bot (Contributor) commented Jan 8, 2025

requeue

❌ This pull request head commit has not been previously disembarked from queue.

@nixpanic (Member) commented Jan 8, 2025

@Mergifyio queue

This PR was updated after it was queued, so it was unqueued automatically.

mergify bot (Contributor) commented Jan 8, 2025

queue

✅ The pull request has been merged automatically at 7226945

Commit messages:

This patch modifies the code to use userID and userKey for provisioning of both static and dynamic PVs.

In case user credentials are not found, admin credentials are used as a fallback and for backwards compatibility.

Signed-off-by: Niraj Yadav <[email protected]>

Once the version we use for upgrade testing does not depend on adminID and adminKey, we should update the tests to use just the userID and userKey.

Signed-off-by: Niraj Yadav <[email protected]>
@mergify mergify bot added the ok-to-test Label to trigger E2E tests label Jan 8, 2025
@ceph-csi-bot (Collaborator):

/test ci/centos/upgrade-tests-cephfs

/test ci/centos/upgrade-tests-rbd

/test ci/centos/k8s-e2e-external-storage/1.31

/test ci/centos/k8s-e2e-external-storage/1.32

/test ci/centos/k8s-e2e-external-storage/1.30

/test ci/centos/mini-e2e-helm/k8s-1.32

/test ci/centos/mini-e2e-helm/k8s-1.31

/test ci/centos/mini-e2e-helm/k8s-1.30

/test ci/centos/mini-e2e/k8s-1.32

/test ci/centos/mini-e2e/k8s-1.31

/test ci/centos/mini-e2e/k8s-1.30

@ceph-csi-bot ceph-csi-bot removed the ok-to-test Label to trigger E2E tests label Jan 8, 2025
@mergify mergify bot merged commit 7226945 into ceph:devel Jan 8, 2025
37 checks passed
Labels
component/cephfs Issues related to CephFS
Development

Successfully merging this pull request may close these issues.

use single set to keys for cephfs secrets
5 participants