Allow to change mounter option from an existing PV #4691
There's been a similar discussion here: #1887
@Madhu-1 Thank you for considering it. If I understand correctly, you have in mind a new ConfigMap setting that would force the mounter to either FUSE or kernel, regardless of the value of the PV's volumeAttributes.mounter attribute. Is that correct? Any reason why the mounter was originally set at the StorageClass level? Was it intentional, to let administrators decide that some PVs should be mounted with FUSE instead of the kernel client, depending on the type and structure of the data they contain? (Although I doubt many of us run two different CephFS StorageClasses in a Kubernetes cluster, one with mounter: kernel and another with mounter: fuse.)
I think this is a perfect case for #4662
We didn't have any requirement for it, so it was set in the SC.
I forgot that I recently opened #4662 for this use case, where an admin will be able to dynamically change the options for an existing PVC. For that, we need an implementation in cephcsi to store the details in the omap or the image metadata. @nixpanic @Rakshith-R what is your suggestion for storing the options: omap, or image/subvolume metadata?
Image/subvolume metadata is my preference; that way the settings are kept together with the storage. Certain RBD features already use keys in the RBD image metadata (like I/O bandwidth throttling for rbd-nbd).
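To illustrate the metadata option discussed above, here is a rough CLI sketch of where such per-volume settings could live. The key name `csi.ceph.com/mounter` and the volume names are hypothetical; only the `ceph fs subvolume metadata` and `rbd image-meta` commands themselves are existing Ceph tooling (subvolume metadata requires Ceph Quincy or later).

```shell
# Hedged sketch, not the implemented mechanism.
# CephFS: store a per-subvolume option in subvolume metadata.
ceph fs subvolume metadata set myfs csi-vol-example csi.ceph.com/mounter kernel
ceph fs subvolume metadata get myfs csi-vol-example csi.ceph.com/mounter

# RBD: the analogous mechanism, already used for e.g. rbd-nbd QoS keys.
rbd image-meta set mypool/csi-vol-example csi.ceph.com/mounter kernel
```

The appeal of this approach over omap is that the setting travels with the subvolume/image itself, so it survives independently of CSI bookkeeping.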
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions. |
Preventing closure during summer. |
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation. |
Describe the feature you'd like to have
Hello, this issue is to discuss the feasibility of changing the mounter type of an existing PV.
Over time, due to issues with the Ceph MDS balancer in active/active mode and kernel hangups, we had to change the StorageClass's mounter type to fuse (instead of kernel). We now want to revert this change and use the kernel mounter again.
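For context, the mounter is selected in the ceph-csi CephFS StorageClass parameters, from which it is copied into each new PV. A minimal sketch (clusterID, fsName, and the StorageClass name below are placeholders):

```yaml
# Sketch of a ceph-csi CephFS StorageClass selecting the mounter;
# placeholder values, not a complete manifest.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  fsName: <filesystem-name>
  # "kernel" or "fuse"; copied into each new PV's volumeAttributes.mounter
  mounter: kernel
reclaimPolicy: Delete
```

Changing this parameter (or recreating the StorageClass) only affects PVs provisioned afterwards, which is exactly the problem described here.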
In the meantime, all PVs were provisioned with volumeAttributes.mounter set to fuse, and this property cannot be changed with a kubectl patch. As a result, those PVs are still mounted with ceph-fuse, even though the StorageClass was recreated with mounter: kernel.
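An illustrative fragment of such an affected PV (names and handle are placeholders); the spec.csi section of a PersistentVolume is immutable once the object exists, which is why the attribute cannot simply be patched:

```yaml
# Sketch of an affected PV; placeholder values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-<uid>
spec:
  csi:
    driver: cephfs.csi.ceph.com
    volumeHandle: <volume-handle>
    volumeAttributes:
      mounter: fuse   # stays "fuse" even after the StorageClass was recreated
```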
Is there a way to change the mounter type of an existing PV? Perhaps by editing the PV's etcd entry directly?
If not, can you provide a way to do so?
What is the value to the end user? (why is it a priority?)
Change the mounter type of an existing PV without having to move all data from one PV to another (which requires stopping the application), and regain better performance with the kernel mounter.
How will we know we have a good solution? (acceptance criteria)
PVs previously using volumeAttributes.mounter: fuse and mounted with ceph-fuse would be remounted with the kernel client after changing the property to volumeAttributes.mounter: kernel.