CephFS Backend #7172
Is using the Ceph bindings quicker than using S3/Swift via the radosgw? I thought all the data went via the radosgw, but I'm not a Ceph expert! The lib you linked uses cgo, which I've been trying to avoid as it makes cross-compiling hard, and it means the rclone binary won't start unless the library is present, unless we can dynamically load it (which is what cgofuse/cmount does).
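(As an editorial aside, not from the thread: rclone already hides its cgo-dependent cmount support behind build tags, and a cgo CephFS backend could be gated the same way so that default cross-compiled builds are unaffected. A minimal sketch; the "cephfs_backend" tag name and file path are hypothetical.)

```go
//go:build cephfs_backend && cgo

// Hypothetical backend/cephfs/cephfs.go: this file, and with it the
// go-ceph dependency, is compiled only when building with
//   go build -tags cephfs_backend
// and cgo enabled, so an ordinary cross-compiled rclone binary neither
// links against libcephfs nor refuses to start when it is absent.
package cephfs

import (
	_ "github.com/ceph/go-ceph/cephfs" // cgo bindings, pulled in only under the tag
)
```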
In my experience, it's been faster to mount it via the userspace or kernel mount rather than going via the S3 layer, which is why I created the issue. That said, I am planning to wait on implementing this until we have change notifications too, so that it has a definite advantage over the S3 layer.
"Kernel mount" seems to refer to CephFS. This is separate from RADOS and S3 (RGW). Is this issue about supporting RADOS? |
No, this is about CephFS only atm. |
@ncw talked about radosgw above, which I why I pointing out the confusion. RGW has nothing to do with CephFS. Should I create a new issue for RADOS support? |
I'm not a CEPH expert so please correct me if I'm wrong! I think that RGW supports the S3 and Swift interfaces which rclone already supports. Is there another protocol that I don't know about? |
CephFS is not related to RGW or S3 or Swift |
I'm trying to understand why you said "Should I create a new issue for RADOS support?" if this issue covers CephFS and we already support S3 and Swift - what else is left to support? |
I think I see what you mean. S3 is not the native protocol of Ceph; the native protocol, the one that the storage daemons speak and that you can use without a gateway or additional service, is RADOS.

```mermaid
graph TD;
    RADOS-client-->Ceph;
    RBD-client-->RBD;
    RBD-->Ceph;
    RGW-->Ceph;
    S3-client-->RGW;
    cephfs-client-->Ceph;
    cephfs-client-->mds;
    mds-->Ceph;
```
Both the CephFS feature (which requires deploying MDS daemons) and the RGW feature (which requires the RGW daemons) are optional. In particular, if you run a Ceph cluster for virtual machines or containers, you are using RADOS/RBD and won't deploy either CephFS or RGW.

Some programs support RADOS directly: for example, qemu can use RBD as VM disks, and there is a RADOS VFS for SQLite. This reduces complexity and increases performance, since you remove a (centralized) component.

rclone could support storing objects into Ceph directly using librados or the RADOS protocol, instead of requiring a gateway to convert between a protocol rclone currently speaks (S3, Swift) and the one that the Ceph cluster speaks (RADOS). This is what I thought this issue was about.
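To make the librados path concrete, here is a rough sketch of direct object I/O with go-ceph's rados package (illustrative only: the pool name and object key are made up, and a reachable cluster with a default ceph.conf is assumed):

```go
package main

import (
	"fmt"

	"github.com/ceph/go-ceph/rados" // cgo bindings to librados
)

func main() {
	// Connect to the cluster using the config in /etc/ceph/ceph.conf.
	conn, err := rados.NewConn()
	if err != nil {
		panic(err)
	}
	if err := conn.ReadDefaultConfigFile(); err != nil {
		panic(err)
	}
	if err := conn.Connect(); err != nil {
		panic(err)
	}
	defer conn.Shutdown()

	// Open an I/O context on a (hypothetical) pool and write an object
	// straight into RADOS - no RGW in the data path.
	ioctx, err := conn.OpenIOContext("rclone-test-pool")
	if err != nil {
		panic(err)
	}
	defer ioctx.Destroy()

	if err := ioctx.Write("hello-object", []byte("stored via RADOS"), 0); err != nil {
		panic(err)
	}
	fmt.Println("object written without a gateway")
}
```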
Thanks for explaining, @remram44. I understand now. I don't know if there is demand for a RADOS backend or not.
The associated forum post URL from https://forum.rclone.org:
N/A

What is your current rclone version (output from rclone version)?
N/A
What problem are you trying to solve?
A CephFS backend in rclone.
How do you think rclone should be changed to solve that?
Add a new backend for CephFS in rclone.
Reference Library: https://github.com/ceph/go-ceph
Change Notifications are blocked on ceph/go-ceph#478
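For reference, a minimal sketch of what file I/O through go-ceph's cephfs package looks like (illustrative only: it assumes a reachable cluster with MDS daemons and a default ceph.conf, and the file path is made up):

```go
package main

import (
	"os"

	"github.com/ceph/go-ceph/cephfs" // cgo bindings to libcephfs
)

func main() {
	// Create a CephFS mount handle and connect via /etc/ceph/ceph.conf.
	mount, err := cephfs.CreateMount()
	if err != nil {
		panic(err)
	}
	if err := mount.ReadDefaultConfigFile(); err != nil {
		panic(err)
	}
	if err := mount.Init(); err != nil {
		panic(err)
	}
	if err := mount.Mount(); err != nil {
		panic(err)
	}
	defer mount.Unmount()

	// Write a file through the userspace client - no kernel mount needed.
	file, err := mount.Open("/hello.txt", os.O_WRONLY|os.O_CREATE, 0644)
	if err != nil {
		panic(err)
	}
	defer file.Close()
	if _, err := file.Write([]byte("written via libcephfs\n")); err != nil {
		panic(err)
	}
}
```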