Replies: 2 comments 1 reply
-
Ideally you should have used a single-node server with S3 as the backend storage. That would have allowed external systems to use the data alongside OpenObserve, since OpenObserve would store all enriched data in S3 instead of on the local disk of the EC2 instance. Moving from single-node local storage to S3-backed storage is an involved activity. Please reach out to our team on Slack if you want this, and we can help you with it.
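As a rough sketch of what the S3-backed single-node setup described above looks like: OpenObserve selects its object store via environment variables. The variable names below follow the OpenObserve storage documentation, but the bucket, region, and credentials are placeholders, so verify the exact names against the docs for your version before relying on them.

```shell
# Hypothetical single-node OpenObserve config using S3 as backend storage.
# Bucket name, region, and credentials are placeholders -- replace them
# with your own values (or use an EC2 instance role instead of static keys).
export ZO_LOCAL_MODE=true            # single-node mode
export ZO_LOCAL_MODE_STORAGE=s3      # store data in S3 instead of local disk
export ZO_S3_PROVIDER=aws
export ZO_S3_REGION_NAME=us-east-1
export ZO_S3_BUCKET_NAME=my-openobserve-data   # placeholder bucket
export ZO_S3_ACCESS_KEY=AKIA...                # placeholder credentials
export ZO_S3_SECRET_KEY=...
```

With this in place, the enriched data OpenObserve writes lands in the bucket as its internal parquet files, which external systems can then read directly from S3.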
-
Hi Prabhat, thanks for your reply and support. This will help us share the data with other systems after the enrichment process in OpenObserve is complete. I shall reach out to one of the folks on Slack to learn more about this implementation. Thanks,
-
Which OpenObserve functionalities are relevant/related to the feature request?
No response
Description
Hi Team,
We are using AWS Cloud for our environment.
At present we are using a single EC2 instance to collect logs into OpenObserve. Later this will be converted to an HA setup.
The ask here is: how do we store/copy the same data (a copy of the data we see inside OpenObserve after it has been processed by functions and enrichment) into S3?
Or
Is it the case that we can only feed data from S3 into OpenObserve for processing, and cannot take the processed/enriched data back out for further processing?
We need the enriched data outside the environment where we host OpenObserve, for various analytical purposes. Please clarify.
Ref:
https://openobserve.ai/docs/storage/#disk
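One way to take processed/enriched data back out, regardless of storage backend, is to query it through OpenObserve's search API (`POST /api/{org}/_search`). The sketch below is a minimal, hypothetical example: the stream name `enriched_logs`, the org `default`, the endpoint URL, and the credentials are all placeholders, and the request body shape should be checked against the API docs for your OpenObserve version.

```python
import base64
import json
import urllib.request

# Placeholders -- adjust for your deployment.
BASE_URL = "http://localhost:5080"
ORG = "default"
AUTH = base64.b64encode(b"root@example.com:password").decode()

def build_search_payload(sql, start_us, end_us, size=1000):
    """Build the JSON body for OpenObserve's _search endpoint.

    Timestamps are microseconds since the epoch, which is what
    OpenObserve uses internally for _timestamp.
    """
    return {
        "query": {
            "sql": sql,
            "start_time": start_us,
            "end_time": end_us,
            "from": 0,
            "size": size,
        }
    }

def export_enriched(stream, start_us, end_us):
    """Fetch enriched records from a stream so they can be shipped elsewhere."""
    payload = build_search_payload(f'SELECT * FROM "{stream}"', start_us, end_us)
    req = urllib.request.Request(
        f"{BASE_URL}/api/{ORG}/_search",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {AUTH}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["hits"]  # list of enriched records as dicts
```

The returned records can then be written to S3 (or anywhere else) by the calling code, e.g. in batches keyed by time window.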
Proposed solution
N/A
Alternatives considered
N/A