SLO publishing workflow #913
My setup is simple: Prometheus (dockerized) on EC2, with Grafana elsewhere. It works, and it's solid and simple.

I'm looking at `pyrra` to manage SLOs. Prometheus needs the recording rules and configuration on its local filesystem, so I'm currently running `pyrra` with the `filesystem` argument. This picks up local changes, generates the SLO rule files, and reloads Prometheus. That part works well.

The problem comes when my teams need to update or add SLOs. Running `ansible` to push new SLO files onto the Prometheus machine is just not flexible.

I'm thinking of extending the filesystem mode to accept new SLO specs over HTTP and write them out to the filesystem, which would trigger the existing okay-Prometheus-now-go-reload flow. Ultimately, we'd manage these SLO specs in a dedicated `git` repo, from which they would be pushed into production.

Would that make sense, or is there a preferred way of refreshing the specs on disk that I just haven't seen yet?

Comments

@metalmatze I did some work on this in #914, interested to understand if I'm on the correct path here though 😅
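The proposed HTTP-accepting extension could look roughly like this: accept a posted spec, write it into the watched directory (so the existing filesystem watcher does the rest), and optionally ask Prometheus to reload. This is a minimal sketch, not Pyrra's actual implementation; `save_slo_spec` and `reload_prometheus` are hypothetical names, and the reload call assumes Prometheus is started with `--web.enable-lifecycle`:

```python
import pathlib
import urllib.request


def save_slo_spec(spec_dir: str, name: str, body: bytes) -> pathlib.Path:
    """Write a posted SLO spec into the directory Pyrra watches.

    A real endpoint would also validate the YAML and sanitize `name`
    before touching the filesystem; this sketch skips that.
    """
    path = pathlib.Path(spec_dir) / f"{name}.yaml"
    path.write_bytes(body)  # the filesystem watcher picks up the change
    return path


def reload_prometheus(base_url: str = "http://localhost:9090") -> None:
    """Trigger a config reload (requires --web.enable-lifecycle on Prometheus)."""
    req = urllib.request.Request(f"{base_url}/-/reload", method="POST")
    urllib.request.urlopen(req)
```

A CI job in the dedicated `git` repo could then publish specs with something like `curl -X POST --data-binary @api-errors.yaml http://<pyrra-host>/...` against whatever route the endpoint ends up exposing (the route here is deliberately unspecified).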