Queries regarding saving the data #105
Hi @abhay40711cs, I don't know about Prometheus, but appmetrics (which appmetrics-dash uses under the covers to gather data) can be run in File Collection Mode, which stores data in an *.hcd archive format. On the other hand, since you mention Grafana, you might want to take a look at appmetrics-elk, which can send data to an Elasticsearch instance that can then be used with Grafana.
Thanks @mattcolegate. Could you suggest how to run appmetrics-dash in File Collection Mode, and can this data (.hcd format) be imported into Grafana? That would fulfil the whole purpose. I tried appmetrics-elk, but it seems to be obsolete with current Kibana and Node.js versions.
@abhay40711cs We're actually going to be supporting a Prometheus endpoint very shortly. Do you have any specific data that you'd like access to, so that we can prioritise what to expose first?
@seabaylea: Thanks for giving this some thought. Apart from this, any plans for upgrading appmetrics-elk?
Currently we're looking at providing the following for HTTP:
# HELP http_request_duration_microseconds The HTTP request latencies in microseconds.
# TYPE http_request_duration_microseconds summary
# HELP http_requests_total Total number of HTTP requests made.
# TYPE http_requests_total counter
each of which would be broken down with per-route data. You should then be able to generate rate/throughput queries in Prometheus. For error rate, would a single count for any request that doesn't result in 200 OK make sense, or would you need counts of 3xx vs. 4xx etc.?
This is great, Chris. Endpoint metrics generation will help reduce the code changes needed.
For error rates, if they can be segregated into 3xx, 4xx, and 5xx, that will help identify rejected requests during a load test.
Regards,
Abhay
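The segregation Abhay suggests amounts to labelling the error counter with a status class rather than keeping one undifferentiated count. A hypothetical sketch of that bucketing (not appmetrics code; all names here are illustrative):

```javascript
// Sketch: bucket HTTP status codes into classes ("2xx", "3xx", "4xx", "5xx")
// so a Prometheus counter can carry a status-class label with low cardinality.
function statusClass(statusCode) {
  if (statusCode >= 200 && statusCode < 300) return '2xx';
  if (statusCode >= 300 && statusCode < 400) return '3xx';
  if (statusCode >= 400 && statusCode < 500) return '4xx';
  if (statusCode >= 500 && statusCode < 600) return '5xx';
  return 'other';
}

// Counter keyed by class; during a load test the 4xx/5xx buckets
// surface rejected requests without needing a series per status code.
const byClass = {};
function record(statusCode) {
  const cls = statusClass(statusCode);
  byClass[cls] = (byClass[cls] || 0) + 1;
}
```

Keeping one bucket per class, rather than one per status code, trades detail for fewer time series per route.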
Any check-in yet for the Prometheus endpoints?
Hi @abhay40711cs. I can tell you that work is underway - I've got a very early WIP here: Note that we still haven't decided/determined whether we should add support into appmetrics-dash, or make it a separate module. |
I'm leaning more and more towards having this as a separate module rather than part of appmetrics-dash.
@abhay40711cs We have released an early driver of our Prometheus adaptor to GitHub this morning: https://github.com/RuntimeTools/appmetrics-prometheus
Thanks for the initiative.
A quick question: would it be possible to have a custom endpoint other than /metrics? The reason is that there are wrappers around the client library 'prom-client' that also serve metrics on the same endpoint.
Hi @abhay40711cs. Can you raise an issue in the appmetrics-prometheus project to make the endpoint configurable? If you have any suggestions on how you'd like to configure it, that would also be useful (it's also fine if you don't mind how it's done!)
I need to save the data and show it in Grafana.
Is there any plugin which can transfer data to Prometheus?