Prometheus metrics hardcoded to start at port 8080 #392
Sure, that's a workaround, but taking …
Just want to chime in and say that this change would have caused a production outage for us, which we fortunately caught in one of our staging clusters. We had one deployment with this plugin enabled that had a container already using port 8080, which started failing after a routine GKE upgrade. Given the potential impact of this change, should metrics collection be opt-in instead?
I agree. The metrics collection port should be customizable; hardcoding it to 8080 is very restrictive.
Thank you all for your feedback. We recently disabled metrics exporting by default in #375 to prevent this kind of friction when using the driver. Regarding the port, I agree that fixing this port number as the default is rather strict. Marking this issue as an enhancement to eventually add more flexibility for users who would like to use metrics exporting.
Thank you for the rapid fix. Could you also add a note to the Kubernetes Engine release notes? Since this is a breaking change, it would be good to know which GKE version includes the fixed driver version.
The metrics added in #310 listen separately at each mount, with a starting port of 8080. We should be able to customize the start port, but I think it would need to come from a sidecar annotation (for example, `gke-gcsfuse/prometheus-start-port: "9090"`). With this, `prometheus-port` doesn't need to be a disallowed flag; instead, it would skip incrementing from the start port. (Are there options to disable the metrics if users don't want them at all?)
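
For illustration, here is a minimal sketch of how the proposed annotation could look on a Pod spec. Note that `gke-gcsfuse/prometheus-start-port` is only the suggestion from this issue and is not an implemented annotation; the workload image, bucket name, and port values are hypothetical. Only `gke-gcsfuse/volumes: "true"` and the `gcsfuse.csi.storage.gke.io` driver name are the driver's existing sidecar-injection annotation and CSI driver name.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-example
  annotations:
    # Existing annotation that enables gcsfuse sidecar injection.
    gke-gcsfuse/volumes: "true"
    # Proposed (hypothetical) annotation from this issue: start the
    # per-mount metrics listeners at 9090 instead of the hardcoded 8080,
    # avoiding conflicts with workload containers that already use 8080.
    gke-gcsfuse/prometheus-start-port: "9090"
spec:
  containers:
    - name: app
      image: my-app:latest        # hypothetical workload image
      ports:
        - containerPort: 8080     # the app already owns 8080, hence the conflict
      volumeMounts:
        - name: gcs-data
          mountPath: /data
  volumes:
    - name: gcs-data
      csi:
        driver: gcsfuse.csi.storage.gke.io
        volumeAttributes:
          bucketName: my-bucket   # hypothetical bucket name
```

If something along these lines were adopted, the per-mount metrics endpoints would increment from 9090 rather than 8080, so a workload that already binds 8080 would no longer collide with the sidecar's metrics listener.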