This repository has been archived by the owner on Jan 24, 2019. It is now read-only.

OIDC ID Token, Authorization Headers, Refreshing and Verification #621

Closed
Wanted to merge 36 commits.

Conversation

JoelSpeed
Contributor

This PR includes the commits from/replaces #534 and #620 and adds proper OIDC token validation. Please see both original PRs for descriptions and conversations about the PRs.

This now fixes #530

To summarise:

I have built an image and published it here: quay.io/joelspeed/kubernetes-3. It is built from https://github.com/pusher/oauth2_proxy/tree/kubernetes and contains the commits from this PR and #464.

This PR has become quite large, but having opened #620, I realised there was a bug when the cookie_refresh period was shorter than the id_token lifetime (thanks @jhohertz). To fix this, I needed to correctly implement ValidateSessionState for OIDC, which in turn needs the changes from #534.

So the resulting PR is a combination of fixes that adds a bunch of new functionality to the OIDC provider.

@jhohertz

Testing this now... With the new PRs I can reliably reproduce the merged state of your pusher:kubernetes branch from the individual PRs.

The refresh also seems to be working well now with your additional changes. For testing I have it expiring every 10 minutes, and it's been reliably and seamlessly refreshing each time it expires.

Great stuff @JoelSpeed , hope the maintainers take this merge in.

@danielm0hr

danielm0hr commented Jun 22, 2018

Hi @JoelSpeed, when using your quay.io/joelspeed/kubernetes-3 image I'm getting:

2018/06/22 12:27:12 main.go:109: Get https://dex.<my-domain>/.well-known/openid-configuration: x509: failed to load system roots and no roots provided

Is the image maybe missing CA certificates? The https://dex.<my-domain> endpoint has a proper Let's Encrypt certificate. Before this I was using your quay.io/joelspeed/whitelist-domains-1 image, which was working just fine.

Btw, thanks for the great work! This is exactly what I was looking for.

@JoelSpeed
Contributor Author

@danielm0hr There is deliberately no CA certificate bundle installed in the image; we mount it directly from the host's /etc/ssl/certs/ca-certificates.crt. You can either do that, or build your own image from this one with the certificates installed.
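For anyone wanting to replicate the host-mount approach, a Kubernetes Pod spec fragment would look roughly like this (container name and image tag are illustrative, not part of this PR):

```yaml
# Illustrative fragment: mount the host's CA bundle read-only into the
# oauth2_proxy container instead of baking certificates into the image.
spec:
  containers:
  - name: oauth2-proxy
    image: quay.io/joelspeed/kubernetes-3
    volumeMounts:
    - name: ca-certs
      mountPath: /etc/ssl/certs/ca-certificates.crt
      readOnly: true
  volumes:
  - name: ca-certs
    hostPath:
      path: /etc/ssl/certs/ca-certificates.crt
      type: File
```

This keeps the image small and lets the host's package manager keep the bundle current for every container that mounts it.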

@danielm0hr

@JoelSpeed Thanks.

I have added the CA certificates to the image gfkse/oauth2_proxy:kubernetes-3-ca-certs, which is working fine now. This is built from https://github.com/gfkse/oauth2_proxy/blob/kubernetes/Dockerfile, if you want to incorporate that...

@JoelSpeed
Contributor Author

As I've said, we are deliberately mounting the certificates from our hosts, so I won't be updating my image to add the certificates.

Given that CA certificates need to be kept up to date, we find it easier to maintain a single copy on the host and mount it into multiple containers than to rebuild each image regularly to keep the certificates current. YMMV.

@danielm0hr

I see your point. But does it really hurt to have the additional 256 KiB of CA certificates in the image? You can still replace the path with your host mount, can't you?

But this would enable people to use the image without any additional precautions. E.g. I'm currently using the official oauth2_proxy helm chart with this image. The chart would also have to be changed to enable mounting CA certificates from the host.

@luispollo

luispollo commented Jul 2, 2018

Hello @JoelSpeed. Thanks a lot for contributing this. I'm looking to leverage your changes and have deployed your image of oauth2_proxy with the configuration you partially outlined in your article on The New Stack (great series, BTW). I have a setup with dex just like in the article. The OIDC dance between oauth2_proxy and dex works, and the OAuth flow between dex and GitHub Enterprise in my case also works. For some reason, however, in the last leg of the redirects, oauth2_proxy is not including the Authorization header in the response as per your change, even though I set --set-authorization-header=true in the options, and so I get prompted to log in to the Kubernetes Dashboard.

The only log line I see from oauth2_proxy, which seems to indicate the id_token was picked up successfully, is this:

2018/07/02 23:33:56 oauthproxy.go:679: 100.104.0.4:43064 ("15.65.252.15") authentication complete Session{email:<redacted> user: token:true id_token:true expires:2018-07-03 23:33:55.748705419 +0000 UTC m=+86432.236792032}

Any suggestions on how to debug this further, or could you perhaps share your YAML file for the oauth2_proxy deployment in your test environment?

Thanks in advance!

@JoelSpeed
Contributor Author

Hi @luispollo Glad you enjoyed the series 😄

I would recommend first checking the /oauth2/auth endpoint on your proxy once you have logged in. You should be able to request this page and see a 202 response with an authorization header set.

If you see the Authorization header set here, you should be able to retrieve the bearer JWT and check that it is indeed valid. If the header isn't there then there is a problem with your OAuth2 Proxy configuration.

If you can see the header there, then it would suggest to me that the Authorization header isn't being passed on correctly. Double-check your ingress object; it should have this configuration snippet:

ingress.kubernetes.io/configuration-snippet: |
 auth_request_set $token $upstream_http_authorization;
 proxy_set_header Authorization $token;

The line ingress.kubernetes.io/configuration-snippet may need changing depending on your ingress controller configuration; double-check what you have annotations-prefix set to in your ingress controller.
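For example, assuming the NGINX ingress controller's default annotations-prefix of nginx.ingress.kubernetes.io, the same snippet would be written as:

```yaml
# Same snippet, prefixed for the NGINX ingress controller's default
# annotations-prefix (adjust if your controller uses a different one).
nginx.ingress.kubernetes.io/configuration-snippet: |
  auth_request_set $token $upstream_http_authorization;
  proxy_set_header Authorization $token;
```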

@luispollo

Thanks @JoelSpeed. You nailed it. It was the annotation prefix (I was missing nginx in front of my annotation in my setup with the NGINX ingress).

Things are looking good now, except I ran into #580: the OIDC provider currently does not include the groups scope in the OAuth flow, and so dex does not include groups in the ID token in my setup. I know that's unrelated to your PR, but it's the only thing missing at this point for my ideal setup to work.

@JoelSpeed
Contributor Author

@luispollo Have you tried overriding the scope?

In my config file I have:

scope="openid profile email groups audience:server:client_id:kubernetes offline_access"

This is sent to Dex when making the request. By default the scope is only openid profile email, so I've added groups, cross-client trust in Dex (the audience:server:client_id scope), and the offline_access scope for token refreshing.

If you set your scope to include groups, you should get the groups from the ID token through to the dashboard.

@luispollo

luispollo commented Jul 3, 2018

You beat me to it, @JoelSpeed. I just found out you could override scopes. 😄
I can see groups in the JWT now. Successfully logging in to the Dashboard and kubectl via dex!

Thanks so much for contributing this! I hope the maintainers will review and merge this PR soon.

@tlawrie

tlawrie commented Jul 6, 2018

@JoelSpeed I reiterate the thanks for contributing this, though I'm also seeking your help. I have leveraged your implementation and altered it slightly to create a manual OIDC provider implementation for providers that don't implement the introspection endpoint (https://github.com/tlawrie/oauth2_proxy/tree/session-refresh).

However, I am getting a 502 from the NGINX ingress on Kubernetes. This occurs when testing the cookie refresh; I have not yet gotten to testing the 12-hour session refresh.

Any thoughts on what could be causing it? I added a lot of extra debug logging, and I know that it's performing a session save after the session timeout.

@JoelSpeed
Contributor Author

@tlawrie I've seen this a few times since implementing the refreshing for OIDC.

For us, it was that Grafana would send about 40 requests simultaneously as it refreshed. All 40 requests hit oauth2_proxy and all triggered a refresh, and the OIDC provider the proxy backed onto returned an internal server error because it was receiving too many simultaneous requests to refresh the same token.

To mitigate this, I've created some caching within the ingress controller; see the gist:
https://gist.github.com/JoelSpeed/9f4dbf6f79f6498d12ccd6ff0bc096e2

What this does is cache the response from the upstream oauth2_proxy for three seconds. When the next batch of requests comes through, nginx sends a single request to oauth2_proxy and returns the cached response to all of them. Just point your ingress auth-url to http://127.0.0.1/oauth2/auth to enable the caching on the ingress.

You'll need to change the host on lines 30 and 57, and the cookie name on line 43, if you want to make use of this.
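The core of that gist is an nginx proxy_cache with a short validity plus proxy_cache_lock to collapse concurrent misses into one upstream request. A rough, illustrative sketch (the upstream host and cookie name are placeholders; see the gist for the real config):

```nginx
# Cache successful auth sub-requests for a few seconds so a burst of
# simultaneous requests triggers only one refresh against oauth2_proxy.
proxy_cache_path /tmp/oauth2-cache keys_zone=auth_cache:10m inactive=10s;

server {
    listen 127.0.0.1:80;

    location /oauth2/auth {
        proxy_pass http://oauth2-proxy.example.svc.cluster.local:4180;  # placeholder upstream
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";

        proxy_cache auth_cache;
        proxy_cache_key $cookie__oauth2_proxy;  # key on the session cookie (placeholder name)
        proxy_cache_valid 202 3s;               # cache accepted auth responses for 3 seconds
        proxy_cache_lock on;                    # collapse concurrent misses into one request
    }
}
```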

@tlawrie

tlawrie commented Jul 11, 2018

@JoelSpeed thanks for the information, I am going to give that a try very soon.

One clarification: I currently use both the auth-url and auth-signin ingress annotations for the protected paths; these currently point to an ingress defined for oauth2_proxy.

If I add this custom snippet to the ConfigMap, I will then have conflicts with the existing Kubernetes ingress and service definitions, which achieve a similar implementation without the cache.

What's the best way to implement this: without the ingresses, or without the ConfigMap?

Edit: After implementing this I get a '500 Internal Server Error'.

@JoelSpeed
Contributor Author

@tlawrie I would try adding the cache parts of the gist (L41-L53) as a custom-snippet annotation on the ingress for your proxy, but you will still need to define the http-snippet from the ConfigMap to make sure the cache path is valid.

@bodewig

bodewig commented Jul 12, 2018

@JoelSpeed many thanks for this change.

Right now I'm using a fork of our own that includes this PR (among others) with nginx-ingress and Keycloak, and our session info gets big enough to be split into two cookies. When refreshing the token during an /auth request, nginx seems to strip all newly set cookies except the first one, which invalidates the session, as it now consists of two inconsistent cookies.

nginx-ingress basically does

			auth_request_set    $auth_cookie $upstream_http_set_cookie;
			add_header          Set-Cookie $auth_cookie;

which seems to forward only the very first Set-Cookie header.

Have you seen this as well and found a solution for it?

@bcorijn

bcorijn commented Jul 19, 2018

@JoelSpeed Thanks a lot for this PR; it makes this project a lot more usable to run straight in front of the Kubernetes dashboard. My setup runs a Traefik proxy in front of it, with oauth2_proxy handling all the traffic and authenticating against Google.

The first logon works well. After an hour the refresh happens (successful according to the logs) and I get a new cookie back (which my browser seems to set properly). However, the dashboard hits me back with an Unauthorized error. Looking at the Authorization header via the /oauth2/auth endpoint, I see my new cookie value, but the token in the response header is already expired when I decode it, so the passed/returned header does not seem to take the refresh into account.
Anything I can check to debug this?

Edit: Looking through your code, I assume this is because the Google provider does not store a new ID token upon refresh. I could get around this by initialising it as a generic OIDC provider, but then I believe I lose the refresh token. I am happy to PR the changes to the Google provider though; it should be minimal after your groundwork.
Edit 2: Made some changes in a local fork. Storing the new ID token in the session upon refresh in the Google provider makes this work flawlessly!

@JoelSpeed
Contributor Author

I could get around this by initialising it as generic OIDC provider, but then I lose the refreshToken I believe

@bcorijn I don't follow why you would lose the refreshToken if you initialised Google as a generic OIDC provider. My understanding is that it should work with Google's OIDC endpoint without trouble. The generic OIDC provider stores the refresh token as part of the cookie, so it should always be there?

Made some changes in a local fork

Have you got a link for this fork?

@bcorijn

bcorijn commented Aug 2, 2018

@JoelSpeed Initialising a generic OIDC provider requires passing an IssuerURL, but doing so will also use the OIDC auto-discovered login URL instead of any URL you pass yourself (cf. https://github.com/bitly/oauth2_proxy/blob/master/options.go#L152-L161). In Google's case you need to explicitly add a query parameter to your login URL if you want to receive refresh tokens, something that will not be present in the auto-discovered URL.
I described this and its side effects in a bit more detail in my issue #632.

I just pushed my changes to a local fork of your repository; these are the changes that were needed for the Google provider to return the new ID token: bcorijn/oauth2_proxy@661210e
I have been running with these changes for the last 2 weeks (combined with the newer Google endpoints as described in issue #632), and everything works flawlessly to secure our Kubernetes dashboard behind Google OIDC.

@JoelSpeed
Contributor Author

@bcorijn I've implemented a Google connector for a project called Dex which is based on the generic OIDC authentication, uses https://accounts.google.com's autodiscovery mechanism, and works without issue for us. The code path is nearly identical to the one in the Bitly proxy's OIDC connector, so it should work for Google.

What you are missing in terms of refreshing is the prompt=consent query parameter, see: https://github.com/JoelSpeed/dex/blob/c1b2a4f4d7b6ebc46fecfb6c7987e1bab023e336/connector/google/google.go#L123-L125

But you should be able to add offline_access to the scope parameter to get the offline_access part of the login URL sorted.

I've not needed to override the auto-discovered URLs in this implementation and I can assure you it is working.

@bcorijn

bcorijn commented Aug 2, 2018

@JoelSpeed Not sure I'm completely following you now. The auto-discovered URL from Google is perfectly correct for basic authentication, but in the current generic OIDC implementation in this project it won't include the offline_access parameter that Google needs to provide you with refresh tokens (https://developers.google.com/identity/protocols/OAuth2WebServer#offline). The line you linked from the Dex implementation adds this exact parameter, as far as I understand (together with prompt=consent, which serves a different purpose).

The Google provider in this project does handle Google's specific implementation and will refresh its token as expected, but it will not update the original ID token with the refreshed one (as the generic provider does after your PR), which is what I tried to solve in the commit I linked.
Making it possible to set custom parameters on the generic OIDC provider's login URLs would have the same end result, but that feels like putting provider-specific functionality in the generic provider while leaving the Google provider in a broken state (returning expired ID tokens even after a successful refresh).

@zmichael

zmichael commented Oct 4, 2018

Hello @JoelSpeed!
We are trying to create a simple configuration with oauth2_proxy as a proxy for the dashboard. After a successful login to Google, the proxy creates an Authorization: Bearer eyJhbGciOiJSUzI1NiIsImt... header and passes it to the dashboard.
The dashboard says we are authenticated and tries to access resources on the API server.
The API server reports this error:
E1004 10:44:12.649338 1 authentication.go:63] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, invalid bearer token]]

We currently use another OIDC proxy, which works well, but we want to replace it because of the token refresh feature.

Could you please advise what we are missing in our configuration?

@JoelSpeed
Contributor Author

JoelSpeed commented Jan 15, 2019

Now that @pusher have taken over the repository maintenance, I am closing this in favour of oauth2-proxy/oauth2-proxy#14

@JoelSpeed JoelSpeed closed this Jan 15, 2019
Successfully merging this pull request may close these issues.

Add Authorization: Bearer <jwt> as an optional header from OIDC style providers