Ensure old tokens continue to work by removing token_key
#362
base: develop
Conversation
@giovannicimolin @johnraz This ended up being a bigger change than expected, but it both gives the project a performance boost and solves the issue of invalidated tokens. It's my first contribution here, so I'm happy to address any feedback.
@jmsmkn Adding this to my review queue for next week. Thanks for the contribution and for the bug fix! 🚀
@jmsmkn Hey! I reviewed your PR and it's mostly looking good, but I wasn't able to properly test it because there are a few things messed up in the library right now. I'll get back to this and merge it as soon as I can figure out what's wrong.
This would be great to add in, as token invalidation is what's stopping me from upgrading to v5. Is this fix still viable?
Echoing @Fireclunge's comment, the invalidation of existing tokens is a non-starter for us when upgrading, unfortunately.

tl;dr: it seems like I agree with @jmsmkn's assessment. Before #272, this is what `hash_token` looked like:
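Roughly this, reconstructing from the pre-#272 `knox/crypto.py` (a simplified sketch; the real module resolves the hash from the `SECURE_HASH_ALGORITHM` setting rather than hardcoding it):

```python
import binascii
import hashlib

# Stand-in for knox's configured hash; SHA-512 is the default.
hash_func = hashlib.sha512

def hash_token(token: str) -> str:
    # The token is expected to be a hex string: unhexlify() decodes it
    # to raw bytes, and those bytes are what get hashed.
    digest = hash_func()
    digest.update(binascii.unhexlify(token))
    return digest.hexdigest()
```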
The important bit is the `binascii.unhexlify(token)` call: the token is treated as a hex string and decoded to its underlying bytes before hashing. For example:
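```python
import binascii

# The hex string decodes to raw bytes; this is NOT the same as the
# UTF-8 encoding of the string itself.
assert binascii.unhexlify("ab12") == b"\xab\x12"
assert bytes("ab12", "utf-8") == b"ab12"
```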
In #272, the addition of a token prefix resulted in an additional step of ensuring the token string is hex-compatible. This is reasonable, as prefixed tokens like `my_prefix_ab12` (an invented example) are no longer valid hex and would make `unhexlify` raise:
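```python
import binascii

try:
    binascii.unhexlify("my_prefix_ab12")  # '_', 'm', 'p', ... aren't hex digits
except binascii.Error as exc:
    print(exc)  # Non-hexadecimal digit found
```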
However, the actual implementation of `make_hex_compatible` doesn't hex-decode anything: the `hexlify`/`unhexlify` round trip is the identity on bytes, so it simply returns the UTF-8 encoding of the token string:
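Here's the implementation as I read it in `knox/crypto.py` (docstring omitted; treat it as a sketch), with a quick check:

```python
import binascii

def make_hex_compatible(token: str) -> bytes:
    # hexlify() followed by unhexlify() undoes itself, so this is
    # equivalent to bytes(token, 'utf-8').
    return binascii.unhexlify(binascii.hexlify(bytes(token, 'utf-8')))

assert make_hex_compatible("ab12") == b"ab12"  # not b'\xab\x12'!
```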
And with a token prefix present, it does work (no `binascii.Error`), but the result is still just the `bytes()` version of the token str:
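Same invented prefix as before:

```python
import binascii

def make_hex_compatible(token: str) -> bytes:
    return binascii.unhexlify(binascii.hexlify(bytes(token, 'utf-8')))

# No error for the prefixed token, but the output is just its UTF-8 bytes.
assert make_hex_compatible("my_prefix_ab12") == b"my_prefix_ab12"
```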
All that said, hashing the UTF-8 bytes is perfectly fine in isolation: a token created under 5.x always hashes to the same digest. Again, this still works, because verification only needs the digest to be deterministic for a given token string. The breakage is across versions: a pre-5.0 token was hashed from `binascii.unhexlify(token)` (the decoded hex bytes), whereas 5.x hashes `bytes(token, 'utf-8')`, so the digest computed on authentication no longer matches the digest stored when the token was created, and every existing token is rejected after the upgrade.
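To make the mismatch concrete (SHA-512 standing in for the configured hash, token value invented):

```python
import binascii
import hashlib

token = "ab12cd34"  # an old-style, pure-hex token

old_digest = hashlib.sha512(binascii.unhexlify(token)).hexdigest()  # pre-5.0
new_digest = hashlib.sha512(bytes(token, "utf-8")).hexdigest()      # 5.x

assert old_digest != new_digest  # the stored digest never matches again
```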
NOTE: This of course assumes you are using the default settings.

(Sorry for the long reply; here's my proposed solution, which I think could slot nicely into this PR.)

Update `hash_token` to try the old hex-decoding path first and fall back to the current behavior for non-hex (prefixed) tokens:
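Something like the following (my sketch, reusing `hash_func` and `make_hex_compatible` from the snippets above; the exact guard and naming are up for discussion):

```python
import binascii

def hash_token(token: str) -> str:
    digest = hash_func()
    try:
        # Pure-hex tokens (everything issued before 5.0) are hashed from
        # their decoded bytes, exactly as the old implementation did, so
        # their stored digests keep matching.
        token_bytes = binascii.unhexlify(token)
    except binascii.Error:
        # Non-hex tokens (e.g. with a token_prefix) keep the 5.x behavior.
        token_bytes = make_hex_compatible(token)
    digest.update(token_bytes)
    return digest.hexdigest()
```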
Adding this check should be lightweight performance-wise, and it maintains backwards compatibility for existing tokens (while supporting newer tokens that might use the `token_prefix` option). @giovannicimolin let me know what you think; I'm happy to help support this effort in this PR (if @jmsmkn is interested?) or break it out separately. Also happy to help write tests and whatnot.
Great find, @mr-niche! I'm unsure of the best way to proceed with this. Even though I was given the old "PRs welcome" treatment when I first raised the backward compatibility issue, it's been two months since I submitted a PR, and I haven't received any feedback. I spent a weekend working on improving the library, but without input from the maintainers, it's hard to move forward. If you'd like to make a PR to my branch to consolidate our efforts, I'd be happy to merge it. However, we ultimately need some guidance from the maintainers to get this resolved.
Thanks @jmsmkn! I'll give @giovannicimolin some time to weigh in on how to proceed. If we get a thumbs up, I'll go ahead and submit a PR to your branch and we can get this ball rolling again. I'll also include some documentation around "Upgrading from v4 -> v5" so that folks who have raised this concern before (i.e. in #356) can see if the upgrade path is viable for their use case.
@mr-niche Thanks for the great in-depth investigation of the issue! @jmsmkn Can you incorporate @mr-niche's changes into your PR and resolve the PR conflicts? Let's make this the 5.1 version - lots of folks will be happy with the library being backwards compatible.
Thanks @giovannicimolin! @jmsmkn, I'll take a stab at adding this to your PR; I have a little bit of time today.
Thanks @giovannicimolin! I have fixed the conflicts. One question: should we still keep `token_key` around?
@jmsmkn That makes sense, can you keep it around for now? I'll take some time to test it out tomorrow. :)
@giovannicimolin my changes are in this PR (to be merged into @jmsmkn's PR): jmsmkn#1

We need some guidance on one last consideration: if we switch back to the original `hash_token` behavior, I'm not sure what the right thing to do here is. We could either:
Option 2 feels risky. But option 1 is yet another breaking change (albeit a potentially smaller one: if folks have already upgraded to 5.0.*, they might not be as concerned about breaking changes, but I don't necessarily want to make that assumption). Let me know what you think / if you have any other ideas here!
This PR adds a test that ensures old tokens continue to work, by removing `token_key`. See discussion in #358 and #356.

When each token had its own salt, `hash_token` needed to be done per token with `hash_token(token, auth_token.salt)`. `token_key` was introduced in 913336d so that the comparison could then be done on a smaller number of tokens. However, the per-token salt was removed in 51a5204. That means that we can now move `hash_token` out of the filter loop of `authenticate`: as we no longer have to pass `auth_token.salt`, we only pass the `token` sent by the user. That gives us the `digest` straight away, meaning that we can do a direct lookup on `digest` (the primary key, so unique) to find the relevant token.

This also has the benefit of improving performance, as multiple hashes and comparisons no longer need to be made, and it solves the problem of tokens being invalidated between versions 4.2.0 -> 5.0.0. Additionally, `MAXIMUM_TOKEN_PREFIX_LENGTH` is no longer bound by `TOKEN_KEY_LENGTH`, so it could be increased if you wish.
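A rough sketch of the resulting lookup (simplified: knox's real `authenticate_credentials` also checks expiry and raises `AuthenticationFailed`, and the exact code in this PR may differ):

```python
from knox.crypto import hash_token
from knox.models import AuthToken

def authenticate_credentials(token: str):
    # With the per-token salt gone, hash_token(token) yields the digest
    # directly, so a single primary-key lookup replaces the old filter
    # loop that hashed and compared every candidate token.
    try:
        return AuthToken.objects.get(digest=hash_token(token))
    except AuthToken.DoesNotExist:
        return None
```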