Support for TokenIntrospection and UserInfo cache #20209
Conversation
Force-pushed from b94641b to d811c6a
@stuartwdouglas @pedroigor it would be good if this PR could make it to …
@sberyozkin I'm wondering if it makes sense to use a bounded … It should help to make things simpler and remove the overhead of having a background task?
Hi Pedro @pedroigor Sure, that would be neat, but we'd need to sync around … FYI, running the timer is optional; if it is not running, then when adding an entry to a full map, one of the oldest entries will be removed. The timer would only help with removing the stale entries proactively, say every few minutes, but I guess if the max size is, for example, 500-1000, then it is probably not even worth starting it. I reckon users would plug in custom cache providers in the really demanding productions...
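To make the design being discussed concrete, here is a minimal sketch of a size-bounded map cache with an optional clean-up timer, along the lines described above. All class and method names are illustrative assumptions, not the PR's actual code:

```java
import java.util.Comparator;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only - not the PR's implementation.
public class BoundedTokenCache {

    static final class Entry {
        final String value;
        final long createdAt = System.currentTimeMillis();

        Entry(String value) {
            this.value = value;
        }
    }

    private final ConcurrentHashMap<String, Entry> map = new ConcurrentHashMap<>();
    private final int maxSize;
    private final long timeToLiveMs;

    BoundedTokenCache(int maxSize, long timeToLiveMs, long cleanUpIntervalMs) {
        this.maxSize = maxSize;
        this.timeToLiveMs = timeToLiveMs;
        if (cleanUpIntervalMs > 0) {
            // The timer is optional: it only removes stale entries proactively.
            ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
            timer.scheduleAtFixedRate(this::removeStaleEntries,
                    cleanUpIntervalMs, cleanUpIntervalMs, TimeUnit.MILLISECONDS);
        }
    }

    void put(String token, String value) {
        if (map.size() >= maxSize) {
            // Map is full: evict the oldest entry. An O(n) scan is tolerable
            // for a max size in the 500-1000 range mentioned above.
            map.entrySet().stream()
                    .min(Comparator.comparingLong(e -> e.getValue().createdAt))
                    .ifPresent(oldest -> map.remove(oldest.getKey()));
        }
        // Note: the size check and the put are not atomic, so the bound can be
        // briefly exceeded under concurrency (see the review comment below
        // about compareAndSet).
        map.put(token, new Entry(value));
    }

    String get(String token) {
        Entry e = map.get(token);
        if (e == null || System.currentTimeMillis() - e.createdAt > timeToLiveMs) {
            return null; // missing or stale
        }
        return e.value;
    }

    private void removeStaleEntries() {
        long now = System.currentTimeMillis();
        map.values().removeIf(e -> now - e.createdAt > timeToLiveMs);
    }
}
```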
@stuartwdouglas Do you think the default cache has to be reworked? See the comments above. I like Pedro's idea and would not mind exploring more options, but I'm not sure at the moment it is possible. The default one is meant more to help users get started with experimenting with the cache fast, as opposed to providing a high-quality implementation which will handle a massive amount of concurrent requests; I reckon users would use Infinispan, Caffeine, etc. if they really need to super scale.
Making sure it is effectively sorted with every addition, such that we can just remove the 1st entry when the map is full, can be useful, but I'm not sure it would not have side-effects - too many sorting attempts in a highly concurrent application...
I'm not pushing for it. It would be nice to hear what Stuart thinks.
Oops, looks like I forgot to hit submit on my review.
There are some issues with the implementation, but I also question how useful this is in a Kubernetes environment, when there is no guarantee you will hit the same nodes each time. Basically the more you scale up, the less useful this becomes (e.g. if you have 2 nodes then it will work 50% of the time, if you have 100 it will be 1%).
[Review threads on: io/quarkus/oidc/TokenIntrospectionCache.java, io/quarkus/oidc/UserInfoCache.java, io/quarkus/oidc/runtime/DefaultTokenIntrospectionUserInfoCache.java, io/quarkus/oidc/deployment/OidcBuildStep.java, io/quarkus/oidc/runtime/DefaultTenantConfigResolver.java - all resolved]
Hi Stuart, @stuartwdouglas, thanks for the review. I'll give it a try to make this default implementation a bit more robust. I was wondering if it was simpler to drop it totally and just let users register custom ones - but maybe it is worth trying to make it better; it will help in the not too demanding cases...
@stuartwdouglas @pedroigor I'll push to 2.4.x as I won't have time to complete it for …
Force-pushed from d811c6a to d05abe0
Force-pushed from d05abe0 to 494da57
@stuartwdouglas I've tried to address all the change requests; have a look please.
Force-pushed from 494da57 to f35be09
The operator-update-based approach does not provide any real guarantees, as multiple threads can be incrementing and decrementing at the same time; compareAndSet must be used to guarantee sizing. In addition, this removes the attempt to invalidate an entry if the cache is full: if the cache fills up, that attempt would result in every request iterating over every entry, which has pretty horrible performance characteristics.
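For readers following along, this is the kind of pattern being asked for - a hedged sketch, not the PR's code. A check-then-increment on an AtomicInteger is not atomic, so two threads can both observe a free slot and both insert; a compareAndSet loop makes the capacity check and the slot claim a single atomic step:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of CAS-based size accounting for a bounded cache.
public class CacheSizeGuard {

    private final AtomicInteger size = new AtomicInteger();
    private final int maxSize;

    public CacheSizeGuard(int maxSize) {
        this.maxSize = maxSize;
    }

    /** Returns true if a slot was reserved; only then may the caller insert. */
    public boolean tryReserveSlot() {
        for (;;) {
            int current = size.get();
            if (current >= maxSize) {
                return false; // cache is full; evict before inserting
            }
            if (size.compareAndSet(current, current + 1)) {
                return true; // capacity check and claim happened atomically
            }
            // CAS lost a race with another thread; re-read the size and retry.
        }
    }

    /** Call after removing an entry to release its slot. */
    public void releaseSlot() {
        size.decrementAndGet();
    }
}
```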
@sberyozkin I pushed some changes that fix some issues; however, I think we have missed the window for 2.3.0, so we will likely need to add back some of these methods with all the different context classes and deprecate them all. @geoand I am guessing this is too late for 2.3?
You folks will need to decide within the next hour or so if this is going to be backported :)
I think back-port it if possible, but if it is going to be a pain we can always go down the deprecation route.
Okay. Go ahead and merge then and I'll cherry-pick it to my backports branch.
Just as an FYI, I am going to finish the backports once #20103 is in.
Fixes #12800.
The problem of performance being possibly affected by up to 2 remote calls per incoming access token has been known for a while, and several users are waiting for a fix; in some cases SLAs can be affected. The two calls are: one for the access token introspection (for an opaque bearer access token, or for an access token returned with the code flow response if it is configured to be the source of roles with non-Keycloak providers), plus a UserInfo call if it is required, for all the providers. I believe @stianst and @pedroigor have also mentioned before the possibility of caching the token introspection results for a short configurable period of time.
So this PR does the following:
- `TokenIntrospectionCache` and `UserInfoCache` interfaces are introduced.
- `TokenIntrospection` and `UserInfo` are updated so that custom cache implementations are able to recreate them either from a String or a `javax.json.JsonObject`.
- The way `UserInfo` is prepared changes: earlier, a Vertx `JsonObject` was created first and then later converted to a `javax.json.JsonObject`; now it is the other way around, and the Vertx `JsonObject` is only created if `UserInfo` is the source of roles, which is rare enough. So it is a bit more effective now and fits better with the new interfaces.
- `OidcIdentityProvider` is updated to check `TokenIntrospectionCache` and `UserInfoCache` and to try to add a new entry, but only if the current tenant allows caching.
- A default implementation of `TokenIntrospectionCache` and `UserInfoCache` is provided. In the vast majority of cases it will be the same token which is introspected and/or used to get `UserInfo`, so this cache uses a single cache entry to hold both objects (one of them can be null), enforces a max number of entries, and optionally starts a clean-up timer. It is a fairly basic implementation which might be tweaked a bit further, but it should be good enough to handle the cases where up to, say, 3K requests are coming nearly at the same time. Users can register custom implementations of course (see the sketch after this list).
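As a closing illustration of the custom cache extension point, here is a minimal sketch of a provider registered as a CDI bean. The `TokenIntrospectionCache` method signatures, the `OidcRequestContext` parameter, and the package names are assumptions based on this PR's interfaces; as the review above notes, these signatures were still changing, so treat this purely as a sketch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.enterprise.context.ApplicationScoped;

import io.quarkus.oidc.OidcRequestContext;
import io.quarkus.oidc.OidcTenantConfig;
import io.quarkus.oidc.TokenIntrospection;
import io.quarkus.oidc.TokenIntrospectionCache;
import io.smallrye.mutiny.Uni;

// Hypothetical custom cache provider; signatures and packages are assumed.
@ApplicationScoped
public class MyIntrospectionCache implements TokenIntrospectionCache {

    private final Map<String, TokenIntrospection> cache = new ConcurrentHashMap<>();

    @Override
    public Uni<Void> addIntrospection(String token, TokenIntrospection introspection,
            OidcTenantConfig oidcConfig, OidcRequestContext<Void> requestContext) {
        // A real provider (Caffeine, Infinispan, ...) would enforce a size
        // bound and a time-to-live instead of growing without limit.
        cache.put(token, introspection);
        return Uni.createFrom().voidItem();
    }

    @Override
    public Uni<TokenIntrospection> getIntrospection(String token,
            OidcTenantConfig oidcConfig, OidcRequestContext<TokenIntrospection> requestContext) {
        // Returning a null item signals a cache miss to the caller.
        return Uni.createFrom().item(cache.get(token));
    }
}
```

A production provider would typically delegate to Caffeine or Infinispan, as suggested earlier in the thread.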