Synchronization mode #4
Comments
Any update on this? We're looking for a way to keep all OAuth client registrations in sync between multiple Hydra deployments, each connected to its own DB. These are completely separate Hydra instances backed by their own IdP; the only thing they share is the OAuth clients (a multi-tenant setup with a marketplace for integrations that should be available to all tenants). Ideally, multiple Hydra instances all connected to the same DB (for scaling purposes, zero-downtime deployments, etc.) would be seen as one from maester's perspective, so that when the Hydra deployment for a single tenant runs multiple Hydra instances against the same DB, the API calls from maester to Hydra are made once at the K8s Service level and the Service simply forwards the request to one of the Hydra instances.
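For what it's worth, a minimal sketch of that "one call per Service" idea in Go, assuming a tenant's admin Service named `hydra-admin` in a `tenant-a` namespace exposing Hydra's admin API on port 4445 (all of these names are illustrative assumptions, not anything maester does today); any one pod behind the Service can handle the registration because they all share the DB:

```go
package sync

import (
	"bytes"
	"fmt"
	"net/http"
)

// createClientViaService registers an OAuth2 client once against the tenant's
// Hydra admin Service. kube-proxy forwards the call to any one of the Hydra
// pods; since they all share the same DB, a single call is enough.
// The Service name, namespace, and port below are illustrative assumptions.
func createClientViaService(clientJSON []byte) error {
	url := "http://hydra-admin.tenant-a.svc.cluster.local:4445/clients"
	resp, err := http.Post(url, "application/json", bytes.NewReader(clientJSON))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// 201 on creation; 409 means the client already exists on this instance.
	if resp.StatusCode != http.StatusCreated && resp.StatusCode != http.StatusConflict {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return nil
}
```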
I believe that Kyma has pushed this back a bit as they are working on other issues right now. If you have some time to dedicate to this issue, I think that could be a good addition. I think we could sync this quite easily by adding a label selector that works across namespaces and executing the necessary queries as pointed out above. Maybe instead of a CronJob we could simply listen for updates (e.g. a new deployment or a pod restart) of the Hydra containers?
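A rough sketch of the "listen instead of CronJob" idea, using a client-go shared informer across all namespaces; the `app.kubernetes.io/name=hydra` label selector and the re-sync hook are assumptions for illustration, not the controller's actual wiring:

```go
package main

import (
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Shared informer factory scoped to all namespaces, filtered by an
	// assumed label on the Hydra deployments.
	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 30*time.Second,
		informers.WithTweakListOptions(func(o *metav1.ListOptions) {
			o.LabelSelector = "app.kubernetes.io/name=hydra"
		}),
	)

	factory.Apps().V1().Deployments().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			d := obj.(*appsv1.Deployment)
			// New Hydra deployment appeared: trigger a client re-sync here.
			fmt.Printf("hydra deployment added: %s/%s\n", d.Namespace, d.Name)
		},
		UpdateFunc: func(_, newObj interface{}) {
			d := newObj.(*appsv1.Deployment)
			// Rollout/restart detected: trigger a client re-sync here.
			fmt.Printf("hydra deployment updated: %s/%s\n", d.Namespace, d.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block forever; events drive the re-sync
}
```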
Haven't done any work developing stuff for K8s, just used K8s to deploy stuff, but I'll try to carve out some time and see what I can whip up :-)
Yeah, AFAIK k8s has ways to listen for such events, so a CronJob shouldn't be needed.
Awesome! And if that is unfruitful or takes too long, you can always resort to a simple Go program that does that for you.
A CronJob was something we considered at the beginning; on second thought we came up with the k8s Endpoints approach, which returns all pod IPs for a given k8s Service. So if you have, for example, autoscaling of Hydra, it will return a list of IPs of the Hydra instances. But... we did not plan to support synchronization between totally different Hydra instances (different k8s Services). K8s Endpoints could still be used here, but the controller would need configuration listing the Hydra Services deployed inside the cluster (in different namespaces / with different k8s Services); for the registered Services it could fetch the Endpoints and issue the synchronization queries.
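For reference, resolving the pod IPs behind a Service via its Endpoints object could look roughly like this with client-go; the function name and parameters are placeholders, not existing maester code:

```go
package sync

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listHydraEndpoints resolves the pod IPs behind a Hydra admin Service by
// reading the Endpoints object that k8s keeps in sync with the Service.
// When the Hydra deployment scales up or down, this list changes accordingly.
func listHydraEndpoints(cs kubernetes.Interface, namespace, serviceName string) ([]string, error) {
	ep, err := cs.CoreV1().Endpoints(namespace).Get(context.TODO(), serviceName, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	var urls []string
	for _, subset := range ep.Subsets {
		for _, addr := range subset.Addresses {
			for _, port := range subset.Ports {
				urls = append(urls, fmt.Sprintf("http://%s:%d", addr.IP, port.Port))
			}
		}
	}
	return urls, nil
}
```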
Not 100% sure what exactly you're suggesting, and as said I'm not yet familiar with developing K8s controllers (or the lingo), but I think it should be something like this:
Now, I don't know (yet) if the watching for new Services being created can be made generic so that it can watch either for Services or for Pods based on some config, but if that's possible, I think we could have a single implementation that serves both the current setup and the multi-tenant one.
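Something like the following could make the watch target configurable; the `WatchTarget` knob and the label selector are assumptions, only meant to show that the same informer machinery covers both cases:

```go
package sync

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// WatchTarget selects what the controller watches: "services" would cover the
// multi-tenant case (one Service per Hydra deployment), "pods" the
// single-deployment case. Both the type and the values are hypothetical.
type WatchTarget string

const (
	WatchServices WatchTarget = "services"
	WatchPods     WatchTarget = "pods"
)

// newHydraInformer returns an informer for the chosen resource, filtered by an
// assumed "app.kubernetes.io/name=hydra" label across all namespaces.
// The caller still has to start the factory and register event handlers.
func newHydraInformer(cs kubernetes.Interface, target WatchTarget) cache.SharedIndexInformer {
	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 30*time.Second,
		informers.WithTweakListOptions(func(o *metav1.ListOptions) {
			o.LabelSelector = "app.kubernetes.io/name=hydra"
		}),
	)
	if target == WatchServices {
		return factory.Core().V1().Services().Informer()
	}
	return factory.Core().V1().Pods().Informer()
}
```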
Ah @piotrmsc - I remember, your use case was to support multiple in-memory Hydras that are logically the same Authorization Server, right?
Sorry for the late response... TBH we are offering in-memory Hydra as the default (playground) config, with persistence coming soon in the production profile. We are thinking about namespace isolation with ORY, so one Hydra per namespace, but that is a thought for the future. Using k8s Endpoints was meant to target multiple instances of Hydra cluster-wide: in k8s, if you create a Service for your app and scale the deployment, the Endpoints object for this Service is updated with the IP address of each new instance. For the multi-tenant case, the controller should check whether the OAuth2 client CR has info about a particular Hydra instance and make a call to all Hydra instances of that Service in the given namespace.
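A hedged sketch of that fan-out, reusing the `listHydraEndpoints` helper sketched above; the `hydraNamespace`/`hydraService` parameters stand in for whatever instance information the OAuth2 client CR would actually carry:

```go
package sync

import (
	"bytes"
	"fmt"
	"net/http"

	"k8s.io/client-go/kubernetes"
)

// syncClientToAllInstances fans a client registration out to every Hydra pod
// behind the Service referenced by the OAuth2 client CR. This is only a sketch
// of the fan-out idea, not the controller's real reconcile logic.
func syncClientToAllInstances(cs kubernetes.Interface, hydraNamespace, hydraService string, clientJSON []byte) error {
	urls, err := listHydraEndpoints(cs, hydraNamespace, hydraService)
	if err != nil {
		return err
	}
	for _, base := range urls {
		// Register the client on each instance; 409 means it already exists there.
		resp, err := http.Post(base+"/clients", "application/json", bytes.NewReader(clientJSON))
		if err != nil {
			return fmt.Errorf("posting client to %s: %w", base, err)
		}
		resp.Body.Close()
		if resp.StatusCode != http.StatusCreated && resp.StatusCode != http.StatusConflict {
			return fmt.Errorf("unexpected status from %s: %s", base, resp.Status)
		}
	}
	return nil
}
```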
We've done a couple of iterations on other projects and believe that we'll switch to
I am closing this issue as it has not received any engagement from the community or maintainers in a long time. That does not imply that the issue has no merit. If you feel strongly about this issue
We are cleaning up issues every now and then, primarily to keep the 4000+ issues in our backlog in check and to prevent maintainer burnout. Burnout in open source maintainership is a widespread and serious issue. It can lead to severe personal and health issues as well as enabling catastrophic attack vectors. Thank you to anyone who participated in the issue! 🙏✌️
TODO: