
Synchronization mode #4

Closed
piotrmsc opened this issue Jul 31, 2019 · 10 comments

piotrmsc (Collaborator) commented Jul 31, 2019

TODO:

  • Read the ID, ClientID, and Secret from the Secret
  • Run a cron job that iterates over all CRs and tries to create them in Hydra; a 409 status code in the response is treated as OK (continue) — see the sketch below
  • Use Kubernetes Endpoints to discover all Hydra instances
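
A minimal sketch of the "create and treat 409 as OK" call, assuming plain net/http against Hydra's admin API; the admin URL, port, and client payload are placeholder values, and the /clients path may differ depending on the Hydra version and configuration:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// ensureClient creates an OAuth2 client via Hydra's admin API.
// A 409 (client already exists) is treated as success so the sync loop can continue.
func ensureClient(adminURL string, client map[string]interface{}) error {
	body, err := json.Marshal(client)
	if err != nil {
		return err
	}

	resp, err := http.Post(adminURL+"/clients", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	switch resp.StatusCode {
	case http.StatusCreated, http.StatusConflict: // 201 or 409: the client exists either way
		return nil
	default:
		return fmt.Errorf("unexpected status %d from %s", resp.StatusCode, adminURL)
	}
}

func main() {
	// Placeholder values: in the controller these would come from the OAuth2Client CR
	// and the Secret holding the client ID and secret.
	client := map[string]interface{}{
		"client_id":     "example-client",
		"client_secret": "example-secret",
		"grant_types":   []string{"client_credentials"},
	}
	if err := ensureClient("http://hydra-admin:4445", client); err != nil {
		fmt.Println("sync failed:", err)
	}
}
```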
@p-bakker

Any update on this?

We're looking for a way to keep all OAuth client registrations in sync between multiple Hydra deployments, each connected to its own DB.

These are all completely separate Hydra instances backed by their own IdPs; the only thing they share is the OAuth clients (a multi-tenant setup with a marketplace for integrations that are to be available to all tenants).

Ideally, multiple Hydra instances connected to the same DB (for scaling / zero-downtime deployments, etc.) would be seen as a single instance from Maester's perspective: when the Hydra deployment for a single tenant runs multiple instances backed by the same DB, the API calls from Maester to Hydra would be made once, at the K8s Service level, and the Service would just forward each request to one of the Hydra instances.

aeneasr (Member) commented Nov 21, 2019

I believe that Kyma has pushed this back a bit, as they are working on other issues right now. If you have some time to dedicate to this issue, I think that could be a good addition.

I think we could sync this quite easily by adding a label selector that works across namespaces and executing the necessary queries, as pointed out above. Maybe instead of a CronJob we could simply listen for updates (e.g. a new Deployment or a restarted Pod) to the Hydra containers?
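
A rough sketch of the "listen instead of CronJob" idea, assuming client-go shared informers; the app.kubernetes.io/name=hydra label selector is a placeholder, and the re-sync itself is left as a stub:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Watch Hydra pods in all namespaces, selected by a (placeholder) label.
	factory := informers.NewSharedInformerFactoryWithOptions(clientset, 30*time.Second,
		informers.WithTweakListOptions(func(o *metav1.ListOptions) {
			o.LabelSelector = "app.kubernetes.io/name=hydra"
		}))

	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		// A freshly deployed or restarted Hydra pod shows up as an Add event.
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			fmt.Printf("hydra pod %s/%s appeared, re-syncing clients\n", pod.Namespace, pod.Name)
			// resyncAllClients(pod) // stub: push all OAuth2Client CRs to this instance
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	// Block forever; a real controller would tie this to signal handling.
	select {}
}
```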

p-bakker commented Nov 21, 2019

Haven't done any work developing stuff for K8s, just used K8s to deploy stuff, but I'll try to carve off some time and see what I can whip up :-)

Maybe instead of a CronJob we could simply listen for updates (e.g. a new Deployment or a restarted Pod) to the Hydra containers?

Yeah, AFAIK k8s has ways to listen for these events, so a CronJob shouldn't be needed.

aeneasr (Member) commented Nov 21, 2019

Awesome! And, if that turns out to be unfruitful or takes too long, you can always resort to a simple Go program that does that for you.

piotrmsc (Collaborator, Author) commented Nov 21, 2019

A CronJob was what we considered at the beginning; on second thought we came up with the k8s Endpoints approach, which returns the pod IPs behind a given k8s Service. So if you, for example, autoscale Hydra, it will return the list of IPs of the Hydra instances. But... we did not plan to support synchronization between totally different Hydra instances (different k8s Services). The Endpoints approach could still be used here, but the controller would need a configuration of the Hydra Services deployed inside the cluster, in different namespaces / with different k8s Services, and for the registered Services it could then fetch the Endpoints and make the synchronization calls.
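
A sketch of that Endpoints lookup, assuming client-go; the "default" namespace and "hydra-admin" Service name are placeholders:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// hydraAdminURLs returns one base URL per Hydra pod backing the given Service by
// reading the Endpoints object that Kubernetes keeps in sync with the Deployment.
func hydraAdminURLs(ctx context.Context, cs kubernetes.Interface, namespace, service string) ([]string, error) {
	eps, err := cs.CoreV1().Endpoints(namespace).Get(ctx, service, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}

	var urls []string
	for _, subset := range eps.Subsets {
		for _, addr := range subset.Addresses {
			for _, port := range subset.Ports {
				urls = append(urls, fmt.Sprintf("http://%s:%d", addr.IP, port.Port))
			}
		}
	}
	return urls, nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Placeholder namespace and Service name for the Hydra admin Service.
	urls, err := hydraAdminURLs(context.Background(), cs, "default", "hydra-admin")
	if err != nil {
		panic(err)
	}
	fmt.Println("hydra instances:", urls)
}
```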

Thoughts @p-bakker @aeneasr

@p-bakker

The Endpoints approach could still be used here, but the controller would need a configuration of the Hydra Services deployed inside the cluster, in different namespaces / with different k8s Services, and for the registered Services it could then fetch the Endpoints and make the synchronization calls.

Not 100% sure exactly what you're suggesting, and as said I'm not yet familiar with developing K8s controllers (or their lingo), but I think it should be something like this:

  • The controller watches for new Services being created that match the configured labels, and when a new Service is noticed, it syncs all OAuth client CRs to it.
  • The controller also watches for changes in the CRs, and when it notices changes, it finds all Services that match the configured labels and calls Hydra's /clients API endpoint through the Service (the actual path from the Service URL to Hydra's /clients endpoint is configurable).

Now, I don't know (yet) whether the watching for new Services being created can be made generic, so that it can watch either for Services or for Pods based on some config, but if that's possible, I think we could have a single implementation that serves both the current setup and the multi-tenant one (rough sketch below).
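
A rough sketch of that flow, assuming client-go; the label selector, the "admin" port-name convention, and the configurable /clients path are all placeholders rather than anything hydra-maester actually ships:

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"net/http"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// syncClientToServices finds all Services matching the configured label selector and
// pushes the given client JSON to Hydra through each Service's cluster DNS name,
// letting the Service load-balance to one of the Hydra instances behind it.
func syncClientToServices(ctx context.Context, cs kubernetes.Interface, selector, clientsPath string, clientJSON []byte) error {
	svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}

	for _, svc := range svcs.Items {
		for _, port := range svc.Spec.Ports {
			if port.Name != "admin" { // placeholder convention for the admin port
				continue
			}
			url := fmt.Sprintf("http://%s.%s.svc:%d%s", svc.Name, svc.Namespace, port.Port, clientsPath)
			resp, err := http.Post(url, "application/json", bytes.NewReader(clientJSON))
			if err != nil {
				return err
			}
			resp.Body.Close()
			if resp.StatusCode != http.StatusCreated && resp.StatusCode != http.StatusConflict {
				return fmt.Errorf("unexpected status %d from %s", resp.StatusCode, url)
			}
		}
	}
	return nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Placeholder selector, path, and client payload; all of these would be configuration.
	clientJSON := []byte(`{"client_id":"example-client","grant_types":["client_credentials"]}`)
	if err := syncClientToServices(context.Background(), cs, "app.kubernetes.io/name=hydra", "/clients", clientJSON); err != nil {
		fmt.Println("sync failed:", err)
	}
}
```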

aeneasr (Member) commented Nov 21, 2019

Ah @piotrmsc - I remember: your use case was to support multiple in-memory Hydras that are logically the same Authorization Server, right?

piotrmsc (Collaborator, Author) commented Jan 10, 2020

Sorry for the late response... TBH we are offering in-memory Hydra as the default (playground) config, with persistence coming soon in the production profile.

We are thinking about namespace isolation with Ory, so Hydra per namespace, but that is a thought for the future.

Using k8s Endpoints was meant to target multiple instances of Hydra cluster-wide. In k8s, if you create a Service for your app and scale the deployment, the Endpoints object for this Service is updated with the IP address of each new instance. For the multi-tenant case, the controller should check whether the OAuth2 client CR has information about a particular Hydra instance and make a call to all Hydra instances of that Service in the given namespace.
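
A sketch of that fan-out, with a hypothetical HydraRef field standing in for the "information about a particular Hydra instance" on the OAuth2 client CR (it is not part of the actual CRD), combined with the Endpoints lookup from above:

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"net/http"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// HydraRef is a hypothetical addition to the OAuth2 client CR spec that points
// the controller at one specific Hydra admin Service in one namespace.
type HydraRef struct {
	Namespace string
	Service   string
	Port      int32
}

// fanOutCreate reads the Endpoints of the referenced Service and creates the client
// on every Hydra instance behind it, treating 409 (already exists) as success.
func fanOutCreate(ctx context.Context, cs kubernetes.Interface, ref HydraRef, clientJSON []byte) error {
	eps, err := cs.CoreV1().Endpoints(ref.Namespace).Get(ctx, ref.Service, metav1.GetOptions{})
	if err != nil {
		return err
	}

	for _, subset := range eps.Subsets {
		for _, addr := range subset.Addresses {
			url := fmt.Sprintf("http://%s:%d/clients", addr.IP, ref.Port)
			resp, err := http.Post(url, "application/json", bytes.NewReader(clientJSON))
			if err != nil {
				return err
			}
			resp.Body.Close()
			if resp.StatusCode != http.StatusCreated && resp.StatusCode != http.StatusConflict {
				return fmt.Errorf("unexpected status %d from %s", resp.StatusCode, url)
			}
		}
	}
	return nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Placeholder reference and payload; in the controller both would come from the CR.
	ref := HydraRef{Namespace: "tenant-a", Service: "hydra-admin", Port: 4445}
	clientJSON := []byte(`{"client_id":"example-client","grant_types":["client_credentials"]}`)
	if err := fanOutCreate(context.Background(), cs, ref, clientJSON); err != nil {
		fmt.Println("fan-out failed:", err)
	}
}
```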

aeneasr (Member) commented Jan 15, 2020

We've done a couple of iterations on other projects and believe that we'll switch to gobuffalo/pop as the DBAL across all projects, which allows us to easily adopt SQLite as well. I'm not sure how well that scales in terms of write access (the file is probably locked? not sure), but it will add some basic persistence in "playground" scenarios.

aeneasr (Member) commented Sep 5, 2021

I am closing this issue as it has not received any engagement from the community or maintainers in a long time. That does not imply that the issue has no merit. If you feel strongly about this issue:

  • open a PR referencing and resolving the issue;
  • leave a comment on it and discuss ideas how you could contribute towards resolving it;
  • open a new issue with updated details and a plan on resolving the issue.

We are cleaning up issues every now and then, primarily to keep the 4000+ issues in our backlog in check and to prevent maintainer burnout. Burnout in open source maintainership is a widespread and serious issue. It can lead to severe personal and health issues as well as enabling catastrophic attack vectors.

Thank you to anyone who participated in the issue! 🙏✌️

aeneasr closed this as completed Sep 5, 2021