Support for Multiple Nginx Ingress resources in a single rollout #2001
Comments
I'll add here that we have forked the project and are actively working on adding this feature for our own use; if we are happy with it, we would like to have it merged back into the main project. Also, what counts as "Organizations below are officially using Argo Rollouts" (https://github.com/argoproj/argo-rollouts/blob/master/USERS.md)? We do not have Argo out in the wild yet, but we are actively working towards that end.
Modify Nginx SetWeight: supports multiple stable ingresses. Addresses argoproj#2001
@zachaller provided the following workaround in Slack:

> As a workaround for this, you could have one Ingress resource that is managed by Rollouts, then have multiple other Ingress resources act as proxies to the Rollouts-managed one. You would need to use a Kubernetes ExternalName Service to point to a domain that resolves to your Ingress managed by Rollouts. Something kind of like https://www.elvinefendi.com/2018/08/08/ingress-nginx-proxypass-to-external-upstream.html, but with a domain that resolves internally to the Rollouts-managed Ingress. This is a bit more involved, of course, but could possibly unblock you today if that is a requirement.

I am trying to decide between compiling from a fork that includes rallyhealth#1 or trying the workaround from @zachaller. The determining factor will be when that PR can be merged into master (I am willing to use a fork for some time, but not forever). How do I get this issue prioritized?
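The workaround described above can be sketched as a pair of manifests: an `ExternalName` Service that resolves to an internal domain pointing at the Rollouts-managed Ingress, and a proxy Ingress that routes to it. All names, hosts, and the ingress class below are hypothetical, and the `upstream-vhost` annotation assumes the ingress-nginx controller; treat this as a sketch of the idea, not a tested configuration.

```yaml
# Hypothetical sketch of the Slack workaround. An ExternalName Service
# points at an internal DNS name that resolves to the Rollouts-managed
# Ingress; a proxy Ingress (on a different controller) forwards to it.
apiVersion: v1
kind: Service
metadata:
  name: rollouts-ingress-proxy      # hypothetical name
spec:
  type: ExternalName
  externalName: canary.internal.example.com  # resolves to the rollouts-managed ingress
  ports:
    - port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-proxy-ingress      # hypothetical name
  annotations:
    # Rewrite the Host header so the upstream ingress matches its own rules
    nginx.ingress.kubernetes.io/upstream-vhost: canary.internal.example.com
spec:
  ingressClassName: nginx-external  # hypothetical second controller
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rollouts-ingress-proxy
                port:
                  number: 80
```

As the thread notes, this unblocks multiple entry points today, at the cost of maintaining an extra routing hop per additional Ingress.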
This issue is stale because it has been open 60 days with no activity.
@zachaller Unfortunately, my work has blocked the cloud-native Slack workspace (working to get that undone), so I did not see the workaround suggested in Slack. How much appetite is there for pulling this fork into the upstream project? rallyhealth#1
I am not 100% opposed to it, but I do have some reservations about adding this to Rollouts without fully understanding why having a setup outside of Rollouts does not solve the issue with pretty low implementation overhead. The idea is something like the workaround above. I am not 100% sure about this idea, but I would like to explore it and fully understand your use case.

This would allow you to have pretty much any config you want; within the proxies you could do different routing based on paths, etc. It could also be worth an in-person conversation at one of the contributor meetings, or on Slack, if you want to talk more about this. @ssanders1449 are you also still interested in this?
While I do understand that there is value to the Argo project in keeping this concern out of the codebase, I have a couple of reservations with this approach. First, building multiple routing layers adds complexity to our infrastructure as code, which creates opportunities for bugs and misconfiguration/human error (e.g. pointing to the wrong Nginx controller). This complexity can be avoided by providing simple support for multiple controllers per service and applying the config across all of them. Second, it breaks some security requirements our company has as a health care entity: we need to keep internal- and external-facing traffic completely separate at every layer outside the pods themselves. I would be happy to join a contributor conversation. Where can I find a schedule for that? I'll be opening a PR here shortly, and hopefully that can help drive more conversation as well. It will be more or less identical to what I did in rallyhealth#1. FWIW, this code is currently live in production at massive scale for Optum.
PR made: #2467
As an FYI, the community meeting schedule can be found at https://calendar.google.com/calendar/u/0/[email protected]&pli=1; the Argo Contributor Experience Office Hours is the one you would want to attend.
Thanks!
We had the exact same reasons for wanting this feature. We have a product with multiple Ingresses handling different URL paths, each with its own security requirements. Maintaining a 'second hop' Ingress would be a nightmare. I fully support this feature.
I have updated the original proposal above since I realized it was grossly out of date. I originally made this feature request well before I had the understanding to articulate certain thoughts around it. |
This issue is stale because it has been open 60 days with no activity.
Will be looking at this soon.
This was implemented in 1.5.
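For readers landing here later: per the v1.5 release noted above, the Nginx traffic router gained a plural `stableIngresses` field alongside the original singular `stableIngress`. The following is a minimal sketch of how a Rollout might use it; the resource and service names are hypothetical, and you should confirm the exact field against the Argo Rollouts Nginx traffic-routing docs for your version.

```yaml
# Sketch (assuming Argo Rollouts >= 1.5): one Rollout driving canary
# weights across multiple stable Nginx Ingress resources. The controller
# manages a canary twin for each listed Ingress. Names are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-app
spec:
  strategy:
    canary:
      canaryService: my-app-canary
      stableService: my-app-stable
      trafficRouting:
        nginx:
          stableIngresses:            # plural form added in v1.5
            - my-app-external-ingress
            - my-app-internal-ingress
      steps:
        - setWeight: 20
        - pause: {}
```

This removes the need for the proxy-Ingress workaround discussed earlier in the thread, since each Ingress keeps its own network rules while sharing one canary weight.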
Summary
Argo Rollouts lacks a feature brought up in issue #1708 that our org (Rally Health) also needs. There are a few key use cases that require a single Rollout resource to manage multiple Nginx Ingress resources that have different network rules but resolve to the same service managed by the Rollout.
Broadly speaking, allowing n Nginx Ingress controllers per rollout (where n > 1) allows for separation of traffic at the network level.
Since Nginx is a load balancer that accepts traffic from outside the Kubernetes cluster and load-balances it to a group of pods, there are specific use cases for having more than one per pod group.
Use Case
Motivation for feature request
Rally Health wants to use Argo for managing Canary deploys.
We are currently in the process of building an MVP and have already built out a basic POC that we are happy with.

With our existing infrastructure, the ability to have a single Rollout resource describe multiple Nginx Ingress resources is a go/no-go point. We need to use Rollouts as a way to manage multiple paths of traffic via separate Nginx Ingress resources that we reason about differently at the network level, which ultimately resolve to the same service(s). This allows for a zero-trust pattern of network architecture via separation of network behaviors at each layer of our stack that extends past the Nginx Ingress.

Update: We have forked the project here, but we do not want to maintain the fork indefinitely; we would rather support the main project.
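The multiple-Ingress topology described above can be sketched as two Ingress resources on separate controllers (internal vs. external traffic) that both resolve to the same Rollouts-managed stable Service. All names, hosts, and ingress classes below are hypothetical illustrations, not configuration from the original request.

```yaml
# Hypothetical sketch of the use case: internal and external traffic kept
# separate at the Ingress layer, both terminating at the same stable Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-external-ingress
spec:
  ingressClassName: nginx-external   # public-facing controller
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-stable
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-internal-ingress
spec:
  ingressClassName: nginx-internal   # internal-only controller
  rules:
    - host: app.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-stable
                port:
                  number: 80
```

For a canary rollout to shift weight correctly on both paths, the controller would need to manage canary counterparts of both Ingresses, which is exactly what this feature request asks for.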
visual example of configuration and network pathing
Note: I have updated this feature request/proposal (12/20/22) since I originally created it (4/26/22), before I had enough understanding to adequately describe certain components, value propositions, etc.
Message from the maintainers:
Impacted by this bug? Give it a 👍. We prioritize the issues with the most 👍.