
Support for Multiple Nginx Ingress resources in a single rollout #2001

Closed
tperdue321 opened this issue Apr 26, 2022 · 14 comments
Labels
enhancement New feature or request

Comments

@tperdue321
Contributor

tperdue321 commented Apr 26, 2022

Summary

Argo Rollouts lacks a feature, previously raised in issue #1708, that our org (Rally Health) also needs. There are a few key use cases that require a single Rollout resource to manage multiple Nginx Ingress resources that have different network rules but resolve to the same service managed by the Rollout.

Broadly speaking, allowing n Nginx Ingress resources per rollout (where n > 1) allows for separation of traffic at the network level.
Since Nginx is a load balancer that accepts traffic from outside the Kubernetes cluster and load balances it to a group of pods, there are some specific use cases for having more than one per pod group.

Use Case

  1. Separation of traffic at the load balancer level. Note: this is a security requirement in some industries and at some individual companies.
    1. Internal vs. external vs. third-party traffic, etc.
    2. Applying different firewalls/configs/permissions at the load balancer level for all traffic.
      1. Separating at the logical device level, instead of relying on correctly granular configuration within one load balancer, reduces the chance of human error when applying configuration.

Motivation for feature request

Rally Health wants to use Argo for managing Canary deploys. We are currently in the process of building an MVP and have already built out a basic POC that we are happy with. With our existing infrastructure, the ability to have a single Rollout resource describe multiple Nginx Ingress resources is a go/no-go point. We need to use rollouts as a way to manage multiple paths of traffic via separate Nginx Ingress resources that we reason about differently at the network level, all of which ultimately resolve to the same pods' service(s). This allows for a zero-trust pattern of network architecture via separation of network behaviors at each layer of our stack that extends past the Nginx Ingress.

Update: We have forked the project here, but we do not want to maintain it indefinitely; we would rather support the main project.

Visual example of configuration and network pathing:

(Image: Nginx feature diagram)

Note: I updated this feature request/proposal (12/20/22) since I originally created it (4/26/22) before I had enough understanding to adequately describe certain components, value propositions, etc.


Message from the maintainers:

Impacted by this bug? Give it a 👍. We prioritize the issues with the most 👍.

@tperdue321 tperdue321 added the enhancement New feature or request label Apr 26, 2022
@tperdue321
Contributor Author

tperdue321 commented Apr 26, 2022

I'll add here that we have forked the project and are actively working on adding this feature for our own use; if we are happy with it, we would like to have it merged back into the main project. I would also like to know what counts as "Organizations below are officially using Argo Rollouts" (https://github.com/argoproj/argo-rollouts/blob/master/USERS.md), since we do not have Argo out in the wild yet but are actively working towards that end.

tperdue321 added a commit to tperdue321/argo-rollouts that referenced this issue May 3, 2022
Modify Nginx SetWeight: supports multiple stable ingresses. Addresses argoproj#2001
@ssanders1449
Contributor

@zachaller provided the following workaround in Slack:

As a workaround for this, you could have one Ingress resource that is managed by rollouts, then have multiple other ingress resources act as proxies to the rollouts-managed one. You would need to use a k8s ExternalName Service to point to a domain that resolves to your ingress managed by rollouts. Something like https://www.elvinefendi.com/2018/08/08/ingress-nginx-proxypass-to-external-upstream.html, but with a domain that resolves internally to the rollouts-managed ingress. This is a bit more involved, of course, but could possibly unblock you today if that is a requirement.
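A minimal sketch of that workaround might look like the manifests below. All names, hosts, and the ingress class are illustrative assumptions, not from the thread; the ExternalName Service points at a domain that resolves internally to the rollouts-managed ingress, and a proxy Ingress (with its own class and security config) routes to that Service.

```yaml
# Hypothetical sketch of the proxy-ingress workaround. Every name and
# host here is illustrative; adapt to your own cluster.
apiVersion: v1
kind: Service
metadata:
  name: rollouts-ingress-proxy
spec:
  type: ExternalName
  # A domain that resolves internally to the rollouts-managed ingress.
  externalName: rollouts-managed.internal.example.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-traffic-proxy
  annotations:
    # ingress-nginx annotation: send the expected Host header upstream so the
    # rollouts-managed ingress matches the right rule.
    nginx.ingress.kubernetes.io/upstream-vhost: rollouts-managed.internal.example.com
spec:
  # A separate controller class for (e.g.) external traffic.
  ingressClassName: nginx-external
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rollouts-ingress-proxy
                port:
                  number: 80
```

Each additional traffic path (internal, third-party, etc.) would get its own proxy Ingress in front of the single rollouts-managed one, which is exactly the extra hop the later comments object to.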

I am trying to decide between compiling from a fork that includes rallyhealth#1 or trying the workaround from @zachaller

The determining factor will be when that PR can be merged into master (I am willing to use a fork for some time, but not forever). How do I get this issue prioritized?

@github-actions
Contributor

This issue is stale because it has been open 60 days with no activity.

@tperdue321
Contributor Author

@zachaller unfortunately my work has blocked the cloud-native Slack workspace (I'm working to get that undone), so I did not see the workaround suggested in Slack. How much appetite is there for pulling this fork into the upstream project? rallyhealth#1

@zachaller
Collaborator

zachaller commented Dec 8, 2022

I am not 100% opposed to it, but I do have some reservations about adding this to Rollouts without fully understanding why a setup outside of Rollouts does not solve the issue with fairly low implementation overhead. The idea would be something like this. I am not 100% sure about this idea, but I would like to explore it and fully understand your use case.

nginx-proxy-1 ---\
                  |---- nginx-managed-by-rollouts ----> backend
nginx-proxy-2 ---/

This would allow you to have pretty much any config you want within the proxies; you could do different routes based on paths, etc. It could also be worth a face-to-face discussion at one of the contributors' meetings if you want to talk more about this, or on Slack. @ssanders1449, are you all also still interested in this?

@tperdue321
Contributor Author

tperdue321 commented Dec 8, 2022

While I do understand that there is value to the Argo project in keeping this concern out of the project, I have a couple of reservations about this approach.

First, building multiple routing layers adds complexity when building infrastructure as code, which creates opportunities for bugs and misconfiguration/human error via pointing at the wrong Nginx controller. This complexity can be avoided by providing simple support for multiple controllers per service and applying the config across all controllers.

Second, it also breaks some security requirements our company has as a healthcare entity. We need to keep internal-facing and external-facing traffic completely separate at all layers that live outside the pods themselves.

I would be happy to join a contributor conversation. Where can I find a schedule for that?

I'll be opening up a PR here shortly, and hopefully that can help drive more conversation as well. It will be more or less identical to what I did in rallyhealth#1. FWIW, this code is currently running successfully in production at massive scale for Optum.

@tperdue321
Contributor Author

PR made #2467

@zachaller
Collaborator

As an FYI, the community meetings schedule can be found at https://calendar.google.com/calendar/u/0/[email protected]&pli=1 ; the Argo Contributor Experience Office Hours is the one you would want to attend.

@tperdue321
Contributor Author

Thanks!

@ssanders1449
Contributor

(Quoting @tperdue321's comment above in full.)

We had the exact same reasons for wanting this feature. We have a product with multiple ingresses handling different URL paths with their own security requirements. Maintaining a 'second hop' ingress would be a nightmare. I fully support this feature.

@tperdue321
Contributor Author

I have updated the original proposal above since I realized it was grossly out of date. I originally made this feature request well before I had the understanding to articulate certain thoughts around it.

@github-actions
Contributor

This issue is stale because it has been open 60 days with no activity.

@zachaller
Collaborator

Will be looking at this soon

@zachaller
Collaborator

This was implemented in 1.5.
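For readers landing here: in Argo Rollouts v1.5 the Nginx traffic router accepts a list of stable ingresses (`stableIngresses`) in place of the singular `stableIngress`. A sketch of the shape, with illustrative names (check the release docs for the exact field spelling in your version):

```yaml
# Hypothetical Rollout sketch using multiple Nginx stable ingresses
# (available in Argo Rollouts v1.5+). Names are illustrative.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: example-rollout
spec:
  strategy:
    canary:
      canaryService: example-canary
      stableService: example-stable
      trafficRouting:
        nginx:
          # v1.5+: a list of ingresses instead of the singular stableIngress,
          # e.g. one internal-facing and one external-facing ingress that
          # both resolve to the same stable/canary services.
          stableIngresses:
            - internal-ingress
            - external-ingress
      steps:
        - setWeight: 20
        - pause: {}
```

The controller then manages a canary ingress and weight annotations for each listed stable ingress, which is the separation-of-traffic requirement this issue asked for.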
