
Kuma support for multi-zone service-mesh deployments #1170

Closed
mhumeSF opened this issue Apr 13, 2022 · 9 comments


mhumeSF commented Apr 13, 2022

Describe the feature

Kuma support for multi-zone (multi-cluster) service-mesh deployments

What problem are you trying to solve?
Currently, when using Flagger with Kuma set as the mesh provider, updates to the service mesh are sent to the Kuma control plane on the same cluster where Flagger is deployed. When Kuma is deployed in multi-zone mode, updates have to be sent to the global control plane instead.

Currently, Flagger reports errors when sending update instructions:

{"level":"info","ts":"2022-04-13T22:33:15.403Z","caller":"controller/events.go:45","msg":"TrafficRoute podinfo create error: admission webhook \"validator.kuma-admission.kuma.io\" denied the request: Operation not allowed. Kuma resources like TrafficRoute can be updated or deleted only from the GLOBAL control plane and not from a ZONE control plane.","canary":"podinfo.test"}

Proposed solution

We need a way to tell Flagger the address of the Kuma global control plane.


lahabana commented May 3, 2022

Have you tried using `--kubeconfig-host` with a kubeconfig for the global cluster? It seems like this should work. In any case, we should update the docs to explain this.


mhumeSF commented May 3, 2022

I'll give this a try and report back.


michaelbeaumont commented May 3, 2022

It looks like setting the arg `--kubeconfig-service-mesh` to a path containing a kubeconfig file for the global cluster works. Flagger running on the zone cluster then creates policies via the global CP.

{"level":"info","ts":"2022-05-03T09:53:59.509Z","caller":"router/kuma.go:111","msg":"TrafficRoute podinfo created","canary":"podinfo.test"}

Given a Secret `global-kubeconfig` with a key `kubeconfig` containing that kubeconfig, with Helm you'll want to set `Values.istio.kubeconfig.secretName` to `global-kubeconfig` and `Values.istio.kubeconfig.key` to `kubeconfig`, as in the sketch below.
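A rough sketch of what that could look like; the namespace and the inlined kubeconfig contents are assumptions, not taken from this thread:

```yaml
# Hypothetical Secret holding the kubeconfig for the global control plane cluster.
apiVersion: v1
kind: Secret
metadata:
  name: global-kubeconfig
  namespace: kuma-system   # assumption: the namespace Flagger is installed in
stringData:
  kubeconfig: |
    # ... contents of the kubeconfig pointing at the global cluster ...
```

and the corresponding Flagger chart values, per the comment above:

```yaml
# Sketch of Flagger Helm values; the istio.* naming is what the chart currently uses.
meshProvider: kuma
istio:
  kubeconfig:
    secretName: global-kubeconfig
    key: kubeconfig
```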


lahabana commented May 3, 2022

The `istio` bit in `Values.istio.kubeconfig.secretName` is weird here :) Might be worth committing a change to rename this value in the Helm chart, no?

@stefanprodan

@michaelbeaumont have you tested the kubeconfig flag? Based on this report it seems broken: #694

@michaelbeaumont

@stefanprodan I verified that Kuma policies can at least be created. It's not clear to me yet how that Istio problem would manifest with Kuma, but I'll take a deeper look.

@stefanprodan

Because we add owner refs to the generated objects, and the owner object (the Canary) is not present on that cluster; only the TrafficRoutes are.


michaelbeaumont commented May 3, 2022

Kuma policies are cluster-scoped, so I don't think a TrafficRoute can get an owner reference to a Canary, which is namespaced. The TrafficRoute is also not recreated in a loop like in the Istio issue.
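For illustration, a rough sketch of a cluster-scoped Kuma TrafficRoute of the kind Flagger generates, assuming the `kuma.io/v1alpha1` schema; the service names and weights are illustrative assumptions, not taken from this thread:

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
mesh: default
metadata:
  name: podinfo
  # no namespace: the resource is cluster-scoped, so it cannot carry an
  # ownerReference to the namespaced Canary (podinfo.test)
spec:
  sources:
    - match:
        kuma.io/service: '*'
  destinations:
    - match:
        kuma.io/service: podinfo_test_svc_9898
  conf:
    split:
      - weight: 90
        destination:
          kuma.io/service: podinfo-primary_test_svc_9898
      - weight: 10
        destination:
          kuma.io/service: podinfo-canary_test_svc_9898
```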

@stefanprodan

@michaelbeaumont ah good point, thanks for the explanation.
