Fine-grained AuthorizationPolicy for Linkerd Multicluster #9848

Closed
Tyrion85 opened this issue Nov 17, 2022 · 18 comments

Comments

@Tyrion85
Contributor

What problem are you trying to solve?

I have a:

  • service A, deployed in cluster 1 (server),
  • and a service B, deployed in cluster 2 (client).

Clusters 1 and 2 are linked.

I would like to limit access to certain routes of service A in cluster 1 so that only service B from cluster 2 can reach them.

How should the problem be solved?

I would like access to be limited so that only service B can communicate with service A, regardless of whether they are deployed in a single cluster or in multiple linked clusters.

Any alternatives you've considered?

So far, I've been able to limit access to the server (service A in cluster 1) to the linkerd-gateway service account. However, this allows any service in cluster 2 (not just service B) with access to the gateway to also reach service A. Ideally, access would be limited to service B only. For reference, the current workaround looks roughly like the sketch below.
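A minimal sketch of that workaround; the namespace `ns-a`, the app label, the port name, and the default `linkerd-multicluster`/`cluster.local` gateway identity are assumptions:

```yaml
# Server describing the inbound port on service A's pods (cluster 1).
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: ns-a
  name: service-a-http
spec:
  podSelector:
    matchLabels:
      app: service-a
  port: http
  proxyProtocol: HTTP/1
---
# Authentication matching the multicluster gateway's identity, i.e. any
# traffic forwarded from a linked cluster, regardless of the original client.
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  namespace: ns-a
  name: linkerd-gateway
spec:
  identities:
    - "linkerd-gateway.linkerd-multicluster.serviceaccount.identity.linkerd.cluster.local"
---
# Policy tying the two together: only the gateway identity may reach the Server.
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  namespace: ns-a
  name: service-a-from-gateway
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: service-a-http
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: MeshTLSAuthentication
      name: linkerd-gateway
```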

How would users interact with this feature?

No response

Would you like to work on this feature?

No response

@whiskeysierra

We also stumbled over this recently when we started trying out Linkerd (as a replacement for Consul Connect).
I was a bit surprised that the gateway has its own identity instead of acting as a transparent intermediary.
Just for my understanding, why is it designed like that?
I'm guessing the gateway needs to decrypt the traffic to figure out where to route it, and since it can't re-encrypt it using the original sender's certificate (due to lack of access to the private key), it has to encrypt it using its own certificate?

@whiskeysierra

Just referencing #7235 because it contains a lot of useful information.

@stale

stale bot commented May 10, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label May 10, 2023
@whiskeysierra

whiskeysierra commented May 10, 2023 via email

@stale stale bot removed the wontfix label May 10, 2023
@stale

stale bot commented Aug 8, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Aug 8, 2023
@whiskeysierra

whiskeysierra commented Aug 8, 2023 via email

@stale stale bot removed the wontfix label Aug 8, 2023
@wmorgan
Member

wmorgan commented Aug 8, 2023

In Linkerd 2.14 this will be possible in the case where the clusters are on a shared flat network, since the gateway will not be used for pod-to-pod communication across the cluster boundary. Does that apply in your case?
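If so, the server-side policy could then reference the remote workload's own identity instead of the gateway's, since the client identity is preserved across the cluster boundary. A rough sketch, reusing the Server from the earlier sketch and assuming the names `ns-b`/`service-b` and the default `cluster.local` trust domain:

```yaml
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  namespace: ns-a
  name: service-b-only
spec:
  identities:
    # With pod-to-pod multicluster the gateway no longer terminates the
    # connection, so service A's proxy sees service B's own identity.
    - "service-b.ns-b.serviceaccount.identity.linkerd.cluster.local"
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  namespace: ns-a
  name: service-a-from-service-b
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: service-a-http
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: MeshTLSAuthentication
      name: service-b-only
```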

@whiskeysierra

Yes, in fact it does apply in my case.

@whiskeysierra

@wmorgan I just read up on the changes that you mentioned here https://github.com/linkerd/website/blob/0a00e824065eb0b54739cd1f7f7a5d139f40e166/linkerd.io/content/2-edge/features/flat-network-multicluster.md and stumbled over namespace sameness:

For the purpose of multi-cluster communication, Linkerd has adopted the "namespace sameness" principle described in a SIG Multicluster Position Statement.

In this multi-cluster model, all namespaces with a given name are considered to be the same across clusters. In other words, namespaces are a wholistic concept. By extension, all services defined in a namespace are considered the same service across all different clusters.

Are there any plans to have the MeshTLSAuthentication identities include a reference to the cluster that they are coming from or any other way to pin authorization policies to a cluster? Otherwise the concept of namespace sameness expands trust boundaries across all linked clusters, which (at least for me) is not desirable.

@wmorgan
Member

wmorgan commented Aug 15, 2023

Assuming you stick to the recommended practice of per-cluster identity domains (and a cross-cluster trust anchor), mesh identities include the cluster they are coming from and you can apply policies on a per-cluster basis.
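To make that concrete: if cluster 2 was installed with a trust domain of, say, cluster2.example.com (a hypothetical domain), an authentication can be pinned to service B as issued by that cluster only. A sketch, with the same assumed names as above:

```yaml
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  namespace: ns-a
  name: service-b-from-cluster2
spec:
  identities:
    # The trust-domain suffix encodes the issuing cluster, so this does not
    # match a workload with the same name and namespace in any other cluster.
    - "service-b.ns-b.serviceaccount.identity.linkerd.cluster2.example.com"
```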

@whiskeysierra

Are you referring to this? https://linkerd.io/2.13/tasks/using-custom-domain/

@wmorgan
Member

wmorgan commented Aug 17, 2023

I believe you can set the identity domains when you create the issuer certificates for the clusters. Normally that is aligned with the cluster DNS domain (e.g. by using the config you specify above) but I don't think that is strictly mandatory.
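For what it's worth, the trust domain is a per-cluster install-time setting, and the issuer certificate is expected to be named after it. Roughly, as per-cluster Helm values for the linkerd-control-plane chart (the domains are hypothetical and this is a sketch, not verified against the docs):

```yaml
# values-cluster1.yaml; cluster 2 would use cluster2.example.com instead.
# The issuer certificate for each cluster would be issued with
# CN=identity.linkerd.<trust domain> and signed by the shared trust anchor.
identityTrustDomain: cluster1.example.com
```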

@whiskeysierra

I don't think that is strictly mandatory

The docs say something different, if I'm understanding this correctly:
[screenshot of the relevant docs passage]

@wmorgan
Member

wmorgan commented Aug 17, 2023

Trust the docs over me :)

@whiskeysierra

whiskeysierra commented Aug 24, 2023

I looked into this and I believe the identity trust domain (or cluster domain) is half the solution.
Yes, it does give every TLS identity an indicator (in the form of a suffix) of which cluster it comes from.

But one problem that I still see is that the issuing CA in each cluster (linkerd identity + private key material) could be used directly to create a certificate with a DN/URI SAN that matches another cluster's identity trust domain.

Am I wrong here?

A possible solution could be to apply name constraints to the issuing CA certificates, so that each cluster can only sign certificates for its own workloads.
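If the issuer certificates are managed with cert-manager, my understanding is that newer versions can express this via a (feature-gated) nameConstraints field on the Certificate resource. Something along these lines, though the exact field names and gating are assumptions on my part and untested:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: linkerd-identity-issuer
  namespace: linkerd
spec:
  isCA: true
  # Issuer certificate for cluster 1, signed by the shared trust anchor.
  commonName: identity.linkerd.cluster1.example.com
  secretName: linkerd-identity-issuer
  issuerRef:
    name: linkerd-trust-anchor
    kind: Issuer
  nameConstraints:
    critical: true
    permitted:
      dnsDomains:
        # Leaf identities are DNS SANs of the form
        # <sa>.<ns>.serviceaccount.identity.linkerd.<trust domain>, so
        # constraining this CA to its own trust domain prevents it from
        # minting identities that appear to come from another cluster.
        - identity.linkerd.cluster1.example.com
```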

@adleong
Member

adleong commented Aug 24, 2023

Using name constraints would be ideal; unfortunately, there are some bugs in the underlying libraries that need to be fixed or worked around first. See #9299

@whiskeysierra

@adleong I see that #9299 has been reopened, but it's still locked. Can you unlock it for comments?

@DavidMcLaughlin
Contributor

Marking this as resolved. With #9299 and pod-to-pod multicluster, we now support this capability. If there are use cases for having this work with private networking, we could probably open a separate issue to track that.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Dec 21, 2023