Fine-grained AuthorizationPolicy for Linkerd Multicluster #9848
We also stumbled over this recently when we started trying out Linkerd (as a replacement for Consul Connect).
Just referencing #7235 because it contains a lot of useful information.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.
Still very much needed, despite lack of activity.
Nope, still important
In Linkerd 2.14 this will be possible in the case where the clusters are on a shared flat network, since the gateway will not be used for pod-to-pod communication across the cluster boundary. Does that apply in your case?
Yes, in fact it does apply in my case.
@wmorgan I just read up on the changes that you mentioned here https://github.com/linkerd/website/blob/0a00e824065eb0b54739cd1f7f7a5d139f40e166/linkerd.io/content/2-edge/features/flat-network-multicluster.md and stumbled over namespace sameness:
Are there any plans to have MeshTLSAuthentication identities include a reference to the cluster they are coming from, or any other way to pin authorization policies to a cluster? Otherwise the concept of namespace sameness expands trust boundaries across all linked clusters, which (at least for me) is not desirable.
Assuming you stick to the recommended practice of per-cluster identity domains (and a cross-cluster trust anchor), mesh identities include the cluster they are coming from and you can apply policies on a per-cluster basis.
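For illustration, a sketch of what a per-cluster policy could look like, assuming cluster 2 uses the identity trust domain `cluster2.example.com` and service B runs under the `service-b` service account in a namespace `apps` (all names here are hypothetical):

```yaml
# Sketch only: allow only service B from cluster 2 to reach service A.
# Trust domain "cluster2.example.com", namespace "apps", and resource
# names are assumptions for illustration.
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: service-a
  namespace: apps
spec:
  podSelector:
    matchLabels:
      app: service-a
  port: http
---
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  name: service-b-cluster2
  namespace: apps
spec:
  identities:
    # Linkerd identity format: <sa>.<ns>.serviceaccount.identity.linkerd.<trust-domain>
    - "service-b.apps.serviceaccount.identity.linkerd.cluster2.example.com"
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: service-a-allow-service-b
  namespace: apps
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: service-a
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: MeshTLSAuthentication
      name: service-b-cluster2
```

Because the trust domain is embedded in the identity string, a `service-b` service account in a third linked cluster with a different trust domain would not match this authentication.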
Are you referring to this? https://linkerd.io/2.13/tasks/using-custom-domain/
I believe you can set the identity domains when you create the issuer certificates for the clusters. Normally that is aligned with the cluster DNS domain (e.g. by using the config you specify above) but I don't think that is strictly mandatory. |
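A rough sketch of that setup, using the `step` CLI to mint per-cluster issuer certificates under a shared trust anchor (the domain names are hypothetical):

```shell
# Sketch only: shared trust anchor, per-cluster identity trust domains.
step certificate create root.linkerd.example.com ca.crt ca.key \
  --profile root-ca --no-password --insecure

# Issuer for cluster 1, named for that cluster's trust domain
step certificate create identity.linkerd.cluster1.example.com issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca ca.crt --ca-key ca.key

# Install with a trust domain that need not match the cluster DNS domain
linkerd install \
  --identity-trust-domain=cluster1.example.com \
  --identity-trust-anchors-file=ca.crt \
  --identity-issuer-certificate-file=issuer.crt \
  --identity-issuer-key-file=issuer.key \
  | kubectl apply -f -
```

Repeating this with a distinct `--identity-trust-domain` per cluster is what makes the identities distinguishable in policy.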
Trust the docs over me :) |
I looked into this and I believe the identity trust domain (or cluster domain) is half the solution. But one problem that I still see is that the issuing CA in each cluster (linkerd identity + private key material) could be used directly to create a certificate with a DN/URI SAN that matches another cluster's identity trust domain. Am I wrong here? A possible solution could be to apply name constraints to the issuing CA certificates, so that each cluster can only sign certificates for its own workloads.
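In principle, such a name constraint on an issuing CA could be expressed like this (an OpenSSL x509v3 extension sketch; the domain is hypothetical):

```
# Sketch only: constrain cluster 1's issuing CA so it can only sign
# identities under its own trust domain.
[ v3_issuer_cluster1 ]
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage        = critical, keyCertSign, cRLSign
nameConstraints = critical, permitted;DNS:.cluster1.example.com
```

A verifier honoring the constraint would then reject any leaf certificate this CA signed for a SAN outside `.cluster1.example.com`.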
Using name constraints would be ideal, unfortunately there are some bugs in underlying libraries that need to be fixed or worked around first. See #9299 |
Marking this as resolved. With #9299 and pod-to-pod multicluster, we now support this capability. If there are use cases for having this work with private networking, we could probably open a separate issue to track that.
What problem are you trying to solve?
I have two clusters, 1 and 2, which are linked.
I would like to limit access to certain routes of service A in cluster 1 to service B from cluster 2, and only that service.
How should the problem be solved?
I would like for access to be limited so that only service B can communicate with service A, regardless of whether they were deployed in a single cluster or in multiple linked clusters.
Any alternatives you've considered?
So far, I've been able to limit access to the server (service A in cluster 1) to the linkerd-gateway service account. However, this allows any service in cluster 2 (and not just service B) with access to the gateway to also reach service A. Ideally, access would be limited to service B only.
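That workaround might look roughly like this (a sketch; the namespace `apps` and the resource names are assumptions, and it illustrates the limitation: the gateway's identity stands in for every cluster-2 workload):

```yaml
# Sketch only: authorize the multicluster gateway's identity. Any cluster-2
# workload that can reach the gateway is admitted, not just service B.
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  name: gateway-only
  namespace: apps
spec:
  identities:
    - "linkerd-gateway.linkerd-multicluster.serviceaccount.identity.linkerd.cluster.local"
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: service-a-allow-gateway
  namespace: apps
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: service-a
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: MeshTLSAuthentication
      name: gateway-only
```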
How would users interact with this feature?
No response
Would you like to work on this feature?
No response