Confirm handling of consul-clients in separate cluster to consul-server (non-federated) #3280
Comments
I suspect this is related to our architecture. In our consul-server clusters we also run the consul-client daemonset, and it works fine. The same consul-client daemonset on the remote (client-only) Kubernetes clusters is the one showing the issue.
I've made some progress here, but not sure if I'm going in the right direction :) I think the general approach to get this working is to set up the auth method separately for each consul-client cluster. The example here shows how I got the auth method working against one client cluster; in this case I also needed some cluster-role updates on the consul-client cluster. At this point the client-acl-init container succeeded for that cluster.
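As a rough sketch (the auth-method name, API server address and file paths are placeholders, not necessarily the exact commands used here), registering a Kubernetes auth method on the Consul servers for a remote client cluster looks something like this:

```shell
# Sketch only: register an auth method on the Consul servers that points at a
# remote client cluster's API server. Names, addresses and file paths are
# placeholders.
consul acl auth-method create \
  -name="my-client-cluster-k8s-auth" \
  -type="kubernetes" \
  -description="auth method for a client-only cluster" \
  -kubernetes-host="https://<client-cluster-apiserver>:443" \
  -kubernetes-ca-cert=@client-cluster-ca.crt \
  -kubernetes-service-account-jwt="$(cat client-cluster-sa.jwt)"
```

The `-kubernetes-host` address has to be one the Consul servers can actually reach (e.g. over the VPC peering), since they call that cluster's TokenReview API during every login.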
So, where I'm at now: I'm wondering how the default k8s auth methods are created. I tried to match them against existing service accounts (well, their secret tokens) in the consul-server cluster, but they didn't seem to align.

Edit: I think I can just make use of the existing auth method. This may need custom service accounts/secrets to be created and some role-binding adjustments; essentially I can just call it with a custom service-account token.
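On the RBAC side, a minimal sketch of that kind of service-account and role-binding adjustment (all names are hypothetical): the service account whose JWT backs the auth method needs to be allowed to call the TokenReview API, and a binding rule decides what a successful login maps to.

```shell
# Sketch with hypothetical names. On the client cluster: a service account for
# the auth method, allowed to call the TokenReview API via system:auth-delegator.
kubectl create serviceaccount consul-auth-method -n consul
kubectl create clusterrolebinding consul-auth-method-tokenreview \
  --clusterrole=system:auth-delegator \
  --serviceaccount=consul:consul-auth-method

# On the Consul servers: map logins from the consul-client service account to a
# Consul role (role name is a placeholder).
consul acl binding-rule create \
  -method="my-client-cluster-k8s-auth" \
  -bind-type=role \
  -bind-name="consul-client-role" \
  -selector='serviceaccount.name==consul-client'
```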
failed to verify certificate for tokenreview service
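That error usually means the Consul servers either can't reach, or don't trust the certificate of, the Kubernetes API server configured on the auth method. A quick check from the server side (address and file name are placeholders):

```shell
# Inspect the certificate the client cluster's API server actually presents.
openssl s_client -connect "<client-cluster-apiserver>:443" -showcerts </dev/null

# Then confirm the CA configured on the auth method validates it.
curl --cacert client-cluster-ca.crt "https://<client-cluster-apiserver>:443/version"
```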
Closing this as I got it working with custom manifests.
Overview of the Issue
I am upgrading the Helm chart from 0.41.1 to 0.49.8 (in preparation for future 1.x updates) on both the server and client Consul clusters.
Currently I cannot get the new k8s auth method to work.
One of the key changes is the set of features described here: https://github.com/hashicorp/consul-k8s/blob/main/CHANGELOG.md#0420-april-04-2022
The one I think I'm having issues with is:
Support issuing global ACL tokens via k8s auth method [https://github.com/hashicorp/consul-k8s/pull/1075]
I'm finding that the client-acl-init container in the consul-client daemonset fails to run and produces the "failed to verify certificate for tokenreview service" error. From my understanding this error actually comes from the consul-server (I see the same message in its log).
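As I understand the new flow (a sketch, not the literal container command): the init container performs a consul login using the pod's service-account token, and the Consul server then validates that token by calling the TokenReview API of the Kubernetes cluster configured on the auth method, which is where the certificate verification happens.

```shell
# Rough equivalent of the auth-method login flow (the token sink path is
# illustrative, not the exact container invocation).
consul login \
  -method=consul-client-k8s-component-auth-method \
  -bearer-token-file=/var/run/secrets/kubernetes.io/serviceaccount/token \
  -token-sink-file=/consul/login/acl-token
```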
Environment details
The architecture of our setup is that we have consul-servers running on one cluster (a dedicated cluster for consul-servers and terminating gateways) and a number of separate Kubernetes clusters running consul-client. These are connected via VPC peering.
We run Kubernetes 1.24 (EKS) on both server/client clusters.
Additional Context
In our previous setup the client-acl-init init-container was configured quite differently from what we now get via the helm-chart upgrade.
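One chart value that looks relevant here (assuming I'm reading the chart docs right) is externalServers.k8sAuthMethodHost, which tells the servers which Kubernetes API address to use for TokenReview when the clients run in a separate cluster. Something like (release name and address are placeholders):

```shell
# Sketch: on a client-only cluster, point the auth method at this cluster's
# API server via an address the Consul servers can reach.
helm upgrade consul hashicorp/consul \
  --namespace consul \
  --version 0.49.8 \
  --values values.yaml \
  --set externalServers.k8sAuthMethodHost="https://<client-cluster-apiserver>:443"
```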
I've checked that consul-client-k8s-component-auth-method exists as an auth method on the server. Any tips on how to debug this?
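For reference, the checks I've run so far look like this (a sketch; the pod name and namespace are placeholders):

```shell
# On the servers: the Host and CACert on the auth method should match the
# *client* cluster's API server, reachable from the servers.
consul acl auth-method read -name consul-client-k8s-component-auth-method

# On the client cluster: logs from the failing init container.
kubectl logs <consul-client-pod> -c client-acl-init -n consul
```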