
kube config certificate #124

Open
davisonja opened this issue Aug 28, 2024 · 8 comments

Comments

@davisonja

We're setting up the next stage of our systems with OpenUnison underpinning the auth (via Google/OIDC). In the cluster environment we're using HAProxy to provide a stable IP for the k8s API, but effectively all access to the cluster is from outside.
The external access is provided (and controlled) by a third-party proxy (we ended up on 1.31, which finally switched to websockets by default for kubectl exec); this third-party proxy has a certificate issued by a large public CA (Google, IIRC), meaning the ~/.kube/config file works without a certificate-authority-data entry.

Currently kubectl oulogin reinserts the certificate-authority-data entry when it's run, and because the internal cluster has its own certificate it clobbers the kube-config values. In theory, I think, we should be able to convince kubectl oulogin to hand out a chain that matches the proxy cert, but that would then be wrong inside the cluster environment (and I'm not having much luck doing it anyway).
As the proxy cert is automatically trusted, simply not populating the certificate authority would work. Is there a way to do this? Or would a parameter like kubectl oulogin --no-certificate-authority be a possibility?

It's all working well at the moment, aside from a manual deletion of the certificate-authority-data line every time I kubectl oulogin (which can be a couple of times a day). A workaround would be to wrap kubectl oulogin in a script that edits the kube config each time, but that's a bit of a hack.
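
For what it's worth, the wrapper would only be a couple of lines; a minimal sketch, assuming the oulogin plugin's --host flag, with the hostname and cluster name as placeholders for whatever the real setup uses:

#!/bin/sh
# Hypothetical wrapper: log in via the oulogin plugin, then drop the CA entry
# it writes into the cluster section so the proxy's publicly-trusted cert is used.
kubectl oulogin --host=k8sou.example.com
kubectl config unset clusters.cluster-staging.certificate-authority-data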

@mlbiam
Contributor

mlbiam commented Aug 28, 2024

What is network.createIngressCertificate in your values.yaml? If it's true, set it to false and rerun ouctl. Here are the details: https://openunison.github.io/knowledgebase/certificates/
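
(For completeness, a sketch of the rerun; the install-auth-portal subcommand and the values file path are assumptions about your setup:)

# redeploy OpenUnison after changing values.yaml
ouctl install-auth-portal ./values.yaml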

Side note: are you using impersonation? I'm pretty sure it will work fine with websockets, I've just never tried it.

@davisonja
Author

network.createIngressCertificate is currently false, and I'm fairly sure it always has been for this install - the original experimentation was a proof-of-concept where I collected the set of options that seemed to do the right things for what we do (which included ingress certificate generation outside of OU).

Impersonation - is that this line?

enable_impersonation: false
The websockets issue is a Kubernetes one that I thought would be unrelated to OU - when running kubectl exec on previous versions it attempted to create a SPDY streaming connection, which our proxy rejects because SPDY is deprecated. Websocket support has been in Kubernetes for a while, but 1.31 was the first version to default to it, so instead of working out how to convince kubectl exec to use websockets rather than SPDY on the older version, I just upgraded.
In the cluster environment the k8s-api resolves to an internal IP, while outside it resolves to the proxy.
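
As an aside on the older-version point, there was reportedly a client-side opt-in before 1.31; a sketch, with the exact environment variable being an assumption to verify against the kubectl docs for that release:

# ask kubectl to use the websocket transport for exec instead of SPDY
KUBECTL_REMOTE_COMMAND_WEBSOCKETS=true kubectl exec -it some-pod -- /bin/sh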

@mlbiam
Contributor

mlbiam commented Aug 28, 2024

network.createIngressCertificate is currently false,

Hm, if it's false then there shouldn't be any cert. The cert only gets included if it's created by the operator or it's explicitly included. I know you said the cert is signed by a common CA. Another option is to include it as the unison-ca in trusted_certs. Can you please share the values.yaml?
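
A sketch of what that would look like in values.yaml; the value is a placeholder for the base64-encoded PEM of the proxy's CA:

trusted_certs:
  - name: unison-ca
    pem_b64: <base64-encoded PEM of the proxy's CA>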

The websockets issue is a Kubernetes one that I thought would be unrelated to OU

Right, I was curious if you were using impersonation. You're not, so it wouldn't matter to OU.

@davisonja
Author

It might be that just including it ends up being the simplest option - though presumably in doing that we'll need to update the config at some point (albeit a while away) when the certificates eventually expire, or if we change proxy providers (and the new one uses a different CA). In contrast, with no CA listed at all in the kube config it should keep working as long as the proxy uses one of the common CAs..?

network:
  openunison_host: "k8sou-staging"
  dashboard_host: "k8sdb-staging"
  api_server_host: "k8sapi-staging"
  session_inactivity_timeout_seconds: 3600
  k8s_url: https://k8s-api-staging
  force_redirect_to_tls: false
  createIngressCertificate: false
  ingress_type: nginx
  ingress_annotations: {cert-manager.io/issuer: "letsencrypt-staging"}

cert_template:
  ou: "Kubernetes"
  o: "MyOrg"
  l: "My Cluster"
  st: "State of Cluster"
  c: "MyCountry"

myvd_config_path: "WEB-INF/myvd.conf"
k8s_cluster_name: cluster-staging
enable_impersonation: false

impersonation:
  use_jetstack: true
  explicit_certificate_trust: true

dashboard:
  namespace: "kubernetes-dashboard"
  cert_name: "kubernetes-dashboard-certs"
  label: "k8s-app=kubernetes-dashboard"
  service_name: kubernetes-dashboard
  require_session: true
  new: true

certs:
  use_k8s_cm: false

trusted_certs: []

monitoring:
  prometheus_service_account: system:serviceaccount:monitoring:prometheus-k8s

oidc:
  client_id: id.apps.googleusercontent.com
  issuer: https://accounts.google.com
  user_in_idtoken: false
  domain: ""
  scopes: openid email profile
  forceauthentication: false
  claims:
    sub: email
    email: email
    given_name: given_name
    family_name: family_name
    display_name: name
    groups: groups

network_policies:
  enabled: false
  ingress:
    enabled: true
    labels:
      app.kubernetes.io/name: ingress-nginx
  monitoring:
    enabled: true
    labels:
      app.kubernetes.io/name: monitoring
  apiserver:
    enabled: true
    labels:
      app.kubernetes.io/name: kube-system

services:
  enable_tokenrequest: false
  token_request_audience: api
  token_request_expiration_seconds: 3600
  node_selectors: []

openunison:
  replicas: 1
  non_secret_data:
    K8S_DB_SSO: oidc
    PROMETHEUS_SERVICE_ACCOUNT: system:serviceaccount:monitoring:prometheus-k8s
    SHOW_PORTAL_ORGS: "false"
  secrets: []
  enable_provisioning: false
  use_standard_jit_workflow: true
  include_auth_chain: google-ws-load-groups

google_ws:
  admin_email: "[email protected]"
  service_account_email: "[email protected]"

@davisonja
Author

If I put a cert in unison-ca (in trusted_certs) it shows up in the kube config file as idp-certificate-authority-data in the users section, rather than as certificate-authority-data in the clusters section - it's the cluster value that I'm looking to replace, or to have left out of the config set by oulogin...
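
For illustration, roughly where the two entries land in the kube config that oulogin writes; the cluster and user names are placeholders and the exact auth-provider layout is an assumption:

apiVersion: v1
kind: Config
clusters:
- name: cluster-staging
  cluster:
    server: https://k8s-api-staging
    certificate-authority-data: <the cluster CA entry I want left out>
users:
- name: oidc-user
  user:
    auth-provider:
      name: oidc
      config:
        idp-certificate-authority-data: <where a unison-ca trusted_certs entry shows up>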

@davisonja
Author

If I'm following what is meant to happen, then with impersonation off and a trusted_certs entry named k8s-master, I should see the k8s-master cert on the tokens page (which I presume matches the info that kubectl oulogin uses to write into the kube config file).
Instead I'm seeing the cert from the kube-root-ca.crt ConfigMap (which matches /etc/kubernetes/pki/ca.crt on the control-plane node); deleting the kube-root-ca.crt entry simply results in it being recreated as it was.
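
(For comparison, the ConfigMap can be dumped with something like the following; kube-root-ca.crt is replicated into every namespace, so the namespace choice here is arbitrary:)

kubectl get configmap kube-root-ca.crt -n kube-system -o jsonpath='{.data.ca\.crt}'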

Hopefully the above values.yaml has an obvious error!

@mlbiam
Contributor

mlbiam commented Aug 30, 2024

I found the issue. I was confusing the OpenUnison cert with the API server cert. Sorry about that. I updated the docs on how to handle your use case - https://openunison.github.io/knowledgebase/certificates/#how-do-i-trust-my-api-servers-certificate

I tested this out and the API server certificate doesn't get set.

@davisonja
Author

Thank you, that's exactly what I was after!

Just before this gets rolled out to the devs, though it's only slightly related to the kube config certificates...
Is the expected behaviour when the timeout is up that we see this:

❯ kubectl get pods
E0905 11:32:15.368861   35553 memcache.go:265] "Unhandled Error" err=<
	couldn't get current server API group list: Get "https://k8s-api/api?timeout=32s": failed to refresh token: oauth2: cannot fetch token: 401 Unauthorized
	Response:
 >
E0905 11:32:15.612322   35553 memcache.go:265] "Unhandled Error" err=<
	couldn't get current server API group list: Get "https://k8s-api/api?timeout=32s": failed to refresh token: oauth2: cannot fetch token: 401 Unauthorized
	Response:
 >
E0905 11:32:15.843542   35553 memcache.go:265] "Unhandled Error" err=<
	couldn't get current server API group list: Get "https://k8s-api/api?timeout=32s": failed to refresh token: oauth2: cannot fetch token: 401 Unauthorized
	Response:
 >
E0905 11:32:16.065723   35553 memcache.go:265] "Unhandled Error" err=<
	couldn't get current server API group list: Get "https://k8s-api/api?timeout=32s": failed to refresh token: oauth2: cannot fetch token: 401 Unauthorized
	Response:
 >
E0905 11:32:16.274359   35553 memcache.go:265] "Unhandled Error" err=<
	couldn't get current server API group list: Get "https://k8s-api/api?timeout=32s": failed to refresh token: oauth2: cannot fetch token: 401 Unauthorized
	Response:
 >
Unable to connect to the server: failed to refresh token: oauth2: cannot fetch token: 401 Unauthorized
Response:

Following that with a kubectl oulogin brings it all back to life, but we need to do the oulogin again each time - is that correct?
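
For reference, the timeout I'm referring to is the session inactivity one from the values.yaml above; my assumption is that the 401 on refresh corresponds to that session expiring:

network:
  session_inactivity_timeout_seconds: 3600   # assumption: once the OpenUnison session lapses, token refresh starts returning 401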
