
Provision "root" jwt_token on installation #151

Open
thomaspetit opened this issue Dec 23, 2023 · 8 comments

@thomaspetit
Contributor

I'm looking into installing Zitadel with the Helm chart and then immediately bootstrapping it with the Terraform provider, without any human interaction: https://registry.terraform.io/providers/zitadel/zitadel/latest/docs

According to the latest docs, a token/jwt_file has to be provisioned to connect to the Zitadel instance. Is there a way to run the Terraform provider without having to log in manually and generate a JWT token?

For example, similar setups can be found here:

@hifabienne
Member

@eliobischof @stebenz can you answer this question?

hifabienne moved this to 🧐 Investigating in Product Management on Jan 8, 2024
@bdalpe

bdalpe commented Mar 19, 2024

@thomaspetit I believe this is what you're looking for: https://github.com/zitadel/zitadel-charts/blob/main/examples/6-machine-user/README.md

That example creates a secret named whatever you configure in .Values.zitadel.configmapConfig.FirstInstance.Org.Machine.Machine.Username, using this command:

command: [ "sh","-c","until [ ! -z $(kubectl -n {{ .Release.Namespace }} get po ${POD_NAME} -o jsonpath=\"{.status.containerStatuses[?(@.name=='{{ .Chart.Name }}-setup')].state.terminated}\") ]; do echo 'waiting for {{ .Chart.Name }}-setup container to terminate'; sleep 5; done && echo '{{ .Chart.Name }}-setup container terminated' && if [ -f /machinekey/sa.json ]; then kubectl -n {{ .Release.Namespace }} create secret generic {{ .Values.zitadel.configmapConfig.FirstInstance.Org.Machine.Machine.Username }} --from-file={{ .Values.zitadel.configmapConfig.FirstInstance.Org.Machine.Machine.Username }}.json=/machinekey/sa.json; fi;" ]

@thomaspetit
Contributor Author

thomaspetit commented Mar 20, 2024

Awesome.. looks like exactly what I was looking for. I should have looked at the source code a bit better. 😅

Edit: On closer inspection, this isn't 100% what I was looking for. I had already configured the machine user; sadly, I can't specify the actual sa.json file that gets created.

I currently have this:

zitadel:
  zitadel:
    masterkeySecretName: zitadel-masterkey
    configmapConfig:
      Log:
        Level: 'error'
      ExternalDomain: zitadel.k3s.tpcservices.be
      ExternalPort: 443
      ExternalSecure: true
      TLS:
        Enabled: false
      # Please note that you either choose human or machine!
      # https://github.com/zitadel/zitadel/blob/main/cmd/setup/steps.yaml#L35
      FirstInstance:
        Org:
          name: TPCSERVICES
          Machine:
            Machine:
              Username: zitadel-admin-sa
              Name: Admin
            MachineKey:
              # ExpirationDate: "2030-01-01T00:00:00Z"
              Type: 1

I can indeed specify the MachineKey properties, but sadly I can't pass a self-created key to Zitadel.

@kervel

kervel commented Mar 27, 2024

We fixed this by running the Terraform provisioner as a Kubernetes Job as well. It took some effort to get it running, but basically we mounted the generated secret as a volume in a Job that does "terraform apply".

I can share more details if you want. I think with some work it would be possible to integrate Terraform provisioning into the Helm chart (where you could just specify .Values.terraformScriptConfigmap or so).

@thomaspetit
Contributor Author

thomaspetit commented Mar 27, 2024

I'm actually also doing this (using the Terraform operator from galleybytes), but it seems there is no way to provision that zitadel-admin-sa.json? Did you find something for that? 😃

All help or ideas are welcome.

@bdalpe

bdalpe commented Mar 27, 2024

@thomaspetit my comment here might help you: zitadel/terraform-provider-zitadel#167 (comment)

I found that the Zitadel Terraform Provider tries to use the secret before it exists, so you have to do one of a few things: terragrunt apply, terraform apply -target helm_release.zitadel, or make two separate modules for the Helm release and Zitadel resources so that Terraform will correctly wait for the dependency to be resolved.

@kervel

kervel commented Mar 28, 2024

Hi Thomas!

Let me lay out my plan in a bit more detail. I guess you want to work "the other way around", but I wonder if that's really needed.

  • First I deploy Zitadel in Kubernetes using this Helm chart. I use a values file so that it will provision an initial machine user like so (YAML redacted to remove DB creds):
zitadel:
  configmapConfig:
    FirstInstance:
      Org:
        Machine:
          Machine:
            Username: zitadel-admin-sa
            Name: Admin
          MachineKey:
            ExpirationDate: "2026-01-01T00:00:00Z"
            # Type: 1 means JSON. This is currently the only supported machine key type.
            Type: 1
    ExternalDomain: zitadel.atlas.intern.kapernikov.com
    ExternalPort: 443
    ExternalSecure: true
    TLS:
      Enabled: false
  masterkey: x123x567890123456789012f4567891y

Now, I want to deploy my application that uses Zitadel. In my case it's logical to have the Zitadel config be part of the deployment procedure of my application rather than of Zitadel itself. I want it to be easy to deploy (create as many test instances as I want).

This also means that I want to be able to deploy and configure it when my HTTPS cert is missing, or even when the DNS for my ingress is not right yet. Here I had some difficulties to tackle:

  • The Zitadel Terraform provider uses the gRPC API, and the nginx ingress controller needs to be configured to support it. The ingress for Zitadel needs this:
ingress:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    ## edit the nginx ingress controller configmap to make sure snippets are allowed; after editing, kill the pod
    nginx.ingress.kubernetes.io/configuration-snippet: |
      grpc_set_header Host $host;
    cert-manager.io/cluster-issuer: selfsigned-ca-issuer

That works, but it's not ideal because by default the nginx ingress controller doesn't allow setting configuration snippets (you have to enable it when installing the controller).
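For reference, a minimal sketch of how that can be enabled, assuming the controller is installed through the official ingress-nginx Helm chart (recent controller versions disable snippet annotations by default):

# values for the ingress-nginx Helm chart
controller:
  allowSnippetAnnotations: true

The corresponding key in the controller ConfigMap is allow-snippet-annotations: "true".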

Second difficulty: I now have to use the public ingress to connect to my Zitadel instance. I'd rather connect using the internal service in Kubernetes, because this is both more robust (it works when the ingress is not fine yet for whatever reason) and more secure. But if I change the URI, I also change the issuer, because of zitadel/terraform-provider-zitadel#143.

Because I used a self-signed cert, I need to modify the Terraform Docker image to automatically trust it:

# extract the served certificate and add it to the image's trust store
openssl s_client -connect $TF_VAR_ZITADEL_DOMAIN:443 </dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /usr/local/share/ca-certificates/example.crt
update-ca-certificates

This would also not be a problem if I could connect using plain HTTP (it is intra-cluster anyway), but that doesn't work because then I also change the issuer. So I think (without more support from Zitadel) the good way would be to add a sidecar to the Terraform container that acts as a proxy. This way I don't have to use gRPC over the ingress, and I can modify the "Host" header so that it matches the issuer in the Zitadel configuration.
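To make the proxy part of that idea concrete, here is a rough, untested sketch assuming nginx as the sidecar, an in-cluster service at zitadel.zitadel.svc.cluster.local:8080 and TLS disabled on the Zitadel pods; the service address, port and ConfigMap name are assumptions, and how the Terraform provider gets pointed at the sidecar (port, TLS, perhaps a hostAliases entry mapping the external domain to 127.0.0.1) would still need to be worked out:

apiVersion: v1
kind: ConfigMap
metadata:
  name: zitadel-grpc-proxy   # hypothetical; mounted into an nginx sidecar at /etc/nginx/conf.d
data:
  default.conf: |
    server {
      # plaintext HTTP/2 listener for the terraform container in the same pod
      listen 127.0.0.1:8080 http2;
      location / {
        # forward gRPC to the in-cluster service and rewrite the Host header
        # so that it matches the issuer derived from ExternalDomain
        grpc_pass grpc://zitadel.zitadel.svc.cluster.local:8080;
        grpc_set_header Host zitadel.atlas.intern.kapernikov.com;
      }
    }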

In the deployment YAML of my Job, I also mount the secret of the admin user so Terraform can access it (I guess that's not the way you want to do it). This has a disadvantage: Zitadel needs to run in the same namespace as my app. But there are secret-copier operators that could alleviate that.
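For the record, a minimal sketch of such a Job, assuming the machine key Secret is named zitadel-admin-sa (as in the values above) and that the Terraform code is shipped in a ConfigMap; the image tag, ConfigMap name and TF_VAR_* variable are illustrative only:

apiVersion: batch/v1
kind: Job
metadata:
  name: zitadel-terraform-apply
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: terraform
          image: hashicorp/terraform:1.7   # or a custom image that trusts the self-signed cert
          workingDir: /workspace
          # copy the code out of the read-only ConfigMap mount before running terraform
          command: ["sh", "-c", "cp /terraform-code/* /workspace/ && terraform init && terraform apply -auto-approve"]
          env:
            # illustrative variable; the Terraform code reads the machine key path from it
            - name: TF_VAR_zitadel_key_path
              value: /machinekey/zitadel-admin-sa.json
          volumeMounts:
            - name: machinekey
              mountPath: /machinekey
              readOnly: true
            - name: terraform-code
              mountPath: /terraform-code
            - name: workspace
              mountPath: /workspace
      volumes:
        # Secret created by the chart's post-install hook (machine-user example)
        - name: machinekey
          secret:
            secretName: zitadel-admin-sa
        # ConfigMap holding main.tf etc. (hypothetical name)
        - name: terraform-code
          configMap:
            name: zitadel-terraform-code
        - name: workspace
          emptyDir: {}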

I don't use the operator; I use a Job as part of the post-install of my own Helm chart, so I'm free to add a sidecar. But I don't know if the Terraform operator would allow that.

Greetings,
Frank

@eliobischof
Member

> @thomaspetit my comment here might help you: zitadel/terraform-provider-zitadel#167 (comment)
>
> I found that the Zitadel Terraform Provider tries to use the secret before it exists, so you have to do one of a few things: terragrunt apply, terraform apply -target helm_release.zitadel, or make two separate modules for the Helm release and Zitadel resources so that Terraform will correctly wait for the dependency to be resolved.

Could this issue actually be closed if we implemented zitadel/terraform-provider-zitadel#167 (comment)?
