diff --git a/docs/assets/app1.png b/docs/assets/app1.png
new file mode 100644
index 0000000..9661e45
Binary files /dev/null and b/docs/assets/app1.png differ
diff --git a/docs/assets/app2.png b/docs/assets/app2.png
new file mode 100644
index 0000000..daebbc8
Binary files /dev/null and b/docs/assets/app2.png differ
diff --git a/docs/assets/app3.png b/docs/assets/app3.png
new file mode 100644
index 0000000..2a7492e
Binary files /dev/null and b/docs/assets/app3.png differ
diff --git a/docs/assets/app4.png b/docs/assets/app4.png
new file mode 100644
index 0000000..f8f50aa
Binary files /dev/null and b/docs/assets/app4.png differ
diff --git a/docs/assets/app5.png b/docs/assets/app5.png
new file mode 100644
index 0000000..9dce221
Binary files /dev/null and b/docs/assets/app5.png differ
diff --git a/docs/assets/fork-github-repo.png b/docs/assets/fork-github-repo.png
new file mode 100644
index 0000000..fa33964
Binary files /dev/null and b/docs/assets/fork-github-repo.png differ
diff --git a/docs/assets/gh-add-team-member-role.png b/docs/assets/gh-add-team-member-role.png
new file mode 100644
index 0000000..8e715a0
Binary files /dev/null and b/docs/assets/gh-add-team-member-role.png differ
diff --git a/docs/assets/sonarcloud/sc_projectsetup_1.png b/docs/assets/sonarcloud/sc_projectsetup_1.png
new file mode 100644
index 0000000..98a93c6
Binary files /dev/null and b/docs/assets/sonarcloud/sc_projectsetup_1.png differ
diff --git a/docs/assets/sonarcloud/sc_projectsetup_2.png b/docs/assets/sonarcloud/sc_projectsetup_2.png
new file mode 100644
index 0000000..c2c016f
Binary files /dev/null and b/docs/assets/sonarcloud/sc_projectsetup_2.png differ
diff --git a/docs/assets/sonarcloud/sc_projectsetup_3.png b/docs/assets/sonarcloud/sc_projectsetup_3.png
new file mode 100644
index 0000000..b3149d5
Binary files /dev/null and b/docs/assets/sonarcloud/sc_projectsetup_3.png differ
diff --git a/docs/assets/sonarcloud/sc_projectsetup_4.png b/docs/assets/sonarcloud/sc_projectsetup_4.png
new file mode 100644
index 0000000..3653079
Binary files /dev/null and b/docs/assets/sonarcloud/sc_projectsetup_4.png differ
diff --git a/docs/assets/sonarcloud/sc_projectsetup_5.png b/docs/assets/sonarcloud/sc_projectsetup_5.png
new file mode 100644
index 0000000..ba9da0a
Binary files /dev/null and b/docs/assets/sonarcloud/sc_projectsetup_5.png differ
diff --git a/docs/assets/sonarcloud/sc_projectsetup_6.png b/docs/assets/sonarcloud/sc_projectsetup_6.png
new file mode 100644
index 0000000..2a2f11f
Binary files /dev/null and b/docs/assets/sonarcloud/sc_projectsetup_6.png differ
diff --git a/docs/assets/sonarcloud/sc_projectsetup_7.png b/docs/assets/sonarcloud/sc_projectsetup_7.png
new file mode 100644
index 0000000..dbf1c50
Binary files /dev/null and b/docs/assets/sonarcloud/sc_projectsetup_7.png differ
diff --git a/docs/assets/sonarcloud/sc_projectsetup_8.png b/docs/assets/sonarcloud/sc_projectsetup_8.png
new file mode 100644
index 0000000..ef11c53
Binary files /dev/null and b/docs/assets/sonarcloud/sc_projectsetup_8.png differ
diff --git a/docs/assets/vault-avp-config.png b/docs/assets/vault-avp-config.png
new file mode 100644
index 0000000..37c0bcb
Binary files /dev/null and b/docs/assets/vault-avp-config.png differ
diff --git a/docs/how-to-onboard-teams-to-any-environment.md b/docs/how-to-onboard-teams-to-any-environment.md
new file mode 100644
index 0000000..9ab2ad1
--- /dev/null
+++ b/docs/how-to-onboard-teams-to-any-environment.md
@@ -0,0 +1,351 @@
+# How To Onboard Product-Teams To Any Environment
+
+## Basics
+
+We handle all of our support requests as Jira tasks. There are [templates](https://catenax-ng.github.io/docs/resources)
+available for well-known and recurring tasks, as well as a blank template.
+For handling these support tasks, we follow our internal support workflow.
+
+Since we set up teams and repositories in our GitHub organization and manage secrets in Hashicorp Vault using only one
+script, **terraform has to be initialized** first, as described in the
+[README.md](https://github.com/catenax-ng/k8s-cluster-stack/blob/main/terraform/100_team_onboarding/README.md) file in the directory
+[100_team_onboarding](https://github.com/catenax-ng/k8s-cluster-stack/tree/main/terraform/100_team_onboarding).
+It is assumed that you have already installed the terraform CLI. Before you start, make sure you've cloned
+the [k8s_cluster_stack](https://github.com/catenax-ng/k8s-cluster-stack)
+repository and navigated to `/terraform/100_team_onboarding` inside that repository on your terminal.
+Checking changes with `terraform plan` and applying them with `terraform apply`, which can be done after every
+terraform change or only once at the end of all necessary changes, is also described in the
+[README.md](https://github.com/catenax-ng/k8s-cluster-stack/blob/main/terraform/100_team_onboarding/README.md).
+
+For the `terraform apply` and `terraform plan` commands, the following environment variables have to be set:
+
+```shell
+# You can get a login token by logging into the Vault web UI and using 'copy token' from the top right user menu
+export VAULT_TOKEN=
+# The OIDC settings that need to be specified are the client-id and the client-secret for DEX. You can find this
+# information in our devsecops secret engine in Vault at path `devsecops/clusters/vault/github-oauth`.
+export TF_VAR_vault_oidc_client_id=
+export TF_VAR_vault_oidc_client_secret=
+# A GitHub personal access token has to be created.
+export TF_VAR_github_token=
+```
+
+## Info regarding terraform
+
+The following steps have to be done in the given order; otherwise there could be problems with other developments done
+in parallel:
+
+1. create a new branch
+2. make changes
+3. do a `terraform plan` to check if the changes meet your expectations
+4. create a PR and merge
+5. do a `terraform apply` (see the sketch below)
+
+Only after the merge in GitHub and the `terraform apply` have been done, the terraform state is consistent.
+Otherwise, changes which are applied in parallel by someone else might be deleted again.
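+Condensed into a shell session, the steps above might look like the following sketch (the branch name is a
+placeholder; it assumes the environment variables from the previous section are exported):
+
+```shell
+git checkout -b onboard-new-team   # 1. create a new branch
+# 2. make your changes, e.g. edit main.tf as described in the sections below
+terraform plan                     # 3. review whether the planned changes meet your expectations
+# 4. create a PR and get it approved and merged, then:
+terraform apply                    # 5. apply only after the merge, to keep the state consistent
+```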
+## GitHub
+
+The following section describes how to handle users, teams and repositories in our GitHub organization.
+
+### Invitation of a single user
+
+Interaction with most of our tooling and also access to repositories is granted to members of our GitHub organization
+"catenax-ng". So [inviting](https://github.com/orgs/catenax-ng/people) users to the organization is the starting point for every Catena-X member.
+
+As initial information to onboard a user to the organization, we need:
+
+- The GitHub username (or email address) of the person to onboard
+- A person (e.g. the product PO) to vouch for the person being onboarded to actually be part of Catena-X
+
+Assigning a GitHub user to the several GitHub product teams should be done by the maintainers of those teams. Only in rare cases,
+like onboarding a new person and a new team in the same step, should the DevSecOps team assign GitHub users to GitHub teams.
+
+### Creating a GitHub team via terraform
+
+Access to repositories is granted on a GitHub team level instead of to individuals. Also, RBAC definitions on Vault and
+ArgoCD are based on GitHub team membership.
+
+To create GitHub teams, we are using the terraform root module
+[100_team_onboarding](https://github.com/catenax-ng/k8s-cluster-stack/tree/main/terraform/100_team_onboarding).
+To create a new GitHub team, edit `main.tf` in the `100_team_onboarding` directory and locate the variable `github_teams`
+inside `module "github" { ... }`. This variable contains a map of all the teams in our GitHub organization with name and
+description properties.
+
+All you need to do is add a new entry to that map with the new team name and an optional description. Make sure the
+key you use for your new entry is unique. This key will also be used by terraform to create an entry in the state file.
+
+### Creating a repository via terraform
+
+Git repositories are also managed by our terraform root module
+[100_team_onboarding](https://github.com/catenax-ng/k8s-cluster-stack/tree/main/terraform/100_team_onboarding). The
+process of creating a new repository is similar to creating a team. You need to edit the `main.tf` file in the
+`100_team_onboarding` directory. Repositories are defined in the
+`github_repositories` variable inside `module "github" { ... }`. This variable is a map containing all the repository
+information. To create a new one, add a new entry to the map.
+
+Even though most of the repository settings are configurable, the following should be set in the default case:
+
+- `visibility : "public"`. The only exception is if the team has not yet clarified IP-related questions
+- `pages : { enabled : false }`. If a team wants to use GitHub pages, you can set this to true. This is needed if teams
+  want to release artifacts like helm charts.
+- `is_template : false`. We usually do not create new repositories as templates
+- `uses_template : false`. Currently, our repositories are set up blank and not based on a template
+- `template : null`. Since we usually do not use a template, we do not specify one. In case we want to use a template,
+  this variable has to be defined as an object of the form `{ owner : "github-org" repository : "repo-name" }`
+
+### Caution
+
+If the team requested the k8s-helm-example repository to be used as a template, the following settings need to be changed:
+
+- `uses_template : true`
+- `template : { owner : "catenax-ng" repository : "k8s-helm-example" }`
+
+The newly created repository will be populated with files from the template, GitHub pages will be enabled and the GitHub action for releasing helm charts to pages will be added.
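+After `terraform apply`, you can optionally verify the result; a sketch using the GitHub CLI with a hypothetical
+repository name:
+
+```shell
+# Check that the new repository exists and shows the expected settings
+gh repo view catenax-ng/product-example --json visibility,isTemplate
+```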
+### Assigning a team as contributor to a repository via terraform
+
+Contribution access to a repository in our GitHub organization is granted on a team level. We do not
+grant this kind of access to individuals.
+Access is again managed by our terraform root module
+[100_team_onboarding](https://github.com/catenax-ng/k8s-cluster-stack/tree/main/terraform/100_team_onboarding).
+
+To manage contribution access for a team on a repository, edit the `main.tf` file in the `100_team_onboarding` directory.
+There, add a new map entry to the `github_repositories_teams` variable inside `module "github" { ... }`.
+As a convention, we decided to form the map key as a combination of repository and team (``).
+This is done because we have cases of multiple teams contributing to a single repository. This is configured by
+adding multiple entries to the `github_repositories_teams` map, containing the same repository, but a different team
+each time.
+
+By default, we configure `maintain` access on the product repositories for the teams, since all the administrative
+tasks are handled by the team managing the organization.
+
+## Vault via terraform
+
+To be able to manage secrets in Hashicorp Vault and use them via the ArgoCD Vault Plugin (AVP), a team needs the following
+Vault resources set up:
+
+- A _secret engine_
+- A _read-write policy_ for the secret engine, used to manage secrets via web UI or CLI; mapped to the GitHub team
+- An _approle_ that is used as AVP credentials
+- A _read-only policy_ for the secret engine, used as AVP credentials; mapped to the approle
+- Approle credentials (secret-id and role-id), available as _avp-config_ in the _devsecops_ secret engine
+
+All of these resources are created through terraform scripts. The scripts are part of the
+[k8s_cluster_stack](https://github.com/catenax-ng/k8s-cluster-stack) repository.
+
+### Add the new team to the list of product teams
+
+Onboarding a new team is also managed by our terraform root module
+[100_team_onboarding](https://github.com/catenax-ng/k8s-cluster-stack/tree/main/terraform/100_team_onboarding).
+You need to edit `main.tf` in the `100_team_onboarding` directory and locate the variable `product_teams`
+inside `module "vault" { ... }`. This variable contains a map of all the product teams. To create a new one, add a
+new entry to the map.
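+Once applied, you can check the result directly against Vault; a sketch using the Vault CLI, assuming `VAULT_ADDR`
+and `VAULT_TOKEN` are set and with `productName` standing in for the new team:
+
+```shell
+# The new secret engine should show up in the list of mounts
+vault secrets list | grep productName
+# The team's policies should exist as well
+vault policy list | grep productName
+```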
+## ArgoCD
+
+To provide a product-team access to the Hotel Budapest infrastructure, the following onboarding steps must be performed (all
+steps are related to the repository [k8s_cluster_stack](https://github.com/catenax-ng/k8s-cluster-stack)):
+
+- create ArgoCD project
+- create AVP secret
+- deploy ArgoCD project and AVP secret
+
+Create a new branch in the [k8s_cluster_stack](https://github.com/catenax-ng/k8s-cluster-stack) repo for onboarding a new
+product-team to ArgoCD.
+
+### Create ArgoCD Project
+
+Create a manifest for the new product-team to create:
+
+- k8s namespace
+- ArgoCD project:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: product-productName
+---
+apiVersion: argoproj.io/v1alpha1
+kind: AppProject
+metadata:
+  name: project-productName
+  namespace: argocd
+spec:
+  description: Project for product-productName
+  sourceRepos:
+    - "*"
+  destinations:
+    - namespace: product-productName
+      server: https://kubernetes.default.svc
+  # Allow all namespaced-scoped resources to be created, except for ResourceQuota, LimitRange, NetworkPolicy
+  namespaceResourceBlacklist:
+    - group: ""
+      kind: ResourceQuota
+    - group: ""
+      kind: LimitRange
+    - group: ""
+      kind: NetworkPolicy
+  roles:
+    # A role which provides access to all applications in the project
+    - name: team-admin
+      description: All access to applications inside project-productName. Read only on project itself
+      policies:
+        - p, proj:project-productName:team-admin, applications, *, project-productName/*, allow
+      groups:
+        - catenax-ng:product-productName
+```
+
+Store this manifest in the [k8s-cluster-stack](https://github.com/catenax-ng/k8s-cluster-stack) repo in the
+path `environments/hotel-budapest/argo-projects/` and in every environment you need it. Default is
+dev and int (Hotel Budapest).
+
+### Create AVP Secret
+
+To enable the product-team to use Vault with ArgoCD, create a team-specific AVP secret manifest:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  annotations:
+    avp.kubernetes.io/path: "devsecops/data/avp-config/product-productName"
+  name: vault-secret
+  namespace: product-productName
+type: Opaque
+stringData:
+  VAULT_ADDR: https://vault.demo.catena-x.net/
+  AVP_TYPE: vault
+  AVP_AUTH_TYPE: approle
+  AVP_ROLE_ID:
+  AVP_SECRET_ID:
+```
+
+Store this manifest in the [k8s-cluster-stack](https://github.com/catenax-ng/k8s-cluster-stack) repo in the
+path `environments/hotel-budapest/avp-secrets/` and in every environment you need it. Default is
+dev and int (Hotel Budapest).
+
+The secret will be called _vault-secret_ and stored in the k8s namespace related to the product-team.
+
+### Prepare Deployment Of ArgoCD Project And AVP Secret
+
+To deploy the k8s namespace, ArgoCD project and the AVP secret to Hotel Budapest, you'll have to add the two created manifest
+files to `environments/hotel-budapest/kustomization.yaml`
+in the [k8s-cluster-stack](https://github.com/catenax-ng/k8s-cluster-stack) repo:
+
+```yaml
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+
+#namespace: argocd
+
+resources:
+  # ...
+  - argo-projects/product-productName.yaml
+  - avp-secrets/productName-vault-secret.yaml
+  #...
+```
+
+Please add the new product-team in alphabetical order to the _resources_ section of the file `kustomization.yaml`.
+
+### Create Pull Request
+
+After you have created or changed the three files
+
+- `environments/hotel-budapest/argo-projects/product-productName.yaml`
+- `environments/hotel-budapest/avp-secrets/productName-vault-secret.yaml`
+- `environments/hotel-budapest/kustomization.yaml`
+
+create a PR for your branch. After the PR has been approved and merged into the main branch, the new team will be
+automatically deployed to the Hotel Budapest cluster (via the ArgoCD application _hotel-budapest-config_ at the ArgoCD _CORE_
+cluster).
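+Before opening the PR, you can sanity-check that the kustomization still renders; a sketch using kubectl's built-in
+kustomize support:
+
+```shell
+# Fails if a referenced manifest is missing; grep confirms the new project is included
+kubectl kustomize environments/hotel-budapest | grep "product-productName"
+```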
+## Special Topics
+
+### Enable access to a private repository via deploy key
+
+The project/product has to follow the steps which can be found
+here: [How to prepare a private repo](../github/enable-private-repo.md).
+
+- Go to `catenax-ng/k8s-cluster-stack/environments/hotel-budapest/argo-repos`
+- Add a file named `product--repo.yaml`, e.g. for _product-semantics_ (`product-semantics-repo.yaml`):
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: product-semantics-repo
+  namespace: argocd
+  annotations:
+    avp.kubernetes.io/path: "semantics/data/deploy-key"
+  labels:
+    argocd.argoproj.io/secret-type: repository
+stringData:
+  type: git
+  url: git@github.com:catenax-ng/product-semantics
+  name: product-semantics-repo
+  project: project-semantics
+  sshPrivateKey: |
+
+```
+
+- Add the following line to `environments/hotel-budapest/kustomization.yaml`, and to every environment you need it in.
+  Default is dev and int (Hotel Budapest).
+
+  ```yaml
+  - argo-repos/product-semantics-repo.yaml
+  ```
+
+### Enable access to a private package (central pull secret)
+
+- Create a PAT within the GitHub user account (machine user) settings:
+  Settings - Developer settings - Personal access tokens. Be sure to give just the needed rights (`read:packages` will be
+  sufficient to deploy)
+- Now base64-encode the PAT: `echo -n "[USERNAME]:[PAT]" | base64`
+- Create a file `.dockerconfigjson` containing the base64-encoded PAT:
+
+```json
+{
+  "auths": {
+    "ghcr.io": {
+      "auth": ""
+    }
+  }
+}
+```
+
+- Base64-encode the auth part:
+
+```shell
+echo -n '{"auths":{"ghcr.io":{"auth":""}}}' | base64
+```
+
+If the output is divided into 2 lines, just append the second line to the first (without a space).
+
+- Create a file `dockerconfigjson.yaml`:
+
+  ```yaml
+  kind: Secret
+  type: kubernetes.io/dockerconfigjson
+  apiVersion: v1
+  metadata:
+    name: budapest-machine-user-read-package
+    labels:
+      app: app-name
+  data:
+    .dockerconfigjson:
+  ```
+
+- Then add the secret to the cluster:
+
+  ```shell
+  kubectl create -f dockerconfigjson.yaml
+  ```
+
+- The pull secret has to be added to the product's code:
+
+  ```yaml
+  imagePullSecrets:
+    - name:
+  ```
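+As an alternative to crafting `.dockerconfigjson` by hand, kubectl can generate the same kind of secret directly; a
+sketch with placeholder values:
+
+```shell
+kubectl create secret docker-registry budapest-machine-user-read-package \
+  --docker-server=ghcr.io \
+  --docker-username="[USERNAME]" \
+  --docker-password="[PAT]" \
+  --namespace="[product-namespace]"
+```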
diff --git a/docs/how-to-onboard-teams-to-sonarcloud.md b/docs/how-to-onboard-teams-to-sonarcloud.md
new file mode 100644
index 0000000..64ba9a0
--- /dev/null
+++ b/docs/how-to-onboard-teams-to-sonarcloud.md
@@ -0,0 +1,43 @@
+# How to onboard teams to SonarCloud
+
+This guide is only for those who operate the environment.
+
+## SonarCloud overview
+
+Catena-X uses SonarCloud to do quality checks. [SonarCloud](https://sonarcloud.io/) is an online service offering [SonarQube](https://en.wikipedia.org/wiki/SonarQube) and is free for open-source projects.
+
+## How to onboard into SonarCloud
+
+### Prerequisite
+
+- Make sure to create a support ticket for tracking in case no ticket was created by our customer
+- You need admin permissions. All team members should already have admin permissions; please talk to a colleague to get yours if they are missing
+- The project to scan needs to be public. We do not have any paid plan, and only public repositories are free
+
+### Add project
+
+- Hover over **Administration**
+  ![Administration](assets/sonarcloud/sc_projectsetup_1.png)
+- Select **Projects Management**
+  ![Administration](assets/sonarcloud/sc_projectsetup_2.png)
+- After the page has loaded, go to **Analyse new projects** on the right side
+  ![Administration](assets/sonarcloud/sc_projectsetup_3.png)
+- Select the **public** repository you would like to onboard
+  ![Administration](assets/sonarcloud/sc_projectsetup_4.png)
+
+:::caution
+You now need to wait for SonarCloud to analyse the project. After the project is available on the overview page and analysed, continue with the next section.
+:::
+
+### Share **SONAR_TOKEN**
+
+Now the project is in SonarCloud, and you can enable customers to use SonarCloud with GitHub Actions.
+
+- Select the new project
+  ![Administration](assets/sonarcloud/sc_projectsetup_5.png)
+- On the left navigation at the bottom there is **Administration**. Hover over it and go to **Analysis Method**
+  ![Administration](assets/sonarcloud/sc_projectsetup_6.png)
+- One of the options is **GitHub Action**. Go to **Follow the tutorial**
+  ![Administration](assets/sonarcloud/sc_projectsetup_7.png)
+- This page shows you the **SONAR_TOKEN** required for our customers' GitHub Actions to do more specific scanning. SonarCloud's default scanning works for Java as a first try, but scanning should be set up as a GitHub Action.
+  ![Administration](assets/sonarcloud/sc_projectsetup_8.png)
diff --git a/docs/how-to-setup-apps.md b/docs/how-to-setup-apps.md
new file mode 100644
index 0000000..5e0eaae
--- /dev/null
+++ b/docs/how-to-setup-apps.md
@@ -0,0 +1,105 @@
+# How to set up GitHub apps
+
+This guide is only for those who operate the environment.
+
+This how-to will guide you through the deployment and configuration of GitHub Apps.
+
+## Context
+
+As users don't have admin rights on repositories, they can't trigger actions in other repositories. They could use their PATs, but this is seen as bad practice. As this got requested more often, we set up GitHub Apps which act like a technical user.
+
+In this document, the source repository refers to the repository from which an action is initiated, whereas the target repository is the one where the actions will be called.
+
+## Create GitHub App
+
+To create an app, follow the official guide [here](https://docs.github.com/en/developers/apps/building-github-apps/creating-a-github-app).
+
+- The callback URL needs to be filled out; we just use the standard Catena homepage link
+- As we just use a basic setup, options like the webhook URL and device workflow don't need to be configured.
+
+The app then needs to be configured within the organization menu:
+
+![Administration](assets/app1.png)
+
+The individual configuration is described below.
+
+## General
+
+From this menu one needs the app ID:
+
+![Administration](assets/app2.png)
+
+Further below, a private key needs to be created (you need to download it):
+
+![Administration](assets/app3.png)
+
+These settings (app ID and private key) need to be stored as secrets, so that users can use them in their actions/workflows:
+
+![Administration](assets/app4.png)
+
+In this example PORTAL is the product name:
+
+- ORG_PORTAL_DISPATCH_APPID -> app ID from above
+- ORG_PORTAL_DISPATCH_KEY -> content of the private key file
+
+When creating the secret, set the scope (= permissions to the source repository).
+
+## Permissions & events
+
+Here, only the permission for actions needs to be set to read and write.
+
+## Install App
+
+Here you need to choose all source and target repositories:
+
+![Administration](assets/app5.png)
+
+## Additions for the source repository workflow
+
+The products need to add the following steps to their calling action:
+
+```yaml
+steps:
+- name: Get Token
+  id: get_workflow_token
+  uses: peter-murray/workflow-application-token-action@v1
+  with:
+    application_id: ${{ secrets.ORG_REPO_DISPATCH_APPID }}
+    application_private_key: ${{ secrets.ORG_REPO_DISPATCH_KEY }}
+- name: trigger-workflow
+  id: call_action
+  env:
+    TOKEN: ${{ steps.get_workflow_token.outputs.token }}
+  run: |
+    curl -v \
+      --request POST \
+      --url https://api.github.com/repos/catenax-ng/playground-target/actions/workflows/example.yaml/dispatches \
+      --header "authorization: Bearer $TOKEN" \
+      --header "Accept: application/vnd.github.v3+json" \
+      --data '{"ref":"test_branch","inputs":{"any_data":"anything","any_data2":"anything2"}}' \
+      --fail
+```
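+For testing the target side without the App in place, the same `workflow_dispatch` event can be triggered manually;
+a sketch using the GitHub CLI (repository, workflow and inputs taken from the example above):
+
+```shell
+gh workflow run example.yaml \
+  --repo catenax-ng/playground-target \
+  --ref test_branch \
+  -f any_data=anything \
+  -f any_data2=anything2
+```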
+## Additions for the target repository workflow
+
+```yaml
+name: Demo
+on:
+  workflow_dispatch:
+    inputs:
+      # any parameter used in the calling workflow needs to be declared here
+      # setting required to false means it's an optional parameter
+      any_data:
+        description: "content here"
+        required: true
+        default: "no content"
+      any_data2:
+        description: "more data"
+        required: false
+        default: "no content"
+jobs:
+  show_workspace:
+    runs-on: ubuntu-latest
+    steps:
+      - run: echo "event payload ${{ github.event.inputs.any_data }}"
+```
diff --git a/docs/how-to-setup-hashicorp-vault.md b/docs/how-to-setup-hashicorp-vault.md
new file mode 100644
index 0000000..46f1bcc
--- /dev/null
+++ b/docs/how-to-setup-hashicorp-vault.md
@@ -0,0 +1,409 @@
+# How to set up Hashicorp Vault
+
+This guide is only for those who operate the environment.
+
+This how-to will guide you through the deployment and configuration of Hashicorp Vault.
+
+## Create an AKS cluster for vault
+
+`main.tf` contains the resources that will be created, e.g.:
+
+```hcl
+module "resource_group" {
+  source = "../modules/resource_group"
+
+  resource_group_name = var.environment_name
+}
+
+module "aks" {
+  source = "../modules/aks_cluster"
+
+  aks_cluster_name   = "cx-${var.environment_name}-aks"
+  aks_location       = module.resource_group.resource_location
+  aks_resource_group = module.resource_group.resource_group_name
+
+  aks_service_principal_client_id     = var.service_principal_client_id
+  aks_service_principal_client_secret = var.service_principal_client_secret
+  aks_dns_prefix                      = "cx-${var.environment_name}-aks"
+
+  k8s_vm_size            = var.k8s_vm_size
+  k8s_cluster_node_count = var.k8s_cluster_node_count
+}
+
+module "public_ip" {
+  source = "../modules/public_ip"
+
+  public_ip_name      = "cx-${var.environment_name}-public-ip"
+  resource_location   = module.resource_group.resource_location
+  resource_group_name = module.aks.node_resource_group
+}
+
+module "a_record" {
+  source = "../modules/a_record"
+
+  record_name         = "*.${var.environment_name}"
+  target_resource_id  = module.public_ip.id
+  resource_group_name = "cxtsi-demo-shared-rg"
+  zone_name           = "demo.catena-x.net"
+}
+```
+
+`variables.tf` contains all parameters of the resources, e.g.:
+
+```hcl
+variable "environment_name" {
+  description = "Name of the environment to create, i.e. 'core'. Will be used in several resource names"
+  type        = string
+}
+
+variable "service_principal_client_id" {
+  description = "USE TF_VAR_service_principal_client_id! The client ID of the service principal that will be used to create the AKS cluster."
+  type        = string
+}
+
+variable "service_principal_client_secret" {
+  description = "USE TF_VAR_service_principal_client_secret! The secret of the service principal that will be used to create the AKS cluster."
+}
+
+variable "k8s_vm_size" {
+  description = "The Azure VM Size string i.e. Standard_D2_v2 or Standard_D8s_v3"
+  type        = string
+  default     = "Standard_D8s_v3"
+}
+
+variable "k8s_cluster_node_count" {
+  description = "The number of kubernetes nodes to create for the k8s cluster"
+  type        = number
+  default     = 3
+}
+```
+
+`environments/vault.tfvars` contains variables that are specific to the environment and override the ones in `variables.tf`:
+
+```hcl
+environment_name="vault"
+k8s_vm_size="Standard_B2s"
+```
+
+[More information on AKS cluster creation](https://catenax-ng.github.io/docs/internal/how-to-setup-aks-cluster-via-terraform)
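+Because of its non-default name and location, the tfvars file is not picked up automatically and has to be passed
+explicitly; a short sketch:
+
+```shell
+terraform plan -var-file=environments/vault.tfvars
+terraform apply -var-file=environments/vault.tfvars
+```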
+## Deploy Vault
+
+ArgoCD application:
+
+```yaml
+apiVersion: argoproj.io/v1alpha1
+kind: Application
+metadata:
+  name: vault
+  namespace: argocd
+  labels:
+    environment: core
+spec:
+  project: default
+  source:
+    repoURL: 'https://github.com/catenax-ng/k8s-cluster-stack'
+    path: apps/vault
+    targetRevision: 'HEAD'
+    plugin:
+      name: argocd-vault-plugin-helm-args
+      env:
+        - name: AVP_SECRET
+          value: vault-secret
+        - name: helm_args
+          value: '-f values.yaml -f values-vault-vault.yaml'
+  destination:
+    namespace: vault
+    name: vault-cluster
+    server: ''
+  syncPolicy:
+    syncOptions:
+      - Validate=false
+      - CreateNamespace=true
+      - PrunePropagationPolicy=foreground
+      - PruneLast=true
+    retry:
+      limit: 5
+      backoff:
+        duration: 5s
+        factor: 2
+        maxDuration: 3m
+  ignoreDifferences: # https://github.com/argoproj/argo-cd/issues/4276#issuecomment-908455476
+    - group: admissionregistration.k8s.io
+      kind: MutatingWebhookConfiguration
+      jqPathExpressions:
+        - .webhooks[]?.clientConfig.caBundle
+```
+
+Helm chart
+
+Chart.yaml
+
+```yaml
+apiVersion: v2
+name: vault
+description: Hashicorp vault
+type: application
+version: 0.0.2
+appVersion: 0.1
+```
+
+values.yaml
+
+```yaml
+domain: "demo.catena-x.net"
+vault:
+  server:
+    ha:
+      config: |
+        ui = true
+        listener "tcp" {
+          tls_disable = 1
+          address = "[::]:8200"
+          cluster_address = "[::]:8201"
+        }
+        storage "raft" {
+          path = "/vault/data"
+        }
+        service_registration "kubernetes" {}
+        disable_mlock = true
+      enabled: true
+      raft:
+        enabled: true
+        config: |
+          ui = true
+          listener "tcp" {
+            tls_disable = 1
+            address = "[::]:8200"
+            cluster_address = "[::]:8201"
+          }
+          storage "raft" {
+            path = "/vault/data"
+          }
+          service_registration "kubernetes" {}
+          disable_mlock = true
+    extraEnvironmentVars:
+      VAULT_SEAL_TYPE: "azurekeyvault"
+      VAULT_AZUREKEYVAULT_VAULT_NAME: "cx-vault-unseal"
+      VAULT_AZUREKEYVAULT_KEY_NAME: "hashicorp-vault-key"
+      AZURE_TENANT_ID: ""
+      AZURE_CLIENT_ID: ""
+    extraSecretEnvironmentVars:
+      - envName: AZURE_CLIENT_SECRET
+        secretName: azure-vault-secret
+        secretKey: client-secret
+```
+
+Initialization and the first-time unseal are manual actions (for now).
+
+Get the kube config of the vault cluster:
+
+```shell
+az login --use-device-code --tenant
+az account set
+az aks get-credentials --admin --resource-group cx-vault-rg --name cx-vault-aks-services --file $HOME/.kube/cx-vault-admin
+```
+
+Initialize one of the vault instances and save the root token and unseal keys:
+
+`kubectl --kubeconfig=.kube/cx-vault-admin -n vault exec pod/vault-0 -- vault operator init`
+
+Log in with the root token:
+
+`kubectl --kubeconfig=.kube/cx-vault-admin -n vault exec pod/vault-0 -- vault login`
+
+Unseal the first instance by running the following command three times.
+Each time, provide a different unseal key out of the five that are generated during initialization:
+
+`kubectl --kubeconfig=.kube/cx-vault-admin -n vault exec pod/vault-0 -- vault operator unseal`
+
+Display the status of the first instance and note the internal url / ip address of the first node, which will be the leader:
+
+`kubectl --kubeconfig=.kube/cx-vault-admin -n vault exec pod/vault-0 -- vault status`
+
+If the initialization and unseal were successful, you will see the following status:
+
+```
+...
+Initialized    true
+Sealed         false
+...
+```
+
+Join the other (two) instances as followers to the first instance.
+
+Provide the vault root token when prompted:
+
+`kubectl --kubeconfig=.kube/cx-vault-admin -n vault exec pod/vault-1 -- vault login`
+
+Join the first instance using its internal url or ip address:
+
+`kubectl --kubeconfig=.kube/cx-vault-admin -n vault exec pod/vault-1 -- vault operator raft join http://vault-0.vault-internal:8200`
+
+Check the status of the follower instances:
+
+`kubectl --kubeconfig=.kube/cx-vault-admin -n vault exec pod/vault-1 -- vault status`
+
+In case `Sealed` is true, unseal them as well; again, run the command three times, providing three different unseal keys out of the five:
+
+`kubectl --kubeconfig=.kube/cx-vault-admin -n vault exec pod/vault-1 -- vault operator unseal`
+
+Once all instances have been unsealed, no further unseal will be necessary, as Azure Key Vault will take care of it.
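+A compact way to check all three instances at once; a sketch assuming the kubeconfig location from above:
+
+```shell
+# Show initialization and seal status of every vault pod in the cluster
+for i in 0 1 2; do
+  kubectl --kubeconfig="$HOME/.kube/cx-vault-admin" -n vault \
+    exec "pod/vault-$i" -- vault status
+done
+```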
+## Configure Vault
+
+Clone the GitHub repository k8s-cluster-stack:
+
+`git clone https://github.com/catenax-ng/k8s-cluster-stack.git`
+
+Get the approle ID and approle secret ID from the Azure Key Vault cx-vault-unseal secrets using the Azure CLI:
+
+```shell
+az login --use-device-code --tenant
+az account set
+az keyvault secret show --vault-name cx-vault-unseal --name vault-approle-id | jq '.value'
+az keyvault secret show --vault-name cx-vault-unseal --name vault-approle-secret-id | jq '.value'
+```
+
+Alternatively, get the approle ID and approle secret ID from the Azure Key Vault cx-vault-unseal secrets via the Azure portal:
+
+[Approle ID](https://portal.azure.com/#@catenax.onmicrosoft.com/asset/Microsoft_Azure_KeyVault/Secret/https://cx-vault-unseal.vault.azure.net/secrets/vault-approle-id)
+
+[Approle secret ID](https://portal.azure.com/#@catenax.onmicrosoft.com/asset/Microsoft_Azure_KeyVault/Secret/https://cx-vault-unseal.vault.azure.net/secrets/vault-approle-secret-id)
+
+Configure Vault in Terraform code:
+
+main.tf
+
+```hcl
+locals {
+  teams = [
+    "bpdm",
+    "catenax-at-home",
+    "dft",
+    "edc",
+    "esc-backbone",
+    "essential-services",
+    "integrity-demonstrator",
+    "managed-identity-wallets",
+    "material-pass",
+    "portal",
+    "semantics",
+    "team-example",
+    "test-data-generator",
+    "traceability-foss",
+    "traceability-irs"
+  ]
+}
+
+resource "vault_mount" "devsecops-secret-engine" {
+  path        = "devsecops"
+  type        = "kv-v2"
+  description = "Secret engine for DevSecOps team"
+}
+
+resource "vault_mount" "product-team-secret-engines" {
+
+  for_each = toset(local.teams)
+
+  path        = each.key
+  type        = "kv-v2"
+  description = "Secret engine for team ${each.key}"
+}
+
+resource "vault_policy" "product-team-policies" {
+
+  for_each = toset(local.teams)
+
+  name   = each.key
+  policy = <
+```
+
+ [docu](../guides/how-to-prepare-a-private-repo.md)) |
+
+### Examples
+
+```bash
+# Enable Secret Engine
+$ vault secrets enable -version=2 -path=productName kv
+Success! Enabled the kv secrets engine at: productName/
+
+# Create AppRole
+$ vault write auth/approle/role/productName \
+    secret_id_ttl=10m \
+    token_num_uses=10 \
+    token_ttl=20m \
+    token_max_ttl=30m \
+    secret_id_num_uses=40
+Success! Data written to: auth/approle/role/productName
+
+# List existing AppRole definitions
+$ vault list auth/approle/role
+Keys
+----
+AppRole1
+AppRole2
+AppRole3
+
+# Issue Secret Id for Approle (the listed secret_id and secret_id_accessor are example values and don't exist)
+$ vault write -f auth/approle/role/productName/secret-id
+Key                   Value
+---                   -----
+secret_id             d8ff2be9-1ecb-4481-bfae-21071baf42c1
+secret_id_accessor    701e38a4-408d-4db0-94cc-3166c7277daa
+secret_id_ttl         10m
+
+# Read AppRole Id (the listed role_id is an example value and doesn't exist)
+$ vault read auth/approle/role/productName/role-id
+Key        Value
+---        -----
+role_id    89dd5e0d-2991-4d0c-bb1a-a8b12ee7228f
+
+# Create read-write Policy (policy will be read from file full_policy.hcl)
+$ vault policy write productName-rw full_policy.hcl
+Success! Uploaded policy: productName-rw
+
+# Create read-only Policy (policy will be read from file read_policy.hcl)
+$ vault policy write productName-ro read_policy.hcl
+Success! Uploaded policy: productName-ro
+
+# Create GitHub auth mapping (with policy)
+$ vault write auth/github/map/teams/productName value=productName-rw
+Success! Data written to: auth/github/map/teams/productName
+```
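+To check that issued AppRole credentials actually work, you can log in with them; a sketch reusing the example values
+from above:
+
+```shell
+# Returns a client token on success
+vault write auth/approle/login \
+  role_id=89dd5e0d-2991-4d0c-bb1a-a8b12ee7228f \
+  secret_id=d8ff2be9-1ecb-4481-bfae-21071baf42c1
+```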
+## ArgoCD
+
+### Definition
+
+| Item                       | Naming Convention            | Additional description                                                                                                                     |
+|:---------------------------|:-----------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------|
+| Kubernetes Namespace       | _product-productName_        |                                                                                                                                            |
+| ArgoCD Project Name        | _project-productName_        |                                                                                                                                            |
+| ArgoCD Cluster Secret Name | _clusterName-cluster-secret_ | Representing the k8s secret name                                                                                                           |
+| ArgoCD Child Cluster Name  | _clusterName_                | As of now, this is _dev_, _core_ or _hotel-budapest_. It might later also contain _productName_ if a product gets its own cluster          |
+
+### Examples
+
+#### ArgoCD Project And Kubernetes Namespace
+
+Each Catena-X product will get its own ArgoCD project and Kubernetes namespace at the target cluster. Therefore, the k8s
+namespace and the ArgoCD project definition are handled within the same manifest file.
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  // highlight-next-line
+  name: product-[productName]
+---
+apiVersion: argoproj.io/v1alpha1
+kind: AppProject
+metadata:
+  // highlight-next-line
+  name: project-[productName]
+  namespace: argocd
+spec:
+  // highlight-next-line
+  description: Project for team [productName]
+  sourceRepos:
+    - '*'
+  destinations:
+    // highlight-next-line
+    - namespace: product-[productName]
+      server: https://kubernetes.default.svc
+  # Allow all namespaced-scoped resources to be created, except for ResourceQuota, LimitRange, NetworkPolicy
+  namespaceResourceBlacklist:
+    - group: ''
+      kind: ResourceQuota
+    - group: ''
+      kind: LimitRange
+    - group: ''
+      kind: NetworkPolicy
+  roles:
+    - name: team-admin
+      description: All access to applications inside project-[productName]. Read only on project itself
+      policies:
+        // highlight-next-line
+        - p, proj:project-[productName]:team-admin, applications, *, project-[productName]/*, allow
+      groups:
+        - catenax-ng:product-[productName]
+```
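+Such a manifest can be validated before it is committed; a sketch using a hypothetical file name:
+
+```shell
+# Client-side dry run: validates both the Namespace and the AppProject document
+kubectl apply --dry-run=client -f product-example-project.yaml
+```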
+#### ArgoCD Cluster Secret
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  annotations:
+    // highlight-next-line
+    avp.kubernetes.io/path: "devsecops/data/clusters/[clusterName]/k8s"
+  labels:
+    argocd.argoproj.io/secret-type: cluster
+  // highlight-next-line
+  name: [clusterName]-cluster-secret
+type: Opaque
+stringData:
+  name:
+  server:
+  config: |
+    {
+      "bearerToken": "",
+      "tlsClientConfig": {
+        "insecure": false,
+        "caData": ""
+      }
+    }
+```
+
+The highlighted lines contain the naming-convention-relevant placeholder `[clusterName]`. This should be replaced by the
+native cluster name, e.g. _core_ (for the ArgoCD core cluster).
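+The `bearerToken` and `caData` values come from the target cluster itself. A hedged sketch of extracting the CA data
+from a kubeconfig (the service-account token setup is cluster-specific and not shown here):
+
+```shell
+# Print the base64-encoded cluster CA of the current kubeconfig context
+kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'
+```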