Humanitec Red Hat OpenShift Reference Architecture

TL;DR

Skip the theory? Go here to spin up your Humanitec Red Hat OpenShift Reference Architecture Implementation.

Follow this learning path to master your Internal Developer Platform.

Building an Internal Developer Platform (IDP) can come with many challenges. To give you a head start, we’ve created a set of reference architectures based on hundreds of real-world setups. These architectures, described in code, provide a starting point to build your own IDP within minutes, along with customization capabilities to ensure your platform meets the unique needs of your users (developers).

The initial version of this reference architecture was presented by Mike Gatto, Sr. DevOps Engineer, McKinsey, and Stephan Schneider, Digital Expert Associate Partner, McKinsey, at PlatformCon 2023.

What is an Internal Developer Platform (IDP)?

An Internal Developer Platform (IDP) is the sum of all the tech and tools that a platform engineering team binds together to pave golden paths for developers. IDPs lower cognitive load across the engineering organization and enable developer self-service, without abstracting away context from developers or making the underlying tech inaccessible. Well-designed IDPs follow a Platform as a Product approach, where a platform team builds, maintains, and continuously improves the IDP, following product management principles and best practices.

Understanding the different planes of the IDP reference architecture

When McKinsey originally published the reference architecture, they proposed five planes that describe the different parts of a modern Internal Developer Platform (IDP).

[Diagram: RHOS reference architecture (Humanitec)]

Developer Control Plane

This plane is the primary configuration layer and interaction point for the platform users. It harbors the following components:

  • A Version Control System. GitHub is a prominent example, but this can be any system that contains two types of repositories:
    • Application Source Code
    • Platform Source Code, e.g. using Terraform
  • Workload specifications. The reference architecture uses Score (see the sketch after this list).
  • A portal for developers to interact with. It can be the Humanitec Portal, but you might also use Backstage or any other portal on the market.
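
To make the workload abstraction concrete, here is a minimal sketch of a Score file. It is illustrative only: the file name, workload name, image, and resource are hypothetical placeholders, not artifacts of this repository.

    cat > score.yaml <<'EOF'
    apiVersion: score.dev/v1b1
    metadata:
      name: my-workload                           # hypothetical workload name
    containers:
      demo:
        image: registry.example.com/demo:latest   # placeholder image
    resources:
      db:
        type: postgres                            # abstract dependency, resolved by the platform
    EOF

The developer declares only an abstract dependency (a postgres resource); how that resource is provisioned is decided by the platform, not by the workload.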

Integration and Delivery Plane

This plane is about building and storing the image, creating app and infra configs from the abstractions provided by the developers, and deploying the final state. It’s where the domains of developers and platform engineers meet.

This plane usually contains four different tools:

  • A CI pipeline. It can be GitHub Actions or any CI tooling on the market.
  • The image registry holding your container images. Again, this can be any registry on the market.
  • An orchestrator, which in our example is the Humanitec Platform Orchestrator.
  • The CD system, which can be the Platform Orchestrator’s deployment pipeline capabilities, an external system triggered by the Orchestrator using a webhook, or a setup in tandem with GitOps operators like ArgoCD.

Monitoring and Logging Plane

The integration of monitoring and logging systems varies greatly from setup to setup. This plane, however, is not a focus of the reference architecture.

Security Plane

The security plane of the reference architecture is focused on the secrets management system. The secrets manager stores configuration information such as database passwords, API keys, or TLS certificates needed by an Application at runtime. It allows the Platform Orchestrator to reference the secrets and inject them into the Workloads dynamically. You can learn more about secrets management and integration with external secrets managers here.

The reference architecture sample implementations use the secrets store attached to the Humanitec SaaS system.

Resource Plane

This plane is where the actual infrastructure exists including clusters, databases, storage, or DNS services. The configuration of the Resources is managed by the Platform Orchestrator which dynamically creates app and infrastructure configurations with every deployment and creates, updates, or deletes dependent Resources as required.

How to spin up your Humanitec Red Hat OpenShift Reference Architecture

This repo contains an implementation of part of the Humanitec Reference Architecture for an Internal Developer Platform, including two different Portal solutions: Red Hat Developer Hub and Backstage.

By default, the following will be provisioned:

  • Resource Definitions in Humanitec for:
    • Kubernetes Cluster
  • AWS IAM objects for using the Elastic Container Registry (ECR) and AWS Secrets Manager

Prerequisites

  • A Humanitec account with the Administrator role in an Organization. Get a free trial if you are just starting.
  • A Red Hat OpenShift cluster
  • An AWS account
  • AWS CLI installed locally
  • OpenShift CLI installed locally
  • terraform installed locally

The OpenShift Reference Architecture does not make any assumptions about where your OpenShift platform runs; the only requirement is that the cluster API server is publicly accessible.

The Reference Architecture uses AWS ECR to store container images and AWS Secrets Manager to store secrets and therefore requires an AWS account.
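
Before running anything, you can confirm that the CLIs are installed and authenticated. These commands are a convenience check, not part of the repository:

    aws sts get-caller-identity    # verifies the AWS CLI is logged in and prints your account ID
    oc whoami --show-server        # prints the API server URL you are logged in to
    terraform version              # verifies terraform is installed (>= 1.3.0 is required)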

Usage

Note: Using this Reference Architecture Implementation will incur costs for your infrastructure.

It is recommended that you fully review the code before you run it to ensure you understand the impact of provisioning this infrastructure. Humanitec does not take responsibility for any costs incurred or damage caused when using the Reference Architecture Implementation.

This reference architecture implementation uses Terraform. You will need to do the following:

  1. Fork this GitHub repo, clone it to your local machine and navigate to the root of the repository.

  2. Set the required input variables (see Required input variables).

  3. Ensure you are logged in with aws. (Follow the quickstart if you aren't)

  4. Ensure you are logged in with oc. (Follow Logging in to the OpenShift CLI using a web browser if you aren't)

  5. Set the HUMANITEC_TOKEN environment variable to an appropriate Humanitec API token with the Administrator role on the Humanitec Organization.

    For example:

    export HUMANITEC_TOKEN="my-humanitec-api-token"
  6. Run terraform:

    terraform init
    terraform plan
    terraform apply

Required input variables

Terraform reads variables by default from a file called terraform.tfvars. You can create your own file by renaming the terraform.tfvars.example file in the root of the repo and then filling in the missing values.

You can find details about each of those variables, as well as additional supported variables, under Inputs.
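
For illustration, a filled-in terraform.tfvars covering the required variables might look like the following, here created from the shell. Every value is a hypothetical placeholder; replace them with your own:

    cat > terraform.tfvars <<'EOF'
    apiserver      = "https://api.my-cluster.example.com:6443"
    aws_account_id = "111122223333"
    aws_region     = "us-east-1"
    basedomain     = "apps.my-cluster.example.com"
    kubeconfig     = "~/.kube/config"
    kubectx        = "my-openshift-context"
    EOF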

Verify your result

Check for the existence of key elements of the reference architecture. This covers only a subset of all elements; for a complete list of what was installed, review the Terraform code.

  1. Set the HUMANITEC_ORG environment variable to the ID of your Humanitec Organization (must be all lowercase):

    export HUMANITEC_ORG="my-humanitec-org"
  2. Verify the existence of the Resource Definition for the OpenShift cluster in your Humanitec Organization:

    curl -s https://api.humanitec.io/orgs/${HUMANITEC_ORG}/resources/defs/ref-arch \
      --header "Authorization: Bearer ${HUMANITEC_TOKEN}" \
      | jq .id,.type

    This should output:

    "ref-arch"
    "k8s-cluster"
  3. Verify the existence of the Humanitec K8s Service Account:

    kubectl -n humanitec-system get serviceaccounts humanitec

    This should output:

    NAME        SECRETS   AGE
    humanitec   1         <>

Enable a portal (optional)

Portal Prerequisites

Both portal solutions require a GitHub connection, which in turn needs:

  • A GitHub organization and permission to create new repositories in it. Go to https://github.com/account/organizations/new to create a new org (the "Free" option is fine). Note: it has to be an organization; a free personal account is not sufficient.

  • Create a classic GitHub personal access token with the repo, workflow, delete_repo, and admin:org scopes here.

  • Set the GITHUB_TOKEN environment variable to your token.

    export GITHUB_TOKEN="my-github-token"
  • Set the GITHUB_ORG_ID environment variable to your GitHub organization ID.

    export GITHUB_ORG_ID="my-github-org-id"
  • Install the GitHub App for Backstage into your GitHub organization

    • Run docker run --rm -it -e GITHUB_ORG_ID -v $(pwd):/pwd -p 127.0.0.1:3000:3000 ghcr.io/humanitec-architecture/create-gh-app (image source) and follow the instructions:
      • “All repositories” ~> Install
      • “Okay, […] was installed on the […] account.” ~> You can close the window and stop the server.

Portal Usage

  • Enable with_backstage or with_rhdh inside your terraform.tfvars and configure the additional variables that are required for Backstage and RHDH (see the sketch after this list).
  • Perform another terraform apply
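
For illustration, enabling Red Hat Developer Hub could look like the following; the values are hypothetical placeholders, and the authoritative list of required variables is under Inputs:

    cat >> terraform.tfvars <<'EOF'
    with_rhdh        = true                  # or: with_backstage = true
    github_org_id    = "my-github-org-id"
    humanitec_org_id = "my-humanitec-org"
    EOF
    terraform apply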

Verify portal setup

Backstage
  • Fetch the DNS entry of the Humanitec Application backstage, Environment development.
  • Open the host in your browser.
  • Click the "Create" button and scaffold your first application.
Red Hat Developer Hub
  • Get the host of your Developer Hub instance via kubectl -n rhdh get routes
  • Open the host in your browser.
  • Click the "Create" button and scaffold your first application.

Enable ArgoCD (optional)

ArgoCD Prerequisites

ArgoCD requires a GitHub connection, which in turn needs:

  • A GitHub organization and permission to create new repositories in it. Go to https://github.com/account/organizations/new to create a new org (the "Free" option is fine). Note: it has to be an organization; a free personal account is not sufficient.

  • Create a classic GitHub personal access token with the repo, workflow, delete_repo, and admin:org scopes here.

  • Set the GITHUB_TOKEN environment variable to your token.

    export GITHUB_TOKEN="my-github-token"
  • Set the GITHUB_ORG_ID environment variable to your GitHub organization ID.

    export GITHUB_ORG_ID="my-github-org-id"

ArgoCD Usage

  • Enable with_argocd inside your terraform.tfvars and configure the additional variables that are required for ArgoCD (see the sketch after this list).
  • Perform another terraform apply
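
For illustration, with hypothetical placeholder values (see Inputs for the ArgoCD-related variables):

    cat >> terraform.tfvars <<'EOF'
    with_argocd               = true
    github_manifests_username = "my-github-username"
    github_manifests_password = "my-github-token"    # e.g. the classic PAT created above
    EOF
    terraform apply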

Verify ArgoCD setup

  • Run kubectl -n argocd get routes
  • Open the host in your browser.
  • Select "Log In Via OpenShift"
  • Deploy a Humanitec Application, and within a minute you should see a new Application being synced in ArgoCD.

Cleaning up

Once you are finished with the reference architecture, you can remove all provisioned infrastructure and the resource definitions created in Humanitec with the following steps:

  1. Delete all Humanitec Applications scaffolded using the Portal (if you used one), but not the backstage app itself.

  2. Ensure you are (still) logged in with aws.

  3. Ensure you still have the HUMANITEC_TOKEN environment variable set to an appropriate Humanitec API token with the Administrator role on the Humanitec Organization.

  4. Run terraform:

    terraform destroy

Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.3.0 |
| aws | ~> 5.17 |
| github | ~> 5.38 |
| helm | ~> 2.13 |
| humanitec | ~> 1.0 |
| kubectl | ~> 2.0 |
| kubernetes | ~> 2.30 |
| random | ~> 3.5 |
| time | ~> 0.11 |
| tls | ~> 4.0 |

Providers

| Name | Version |
|------|---------|
| humanitec | ~> 1.0 |

Modules

| Name | Source | Version |
|------|--------|---------|
| base | ./modules/base | n/a |
| cd_argocd | ./modules/cd-argocd | n/a |
| github | github.com/humanitec-architecture/reference-architecture-aws | v2024-06-11//modules/github |
| github_app | github.com/humanitec-architecture/shared-terraform-modules | v2024-06-10//modules/github-app |
| humanitec_k8s_connection | ./modules/humanitec-k8s-connection | n/a |
| portal_backstage | ./modules/portal-backstage | n/a |
| portal_rhdh | ./modules/portal-rhdh | n/a |

Resources

| Name | Type |
|------|------|
| humanitec_service_user_token.deployer | resource |
| humanitec_user.deployer | resource |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| apiserver | The API server URL of your OpenShift cluster | string | n/a | yes |
| aws_account_id | AWS Account (ID) to use | string | n/a | yes |
| aws_region | AWS region | string | n/a | yes |
| basedomain | Base domain | string | n/a | yes |
| kubeconfig | Path to your kubeconfig file | string | n/a | yes |
| kubectx | The context to use from your kubeconfig to connect Terraform providers to the cluster | string | n/a | yes |
| environment | Environment | string | "development" | no |
| github_manifests_password | GitHub password to pull & push manifests (required for ArgoCD) | string | null | no |
| github_manifests_repo | GitHub repository for manifests (required for ArgoCD) | string | "humanitec-app-manifests" | no |
| github_manifests_username | GitHub username to pull & push manifests (required for ArgoCD) | string | null | no |
| github_org_id | GitHub org id (required for Backstage and RHDH) | string | null | no |
| humanitec_org_id | Humanitec Organization ID | string | null | no |
| with_argocd | Deploy ArgoCD | bool | false | no |
| with_backstage | Deploy Backstage | bool | false | no |
| with_rhdh | Deploy Red Hat Developer Hub | bool | false | no |