
Commit

Merge remote-tracking branch 'origin/master' into feature/dev-123-update-release-engineering-documentation-to-suggest-design
Benbentwo committed Oct 10, 2024
2 parents 8ff1159 + bd5ec68 commit 448b98e
Showing 55 changed files with 2,497 additions and 1,227 deletions.
5 changes: 0 additions & 5 deletions .github/actions/build-website/action.yml
@@ -30,11 +30,6 @@ runs:
role-to-assume: ${{ inputs.iam_role_arn }}
role-session-name: ${{ inputs.iam_role_session_name }}

- name: Checkout Repository
uses: actions/checkout@v4
with:
fetch-depth: 0

- name: Setup Node
uses: actions/setup-node@v4
with:
6 changes: 6 additions & 0 deletions .github/workflows/website-deploy-preview.yml
@@ -29,6 +29,10 @@ permissions:
id-token: write
contents: read

concurrency:
group: "docs-preview-${{ github.event.pull_request.number }}"
cancel-in-progress: true

jobs:
deploy-preview:
runs-on: ubuntu-latest
@@ -42,6 +46,8 @@ jobs:
uses: actions/checkout@v4
with:
fetch-depth: 0
# This workflow runs on pull_request_target, so we need to check out the PR branch
ref: ${{ github.event.pull_request.head.ref }}

- name: Build Website
uses: ./.github/actions/build-website
2 changes: 1 addition & 1 deletion docs/intro/intro.mdx
@@ -107,7 +107,7 @@ With SweetOps you can implement the following complex architectural patterns wit

## What are the alternatives?

The reference archietcture is comparable to various other solutions that bundle ready-to-go Terraform "templates" and offer subscription plans for access to their modules.
The reference architecture is comparable to various other solutions that bundle ready-to-go Terraform "templates" and offer subscription plans for access to their modules.

How does it differentiate from these solutions?

9 changes: 4 additions & 5 deletions docs/jumpstart/action-items.mdx
@@ -63,7 +63,7 @@ Before we can get started, here's the minimum information we need from you.

Please also provision a single test user in your IdP for Cloud Posse to use for testing and add those user credentials to 1Password.

- [AWS Identity Center (SSO) ClickOps](/layers/identity/aws-sso/)
- [Setup AWS Identity Center (SSO)](/layers/identity/aws-sso/)

<Admonition type="caution">
- GSuite does not automatically sync Users and Groups with AWS Identity Center without additional configuration! If using GSuite as an IdP, considering deploying the [ssosync tool](https://github.com/awslabs/ssosync).
@@ -76,10 +76,9 @@ Before we can get started, here's the minimum information we need from you.

If deploying AWS SAML as an alternative to AWS SSO, we will need a separate configuration and metadata file. Again, please refer to the relevant linked guide.

- [GSuite](https://aws.amazon.com/blogs/desktop-and-application-streaming/setting-up-g-suite-saml-2-0-federation-with-amazon-appstream-2-0/): Follow Steps 1 through 7. This document refers to Appstream, but the process will be the same for AWS.
- [Office 365](/layers/identity/tutorials/how-to-setup-saml-login-to-aws-from-office-365)
- [JumpCloud](https://support.jumpcloud.com/support/s/article/getting-started-applications-saml-sso2)
- [Okta](https://help.okta.com/en-us/Content/Topics/DeploymentGuides/AWS/aws-configure-identity-provider.htm)
Please see the following guide and follow the steps to export metadata for your Identity Provider integration. All steps in AWS will be handled by Cloud Posse.

- [Setup AWS SAML](/layers/identity/aws-saml/)
</Step>
</Steps>

@@ -22,7 +22,7 @@ Cloud Posse recommends starting with a **Net-New Organization**

- Only one AWS Control Tower can exist in an organization.

- AWS Control Tower only recenlty became managable with Terraform, and full support is not availble.
- AWS Control Tower only recently became manageable with Terraform, and full support is not available.
Depending on the Scope of Work, Cloud Posse is usually responsible for provisioning accounts with terraform which requires all the same access as Control Tower.

- Member accounts can only be provisioned from the top-level root “organization” account
4 changes: 2 additions & 2 deletions docs/layers/accounts/tutorials/manual-configuration.mdx
@@ -675,7 +675,7 @@ stacks/orgs/(namespace)/(tenant)/identity/global-region.yaml and add the arn:

```
import:
- orgs/e98s/gov/iam/_defaults
- orgs/acme/gov/iam/_defaults
- mixins/region/global-region
#...
@@ -694,7 +694,7 @@ If the auto account id is not known, create an empty list instead:

```
import:
- orgs/e98s/gov/iam/_defaults
- orgs/acme/gov/iam/_defaults
- mixins/region/global-region
#...
135 changes: 135 additions & 0 deletions docs/layers/ecs/tutorials/1password-scim-bridge.mdx
@@ -0,0 +1,135 @@
---
title: "Deploy 1Password SCIM Bridge"
sidebar_label: "1Password SCIM Bridge"
description: "Deploy the 1Password SCIM Bridge for ECS environments"
---

import Intro from "@site/src/components/Intro";
import Steps from "@site/src/components/Steps";
import Step from "@site/src/components/Step";
import StepNumber from "@site/src/components/StepNumber";
import CollapsibleText from "@site/src/components/CollapsibleText";

<Intro>
The 1Password SCIM Bridge is a service that allows you to automate the management of users and groups in 1Password. This guide will walk you through deploying the SCIM Bridge for ECS environments.
</Intro>

## Implementation

The implementation of this is fairly simple. We will generate credentials for the SCIM bridge in 1Password, store them in AWS SSM Parameter Store, deploy the SCIM bridge ECS service, and then finally connect your chosen identity provider.

<Steps>
<Step>
### <StepNumber/> Generate Credentials for your SCIM bridge in 1Password

The first step is to generate credentials for your SCIM bridge in 1Password. We will pass these credentials to Terraform and the ECS task definition to create the SCIM bridge.

<Steps>
1. Log in to your 1Password account
1. Click Integrations in the sidebar
1. Select "Set up user provisioning"
1. Choose "Custom"
1. You should now see the SCIM bridge credentials. We will need the "scimsession" and "Bearer Token" for the next steps.
1. Save these credentials in a secure location (such as 1Password) for future reference
1. Store only the "scimsession" in AWS SSM Parameter Store. This will allow the ECS task definition to access the credential securely. Once the service is running, the server will ask for the bearer token to verify the connection; we will enter it at that time.

<Steps>
- Open the AWS Web Console and navigate to the target account, such as `core-auto`, and the target region, such as `us-west-2`
- Open "AWS Systems Manager" > "Parameter Store"
- Create a new Secure String parameter using the credentials you generated in the previous step: `/1password/scim/scimsession` (a CLI sketch follows these steps as an alternative)
</Steps>
</Steps>
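
As an alternative to the console steps above, a minimal AWS CLI sketch for creating the parameter might look like the following. It assumes the scimsession file was saved locally and that `us-west-2` is the target region; adjust the path, region, and profile for your environment.

```bash
# Hypothetical example; the file path and region are placeholders.
aws ssm put-parameter \
  --name "/1password/scim/scimsession" \
  --type SecureString \
  --value file://scimsession \
  --region us-west-2
```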

There will be additional steps to complete the integration in 1Password, but first we need to deploy the SCIM bridge service.
</Step>

<Step>
### <StepNumber /> Deploy the SCIM bridge ECS Service

The next step is to deploy the SCIM bridge ECS service. We will use Terraform to create the necessary resources with our existing `ecs-service` component. Ensure you have the `ecs-service` component and an `ecs` cluster before proceeding.

If you do not have ECS prerequisites, please see the [ECS layer](/layers/ecs) to create the necessary resources.

<Steps>
1. Create a new stack configuration for the SCIM bridge. The placement of this file will depend on your project structure. For example, you could create a new file such as `stacks/catalog/ecs-services/1password-scim-bridge.yaml` with the following content:

<CollapsibleText type="medium">
```yaml
import:
  - catalog/terraform/services/defaults

components:
  terraform:
    1pass-scim:
      metadata:
        component: ecs-service
      inherits:
        - ecs-service/defaults
      vars:
        enabled: true
        name: 1pass-scim
        containers:
          service:
            name: op_scim_bridge
            image: 1password/scim:v2.9.5
            cpu: 128
            memory: 512
            essential: true
            dependsOn:
              - containerName: redis
                condition: START
            port_mappings:
              - containerPort: 3002
                hostPort: 3002
                protocol: tcp
            map_environment:
              OP_REDIS_URL: redis://localhost:6379
              OP_TLS_DOMAIN: ""
              OP_CONFIRMATION_INTERVAL: "300"
            map_secrets:
              OP_SESSION: "1password/scim/scimsession"
            log_configuration:
              logDriver: awslogs
              options: {}
          redis:
            name: redis
            image: redis:latest
            cpu: 128
            memory: 512
            essential: true
            restart: always
            port_mappings:
              - containerPort: 6379
                hostPort: 6379
                protocol: tcp
            map_environment:
              REDIS_ARGS: "--maxmemory 256mb --maxmemory-policy volatile-lru"
            log_configuration:
              logDriver: awslogs
              options: {}
```
</CollapsibleText>
2. Confirm the `map_secrets` value for `OP_SESSION` matches the AWS SSM Parameter Store path you created previously, and confirm the parameter is in the same account and region as this ECS service component.
3. Deploy the ECS service with Atmos:
```bash
atmos terraform apply 1pass-scim -s core-usw2-auto
```
</Steps>
</Step>

<Step>
### <StepNumber/> Validate the Integration

After deploying the SCIM bridge ECS service, verify that the service is running and accessible. Connect to the VPN (if the ECS service is deployed with a private ALB), navigate to the SCIM bridge URL, and confirm the service responds.

For example, go to `https://1pass-scim.platform.usw1.auto.core.acme-svc.com/`
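
As a quick sanity check from a host on the VPN, you can confirm the endpoint answers over HTTPS. This is only a sketch using the example hostname above; substitute your own SCIM bridge URL.

```bash
# Expect a 2xx or 3xx status code once the SCIM bridge tasks are healthy.
curl -sS -o /dev/null -w "%{http_code}\n" https://1pass-scim.platform.usw1.auto.core.acme-svc.com/
```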
</Step>

<Step>
### <StepNumber/> Connect your Identity Provider

Finally, connect your identity provider to the SCIM bridge. The SCIM bridge URL is the URL you validated in the previous step. Follow the instructions in the 1Password SCIM Bridge documentation, using the Bearer Token you generated in the first step.

</Step>

</Steps>
@@ -0,0 +1,62 @@
---
title: "Decide on Secrets Management for EKS"
sidebar_label: "Secrets Management for EKS"
description: Decide on the secrets management strategy for EKS.
---
import Intro from '@site/src/components/Intro';
import KeyPoints from '@site/src/components/KeyPoints';

<Intro>
We need to decide on a secrets management strategy for EKS. We prefer storing secrets externally, like in AWS SSM Parameter Store, to keep clusters more disposable. If we decide on this, we'll need a way to pull these secrets into Kubernetes.
</Intro>

## Problem

We aim to design our Kubernetes clusters to be disposable and ephemeral, treating them like cattle rather than pets. This influences how we manage secrets. Ideally, Kubernetes should not be the sole source of truth for secrets, though we still want to leverage Kubernetes’ native `Secret` resource. If the cluster experiences a failure, storing secrets exclusively within Kubernetes risks losing access to them. Additionally, keeping secrets only in Kubernetes limits integration with other services.

To address this, several solutions allow secrets to be stored externally (as the source of truth) while still utilizing Kubernetes' `Secret` resources. These solutions, including some open-source tools and recent offerings from Amazon, enhance resilience and interoperability. Any approach must respect IAM permissions and ensure secure secret management for applications running on EKS. We have several options to consider that balance external secret storage with Kubernetes-native functionality.

### Option 1: External Secrets Operator

Use [External Secrets Operator](https://external-secrets.io/latest/) with AWS SSM Parameter Store.

External Secrets Operator is a Kubernetes operator that manages and stores sensitive information in external secret management systems like AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, HashiCorp Vault, and more. It allows you to use these external secret management systems to securely add secrets in your Kubernetes cluster.

Cloud Posse historically recommends using External Secrets Operator with AWS SSM Parameter Store and has existing Terraform modules to support this solution. See the [eks/external-secrets-operator](/components/library/aws/eks/external-secrets-operator/) component.
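
For illustration, a minimal sketch of this option is shown below. It assumes a `SecretStore` backed by SSM Parameter Store and a service account with IAM permission to read the parameter; all names and paths are hypothetical.

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-parameter-store
spec:
  provider:
    aws:
      service: ParameterStore
      region: us-west-2
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-parameter-store
    kind: SecretStore
  target:
    name: app-secrets # the native Kubernetes Secret created by the operator
  data:
    - secretKey: DATABASE_PASSWORD
      remoteRef:
        key: /app/database_password
```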

### Option 2: AWS Secrets Manager secrets with Kubernetes Secrets Store CSI Driver

Use [AWS Secrets and Configuration Provider (ASCP) for the Kubernetes Secrets Store CSI Driver](https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_csi_driver.html). This option allows you to use AWS Secrets Manager secrets as Kubernetes secrets that can be accessed by Pods as environment variables or files mounted in the pods. The ASCP also works with [Parameter Store parameters](https://docs.aws.amazon.com/systems-manager/latest/userguide/integrating_csi_driver.html).

However, Cloud Posse does not have existing Terraform modules for this solution. We would need to build this support.
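
For comparison, a hypothetical `SecretProviderClass` for this option could look like the sketch below, assuming the Secrets Store CSI Driver and the AWS provider (ASCP) are installed on the cluster and the pod's service account has IAM access to the secret.

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-aws-secrets
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "app/database_password"
        objectType: "secretsmanager"
```

Pods then reference this class through a CSI volume with `driver: secrets-store.csi.k8s.io`, which is what exposes the secret as a mounted file (or an optionally synced Kubernetes Secret).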

### Option 3: SOPS Operator

Use [SOPS Operator](https://github.com/isindir/sops-secrets-operator) to manage secrets in Kubernetes. SOPS Operator is a Kubernetes operator that builds on the `sops` project by Mozilla to encrypt the sensitive portions of a `Secret` manifest into a `SopsSecret` resource, and then decrypt and provision `Secrets` in the Kubernetes cluster.

1. **Mozilla SOPS Encryption**: Mozilla SOPS (Secrets OPerationS) is a tool that encrypts Kubernetes secret manifests, allowing them to be stored securely in Git repositories. SOPS supports encryption using a variety of key management services. Most importantly, it supports AWS KMS which enables IAM capabilities for native integration with AWS.

2. **GitOps-Compatible Secret Management**: In a GitOps setup, storing plain-text secrets in Git poses security risks. Using SOPS, we can encrypt sensitive data in Kubernetes secret manifests while keeping the rest of the manifest in clear text. This allows us to store encrypted secrets in Git, track changes with diffs, and maintain security while benefiting from GitOps practices like version control, auditability, and CI/CD pipelines.

3. **AWS KMS Integration**: SOPS uses AWS KMS to encrypt secrets with customer-managed keys (CMKs), ensuring only authorized users—based on IAM policies—can decrypt them. The encrypted secret manifests can be safely committed to Git, with AWS securely managing the keys. Since it's IAM-based, it integrates seamlessly with STS tokens, allowing secrets to be decrypted inside the cluster without hardcoded credentials.

4. **Kubernetes Operator**: The [SOPS Secrets Operator](https://github.com/isindir/sops-secrets-operator) automates the decryption and management of Kubernetes secrets. It monitors a `SopsSecret` resource containing encrypted secrets. When a change is detected, the operator decrypts the secrets using AWS KMS and generates a native Kubernetes `Secret`, making them available to applications in the cluster. AWS KMS uses envelope encryption to manage the encryption keys, ensuring that secrets remain securely encrypted at rest.

5. **Improved Disaster Recovery and Security**: By storing the source of truth for secrets outside of Kubernetes (e.g., in Git), this setup enhances disaster recovery, ensuring secrets remain accessible even if the cluster is compromised or destroyed. While secrets are duplicated across multiple locations, security is maintained by using IAM for encryption and decryption outside Kubernetes, and Kubernetes' native Role-Based Access Control (RBAC) model for managing access within the cluster. This ensures that only authorized entities, both external and internal to Kubernetes, can access the secrets.

The SOPS Operator combines the strengths of Mozilla SOPS and AWS KMS, allowing you to:
- Encrypt secrets using KMS keys.
- Store encrypted secrets in Git repositories.
- Automatically decrypt and manage secrets in Kubernetes using the SOPS Operator.

This solution is ideal for teams following GitOps principles, offering secure, external management of sensitive information while utilizing Kubernetes' secret management capabilities. However, the redeployment required for secret rotation can be heavy-handed, potentially leading to a period where services are still using outdated or invalid secrets. This could cause services to fail until the new secrets are fully rolled out.
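
To make the encryption step concrete, a hedged sketch of encrypting a secret manifest with a customer-managed KMS key is shown below. The key ARN and file names are placeholders, and only the `data`/`stringData` values are encrypted so the file can be committed to Git.

```bash
# Encrypt the sensitive fields of a Kubernetes Secret manifest with AWS KMS.
sops --encrypt \
  --kms "arn:aws:kms:us-west-2:111111111111:key/EXAMPLE-KEY-ID" \
  --encrypted-regex '^(data|stringData)$' \
  secret.yaml > secret.enc.yaml
```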

## Recommendation

We recommend using the External Secrets Operator with AWS SSM Parameter Store. This is a well-tested solution that we have used in the past. We have existing Terraform modules to support this solution.

However, we are also evaluating AWS Secrets Manager secrets with the Kubernetes Secrets Store CSI Driver. This is the AWS-supported option and may be a better long-term solution. We will build the required Terraform component to support it.

## Consequences

We will develop the `eks/secrets-store-csi-driver` component using the [Secrets Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/getting-started/installation).
4 changes: 2 additions & 2 deletions docs/layers/eks/foundational-platform.mdx
@@ -16,7 +16,7 @@ We first deploy the foundation for the cluster. The `eks/cluster` component depl
including Auth Config mapping. We do not deploy any nodes with the cluster initially. Then once EKS is available, we
connect to the cluster and start deploying resources. First is Karpenter. We deploy the Karpenter chart on a Fargate
node and the IAM service role to allow Karpenter to purchase Spot Instances. Karpenter is the only resources that will
be deployed to Fargate. Then we deploy Karpenter Provisioners using the CRD created by the initial Karpenter component.
be deployed to Fargate. Then we deploy Karpenter Node Pools using the CRD created by the initial Karpenter component.
These provisioners will automatically launch and scale the cluster to meet our demands. Next we deploy `idp-roles` to
manage custom roles for the cluster, and deploy `metrics-server` to provide access to resource metrics.
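
As a rough illustration of what such a node pool looks like, a hypothetical Karpenter `NodePool` manifest (v1beta1 API; the requirement values are examples only, not the values this component uses) might be:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        name: default
  limits:
    cpu: "100"
```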

@@ -49,7 +49,7 @@ those implementations in follow up topics. For details, see the
EKS Cluster, including IAM role to Kubernetes Auth Config mapping.
- [`eks/karpenter`](/components/library/aws/eks/karpenter/): Installs the Karpenter chart on the EKS cluster and
prepares the environment for provisioners.
- [`eks/karpenter-provisioner`](/components/library/aws/eks/karpenter-node-pool/): Deploys Karpenter Provisioners
- [`eks/karpenter-provisioner`](/components/library/aws/eks/karpenter-node-pool/): Deploys Karpenter Node Pools
using CRDs made available by `eks/karpenter`
- [`iam-service-linked-roles`](/components/library/aws/iam-service-linked-roles/): Provisions
[IAM Service-Linked](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html) roles. These