Update example READMEs #681

Merged
merged 27 commits into master from 676-example-readmes-are-out-of-date
Aug 23, 2022
Changes from all commits
Commits
27 commits
3aa96db
Update example READMEs (initial)
Racer159 Aug 17, 2022
74b1410
Refine docs for docusaurus
Racer159 Aug 17, 2022
7671ada
Fix walkthroughs
Racer159 Aug 17, 2022
480582b
Merge branch 'master' into 676-example-readmes-are-out-of-date
Racer159 Aug 17, 2022
c58472d
Update the git-data Makefile
Racer159 Aug 17, 2022
bcaf32a
Fix tests with renamed git-data example
Racer159 Aug 18, 2022
b25dd76
Merge branch 'master' into 676-example-readmes-are-out-of-date
jeff-mccoy Aug 22, 2022
e44b472
Merge branch 'master' into 676-example-readmes-are-out-of-date
Racer159 Aug 22, 2022
57b4f70
Merge branch 'master' into 676-example-readmes-are-out-of-date
Racer159 Aug 22, 2022
ad5d098
Update logging png image
Racer159 Aug 22, 2022
364df95
Resolve bad markdown syntax
Racer159 Aug 22, 2022
365df1d
Test autogenerated index
Racer159 Aug 22, 2022
d56fbb7
Cleanup example headings and paths
Racer159 Aug 22, 2022
aa412dd
Tell people to click edit instead of a direct link
Racer159 Aug 22, 2022
0b75a49
Cleanup example note
Racer159 Aug 22, 2022
d73c729
Cleanup example link verbiage
Racer159 Aug 22, 2022
e7b0b16
Fix examples link
Racer159 Aug 22, 2022
2e825ed
Remove Examples to prevent double
Racer159 Aug 22, 2022
268d8e5
Fix CONTRIBUTING.md link
Racer159 Aug 22, 2022
1416a84
Fix packages links
Racer159 Aug 22, 2022
7bdb8a8
Desiccate the contributing guide
Racer159 Aug 22, 2022
c14f368
Merge branch 'master' into 676-example-readmes-are-out-of-date
jeff-mccoy Aug 23, 2022
85d8945
Address PR feedback
Racer159 Aug 23, 2022
beb92e0
Resolve bad links
Racer159 Aug 23, 2022
80a2c87
Fix images in user guide and overview
Racer159 Aug 23, 2022
95c6830
use zarf tools monitor instead of kubectl
Racer159 Aug 23, 2022
df67160
use zarf tools monitor instead of kubectl
Racer159 Aug 23, 2022
2 changes: 1 addition & 1 deletion Makefile
@@ -102,7 +102,7 @@ build-examples:

@test -s ./build/zarf-package-data-injection-demo-$(ARCH).tar || $(ZARF_BIN) package create examples/data-injection -o build -a $(ARCH) --confirm

@test -s ./build/zarf-package-gitops-service-data-$(ARCH).tar.zst || $(ZARF_BIN) package create examples/gitops-data -o build -a $(ARCH) --confirm
@test -s ./build/zarf-package-git-data-$(ARCH).tar.zst || $(ZARF_BIN) package create examples/git-data -o build -a $(ARCH) --confirm

@test -s ./build/zarf-package-test-helm-releasename-$(ARCH).tar.zst || $(ZARF_BIN) package create examples/helm-alt-release-name -o build -a $(ARCH) --confirm
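
For context, a rough sketch of exercising this Makefile target locally; the `ARCH` value and `ZARF_BIN` path below are illustrative assumptions, not values taken from this Makefile:

```bash
# Build the example packages (including the renamed git-data example) into ./build
# ARCH and ZARF_BIN are Make variables, so they can usually be overridden on the command line
make build-examples ARCH=amd64 ZARF_BIN=./build/zarf
```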

4 changes: 2 additions & 2 deletions README.md
@@ -49,9 +49,9 @@ From the docs you can learn more about [installation](https://docs.zarf.dev/docs

To contribute, please see our [Contributor Guide](https://docs.zarf.dev/docs/developer-guide/contributor-guide). Below is an architectural diagram showing the basics of how Zarf functions which you can read more about [here](https://docs.zarf.dev/docs/developer-guide/nerd-notes).

![Architecture Diagram](./docs/architecture.drawio.svg)
![Architecture Diagram](./docs/.images/architecture.drawio.svg)

[Source DrawIO](docs/architecture.drawio.svg)
[Source DrawIO](docs/.images/architecture.drawio.svg)

## Special Thanks

2 changes: 1 addition & 1 deletion adr/0001-record-architecture-decisions.md
@@ -16,7 +16,7 @@ We need to record the architectural decisions made on this project.

## Decision

We will use Architecture Decision Records, as [described by Michael Nygard](http://thinkrelevance.com/blog/2011/11/15/documenting-architecture-decisions), with a couple of small tweaks. See the [Documentation section in the Contributor guide](../../CONTRIBUTING.md#documentation) for full details.
We will use Architecture Decision Records, as [described by Michael Nygard](http://thinkrelevance.com/blog/2011/11/15/documenting-architecture-decisions), with a couple of small tweaks. See the [Documentation section in the Contributor guide](../CONTRIBUTING.md#documentation) for full details.

## Consequences

2 changes: 1 addition & 1 deletion adr/0002-moving-e2e-tests-away-from-terratest.md
@@ -10,7 +10,7 @@ Accepted

In previous releases of Zarf, the creation of the initialization package at the core of many of our E2E tests required repository secrets to log in to registry1. Since this is an open-source project, anyone could submit a change to one of our GitHub workflows that could steal our secrets. In order to protect our secrets from any bad actors, we used [peter-evans/slash-command-dispatch@v2](https://github.com/peter-evans/slash-command-dispatch) so that only a maintainer would have the ability to run the E2E tests when a PR is submitted for review.

In the current version of Zarf (v0.15) images from registry1 are no longer needed to create the zarf-init-<arch>.tar.zst. This means, given our current span of E2E tests, we no longer need to use repository secrets when running tests. This gives us the ability to reassess the way we do our E2E testing.
In the current version of Zarf (v0.15) images from registry1 are no longer needed to create the zarf-init-{{arch}}.tar.zst. This means, given our current span of E2E tests, we no longer need to use repository secrets when running tests. This gives us the ability to reassess the way we do our E2E testing.

When considering how to handle the tests, some of the important additions we were considering were:
1. Ability to test against different kubernetes distributions
4 changes: 2 additions & 2 deletions adr/0005-mutating-webhook.md
@@ -8,14 +8,14 @@ Accepted

## Context

Currently Zarf leverages [Helm Post Rendering](https://helm.sh/docs/topics/advanced/#post-rendering) to mutate image paths and secrets for K8s to use the internal [Zarf Registry](../../packages/zarf-registry/README.md). This works well for simple K8s deployments where Zarf is performing the actual manifest apply but fails when using a secondary gitops tools suchs as [Flux](https://github.com/fluxcd/flux2), [ArgoCD](https://argo-cd.readthedocs.io/en/stable/), etc. At that point, Zarf is unable to provide mutation and it is dependent on the package author to do the mutations themselves using rudimentary templating. Further, this issue also exists when for CRDs that references the [git server](../../packages/gitea/README.md). A `zarf prepare` command was added previously to make this less painful, but it still requires additional burden on package authors to do something we are able to prescribe in code.
Currently Zarf leverages [Helm Post Rendering](https://helm.sh/docs/topics/advanced/#post-rendering) to mutate image paths and secrets for K8s to use the internal [Zarf Registry](../packages/zarf-registry/). This works well for simple K8s deployments where Zarf is performing the actual manifest apply but fails when using secondary gitops tools such as [Flux](https://github.com/fluxcd/flux2), [ArgoCD](https://argo-cd.readthedocs.io/en/stable/), etc. At that point, Zarf is unable to provide mutation and it is dependent on the package author to do the mutations themselves using rudimentary templating. Further, this issue also exists for CRDs that reference the [git server](../packages/gitea/). A `zarf prepare` command was added previously to make this less painful, but it still places additional burden on package authors to do something we are able to prescribe in code.

## Decision

A [mutating webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) is standard practice in K8s and there [are a lot of them](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#what-does-each-admission-controller-do). Using the normal Zarf component structure and deployment strategy we can leverage a mutating webhook to perform automatic imagePullSecret binding and image path updates as well as add additional as-needed mutations such as updating the [GitRepository](https://fluxcd.io/docs/components/source/gitrepositories/) CRD with the appropriate secret and custom URL for the git server if someone is using Flux.

## Consequences

While deploying the webhook will greatly reduce the package development burden, the nature of how helm manages resources still means we will have to be careful how we apply secrets that could collide with secrets deployed by helm with other tools. Additionally, to keep the webhook simple we are foregoing any side-effects in this iteration such as creating secrets on-demand in a namespace as it is created. Adding side effects carries with it the need to roll those back on failure, handle additional RBAC in the cluster and integrate with the K8s API in the webhook. Therefore, some care will have to be taken for now with how registry and git secrets are generated in a namespace. For example, in the case of [Big Bang](https://repo1.dso.mil/platform-one/big-bang/bigbang) these secrets can be created by those helm charts if we pass in the proper configuration.
While deploying the webhook will greatly reduce the package development burden, the nature of how helm manages resources still means we will have to be careful how we apply secrets that could collide with secrets deployed by helm with other tools. Additionally, to keep the webhook simple we are foregoing any side-effects in this iteration such as creating secrets on-demand in a namespace as it is created. Adding side effects carries with it the need to roll those back on failure, handle additional RBAC in the cluster and integrate with the K8s API in the webhook. Therefore, some care will have to be taken for now with how registry and git secrets are generated in a namespace. For example, in the case of [Big Bang](https://repo1.dso.mil/platform-one/big-bang/bigbang) these secrets can be created by those helm charts if we pass in the proper configuration.

Another benefit of this approach is another layer of security for Zarf clusters. The Zarf Agent will act as an intermediary, not allowing images that are not in the Zarf Registry or git repos that are not stored in the internal git server.
2 changes: 1 addition & 1 deletion docs/.images/architecture.drawio.svg
10 changes: 5 additions & 5 deletions docs/0-zarf-overview.md
@@ -80,8 +80,8 @@ Given Zarf's being a "k8s cluster to serve _other_ k8s clusters", the following

Zarf is intended for use in a software deployment process that looks something like this:

<a href="../.images/what-is-zarf/how-to-use-it.png">
<img alt="how it works" src="../.images/what-is-zarf/how-to-use-it.png" heigth="262" />
<a target="\_blank" href={require('./.images/what-is-zarf/how-to-use-it.png').default}>
<img alt="diagram showing how Zarf works" src={require('./.images/what-is-zarf/how-to-use-it.png').default} heigth="262" />
</a>

### (0) - Connect to Internet
@@ -94,7 +94,7 @@ Zarf can pull from lots of places like Docker Hub, Iron Bank, GitHub, local file

This part of the process requires access to the internet. You feed the `zarf` binary a "recipe" (`zarf.yaml`) and it makes itself busy downloading, packing, and compressing the software you asked for. It outputs a single, ready-to-move distributable (cleverly) called "a package".

Find out more about what that looks like in the [Building a package](.//13-walkthroughs/0-creating-a-zarf-package.md) section.
Find out more about what that looks like in the [Building a package](./13-walkthroughs/0-creating-a-zarf-package.md) section.
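
As a minimal sketch of this step (the example path is an illustration taken from this repository's `examples/` directory):

```bash
# Point `zarf package create` at a directory containing a zarf.yaml "recipe";
# --confirm skips the interactive confirmation prompt
zarf package create examples/game --confirm

# The result is a single archive (e.g. zarf-package-<name>-<arch>.tar.zst)
# that can be carried across the airgap
```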

### (2) - Ship the Package to system location

@@ -114,13 +114,13 @@ Zarf allows the package to either deploy to an existing K8's cluster or can spin

### Appliance Cluster Mode

![Appliance Mode Diagram](../.images/what-is-zarf/appliance-mode.png)
![Appliance Mode Diagram](.images/what-is-zarf/appliance-mode.png)

In the simplest usage scenario, your package consists of a single application (plus dependencies) and you configure the Zarf cluster to serve your application directly to end users. This mode of operation is called "Appliance Mode"— because it's small & self-contained like a kitchen appliance—and it is intended for use in environments where you want to run k8s-native tooling but need to keep a small footprint (i.e. single-purpose / constrained / "edge" environments).

### Utility Cluster Mode

![Appliance Mode Diagram](../.images/what-is-zarf/utility-mode.png)
![Utility Mode Diagram](.images/what-is-zarf/utility-mode.png)

In the more complex use case, your package consists of updates for many apps / systems and you configure the Zarf cluster to propagate updates to downstream systems rather than to serve users directly. This mode of operation is called "Utility Mode"—as its main job is to add utility to other clusters—and it is intended for use in places where you want to run independent, full-service production environments (ex. your own Big Bang cluster) but you need help tracking, caching & disseminating system / dependency updates.

9 changes: 5 additions & 4 deletions docs/13-walkthroughs/1-initializing-a-k8s-cluster.md
@@ -8,7 +8,6 @@ Before you're able to deploy an application package to a cluster, you need to in
1. Zarf binary installed on your $PATH: ([Install Instructions](../3-getting-started.md#installing-zarf))
1. An init-package built/downloaded: ([init-package Build Instructions](./0-creating-a-zarf-package.md)) or ([Download Location](https://github.com/defenseunicorns/zarf/releases))
1. A Kubernetes cluster to work with: ([Local k8s Cluster Instructions](./#setting-up-a-local-kubernetes-cluster))
2. kubectl: ([kubectl Install Instructions](https://kubernetes.io/docs/tasks/tools/#kubectl))

## Running the init Command
<!-- TODO: Should add a note about user/pass combos that get printed out when done (and how to get those values again later) -->
@@ -31,7 +30,7 @@ zarf init # Run the initialization command
### Confirming the Deployment
Just like how we got a prompt when creating a package in the prior walkthrough, we will also get a prompt when deploying a package.
![Confirm Package Deploy](../.images/walkthroughs/package_deploy_confirm.png)
Since there are container images within our init-package, we also get a notification about the [Software Bill of Materials (SBOM)](https://www.ntia.gov/SBOM) Zarf included for our package with a file location of where we could view the [SBOM Ddashoard](../8-dashboard-ui/1-sbom-dashboard.md) if interested incase we were interested in viewing it.
Since there are container images within our init-package, we also get a notification about the [Software Bill of Materials (SBOM)](https://www.ntia.gov/SBOM) Zarf included for our package, along with a file location where we can view the [SBOM Dashboard](../7-dashboard-ui/1-sbom-dashboard.md) in case we are interested in viewing it.

<br />

@@ -45,9 +44,11 @@ The init package comes with a few optional components that can be installed. For

### Validating the Deployment
<!-- TODO: Would a screenshot be helpful here? -->
After the `zarf init` command is done running, you should see a few new pods in the Kubernetes cluster.
After the `zarf init` command is done running, you should see a few new `zarf` pods in the Kubernetes cluster.
```bash
kubectl get pods -n zarf # Expected output is a short list of pods
zarf tools monitor

# Note you can press `0` if you want to see all namespaces and CTRL-C to exit
```

<br />
3 changes: 1 addition & 2 deletions docs/13-walkthroughs/2-deploying-doom.md
@@ -7,14 +7,13 @@ In this walkthrough, we are going to deploy a fun application onto your cluster.
1. The [Zarf](https://github.com/defenseunicorns/zarf) repository cloned: ([`git clone` Instructions](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository))
1. Zarf binary installed on your $PATH: ([Install Instructions](../3-getting-started.md#installing-zarf))
1. A Kubernetes cluster that has been initialized by Zarf: ([Initializing a Cluster Instructions](./1-initializing-a-k8s-cluster.md))
1. kubectl: ([kubectl Install Instructions](https://kubernetes.io/docs/tasks/tools/#kubectl))


## Deploying The Games

```bash
cd zarf # Enter the zarf repository that you have cloned down
cd examples/games # Enter the games directory, this is where the zarf.yaml for the game package is located
cd examples/game # Enter the game example directory where the zarf.yaml for the game package is located

zarf package create . --confirm # Create the games package

55 changes: 55 additions & 0 deletions docs/13-walkthroughs/3-add-logging.md
@@ -0,0 +1,55 @@
# Add Logging

In this walkthrough, we are going to show how you can use a Zarf component to inject zero-config, centralized logging into your Zarf cluster.

More specifically, you'll be adding a [Promtail / Loki / Grafana (PLG)](https://github.com/grafana/loki) stack to the [Doom Walkthrough](./2-deploying-doom.md) by installing Zarf's "logging" component.


## Walkthrough Prerequisites
1. The [Zarf](https://github.com/defenseunicorns/zarf) repository cloned: ([`git clone` Instructions](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository))
1. Zarf binary installed on your $PATH: ([Install Instructions](../3-getting-started.md#installing-zarf))


## Install the logging component

To install the logging component, follow the [Initializing a Cluster Instructions](./1-initializing-a-k8s-cluster.md), but instead answer `y` when asked to install the `logging` component.
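
As a sketch, that can look like the following; the non-interactive flags are an assumption about the `zarf init` CLI rather than something this walkthrough covers:

```bash
# Interactive: answer "y" when prompted for the logging component
zarf init

# Non-interactive alternative (assumes `zarf init` accepts --components and --confirm)
zarf init --components logging --confirm
```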


## Note the credentials

Review the `zarf init` command output for the following:

![logging-creds](../.images/walkthroughs/logging_credentials.png)

You should see a section for `Logging`. You will need these credentials later on.


## Deploy the Doom Walkthrough

Follow the remainder of the [Doom Walkthrough](./2-deploying-doom.md).


## Check the logs

:::note

Because Doom is freshly installed, it is recommended to refresh the page a few times to generate more log traffic to view in Grafana.

:::


### Log into Grafana

To open Grafana you can use the `zarf connect logging` command.

You'll be redirected to the `/login` page, where you have to sign in with the Grafana credentials you saved [in a previous step](#note-the-credentials).

Once you've successfully logged in:

1. Go to the "Explore" page (the button on the left that looks like a compass),

1. Select `Loki` in the dropdown, and then

1. Enter `{app="game"}` into the Log Browser query input field

Submit that query and you'll get back a dump of all the game pod logs that Loki has collected.
5 changes: 0 additions & 5 deletions docs/13-walkthroughs/3-creating-a-k8s-cluster-with-zarf.md

This file was deleted.

19 changes: 19 additions & 0 deletions docs/13-walkthroughs/4-creating-a-k8s-cluster-with-zarf.md
@@ -0,0 +1,19 @@
# Initializing a New K8s Cluster

:::caution Hard Hat Area
This page is still being developed. More content will be added soon!
:::

In this walkthrough, we are going to show how you can use Zarf on a fresh Linux machine to deploy a [k3s](https://k3s.io/) cluster through Zarf's `k3s` component.


## Walkthrough Prerequisites
1. The [Zarf](https://github.com/defenseunicorns/zarf) repository cloned: ([`git clone` Instructions](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository))
1. Zarf binary installed on your $PATH: ([Install Instructions](../3-getting-started.md#installing-zarf))
1. An init-package built/downloaded: ([init-package Build Instructions](./0-creating-a-zarf-package.md)) or ([Download Location](https://github.com/defenseunicorns/zarf/releases))
1. kubectl: ([kubectl Install Instructions](https://kubernetes.io/docs/tasks/tools/#kubectl))
1. `root` access on a Linux machine

## Install the k3s component

To install the k3s component, follow the [Initializing a Cluster Instructions](./1-initializing-a-k8s-cluster.md) as `root`, and instead answer `y` when asked to install the `k3s` component.
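
A minimal sketch of that invocation (assuming the `zarf` binary is on root's $PATH):

```bash
# As root, initialize the cluster and answer "y" when prompted for the k3s component
sudo zarf init
```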
2 changes: 1 addition & 1 deletion docs/13-walkthroughs/index.md
@@ -17,7 +17,7 @@ Almost all walkthroughs will have the follow prerequisites/assumptions:
<br />

## Setting Up a Local Kubernetes Cluster
While Zarf is able to deploy a local k3s Kubernetes cluster for you, (as you'll find out more in the [Creating a K8s Cluster with Zarf](./3-creating-a-k8s-cluster-with-zarf.md) walkthrough), that k3s cluster will only work if you are on a root user on a Linux machine. If you are on a Mac, or you're on Linux but don't have root access, you'll need to setup a local dockerized Kubernetes cluster manually. We provide instructions on how to quickly set up a local k3d cluster that you can use for the majority of the walkthroughs.
While Zarf is able to deploy a local k3s Kubernetes cluster for you (as you'll learn in the [Creating a K8s Cluster with Zarf](./4-creating-a-k8s-cluster-with-zarf.md) walkthrough), that k3s cluster will only work if you are a root user on a Linux machine. If you are on a Mac, or you're on Linux but don't have root access, you'll need to set up a local dockerized Kubernetes cluster manually. We provide instructions on how to quickly set up a local k3d cluster that you can use for the majority of the walkthroughs.
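
For example, once k3d is installed (see the section below), a local cluster can typically be created and removed like this; the cluster name is arbitrary:

```bash
# Create a throwaway local cluster for the walkthroughs
k3d cluster create zarf-walkthroughs

# Remove it when you are finished
k3d cluster delete zarf-walkthroughs
```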


### Install k3d