Playground docs (#347)
* fluxmeter file

* playground
jaidesai-fn authored Sep 6, 2022
1 parent 0b198eb commit 6f9ccc4
Showing 7 changed files with 228 additions and 7 deletions.
4 changes: 2 additions & 2 deletions api/buf.lock
@@ -4,8 +4,8 @@ deps:
- remote: buf.build
owner: googleapis
repository: googleapis
- commit: 80720a488c9a414bb8d4a9f811084989
+ commit: 62f35d8aed1149c291d606d958a7ce32
- remote: buf.build
owner: grpc-ecosystem
repository: grpc-gateway
- commit: 00116f302b12478b85deb33b734e026c
+ commit: bc28b723cd774c32b6fbc77621518765
5 changes: 5 additions & 0 deletions docs/content/concepts/flow-control/fluxmeter.md
@@ -1,4 +1,9 @@
---
title: FluxMeter
position: 3
keywords:
- fluxmeter
- histograms
---

### Histograms
2 changes: 1 addition & 1 deletion docs/content/concepts/flow-control/selector.md
@@ -17,7 +17,7 @@ See also [Selector reference](/reference/configuration/policies#-v1selector)
Flow observability and control components are instantiated on Aperture Agents
and they select flows based on scoping rules defined in Selectors.

- A Selector consists of following fields:
+ A Selector consists of the following fields:

### Agent Group

2 changes: 1 addition & 1 deletion docs/content/setup/_category_.yml
@@ -1,2 +1,2 @@
- label: Setup
+ label: Get Started
position: 2
4 changes: 2 additions & 2 deletions docs/content/setup/installation/agent/agent.md
@@ -34,13 +34,13 @@ The Aperture Agent can be installed in below listed modes:

1. **Kubernetes**

- 1. **DaemonSet**
+ 1. [**DaemonSet**](kubernetes/daemonset.md)

The Aperture Agent can be installed as a
[Kubernetes DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/),
where it will get deployed on all the nodes of the cluster.

- 2. **Sidecar**
+ 2. [**Sidecar**](kubernetes/sidecar.md)

The Aperture Agent can also be installed as a Sidecar. In this mode,
whenever a new pod is started with required labels and annotations, the
2 changes: 1 addition & 1 deletion docs/content/setup/istio.md
@@ -6,7 +6,7 @@ keywords:
- service mesh
- istio
- envoy filter
- sidebar_position: 2
+ sidebar_position: 3
---

## Envoy Filter
216 changes: 216 additions & 0 deletions docs/content/setup/playground.md
@@ -0,0 +1,216 @@
---
title: Playground
keywords:
- playground
- proof of concept
- policies
- rate limit
- concurrency control
sidebar_position: 2
---

# Playground

Playground is a Kubernetes-based environment for exploring the capabilities of
Aperture. Additionally, it is used as a development environment for Aperture.
The playground uses [Tilt](https://tilt.dev/) for
orchestrating the deployments in Kubernetes. Tilt watches for changes to local
files and auto-deploys any resources that change. This is very convenient for
getting quick feedback during development of Aperture.

Playground deploys resources to the Kubernetes cluster that `kubectl` on your
machine points at. For convenience, this README includes instructions for
deploying a local Kubernetes cluster using [Kind](https://kind.sigs.k8s.io/).

## Tools

The deployment methods described below assume the use of specific deployment
and configuration/management tools, which must be installed beforehand.

You can install the required tools with [asdf](https://asdf-vm.com/), or
install them manually (see
[Tools required for k8s deployment](#tools-required-for-k8s-deployment)).

When using `asdf`:

- [Download](https://asdf-vm.com/guide/getting-started.html#_2-download-asdf)
and [install](https://asdf-vm.com/guide/getting-started.html#_3-install-asdf)
`asdf`
- Add intended plugins (tools/applications which will be managed by `asdf`) e.g.
`asdf plugin-add terraform`
- Install tools: `asdf install`

> Note: The last command will install the tools that have been added as
> plugins and are defined/versioned in the `.tool-versions` file.

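
Each tool that `asdf` manages is pinned in the repository's `.tool-versions`
file, one `tool version` pair per line. A hypothetical sketch (the tool names
and versions here are illustrative; the actual file in the repository is
authoritative):

```
helm 3.9.4
tanka 0.22.1
kind 0.14.0
kubectl 1.24.3
tilt 0.30.7
```
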
### Tools required for k8s deployment

The following tools are required for a local k8s deployment:

#### _Helm_

Helm is a package manager for k8s.

To install manually, follow instructions: <https://helm.sh/docs/intro/install/>

#### _Tanka and Jsonnet Bundler_

Grafana Tanka is a robust configuration utility for your Kubernetes cluster,
powered by the Jsonnet language.

Jsonnet Bundler is used to manage Jsonnet dependencies.

To install manually, follow instructions: <https://tanka.dev/install>

#### _Local k8s cluster_

You can use [`kind`](https://kind.sigs.k8s.io/docs/user/quick-start/).

#### _kubectl_

The Kubernetes command line tool. Follow the instructions:
<https://kubernetes.io/docs/tasks/tools/#kubectl>

#### _Alpha features_

The Agent core service uses a feature gate for managing node-local traffic:
<https://kubernetes.io/docs/concepts/services-networking/service-traffic-policy/>

On Kubernetes 1.21, the `ServiceInternalTrafficPolicy` feature gate must be
enabled explicitly in the cluster config. On Kubernetes 1.22 and later it is
in beta and enabled by default, so no extra configuration is needed.

```yaml
featureGates:
  ServiceInternalTrafficPolicy: true
```

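
The contents of the `kind-config.yaml` used below are not shown in this
commit; a minimal sketch that enables the gate might look like this (the node
layout is illustrative):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: aperture-playground
featureGates:
  ServiceInternalTrafficPolicy: true
nodes:
  - role: control-plane
```
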
## Deploying with Tilt

For local deployments and development work, it's convenient to have images and
services rebuilt and redeployed automatically. This can be achieved by using
`tilt`.

> Note: This builds up on tools mentioned earlier, their installation and
> configuration is required.

### Tilt installation

Tilt can be installed with `asdf install` or manually
<https://docs.tilt.dev/install.html>.

### Prerequisites - k8s cluster bootstrap

Create a k8s cluster using Kind with the provided configuration file:

```sh
kind create cluster --config kind-config.yaml
```

This will start a cluster with the name `aperture-playground`.

Once done, you can delete the cluster with the following command:

```sh
kind delete cluster --name aperture-playground
```

Alternatively, you can use [`ctlptl`](https://github.com/tilt-dev/ctlptl) to
start a cluster with built-in local registry for Docker images:

```sh
ctlptl apply -f ctlptl-kind-config.yaml
```
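
`ctlptl` declares clusters and registries as Kubernetes-style objects; a
hypothetical `ctlptl-kind-config.yaml` along these lines would create the Kind
cluster together with a local registry (the registry name is illustrative):

```yaml
apiVersion: ctlptl.dev/v1alpha1
kind: Registry
name: ctlptl-registry
---
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
product: kind
registry: ctlptl-registry
```
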

### Services deployment

Simply run `tilt up` - it'll automatically start building and deploying!

You can reach the WebUI by going to <http://localhost:10350> or pressing
(Space).

Tilt should automatically detect new changes to the services, rebuild and
re-deploy them.

Useful flags:

- `--port` or `TILT_PORT` - the port on which WebUI should listen

- `--stream` - will stream both tilt and pod logs to terminal (useful for
debugging `tilt` itself)

- `--legacy` - if you want a basic, terminal-based frontend

By default, `tilt` will deploy and manage Agent and Controller.

If you want to limit it to only manage some namespace(s) or resource(s), simply
pass their name(s) as additional argument(s).

Examples:

- `tilt up aperture-grafana` - only bring up `grafana` and dependent services
(`grafana-operator`, ...)
- `tilt up agent demoapp aperture-grafana` - you can mix namespace names and
resource names, as well as specify as many of them as you want.

If you want to manage only explicitly passed resources/namespaces, you should
pass the `--only` argument:

- `tilt up -- --only aperture-grafana` - only bring up `aperture-grafana`;
  resolving namespace names to resources still works

To view the available namespaces and resources, either:

- run `tilt up --stream -- --list-resources`
- read the `DEP_TREE` at the top of `Tiltfile`

To disable automatic updates in Tilt, add the `--manual` flag to the command.

### "Too many open files" warning

If you see the following message in a cluster container:

> failed to create fsnotify watcher: too many open files

check whether
`sysctl fs.inotify.{max_queued_events,max_user_instances,max_user_watches}`
reports values lower than:

```bash
fs.inotify.max_queued_events = 16384
fs.inotify.max_user_instances = 1024
fs.inotify.max_user_watches = 524288
```

change them temporarily using:

```bash
sudo sysctl fs.inotify.max_queued_events=16384
sudo sysctl fs.inotify.max_user_instances=1024
sudo sysctl fs.inotify.max_user_watches=524288
```

or, to make the change permanent, add the following lines to
`/etc/sysctl.conf` and apply them with `sudo sysctl -p`:

```bash
fs.inotify.max_queued_events = 16384
fs.inotify.max_user_instances = 1024
fs.inotify.max_user_watches = 524288
```

### Teardown

Simply run `tilt down`. All created resources will be deleted.

### Port forwards

Tilt will automatically set up port forwarding for the services.

Below is the mapping of the ports being forwarded by Tilt:

| Component | Container Port | Local Port |
| ---------- | -------------- | ---------- |
| Agent | 80 | 8089 |
| Controller | 80 | 8087 |
| Prometheus | 9090 | 9090 |
| Etcd | 2379 | 2379 |
| Grafana | 3000 | 3000 |
