Docker compose #26

Merged
merged 7 commits into from
May 6, 2024
52 changes: 51 additions & 1 deletion README.md
@@ -12,7 +12,7 @@ Listed features can be configured per s3 user and per bucket with [management CL
## Components
[Chorus S3 Proxy](./service/proxy) service responsible for routing s3 requests and capturing data change events.
[Chorus Agent](./service/agent) can be used as an alternative solution for capturing events instead of proxy.
[Chorus Worker](./service/worker) service does actual data replication.
[Chorus Worker](./service/worker) service does actual data replication with the help of [RClone](https://github.com/rclone/rclone).
Communication between Proxy/Agent and worker is done over work queue.
[Asynq](https://github.com/hibiken/asynq) with [Redis](https://github.com/redis/redis) is used as a work queue.

@@ -29,6 +29,56 @@ For more details, see:

Documentation available at [docs.clyso.com](https://docs.clyso.com/docs/products/chorus/overview).

## Run
### From source
**REQUIREMENTS:**
- Go <https://go.dev/doc/install>

Run the all-in-one [standalone binary](./service/standalone) with Go:
```shell
go run ./cmd/chorus
```

Or run each service separately:

**REQUIREMENTS:**
- Go <https://go.dev/doc/install>
- Redis <https://github.com/redis/redis>

```shell
# run chorus worker
go run ./cmd/worker

# run chorus worker with a custom yaml config file
go run ./cmd/worker -config <path to worker yaml config>

# run chorus proxy with a custom yaml config file
go run ./cmd/proxy -config <path to proxy yaml config>

# run chorus agent with a custom yaml config file
go run ./cmd/agent -config <path to agent yaml config>
```

### Standalone binary
See: [service/standalone](./service/standalone)

### Docker-compose
**REQUIREMENTS:**
- Docker

See: [docker-compose](./docker-compose)

### With Helm
**REQUIREMENTS:**
- K8s
- Helm

Install chorus helm chart from OCI registry:
```shell
helm install <release name> oci://harbor.clyso.com/chorus/chorus
```
See: [deploy/chorus](./deploy/chorus)

## Develop

[test](./test) package contains e2e tests for replications between s3 storages.
11 changes: 11 additions & 0 deletions docker-compose/FakeS3Dockerfile
@@ -0,0 +1,11 @@
FROM golang:1.21 as builder
ARG GIT_COMMIT='not set'

ENV PATH=$PATH:$GOPATH/bin
ENV GO111MODULE=on


RUN go install github.com/johannesboyne/gofakes3/cmd/...@latest


CMD ["gofakes3", "-backend", "memory"]
113 changes: 113 additions & 0 deletions docker-compose/README.md
@@ -0,0 +1,113 @@
# Docker-compose
**REQUIREMENTS:**
- Docker
- S3 client (e.g. [s3cmd](https://github.com/s3tools/s3cmd))
```shell
brew install s3cmd
```
- Chorus management CLI [chorctl](../tools/chorctl)
```shell
brew install clyso/tap/chorctl
```
To run chorus with docker compose:
1. Clone repo:
```shell
git clone https://github.com/clyso/chorus.git && cd chorus
```
2. Start chorus `worker`,`proxy` with fake main and follower S3 backends:
```shell
docker-compose -f ./docker-compose/docker-compose.yml --profile fake --profile proxy up
```
3. Check chorus config with CLI:
```
% chorctl storage
NAME ADDRESS PROVIDER USERS
follower http://fake-s3-follower:9000 Other user1
main [MAIN] http://fake-s3-main:9000 Other user1
```
And check that there are no ongoing replications with `chorctl dash`.
4. Create a bucket named `test` in the `main` storage:
```shell
s3cmd mb s3://test -c ./docker-compose/s3cmd-main.conf
```
Check that main bucket is empty:
```shell
s3cmd ls s3://test -c ./docker-compose/s3cmd-main.conf
```
5. Upload contents of this directory into `main`:
```shell
s3cmd sync ./docker-compose/ s3://test -c ./docker-compose/s3cmd-main.conf
```
Check that the follower bucket does not exist yet:
```shell
s3cmd ls s3://test -c ./docker-compose/s3cmd-follower.conf
ERROR: Bucket 'test' does not exist
ERROR: S3 error: 404 (NoSuchBucket): The specified bucket does not exist
```
6. See that the `test` bucket is available for replication:
```shell
% chorctl repl buckets -u user1 -f main -t follower
BUCKET
test
```
7. Enable replication for the bucket:
```shell
chorctl repl add -u user1 -b test -f main -t follower
```
8. Check replication progress with:
```shell
chorctl dash
```
9. Check that data is actually replicated to the follower:
```shell
s3cmd ls s3://test -c ./docker-compose/s3cmd-follower.conf
```
10. Now make some live changes to the bucket through the proxy:
```shell
s3cmd del s3://test/agent-conf.yaml -c ./docker-compose/s3cmd-proxy.conf
```
11. List the main and follower contents again to see that the file was removed from both. Feel free to play around with the storages using `s3cmd` and the preconfigured configs [s3cmd-main.conf](./s3cmd-main.conf), [s3cmd-follower.conf](./s3cmd-follower.conf), and [s3cmd-proxy.conf](./s3cmd-proxy.conf).

## Where to go next
Replace S3 credentials in [./s3-credentials.yaml](./s3-credentials.yaml) with your own s3 storages and start docker-compose without fake backends:
```shell
docker-compose -f ./docker-compose/docker-compose.yml --profile proxy up
```
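As a sketch of what the `main` entry in [s3-credentials.yaml](./s3-credentials.yaml) might look like when pointed at a real backend (the endpoint, keys, and provider below are placeholders, not working values):

```yaml
storage:
  createRouting: true
  createReplication: false
  storages:
    main:
      address: "https://s3.example.com"   # placeholder: your real s3 endpoint
      credentials:
        user1:
          accessKeyID: <your access key>
          secretAccessKey: <your secret key>
      provider: Ceph   # <Ceph|Minio|AWS|Other> see https://rclone.org/s3/#configuration
      isMain: true
      isSecure: true   # true for https endpoints
```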

Or try a setup with [chorus-agent](../service/agent) instead of the proxy. Unlike `chorus-proxy`, `chorus-agent` does not need to intercept s3 requests to propagate changes. Instead, it captures changes from [S3 bucket notifications](https://docs.aws.amazon.com/AmazonS3/latest/userguide/EventNotifications.html):
```shell
docker-compose -f ./docker-compose/docker-compose.yml --profile agent up
```
> [!NOTE]
> Chorus agent will not work with fake S3 backend because bucket notifications are not supported by fake S3 backend.

## How-to
To tear-down:
```shell
docker-compose -f ./docker-compose/docker-compose.yml down
```
To tear down and wipe all replication metadata:
```shell
docker-compose -f ./docker-compose/docker-compose.yml down -v
```
Rebuild images with the latest source changes (`--build`):
```shell
docker-compose -f ./docker-compose/docker-compose.yml --profile fake --profile proxy up --build
```
To use chorus images from the registry instead of building from source, replace in [docker-compose.yml](./docker-compose.yml):
```yaml
worker:
build:
context: ../
args:
SERVICE: worker
```
with:

```yaml
worker:
image: "harbor.clyso.com/chorus/worker:latest"
```
And similarly for `agent` and `proxy`.
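For example (assuming the registry publishes `agent` and `proxy` images alongside `worker`, which is not confirmed here):

```yaml
agent:
  image: "harbor.clyso.com/chorus/agent:latest"
proxy:
  image: "harbor.clyso.com/chorus/proxy:latest"
```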

21 changes: 21 additions & 0 deletions docker-compose/agent-conf.yaml
@@ -0,0 +1,21 @@
port: 9673 # agent port to listen for incoming notifications
url: "http://localhost:9673" # REQUIRED: url to be used by s3 storage to send notifications. The URL should be reachable for s3 storage.
fromStorage: "main" # REQUIRED: notifications source storage name from Chorus config

log:
json: false # false for dev console logger, true - json log for prod to export to Grafana&Loki.
level: info
metrics:
enabled: false
port: 9090
trace:
enabled: false
endpoint: # url to Jaeger or other open trace provider
redis:
address: "redis:6379"
features:
tagging: true # sync object/bucket tags
acl: true # sync object/bucket ACLs
lifecycle: false # sync bucket Lifecycle
policy: false # sync bucket Policies

90 changes: 90 additions & 0 deletions docker-compose/docker-compose.yml
@@ -0,0 +1,90 @@
volumes:
redis-data:
services:
redis:
image: 'redis'
command: ["redis-server", "--appendonly", "yes"]
environment:
- ALLOW_EMPTY_PASSWORD=yes
- REDIS_PORT_NUMBER=6379
expose:
- 6379
volumes:
- redis-data:/data
worker:
depends_on:
- redis
build:
context: ../
args:
SERVICE: worker
volumes:
- type: bind
source: ./worker-conf.yaml
target: /bin/config/config.yaml
- type: bind
source: ./s3-credentials.yaml
target: /bin/config/override.yaml
ports:
- "9670:9670" # expose grpc management API for CLI
proxy:
# To enable, run with the proxy profile: "docker-compose --profile proxy up". See https://docs.docker.com/compose/profiles/
profiles:
- "proxy"
depends_on:
- redis
build:
context: ../
args:
SERVICE: proxy
volumes:
# proxy config
- type: bind
source: ./proxy-conf.yaml
target: /bin/config/config.yaml
# common config: s3 credentials
- type: bind
source: ./s3-credentials.yaml
target: /bin/config/override.yaml
ports:
- "9669:9669" # expose proxy s3 api
agent:
# To enable, run with the agent profile: "docker-compose --profile agent up". See https://docs.docker.com/compose/profiles/
profiles:
- "agent"
depends_on:
- redis
build:
context: ../
args:
SERVICE: agent
volumes:
# agent config
- type: bind
source: ./agent-conf.yaml
target: /bin/config/config.yaml
ports:
- "9673:9673" # expose agent bucket notifications webhook

# start fake in-memory s3 endpoint. To enable, run with the fake profile: "docker-compose --profile fake up"
fake-s3-main:
profiles:
- "fake"
build:
context: .
dockerfile: ./FakeS3Dockerfile
expose:
- 9000
ports:
- "9001:9000" # expose main s3 to host
fake-s3-follower:
profiles:
- "fake"
build:
context: .
dockerfile: ./FakeS3Dockerfile
expose:
- 9000
ports:
- "9002:9000" # expose follower s3 to host

26 changes: 26 additions & 0 deletions docker-compose/proxy-conf.yaml
@@ -0,0 +1,26 @@
address: "http://localhost:9669" # Chorus proxy s3 api address
port: 9669
cors:
enabled: false
allowAll: false
whitelist:
auth:
allowV2Signature: false
useStorage: main # use credentials from one of the configured storages <main|follower>
log:
json: false # false for dev console logger, true - json log for prod to export to Grafana&Loki.
level: info
metrics:
enabled: false
port: 9090
trace:
enabled: false
endpoint: # url to Jaeger or other open trace provider
redis:
address: "redis:6379"
features:
tagging: false # sync object/bucket tags
acl: false # sync object/bucket ACLs
lifecycle: false # sync bucket Lifecycle
policy: false # sync bucket Policies

32 changes: 32 additions & 0 deletions docker-compose/s3-credentials.yaml
@@ -0,0 +1,32 @@
storage:
createRouting: true # create routing rules to route proxy requests to main storage
createReplication: false # create replication rules to replicate data from main to other storages
storages:
main: # yaml key with some handy storage name
address: "http://fake-s3-main:9000"
credentials:
user1:
accessKeyID: fakeKey
secretAccessKey: fakeSecret
provider: Other # <Ceph|Minio|AWS|Other see providers list in rclone config> https://rclone.org/s3/#configuration
isMain: true # <true|false> exactly one of the storages should be main
healthCheckInterval: 10s
httpTimeout: 1m
isSecure: false #set false for http address
rateLimit:
enable: true
rpm: 60
follower: # yaml key with some handy storage name
address: "http://fake-s3-follower:9000"
credentials:
user1:
accessKeyID: fakeKey2
secretAccessKey: fakeSecret2
provider: Other
isMain: false
healthCheckInterval: 10s
httpTimeout: 1m
isSecure: false
rateLimit:
enable: true
rpm: 60
8 changes: 8 additions & 0 deletions docker-compose/s3cmd-follower.conf
@@ -0,0 +1,8 @@
# s3cmd config for fake follower storage

bucket_location = us-east-1
host_base = localhost:9002
host_bucket = localhost:9002
use_https = false
access_key = fakeKey2
secret_key = fakeSecret2
8 changes: 8 additions & 0 deletions docker-compose/s3cmd-main.conf
@@ -0,0 +1,8 @@
# s3cmd config for fake main storage

bucket_location = us-east-1
host_base = localhost:9001
host_bucket = localhost:9001
use_https = false
access_key = fakeKey
secret_key = fakeSecret
8 changes: 8 additions & 0 deletions docker-compose/s3cmd-proxy.conf
@@ -0,0 +1,8 @@
# s3cmd config for chorus s3 proxy storage

bucket_location = us-east-1
host_base = localhost:9669
host_bucket = localhost:9669
use_https = false
access_key = fakeKey
secret_key = fakeSecret