
Replace kamaji-etcd with aenix-io/etcd-operator #95

Merged: 2 commits merged into main on May 2, 2024
Conversation

@kvaps (Member) commented Apr 22, 2024

This change replaces kamaji-etcd with our own etcd-operator, which provides a robust environment for running etcd clusters.

The previous kamaji-etcd has not been removed yet, so it will continue to operate for compatibility purposes.

fixes: #33
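
For context, the operator manages etcd through an EtcdCluster custom resource. A minimal manifest looks roughly like this (a sketch based on the upstream quick-start; the values are illustrative and the schema may differ between operator versions):

```yaml
# Minimal EtcdCluster sketch for aenix-io/etcd-operator (v1alpha1).
# Values are illustrative; check the upstream CRD for the exact schema.
apiVersion: etcd.aenix.io/v1alpha1
kind: EtcdCluster
metadata:
  name: example
  namespace: default
spec:
  replicas: 3
```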

@kvaps kvaps changed the title Add etcd-operator Replace kamaji-etcd with etcd-operator Apr 22, 2024
@kvaps kvaps force-pushed the etcd-operator branch 4 times, most recently from 25bb31c to c5f1c85 on April 23, 2024 10:22
@kvaps kvaps changed the title Replace kamaji-etcd with etcd-operator Replace kamaji-etcd with aenix-io/etcd-operator Apr 23, 2024
@kvaps kvaps force-pushed the etcd-operator branch 2 times, most recently from 9669420 to 3a28029 on April 24, 2024 10:00
@kvaps kvaps marked this pull request as ready for review April 24, 2024 10:01
Signed-off-by: Andrei Kvapil <[email protected]>
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| etcdOperator.readinessProbe.periodSeconds | int | `10` | |
| etcdOperator.resources.limits.cpu | string | `"500m"` | |
| etcdOperator.resources.limits.memory | string | `"128Mi"` | |
| etcdOperator.resources.requests.cpu | string | `"100m"` | |
@themoriarti (Collaborator) commented:
Why is the CPU request 100m? That is quite a lot; I think you should start with 20-40m, although you first need to know at least how much CPU etcd actually consumes.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| etcdOperator.readinessProbe.initialDelaySeconds | int | `5` | |
| etcdOperator.readinessProbe.periodSeconds | int | `10` | |
| etcdOperator.resources.limits.cpu | string | `"500m"` | |
| etcdOperator.resources.limits.memory | string | `"128Mi"` | |
@themoriarti (Collaborator) commented:

The memory limit is very small: an average Kubernetes cluster with 100 pods will push etcd to use up to 300MB per instance. I think it should be increased to 384-512MB.
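
Taken together, the two comments above would amount to a values override along these lines (a sketch; 40m and 512Mi are the upper ends of the reviewer's suggested ranges, not values from the chart):

```yaml
etcdOperator:
  resources:
    requests:
      cpu: 40m      # reviewer suggests starting at 20-40m instead of 100m
    limits:
      cpu: 500m     # unchanged chart default
      memory: 512Mi # reviewer suggests 384-512Mi instead of 128Mi
```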

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| kubeRbacProxy.readinessProbe | object | `{}` | |
| kubeRbacProxy.resources.limits.cpu | string | `"500m"` | |
| kubeRbacProxy.resources.limits.memory | string | `"128Mi"` | |
| kubeRbacProxy.resources.requests.cpu | string | `"100m"` | |
@themoriarti (Collaborator) commented:

This is quite a lot; I think you should start with 20-40m.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| kubeRbacProxy.livenessProbe | object | `{}` | |
| kubeRbacProxy.readinessProbe | object | `{}` | |
| kubeRbacProxy.resources.limits.cpu | string | `"500m"` | |
| kubeRbacProxy.resources.limits.memory | string | `"128Mi"` | |
@themoriarti (Collaborator) commented:

It's hard to say here, but the limit is worth reviewing; you really don't want to get OOM-killed.
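
For the proxy sidecar, the same two comments would translate to something like this (a sketch; only the CPU request has a concrete suggested figure, while the memory limit is merely flagged for review):

```yaml
kubeRbacProxy:
  resources:
    requests:
      cpu: 40m      # reviewer suggests starting at 20-40m instead of 100m
    limits:
      cpu: 500m     # unchanged chart default
      memory: 128Mi # flagged for review to avoid OOM kills; no figure suggested
```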

@kvaps (Member, Author) commented Apr 25, 2024

@themoriarti, these are the defaults provided by the etcd-operator maintainers. We can consider other values, but I think it might be better to address this in the upstream project.

https://github.com/aenix-io/etcd-operator/

@themoriarti (Collaborator) commented:

We are waiting for a response from the etcd-operator maintainers in aenix-io/etcd-operator#199; if they make changes, it will simplify the integration.

```yaml
resources:
  limits:
    cpu: 500m
    memory: 128Mi
```
@themoriarti (Collaborator) commented:

This needs more memory, up to 512MB.

```yaml
    cpu: 500m
    memory: 128Mi
  requests:
    cpu: 100m
```
@themoriarti (Collaborator) commented:

For startup, etcd does not need more than a 40m CPU reservation.
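
Applying both comments on this thread to the snippet above gives roughly the following (a sketch of the reviewer's suggestions, not what was ultimately merged):

```yaml
resources:
  limits:
    cpu: 500m
    memory: 512Mi # raised from 128Mi per the comment above
  requests:
    cpu: 40m      # lowered from 100m; sufficient for etcd startup
```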

@themoriarti (Collaborator) commented:

@kvaps Can you add default values for resource requests and limits when creating an etcd cluster?
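
Such a default might look roughly like this on the EtcdCluster resource (a hypothetical sketch; the podTemplate field and container name are assumptions about the operator's CRD, not confirmed in this thread):

```yaml
apiVersion: etcd.aenix.io/v1alpha1
kind: EtcdCluster
metadata:
  name: example
spec:
  replicas: 3
  # Assumed field: a pod-template override through which the chart could
  # default per-pod resources; the exact CRD schema may differ.
  podTemplate:
    spec:
      containers:
        - name: etcd
          resources:
            requests:
              cpu: 40m
            limits:
              memory: 512Mi
```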

@kvaps kvaps requested a review from themoriarti May 2, 2024 15:18
@themoriarti (Collaborator) left a review:

At this point we have gone through all the discussions and decided not to set any additional limits; after the release we will monitor resource usage and return to the question of defaults. Fly, fly, our good etcd.

@themoriarti merged commit dee190a into main on May 2, 2024
Merging this pull request closes: [etcd] Replace kamaji-etcd with etcd-operator