The resource sizing logic does not take the number of replicas assigned to a component into account.

To demonstrate, consider the following TempoStack definition:
```yaml
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
spec:
  images:
    tempo: docker.io/grafana/tempo:x.y.z
    tempoQuery: docker.io/grafana/tempo-query:x.y.z
    tempoGateway: quay.io/observatorium/api
    tempoGatewayOPA: quay.io/observatorium/opa-openshift
  storage:
    secret:
      name: minio-test
      type: s3
  resources:
    total:
      limits:
        memory: 50Gi
        cpu: 10000m
  # uncomment and test the difference in output
  # template:
  #   compactor:
  #     replicas: 5
  storageSize: 1Gi
```
The diff between the output generated with the template block uncommented and the output generated without it is:
```diff
<         replicas: 5
---
>         replicas: 1
```
I think this behaviour is not intuitive and might lead to people over-provisioning their clusters by accident: the per-pod limits derived from `spec.resources.total` stay the same regardless of the replica count, so running 5 compactor replicas would consume roughly five times the resources the compactor was budgeted for.
Thanks for the report! Yep, the limits assigned to each component should be divided by that component's replica count.
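A minimal sketch of what that division could look like (this is not the operator's actual code; `divideByReplicas` is a hypothetical helper, and it assumes the per-component limits have already been computed from `spec.resources.total` as a `corev1.ResourceList`):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// divideByReplicas splits a component's total resource limits across its
// replicas, so scaling a component out does not multiply its overall footprint.
// Hypothetical helper, illustrating the suggested fix only.
func divideByReplicas(limits corev1.ResourceList, replicas int32) corev1.ResourceList {
	if replicas <= 1 {
		return limits
	}
	out := corev1.ResourceList{}
	for name, q := range limits {
		// Work in milli-units to keep precision for small CPU values.
		out[name] = *resource.NewMilliQuantity(q.MilliValue()/int64(replicas), q.Format)
	}
	return out
}

func main() {
	// Example: a compactor share of 10Gi memory / 2000m CPU spread over 5 replicas.
	compactorLimits := corev1.ResourceList{
		corev1.ResourceMemory: resource.MustParse("10Gi"),
		corev1.ResourceCPU:    resource.MustParse("2000m"),
	}
	perPod := divideByReplicas(compactorLimits, 5)
	// Prints roughly: per-pod memory: 2Gi, per-pod cpu: 400m
	fmt.Printf("per-pod memory: %s, per-pod cpu: %s\n",
		perPod.Memory().String(), perPod.Cpu().String())
}
```

With this approach the total assigned to a component stays constant whatever its replica count, which is what the original report expects.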
Can I help with this? Thanks!