
update default values.yaml #246

Closed · wants to merge 1 commit into from

Conversation

thobianchi


What this PR does:
Edits values.yaml to give a working Cortex instance configured with block storage, memcached, and memberlist.

Which issue(s) this PR fixes:
[N/A]

Checklist

  • CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]

This is a proposal to update values.yaml, giving a working instance with updated defaults.
I tested on a Kubernetes kind cluster, created with the two commands below; eventually every component starts:
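```sh
# Create a local test cluster with kind
kind create cluster

# Install the chart from the repo root with the proposed defaults
helm install --create-namespace --namespace cortex -f values.yaml cortex .
```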

[screenshot: all Cortex pods running after install]

In depth:

  • I configured a blocks backend on the filesystem by default. In our production environment we use GCP bucket storage, so I included a commented example for it (it only needs two fields, the bucket name and the JSON credentials):

```yaml
blocks-storage-memcached: true

storage:
  engine: blocks
blocks_storage:
  backend: "filesystem"
  filesystem:
    dir: "/data/blocks-storage"
  # backend: "gcs"
  # gcs:
  #   bucket_name: ""
  #   service_account: "" # json credentials
  bucket_store:
    sync_dir: "data"
    index_cache:
      backend: memcached
```
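For instance, switching to the GCS backend would look like the sketch below; the bucket name and the credentials value are placeholders to fill in:

```yaml
blocks_storage:
  backend: "gcs"
  gcs:
    bucket_name: "my-cortex-blocks"  # placeholder bucket name
    service_account: |               # the GCP service account key as JSON
      { "type": "service_account", ... }
```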
  • The numbers in this section are partially taken from this issue. I found that these defaults work better for a real environment without being excessive:

```yaml
memcached:
  addresses: 'dns+{{ include "cortex.fullname" $ }}-memcached-blocks-index:11211'
  timeout: 300ms
  max_idle_connections: 750
  max_async_concurrency: 100
  max_async_buffer_size: 10000000
```
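For context, my reading of what these client options control (paraphrasing the Cortex memcached client docs, so treat this as a rough guide rather than the authoritative reference):

```yaml
memcached:
  # Socket read/write timeout per memcached request
  timeout: 300ms
  # Maximum idle connections kept open per memcached server
  max_idle_connections: 750
  # Maximum number of concurrent asynchronous (write-behind) operations
  max_async_concurrency: 100
  # Queue size for pending asynchronous operations before new ones are dropped
  max_async_buffer_size: 10000000
```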
  • Memberlist configuration instead of Consul:

```yaml
ring:
  kvstore:
    store: memberlist
```
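As a sketch of how the gossip side is usually wired up alongside the ring kvstore (the headless service name here is an assumption about what the chart templates generate, so check it against the chart):

```yaml
config:
  memberlist:
    bind_port: 7946
    join_members:
      # Assumed gossip service name rendered by the chart templates
      - '{{ include "cortex.fullname" $ }}-memberlist'
```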
  • Decreased all components to 1 replica
  • The Memcached chart does not have values like `extraArgs` or `maxItemMemory`, so I found that this works:

```yaml
memcached-frontend:
  enabled: true
  architecture: "high-availability"
  replicaCount: 1
  # pdbMinAvailable: 1
  # image: memcached:1.5.7-alpine
  command:
    - /run.sh
    - -m 256
    - -I 32m
    - -t 4
  arguments: []
  resources: {}
  # requests:
  #   memory: 256Mi
  #   cpu: 250m
  # limits:
  #   memory: 256Mi
  #   cpu: 250m
  metrics:
    enabled: false
    serviceMonitor:
      enabled: false
```
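Presumably the same override pattern applies to the other memcached instances the chart ships, e.g. the blocks index cache referenced by the `addresses` value above; the subchart key below is inferred from that service name, so double-check it against the chart:

```yaml
memcached-blocks-index:
  enabled: true
  architecture: "high-availability"
  replicaCount: 1
  command:
    - /run.sh
    - -m 256   # memory limit in MB
    - -I 32m   # max item size
    - -t 4     # worker threads
```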

We have been using Cortex in a production environment for 8 months, with great results! I want to thank you all.
Our configuration is really similar to this proposal.

Some numbers:
[screenshot: production metrics]

I think it would be useful to add a cortex-mixin directory to this repository with a collection of Grafana dashboards for monitoring Cortex itself. I do not even remember where I found this dashboard...

[screenshot: Grafana dashboard for Cortex]

Signed-off-by: Thomas Bianchi <[email protected]>
@nschad (Collaborator) commented Oct 19, 2021

> I think it would be useful to add a cortex-mixin directory to this repository with a collection of Grafana dashboards for monitoring Cortex itself. I do not even remember where I found this dashboard...

You probably mean this: https://github.com/grafana/cortex-jsonnet

This PR is very similar to #227. Maybe check that PR first and write down your thoughts as comments?

@thobianchi (Author)

Oh, I missed that PR. Yes, I will look into it and see if I can contribute there.

@nschad added the duplicate label on Nov 5, 2021
@nschad closed this on Nov 25, 2021