Horizontally scalable, highly available, multi-tenant, long term Prometheus.
Homepage: https://cortexmetrics.io/
Name | Email | Url |
---|---|---|
Tom Hayward | [email protected] | https://github.com/kd7lxl |
Niclas Schad | [email protected] | https://github.com/ShuzZzle |
Check out our documentation for the cortex-helm-chart here.
Cortex requires a key-value (KV) store to store the ring. It can use traditional KV stores like Consul or etcd, but it can also build its own KV store on top of the memberlist library using a gossip algorithm.
The recommended approach is to use the built-in memberlist as a KV store, where supported.
External KV stores can be installed alongside Cortex using their respective Helm charts: https://github.com/bitnami/charts/tree/master/bitnami/etcd and https://github.com/helm/charts/tree/master/stable/consul.
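As a sketch, switching the ring to an external KV store is a values override. The keys below follow the Cortex configuration file; the Consul host shown is a hypothetical in-cluster address, not something the chart creates for you:

```yaml
# Illustrative values override: use an external Consul for the ring
# instead of the default memberlist.
config:
  ingester:
    lifecycler:
      ring:
        kvstore:
          store: consul
          consul:
            host: consul-server.cortex.svc.cluster.local:8500  # hypothetical address
```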
Cortex requires a storage backend to store metrics and indexes. See the Cortex documentation for details on supported storage types.
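For example, a minimal values override pointing the default blocks engine at S3 might look like the following sketch; the endpoint and bucket name are placeholders you would replace with your own:

```yaml
# Illustrative values override: blocks storage backed by S3
config:
  blocks_storage:
    backend: s3
    s3:
      endpoint: s3.us-east-1.amazonaws.com  # placeholder endpoint
      bucket_name: cortex-blocks            # placeholder bucket
```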
Helm must be installed to use the charts. Please refer to Helm's documentation to get started.
Once Helm is set up properly, add the repo as follows:

```bash
helm repo add cortex-helm https://cortexproject.github.io/cortex-helm-chart
```
Cortex can now be installed with the following command:

```bash
helm install cortex --namespace cortex cortex-helm/cortex
```

If you have custom options or values you want to override:

```bash
helm install cortex --namespace cortex -f my-cortex-values.yaml cortex-helm/cortex
```
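A hypothetical my-cortex-values.yaml overriding a few common settings (keys taken from the values table below, values purely illustrative) might look like:

```yaml
# my-cortex-values.yaml -- illustrative overrides only
ingester:
  replicas: 4
  persistentVolume:
    size: 10Gi
alertmanager:
  enabled: false
```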
Specific versions of the chart can be installed using the --version option, with the default being the latest release.
The versions available for installation can be listed with the following command:

```bash
helm search repo cortex-helm
```
As part of this chart, many different pods and services are installed, all with varying resource requirements. Make sure that you have sufficient resources (CPU/memory) available in your cluster before installing the Cortex Helm chart.
To upgrade Cortex use the following command:

```bash
helm upgrade cortex -f my-cortex-values.yaml cortex-helm/cortex
```
Note that it might be necessary to use --reset-values, since some default values in values.yaml might have changed or been removed.
Source code can be found here
Kubernetes: ^1.19.0-0
Repository | Name | Version |
---|---|---|
https://charts.bitnami.com/bitnami | memcached(memcached) | 5.15.12 |
https://charts.bitnami.com/bitnami | memcached-index-read(memcached) | 5.15.12 |
https://charts.bitnami.com/bitnami | memcached-index-write(memcached) | 5.15.12 |
https://charts.bitnami.com/bitnami | memcached-frontend(memcached) | 5.15.12 |
https://charts.bitnami.com/bitnami | memcached-blocks-index(memcached) | 5.15.12 |
https://charts.bitnami.com/bitnami | memcached-blocks(memcached) | 5.15.12 |
https://charts.bitnami.com/bitnami | memcached-blocks-metadata(memcached) | 5.15.12 |
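The bundled memcached dependencies above are optional. As a sketch, they can be switched on via values, assuming each sub-chart is gated by an enabled flag in the same way memcached.enabled is in the table below:

```yaml
# Illustrative: enable the memcached sub-charts used by the blocks engine
memcached-blocks:
  enabled: true
memcached-blocks-index:
  enabled: true
memcached-blocks-metadata:
  enabled: true
```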
Key | Type | Default | Description |
---|---|---|---|
alertmanager.affinity | object | {} | |
alertmanager.annotations | object | {} | |
alertmanager.containerSecurityContext.enabled | bool | true | |
alertmanager.containerSecurityContext.readOnlyRootFilesystem | bool | true | |
alertmanager.enabled | bool | true | |
alertmanager.env | list | [] | Extra env variables to pass to the cortex container |
alertmanager.extraArgs | object | {} | Additional Cortex container arguments, e.g. log.level (debug, info, warn, error) |
alertmanager.extraContainers | list | [] | Additional containers to be added to the cortex pod. |
alertmanager.extraPorts | list | [] | Additional ports for the cortex services. Useful to expose extra container ports. |
alertmanager.extraVolumeMounts | list | [] | Extra volume mounts that will be added to the cortex container |
alertmanager.extraVolumes | list | [] | Additional volumes for the cortex pod. |
alertmanager.initContainers | list | [] | Init containers to be added to the cortex pod. |
alertmanager.livenessProbe.httpGet.path | string | "/ready" | |
alertmanager.livenessProbe.httpGet.port | string | "http-metrics" | |
alertmanager.nodeSelector | object | {} | |
alertmanager.persistentVolume.accessModes | list | ["ReadWriteOnce"] | Alertmanager data Persistent Volume access modes. Must match those of the existing PV or dynamic provisioner. Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ |
alertmanager.persistentVolume.annotations | object | {} | Alertmanager data Persistent Volume Claim annotations |
alertmanager.persistentVolume.enabled | bool | true | If true and alertmanager.statefulSet.enabled is true, Alertmanager will create/use a Persistent Volume Claim. If false, use emptyDir. |
alertmanager.persistentVolume.size | string | "2Gi" | Alertmanager data Persistent Volume size |
alertmanager.persistentVolume.storageClass | string | nil | Alertmanager data Persistent Volume Storage Class. If defined, storageClassName: `<storageClass>`. If set to "-", storageClassName: "", which disables dynamic provisioning. If undefined (the default) or set to null, no storageClassName spec is set, choosing the default provisioner. |
alertmanager.persistentVolume.subPath | string | "" | Subdirectory of the Alertmanager data Persistent Volume to mount. Useful if the volume's root directory is not empty. |
alertmanager.podAnnotations | object | {"prometheus.io/port":"8080","prometheus.io/scrape":"true"} | Pod Annotations |
alertmanager.podDisruptionBudget | object | {"maxUnavailable":1} | If not set, a PodDisruptionBudget will not be created |
alertmanager.podLabels | object | {} | Pod Labels |
alertmanager.readinessProbe.httpGet.path | string | "/ready" | |
alertmanager.readinessProbe.httpGet.port | string | "http-metrics" | |
alertmanager.replicas | int | 1 | |
alertmanager.resources | object | {} | |
alertmanager.securityContext | object | {} | |
alertmanager.service.annotations | object | {} | |
alertmanager.service.labels | object | {} | |
alertmanager.serviceAccount.name | string | "" | "" disables the individual serviceAccount and uses the global serviceAccount for that component |
alertmanager.serviceMonitor.additionalLabels | object | {} | |
alertmanager.serviceMonitor.enabled | bool | false | |
alertmanager.serviceMonitor.extraEndpointSpec | object | {} | Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint |
alertmanager.serviceMonitor.metricRelabelings | list | [] | |
alertmanager.serviceMonitor.relabelings | list | [] | |
alertmanager.sidecar | object | {"containerSecurityContext":{"enabled":true,"readOnlyRootFilesystem":true},"defaultFolderName":null,"enableUniqueFilenames":false,"enabled":false,"folder":"/data","folderAnnotation":null,"image":{"repository":"quay.io/kiwigrid/k8s-sidecar","sha":"","tag":"1.10.7"},"imagePullPolicy":"IfNotPresent","label":"cortex_alertmanager","labelValue":null,"resources":{},"searchNamespace":null,"skipTlsVerify":false,"watchMethod":null} | Sidecar that collects ConfigMaps with the specified label and stores the included files in the respective folders |
alertmanager.sidecar.skipTlsVerify | bool | false | Set to true to skip TLS verification for Kubernetes API calls |
alertmanager.startupProbe.failureThreshold | int | 10 | |
alertmanager.startupProbe.httpGet.path | string | "/ready" | |
alertmanager.startupProbe.httpGet.port | string | "http-metrics" | |
alertmanager.statefulSet.enabled | bool | false | If true, use a StatefulSet instead of a Deployment for pod management. This is useful for using a persistent volume to store silences between restarts. |
alertmanager.statefulStrategy.type | string | "RollingUpdate" | |
alertmanager.strategy.rollingUpdate.maxSurge | int | 0 | |
alertmanager.strategy.rollingUpdate.maxUnavailable | int | 1 | |
alertmanager.strategy.type | string | "RollingUpdate" | |
alertmanager.terminationGracePeriodSeconds | int | 60 | |
alertmanager.tolerations | list | [] | Tolerations for pod assignment. Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ |
clusterDomain | string | "cluster.local" | Kubernetes cluster DNS domain |
compactor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key | string | "app.kubernetes.io/component" | |
compactor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator | string | "In" | |
compactor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0] | string | "compactor" | |
compactor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey | string | "kubernetes.io/hostname" | |
compactor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight | int | 100 | |
compactor.annotations | object | {} | |
compactor.containerSecurityContext.enabled | bool | true | |
compactor.containerSecurityContext.readOnlyRootFilesystem | bool | true | |
compactor.enabled | bool | true | |
compactor.env | list | [] | |
compactor.extraArgs | object | {} | Additional Cortex container arguments, e.g. log.level (debug, info, warn, error) |
compactor.extraContainers | list | [] | |
compactor.extraPorts | list | [] | |
compactor.extraVolumeMounts | list | [] | |
compactor.extraVolumes | list | [] | |
compactor.initContainers | list | [] | |
compactor.livenessProbe.httpGet.path | string | "/ready" | |
compactor.livenessProbe.httpGet.port | string | "http-metrics" | |
compactor.livenessProbe.httpGet.scheme | string | "HTTP" | |
compactor.nodeSelector | object | {} | |
compactor.persistentVolume.accessModes | list | ["ReadWriteOnce"] | Compactor data Persistent Volume access modes. Must match those of the existing PV or dynamic provisioner. Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ |
compactor.persistentVolume.annotations | object | {} | Compactor data Persistent Volume Claim annotations |
compactor.persistentVolume.enabled | bool | true | If true, compactor will create/use a Persistent Volume Claim. If false, use emptyDir. |
compactor.persistentVolume.size | string | "2Gi" | |
compactor.persistentVolume.storageClass | string | nil | Compactor data Persistent Volume Storage Class. If defined, storageClassName: `<storageClass>`. If set to "-", storageClassName: "", which disables dynamic provisioning. If undefined (the default) or set to null, no storageClassName spec is set, choosing the default provisioner. |
compactor.persistentVolume.subPath | string | "" | Subdirectory of the compactor data Persistent Volume to mount. Useful if the volume's root directory is not empty. |
compactor.podAnnotations | object | {"prometheus.io/port":"8080","prometheus.io/scrape":"true"} | Pod Annotations |
compactor.podDisruptionBudget.maxUnavailable | int | 1 | |
compactor.podLabels | object | {} | Pod Labels |
compactor.readinessProbe.httpGet.path | string | "/ready" | |
compactor.readinessProbe.httpGet.port | string | "http-metrics" | |
compactor.replicas | int | 1 | |
compactor.resources | object | {} | |
compactor.securityContext | object | {} | |
compactor.service.annotations | object | {} | |
compactor.service.labels | object | {} | |
compactor.serviceAccount.name | string | "" | "" disables the individual serviceAccount and uses the global serviceAccount for that component |
compactor.serviceMonitor.additionalLabels | object | {} | |
compactor.serviceMonitor.enabled | bool | false | |
compactor.serviceMonitor.extraEndpointSpec | object | {} | Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint |
compactor.serviceMonitor.metricRelabelings | list | [] | |
compactor.serviceMonitor.relabelings | list | [] | |
compactor.startupProbe.failureThreshold | int | 60 | |
compactor.startupProbe.httpGet.path | string | "/ready" | |
compactor.startupProbe.httpGet.port | string | "http-metrics" | |
compactor.startupProbe.httpGet.scheme | string | "HTTP" | |
compactor.startupProbe.initialDelaySeconds | int | 120 | |
compactor.startupProbe.periodSeconds | int | 30 | |
compactor.strategy.type | string | "RollingUpdate" | |
compactor.terminationGracePeriodSeconds | int | 240 | |
compactor.tolerations | list | [] | |
config.alertmanager.enable_api | bool | false | Enable the experimental alertmanager config API. |
config.alertmanager.external_url | string | "/api/prom/alertmanager" | |
config.alertmanager.storage | object | {} | Type of backend to use to store alertmanager configs. Supported values are: "configdb", "gcs", "s3", "local". Refer to https://cortexmetrics.io/docs/configuration/configuration-file/#alertmanager_config |
config.api.prometheus_http_prefix | string | "/prometheus" | |
config.api.response_compression_enabled | bool | true | Use GZIP compression for API responses. Some endpoints serve large YAML or JSON blobs which can benefit from compression. |
config.auth_enabled | bool | false | |
config.blocks_storage.bucket_store.bucket_index.enabled | bool | true | |
config.blocks_storage.bucket_store.sync_dir | string | "/data/tsdb-sync" | |
config.blocks_storage.tsdb.dir | string | "/data/tsdb" | |
config.distributor.pool.health_check_ingesters | bool | true | |
config.distributor.shard_by_all_labels | bool | true | Distribute samples based on all labels, as opposed to solely by user and metric name. |
config.frontend.log_queries_longer_than | string | "10s" | |
config.ingester.lifecycler.final_sleep | string | "30s" | Duration to sleep for before exiting, to ensure metrics are scraped. |
config.ingester.lifecycler.join_after | string | "10s" | We don't want to join immediately, but wait a bit to see other ingesters and their tokens first. It can take a while to have the full picture when using gossip. |
config.ingester.lifecycler.num_tokens | int | 512 | |
config.ingester.lifecycler.observe_period | string | "10s" | To avoid multiple ingesters generating the same tokens, they can "observe" the ring for a while after putting their own tokens into it. This is only useful when using gossip, since multiple ingesters joining at the same time can have conflicting tokens if they don't see each other yet. |
config.ingester.lifecycler.ring.kvstore.store | string | "memberlist" | |
config.ingester.lifecycler.ring.replication_factor | int | 3 | Ingester replication factor (3 by default) |
config.ingester_client.grpc_client_config.max_recv_msg_size | int | 10485760 | |
config.ingester_client.grpc_client_config.max_send_msg_size | int | 10485760 | |
config.limits.enforce_metric_name | bool | true | Enforce that every sample has a metric name |
config.limits.max_query_lookback | string | "0s" | |
config.limits.reject_old_samples | bool | true | |
config.limits.reject_old_samples_max_age | string | "168h" | |
config.memberlist.bind_port | int | 7946 | |
config.memberlist.join_members | list | ["{{ include \"cortex.fullname\" $ }}-memberlist"] | The service name of the memberlist when using memberlist discovery |
config.querier.active_query_tracker_dir | string | "/data/active-query-tracker" | |
config.querier.query_ingesters_within | string | "13h" | Maximum lookback beyond which queries are not sent to ingesters. 0 means all queries are sent to ingesters. Ingesters by default have no data older than 12 hours, so this can safely be set to 13 hours. |
config.querier.query_store_after | string | "12h" | The time after which a metric should be queried from storage and not just ingesters. |
config.querier.store_gateway_addresses | string | automatic | Comma-separated list of store-gateway addresses in DNS Service Discovery format. This option is set automatically when using the blocks storage and store-gateway sharding is disabled (when enabled, the store-gateway instances form a ring and addresses are picked from the ring). |
config.query_range.align_queries_with_step | bool | true | |
config.query_range.cache_results | bool | true | |
config.query_range.results_cache.cache.memcached.expiration | string | "1h" | |
config.query_range.results_cache.cache.memcached_client.timeout | string | "1s" | |
config.query_range.split_queries_by_interval | string | "24h" | |
config.ruler.enable_alertmanager_discovery | bool | false | |
config.ruler.enable_api | bool | true | Enable the experimental ruler config API. |
config.ruler.storage | object | {} | Method to use for backend rule storage (configdb, azure, gcs, s3, swift, local). Refer to https://cortexmetrics.io/docs/configuration/configuration-file/#ruler_config |
config.runtime_config.file | string | "/etc/cortex-runtime-config/runtime_config.yaml" | |
config.server.grpc_listen_port | int | 9095 | |
config.server.grpc_server_max_concurrent_streams | int | 10000 | |
config.server.grpc_server_max_recv_msg_size | int | 10485760 | |
config.server.grpc_server_max_send_msg_size | int | 10485760 | |
config.server.http_listen_port | int | 8080 | |
config.storage | object | {"engine":"blocks","index_queries_cache_config":{"memcached":{"expiration":"1h"},"memcached_client":{"timeout":"1s"}}} | See https://github.com/cortexproject/cortex/blob/master/docs/configuration/config-file-reference.md#storage_config |
config.storage.index_queries_cache_config.memcached.expiration | string | "1h" | How long keys stay in memcached |
config.storage.index_queries_cache_config.memcached_client.timeout | string | "1s" | Maximum time to wait before giving up on memcached requests. |
config.store_gateway | object | {"sharding_enabled":false} | https://cortexmetrics.io/docs/configuration/configuration-file/#store_gateway_config |
configs.affinity | object | {} | |
configs.annotations | object | {} | |
configs.containerSecurityContext.enabled | bool | true | |
configs.containerSecurityContext.readOnlyRootFilesystem | bool | true | |
configs.enabled | bool | false | |
configs.env | list | [] | |
configs.extraArgs | object | {} | Additional Cortex container arguments, e.g. log.level (debug, info, warn, error) |
configs.extraContainers | list | [] | |
configs.extraPorts | list | [] | |
configs.extraVolumeMounts | list | [] | |
configs.extraVolumes | list | [] | |
configs.initContainers | list | [] | |
configs.livenessProbe.httpGet.path | string | "/ready" | |
configs.livenessProbe.httpGet.port | string | "http-metrics" | |
configs.nodeSelector | object | {} | |
configs.persistentVolume.subPath | string | nil | |
configs.podAnnotations | object | {"prometheus.io/port":"8080","prometheus.io/scrape":"true"} | Pod Annotations |
configs.podDisruptionBudget.maxUnavailable | int | 1 | |
configs.podLabels | object | {} | Pod Labels |
configs.readinessProbe.httpGet.path | string | "/ready" | |
configs.readinessProbe.httpGet.port | string | "http-metrics" | |
configs.replicas | int | 1 | |
configs.resources | object | {} | |
configs.securityContext | object | {} | |
configs.service.annotations | object | {} | |
configs.service.labels | object | {} | |
configs.serviceAccount.name | string | "" | "" disables the individual serviceAccount and uses the global serviceAccount for that component |
configs.serviceMonitor.additionalLabels | object | {} | |
configs.serviceMonitor.enabled | bool | false | |
configs.serviceMonitor.extraEndpointSpec | object | {} | Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint |
configs.serviceMonitor.metricRelabelings | list | [] | |
configs.serviceMonitor.relabelings | list | [] | |
configs.startupProbe.failureThreshold | int | 10 | |
configs.startupProbe.httpGet.path | string | "/ready" | |
configs.startupProbe.httpGet.port | string | "http-metrics" | |
configs.strategy.rollingUpdate.maxSurge | int | 0 | |
configs.strategy.rollingUpdate.maxUnavailable | int | 1 | |
configs.strategy.type | string | "RollingUpdate" | |
configs.terminationGracePeriodSeconds | int | 180 | |
configs.tolerations | list | [] | |
configsdb_postgresql.auth.existing_secret.key | string | nil | |
configsdb_postgresql.auth.existing_secret.name | string | nil | |
configsdb_postgresql.auth.password | string | nil | |
configsdb_postgresql.enabled | bool | false | |
configsdb_postgresql.uri | string | nil | |
distributor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key | string | "app.kubernetes.io/component" | |
distributor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator | string | "In" | |
distributor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0] | string | "distributor" | |
distributor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey | string | "kubernetes.io/hostname" | |
distributor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight | int | 100 | |
distributor.annotations | object | {} | |
distributor.autoscaling.behavior | object | {} | Ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior |
distributor.autoscaling.enabled | bool | false | Creates a HorizontalPodAutoscaler for the distributor pods. |
distributor.autoscaling.maxReplicas | int | 30 | |
distributor.autoscaling.minReplicas | int | 2 | |
distributor.autoscaling.targetCPUUtilizationPercentage | int | 80 | |
distributor.autoscaling.targetMemoryUtilizationPercentage | int | 0 | |
distributor.containerSecurityContext.enabled | bool | true | |
distributor.containerSecurityContext.readOnlyRootFilesystem | bool | true | |
distributor.env | list | [] | |
distributor.extraArgs | object | {} | Additional Cortex container arguments, e.g. log.level (debug, info, warn, error) |
distributor.extraContainers | list | [] | |
distributor.extraPorts | list | [] | |
distributor.extraVolumeMounts | list | [] | |
distributor.extraVolumes | list | [] | |
distributor.initContainers | list | [] | |
distributor.lifecycle | object | {} | |
distributor.livenessProbe.httpGet.path | string | "/ready" | |
distributor.livenessProbe.httpGet.port | string | "http-metrics" | |
distributor.nodeSelector | object | {} | |
distributor.persistentVolume.subPath | string | nil | |
distributor.podAnnotations | object | {"prometheus.io/port":"8080","prometheus.io/scrape":"true"} | Pod Annotations |
distributor.podDisruptionBudget.maxUnavailable | int | 1 | |
distributor.podLabels | object | {} | Pod Labels |
distributor.readinessProbe.httpGet.path | string | "/ready" | |
distributor.readinessProbe.httpGet.port | string | "http-metrics" | |
distributor.replicas | int | 2 | |
distributor.resources | object | {} | |
distributor.securityContext | object | {} | |
distributor.service.annotations | object | {} | |
distributor.service.labels | object | {} | |
distributor.serviceAccount.name | string | "" | "" disables the individual serviceAccount and uses the global serviceAccount for that component |
distributor.serviceMonitor.additionalLabels | object | {} | |
distributor.serviceMonitor.enabled | bool | false | |
distributor.serviceMonitor.extraEndpointSpec | object | {} | Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint |
distributor.serviceMonitor.metricRelabelings | list | [] | |
distributor.serviceMonitor.relabelings | list | [] | |
distributor.startupProbe.failureThreshold | int | 10 | |
distributor.startupProbe.httpGet.path | string | "/ready" | |
distributor.startupProbe.httpGet.port | string | "http-metrics" | |
distributor.strategy.rollingUpdate.maxSurge | int | 0 | |
distributor.strategy.rollingUpdate.maxUnavailable | int | 1 | |
distributor.strategy.type | string | "RollingUpdate" | |
distributor.terminationGracePeriodSeconds | int | 60 | |
distributor.tolerations | list | [] | |
externalConfigSecretName | string | "secret-with-config.yaml" | |
externalConfigVersion | string | "0" | |
image.pullPolicy | string | "IfNotPresent" | |
image.pullSecrets | list | [] | Optionally specify an array of imagePullSecrets. Secrets must be manually created in the namespace. Ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ |
image.repository | string | "quay.io/cortexproject/cortex" | |
image.tag | string | "" | Allows you to override the Cortex version in this chart. Use at your own risk. |
ingester.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key | string | "app.kubernetes.io/component" | |
ingester.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator | string | "In" | |
ingester.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0] | string | "ingester" | |
ingester.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey | string | "kubernetes.io/hostname" | |
ingester.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight | int | 100 | |
ingester.annotations | object | {} | |
ingester.autoscaling.behavior.scaleDown.policies | list | [{"periodSeconds":1800,"type":"Pods","value":1}] | See https://cortexmetrics.io/docs/guides/ingesters-scaling-up-and-down/#scaling-down for scale-down details |
ingester.autoscaling.behavior.scaleDown.stabilizationWindowSeconds | int | 3600 | Uses metrics from the past 1h to make scale-down decisions |
ingester.autoscaling.behavior.scaleUp.policies | list | [{"periodSeconds":1800,"type":"Pods","value":1}] | This default scale-up policy allows adding 1 pod every 30 minutes. Ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior |
ingester.autoscaling.enabled | bool | false | |
ingester.autoscaling.maxReplicas | int | 30 | |
ingester.autoscaling.minReplicas | int | 3 | |
ingester.autoscaling.targetMemoryUtilizationPercentage | int | 80 | |
ingester.containerSecurityContext.enabled | bool | true | |
ingester.containerSecurityContext.readOnlyRootFilesystem | bool | true | |
ingester.env | list | [] | |
ingester.extraArgs | object | {} | Additional Cortex container arguments, e.g. log.level (debug, info, warn, error) |
ingester.extraContainers | list | [] | |
ingester.extraPorts | list | [] | |
ingester.extraVolumeMounts | list | [] | |
ingester.extraVolumes | list | [] | |
ingester.initContainers | list | [] | |
ingester.lifecycle.preStop | object | {"httpGet":{"path":"/ingester/shutdown","port":"http-metrics"}} | The /shutdown preStop hook is recommended as part of the ingester scale-down process, but can be removed to optimize rolling restarts in instances that will never be scaled down or when using chunks storage with WAL disabled. https://cortexmetrics.io/docs/guides/ingesters-scaling-up-and-down/#scaling-down |
ingester.livenessProbe | object | {} | Startup/liveness probes for ingesters are not recommended. Ref: https://cortexmetrics.io/docs/guides/running-cortex-on-kubernetes/#take-extra-care-with-ingesters |
ingester.nodeSelector | object | {} | |
ingester.persistentVolume.accessModes | list | ["ReadWriteOnce"] | Ingester data Persistent Volume access modes. Must match those of the existing PV or dynamic provisioner. Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ |
ingester.persistentVolume.annotations | object | {} | Ingester data Persistent Volume Claim annotations |
ingester.persistentVolume.enabled | bool | true | If true and ingester.statefulSet.enabled is true, Ingester will create/use a Persistent Volume Claim. If false, use emptyDir. |
ingester.persistentVolume.size | string | "2Gi" | Ingester data Persistent Volume size |
ingester.persistentVolume.storageClass | string | nil | Ingester data Persistent Volume Storage Class. If defined, storageClassName: `<storageClass>`. If set to "-", storageClassName: "", which disables dynamic provisioning. If undefined (the default) or set to null, no storageClassName spec is set, choosing the default provisioner. |
ingester.persistentVolume.subPath | string | "" | Subdirectory of the Ingester data Persistent Volume to mount. Useful if the volume's root directory is not empty. |
ingester.podAnnotations | object | {"prometheus.io/port":"8080","prometheus.io/scrape":"true"} | Pod Annotations |
ingester.podDisruptionBudget.maxUnavailable | int | 1 | |
ingester.podLabels | object | {} | Pod Labels |
ingester.readinessProbe.httpGet.path | string | "/ready" | |
ingester.readinessProbe.httpGet.port | string | "http-metrics" | |
ingester.replicas | int | 3 | |
ingester.resources | object | {} | |
ingester.securityContext | object | {} | |
ingester.service.annotations | object | {} | |
ingester.service.labels | object | {} | |
ingester.serviceAccount.name | string | nil | |
ingester.serviceMonitor.additionalLabels | object | {} | |
ingester.serviceMonitor.enabled | bool | false | |
ingester.serviceMonitor.extraEndpointSpec | object | {} | Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint |
ingester.serviceMonitor.metricRelabelings | list | [] | |
ingester.serviceMonitor.relabelings | list | [] | |
ingester.startupProbe | object | {} | Startup/liveness probes for ingesters are not recommended. Ref: https://cortexmetrics.io/docs/guides/running-cortex-on-kubernetes/#take-extra-care-with-ingesters |
ingester.statefulSet.enabled | bool | false | If true, use a StatefulSet instead of a Deployment for pod management. This is useful when using WAL. |
ingester.statefulSet.podManagementPolicy | string | "OrderedReady" | Ref: https://cortexmetrics.io/docs/guides/ingesters-scaling-up-and-down/#scaling-down and https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies for scale-down details |
ingester.statefulStrategy.type | string | "RollingUpdate" | |
ingester.strategy.rollingUpdate.maxSurge | int | 0 | |
ingester.strategy.rollingUpdate.maxUnavailable | int | 1 | |
ingester.strategy.type | string | "RollingUpdate" | |
ingester.terminationGracePeriodSeconds | int | 240 | |
ingester.tolerations | list | [] | |
ingress.annotations | object | {} | |
ingress.enabled | bool | false | |
ingress.hosts[0].host | string | "chart-example.local" | |
ingress.hosts[0].paths[0] | string | "/" | |
ingress.ingressClass.enabled | bool | false | |
ingress.ingressClass.name | string | "nginx" | |
ingress.tls | list | [] | |
memcached | object | {"architecture":"high-availability","enabled":false,"extraEnv":[{"name":"MEMCACHED_CACHE_SIZE","value":"1024"},{"name":"MEMCACHED_MAX_CONNECTIONS","value":"1024"},{"name":"MEMCACHED_THREADS","value":"4"}],"metrics":{"enabled":true,"serviceMonitor":{"enabled":false}},"replicaCount":2,"resources":{}} |
chunk caching for legacy chunk storage engine |
memcached-blocks-index.architecture | string | "high-availability" |
|
memcached-blocks-index.extraEnv[0] | object | {"name":"MEMCACHED_CACHE_SIZE","value":"1024"} |
MEMCACHED_CACHE_SIZE is the amount of memory allocated to memcached for object storage |
memcached-blocks-index.extraEnv[1] | object | {"name":"MEMCACHED_MAX_CONNECTIONS","value":"1024"} |
MEMCACHED_MAX_CONNECTIONS is the maximum number of simultaneous connections to the memcached service |
memcached-blocks-index.extraEnv[2] | object | {"name":"MEMCACHED_THREADS","value":"4"} |
MEMCACHED_THREADS is the number of threads to use when processing incoming requests. By default, memcached is configured to use 4 concurrent threads. The threading improves the performance of storing and retrieving data in the cache, using a locking system to prevent different threads overwriting or updating the same values. |
memcached-blocks-index.metrics.enabled | bool | true |
|
memcached-blocks-index.metrics.serviceMonitor.enabled | bool | false |
|
memcached-blocks-index.replicaCount | int | 2 |
|
memcached-blocks-index.resources | object | {} |
|
memcached-blocks-metadata.architecture | string | "high-availability" |
|
memcached-blocks-metadata.extraEnv[0] | object | {"name":"MEMCACHED_CACHE_SIZE","value":"1024"} |
MEMCACHED_CACHE_SIZE is the amount of memory, in megabytes, allocated to memcached for the cache |
memcached-blocks-metadata.extraEnv[1] | object | {"name":"MEMCACHED_MAX_CONNECTIONS","value":"1024"} |
MEMCACHED_MAX_CONNECTIONS is the maximum number of simultaneous connections to the memcached service |
memcached-blocks-metadata.extraEnv[2] | object | {"name":"MEMCACHED_THREADS","value":"4"} |
MEMCACHED_THREADS is the number of threads to use when processing incoming requests. By default, memcached is configured to use 4 concurrent threads. Threading improves the performance of storing and retrieving data in the cache, using a locking system to prevent different threads from overwriting or updating the same values. |
memcached-blocks-metadata.metrics.enabled | bool | true |
|
memcached-blocks-metadata.metrics.serviceMonitor.enabled | bool | false |
|
memcached-blocks-metadata.replicaCount | int | 2 |
|
memcached-blocks-metadata.resources | object | {} |
|
memcached-blocks.architecture | string | "high-availability" |
|
memcached-blocks.extraEnv[0] | object | {"name":"MEMCACHED_CACHE_SIZE","value":"1024"} |
MEMCACHED_CACHE_SIZE is the amount of memory, in megabytes, allocated to memcached for the cache |
memcached-blocks.extraEnv[1] | object | {"name":"MEMCACHED_MAX_CONNECTIONS","value":"1024"} |
MEMCACHED_MAX_CONNECTIONS is the maximum number of simultaneous connections to the memcached service |
memcached-blocks.extraEnv[2] | object | {"name":"MEMCACHED_THREADS","value":"4"} |
MEMCACHED_THREADS is the number of threads to use when processing incoming requests. By default, memcached is configured to use 4 concurrent threads. Threading improves the performance of storing and retrieving data in the cache, using a locking system to prevent different threads from overwriting or updating the same values. |
memcached-blocks.metrics.enabled | bool | true |
|
memcached-blocks.metrics.serviceMonitor.enabled | bool | false |
|
memcached-blocks.replicaCount | int | 2 |
|
memcached-blocks.resources | object | {} |
|
memcached-frontend.architecture | string | "high-availability" |
|
memcached-frontend.enabled | bool | false |
|
memcached-frontend.extraEnv[0] | object | {"name":"MEMCACHED_CACHE_SIZE","value":"1024"} |
MEMCACHED_CACHE_SIZE is the amount of memory, in megabytes, allocated to memcached for the cache |
memcached-frontend.extraEnv[1] | object | {"name":"MEMCACHED_MAX_CONNECTIONS","value":"1024"} |
MEMCACHED_MAX_CONNECTIONS is the maximum number of simultaneous connections to the memcached service |
memcached-frontend.extraEnv[2] | object | {"name":"MEMCACHED_THREADS","value":"4"} |
MEMCACHED_THREADS is the number of threads to use when processing incoming requests. By default, memcached is configured to use 4 concurrent threads. Threading improves the performance of storing and retrieving data in the cache, using a locking system to prevent different threads from overwriting or updating the same values. |
memcached-frontend.metrics.enabled | bool | true |
|
memcached-frontend.metrics.serviceMonitor.enabled | bool | false |
|
memcached-frontend.replicaCount | int | 2 |
|
memcached-frontend.resources | object | {} |
|
memcached-index-read | object | {"architecture":"high-availability","enabled":false,"extraEnv":[{"name":"MEMCACHED_CACHE_SIZE","value":"1024"},{"name":"MEMCACHED_MAX_CONNECTIONS","value":"1024"},{"name":"MEMCACHED_THREADS","value":"4"}],"metrics":{"enabled":true,"serviceMonitor":{"enabled":false}},"replicaCount":2,"resources":{}} |
index read caching for legacy chunk storage engine |
memcached-index-read.extraEnv[0] | object | {"name":"MEMCACHED_CACHE_SIZE","value":"1024"} |
MEMCACHED_CACHE_SIZE is the amount of memory, in megabytes, allocated to memcached for the cache |
memcached-index-read.extraEnv[1] | object | {"name":"MEMCACHED_MAX_CONNECTIONS","value":"1024"} |
MEMCACHED_MAX_CONNECTIONS is the maximum number of simultaneous connections to the memcached service |
memcached-index-read.extraEnv[2] | object | {"name":"MEMCACHED_THREADS","value":"4"} |
MEMCACHED_THREADS is the number of threads to use when processing incoming requests. By default, memcached is configured to use 4 concurrent threads. Threading improves the performance of storing and retrieving data in the cache, using a locking system to prevent different threads from overwriting or updating the same values. |
memcached-index-write | object | {"architecture":"high-availability","enabled":false,"extraEnv":[{"name":"MEMCACHED_CACHE_SIZE","value":"1024"},{"name":"MEMCACHED_MAX_CONNECTIONS","value":"1024"},{"name":"MEMCACHED_THREADS","value":"4"}],"metrics":{"enabled":true,"serviceMonitor":{"enabled":false}},"replicaCount":2,"resources":{}} |
index write caching for legacy chunk storage engine |
memcached-index-write.extraEnv[0] | object | {"name":"MEMCACHED_CACHE_SIZE","value":"1024"} |
MEMCACHED_CACHE_SIZE is the amount of memory, in megabytes, allocated to memcached for the cache |
memcached-index-write.extraEnv[1] | object | {"name":"MEMCACHED_MAX_CONNECTIONS","value":"1024"} |
MEMCACHED_MAX_CONNECTIONS is the maximum number of simultaneous connections to the memcached service |
memcached-index-write.extraEnv[2] | object | {"name":"MEMCACHED_THREADS","value":"4"} |
MEMCACHED_THREADS is the number of threads to use when processing incoming requests. By default, memcached is configured to use 4 concurrent threads. Threading improves the performance of storing and retrieving data in the cache, using a locking system to prevent different threads from overwriting or updating the same values. |
memcached.extraEnv[0] | object | {"name":"MEMCACHED_CACHE_SIZE","value":"1024"} |
MEMCACHED_CACHE_SIZE is the amount of memory, in megabytes, allocated to memcached for the cache |
memcached.extraEnv[1] | object | {"name":"MEMCACHED_MAX_CONNECTIONS","value":"1024"} |
MEMCACHED_MAX_CONNECTIONS is the maximum number of simultaneous connections to the memcached service |
memcached.extraEnv[2] | object | {"name":"MEMCACHED_THREADS","value":"4"} |
MEMCACHED_THREADS is the number of threads to use when processing incoming requests. By default, memcached is configured to use 4 concurrent threads. Threading improves the performance of storing and retrieving data in the cache, using a locking system to prevent different threads from overwriting or updating the same values. |
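As a sketch, the memcached environment variables above can be overridden per subchart in a values file; the sizes here are illustrative, not recommendations:

```yaml
memcached-blocks:
  extraEnv:
    - name: MEMCACHED_CACHE_SIZE       # cache memory in MB
      value: "2048"
    - name: MEMCACHED_MAX_CONNECTIONS  # max simultaneous connections
      value: "2048"
    - name: MEMCACHED_THREADS          # worker threads
      value: "4"
```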
nginx.affinity | object | {} |
|
nginx.annotations | object | {} |
|
nginx.autoscaling.behavior | object | {} |
Ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior |
nginx.autoscaling.enabled | bool | false |
Creates a HorizontalPodAutoscaler for the nginx pods. |
nginx.autoscaling.maxReplicas | int | 30 |
|
nginx.autoscaling.minReplicas | int | 2 |
|
nginx.autoscaling.targetCPUUtilizationPercentage | int | 80 |
|
nginx.autoscaling.targetMemoryUtilizationPercentage | int | 0 |
|
nginx.config.auth_orgs | list | [] |
(optional) List of auth tenants to set in the nginx config |
nginx.config.basicAuthSecretName | string | "" |
(optional) Name of a basic auth secret. To use this option, a Secret with htpasswd-formatted contents at the key ".htpasswd" must exist, e.g. a Secret named "my-secret" whose stringData contains a ".htpasswd" entry |
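The Secret referenced by this option looks roughly like the following; the namespace and the htpasswd contents are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: cortex              # placeholder namespace
stringData:
  .htpasswd: |
    # htpasswd-formatted entries go here (placeholder)
```

With `nginx.config.basicAuthSecretName: my-secret` set, nginx reads this key for basic auth.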
nginx.config.client_max_body_size | string | "1M" |
ref: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size |
nginx.config.dnsResolver | string | "kube-dns.kube-system.svc.cluster.local" |
|
nginx.config.httpSnippet | string | "" |
arbitrary snippet to inject in the http { } section of the nginx config |
nginx.config.mainSnippet | string | "" |
arbitrary snippet to inject in the top section of the nginx config |
nginx.config.serverSnippet | string | "" |
arbitrary snippet to inject in the server { } section of the nginx config |
nginx.config.setHeaders | object | {} |
|
nginx.containerSecurityContext.enabled | bool | true |
|
nginx.containerSecurityContext.readOnlyRootFilesystem | bool | false |
|
nginx.enabled | bool | true |
|
nginx.env | list | [] |
|
nginx.extraArgs | object | {} |
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error) |
nginx.extraContainers | list | [] |
|
nginx.extraPorts | list | [] |
|
nginx.extraVolumeMounts | list | [] |
|
nginx.extraVolumes | list | [] |
|
nginx.http_listen_port | int | 80 |
|
nginx.image.pullPolicy | string | "IfNotPresent" |
|
nginx.image.repository | string | "nginx" |
|
nginx.image.tag | float | 1.21 |
|
nginx.initContainers | list | [] |
|
nginx.livenessProbe.httpGet.path | string | "/healthz" |
|
nginx.livenessProbe.httpGet.port | string | "http-metrics" |
|
nginx.nodeSelector | object | {} |
|
nginx.persistentVolume.subPath | string | nil |
|
nginx.podAnnotations | object | {} |
Pod Annotations |
nginx.podDisruptionBudget.maxUnavailable | int | 1 |
|
nginx.podLabels | object | {} |
Pod Labels |
nginx.readinessProbe.httpGet.path | string | "/healthz" |
|
nginx.readinessProbe.httpGet.port | string | "http-metrics" |
|
nginx.replicas | int | 2 |
|
nginx.resources | object | {} |
|
nginx.securityContext | object | {} |
|
nginx.service.annotations | object | {} |
|
nginx.service.labels | object | {} |
|
nginx.service.type | string | "ClusterIP" |
|
nginx.serviceAccount.name | string | "" |
"" disables the individual serviceAccount and uses the global serviceAccount for that component |
nginx.startupProbe.failureThreshold | int | 10 |
|
nginx.startupProbe.httpGet.path | string | "/healthz" |
|
nginx.startupProbe.httpGet.port | string | "http-metrics" |
|
nginx.strategy.rollingUpdate.maxSurge | int | 0 |
|
nginx.strategy.rollingUpdate.maxUnavailable | int | 1 |
|
nginx.strategy.type | string | "RollingUpdate" |
|
nginx.terminationGracePeriodSeconds | int | 10 |
|
nginx.tolerations | list | [] |
|
querier.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key | string | "app.kubernetes.io/component" |
|
querier.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator | string | "In" |
|
querier.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0] | string | "querier" |
|
querier.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey | string | "kubernetes.io/hostname" |
|
querier.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight | int | 100 |
|
querier.annotations | object | {} |
|
querier.autoscaling.behavior | object | {} |
Ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior |
querier.autoscaling.enabled | bool | false |
Creates a HorizontalPodAutoscaler for the querier pods. |
querier.autoscaling.maxReplicas | int | 30 |
|
querier.autoscaling.minReplicas | int | 2 |
|
querier.autoscaling.targetCPUUtilizationPercentage | int | 80 |
|
querier.autoscaling.targetMemoryUtilizationPercentage | int | 0 |
|
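For example, the querier autoscaling values above can be enabled like this; the `behavior` stanza follows the Kubernetes HPA API linked above, and the numbers are illustrative:

```yaml
querier:
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 80
    behavior:
      scaleDown:
        stabilizationWindowSeconds: 300  # dampen scale-down flapping
```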
querier.containerSecurityContext.enabled | bool | true |
|
querier.containerSecurityContext.readOnlyRootFilesystem | bool | true |
|
querier.env | list | [] |
|
querier.extraArgs | object | {} |
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error) |
querier.extraContainers | list | [] |
|
querier.extraPorts | list | [] |
|
querier.extraVolumeMounts | list | [] |
|
querier.extraVolumes | list | [] |
|
querier.initContainers | list | [] |
|
querier.lifecycle | object | {} |
|
querier.livenessProbe.httpGet.path | string | "/ready" |
|
querier.livenessProbe.httpGet.port | string | "http-metrics" |
|
querier.nodeSelector | object | {} |
|
querier.persistentVolume.subPath | string | nil |
|
querier.podAnnotations | object | {"prometheus.io/port":"8080","prometheus.io/scrape":"true"} |
Pod Annotations |
querier.podDisruptionBudget.maxUnavailable | int | 1 |
|
querier.podLabels | object | {} |
Pod Labels |
querier.readinessProbe.httpGet.path | string | "/ready" |
|
querier.readinessProbe.httpGet.port | string | "http-metrics" |
|
querier.replicas | int | 2 |
|
querier.resources | object | {} |
|
querier.securityContext | object | {} |
|
querier.service.annotations | object | {} |
|
querier.service.labels | object | {} |
|
querier.serviceAccount.name | string | "" |
"" disables the individual serviceAccount and uses the global serviceAccount for that component |
querier.serviceMonitor.additionalLabels | object | {} |
|
querier.serviceMonitor.enabled | bool | false |
|
querier.serviceMonitor.extraEndpointSpec | object | {} |
Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint |
querier.serviceMonitor.metricRelabelings | list | [] |
|
querier.serviceMonitor.relabelings | list | [] |
|
querier.startupProbe.failureThreshold | int | 10 |
|
querier.startupProbe.httpGet.path | string | "/ready" |
|
querier.startupProbe.httpGet.port | string | "http-metrics" |
|
querier.strategy.rollingUpdate.maxSurge | int | 0 |
|
querier.strategy.rollingUpdate.maxUnavailable | int | 1 |
|
querier.strategy.type | string | "RollingUpdate" |
|
querier.terminationGracePeriodSeconds | int | 180 |
|
querier.tolerations | list | [] |
|
query_frontend.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key | string | "app.kubernetes.io/component" |
|
query_frontend.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator | string | "In" |
|
query_frontend.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0] | string | "query-frontend" |
|
query_frontend.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey | string | "kubernetes.io/hostname" |
|
query_frontend.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight | int | 100 |
|
query_frontend.annotations | object | {} |
|
query_frontend.containerSecurityContext.enabled | bool | true |
|
query_frontend.containerSecurityContext.readOnlyRootFilesystem | bool | true |
|
query_frontend.env | list | [] |
|
query_frontend.extraArgs | object | {} |
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error) |
query_frontend.extraContainers | list | [] |
|
query_frontend.extraPorts | list | [] |
|
query_frontend.extraVolumeMounts | list | [] |
|
query_frontend.extraVolumes | list | [] |
|
query_frontend.initContainers | list | [] |
|
query_frontend.lifecycle | object | {} |
|
query_frontend.livenessProbe.httpGet.path | string | "/ready" |
|
query_frontend.livenessProbe.httpGet.port | string | "http-metrics" |
|
query_frontend.nodeSelector | object | {} |
|
query_frontend.persistentVolume.subPath | string | nil |
|
query_frontend.podAnnotations | object | {"prometheus.io/port":"8080","prometheus.io/scrape":"true"} |
Pod Annotations |
query_frontend.podDisruptionBudget.maxUnavailable | int | 1 |
|
query_frontend.podLabels | object | {} |
Pod Labels |
query_frontend.readinessProbe.httpGet.path | string | "/ready" |
|
query_frontend.readinessProbe.httpGet.port | string | "http-metrics" |
|
query_frontend.replicas | int | 2 |
|
query_frontend.resources | object | {} |
|
query_frontend.securityContext | object | {} |
|
query_frontend.service.annotations | object | {} |
|
query_frontend.service.labels | object | {} |
|
query_frontend.serviceAccount.name | string | "" |
"" disables the individual serviceAccount and uses the global serviceAccount for that component |
query_frontend.serviceMonitor.additionalLabels | object | {} |
|
query_frontend.serviceMonitor.enabled | bool | false |
|
query_frontend.serviceMonitor.extraEndpointSpec | object | {} |
Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint |
query_frontend.serviceMonitor.metricRelabelings | list | [] |
|
query_frontend.serviceMonitor.relabelings | list | [] |
|
query_frontend.startupProbe.failureThreshold | int | 10 |
|
query_frontend.startupProbe.httpGet.path | string | "/ready" |
|
query_frontend.startupProbe.httpGet.port | string | "http-metrics" |
|
query_frontend.strategy.rollingUpdate.maxSurge | int | 0 |
|
query_frontend.strategy.rollingUpdate.maxUnavailable | int | 1 |
|
query_frontend.strategy.type | string | "RollingUpdate" |
|
query_frontend.terminationGracePeriodSeconds | int | 180 |
|
query_frontend.tolerations | list | [] |
|
ruler.affinity | object | {} |
|
ruler.annotations | object | {} |
|
ruler.containerSecurityContext.enabled | bool | true |
|
ruler.containerSecurityContext.readOnlyRootFilesystem | bool | true |
|
ruler.directories | object | {} |
allow configuring rules via configmap. ref: https://cortexproject.github.io/cortex-helm-chart/guides/configure_rules_via_configmap.html |
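A minimal sketch of `ruler.directories`, assuming the tenant/filename layout described in the linked guide; `fake` is the tenant ID Cortex uses when auth is disabled, and the recording rule itself is illustrative:

```yaml
ruler:
  directories:
    fake:                        # tenant ID ("fake" when auth is disabled)
      rules.yaml: |
        groups:
          - name: example
            rules:
              - record: job:up:sum
                expr: sum by (job) (up)
```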
ruler.enabled | bool | true |
|
ruler.env | list | [] |
|
ruler.extraArgs | object | {} |
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error) |
ruler.extraContainers | list | [] |
|
ruler.extraPorts | list | [] |
|
ruler.extraVolumeMounts | list | [] |
|
ruler.extraVolumes | list | [] |
|
ruler.initContainers | list | [] |
|
ruler.livenessProbe.httpGet.path | string | "/ready" |
|
ruler.livenessProbe.httpGet.port | string | "http-metrics" |
|
ruler.nodeSelector | object | {} |
|
ruler.persistentVolume.subPath | string | nil |
|
ruler.podAnnotations | object | {"prometheus.io/port":"8080","prometheus.io/scrape":"true"} |
Pod Annotations |
ruler.podDisruptionBudget.maxUnavailable | int | 1 |
|
ruler.podLabels | object | {} |
Pod Labels |
ruler.readinessProbe.httpGet.path | string | "/ready" |
|
ruler.readinessProbe.httpGet.port | string | "http-metrics" |
|
ruler.replicas | int | 1 |
|
ruler.resources | object | {} |
|
ruler.securityContext | object | {} |
|
ruler.service.annotations | object | {} |
|
ruler.service.labels | object | {} |
|
ruler.serviceAccount.name | string | "" |
"" disables the individual serviceAccount and uses the global serviceAccount for that component |
ruler.serviceMonitor.additionalLabels | object | {} |
|
ruler.serviceMonitor.enabled | bool | false |
|
ruler.serviceMonitor.extraEndpointSpec | object | {} |
Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint |
ruler.serviceMonitor.metricRelabelings | list | [] |
|
ruler.serviceMonitor.relabelings | list | [] |
|
ruler.sidecar | object | {"containerSecurityContext":{"enabled":true,"readOnlyRootFilesystem":true},"defaultFolderName":null,"enableUniqueFilenames":false,"enabled":false,"folder":"/tmp/rules","folderAnnotation":null,"image":{"repository":"quay.io/kiwigrid/k8s-sidecar","sha":"","tag":"1.10.7"},"imagePullPolicy":"IfNotPresent","label":"cortex_rules","labelValue":null,"resources":{},"searchNamespace":null,"watchMethod":null} |
Sidecar that collects ConfigMaps carrying the specified label and stores the included files in the respective folders |
ruler.sidecar.defaultFolderName | string | nil |
The default folder name; if set, a subfolder with this name is created under the folder and the rules are placed there instead |
ruler.sidecar.folder | string | "/tmp/rules" |
folder in the pod that should hold the collected rules (unless defaultFolderName is set) |
ruler.sidecar.folderAnnotation | string | nil |
If specified, the sidecar will look for an annotation with this name to determine the folder in which to place the collected files. You can use this parameter together with provider.foldersFromFilesStructure to annotate configmaps and create a folder structure. |
ruler.sidecar.label | string | "cortex_rules" |
label that the configmaps with rules are marked with |
ruler.sidecar.labelValue | string | nil |
value of label that the configmaps with rules are set to |
ruler.sidecar.searchNamespace | string | nil |
If specified, the sidecar will search for rules config-maps inside this namespace. Otherwise, the namespace in which the sidecar is running will be used. It is also possible to specify ALL to search in all namespaces. |
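With the sidecar enabled, rules can instead be delivered via labeled ConfigMaps. A sketch (the ConfigMap name and rule are placeholders; the label key matches `ruler.sidecar.label` above):

```yaml
# values.yaml: enable the sidecar
ruler:
  sidecar:
    enabled: true

---
# A ConfigMap the sidecar will pick up
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-rules                 # placeholder name
  labels:
    cortex_rules: "1"            # label key from ruler.sidecar.label
data:
  rules.yaml: |
    groups:
      - name: example
        rules:
          - record: job:up:sum
            expr: sum by (job) (up)
```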
ruler.startupProbe.failureThreshold | int | 10 |
|
ruler.startupProbe.httpGet.path | string | "/ready" |
|
ruler.startupProbe.httpGet.port | string | "http-metrics" |
|
ruler.strategy.rollingUpdate.maxSurge | int | 0 |
|
ruler.strategy.rollingUpdate.maxUnavailable | int | 1 |
|
ruler.strategy.type | string | "RollingUpdate" |
|
ruler.terminationGracePeriodSeconds | int | 180 |
|
ruler.tolerations | list | [] |
|
runtimeconfigmap.annotations | object | {} |
|
runtimeconfigmap.create | bool | true |
If true, a configmap for the runtime_config will be created. If false, the configmap must exist already on the cluster or pods will fail to create. |
runtimeconfigmap.runtime_config | object | {} |
https://cortexmetrics.io/docs/configuration/arguments/#runtime-configuration-file |
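As a sketch, per-tenant limit overrides are one common use of the runtime config (see the linked Cortex docs for the full schema); the tenant ID and limit below are illustrative:

```yaml
runtimeconfigmap:
  create: true
  runtime_config:
    overrides:
      tenant-a:                  # placeholder tenant ID
        ingestion_rate: 100000   # illustrative per-tenant limit
```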
serviceAccount.annotations | object | {} |
|
serviceAccount.automountServiceAccountToken | bool | true |
|
serviceAccount.create | bool | true |
|
serviceAccount.name | string | nil |
|
store_gateway.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key | string | "app.kubernetes.io/component" |
|
store_gateway.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator | string | "In" |
|
store_gateway.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0] | string | "store-gateway" |
|
store_gateway.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey | string | "kubernetes.io/hostname" |
|
store_gateway.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight | int | 100 |
|
store_gateway.annotations | object | {} |
|
store_gateway.containerSecurityContext.enabled | bool | true |
|
store_gateway.containerSecurityContext.readOnlyRootFilesystem | bool | true |
|
store_gateway.env | list | [] |
|
store_gateway.extraArgs | object | {} |
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error) |
store_gateway.extraContainers | list | [] |
|
store_gateway.extraPorts | list | [] |
|
store_gateway.extraVolumeMounts | list | [] |
|
store_gateway.extraVolumes | list | [] |
|
store_gateway.initContainers | list | [] |
|
store_gateway.livenessProbe.httpGet.path | string | "/ready" |
|
store_gateway.livenessProbe.httpGet.port | string | "http-metrics" |
|
store_gateway.livenessProbe.httpGet.scheme | string | "HTTP" |
|
store_gateway.nodeSelector | object | {} |
|
store_gateway.persistentVolume.accessModes | list | ["ReadWriteOnce"] |
Store-gateway data Persistent Volume access modes. Must match those of an existing PV or dynamic provisioner. Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ |
store_gateway.persistentVolume.annotations | object | {} |
Store-gateway data Persistent Volume Claim annotations |
store_gateway.persistentVolume.enabled | bool | true |
If true, the Store-gateway will create/use a Persistent Volume Claim; if false, an emptyDir is used |
store_gateway.persistentVolume.size | string | "2Gi" |
Store-gateway data Persistent Volume size |
store_gateway.persistentVolume.storageClass | string | nil |
Store-gateway data Persistent Volume Storage Class. If defined, storageClassName: <storageClass>. If set to "-", storageClassName: "" is used, which disables dynamic provisioning. If undefined (the default) or set to null, no storageClassName spec is set and the default provisioner is chosen |
store_gateway.persistentVolume.subPath | string | "" |
Subdirectory of the Store-gateway data Persistent Volume to mount. Useful if the volume's root directory is not empty |
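The persistent volume settings above can be combined like this; the storage class name is a placeholder, and "-" would disable dynamic provisioning as described:

```yaml
store_gateway:
  persistentVolume:
    enabled: true
    size: 10Gi
    storageClass: fast-ssd       # placeholder; "-" disables dynamic provisioning
    accessModes:
      - ReadWriteOnce
```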
store_gateway.podAnnotations | object | {"prometheus.io/port":"8080","prometheus.io/scrape":"true"} |
Pod Annotations |
store_gateway.podDisruptionBudget.maxUnavailable | int | 1 |
|
store_gateway.podLabels | object | {} |
Pod Labels |
store_gateway.readinessProbe.httpGet.path | string | "/ready" |
|
store_gateway.readinessProbe.httpGet.port | string | "http-metrics" |
|
store_gateway.replicas | int | 1 |
|
store_gateway.resources | object | {} |
|
store_gateway.securityContext | object | {} |
|
store_gateway.service.annotations | object | {} |
|
store_gateway.service.labels | object | {} |
|
store_gateway.serviceAccount.name | string | "" |
"" disables the individual serviceAccount and uses the global serviceAccount for that component |
store_gateway.serviceMonitor.additionalLabels | object | {} |
|
store_gateway.serviceMonitor.enabled | bool | false |
|
store_gateway.serviceMonitor.extraEndpointSpec | object | {} |
Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint |
store_gateway.serviceMonitor.metricRelabelings | list | [] |
|
store_gateway.serviceMonitor.relabelings | list | [] |
|
store_gateway.startupProbe.failureThreshold | int | 60 |
|
store_gateway.startupProbe.httpGet.path | string | "/ready" |
|
store_gateway.startupProbe.httpGet.port | string | "http-metrics" |
|
store_gateway.startupProbe.httpGet.scheme | string | "HTTP" |
|
store_gateway.startupProbe.initialDelaySeconds | int | 120 |
|
store_gateway.startupProbe.periodSeconds | int | 30 |
|
store_gateway.strategy.type | string | "RollingUpdate" |
|
store_gateway.terminationGracePeriodSeconds | int | 240 |
|
store_gateway.tolerations | list | [] |
|
table_manager.affinity | object | {} |
|
table_manager.annotations | object | {} |
|
table_manager.containerSecurityContext.enabled | bool | true |
|
table_manager.containerSecurityContext.readOnlyRootFilesystem | bool | true |
|
table_manager.env | list | [] |
|
table_manager.extraArgs | object | {} |
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error) |
table_manager.extraContainers | list | [] |
|
table_manager.extraPorts | list | [] |
|
table_manager.extraVolumeMounts | list | [] |
|
table_manager.extraVolumes | list | [] |
|
table_manager.initContainers | list | [] |
|
table_manager.livenessProbe.httpGet.path | string | "/ready" |
|
table_manager.livenessProbe.httpGet.port | string | "http-metrics" |
|
table_manager.nodeSelector | object | {} |
|
table_manager.persistentVolume.subPath | string | nil |
|
table_manager.podAnnotations | object | {"prometheus.io/port":"8080","prometheus.io/scrape":"true"} |
Pod Annotations |
table_manager.podDisruptionBudget.maxUnavailable | int | 1 |
|
table_manager.podLabels | object | {} |
Pod Labels |
table_manager.readinessProbe.httpGet.path | string | "/ready" |
|
table_manager.readinessProbe.httpGet.port | string | "http-metrics" |
|
table_manager.replicas | int | 1 |
|
table_manager.resources | object | {} |
|
table_manager.securityContext | object | {} |
|
table_manager.service.annotations | object | {} |
|
table_manager.service.labels | object | {} |
|
table_manager.serviceAccount.name | string | "" |
"" disables the individual serviceAccount and uses the global serviceAccount for that component |
table_manager.serviceMonitor.additionalLabels | object | {} |
|
table_manager.serviceMonitor.enabled | bool | false |
|
table_manager.serviceMonitor.extraEndpointSpec | object | {} |
Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint |
table_manager.serviceMonitor.metricRelabelings | list | [] |
|
table_manager.serviceMonitor.relabelings | list | [] |
|
table_manager.startupProbe.failureThreshold | int | 10 |
|
table_manager.startupProbe.httpGet.path | string | "/ready" |
|
table_manager.startupProbe.httpGet.port | string | "http-metrics" |
|
table_manager.strategy.rollingUpdate.maxSurge | int | 0 |
|
table_manager.strategy.rollingUpdate.maxUnavailable | int | 1 |
|
table_manager.strategy.type | string | "RollingUpdate" |
|
table_manager.terminationGracePeriodSeconds | int | 180 |
|
table_manager.tolerations | list | [] |
|
tags.blocks-storage-memcached | bool | false |
Set to true to enable block storage memcached caching |
useConfigMap | bool | false |
|
useExternalConfig | bool | false |