From 4ac4de26b0d78679ae78ffd06e5f2f32770e067f Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Peter=20=C5=A0tibran=C3=BD?=
Date: Tue, 8 Feb 2022 16:58:47 +0100
Subject: [PATCH 1/2] Remove mentions of obsolete sharding-enabled flags.
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Signed-off-by: Peter Štibraný
---
 docs/sources/architecture/compactor.md           |  2 +-
 docs/sources/architecture/compactor.template     |  2 +-
 docs/sources/architecture/store-gateway.md       |  4 ++--
 docs/sources/architecture/store-gateway.template |  4 ++--
 docs/sources/guides/sharded_ruler.md             | 12 +++---------
 5 files changed, 9 insertions(+), 15 deletions(-)

diff --git a/docs/sources/architecture/compactor.md b/docs/sources/architecture/compactor.md
index 708eaff07cf..ac149f4f485 100644
--- a/docs/sources/architecture/compactor.md
+++ b/docs/sources/architecture/compactor.md
@@ -77,7 +77,7 @@ Whenever the pool of compactors increase or decrease (ie. following up a scale u
 
 The compactor sharding is based on the Mimir [hash ring](../architecture.md#the-hash-ring). At startup, a compactor generates random tokens and registers itself to the ring. While running, it periodically scans the storage bucket at every interval defined by `-compactor.compaction-interval` to discover the list of tenants in the storage and compacts blocks for each tenant whose hash matches the token ranges that are assigned to the instance itself within the ring.
 
-This feature can be enabled via `-compactor.sharding-enabled=true` and requires the backend [hash ring](../architecture.md#the-hash-ring) to be configured via `-compactor.ring.*` flags (or their respective YAML config options).
+This feature requires the backend [hash ring](../architecture.md#the-hash-ring) to be configured via `-compactor.ring.*` flags (or their respective YAML config options).
 
 ### Waiting for stable ring at startup
 
diff --git a/docs/sources/architecture/compactor.template b/docs/sources/architecture/compactor.template
index 5da51d0708f..7d9bf062486 100644
--- a/docs/sources/architecture/compactor.template
+++ b/docs/sources/architecture/compactor.template
@@ -77,7 +77,7 @@ Whenever the pool of compactors increase or decrease (ie. following up a scale u
 
 The compactor sharding is based on the Mimir [hash ring](../architecture.md#the-hash-ring). At startup, a compactor generates random tokens and registers itself to the ring. While running, it periodically scans the storage bucket at every interval defined by `-compactor.compaction-interval` to discover the list of tenants in the storage and compacts blocks for each tenant whose hash matches the token ranges that are assigned to the instance itself within the ring.
 
-This feature can be enabled via `-compactor.sharding-enabled=true` and requires the backend [hash ring](../architecture.md#the-hash-ring) to be configured via `-compactor.ring.*` flags (or their respective YAML config options).
+This feature requires the backend [hash ring](../architecture.md#the-hash-ring) to be configured via `-compactor.ring.*` flags (or their respective YAML config options).
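+
+For example, here is a sketch of a minimal compactor ring configuration using Consul as the key-value store. The hostname is illustrative, and the flag names assume the Consul backend follows the `-compactor.ring.*` pattern described above:
+
+```
+  -compactor.ring.store=consul
+  -compactor.ring.consul.hostname=consul.dev.svc.cluster.local:8500
+```
+
+Other key-value store backends (such as `etcd` or `memberlist`) can be selected via `-compactor.ring.store` in the same way.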
 
 ### Waiting for stable ring at startup
 
diff --git a/docs/sources/architecture/store-gateway.md b/docs/sources/architecture/store-gateway.md
index b6d12236e76..904e8041823 100644
--- a/docs/sources/architecture/store-gateway.md
+++ b/docs/sources/architecture/store-gateway.md
@@ -49,9 +49,9 @@ Store-gateways continuously monitor the ring state and whenever the ring topolog
 
 For each block belonging to a store-gateway shard, the store-gateway loads its `meta.json`, the `deletion-mark.json` and the index-header. Once a block is loaded on the store-gateway, it's ready to be queried by queriers. When the querier queries blocks through a store-gateway, the response will contain the list of actually queried block IDs. If a querier tries to query a block which has not been loaded by a store-gateway, the querier will either retry on a different store-gateway (if blocks replication is enabled) or fail the query.
 
-Blocks can be replicated across multiple store-gateway instances based on a replication factor configured via `-store-gateway.sharding-ring.replication-factor`. The blocks replication is used to protect from query failures caused by some blocks not loaded by any store-gateway instance at a given time like, for example, in the event of a store-gateway failure or while restarting a store-gateway instance (e.g. during a rolling update).
+Blocks are replicated across multiple store-gateway instances based on a replication factor configured via `-store-gateway.sharding-ring.replication-factor`. The blocks replication protects against query failures caused by some blocks not being loaded by any store-gateway instance at a given time, for example in the event of a store-gateway failure or while restarting a store-gateway instance (e.g. during a rolling update).
 
-This feature can be enabled via `-store-gateway.sharding-enabled=true` and requires the backend [hash ring](../architecture.md#the-hash-ring) to be configured via `-store-gateway.sharding-ring.*` flags (or their respective YAML config options).
+This feature requires the backend [hash ring](../architecture.md#the-hash-ring) to be configured via `-store-gateway.sharding-ring.*` flags (or their respective YAML config options).
 
 ### Sharding strategies
 
diff --git a/docs/sources/architecture/store-gateway.template b/docs/sources/architecture/store-gateway.template
index c7167f29cba..2205a1ab1e9 100644
--- a/docs/sources/architecture/store-gateway.template
+++ b/docs/sources/architecture/store-gateway.template
@@ -49,9 +49,9 @@ Store-gateways continuously monitor the ring state and whenever the ring topolog
 
 For each block belonging to a store-gateway shard, the store-gateway loads its `meta.json`, the `deletion-mark.json` and the index-header. Once a block is loaded on the store-gateway, it's ready to be queried by queriers. When the querier queries blocks through a store-gateway, the response will contain the list of actually queried block IDs. If a querier tries to query a block which has not been loaded by a store-gateway, the querier will either retry on a different store-gateway (if blocks replication is enabled) or fail the query.
 
-Blocks can be replicated across multiple store-gateway instances based on a replication factor configured via `-store-gateway.sharding-ring.replication-factor`. The blocks replication is used to protect from query failures caused by some blocks not loaded by any store-gateway instance at a given time like, for example, in the event of a store-gateway failure or while restarting a store-gateway instance (e.g. during a rolling update).
+Blocks are replicated across multiple store-gateway instances based on a replication factor configured via `-store-gateway.sharding-ring.replication-factor`. The blocks replication protects against query failures caused by some blocks not being loaded by any store-gateway instance at a given time, for example in the event of a store-gateway failure or while restarting a store-gateway instance (e.g. during a rolling update).
 
-This feature can be enabled via `-store-gateway.sharding-enabled=true` and requires the backend [hash ring](../architecture.md#the-hash-ring) to be configured via `-store-gateway.sharding-ring.*` flags (or their respective YAML config options).
+This feature requires the backend [hash ring](../architecture.md#the-hash-ring) to be configured via `-store-gateway.sharding-ring.*` flags (or their respective YAML config options).
 
 ### Sharding strategies
 
diff --git a/docs/sources/guides/sharded_ruler.md b/docs/sources/guides/sharded_ruler.md
index dadc659165f..f16e61479a3 100644
--- a/docs/sources/guides/sharded_ruler.md
+++ b/docs/sources/guides/sharded_ruler.md
@@ -11,21 +11,15 @@ One option to scale the ruler is by scaling it horizontally. However, with multi
 
 ## Config
 
-In order to enable sharding in the ruler the following flag needs to be set:
-
-```
-  -ruler.enable-sharding=true
-```
-
-In addition the ruler requires it's own ring to be configured, for instance:
+To enable sharding of rule groups between rulers, they must be configured with the ring backend, for instance:
 
 ```
   -ruler.ring.consul.hostname=consul.dev.svc.cluster.local:8500
 ```
 
-The only configuration that is required is to enable sharding and configure a key value store. From there the rulers will shard and handle the division of rules automatically.
+The only required configuration is a key-value store. From there the rulers will shard and handle the division of rules automatically.
 
-Unlike ingesters, rulers do not hand over responsibility: all rules are re-sharded randomly every time a ruler is added to or removed from the ring.
+All rules are re-sharded randomly every time a ruler is added to or removed from the ring.
 
 ## Ruler Storage
 

From e14454ce36bbbc9173a095f333a643552929d98e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Peter=20=C5=A0tibran=C3=BD?=
Date: Wed, 9 Feb 2022 09:52:53 +0100
Subject: [PATCH 2/2] Don't make it sound like ruler sharding is optional.
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Signed-off-by: Peter Štibraný
---
 docs/sources/guides/sharded_ruler.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/sources/guides/sharded_ruler.md b/docs/sources/guides/sharded_ruler.md
index f16e61479a3..102ffb7fd4f 100644
--- a/docs/sources/guides/sharded_ruler.md
+++ b/docs/sources/guides/sharded_ruler.md
@@ -11,7 +11,7 @@ One option to scale the ruler is by scaling it horizontally. However, with multi
 
 ## Config
 
-To enable sharding of rule groups between rulers, they must be configured with the ring backend, for instance:
+To make sharding of rule groups between rulers work, the ruler requires the ring backend to be configured, for example:
 
 ```
   -ruler.ring.consul.hostname=consul.dev.svc.cluster.local:8500