Remove mentions of obsolete sharding-enabled flags. (#1123)
* Remove mentions of obsolete sharding-enabled flags.

Signed-off-by: Peter Štibraný <[email protected]>

* Don't make it sound like ruler sharding is optional.

Signed-off-by: Peter Štibraný <[email protected]>
pstibrany authored Feb 9, 2022
1 parent 859fbb3 commit 9251587
Showing 5 changed files with 9 additions and 15 deletions.
2 changes: 1 addition & 1 deletion docs/sources/architecture/compactor.md
@@ -77,7 +77,7 @@ Whenever the pool of compactors increase or decrease (ie. following up a scale u

The compactor sharding is based on the Mimir [hash ring](../architecture.md#the-hash-ring). At startup, a compactor generates random tokens and registers itself to the ring. While running, it periodically scans the storage bucket at every interval defined by `-compactor.compaction-interval` to discover the list of tenants in the storage and compacts blocks for each tenant whose hash matches the token ranges that are assigned to the instance itself within the ring.

-This feature can be enabled via `-compactor.sharding-enabled=true` and requires the backend [hash ring](../architecture.md#the-hash-ring) to be configured via `-compactor.ring.*` flags (or their respective YAML config options).
+This feature requires the backend [hash ring](../architecture.md#the-hash-ring) to be configured via `-compactor.ring.*` flags (or their respective YAML config options).

### Waiting for stable ring at startup

2 changes: 1 addition & 1 deletion docs/sources/architecture/compactor.template
@@ -77,7 +77,7 @@ Whenever the pool of compactors increase or decrease (ie. following up a scale u

The compactor sharding is based on the Mimir [hash ring](../architecture.md#the-hash-ring). At startup, a compactor generates random tokens and registers itself to the ring. While running, it periodically scans the storage bucket at every interval defined by `-compactor.compaction-interval` to discover the list of tenants in the storage and compacts blocks for each tenant whose hash matches the token ranges that are assigned to the instance itself within the ring.

-This feature can be enabled via `-compactor.sharding-enabled=true` and requires the backend [hash ring](../architecture.md#the-hash-ring) to be configured via `-compactor.ring.*` flags (or their respective YAML config options).
+This feature requires the backend [hash ring](../architecture.md#the-hash-ring) to be configured via `-compactor.ring.*` flags (or their respective YAML config options).

### Waiting for stable ring at startup

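Note: the compactor paragraphs changed above (in both the .md and .template copies) say the ring is configured via `-compactor.ring.*` flags or their YAML equivalents. A minimal sketch of what that might look like follows; beyond the `-compactor.ring.*` prefix quoted in the docs, the specific option names (`store`, `consul.hostname`) and the Consul address are assumptions modeled on the ruler example later in this commit, not values confirmed by this diff.

```
# Hypothetical compactor ring setup — option names below the prefix are assumed, not taken from the diff.
-compactor.ring.store=consul
-compactor.ring.consul.hostname=consul.dev.svc.cluster.local:8500
```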
4 changes: 2 additions & 2 deletions docs/sources/architecture/store-gateway.md
@@ -49,9 +49,9 @@ Store-gateways continuously monitor the ring state and whenever the ring topolog

For each block belonging to a store-gateway shard, the store-gateway loads its `meta.json`, the `deletion-mark.json` and the index-header. Once a block is loaded on the store-gateway, it's ready to be queried by queriers. When the querier queries blocks through a store-gateway, the response will contain the list of actually queried block IDs. If a querier tries to query a block which has not been loaded by a store-gateway, the querier will either retry on a different store-gateway (if blocks replication is enabled) or fail the query.

-Blocks can be replicated across multiple store-gateway instances based on a replication factor configured via `-store-gateway.sharding-ring.replication-factor`. The blocks replication is used to protect from query failures caused by some blocks not loaded by any store-gateway instance at a given time like, for example, in the event of a store-gateway failure or while restarting a store-gateway instance (e.g. during a rolling update).
+Blocks are replicated across multiple store-gateway instances based on a replication factor configured via `-store-gateway.sharding-ring.replication-factor`. The blocks replication is used to protect from query failures caused by some blocks not loaded by any store-gateway instance at a given time like, for example, in the event of a store-gateway failure or while restarting a store-gateway instance (e.g. during a rolling update).

-This feature can be enabled via `-store-gateway.sharding-enabled=true` and requires the backend [hash ring](../architecture.md#the-hash-ring) to be configured via `-store-gateway.sharding-ring.*` flags (or their respective YAML config options).
+This feature requires the backend [hash ring](../architecture.md#the-hash-ring) to be configured via `-store-gateway.sharding-ring.*` flags (or their respective YAML config options).

### Sharding strategies

4 changes: 2 additions & 2 deletions docs/sources/architecture/store-gateway.template
@@ -49,9 +49,9 @@ Store-gateways continuously monitor the ring state and whenever the ring topolog

For each block belonging to a store-gateway shard, the store-gateway loads its `meta.json`, the `deletion-mark.json` and the index-header. Once a block is loaded on the store-gateway, it's ready to be queried by queriers. When the querier queries blocks through a store-gateway, the response will contain the list of actually queried block IDs. If a querier tries to query a block which has not been loaded by a store-gateway, the querier will either retry on a different store-gateway (if blocks replication is enabled) or fail the query.

-Blocks can be replicated across multiple store-gateway instances based on a replication factor configured via `-store-gateway.sharding-ring.replication-factor`. The blocks replication is used to protect from query failures caused by some blocks not loaded by any store-gateway instance at a given time like, for example, in the event of a store-gateway failure or while restarting a store-gateway instance (e.g. during a rolling update).
+Blocks are replicated across multiple store-gateway instances based on a replication factor configured via `-store-gateway.sharding-ring.replication-factor`. The blocks replication is used to protect from query failures caused by some blocks not loaded by any store-gateway instance at a given time like, for example, in the event of a store-gateway failure or while restarting a store-gateway instance (e.g. during a rolling update).

-This feature can be enabled via `-store-gateway.sharding-enabled=true` and requires the backend [hash ring](../architecture.md#the-hash-ring) to be configured via `-store-gateway.sharding-ring.*` flags (or their respective YAML config options).
+This feature requires the backend [hash ring](../architecture.md#the-hash-ring) to be configured via `-store-gateway.sharding-ring.*` flags (or their respective YAML config options).

### Sharding strategies

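Note: the store-gateway paragraphs changed above point to the `-store-gateway.sharding-ring.*` flags, including `-store-gateway.sharding-ring.replication-factor`. A sketch of such a configuration follows; only the replication-factor flag appears in the docs text, while the remaining option names and the Consul address are assumptions.

```
# Hypothetical store-gateway ring setup — only replication-factor is cited in the docs above.
-store-gateway.sharding-ring.store=consul
-store-gateway.sharding-ring.consul.hostname=consul.dev.svc.cluster.local:8500
-store-gateway.sharding-ring.replication-factor=3
```

With a replication factor greater than one, a block stays queryable while one of the store-gateways holding it fails or restarts, which is the scenario the docs paragraph describes.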
12 changes: 3 additions & 9 deletions docs/sources/guides/sharded_ruler.md
@@ -11,21 +11,15 @@ One option to scale the ruler is by scaling it horizontally. However, with multi

## Config

-In order to enable sharding in the ruler the following flag needs to be set:
-
-```
--ruler.enable-sharding=true
-```
-
-In addition the ruler requires it's own ring to be configured, for instance:
+To make sharding of rule groups between rulers work, ruler requires the ring backend to be configured, for example:

```
-ruler.ring.consul.hostname=consul.dev.svc.cluster.local:8500
```

-The only configuration that is required is to enable sharding and configure a key value store. From there the rulers will shard and handle the division of rules automatically.
+The only configuration that is required is to configure a key value store. From there the rulers will shard and handle the division of rules automatically.

-Unlike ingesters, rulers do not hand over responsibility: all rules are re-sharded randomly every time a ruler is added to or removed from the ring.
+All rules are re-sharded randomly every time a ruler is added to or removed from the ring.

## Ruler Storage

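Note: with `-ruler.enable-sharding` removed from the guide, the only remaining ring setup is the key-value store. A sketch combining the Consul flag kept in the guide with an assumed `-ruler.ring.store` selector (the selector flag is not part of this diff):

```
# -ruler.ring.consul.hostname is taken from the guide above; -ruler.ring.store is an assumption.
-ruler.ring.store=consul
-ruler.ring.consul.hostname=consul.dev.svc.cluster.local:8500
```

Once the rulers can reach the same key-value store, they divide the rule groups among themselves automatically, and the assignment is reshuffled whenever a ruler joins or leaves the ring, as the updated guide states.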
