
Transparent* database sharding #46639

Merged (18 commits) on Aug 28, 2024

Conversation

@icewind1991 (Member) commented Jul 19, 2024

This implements (mostly) transparent database sharding through a custom query builder implementation. The query builder modifies queries as they are built so that queries against sharded tables run on the correct shards, and it splits queries that join between sharded and non-sharded tables into separate parts.

The goal is that code using the database can (mostly) ignore sharding and "just work™". In practice it isn't quite transparent, but most queries should be easy to make sharding-compatible.

Partitioning

The first step is implemented in the PartitionedQueryBuilder. This query builder checks, for every table referenced in a query (from/update/join/etc.), whether it is configured to be sharded, and creates a separate "partition" for the sharded and the non-sharded tables. Each partition has a "sub-query" that only references the tables from that partition. The various query parts (select/where/etc.) are directed to the sub-query for the table they reference.

When executing, the various sub-queries are executed and the results are merged based on the configured join conditions.
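As a rough illustration of this split-and-merge step, here is a simplified Python sketch (not the actual PHP implementation; the table names and helper functions are assumptions for illustration). It separates the referenced tables into a sharded and a non-sharded partition and merges the two sub-query result sets on the join condition, the way an inner hash join would:

```python
# Hypothetical sketch of the partition/merge step; the real logic lives in
# the PHP PartitionedQueryBuilder, these names are illustrative assumptions.

SHARDED_TABLES = {"filecache", "filecache_extended", "files_metadata"}

def partition_tables(tables):
    """Split the tables referenced by a query into the sharded and the
    non-sharded partition; each partition gets its own sub-query."""
    sharded = [t for t in tables if t in SHARDED_TABLES]
    unsharded = [t for t in tables if t not in SHARDED_TABLES]
    return sharded, unsharded

def merge_on_join(left_rows, right_rows, left_key, right_key):
    """Merge two sub-query result sets on the configured join condition,
    analogous to an inner hash join performed in application code."""
    index = {}
    for row in right_rows:
        index.setdefault(row[right_key], []).append(row)
    merged = []
    for row in left_rows:
        for match in index.get(row[left_key], []):
            merged.append({**row, **match})
    return merged
```

For example, rows from a sharded `filecache` sub-query and a non-sharded sub-query that both carry a `fileid` column would be combined by `merge_on_join(files, meta, "fileid", "fileid")`.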

See also the documentation of the PartitionedQueryBuilder class for some more info.

Sharding

Now that the part of the query that involves the sharded tables has been split off, the ShardedQueryBuilder no longer has to think about joins.

The ShardedQueryBuilder checks the configured sharding information for the tables it references and then looks in the query for the "shard key" (e.g. storage) or primary key (e.g. fileid) as the query is being built.

When executing, if the shard key is set in the query, the query is only run on the shards for those shard keys.
If we only know the primary keys for the query, we first try to guess the shard that each primary key is likely to reside in. We do this by encoding the shard into the primary key when the row is first inserted. Since most rows are never moved to another shard, this guess should have a fairly high success rate.
If we haven't found all the rows we are looking for on the likely shards, we loop over the remaining shards until we have found all the rows for the primary keys being searched.
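The routing strategy above can be sketched roughly as follows. This is a Python illustration only; the function names, the modulo hash, and the "shard in the low bits of the key" encoding are assumptions, not the actual implementation:

```python
# Hypothetical sketch of shard routing: route by shard key when known,
# otherwise guess from the encoded primary key, then fall back to scanning.

NUM_SHARDS = 4

def shard_for_key(shard_key):
    """Route by shard key (e.g. `storage`): a stable hash picks the shard."""
    return shard_key % NUM_SHARDS

def encode_shard(next_id, shard):
    """Encode the shard into a new primary key at insert time, here by
    reserving the low bits of the key for the shard number."""
    return next_id * NUM_SHARDS + shard

def likely_shard(primary_key):
    """Guess the shard a row likely resides on by decoding its key."""
    return primary_key % NUM_SHARDS

def find_rows(primary_keys, query_shard):
    """Query the likely shard for each key first, then loop over the other
    shards until every requested primary key has been found."""
    remaining = set(primary_keys)
    found = {}
    by_shard = {}
    for pk in remaining:
        by_shard.setdefault(likely_shard(pk), set()).add(pk)
    for shard, keys in by_shard.items():
        for pk, row in query_shard(shard, keys):
            found[pk] = row
            remaining.discard(pk)
    # fall back: rows that were moved live on a different shard than
    # their key encodes, so scan the remaining shards for them
    for shard in range(NUM_SHARDS):
        if not remaining:
            break
        for pk, row in query_shard(shard, set(remaining)):
            found[pk] = row
            remaining.discard(pk)
    return found
```

The fallback loop only runs for rows that have actually moved since insertion, which is why the encoded guess keeps the common case down to a single shard query.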

See also the documentation of the ShardedQueryBuilder class for some more info.

Query limitations

For all the automatic query inspection and manipulation mentioned above to work, various limitations to database queries are imposed. See the documentation of InvalidPartitionedQueryException and InvalidShardedQueryException for a list of limitations.

This PR adjusts various queries to be compatible with those limitations. Where a query cannot be made compatible without introducing a significant performance regression for non-sharded setups, it adds an alternate "sharding compatible" implementation of the method instead.

Apps that touch the sharded tables will have to be tested for compatibility with sharding.

Configuration

Sharding can be configured by setting the dbsharding config.php option. For each pre-defined "sharding config" (currently just "filecache"), a list of database configurations can be provided.

Each pre-defined "sharding config" (see \OC\DB\Connection::SHARD_PRESETS) defines the list of tables that will be sharded, the database field used for the shard key, and the primary key.

The configured database configurations are merged with the existing database configuration (overwriting) to get the final database configuration for each shard.
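A minimal sketch of that merge, assuming a simple "per-shard settings override base settings" overlay (the names here are illustrative, not the actual implementation):

```python
# Hypothetical sketch: each shard's final database configuration is the
# base configuration with that shard's overrides applied on top.

def shard_config(base, override):
    """Per-shard settings win over the base database configuration."""
    return {**base, **override}

base = {"host": "db.example.com", "port": 5432, "dbname": "nextcloud"}
shards = [{"port": 5001}, {"port": 5002}, {"port": 5003}, {"port": 5004}]
configs = [shard_config(base, s) for s in shards]
```

Each resulting configuration keeps the shared host and database name from the base configuration but points at its own shard's port.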

Changing the sharding configuration of installed instances is not currently supported.

For example:

"dbsharding" => 
	"filecache" => [
		"shards" => [
			[
				"port" => 5001,
			],
			[
				"port" => 5002,
			],
			[
				"port" => 5003,
			],
			[
				"port" => 5004,
			],
		]
	]
]

This will shard the filecache table (along with filecache_extended and files_metadata) over 4 database servers running on ports 5001 through 5004.

@icewind1991 icewind1991 force-pushed the autosharding branch 3 times, most recently from bf33998 to af16d88 Compare July 19, 2024 17:31
@solracsf solracsf added the 2. developing Work in progress label Jul 20, 2024
@icewind1991 icewind1991 force-pushed the autosharding branch 3 times, most recently from 512351a to c0059ff Compare July 26, 2024 12:52
@icewind1991 icewind1991 force-pushed the autosharding branch 4 times, most recently from 455551c to 80eeaff Compare August 5, 2024 15:23
@icewind1991 icewind1991 force-pushed the autosharding branch 2 times, most recently from 5cdaa11 to 5a48b15 Compare August 8, 2024 14:16
@icewind1991 icewind1991 force-pushed the autosharding branch 4 times, most recently from c0094fa to 80aad73 Compare August 9, 2024 14:56
Signed-off-by: Robin Appelman <[email protected]>
@artonge (Contributor) commented Aug 28, 2024

Rebased. Conflicts were mostly due to the updated cast rule of the new coding standard.

@sorbaugh sorbaugh merged commit b4d7498 into master Aug 28, 2024
176 checks passed
@sorbaugh sorbaugh deleted the autosharding branch August 28, 2024 09:22
@sorbaugh (Contributor) commented
/backport to stable30
