Add support for batch operations.
For example, we need to update 100k records, each with a different key. At the moment there is only one option, done entirely by hand:

- split the data by replicaset, and then by bucket_id;
- manually call each replicaset with its slice of the data;
- on each replicaset, update the individual buckets, not forgetting to call 'vshard.storage.bucket_refrw' / 'bucket_unrefrw';
- take care of the case when a bucket is moved while the operation is in progress (see the sketch below).
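A rough sketch of that manual flow, assuming a hypothetical storage-side function 'storage_write_batch' and a 'test' space (both are illustrative, not part of vshard):

```lua
local vshard = require('vshard')

-- Router side: group records by replicaset, then send each slice.
local function batch_write_manual(data)
    local by_replicaset = {}
    for key, record in pairs(data) do
        local bucket_id = vshard.router.bucket_id_strcrc32(key)
        local rs, err = vshard.router.route(bucket_id)
        if rs == nil then
            return nil, err
        end
        by_replicaset[rs] = by_replicaset[rs] or {}
        table.insert(by_replicaset[rs], {bucket_id, record})
    end
    for rs, batch in pairs(by_replicaset) do
        local ok, err = rs:callrw('storage_write_batch', {batch})
        if ok == nil then
            -- A bucket may have moved mid-operation; the caller has
            -- to recompute the routes and retry the failed slice.
            return nil, err
        end
    end
    return true
end

-- Storage side: pin each bucket for the duration of the write.
function storage_write_batch(batch)
    for _, pair in ipairs(batch) do
        local bucket_id, record = pair[1], pair[2]
        local ok, err = vshard.storage.bucket_refrw(bucket_id)
        if ok == nil then
            return nil, err -- the bucket was moved away
        end
        box.space.test:replace(record)
        vshard.storage.bucket_unrefrw(bucket_id)
    end
    return true
end
```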
It would be reasonable to have that functionality in vshard itself. A call like this:
vshard.batch_write(data, storage_write_func_name)
where 'data' is a map of key -> record, and the specified storage function takes a single 'data' element as an argument and stores it;
and a similar call for reads:
vshard.batch_read(data, storage_read_func_name)
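Illustrative usage under this proposal ('customer_write' / 'customer_read' are hypothetical user-defined storage functions, and the exact return shape of batch_read is left open):

```lua
local data = {
    [1] = {1, 'Alice'},
    [2] = {2, 'Bob'},
    -- ... up to 100k key -> record pairs
}

-- vshard would group 'data' by bucket and replicaset internally,
-- take the bucket refs, retry slices whose buckets moved, and call
-- 'customer_write' once per element on the owning storage.
local ok, err = vshard.batch_write(data, 'customer_write')

-- Reads would mirror writes: the result could be a map of
-- key -> whatever 'customer_read' returns for that element.
local res, err = vshard.batch_read(data, 'customer_read')
```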