[prototype] kv: support committing txns in parallel with writes #35165

Closed
229 changes: 221 additions & 8 deletions c-deps/libroach/protos/roachpb/data.pb.cc

Large diffs are not rendered by default.

140 changes: 131 additions & 9 deletions c-deps/libroach/protos/roachpb/data.pb.h

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion docs/RFCS/20180324_parallel_commit.md
@@ -1,5 +1,5 @@
 - Feature Name: parallel commit
-- Status: draft
+- Status: in-progress
 - Start Date: 2018-03-24
 - Authors: Tobias Schottdorf, Nathan VanBenschoten
 - RFC PR: #24194
1 change: 1 addition & 0 deletions docs/generated/settings/settings.html
@@ -51,6 +51,7 @@
 <tr><td><code>kv.snapshot_recovery.max_rate</code></td><td>byte size</td><td><code>8.0 MiB</code></td><td>the rate limit (bytes/sec) to use for recovery snapshots</td></tr>
 <tr><td><code>kv.transaction.max_intents_bytes</code></td><td>integer</td><td><code>262144</code></td><td>maximum number of bytes used to track write intents in transactions</td></tr>
 <tr><td><code>kv.transaction.max_refresh_spans_bytes</code></td><td>integer</td><td><code>256000</code></td><td>maximum number of bytes used to track refresh spans in serializable transactions</td></tr>
+<tr><td><code>kv.transaction.parallel_commits</code></td><td>boolean</td><td><code>true</code></td><td>if enabled, transactional commits will be parallelized with transactional writes</td></tr>
 <tr><td><code>kv.transaction.write_pipelining_enabled</code></td><td>boolean</td><td><code>true</code></td><td>if enabled, transactional writes are pipelined through Raft consensus</td></tr>
 <tr><td><code>kv.transaction.write_pipelining_max_batch_size</code></td><td>integer</td><td><code>128</code></td><td>if non-zero, defines that maximum size batch that will be pipelined through Raft consensus</td></tr>
 <tr><td><code>kv.transaction.write_pipelining_max_outstanding_size</code></td><td>byte size</td><td><code>256 KiB</code></td><td>maximum number of bytes used to track in-flight pipelined writes before disabling pipelining</td></tr>
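The kv.transaction.parallel_commits row added above is the runtime switch for this prototype, sitting alongside the existing write-pipelining settings. As a rough sketch of how such a boolean cluster setting is typically registered and consulted in CockroachDB (the variable name, file placement, and call site below are assumptions for illustration, not code shown in this excerpt):

// Hypothetical registration mirroring the settings table entry above.
// The settings package and RegisterBoolSetting exist in the repo; the
// package placement and variable name here are illustrative only.
package kv

import "github.com/cockroachdb/cockroach/pkg/settings"

var parallelCommitsEnabled = settings.RegisterBoolSetting(
	"kv.transaction.parallel_commits",
	"if enabled, transactional commits will be parallelized with transactional writes",
	true,
)

// At the decision point, a caller holding the cluster settings
// (st *cluster.Settings) would read the current value, e.g.:
//
//	if parallelCommitsEnabled.Get(&st.SV) {
//		// issue the commit in parallel with the final writes
//	}

The default of true matches the table entry, so the behavior is on unless an operator disables the setting.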
22 changes: 21 additions & 1 deletion pkg/kv/batch.go
@@ -22,6 +22,12 @@ import (
 
 var emptySpan = roachpb.Span{}
 
+const (
+	filterNone int = iota
+	filterEndTxn
+	filterNotEndTxn
+)
+
 // truncate restricts all contained requests to the given key range and returns
 // a new, truncated, BatchRequest. All requests contained in that batch are
 // "truncated" to the given span, and requests which are found to not overlap
@@ -32,9 +38,23 @@ var emptySpan = roachpb.Span{}
 // rs = [a,bb],
 //
 // then truncate(ba,rs) returns a batch (Put[a], Put[b]) and positions [0,2].
-func truncate(ba roachpb.BatchRequest, rs roachpb.RSpan) (roachpb.BatchRequest, []int, error) {
+func truncate(
+	ba roachpb.BatchRequest, rs roachpb.RSpan, filter int,
+) (roachpb.BatchRequest, []int, error) {
 	truncateOne := func(args roachpb.Request) (bool, roachpb.Span, error) {
 		header := args.Header().Span()
+		switch filter {
+		case filterNone:
+		case filterEndTxn:
+			if args.Method() != roachpb.EndTransaction {
+				return false, emptySpan, nil
+			}
+		case filterNotEndTxn:
+			if args.Method() == roachpb.EndTransaction {
+				return false, emptySpan, nil
+			}
+		}
+
 		if !roachpb.IsRange(args) {
 			// This is a point request.
 			if len(header.EndKey) > 0 {
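The new filter argument is what lets a caller carve a batch into its EndTransaction request and everything else, which is the mechanical basis for dispatching the commit in parallel with the final batch of writes. A minimal sketch of how a caller in pkg/kv (presumably the sender-side changes not shown in this excerpt) might use it; the helper name below is hypothetical:

// splitForParallelCommit is a hypothetical helper illustrating the new
// filter parameter: it derives one partial batch holding everything but
// the EndTransaction request and one holding only the EndTransaction
// request, so the two can be sent concurrently. The position slices
// returned by truncate, which map each retained request back to its
// index in the original batch (as in the [0,2] example above), are
// discarded here for brevity; a real caller would keep them to put the
// responses back in their original order.
func splitForParallelCommit(
	ba roachpb.BatchRequest, rs roachpb.RSpan,
) (writes, commit roachpb.BatchRequest, err error) {
	writes, _, err = truncate(ba, rs, filterNotEndTxn)
	if err != nil {
		return roachpb.BatchRequest{}, roachpb.BatchRequest{}, err
	}
	commit, _, err = truncate(ba, rs, filterEndTxn)
	if err != nil {
		return roachpb.BatchRequest{}, roachpb.BatchRequest{}, err
	}
	return writes, commit, nil
}

Passing filterNone leaves the switch a no-op, so existing callers of truncate keep their previous behavior once they are updated to supply the extra argument.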