kv: stop DistSender from double-covering RangeFeeds during splits #35466

Merged: 1 commit merged into cockroachdb:master from rangefeed_distsender on Mar 11, 2019

Conversation

@danhhz (Contributor) commented Mar 6, 2019

If a RangeFeed was running over `[/a,/d)` and it split at `/b`, then
we'd get an error from the server and the span would be kicked out to
`divideAndSendRangeFeedToRanges`. The `RangeIterator` would usually hand
out the post-split `[/a,/b)` descriptor for `/a`, then advance to `/b`
and first try the pre-split `[/a,/d)` descriptor. Each would get a new
RangeFeed. The one over `[/a,/d)` would immediately come back with an
error from the server that it didn't fit in the bounds of a range, evict
the `RangeDescriptorCache` token, and get kicked out again to
`divideAndSendRangeFeedToRanges`. This time, because of the eviction,
the `RangeIterator` would get the post-split `[/b,/d)` descriptor. The
end result was one RangeFeed over `[/a,/b)` and two over `[/b,/d)`. A
second split at `/c` meant we could double it again and end up with 4
over `[/c,/d)`. RangeFeed always has the potential for sending
duplicates, so changefeeds have to be resilient to this, and this was
not a correctness issue, but it's obviously bad.

The fix is simple: use the same 'nextRS' trick that
`divideAndSendBatchToRanges` does to keep track of the uncovered part of
the input `rs` span to `divideAndSendRangeFeedToRanges` and use that to
trim the descriptors that come back.
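
To illustrate the idea, here is a minimal Go sketch of the 'nextRS' bookkeeping. It is not the actual DistSender code: `Span`, `Descriptor`, `minKey`, and `divideAndSend` are made-up stand-ins, and it assumes descriptors come back in key order starting at the uncovered key, as the `RangeIterator` arranges. Each descriptor is clipped to the still-uncovered remainder of the input span before a partial feed is started, and the remainder then advances past the clipped span, so a stale pre-split descriptor can no longer re-cover keys that were already handled.

```go
// Minimal sketch of the nextRS idea; these types and helpers are
// illustrative stand-ins, not DistSender's real types.
package main

import "fmt"

type Span struct{ Key, EndKey string }

type Descriptor struct{ StartKey, EndKey string }

func minKey(a, b string) string {
	if a < b {
		return a
	}
	return b
}

// divideAndSend walks the descriptors covering rs and emits one partial
// feed per descriptor, clipped to the uncovered remainder nextRS.
func divideAndSend(rs Span, descs []Descriptor, send func(Span)) {
	nextRS := rs
	for _, desc := range descs {
		if nextRS.Key >= nextRS.EndKey {
			break // the whole input span is covered
		}
		// Clip the descriptor's span to what's still uncovered.
		partial := Span{Key: nextRS.Key, EndKey: minKey(desc.EndKey, nextRS.EndKey)}
		if partial.Key >= partial.EndKey {
			continue // stale descriptor adds no new coverage
		}
		send(partial)
		// Advance the uncovered remainder past the span just handled.
		nextRS.Key = partial.EndKey
	}
}

func main() {
	// Post-split [/a,/b) descriptor followed by a stale pre-split [/a,/d)
	// descriptor, as in the scenario described above.
	descs := []Descriptor{{"/a", "/b"}, {"/a", "/d"}}
	divideAndSend(Span{Key: "/a", EndKey: "/d"}, descs, func(s Span) {
		fmt.Printf("RangeFeed over [%s,%s)\n", s.Key, s.EndKey)
	})
	// Prints [/a,/b) then [/b,/d). Without the clipping, the stale
	// descriptor would spawn a feed over keys that were already covered.
}
```

With two descriptors handed out for the same keyspace, the clipping is what prevents the second one from producing the duplicate `[/b,/d)` feed described above.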

Found when splits were added to `TestChangefeedNemeses` while manually
investigating why RangeFeed was returning duplicates. No test yet since
one is coming in #32721.

Release note: None

@danhhz requested review from tbg, nvanbenschoten, and a team on March 6, 2019 17:21
@cockroach-teamcity (Member)

This change is Reviewable

@nvanbenschoten (Member) left a comment
:lgtm:

Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @nvanbenschoten and @tbg)

@danhhz force-pushed the rangefeed_distsender branch from 0c3e267 to e0f66ed on March 11, 2019 18:59
@danhhz (Contributor, Author) commented Mar 11, 2019

Flake looks like #35550. Thanks for the review!

bors r=nvanbenschoten

@craig (bot) commented Mar 11, 2019

Build failed (retrying...)

craig bot pushed a commit that referenced this pull request Mar 11, 2019
35466: kv: stop DistSender from double-covering RangeFeeds during splits r=nvanbenschoten a=danhhz

Co-authored-by: Daniel Harrison <[email protected]>
@craig (bot) commented Mar 11, 2019

Build succeeded

@craig (bot) merged commit e0f66ed into cockroachdb:master on Mar 11, 2019
@danhhz deleted the rangefeed_distsender branch on March 11, 2019 20:16