logictest: TestPlannerLogic/5node-dist/distsql_interleaved_join created unexpected plan #31068
I think it's unrelated, but might as well add it here: there's a data race that is probably generic in the logictests, but which I triggered (on master) after five minutes of stressing:
I think @arjunravinarayan or someone saw this (the data race) before, but couldn't make sense of it given where it came from.
Seems like a pretty obvious race to me:
Should probably do that kind of thing in an init function to avoid racing the test harness (or anything else that might be doing work on other goroutines).
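For what it's worth, here is a minimal sketch of the pattern being suggested, with made-up names (testConfigs, ensureConfigs) rather than the actual logictest code: do the one-time package-level setup in an init function, or guard it with sync.Once, so nothing mutates shared state once the harness already has goroutines running.

```go
package logictest

import "sync"

// Hypothetical example (not the actual logictest code): testConfigs stands in
// for whatever package-level state was being mutated lazily and racing the
// test harness.
var testConfigs map[string]string

// Option 1: do the one-time setup in init, before any test goroutines exist.
func init() {
	testConfigs = map[string]string{"5node-dist": "distsql"}
}

// Option 2: if the setup can't run at init time, guard it with sync.Once so
// concurrent callers can't race on the shared map.
var setupOnce sync.Once

func ensureConfigs() {
	setupOnce.Do(func() {
		testConfigs["5node-dist-opt"] = "distsql, optimizer on"
	})
}
```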
@tschottdorf Here is the other issue about this:
Thanks @petermattis. Back to the original test failure, thankfully it fails on master too:
This will be a nice test of the automated bisection script I have lying around. Bisecting: 173 revisions left to test after this (roughly 8 steps).
Thanks for the investigation so far. I wonder if there's a race in the way that the data moves between nodes in the setup, or maybe the cache that is supposed to keep track of range locations is broken. @RaduBerinde or @andreimatei, do you have time to look into this? I feel like there was some related flakiness in the range cache integration test, but I'm failing to turn it up.
Yes, it could very well be related to that. What I don't understand is why your PR tickles it so easily.
I'm bisecting now.
And the winner is 4b28b0b, "storage: Avoid adding all replicas at once in RelocateRange" (I'm verifying, but it seems plausible).
Verified. I'm going to make sure that I don't merge my PR if it makes the race much more likely, but will otherwise leave the issue in your hands, @jordanlewis, if that's OK.
Alright, so the tl;dr is that RELOCATE_RANGE isn't synchronous anymore, right?
Hmm, no, that wasn't my takeaway. I think the implementation has just changed to be smarter about getting the replica set where it needs to be, with fewer spurious copies created. Perhaps @a-robinson has an idea what the difference might be. Btw, this test is a lot more flaky on my branch. I guess that makes it my problem 🙈
Ugh, I'm sorry for not having caught this one in the first place. 4b28b0b made a few other changes, too. Also, I can't tell what the difference is in #31068 (comment); they look logically equivalent?
I was also confused by that.
Wait, how, or rather why, does that work?
The difference is that node i gets assigned joiner i. In the failing test, the numbers are sometimes mixed up. These tests are brittle; I'm probably introducing just enough nondeterminism to sometimes generate an ever so slightly different (but basically equivalent) plan. Sigh.
@a-robinson this is the segment that fails: https://github.com/cockroachdb/cockroach/blob/master/pkg/sql/logictest/testdata/planner_test/distsql_interleaved_join#L366-L387. Is that an easy fix with some magic query? Looks like we already "verify placement" before.
I've been trying to get a repro in order to be able to test out changes like that, but (a) I haven't been able to repro the failure yet, and (b) we do already verify the placement of outer_p1 right after the relocate range. I'm not sure how that query could fail but then the data/leases would be in the wrong places in the next query. Perhaps we should also verify outer_c1, but it isn't having RELOCATE run on it. I haven't spent much time working with the planner tests, but it seems like you'd always want to verify the placements of all tables in order to rule out whether that's the cause of plan differences.
I don't know. I initially put them in just to better understand what was wrong about the data/lease placement when things went wrong, but after putting them in, the tests stopped failing. The tests were deterministically failing back then, though, not flaking.
They're logically equivalent, but the test asserts that the plans are identical. This shouldn't be a problem as long as the placement is exactly what the tests request. |
I'm experimenting on top of #31013, and as far as I can tell from enabling verbose logging, the replicas/leases are indeed being put in the right places and are not moving afterwards. This makes me suspect that the issue here is either bad caching behavior somewhere in the kv/distsql stack, or some sort of bad behavior around new replicas not being initialized fully/properly and thus not being able to respond to a real request by the time the failing query runs (or some combination of the two). The latter would especially make sense given the nature of #31013.
Yeah, could someone familiar with the planner track down the info that's being used in constructing the failing plans? The ranges all appear to be in the right places. With #31013, the following command fails almost immediately:
FWIW, as I think you've realized, what matters for the purposes of planning is not where ranges are, but where the range cache on the gateway thinks they are. #31013 does claim to affect that caching by removing some eviction in some cases, if I understand correctly.
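To make that distinction concrete, here is a toy sketch (invented types, not the real range cache or DistSQL planner) of how planning off a stale cached leaseholder produces a different plan even though the actual placement is correct:

```go
package main

import "fmt"

// Toy illustration: the planner places work according to the gateway's
// *cached* view of leaseholders, so a stale cache can yield a different plan
// even when replicas and leases are exactly where the test put them.
type leaseholderCache map[string]int // span -> cached leaseholder node ID

// planHome returns the node a processor for the span would be scheduled on.
func planHome(cache leaseholderCache, span string, gateway int) int {
	if node, ok := cache[span]; ok {
		return node // possibly stale; nothing re-checks the real placement
	}
	return gateway // no cache entry: fall back to planning on the gateway
}

func main() {
	cache := leaseholderCache{"outer_p1": 2} // stale: the lease actually moved elsewhere
	fmt.Println(planHome(cache, "outer_p1", 1)) // prints 2, so the plan differs from the expected one
}
```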
The eviction isn't relevant (I disabled it and the failure remains). What matters is that we don't try the next replica on RangeNotFound. Not quite sure why, but I should be able to find out.
A theory I have is that by trying the next replica, we discover the leader "organically" but don't populate the cache, whereas previously we would've tried that first replica for a while and eventually it'd give us a NotLeaseHolderError.
Yeah, that seems to be it. I have a change that populates the leaseholder cache when a successful RPC comes back, and that fixes it. I'll see whether it's easy enough to get tests passing for; it's definitely a change I've wanted to make for a while.
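Roughly, the shape of that change, sketched with invented types rather than the real DistSender and leaseholder-cache code: when a request that can only succeed on the leaseholder returns successfully from some replica, record that replica in the cache so subsequent planning and routing use it.

```go
package main

import "fmt"

// Sketch only, not the actual implementation.
type leaseCache map[int64]int // rangeID -> node ID of the last known leaseholder

// sendToReplicas tries replicas in order; on success it now also populates the
// cache with the replica that answered, instead of leaving the cache empty
// after the leaseholder was discovered "organically".
func sendToReplicas(c leaseCache, rangeID int64, replicas []int, send func(node int) error) error {
	var lastErr error
	for _, node := range replicas {
		if err := send(node); err != nil {
			lastErr = err
			continue // e.g. RangeNotFound: fall through to the next replica
		}
		c[rangeID] = node // the new part: remember who actually served the request
		return nil
	}
	return lastErr
}

func main() {
	c := leaseCache{}
	_ = sendToReplicas(c, 42, []int{1, 2, 3}, func(node int) error {
		if node != 3 {
			return fmt.Errorf("range not found on node %d", node)
		}
		return nil // node 3 happens to hold the lease
	})
	fmt.Println(c[42]) // 3: the next plan will route to the right node
}
```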
Sigh, now another test fails under stress; expected vs. got:
I added one more thing, and now this works. #magic
I've been wanting to do that for a while too. Nice work! |
Whenever a successful response is received from an RPC that we know has to contact the leaseholder to succeed, update the leaseholder cache.

The immediate motivation for this is to be able to land the preceding commits, which greatly exacerbated (as in, added a much faster failure mode to)

```
make stress PKG=./pkg/sql/logictest TESTS=TestPlannerLogic/5node-dist/distsql_interleaved_join
```

However, the change is one we've wanted to make for a while; our caching, and in particular the eviction of leaseholders, has been deficient essentially ever since it was first introduced.

Touches cockroachdb#31068.

Release note: None
@tschottdorf I'm going to close this, as the immediate problem is fixed (right?)
Yup, thanks!
Seen on a PR which I don't think should have that effect: https://teamcity.cockroachdb.com/viewLog.html?buildId=950803&buildTypeId=Cockroach_UnitTests_Test&tab=buildResultsDiv
(There's a chance that it is, in some obscure way, caused by it, but I don't think so and it hasn't repro'ed in several minutes of stressrace on my gceworker)
Basically the plan looks as if there were only four nodes.
Have:
Want:
@jordanlewis for routing.