rangefeed: Pushtxn in rangefeed returned abort, but txn may have been committed #104309
Hello, I am Blathers. I am here to help you get the issue triaged. It looks like you have not filled out the issue in the format of any of our templates. To best assist you, we advise you to use one of these templates. I was unable to automatically find someone to ping. If we have not gotten back to your issue within a few business days, you can try the following:
🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.
cc @cockroachdb/cdc
cc @cockroachdb/replication
I'm looking into this. The question seems to be whether we can have a pending intent in the past when we see a transaction tombstone for a committed transaction that we interpreted as an abort.
Is it possible that this is the case: the rangefeed is built on a non-majority replica, while the PushTxn in the rangefeed is sent at a fixed frequency, that is:
It looks like we just need a rangefeed running on a follower, and that follower should be lagging behind on raft application. In that case it could try to push already-committed transactions, because it compares the txn timestamps from the intents it observes against the wall clock. We have a protection in the rangefeed:
cockroach/pkg/kv/kvserver/rangefeed/resolved_timestamp.go
Lines 265 to 273 in 9c2f650
which would cause the node to panic if a timestamp regression is detected, to prevent any potential data inconsistency/loss on the client side. I can't find any evidence of panics generated by regressions like that, which suggests it happens sufficiently infrequently. To address it, we may need to distinguish between abort types when resolving intents and, in this particular case, keep the transaction and wait for the intents to get resolved.
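For readers unfamiliar with that protection, here is a minimal sketch, with illustrative names only (not the actual CockroachDB code), of the kind of assertion being referenced: any MVCC operation must carry a timestamp strictly above the current resolved timestamp, otherwise the node fatals rather than risk publishing an inconsistent checkpoint.

```go
package main

import (
	"fmt"
	"log"
)

// Timestamp is a simplified stand-in for hlc.Timestamp.
type Timestamp struct {
	WallTime int64
	Logical  int32
}

// LessEq reports whether t <= o.
func (t Timestamp) LessEq(o Timestamp) bool {
	return t.WallTime < o.WallTime ||
		(t.WallTime == o.WallTime && t.Logical <= o.Logical)
}

// resolvedTimestamp tracks the highest timestamp the rangefeed has
// promised clients is complete.
type resolvedTimestamp struct {
	resolvedTS Timestamp
}

// assertOpAboveRTS mirrors the idea of the check in
// pkg/kv/kvserver/rangefeed/resolved_timestamp.go: every MVCC write op
// must carry a timestamp strictly above the resolved timestamp;
// otherwise a previously published checkpoint was wrong, and the node
// terminates rather than silently lose data on the client side.
func (rts *resolvedTimestamp) assertOpAboveRTS(opTS Timestamp, opDesc string) {
	if opTS.LessEq(rts.resolvedTS) {
		log.Fatalf("resolved timestamp regression: %s at %+v is not above resolved ts %+v",
			opDesc, opTS, rts.resolvedTS)
	}
}

func main() {
	rts := resolvedTimestamp{resolvedTS: Timestamp{WallTime: 100}}
	rts.assertOpAboveRTS(Timestamp{WallTime: 150}, "MVCCWriteValueOp") // accepted
	fmt.Println("op accepted above resolved timestamp")
	// rts.assertOpAboveRTS(Timestamp{WallTime: 90}, "MVCCCommitIntentOp") // would fatal
}
```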
"We have a protection in rangefeed" Yes, but due to MVCCAbortTxnOp, txn was removed from the list, causing rts to advance, and due to 'enginepb'. MVCCCommitteIntentOp "is not protected by assertOpAboveRTS, so this data has been discarded in the following code. cockroach/pkg/ccl/changefeedccl/event_processing.go Lines 401 to 411 in 9c2f650
"To address it we may need to distinguish between abort types when resolving intents and in this particular case keep transaction and wait for intents to get resolved." |
I see what you mean. I didn't realize we skip intents when checking the timestamp assertion. Even if we did check them, intents could arrive after the checkpoint is out, and even if we panic, the client will not reread that data because of the published checkpoint timestamp. As for pts_cache, I don't think we need any extra info there. The problem seems to be that we know about this special case where a transaction tombstone is present, which should be a rare enough case, but we don't have a mechanism to surface this error to the rangefeed processor easily without changing the protocol. pts_cache can reside on any other node where the transaction record was created. Did you see this error actually happening?
From code review, I want to try one slow node and two fast nodes, which may trigger this issue, because from a code perspective the issue should exist. I will try it out some time.
cc @cockroachdb/replication |
117968: kvserver: add `Replica.WaitForLeaseAppliedIndex()` r=erikgrinaker a=erikgrinaker

Extracted from #117612.

This allows a caller to wait for a replica to reach a certain lease applied index. Similar functionality elsewhere is not migrated yet, out of caution.

Touches #104309.

Epic: none
Release note: None

Co-authored-by: Erik Grinaker <[email protected]>
117969: batcheval: add `PushTxnResponse.AmbiguousAbort` r=erikgrinaker a=erikgrinaker

Extracted from #117612.

This indicates to the caller that the `ABORTED` status of the pushed transaction is ambiguous, and the transaction may in fact have been committed and GCed already. This information is also plumbed through the `IntentResolver` txn push APIs.

Touches #104309.

Epic: none
Release note: None

117992: roachpb: address review comments from #117840 r=nvanbenschoten a=nvanbenschoten

I had missed a `git push` immediately before merging #117840. This updates two comments.

Epic: None
Release note: None

Co-authored-by: Erik Grinaker <[email protected]>
Co-authored-by: Nathan VanBenschoten <[email protected]>
117967: batcheval: add `BarrierRequest.WithLeaseAppliedIndex` r=erikgrinaker a=erikgrinaker

Extracted from #117612.

**batcheval: add `BarrierRequest.WithLeaseAppliedIndex`**

This can be used to detect whether a replica has applied the barrier command yet.

Touches #104309.

**kvnemsis: add support for `Barrier` operations**

This only executes random `Barrier` requests, but does not verify that the barrier guarantees are actually satisfied (i.e. that all past and concurrent writes are applied before it returns). At least we get some execution coverage, and verify that it does not have negative interactions with other operations.

Epic: none
Release note: None

Co-authored-by: Erik Grinaker <[email protected]>
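Taken together, these PRs suggest the shape of the fix (the actual integration lives in #117612). Below is a hedged sketch, with illustrative names and signatures that are not the real CockroachDB APIs, of how an ambiguous abort could be handled: instead of immediately dropping the transaction, the caller sends a barrier, waits for the local replica to reach the barrier's lease applied index, and only then trusts its view of the intents.

```go
package main

import (
	"context"
	"fmt"
)

// pushTxnResponse is a toy stand-in for the relevant PushTxn response
// fields; AmbiguousAbort corresponds to the field added in #117969.
type pushTxnResponse struct {
	Aborted        bool
	AmbiguousAbort bool // the txn record may have been committed and GCed
}

// replica is a hypothetical interface capturing the two primitives from
// #117967 and #117968: a barrier that reports its lease applied index,
// and a way to wait for the local replica to reach that index.
type replica interface {
	SendBarrier(ctx context.Context) (lai uint64, err error)
	WaitForLeaseAppliedIndex(ctx context.Context, lai uint64) error
}

// handlePushResult sketches the idea: a definite abort is acted on
// immediately, while an ambiguous abort first waits for the barrier's
// lease applied index, so that any intent resolution of a committed txn
// has been applied locally before the txn is dropped from tracking.
func handlePushResult(ctx context.Context, r replica, resp pushTxnResponse) error {
	if resp.Aborted && !resp.AmbiguousAbort {
		fmt.Println("txn definitely aborted: safe to stop tracking its intents")
		return nil
	}
	if resp.AmbiguousAbort {
		lai, err := r.SendBarrier(ctx)
		if err != nil {
			return err
		}
		if err := r.WaitForLeaseAppliedIndex(ctx, lai); err != nil {
			return err
		}
		fmt.Println("barrier applied: committed intents, if any, are now resolved locally")
	}
	return nil
}

// fakeReplica is a trivial in-memory implementation for demonstration.
type fakeReplica struct{ lai uint64 }

func (f *fakeReplica) SendBarrier(ctx context.Context) (uint64, error)              { return f.lai, nil }
func (f *fakeReplica) WaitForLeaseAppliedIndex(ctx context.Context, _ uint64) error { return nil }

func main() {
	r := &fakeReplica{lai: 42}
	_ = handlePushResult(context.Background(), r,
		pushTxnResponse{Aborted: true, AmbiguousAbort: true})
}
```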
I don't know if it's appropriate to post this here, but I'm really troubled by it.
The RangeFeed relies on an ABORTED result from PushTxns (task.go: pushOldTxns) to remove the tracked txn from the UnresolvedIntentQueue, which depends on the correctness of PushTxn.
cockroach/pkg/kv/kvserver/rangefeed/task.go
Lines 324 to 336 in 9c2f650
But in cmd_push_txn.go, the “case txnID” branch of [PushTxn -> SynthesizeTxnFromMeta -> CanCreateTxnRecord (replica_tscache.go)] may return a transaction that has already been COMMITTED but has status=ABORTED, which may cause the rts in the RangeFeed to advance incorrectly. Do I understand this correctly?
cockroach/pkg/kv/kvserver/replica_tscache.go
Lines 529 to 543 in 9c2f650
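A simplified sketch of the suspected failure path (illustrative names only, not the actual code): once the committed transaction's record has been GCed and only a timestamp-cache tombstone remains, the push cannot tell a committed-then-GCed record from an aborted one, so it synthesizes ABORTED and the rangefeed drops the txn from its unresolved intent queue.

```go
package main

import "fmt"

// txnStatus is a toy stand-in for the transaction status enum.
type txnStatus int

const (
	pending txnStatus = iota
	committed
	aborted
)

// synthesizeStatusFromTombstone mimics the ambiguity: when the txn
// record is gone and only a timestamp-cache tombstone remains, the
// pusher cannot tell whether the record was GCed after a commit or
// after an abort, so it reports ABORTED either way.
func synthesizeStatusFromTombstone(recordExists bool) txnStatus {
	if !recordExists {
		return aborted // possibly wrong: the txn may have committed and been GCed
	}
	return pending
}

func main() {
	status := synthesizeStatusFromTombstone(false /* recordExists */)
	if status == aborted {
		// On the rangefeed side, the txn would be removed from the
		// UnresolvedIntentQueue, letting the resolved timestamp advance
		// past intents that a lagging follower has not yet seen resolved.
		fmt.Println("txn treated as aborted; rts may advance incorrectly")
	}
}
```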
Jira issue: CRDB-28452
Epic CRDB-27235