enhance: wal adaptor implementation #34122
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: chyezh
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from ff488e3 to b660a25
Force-pushed from 7a1a506 to 5932f22
Codecov Report
Attention: Patch coverage is
Additional details and impacted files:

@@ Coverage Diff @@
##           master   #34122      +/- ##
==========================================
+ Coverage   80.58%   80.85%   +0.27%
==========================================
  Files        1095     1114      +19
  Lines      137929   138464     +535
==========================================
+ Hits       111145   111958     +813
+ Misses      22573    22254     -319
- Partials     4211     4252      +41
Force-pushed from 5932f22 to f8de92c
Force-pushed from f8de92c to 3b16b36
// AllocateScannerName allocates a scanner name for a scanner.
// The scanner name should be persisted in meta for garbage cleanup.
func (m *scannerRegistry) AllocateScannerName() (string, error) {
If some walimpls cannot create a stateless consumer, we need to implement a garbage collector to collect the consumers.
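A minimal sketch of the garbage collector suggested here, assuming a hypothetical metaStore abstraction for the persisted scanner names and a liveness callback supplied by the registry; the real adaptor code may wire this up differently:

```go
package adaptor

import (
	"context"
	"time"
)

// metaStore is an assumed view of the meta storage that holds allocated
// scanner names (hypothetical, for illustration only).
type metaStore interface {
	ListScannerNames(ctx context.Context) ([]string, error)
	DropScannerName(ctx context.Context, name string) error
}

// scannerGC periodically drops scanner names whose scanners are gone, so that
// consumers created by walimpls that cannot be stateless are eventually released.
type scannerGC struct {
	meta     metaStore
	isAlive  func(name string) bool // reports whether a scanner is still registered
	interval time.Duration
}

func (gc *scannerGC) run(ctx context.Context) {
	ticker := time.NewTicker(gc.interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			names, err := gc.meta.ListScannerNames(ctx)
			if err != nil {
				continue // retry on the next tick
			}
			for _, name := range names {
				if !gc.isAlive(name) {
					// Best effort: a failed drop is retried on the next tick.
					_ = gc.meta.DropScannerName(ctx, name)
				}
			}
		}
	}
}
```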
@chyezh E2e jenkins job failed, comment
/run-cpu-e2e
rerun ut
Force-pushed from 3b16b36 to 5b7025b
@chyezh E2e jenkins job failed, comment
/run-cpu-e2e
(similar failure notifications and /run-cpu-e2e retriggers repeated several more times)
- add adaptor to implement walimpls into wal interface.
- add test for wal.

Signed-off-by: chyezh <[email protected]>
Force-pushed from 8e65ada to beeeff8
@chyezh E2e jenkins job failed, comment
/run-cpu-e2e
return scanner.Chan(), nil
}
// TODO: configurable pending count.
if s.pendingQueue.Len()+s.reorderBuffer.Len() > 1024 {
Here, if a timetick interval produces more than 1024 messages, the scanner gets stuck. This should be changed to another strategy.
}
// blockUntilSyncTimeTickReady blocks until the first time tick message is sent.
func (impl *timeTickAppendInterceptor) blockUntilSyncTimeTickReady(underlyingWALImpls walimpls.WALImpls) error {
Move the sending of the first timetick into a single, separate function.
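A minimal sketch of that refactor, with the wait reduced to a channel so the example stays self-contained; the firstTickDone field and the context-based signature are assumptions, since the real method receives the underlying walimpls.WALImpls:

```go
package timetick

import (
	"context"
	"fmt"
)

type timeTickAppendInterceptor struct {
	// firstTickDone is closed by the background sync loop once the first
	// time tick message has been sent (hypothetical field).
	firstTickDone chan struct{}
}

// blockUntilSyncTimeTickReady blocks until the first time tick message is
// sent, or returns early if the caller's context is canceled.
func (impl *timeTickAppendInterceptor) blockUntilSyncTimeTickReady(ctx context.Context) error {
	select {
	case <-impl.firstTickDone:
		return nil
	case <-ctx.Done():
		return fmt.Errorf("waiting for first time tick: %w", ctx.Err())
	}
}
```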
return scanner.Chan(), nil
}
// TODO: configurable pending count.
if s.pendingQueue.Len()+s.reorderBuffer.Len() > 1024 {
We shouldn't restrict the pending queue length, as that might block receiving all messages within the timetick window. Two key points for message packing:
- Use a pending queue of unbounded length to accommodate all messages within a timetick window.
- Activate the back-pressure mechanism via sendingCh if downstream consumption is too slow.
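A rough sketch of that packing strategy, using hypothetical message and scanner types: the pending queue stays unbounded so a whole timetick window always fits, and back pressure comes only from the bounded sendingCh when the downstream consumer lags.

```go
package adaptor

import "container/list"

type message struct {
	timeTick uint64
	payload  []byte
}

type scanner struct {
	pendingQueue *list.List    // unbounded buffer for messages inside the current window
	sendingCh    chan *message // bounded channel; blocking here is the back pressure
}

// flushConfirmed sends every pending message whose time tick is confirmed.
// The send blocks when the consumer lags, which slows the scanner down
// instead of dropping or rejecting messages.
func (s *scanner) flushConfirmed(confirmed uint64) {
	for e := s.pendingQueue.Front(); e != nil; {
		msg := e.Value.(*message)
		if msg.timeTick > confirmed {
			break
		}
		next := e.Next()
		s.pendingQueue.Remove(e)
		s.sendingCh <- msg // back pressure point
		e = next
	}
}
```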
All refinements or fixes will be done in the next PR.
/lgtm
issue: #33285