
changefeedccl: restart when memory limit changes #109167

Open
jayshrivastava wants to merge 1 commit into base: master

Conversation

jayshrivastava (Contributor)

Previously, changing changefeed.memory.per_changefeed_limit would require restarting a changefeed for the setting to take effect.

This change makes it so that when the changefeed coordinator detects a change in memory limits, it restarts all the aggregators using a retryable error.

Release note: None
Informs: #96953
Epic: None

Previously, changing `changefeed.memory.per_changefeed_limit` would require
restarting a changefeed for the setting to take effect.

This change makes it so that when the changefeed coordinator detects a change
in memory limits, it restarts all the aggregators using a retryable error.

Release note: None
Informs: cockroachdb#96953
Epic: None
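
As a minimal sketch of that mechanism (illustrative only: `checkMemLimitChange` is a hypothetical helper, and the use of `changefeedbase.MarkRetryableError` and `errors.Newf` here is an assumption, not necessarily how the PR implements it):

```go
// Sketch: the coordinator records the limit its memory monitor was created
// with and, during normal processing, compares it against the current value
// of the cluster setting. On a mismatch it returns an error marked as
// retryable, so the flow (coordinator plus aggregators) is torn down and
// re-planned with the new limit.
func (cf *changeFrontier) checkMemLimitChange(startedWith int64) error {
	cur := changefeedbase.PerChangefeedMemLimit.Get(&cf.flowCtx.Cfg.Settings.SV)
	if cur == startedWith {
		return nil
	}
	return changefeedbase.MarkRetryableError(errors.Newf(
		"per-changefeed memory limit changed from %d to %d; restarting", startedWith, cur))
}
```
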
@cockroach-teamcity (Member)

This change is Reviewable

@jayshrivastava jayshrivastava marked this pull request as ready for review August 21, 2023 18:36
@jayshrivastava jayshrivastava requested a review from a team as a code owner August 21, 2023 18:36
@miretskiy miretskiy left a comment (Contributor)

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @jayshrivastava)


pkg/ccl/changefeedccl/changefeed_processors.go line 1204 at r1 (raw file):

func (cf *changeFrontier) makeMemoryLimitWatcher() *atomic.Bool {
	var memoryLimitChanged atomic.Bool
	changefeedbase.PerChangefeedMemLimit.SetOnChange(&cf.flowCtx.Cfg.Settings.SV, func(ctx context.Context) {

we shouldn't use this capability to drive shutdown. Callbacks for the same setting are executed on a single (settings) goroutine; you definitely don't want to create as many callbacks as you have changefeeds, and worse yet, you can't clear out these callbacks if the changefeed is paused/completed.

Instead, this logic should be based on periodically checking the setting value during normal processor operations (Next() or tick()).
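
For illustration, a sketch of that polling approach (the `memLimitChanged` helper and `memLimitAtStart` field are hypothetical; only `PerChangefeedMemLimit` comes from the quoted diff):

```go
// Sketch: no callback registration, so there is nothing to unregister when
// the changefeed pauses or completes. Next()/tick() simply re-reads the
// setting and compares it with the value cached when the processor started.
func (cf *changeFrontier) memLimitChanged() bool {
	cur := changefeedbase.PerChangefeedMemLimit.Get(&cf.flowCtx.Cfg.Settings.SV)
	return cur != cf.memLimitAtStart // memLimitAtStart: hypothetical field recorded in Start()
}
```
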


pkg/ccl/changefeedccl/changefeed_processors.go line 1212 at r1 (raw file):

// Next is part of the RowSource interface.
func (cf *changeFrontier) Next() (rowenc.EncDatumRow, *execinfrapb.ProducerMetadata) {
	memLimitWatcher := cf.makeMemoryLimitWatcher()

This seems wrong; even though Next() has a for loop, in reality this loop is more of a historical remnant: it just runs through "one" state transition -- Next() -- and returns either the next row or metadata.
The distflow (process helper, I think) will call Next() again after processing the result of this method.


pkg/ccl/changefeedccl/changefeed_processors.go line 1281 at r1 (raw file):

		// We need to restart all aggregators if the memory limits change.
		if memLimitWatcher.Load() {

we should remember the limit that was used to create our bound account (or maybe there is a way to query it), and we should trigger the reset logic when that limit changes.

I'm actually not too thrilled to see that we have to shut down the entire flow -- including throwing away important work we've done already.

It would be nice if we didn't need to do this. Assuming that it's not possible to change the limit on an existing monitor (I don't think there is a way, but double check), what we should do instead is make sure that we execute the memory-check logic after we have emitted our checkpoint. Once we have, we should effectively re-run the part of the initialization that resets the kvfeed monitor, restart the kvfeed, and restart whatever else needs to be restarted.
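
A rough sketch of that alternative (hypothetical names throughout: `p`, `memLimitAtStart`, and `resetMemMonitorAndRestartKVFeed` are placeholders, not real identifiers):

```go
// Sketch: p is whichever processor owns the kvfeed's memory monitor.
// After a checkpoint has been emitted, compare the limit the bound account
// was created with against the current setting; if it changed, re-run only
// the memory-related initialization (recreate the monitor and restart the
// kvfeed) instead of tearing down the whole flow.
cur := changefeedbase.PerChangefeedMemLimit.Get(&p.flowCtx.Cfg.Settings.SV)
if cur != p.memLimitAtStart {
	if err := p.resetMemMonitorAndRestartKVFeed(ctx, cur); err != nil {
		return err
	}
	p.memLimitAtStart = cur
}
```
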

jayshrivastava added a commit to jayshrivastava/cockroach that referenced this pull request Aug 23, 2023
Previously, the kvfeed was responsible for monitoring for
node drains using a goroutine. This change moves this logic
into the change aggregator and removes the goroutine.
Overall, this change makes the code more organized and performant.

This change was inspired by work being done for cockroachdb#109167. The
work in that PR requires being able to restart the kvfeed.
Having drain logic intermingled with the kvfeed makes
restarts much more complex, hard to review, prone to bugs, etc.

Informs: cockroachdb#96953
Release note: None
Epic: None
craig bot pushed a commit that referenced this pull request Aug 23, 2023
…109356

108485: github: code coverage workflows r=RaduBerinde a=RaduBerinde

This change adds two GitHub Action workflows which run on each PR. One generates unit test code coverage data, and one publishes that data to a GCS bucket from where Reviewable can access it.

We generate coverage data using `bazel coverage`, but we restrict it to only test the packages that have been modified by the PR.

Two workflows are required for security (the first workflow runs potentially malicious code from a fork); for more details, see https://securitylab.github.com/research/github-actions-preventing-pwn-requests/

Epic: none
Release note: None

109036: roachprod: better determination if scp -R flag can be used r=RaduBerinde a=RaduBerinde

When uploading a file to a cluster, we use the "tree dist" algorithm by default. This uploads the file to a single node, then we copy the file from that node to the other nodes (up to 10).

This only makes sense if the remote-to-remote transfers can happen directly, which only happens if we pass the `-R -A` flags to `scp`. Unfortunately older versions don't support these flags. Currently the flags are only passed if the OS is `darwin`.

This commit improves the determination: we run `ssh -V` (once) and check whether the SSL library's major version is 3. For reference, some examples of what `ssh -V` returns:
 - recent MacOSX: `OpenSSH_9.0p1, LibreSSL 3.3.6`
 - Ubuntu 22.04: `OpenSSH_8.9p1 Ubuntu-3ubuntu0.3, OpenSSL 3.0.2 15 Mar 2022`

In addition, if the version is not 3, we disable the use of "tree dist".
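
For illustration only, a standalone sketch of that kind of version probe (not the roachprod implementation; the regex and helper name are assumptions):

```go
package main

import (
	"fmt"
	"os/exec"
	"regexp"
	"strconv"
)

// sslMajorVersion runs `ssh -V` once and extracts the major version of the
// linked SSL library from output such as "OpenSSH_9.0p1, LibreSSL 3.3.6".
// It returns 0 if the version cannot be determined.
func sslMajorVersion() int {
	// ssh prints its version banner to stderr, so capture both streams.
	out, _ := exec.Command("ssh", "-V").CombinedOutput()
	m := regexp.MustCompile(`(LibreSSL|OpenSSL)\s+(\d+)\.`).FindStringSubmatch(string(out))
	if len(m) < 3 {
		return 0
	}
	major, err := strconv.Atoi(m[2])
	if err != nil {
		return 0
	}
	return major
}

func main() {
	// Only allow remote-to-remote copies (scp -R -A) and "tree dist" when
	// the SSL library major version is 3.
	fmt.Println("scp supports -R -A:", sslMajorVersion() == 3)
}
```
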

Epic: none
Release note: None

109260: sql stats: skip tests hitting combinedstmts and statements endpoints r=gtr a=gtr

Part of #109184.

This commit skips tests which hit the `combinedStmts` or `statements` endpoints, which will sometimes time out under stress as a result of recent backend changes. The test investigation is tracked by #109184.

Release note: None

109288: dev: error when trying to `dev test` a bazel tested target r=rickystewart a=liamgillies

Running `dev test` on these integration tests will always fail, so this PR adds an error when running the command on those files.

Fixes: #107813
Release note: None

109292: sql: fix expected batch count for edge case in copy test r=rharding6373 a=rharding6373

In TestLargeDynamicRows we test that 4 rows of data can fit in a batch size of at least 4 rows given default memory sizes. However, when we set the batch row size to the minimum value of 4, the test hook that counts batches counts an extra empty batch. This PR adjusts the minimum row size to 5 for the purposes of this test.

Epic: None
Fixes: #109134

Release note: None

109324: build: update bazel builder build docs r=rickystewart a=rail

Previously, the documentation described a manual build of the `bazelbuilder` docker image. The current approach is to use CI to build the image.

This PR updates the documentation to reflect the current process, including the FIPS image steps.

Epic: none
Release note: None

109340: changefeedccl: move node drain handling logic out of kvfeed r=miretskiy a=jayshrivastava

Previously, the kvfeed was responsible for monitoring for
node drains using a goroutine. This change moves this logic
into the change aggregator and removes the goroutine.
Overall, this change makes the code more organized and performant.

This change was inspired by work being done for #109167. The
work in that PR requires being able to restart the kvfeed.
Having drain logic intermingled with the kvfeed makes
restarts much more complex, hard to review, prone to bugs, etc.

Informs: #96953
Release note: None
Epic: None

109349: kv: wait on latches on each key in reverse acquisition order r=arulajmani,kvoli a=nvanbenschoten

This commit allocates latch IDs from the top of the uint64 space and in reverse order. This is done to order latches in the tree on the same key in reverse order of acquisition. Doing so ensures that when we iterate over the tree and see a key with many conflicting latches, we visit the latches on that key in the reverse order that they will be released. In doing so, we minimize the number of open channels that we wait on (calls to `waitForSignal`) and minimize the number of goroutine scheduling points. This is important to avoid spikes in runnable goroutines after each request completes, which can negatively affect node health.

See experiments below.
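
As a standalone illustration of the reverse ID-allocation idea (a sketch, not the actual latch manager code):

```go
package main

import (
	"fmt"
	"math"
	"sync/atomic"
)

// idAllocator hands out IDs starting from the top of the uint64 space and
// counting down, so that sorting IDs in ascending order yields the reverse
// of acquisition order.
type idAllocator struct {
	next uint64
}

func newIDAllocator() *idAllocator {
	return &idAllocator{next: math.MaxUint64}
}

func (a *idAllocator) nextID() uint64 {
	// Atomic decrement: the first caller gets MaxUint64, the second
	// MaxUint64-1, and so on.
	return atomic.AddUint64(&a.next, ^uint64(0)) + 1
}

func main() {
	a := newIDAllocator()
	fmt.Println(a.nextID(), a.nextID(), a.nextID())
}
```
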

Epic: None
Release note (performance improvement): The impact of high concurrency blind writes to the same key on goroutine scheduling latency was reduced.

109356: build: explicitly set SKIP_LABEL_TEST_FAILURE in compose.sh r=rickystewart a=chrisseto

Previously, `SKIP_LABEL_TEST_FAILURE` was being set via a TeamCity configuration, which was quite opaque, as the majority of CI configuration for Cockroach is stored as shell scripts within its repo. This commit follows that pattern by explicitly setting `SKIP_LABEL_TEST_FAILURE` in the script that runs `TestComposeCompare`.

Epic: None
Release note: None

Co-authored-by: Radu Berinde <[email protected]>
Co-authored-by: gtr <[email protected]>
Co-authored-by: Liam Gillies <[email protected]>
Co-authored-by: rharding6373 <[email protected]>
Co-authored-by: Rail Aliiev <[email protected]>
Co-authored-by: Jayant Shrivastava <[email protected]>
Co-authored-by: Nathan VanBenschoten <[email protected]>
Co-authored-by: Chris Seto <[email protected]>