
sql: fix flakiness with InternalExecutorFactoryMemMonitor closing TestServers #84212

Merged · 1 commit · Jul 12, 2022

Conversation

RichardJCai
Contributor

Release note: None

RichardJCai requested review from a team as code owners on July 11, 2022 at 21:44
@cockroach-teamcity
Member

This change is Reviewable

Contributor

ajwerner left a comment


Reviewed all commit messages.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @RichardJCai and @yuzefovich)


pkg/server/server_sql.go line 934 at r1 (raw file):

	// systemIeMonitor is used for the jobsIeFactory and for the
	// circularInternalExecutor.
	// The schema changer may still be executing jobs queries.

Can you help me understand how this happens? Shouldn't the schema changer queries stop before the Closer gets called?
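
For context on why a racing allocation matters here: CockroachDB's memory monitors treat leftover reserved bytes at Stop time as a fatal error. Below is a minimal self-contained sketch of that assertion; `toyMonitor` and its methods are hypothetical stand-ins, not the real `pkg/util/mon` API, and the panic message is illustrative.

```go
package main

import "fmt"

// toyMonitor is a hypothetical stand-in for a bytes monitor: it tracks
// reserved bytes and, like the real monitor, treats leftover bytes at
// Stop time as a fatal error.
type toyMonitor struct {
	name         string
	curAllocated int64
}

func (m *toyMonitor) Grow(n int64)   { m.curAllocated += n }
func (m *toyMonitor) Shrink(n int64) { m.curAllocated -= n }

// Stop asserts that every reservation was released.
func (m *toyMonitor) Stop() {
	if m.curAllocated != 0 {
		panic(fmt.Sprintf("%s: unexpected %d leftover bytes", m.name, m.curAllocated))
	}
}

func main() {
	m := &toyMonitor{name: "internal-executor"}
	m.Grow(64) // a query reserves memory against the monitor
	// If a server Closer stops the monitor before the query's
	// reservation is released, this panics and the test flakes.
	m.Stop()
}
```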

@RichardJCai
Contributor Author


pkg/server/server_sql.go line 934 at r1 (raw file):

Previously, ajwerner wrote…

Can you help me understand how this happens? Shouldn't the schema changer queries stop before the Closer gets called?

I just updated the PR to not close anything.

I was seeing in TestAtMostOneRunningCreateStats that the monitor I attached to the jobs internal executor still had some memory allocated when Closer was called. Is it possible that the query stops but there's a race with the memory being cleared?
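
The suspected interleaving can be sketched as follows. This is a self-contained toy under stated assumptions (the byte count, the sleep, and the use of a bare counter are all illustrative, not the actual CockroachDB code): the query's reservation is released on another goroutine, racing with server shutdown.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

func main() {
	// Bytes still reserved by a jobs query at test-teardown time.
	var curAllocated int64 = 64

	go func() {
		// The query's cleanup releases the reservation, but on its
		// own goroutine; the sleep models scheduling delay.
		time.Sleep(time.Millisecond)
		atomic.AddInt64(&curAllocated, -64)
	}()

	// Server shutdown runs now. Whether it observes 0 or 64 leftover
	// bytes depends on goroutine scheduling, which is exactly a flake.
	if leftover := atomic.LoadInt64(&curAllocated); leftover != 0 {
		fmt.Printf("stopping the monitor here would panic: %d leftover bytes\n", leftover)
	}
}
```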

sql: fix flakiness with InternalExecutorFactoryMemMonitor closing TestServers

We do not actually have to worry about closing the
InternalExecutorFactoryMemMonitor.

Release note: None
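
As the commit message describes, the shape of the fix is simply to stop registering shutdown work for the monitor. A hypothetical sketch of that shape (the `closer` type and the commented-out registration are illustrative, not the actual diff; `systemIeMonitor` is the monitor named in the reviewed code):

```go
package main

import "fmt"

// closer models a function registered to run at server shutdown.
type closer func()

func main() {
	var closers []closer

	// Before (flaky): a closer stopped the monitor at shutdown.
	// closers = append(closers, func() { systemIeMonitor.Stop(ctx) })

	// After: no closer is registered for the monitor; its memory is
	// reclaimed when the process exits, so nothing actually leaks.

	for _, c := range closers {
		c()
	}
	fmt.Println("shutdown complete; monitor intentionally left running")
}
```

The tradeoff, which ajwerner raises below, is that skipping the Stop also forfeits the assertion that everything was cleaned up properly.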
Member

yuzefovich left a comment


:lgtm:

Reviewed 1 of 3 files at r1, 2 of 2 files at r2, all commit messages.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @ajwerner)

@RichardJCai
Contributor Author

Thanks!

bors r=yuzefovich

Contributor

ajwerner left a comment


Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @RichardJCai)


pkg/server/server_sql.go line 934 at r1 (raw file):

Previously, RichardJCai (Richard Cai) wrote…

I just updated the PR to not close anything.

I was seeing in TestAtMostOneRunningCreateStats that the monitor I attached to the jobs internal executor still had some memory allocated when Closer was called. Is it possible that the query stops but there's a race with the memory being cleared?

🤔 that's not very satisfying. Do you have more details on what was not released? If we could get to the bottom of it, it'd be really nice. It'd give us a claim that we do actually clean up properly. If we don't then we'll just let more folks get away with not cleaning up properly.

@craig
Contributor

craig bot commented Jul 12, 2022

Build succeeded.

@RichardJCai
Contributor Author

Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @RichardJCai)

pkg/server/server_sql.go line 934 at r1 (raw file):

Previously, ajwerner wrote…
🤔 that's not very satisfying. Do you have more details on what was not released? If we could get to the bottom of it, it'd be really nice. It'd give us a claim that we do actually clean up properly. If we don't then we'll just let more folks get away with not cleaning up properly.

Opened up an issue with context for now (#84302); I don't have much time in the near future to dig into this.
