
kvserver: Replace multiTestContext with TestCluster in client_replica_gc_test #50657

Merged

Conversation

lunevalex
Collaborator

Makes progress on #8299

multiTestContext is a legacy construct that is deprecated in favor of running
tests via TestCluster. This is one PR out of many to remove the usage of
multiTestContext in the client_replica_gc test cases.

Release note: none

@cockroach-teamcity
Member

This change is Reviewable

@lunevalex lunevalex requested review from tbg and nvanbenschoten June 25, 2020 20:20
@lunevalex lunevalex force-pushed the alex/remove-multi-context-replica-gc-test branch from e4fd7e9 to 4438167 Compare June 26, 2020 14:37
Member

@nvanbenschoten nvanbenschoten left a comment


Reviewed 3 of 3 files at r1.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @lunevalex and @tbg)


pkg/kv/kvserver/client_replica_gc_test.go, line 38 at r1 (raw file):

	const numStores = 3

	// Use actual engines (not in memory) because the in-mem ones don't write

I'm surprised we need this. I don't quite understand the justification: you say the test would still pass, but then say it would probably fail. What were you seeing without it? The in-memory engine should still support a mock file system that every disk access goes through. We seem to handle this correctly below with calls like `fs.WriteFile(eng, ...)`. Are we forgetting to do that somewhere?


pkg/kv/kvserver/replica_gc_queue.go, line 92 at r1 (raw file):

	metrics                           ReplicaGCQueueMetrics
	db                                *kv.DB
	replicaGCQueueInactivityThreshold time.Duration

Are we actually using this? I would have expected to see something in replicaGCShouldQueueImpl.


pkg/kv/kvserver/replica_gc_queue.go, line 98 at r1 (raw file):

func newReplicaGCQueue(store *Store, db *kv.DB, gossip *gossip.Gossip) *replicaGCQueue {
	replicaGCQueueInactivityThreshold := ReplicaGCQueueInactivityThreshold
	if store.TestingKnobs().ReplicaGCQueueInactivityThreshold != 0 {

nit: this kind of thing is often done using a temp variable like:

if knob := store.TestingKnobs().ReplicaGCQueueInactivityThreshold; knob != 0 {
	replicaGCQueueInactivityThreshold = knob
}

@lunevalex
Collaborator Author


pkg/kv/kvserver/replica_gc_queue.go, line 92 at r1 (raw file):

Previously, nvanbenschoten (Nathan VanBenschoten) wrote…

Are we actually using this? I would have expected to see something in replicaGCShouldQueueImpl.

Good catch, I did not finish plumbing it through, but the test passed. This was meant to replace the bit where we were manually moving the clock. It looks like it's not needed because we disable the queue at the start of the cluster and then enable it, which means this gets picked up on the first run. Going to remove this.

Collaborator Author

@lunevalex lunevalex left a comment


Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @lunevalex, @nvanbenschoten, and @tbg)


pkg/kv/kvserver/client_replica_gc_test.go, line 38 at r1 (raw file):

Previously, nvanbenschoten (Nathan VanBenschoten) wrote…

I'm surprised we need this. I don't quite understand the justification: you say the test would still pass, but then say it would probably fail. What were you seeing without it? The in-memory engine should still support a mock file system that every disk access goes through. We seem to handle this correctly below with calls like `fs.WriteFile(eng, ...)`. Are we forgetting to do that somewhere?

Great question. I did not change the logic or conditions of the test, just its usage of multiTestContext. I will test it out to see if it makes a difference.

@lunevalex lunevalex force-pushed the alex/remove-multi-context-replica-gc-test branch 2 times, most recently from 73d8056 to 01f245e Compare June 29, 2020 18:46
@lunevalex
Collaborator Author


pkg/kv/kvserver/client_replica_gc_test.go, line 38 at r1 (raw file):

Previously, lunevalex wrote…

Great question. I did not change the logic or conditions of the test, just its usage of multiTestContext. I will test it out to see if it makes a difference.

Actually, you are right. I am not sure why I thought the original test was creating file-system-based engines; I removed all of that, switched to in-memory ones, and everything works.

…_gc_test

Makes progress on cockroachdb#8299

multiTestContext is a legacy construct that is deprecated in favor of running
tests via TestCluster. This is one PR out of many to remove the usage of
multiTestContext in the client_replica_gc test cases.

Release note: none
@lunevalex lunevalex force-pushed the alex/remove-multi-context-replica-gc-test branch from 01f245e to b43124c Compare June 29, 2020 18:49
Member

@nvanbenschoten nvanbenschoten left a comment


:lgtm: Thanks for the update; the PR is nice and straightforward now.

Reviewed 3 of 3 files at r2.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @tbg)

@lunevalex
Collaborator Author

bors r+

@craig
Contributor

craig bot commented Jun 29, 2020

Build failed (retrying...)

@craig
Contributor

craig bot commented Jun 30, 2020

Build succeeded

@craig craig bot merged commit f1d6bc3 into cockroachdb:master Jun 30, 2020