fix(catalog): fix issue where subscriptions sometimes get "stuck" #847

Conversation

@ecordell (Member) commented May 7, 2019

We were not resetting the client when updating a CatalogSource, which meant it was possible for the client to be stale and never attempt a reconnect if it didn't go unhealthy "in time" for us to detect and reconnect.

I ran InstallPlanWithCSVsAcrossMultipleCatalogSources 20 times in a row to verify it no longer flakes (previously it would error within ~5 tries).
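
For illustration, a minimal sketch of the fix pattern described above, assuming a client cache keyed by source and guarded by a mutex (the sourcesLock and sources names follow the snippet quoted in the review thread below; the client type and the resetClient helper are hypothetical stand-ins, not the actual OLM code):

	package catalog

	import "sync"

	// client is a stand-in for the registry gRPC client the description
	// refers to; the real type lives in the operator-registry client code.
	type client struct{}

	type Operator struct {
		sourcesLock sync.Mutex
		sources     map[string]*client
	}

	// resetClient evicts the cached client when its CatalogSource is updated,
	// so the next sync is forced to dial a fresh connection instead of waiting
	// for the stale client to be detected as unhealthy "in time".
	func (o *Operator) resetClient(sourceKey string) {
		o.sourcesLock.Lock()
		defer o.sourcesLock.Unlock()
		delete(o.sources, sourceKey)
	}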

@@ -446,6 +446,12 @@ func (o *Operator) syncCatalogSources(obj interface{}) (syncError error) {
	o.sourcesLastUpdate = timeNow()
	logger.Debug("registry server recreated")

	func() {
		o.sourcesLock.Lock()
		defer o.sourcesLock.Unlock()
		delete(o.sources, sourceKey)
	}()
@ecordell (Member, Author):

this is the fix; everything else is small things I noticed when reviewing

Contributor:

Any reason you use a closure instead of just lock/unlock without defer?

			o.sourcesLock.Lock()
			delete(o.sources, sourceKey)
			o.sourcesLock.Unlock()

@ecordell (Member, Author):

it's purely to keep the lock/unlock always next to each other, so that if we need to do additional work with sources here in the future, it's harder to make a mistake

Contributor:

Cool, just curious if I could learn something here, thanks.
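
As an aside, the two locking styles discussed above can be sketched like this (hypothetical names, not the PR's exact code). Both behave identically today; the closure simply keeps Lock and its deferred Unlock adjacent, so future additions to the critical section, early returns, or panics can't skip the Unlock:

	package catalog

	import "sync"

	type Operator struct {
		sourcesLock sync.Mutex
		sources     map[string]struct{}
	}

	// Style A: inline lock/unlock, as the reviewer suggested. Correct as long
	// as nothing between Lock and Unlock can return early or panic.
	func (o *Operator) deleteInline(sourceKey string) {
		o.sourcesLock.Lock()
		delete(o.sources, sourceKey)
		o.sourcesLock.Unlock()
	}

	// Style B: a closure with defer, as the PR does. The closure delimits the
	// critical section, and the deferred Unlock runs on every exit path.
	func (o *Operator) deleteScoped(sourceKey string) {
		func() {
			o.sourcesLock.Lock()
			defer o.sourcesLock.Unlock()
			delete(o.sources, sourceKey)
		}()
	}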

@openshift-ci-robot added the size/M label (denotes a PR that changes 30-99 lines, ignoring generated files) on May 7, 2019
@openshift-ci-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on May 7, 2019
@tkashem (Collaborator) commented May 7, 2019

/lgtm

@openshift-ci-robot added the lgtm label (indicates that a PR is ready to be merged) on May 7, 2019
@openshift-ci-robot (Collaborator) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ecordell, tkashem

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ecordell (Member, Author) commented May 7, 2019

/retest

1 similar comment:

@ecordell (Member, Author) commented May 7, 2019

/retest

@eparis commented May 8, 2019

/retest
hard to imagine this brought down an apiserver and broke etcd :)

@openshift-merge-robot openshift-merge-robot merged commit fb76336 into operator-framework:master May 8, 2019