
🌱 use cluster level lock instead of global lock for cluster accessor initialization #6380

Closed
fgutmann wanted to merge 2 commits from the lock-splitting branch

Conversation

@fgutmann (Contributor) commented Apr 5, 2022

What this PR does / why we need it:

Currently, initialization of a cluster accessor requires a global lock to be held. Initializing an accessor includes creating the dynamic REST mapper for the workload cluster and waiting for its caches to populate. On a high-latency connection to a workload cluster this can take a significant amount of time, because tens of requests are sent to the API server to initialize the dynamic REST mapper and populate the caches. During this time, all reconciliation loops that require an accessor for any workload cluster are fully blocked, effectively blocking reconciliation of all clusters.

This PR allows accessors for different clusters to be initialized in parallel by splitting the global lock into one lock per cluster. The implemented locking mechanism (sketched below) ensures that:

  1. only one cluster accessor can exist for a particular cluster at a time
  2. initialization of cluster accessors for different clusters does not block each other
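
For illustration, here is a minimal sketch of a keyed mutex with these two properties. This is an editorial sketch under assumptions: the names and the reference-counted cleanup are illustrative, not necessarily identical to the PR's util/sync/mutex/keyedmutex.go.

```go
package mutex

import "sync"

// keyedMutex provides mutual exclusion per key: callers locking the
// same key serialize, while callers locking different keys proceed in
// parallel.
type keyedMutex struct {
	mu    sync.Mutex // guards the locks map
	locks map[string]*lockEntry
}

// lockEntry is the per-key mutex plus a count of callers currently
// holding or waiting on it, so unused entries can be removed.
type lockEntry struct {
	mu   sync.Mutex
	refs int
}

func newKeyedMutex() *keyedMutex {
	return &keyedMutex{locks: map[string]*lockEntry{}}
}

// Lock blocks until the lock for the given key is available.
func (k *keyedMutex) Lock(key string) {
	k.mu.Lock()
	e, ok := k.locks[key]
	if !ok {
		e = &lockEntry{}
		k.locks[key] = e
	}
	e.refs++
	k.mu.Unlock()

	e.mu.Lock()
}

// Unlock releases the lock for the given key. It must only be called
// by a caller that currently holds the lock for that key.
func (k *keyedMutex) Unlock(key string) {
	k.mu.Lock()
	e := k.locks[key]
	e.refs--
	if e.refs == 0 {
		// No one else holds or waits on this key: drop the entry so
		// the map does not grow with every cluster ever seen.
		delete(k.locks, key)
	}
	k.mu.Unlock()

	e.mu.Unlock()
}
```

A caller would then wrap accessor initialization in Lock(clusterKey) / defer Unlock(clusterKey), so reconcilers touching different clusters never contend with each other.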

@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Apr 5, 2022
@linux-foundation-easycla bot commented Apr 5, 2022

CLA Signed

The committers are authorized under a signed CLA.

k8s-ci-robot added the cncf-cla: no label on Apr 5, 2022
@k8s-ci-robot (Contributor) commented:

Welcome @fgutmann!

It looks like this is your first PR to kubernetes-sigs/cluster-api 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/cluster-api has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

k8s-ci-robot added the needs-ok-to-test label on Apr 5, 2022
@k8s-ci-robot (Contributor) commented:

Hi @fgutmann. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
To complete the pull request process, please assign neolit123 after the PR has been reviewed.
You can assign the PR to them by writing /assign @neolit123 in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

k8s-ci-robot added the size/L and cncf-cla: yes labels and removed the cncf-cla: no label on Apr 5, 2022
@sbueringer (Member) commented:

/ok-to-test

k8s-ci-robot added the ok-to-test label and removed the needs-ok-to-test label on Apr 5, 2022
fgutmann marked this pull request as ready for review on April 6, 2022 01:12
k8s-ci-robot removed the do-not-merge/work-in-progress label on Apr 6, 2022
@fabriziopandini (Member) commented:

> This PR includes a separate commit to update docker/distribution from v2.8.0 to v2.8.1. This version upgrade was required to make the project build in my environment.

Would it be a problem to move this commit to a separate PR, so we can get it merged ASAP without waiting on discussion of the other changes?

Review threads on controllers/remote/cluster_cache.go, util/sync/mutex/keyedmutex.go, and go.mod (outdated, resolved)
@fgutmann (Contributor, Author) commented:

Thank you @fabriziopandini and @sbueringer very much for taking a look at this PR and providing feedback!

I will address the suggested changes next week. I'm currently on vacation and don't have access to my regular work environment.

@sbueringer (Member) commented Apr 27, 2022

> Thank you @fabriziopandini and @sbueringer very much for taking a look at this PR and providing feedback!
>
> I will address the suggested changes next week. I'm currently on vacation and don't have access to my regular work environment.

Sure, no rush! Enjoy your vacation

@vincepri (Member) commented:

@fgutmann Thanks for the PR! I looked through the code, and the keyed lock is definitely a good improvement on the existing global mutex. Have we also thought about adding some sort of timeout for getting the cluster accessor while the caches populate? If there is no timeout today, we can still run into the same issue regardless of key-level locks, given that there is always a fixed number of workers available to reconcile requests.

@fgutmann (Contributor, Author) commented:

@vincepri The discovery client used by the dynamic REST mapper gets its timeout from the rest config, which is 10 seconds per request. The discovery phase thus has a sensible timeout.

However, the cache.WaitForCacheSync(cacheCtx) call currently uses an unbounded context and can get stuck forever. What is a sensible timeout for initially syncing the caches? If my understanding is correct, initially only Nodes are synced (coming from the remote.DefaultIndexes list). Maybe 5 minutes? That should be on the safe side even for clusters with lots of nodes.
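
For concreteness, here is a minimal sketch of what such a bound could look like, assuming controller-runtime's cache.Cache interface; the helper name and the 5-minute value are illustrative, not the PR's exact code.

```go
package remote

import (
	"context"
	"fmt"
	"time"

	"sigs.k8s.io/controller-runtime/pkg/cache"
)

const initialCacheSyncTimeout = 5 * time.Minute

// waitForCacheSync bounds the initial cache sync with a timeout so a
// stuck or unreachable workload cluster cannot block accessor
// creation forever.
func waitForCacheSync(ctx context.Context, c cache.Cache) error {
	ctx, cancel := context.WithTimeout(ctx, initialCacheSyncTimeout)
	defer cancel()
	if !c.WaitForCacheSync(ctx) {
		return fmt.Errorf("failed to sync cache within %s", initialCacheSyncTimeout)
	}
	return nil
}
```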

fgutmann force-pushed the lock-splitting branch 2 times, most recently from 14c7cc2 to 17cbd0d on May 26, 2022 00:13
k8s-ci-robot added the size/XXL label and removed the size/L label on May 26, 2022
Before this commit, workload cluster client initialization required
a global lock to be held. If initialization of a single workload cluster
client took time, all other reconcile loops that required a workload
cluster connection were blocked until initialization finished.
Initialization of a workload cluster client can take a significant
amount of time, because it requires initializing the discovery client,
which sends multiple requests to the API server.

With this change, initialization of a workload cluster client only
requires holding a lock for the specific cluster. This means
reconciliation for other clusters is not affected by a long-running
workload cluster client initialization.
k8s-ci-robot added the size/L label and removed the size/XXL label on May 26, 2022
@fgutmann (Contributor, Author) commented:

Pushed an updated version with the changes discussed above. It now also contains a 5-minute timeout for the initial cache sync.

Comment on lines +21 to +23
// keyedMutex is a mutex locking on the key provided to the Lock function.
// Only one caller can hold the lock for a specific key at a time.
type keyedMutex struct {
Member: Was this copied from somewhere else? Can we reuse a library?

fgutmann (Contributor, Author): No, this was written for this PR. I did some research but didn't find a library that provides this functionality.

a, err := t.newClusterAccessor(ctx, cluster, indexes...)
if err != nil {
	log.V(4).Info("error creating new cluster accessor")
Member: Remove this? If it's an error, it shouldn't be an info?
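
(A hypothetical sketch of the shape being suggested: wrap and return the error instead of emitting a V(4) info message, leaving logging and requeueing to the caller. All names below are placeholders, not the PR's code.)

```go
package remote

import (
	"context"

	"github.com/pkg/errors"
)

// clusterAccessor and newAccessor are placeholders standing in for the
// PR's accessor type and constructor.
type clusterAccessor struct{}

func newAccessor(ctx context.Context) (*clusterAccessor, error) {
	return &clusterAccessor{}, nil
}

func getAccessor(ctx context.Context) (*clusterAccessor, error) {
	a, err := newAccessor(ctx)
	if err != nil {
		// Propagate the failure instead of logging it at info level;
		// the reconciler's error path will log and requeue.
		return nil, errors.Wrap(err, "error creating new cluster accessor")
	}
	return a, nil
}
```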

@sbueringer (Member) commented:

@fgutmann Do you have time to address the findings?

Co-authored-by: Vince Prignano <[email protected]>
@fgutmann (Contributor, Author) commented Jul 5, 2022

Updated the log messages and replied to the comments by @vincepri.

@k8s-ci-robot (Contributor) commented:

@fgutmann: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot added the needs-rebase label on Jul 27, 2022
@k8s-triage-robot commented:

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Oct 25, 2022
@fgutmann (Contributor, Author) commented:

Superseded by #6380

fgutmann closed this on Nov 14, 2022
fgutmann deleted the lock-splitting branch on November 14, 2022 19:23
@fabriziopandini (Member) commented:

@fgutmann thanks for this PR, it was really valuable in getting 6380 merged.
Sorry again about this PR slipping off the radar...

Labels
cncf-cla: yes · lifecycle/stale · needs-rebase · ok-to-test · size/L

7 participants