
standardize scope local cache #2657

Closed
jackfrancis wants to merge 1 commit from the scope-cache branch

Conversation

jackfrancis
Contributor

What type of PR is this?

/kind feature
/kind cleanup

What this PR does / why we need it:

This PR picks up where this one left off:

#2604

Here we standardize the way we maintain a local scope cache (in other words, we cache values that are retrieved externally so that we only have to fetch them once in any given scope lifecycle [e.g., one reconciliation loop]).
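
For illustration, here is a minimal sketch of the pattern, with hypothetical type names and lookup functions rather than the actual CAPZ types: external lookups are resolved once when the scope cache is initialized, and every later read within the same reconcile loop hits only the in-memory cache.

```go
package main

import (
	"context"
	"fmt"
)

// machineCache and machineScope are illustrative stand-ins for the real
// scope types; the actual cache holds values such as the VM SKU, VM image,
// and bootstrap data.
type machineCache struct {
	VMSKU         string
	BootstrapData string
}

type machineScope struct {
	cache *machineCache
	// The lookup funcs stand in for external (Azure / Kubernetes API) calls.
	lookupSKU       func(ctx context.Context) (string, error)
	lookupBootstrap func(ctx context.Context) (string, error)
}

// initMachineCache resolves every cached property exactly once per scope
// lifecycle (e.g. one reconciliation loop); later calls are no-ops.
func (m *machineScope) initMachineCache(ctx context.Context) error {
	if m.cache != nil {
		return nil
	}
	sku, err := m.lookupSKU(ctx)
	if err != nil {
		return err
	}
	data, err := m.lookupBootstrap(ctx)
	if err != nil {
		return err
	}
	m.cache = &machineCache{VMSKU: sku, BootstrapData: data}
	return nil
}

func main() {
	externalCalls := 0
	m := &machineScope{
		lookupSKU: func(ctx context.Context) (string, error) {
			externalCalls++
			return "Standard_D2s_v3", nil
		},
		lookupBootstrap: func(ctx context.Context) (string, error) {
			externalCalls++
			return "#cloud-config", nil
		},
	}
	ctx := context.Background()
	for i := 0; i < 3; i++ {
		if err := m.initMachineCache(ctx); err != nil {
			panic(err)
		}
	}
	// Prints "Standard_D2s_v3 external calls: 2": repeated reconcile steps
	// reuse the cache instead of refetching.
	fmt.Println(m.cache.VMSKU, "external calls:", externalCalls)
}
```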

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

Special notes for your reviewer:

Please confirm that if this PR changes any image versions, then that's the sole change this PR makes.

TODOs:

  • squashed commits
  • includes documentation
  • adds unit tests

Release note:

standardize scope local cache

@k8s-ci-robot added the following labels on Sep 15, 2022:

  • release-note: Denotes a PR that will be considered when it comes time to generate release notes.
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • kind/cleanup: Categorizes issue or PR as related to cleaning up code, process, or technical debt.
  • cncf-cla: yes: Indicates the PR's author has signed the CNCF CLA.
  • size/L: Denotes a PR that changes 100-499 lines, ignoring generated files.
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from jackfrancis by writing /assign @jackfrancis in a comment. For more information see: The Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@jackfrancis
Contributor Author

/test pull-cluster-api-provider-azure-e2e-exp
/test pull-cluster-api-provider-azure-e2e-optional

}

// InitMachineCache sets cached information about the machine to be used in the scope.
func (m *MachineScope) InitMachineCache(ctx context.Context) error {
@jackfrancis
Contributor Author

@devigned do you have any concerns about replacing the "do a one-time initialization of all cached properties when we instantiate the scope" approach with implementing the property value caching on demand?

My thinking (and this was @CecileRobertMichon's thinking when we implemented caching in a prior PR) is that not all scope lifecycles will require access to all of these properties, so by eagerly caching them we are actually deoptimizing things by fetching data unnecessarily.
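
For context, a rough sketch of that on-demand alternative, using hypothetical names rather than the PR's code: each property is fetched lazily on first access, so a scope lifecycle that never reads a property never pays for its external call.

```go
package scopecache

import (
	"context"
	"sync"
)

// lazyMachineScope caches each property on first access instead of eagerly
// populating everything when the scope is constructed.
type lazyMachineScope struct {
	lookupSKU func(ctx context.Context) (string, error) // stand-in for an Azure API call

	skuOnce sync.Once
	sku     string
	skuErr  error
}

// VMSKU performs the external lookup at most once; reconcile paths that
// never need the SKU never trigger the call at all.
func (s *lazyMachineScope) VMSKU(ctx context.Context) (string, error) {
	s.skuOnce.Do(func() {
		s.sku, s.skuErr = s.lookupSKU(ctx)
	})
	return s.sku, s.skuErr
}
```

The trade-off is that anything reading the value, including the Spec builders, now needs a context and an error return, which is the concern raised in the review comment below.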

@jackfrancis
Contributor Author

/retest

spec.SKU = m.cache.VMSKU
spec.Image = m.cache.VMImage
spec.BootstrapData = m.cache.BootstrapData
if bootstrapData, err := m.GetBootstrapData(ctx); err == nil {
Contributor

We should be handling the errors here and below

If I remember correctly, that was one of the main reasons we decided to go with a separate InitMachineCache func: it can handle errors and take a context without requiring those on the Spec functions.
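
To make that concrete, a small sketch (illustrative names, not the PR's actual code) of what the reviewer is describing: with the cache populated and error-checked up front by something like InitMachineCache, the Spec builders stay plain value copies with no context or error plumbing.

```go
package scopecache

import "fmt"

// Illustrative stand-ins: the cache was populated (and error-checked)
// up front during scope initialization.
type machineCache struct {
	VMSKU         string
	BootstrapData string
}

type machineScope struct {
	cache machineCache
}

type vmSpec struct {
	SKU           string
	BootstrapData string
}

// VMSpec needs no context and cannot fail, because every external lookup
// already happened during cache initialization.
func (m machineScope) VMSpec() vmSpec {
	return vmSpec{
		SKU:           m.cache.VMSKU,
		BootstrapData: m.cache.BootstrapData,
	}
}

// ExampleVMSpec shows the builder being called with only cached data.
func ExampleVMSpec() {
	m := machineScope{cache: machineCache{VMSKU: "Standard_D2s_v3", BootstrapData: "#cloud-config"}}
	fmt.Printf("%+v\n", m.VMSpec())
}
```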

}

	// NodeStatus represents the status of a Kubernetes node.
	NodeStatus struct {
		Ready   bool
		Version string
	}

// MachinePoolCache stores common machine information so we don't have to hit the API multiple times within the same reconcile loop.
Contributor

Suggested change:
- // MachinePoolCache stores common machine information so we don't have to hit the API multiple times within the same reconcile loop.
+ // MachinePoolCache stores common machine pool information so we don't have to hit the API multiple times within the same reconcile loop.

@jackfrancis
Contributor Author

/hold

Not sure we actually want this change

@k8s-ci-robot added the do-not-merge/hold label (Indicates that a PR should not merge because someone has issued a /hold command) on Sep 26, 2022
@CecileRobertMichon
Contributor

Not sure we actually want this change

Going to close it to remove noise for reviewers in PR queue, please reopen if/when it's ready for review

/close

@k8s-ci-robot
Contributor

@CecileRobertMichon: Closed this PR.

In response to this:

Not sure we actually want this change

Going to close it to remove noise for reviewers in PR queue, please reopen if/when it's ready for review

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jackfrancis jackfrancis deleted the scope-cache branch December 9, 2022 22:29