
Use storage version API #85

Closed
roycaihw wants to merge 3 commits from the kinder-e2e branch

Conversation

@roycaihw (Member) commented Dec 7, 2020

/assign @caesarxuchao

This PR changes the storage migrator trigger controller to use the new storage version API, which supports HA clusters. This PR also adds a test pull-kube-storage-version-migrator-ha-master, which rolling-updates an HA cluster to change the storage version of CRD from apiextensions.k8s.io/v1beta1 to apiextensions.k8s.io/v1.

The two existing tests pull-kube-storage-version-migrator-fully-automated-e2e and pull-kube-storage-version-migrator-disruptive are failing now because they exercise storage migration for custom resources. These tests will pass when custom resource storage version support is merged upstream: kubernetes/kubernetes#96403 (which I'm working on this week)

@k8s-ci-robot added the cncf-cla: yes label on Dec 7, 2020
@k8s-ci-robot added the size/XXL label on Dec 7, 2020
@roycaihw (Member, Author) commented Dec 8, 2020

/test pull-kube-storage-version-migrator-ha-master

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: roycaihw
To complete the pull request process, please assign caesarxuchao after the PR has been reviewed.
You can assign the PR to them by writing /assign @caesarxuchao in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

This was referenced Dec 8, 2020
@roycaihw force-pushed the kinder-e2e branch 10 times, most recently from 472da72 to 2b87611 on December 14, 2020 01:39
@k8s-ci-robot (Contributor)

@roycaihw: The following tests failed, say /retest to rerun all failed tests:

Test name                                                Commit   Rerun command
pull-kube-storage-version-migrator-fully-automated-e2e  4e2ae36  /test pull-kube-storage-version-migrator-fully-automated-e2e
pull-kube-storage-version-migrator-disruptive           4e2ae36  /test pull-kube-storage-version-migrator-disruptive

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@caesarxuchao (Member) left a comment

I need more time to think about the logic in storageversion_handler.go, so you don't need to address my comments in that file.

We need to revisit the storageState API. Currently it records the storageVersionHash; we need to convert it to record the storage version instead.

The storageState.status.lastHeartbeatTime field can probably be removed: now that we use a watch instead of polling, we won't miss storage version changes.

storageVersionInformer: apiserverinternalinformers.NewStorageVersionInformer(kubeClient,
    0,
    cache.Indexers{}),
queue: workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "migration_triggering_controller"),

Can we name this queue and the corresponding enqueue/dequeue functions as well? Maybe just migrationQueue.
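
A minimal sketch of that rename; the enqueue helper and key-based queueing are assumptions for illustration, not code from this PR:

// In the MigrationTrigger struct literal, the queue field becomes migrationQueue:
migrationQueue: workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "migration_triggering_controller"),

// A matching enqueue helper could look like this.
func (mt *MigrationTrigger) enqueueMigration(obj interface{}) {
    key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
    if err != nil {
        utilruntime.HandleError(err)
        return
    }
    mt.migrationQueue.Add(key)
}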

func (mt *MigrationTrigger) updateStorageVersion(_ interface{}, obj interface{}) {
    mt.addStorageVersion(obj)
}


I think we should handle deleteStorageVersion as well. The controller can garbage collect the corresponding storageVersionMigration and storageState objects.
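
A rough sketch of what that could look like, assuming a standard client-go event handler registration; deleteStorageVersion and enqueueCleanup are hypothetical names, not code from this PR:

// Register a delete handler alongside the existing add/update handlers.
storageVersionInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc:    mt.addStorageVersion,
    UpdateFunc: mt.updateStorageVersion,
    DeleteFunc: mt.deleteStorageVersion,
})

func (mt *MigrationTrigger) deleteStorageVersion(obj interface{}) {
    sv, ok := obj.(*apiserverinternalv1alpha1.StorageVersion)
    if !ok {
        // The object may arrive wrapped in a tombstone if the watch missed the delete event.
        tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
        if !ok {
            return
        }
        sv, ok = tombstone.Obj.(*apiserverinternalv1alpha1.StorageVersion)
        if !ok {
            return
        }
    }
    // Hypothetical helper: queue garbage collection of the corresponding
    // storageVersionMigration and storageState objects.
    mt.enqueueCleanup(sv.Name)
}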

m := &migrationv1alpha1.StorageVersionMigration{
    ObjectMeta: metav1.ObjectMeta{
        GenerateName: storageStateName(resource) + "-",

We need to garbage collect the old storageVersionMigration objects; cleanMigrations won't GC the old ones. We can do this in a follow-up.

@@ -47,25 +50,40 @@ const (
)

type MigrationTrigger struct {

I think we can split the Trigger into two controllers; the storageVersion control loop and the migration control loop are decoupled.
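
Roughly, the split might look like the following; the struct and field names are purely illustrative, not taken from this PR:

// One controller watches StorageVersion objects and records storage state;
// the other owns creating and tracking StorageVersionMigration objects.
type storageVersionController struct {
    storageVersionInformer cache.SharedIndexInformer
    queue                  workqueue.RateLimitingInterface
}

type migrationController struct {
    migrationInformer cache.SharedIndexInformer
    queue             workqueue.RateLimitingInterface
}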

)

func (mt *MigrationTrigger) processStorageVersion(ctx context.Context, sv *apiserverinternalv1alpha1.StorageVersion) error {
    klog.V(2).Infof("processing storage version %#v", sv)

In production we don't want to log the entire storageVersion object.
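
For example (a sketch; the exact verbosity levels are illustrative):

// Log only the identifying field at the default verbosity.
klog.V(2).Infof("processing storage version %s", sv.Name)
// Dump the full object only at a higher verbosity, if at all.
klog.V(5).Infof("storage version object: %#v", sv)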

}
}
storageVersionChanged := found && (ss.Status.CurrentStorageVersionHash != *sv.Status.CommonEncodingVersion ||
lastTransitionTime != mt.lastSeenTransitionTime[sv.Name]) && foundCondition

Can we make a standalone variable for the lastTransitionTime != mt.lastSeenTransitionTime[sv.Name] case, maybe naming it storageVersionPossiblyFlipped (or a better name)?
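
A sketch of that refactor; encodingVersionChanged is an illustrative name, the rest comes from the diff above:

storageVersionPossiblyFlipped := lastTransitionTime != mt.lastSeenTransitionTime[sv.Name]
encodingVersionChanged := ss.Status.CurrentStorageVersionHash != *sv.Status.CommonEncodingVersion
storageVersionChanged := found && foundCondition && (encodingVersionChanged || storageVersionPossiblyFlipped)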

@caesarxuchao (Member)

Also, I don't remember whether the migrator currently skips migrating the event resource; perhaps the event resource doesn't have a storageVersionHash.

We should skip migrating events; they're transient anyway.
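
A minimal sketch of such a skip, assuming the trigger has a group/resource pair in hand before creating a migration (the variable name resource follows the diff above; the check itself is an assumption):

// Skip the transient events resource before triggering a migration.
if resource.Resource == "events" && (resource.Group == "" || resource.Group == "events.k8s.io") {
    return nil
}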

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Mar 15, 2021
@roycaihw (Member, Author)

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Mar 16, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jun 14, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jul 14, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Labels
  • cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.)
  • lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)
  • size/XXL (Denotes a PR that changes 1000+ lines, ignoring generated files.)
5 participants