
Generation and separate status updates for DCs #7149

Merged
merged 2 commits into from
May 9, 2016

Conversation

0xmichalis
Contributor

This PR adds generation numbers and a separate status update call for DCs.

@smarterclayton @ironcladlou @deads2k @liggitt @openshift/api-review

Closes #6337
Closes #6465
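
For context, the handshake this enables follows the standard Kubernetes convention: the server bumps metadata.generation on every spec change, and the controller records the generation it has processed in status. A minimal sketch, not code from this PR, with stub types standing in for the real API objects and an ObservedGeneration status field assumed to mirror upstream Deployments:

package main

import "fmt"

// Stub types standing in for the real API objects.
type DeploymentConfigStatus struct {
    ObservedGeneration int64 // generation most recently processed by the controller
    LatestVersion      int   // incremented for each new deployment
}

type DeploymentConfig struct {
    Generation int64 // bumped by the server on every spec change
    Status     DeploymentConfigStatus
}

// configObserved reports whether the controller has caught up with the latest
// spec change; clients can poll this instead of mutating status themselves.
func configObserved(dc *DeploymentConfig) bool {
    return dc.Status.ObservedGeneration >= dc.Generation
}

func main() {
    dc := &DeploymentConfig{Generation: 2}
    dc.Status.ObservedGeneration = 1
    fmt.Println(configObserved(dc)) // false: the spec change has not been observed yet
}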

@0xmichalis
Contributor Author

Finally got generation working for both new and old clients. Still needs tests, but this is ready for review.

@openshift/api-review @ironcladlou

@0xmichalis
Contributor Author

@liggitt addressed all of your comments from #6479

@0xmichalis 0xmichalis changed the title [WIP] Generation for DCs Generation for DCs Feb 11, 2016
@0xmichalis
Contributor Author

[test][extended:core(client deployments)]

@smarterclayton
Contributor

Not sure if changing strategy should always update generation. Need to
sort through that.


@smarterclayton
Contributor

Maybe in the classic meaning, I guess. Not in the user-driven meaning.


@openshift-bot openshift-bot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Feb 12, 2016
@0xmichalis
Contributor Author

[test][extended:core(client deployments)]

@0xmichalis
Contributor Author

Not sure if my extended tests ran, and/or whether "- Error: Element not found: .deployment-trigger" is relevant.

@smarterclayton is the indication in core() a regexp, or should I pass the whole test name?

@@ -76,7 +76,7 @@ var (
GroupsToResources = map[string][]string{
BuildGroupName: {"builds", "buildconfigs", "buildlogs", "buildconfigs/instantiate", "buildconfigs/instantiatebinary", "builds/log", "builds/clone", "buildconfigs/webhooks"},
ImageGroupName: {"imagestreams", "imagestreammappings", "imagestreamtags", "imagestreamimages", "imagestreamimports"},
- DeploymentGroupName: {"deployments", "deploymentconfigs", "generatedeploymentconfigs", "deploymentconfigrollbacks", "deploymentconfigs/log", "deploymentconfigs/scale"},
+ DeploymentGroupName: {"deployments", "deploymentconfigs", "generatedeploymentconfigs", "deploymentconfigrollbacks", "deploymentconfigs/log", "deploymentconfigs/scale", "deploymentconfigs/status"},
Contributor


Not in this group; we grant edit on this one. It belongs in OpenshiftStatusGroupName until we manage to finish killing these. We're only keeping them for compatibility with old clients and unreconciled roles.

@0xmichalis
Contributor Author

[test][extended:core(deployment generation)]

@0xmichalis 0xmichalis closed this Feb 14, 2016
@0xmichalis 0xmichalis reopened this Feb 14, 2016
@smarterclayton
Contributor

It is a regex, and it needs to be escaped correctly.


@0xmichalis
Contributor Author

--- FAIL: TestTriggers_imageChange-2 (7.64s)
    deploy_trigger_test.go:148: Waiting for image stream mapping to be reflected in the IS status...
    deploy_trigger_test.go:158: Still waiting for latest tag status on ImageStream test-image-stream
    deploy_trigger_test.go:155: ImageStream test-image-stream now has Status with tags: map[string]api.TagEventList{"latest":api.TagEventList{Items:[]api.TagEvent{api.TagEvent{Created:unversioned.Time{Time:time.Time{sec:63591320857, nsec:0, loc:(*time.Location)(0x4cca940)}}, DockerImageReference:"registry:8080/openshift/test-image@sha256:00000000000000000000000000000001", Image:"sha256:00000000000000000000000000000001", Generation:1}}, Conditions:[]api.TagEventCondition(nil)}}
    deploy_trigger_test.go:171: Waiting for a new deployment config in response to ImageStream update
    deploy_trigger_test.go:182: unexpected image for pod template container 0; expected "registry:8080/openshift/test-image@sha256:00000000000000000000000000000001", got "registry:8080/repo1:ref1"
FAIL

@smarterclayton the image change controller needs to update both status (latestVersion, details) and spec (it resolves the container image). Should I add an atomic update for DCs that updates both?

@smarterclayton
Contributor

Couple of comments

@0xmichalis
Contributor Author

@smarterclayton the image change controller needs to update both status (latestVersion, details) and spec (it resolves the container image). Should I add an atomic update for DCs that updates both?

Hm, actually when I run with this PR, I see the container image being resolved, and I am using a status update in the image controller... :confused:

@0xmichalis
Contributor Author

[test][extended:core(client deployments)]

@smarterclayton
Contributor

Yes, like we did for builds. Make sure it's internal only for now.


@0xmichalis
Contributor Author

Yes, like we did for builds. Make sure it's internal only for now.

Not sure about builds; I remember implementing it for imagestreamtags.

@0xmichalis
Contributor Author

@smarterclayton the controller is using an OpenShift client; how can I make the update if the storage is internal-only? In other words, how can the server know that this update comes from the controller and use the internal storage? Annotations? That would be bad.

@0xmichalis
Contributor Author

An idea would be to allow container image changes in status updates.

@smarterclayton
Contributor

That's what I meant.


@smarterclayton
Contributor

That's a lot harder. I don't think we can do that easily today (and I
really don't want to allow it to update spec). However, shouldn't it just
fire the config change and then handle creating a DC from that status
atomically?


@0xmichalis
Contributor Author

I also don't like allowing status to modify the spec; it defeats the purpose of adding status in the first place. The best solution I can think of right now is to have the image change controller update the container image and the trigger params, detect that in the update hook, and update status details and maybe latestVersion (#7439).
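
A rough sketch of that idea with hypothetical types and names (the real hook would live in the registry's update strategy): compare old and new pod templates in one place and fill in status server-side when they differ.

package main

import "reflect"

// Hypothetical stand-ins for the real API types.
type PodTemplate struct{ Image string }

type DC struct {
    Spec   struct{ Template PodTemplate }
    Status struct {
        LatestVersion int
        Details       string
    }
}

// prepareForUpdate sketches the proposed hook: when the pod template changed,
// record status details and maybe bump latestVersion (#7439) server-side.
func prepareForUpdate(newDC, oldDC *DC) {
    if !reflect.DeepEqual(newDC.Spec.Template, oldDC.Spec.Template) {
        newDC.Status.Details = "caused by an image change"
        newDC.Status.LatestVersion = oldDC.Status.LatestVersion + 1
    }
}

func main() {
    oldDC, newDC := &DC{}, &DC{}
    newDC.Spec.Template.Image = "registry:8080/repo1@sha256:newdigest" // hypothetical image
    prepareForUpdate(newDC, oldDC)
}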

@0xmichalis
Contributor Author

@smarterclayton comments addressed, PTAL

@0xmichalis
Contributor Author

#7455 failure, got merged after the test started :/

@0xmichalis
Contributor Author

[test][extended:core(client deployments)]

@@ -225,11 +225,12 @@ func (o DeployOptions) deploy(config *deployapi.DeploymentConfig, out io.Writer)
}
}

config.Status.LatestVersion++
_, err = o.osClient.DeploymentConfigs(config.Namespace).Update(config)
config.Generation++
Contributor Author


So upstream they don't want any clients to update generation. I would prefer not to deviate, and it would actually be better since some of the hacks in the registry hook won't be needed. Any ideas here? I really don't want to add a new spec field[1], and I am leaning towards an annotation: openshift.io/instantiated?

[1] So once we have parity with upstream Deployments, the DeploymentConfigSpec will look like this:

type DeploymentConfigSpec struct {
    // Strategy describes how a deployment is executed.
    Strategy DeploymentStrategy

    // Minimum number of seconds for which a newly created pod should be ready
    // without any of its container crashing, for it to be considered available.
    // Defaults to 0 (pod will be considered available as soon as it is ready)
    MinReadySeconds int32

    // The number of old ReplicaSets to retain to allow rollback.
    // This is a pointer to distinguish between explicit zero and not specified.
    RevisionHistoryLimit *int32

    // Indicates that the deployment is paused and will not be processed by the
    // deployment controller.
    Paused bool

    // Triggers determine how updates to a DeploymentConfig result in new deployments. If no triggers
    // are defined, a new deployment can only occur as a result of an explicit client update to the
    // DeploymentConfig with a new LatestVersion.
    Triggers []DeploymentTriggerPolicy

    // Replicas is the number of desired replicas.
    Replicas int

    // Test ensures that this deployment config will have zero replicas except while a deployment is running. This allows the
    // deployment config to be used as a continuous deployment test - triggering on images, running the deployment, and then succeeding
    // or failing. Post strategy hooks and After actions can be used to integrate successful deployment with an action.
    Test bool

    // Selector is a label query over pods that should match the Replicas count.
    Selector map[string]string

    // Template is the object that describes the pod that will be created if
    // insufficient replicas are detected.
    Template *kapi.PodTemplateSpec
}

As a new user coming to DCs, I would probably feel overwhelmed by such a spec. @smarterclayton @ironcladlou do we want 100% parity? I could see us continuing to use ConfigChange triggers only (mapping it to Paused), and we may not care about MinReadySeconds.

Contributor Author


Oh, and I didn't include 1) ProgressDeadlineSeconds, which is still being cooked, and 2) the rollback spec, but the latter should never be set by users anyway.

@0xmichalis
Contributor Author

@deads2k as per our discussion yesterday, we are going to prohibit users from manipulating system labels/annotations ("openshift.io/") at some point, and I find that reasonable. It's going to be a breaking change for oc deploy since it already updates such annotations (--cancel, --retry). I am going to switch --latest to use one more annotation so we can move away from the status update, and we can revisit this once we are ready to reserve them.

@openshift-bot openshift-bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 4, 2016
@openshift-bot openshift-bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 5, 2016
@smarterclayton
Contributor

smarterclayton commented May 5, 2016 via email

@deads2k
Contributor

deads2k commented May 5, 2016

Wait, what? Who says that?

A separate discussion about how we might try to secure system-managed labels and annotations in the future. <something>.openshift.io would be a really easy possibility that could have a lot of benefit. I was choosing a new default label name intended for a user to change and had to choose its value.

@0xmichalis
Contributor Author

@ironcladlou @smarterclayton @deads2k @liggitt I have updated to use an annotation instead of incrementing the generation. PTAL
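
A minimal sketch of the client side of that change, using the openshift.io/instantiated key floated earlier in this thread; both the key and the helper are illustrative, not necessarily the PR's final code:

package main

import "fmt"

// setInstantiated flips an annotation instead of having the client bump
// generation or status.latestVersion; a server-side hook reacts to it.
func setInstantiated(annotations map[string]string) map[string]string {
    if annotations == nil {
        annotations = map[string]string{}
    }
    annotations["openshift.io/instantiated"] = "true" // illustrative key only
    return annotations
}

func main() {
    fmt.Println(setInstantiated(nil))
}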

@0xmichalis
Contributor Author

[test]

@openshift-bot openshift-bot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels May 9, 2016
@0xmichalis
Contributor Author

#8571 flake [test]

@0xmichalis
Contributor Author

yum again [test]

@0xmichalis
Contributor Author

Webdriver and #8789

[test]

Add generation numbers to deploymentconfigs. Helpful for decoupling
latestVersion from oc.
Add a separate call for updating the status of a deploymentconfig.
Use it where it makes sense (controllers should be responsible for
updating the status).

Also make spec updates tolerate status detail updates in case the
update originates from the image change controller.
// TODO: need to ensure status.latestVersion is not set out of order
dc := obj.(*api.DeploymentConfig)
// New configs always start at generation 1, and status is wiped on create:
// status is owned by controllers, not by the creating client.
dc.Generation = 1
dc.Status = api.DeploymentConfigStatus{}
Contributor


So this will break old clients?

Contributor Author


You either break those or you let them manipulate the deployment status. Having a preset status.latestVersion is the same as having a config change trigger, so I don't think the breakage is big.

@openshift-bot
Contributor

Evaluated for origin test up to f12aca3

// Stash the incoming status and restore the stored one: spec updates
// are normally not allowed to change status.
newStatus := newDc.Status
newDc.Status = oldDc.Status

// If this update comes from the imagechange controller, tolerate status detail updates.
Contributor


Does the image change controller expect to set both spec and status? If so, comment here, and make sure we have a test to cover that.

Contributor Author


Yes, I will address this in #8746 by removing the status update. So the flow on an image change will be: ImageChange controller updates spec ---(watch triggered)---> ConfigController sees a template diff, updates latestVersion ---(watch triggered)---> main dc controller runs the new deployment.
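
That flow, compressed into a sketch with hypothetical names (each handoff is a watch event in practice):

package main

import "reflect"

type Template struct{ Image string }

type Config struct {
    Spec   struct{ Template Template }
    Status struct{ LatestVersion int }
}

// Step 1: the image change controller resolves the new image into spec only.
func resolveImage(c *Config, image string) { c.Spec.Template.Image = image }

// Step 2: the config controller sees the template diff against the last
// deployed template and bumps latestVersion through the status endpoint.
// Step 3 (the main DC controller running the new deployment) follows.
func reconcile(c *Config, lastDeployed Template) bool {
    if !reflect.DeepEqual(c.Spec.Template, lastDeployed) {
        c.Status.LatestVersion++
        return true
    }
    return false
}

func main() {
    var c Config
    resolveImage(&c, "registry:8080/openshift/test-image@sha256:00000000000000000000000000000001")
    _ = reconcile(&c, Template{Image: "registry:8080/repo1:ref1"})
}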

@openshift-bot
Contributor

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/3687/)


@smarterclayton
Contributor

LGTM, please watch the queue afterwards in case we start seeing flakes. [merge]

@openshift-bot
Contributor

openshift-bot commented May 9, 2016

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/merge_pull_requests_origin/5853/) (Image: devenv-rhel7_4144)

@0xmichalis
Contributor Author

Flaked on #8802, which is a known issue (#8444).

re[merge]

@openshift-bot
Contributor

Evaluated for origin merge up to f12aca3

@openshift-bot openshift-bot merged commit a7cbb29 into openshift:master May 9, 2016
@0xmichalis 0xmichalis deleted the generation-for-dcs branch May 9, 2016 20:20