Support for lifecycles #2763

Merged — 4 commits into kubernetes:master on Jul 21, 2017
Conversation

@justinsb (Member) commented Jun 16, 2017

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jun 16, 2017
@chrislovecnm (Contributor) left a comment:

Questions for you

@@ -107,6 +109,7 @@ func NewCmdUpdateCluster(f *util.Factory, out io.Writer) *cobra.Command {
cmd.Flags().StringVar(&options.SSHPublicKey, "ssh-public-key", options.SSHPublicKey, "SSH public key to use (deprecated: use kops create secret instead)")
cmd.Flags().StringVar(&options.OutDir, "out", options.OutDir, "Path to write any local output")
cmd.Flags().BoolVar(&options.CreateKubecfg, "create-kube-config", options.CreateKubecfg, "Will control automatically creating the kube config file on your local filesystem")
cmd.Flags().StringVar(&options.Phase, "phase", options.Phase, "Subset of tasks to run")
Contributor:

Should we have this as a list? So I can run "iam,network" for instance?
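For illustration, a stdlib-only sketch of what accepting a comma-separated `--phase` value could look like; `parsePhases`, `validPhases`, and the phase names here are hypothetical stand-ins, not the actual kops flag wiring:

```go
package main

import (
	"fmt"
	"strings"
)

// validPhases mirrors the kind of allow-list the flag would need;
// the exact phase names are illustrative, not kops' actual set.
var validPhases = map[string]bool{
	"iam":     true,
	"network": true,
	"cluster": true,
}

// parsePhases splits a comma-separated --phase value and validates each entry.
func parsePhases(value string) ([]string, error) {
	var phases []string
	for _, p := range strings.Split(value, ",") {
		p = strings.TrimSpace(p)
		if p == "" {
			continue
		}
		if !validPhases[p] {
			return nil, fmt.Errorf("unknown phase %q", p)
		}
		phases = append(phases, p)
	}
	return phases, nil
}

func main() {
	phases, err := parsePhases("iam,network")
	if err != nil {
		panic(err)
	}
	fmt.Println(phases) // [iam network]
}
```

In a cobra-based CLI the same effect could come from `pflag`'s `StringSliceVar`, which handles the comma splitting itself.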

Member (Author):

Can you explain the use case? Just convenience?

Contributor:

For instance, many companies have one group of users with permissions for IAM and network, and another group with permissions for EC2 instances. We have many users in this situation. It would be good to give such a user a single command to create the IAM and network resources.

Contributor:

When testing this I am noticing that we are not printing the allowed values for Phase. Can we add to update_cluster.go?

Member (Author):

We can add those later; let's start trying this out for real and then see whether we need it.

@@ -1,22 +0,0 @@
secret/kube: {}
Contributor:

Yay more of this stuff removed!

networkLifecycle := lifecyclePointer(fi.LifecycleSync)
clusterLifecycle := lifecyclePointer(fi.LifecycleSync)

switch c.Phase {
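The switch above is truncated in this view. A self-contained sketch of how per-phase lifecycles might be assigned follows; the types and constants are minimal stand-ins for `fi.Lifecycle` and the cloudup phases, not the PR's actual code:

```go
package main

import "fmt"

// Minimal stand-in for fi.Lifecycle (assumption: a string-like enum).
type Lifecycle string

const (
	LifecycleSync                   Lifecycle = "Sync"
	LifecycleExistsAndValidates     Lifecycle = "ExistsAndValidates"
	LifecycleExistsAndWarnIfChanges Lifecycle = "ExistsAndWarnIfChanges"
)

// Phase constants are illustrative stand-ins for cloudup's phases.
type Phase string

const (
	PhaseIAM     Phase = "iam"
	PhaseNetwork Phase = "network"
	PhaseCluster Phase = "cluster"
)

func lifecyclePointer(v Lifecycle) *Lifecycle { return &v }

// lifecyclesForPhase sketches the idea behind the switch: when running a
// later phase, resources owned by earlier phases are only validated,
// not (re)created or modified.
func lifecyclesForPhase(phase Phase) (iam, network, cluster *Lifecycle) {
	iam = lifecyclePointer(LifecycleSync)
	network = lifecyclePointer(LifecycleSync)
	cluster = lifecyclePointer(LifecycleSync)

	switch phase {
	case PhaseIAM:
		// Only IAM is synced; later phases are skipped in this sketch.
		network = nil
		cluster = nil
	case PhaseNetwork:
		iam = lifecyclePointer(LifecycleExistsAndValidates)
		cluster = nil
	case PhaseCluster:
		iam = lifecyclePointer(LifecycleExistsAndValidates)
		network = lifecyclePointer(LifecycleExistsAndValidates)
	}
	return iam, network, cluster
}

func main() {
	iam, network, _ := lifecyclesForPhase(PhaseCluster)
	fmt.Println(*iam, *network)
}
```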
Contributor:

I would definitely like to have the capability to run both PhaseNetwork and PhaseCluster at the same time.

Member (Author):

Can you explain why? In particular, why not just run two commands?

A goal here is to eventually make networks a top level object, like Cluster.

@@ -28,7 +28,8 @@ import (
)

type BootstrapChannelBuilder struct {
cluster *kops.Cluster
cluster *kops.Cluster
Lifecycle *fi.Lifecycle
Contributor:

Crazy question, should bootstrap be another lifecycle? This would allow for the easier upgrade of stuff like CNI providers.

Member (Author):

I think bootstrap is pretty tightly bound to the rest of the cluster. And if only the CNI has changed, then I think that our logic should ensure that only that is actually changed...

@@ -99,6 +100,57 @@ func (c *Context) NewTempDir(prefix string) (string, error) {
var typeContextPtr = reflect.TypeOf((*Context)(nil))

func (c *Context) Render(a, e, changes Task) error {
Contributor:

function documentation?

@@ -172,6 +175,20 @@ func RunUpdateCluster(f *util.Factory, clusterName string, out io.Writer, c *Upd
glog.Infof("Using SSH public key: %v\n", c.SSHPublicKey)
}

var phase cloudup.Phase
Contributor:

Should we have the phase options lower, say in create.go? I will probably need kops create -f mycluster.yaml --phase iam.

Member (Author):

Phase is really only for updating. Today there are some circumstances in which create will do an update (for historical reasons), and you have a PR to tweak that, and the kops-server work may also change that (because in k8s today, creating an object immediately does an update).

I'd prefer to leave our options open unless we know we need it today. Do we need it today?

Contributor:

Yes, the majority of users who would be interested in this would be using it with YAML support. We can merge and move it lower if you like.

Member (Author):

Let's get this in and look at the broader role of create vs update as part of the kops-server UX pass

@justinsb justinsb changed the title WIP: Support for lifecycles Support for lifecycles Jul 2, 2017
@chrislovecnm (Contributor):
Getting this message from update after running --phase iam: "Cluster is starting. It should be ready in a few minutes." We should probably tweak the output. I can file an issue on it and have a follow-up PR.

@chrislovecnm (Contributor) left a comment:

I am also seeing issues with the shared VPC and subnets being deleted. I know shared VPCs being deleted is an open issue, but now we are deleting the VPC 😞

I am not certain this is an issue with this PR, I am going to test with master as well.

@chrislovecnm (Contributor) left a comment:

Would like LifecycleExistsAndWarnIfChanges updated, and other nitpicks.

Looks awesome, what is the plan for getting this in?

networkLifecycle = lifecyclePointer(fi.LifecycleExistsAndWarnIfChanges)
} else {
iamLifecycle = lifecyclePointer(fi.LifecycleExistsAndValidates)
networkLifecycle = lifecyclePointer(fi.LifecycleExistsAndValidates)
Contributor:

Can we loosen these and change them to LifecycleExistsAndWarnIfChanges? Allowing this would fix multiple scenarios, including not using an internet gateway.

Member (Author):

Not using an internet gateway is orthogonal, but I think it would be great to have more custom topology support.

@justinsb (Member, Author):

Added comment on the Render function. Rebased.

Subnet deletion is probably pre-existing. Will send a PR now.

@chrislovecnm (Contributor):

Related #1576

@chrislovecnm (Contributor) left a comment:

Thanks

@chrislovecnm (Contributor):

The one challenge we have is that the network check may need to be LifecycleExistsAndWarnIfChanges, not a hard check.

@chrislovecnm (Contributor):

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jul 21, 2017
@k8s-github-robot:

/test all [submit-queue is verifying that this PR is safe to merge]

@k8s-github-robot:

Automatic merge from submit-queue

@k8s-github-robot k8s-github-robot merged commit 62d702a into kubernetes:master Jul 21, 2017