
Rework/improve conditions #64

Open
Fizzadar opened this issue Sep 23, 2020 · 11 comments · May be fixed by #71
Fizzadar commented Sep 23, 2020

The current conditions, which control where deployments, dependencies, and upgrades run or exist:

dev: true  # dev only, except where other conditions
test: true  # whether this is executed/needed during testing
envs:  []  # list of envs (clusters) this should be present in
namespaces: []  # list of namespaces this should be present in
notNamespaces: []  # list of namespaces this should *not* be present in

Ideally this should be less complex, with fewer gotchas between combinations!
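For illustration, the current flags might be evaluated roughly like this (a hypothetical sketch; the function and parameter names are mine, not the actual kubetools implementation, and the precise interaction between keys is exactly the murkiness this issue is about):

```python
def should_run(conditions, is_dev=False, is_test=False,
               env=None, namespace=None):
    # Hypothetical evaluation of the legacy condition keys.
    conditions = conditions or {}
    if conditions.get('dev') and not is_dev:
        return False  # dev-only object outside dev mode
    if is_test and not conditions.get('test', True):
        return False  # not needed during testing
    envs = conditions.get('envs', [])
    if envs and env not in envs:
        return False  # restricted to specific envs (clusters)
    namespaces = conditions.get('namespaces', [])
    if namespaces and namespace not in namespaces:
        return False  # restricted to specific namespaces
    if namespace in conditions.get('notNamespaces', []):
        return False  # explicitly excluded namespace
    return True
```

The overlap between `namespaces` and `notNamespaces` (and how `dev` interacts with the rest) is where the combination gotchas live.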

Fizzadar self-assigned this Sep 23, 2020
@Fizzadar

Proposal:

conditions:
  modes: [dev, test, deploy]  # (ktd up | ktd test | kubetools deploy)
  clusters:
    not: []
    match: ['*']
  namespaces:
    not: []
    match: ['*']

Essentially just a more explicit version of the current ones.

gchazot commented Sep 23, 2020

  • clusters is not consistent with env and context, as in, kubetools CLI expects
  • I think deploy is the only mode where clusters and namespaces apply which makes it a bit asymmetrical
  • More flexibility might be needed in the deploy case: One might want to specify "(cluster production AND namespace A) OR (cluster backup AND namespace B)" (as in, for example, NOT namespace B in cluster production, NOT namespace A in cluster backup, NOT namespace C in any cluster). This could be achieved with a list of clusters + namespaces pairs

Fizzadar commented Sep 24, 2020

* `clusters` is not consistent with `env` and `context`, as in, `kubetools` CLI expects

Yeah, this is a weird one really: context is a (cluster, user) tuple, but only the cluster part of that is of any concern to the app; the user part only relates to the place kubetools is being executed from.

* I think `deploy` is the only mode where `clusters` and `namespaces` apply which makes it a bit asymmetrical

You're absolutely right, doesn't quite work!

* More flexibility might be needed in the `deploy` case: One might want to specify "(cluster production AND namespace A) OR (cluster backup AND namespace B)" (as in, for example, NOT namespace B in cluster production, NOT namespace A in cluster backup, NOT namespace C in any cluster). This could be achieved with a list of `clusters` + `namespaces` pairs

Interesting! So maybe we need to focus the conditions around the 'mode'. So, as default (everywhere/always):

conditions:
  dev: true
  test: true
  deploy: true

Then limiting to clusters/namespaces could be:

conditions:
  deploy:
    - ['production', 'A']
    - ['backup', 'B']

And finally the case where we want to deploy a dependency in all namespaces on the staging cluster but not elsewhere:

conditions:
  deploy:
    - ['staging', '*']

This then removes the need for "not" matching because (I think?) everything can be achieved using the tuples with wildcard support.
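That tuple matching could be sketched with shell-style wildcards (a hypothetical helper, not kubetools code; I'm assuming `fnmatch`-style `*` semantics):

```python
from fnmatch import fnmatch

def deploy_allowed(rules, cluster, namespace):
    # Hypothetical: each rule is a (cluster, namespace) pair where '*'
    # is a wildcard; deploying is allowed if any rule matches.
    return any(
        fnmatch(cluster, rule_cluster) and fnmatch(namespace, rule_ns)
        for rule_cluster, rule_ns in rules
    )
```

With `[('staging', '*')]` this allows every namespace on the staging cluster and nothing anywhere else, matching the example above.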

Fizzadar commented Oct 6, 2020

One question - is defaulting everything to true or false better? Leaning towards true as this is more common, and then conditions can be used to explicitly disable certain things in certain places.

Regarding backwards compatibility: it will be trivial to detect the old-style condition object by its keys, so we can keep the old-style implementation alongside the new one.
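Detecting the old schema by its keys could be as simple as this (a hypothetical sketch; it assumes only the legacy schema uses these particular top-level keys, which holds for the shapes discussed in this thread):

```python
# Keys that only appear in the legacy condition schema (assumption).
LEGACY_ONLY_KEYS = {'envs', 'namespaces', 'notNamespaces'}

def is_legacy_conditions(conditions):
    # Old-style objects carry at least one legacy-only key; new-style
    # mode-based objects do not.
    return bool(LEGACY_ONLY_KEYS & set(conditions))
```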

gchazot commented Oct 6, 2020

One question - is defaulting everything to true or false better? Leaning towards true as this is more common, and then conditions can be used to explicitly disable certain things in certain places.

Definitely true. When reading a kubetools.yml you should expect that:

  • all "modes" are the same
  • everything you read is active

unless explicitly specified otherwise

gchazot commented Oct 6, 2020

Instead of:

conditions:
  deploy:
    - ['production', 'A']
    - ['backup', 'B']

I'd rather go explicit:

conditions:
  deploy:
    - env: 'production'
      namespace: 'A'
    - env: 'backup'
      namespace: 'B'

Which opens up to negations and wildcards nicely:

conditions:
  deploy:
    # namespace B in any environment but production
    - not_env: 'production'
      namespace: 'B'
    # Any namespace in backup environment
    - env: 'backup'  # Absence of `namespace` means `'*'`

It also opens up backwards-compatible evolution if we later add conditions other than env and namespace.
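A sketch of how such a rule list might be evaluated, treating an absent key as `'*'` and using shell-style wildcards (hypothetical helper names, not kubetools code):

```python
from fnmatch import fnmatch

def rule_matches(rule, env, namespace):
    # Hypothetical evaluator for one explicit rule. An absent `env`
    # or `namespace` key means '*'; `not_env` is a negation.
    if 'not_env' in rule and fnmatch(env, rule['not_env']):
        return False
    if not fnmatch(env, rule.get('env', '*')):
        return False
    return fnmatch(namespace, rule.get('namespace', '*'))

def deploy_allowed(rules, env, namespace):
    # Deploying is allowed if any rule in the list matches.
    return any(rule_matches(rule, env, namespace) for rule in rules)
```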

Fizzadar commented Oct 6, 2020

I like that a lot, definitely pro the explicitness!

One thing - we should sort out this mess of env vs cluster vs context. IMO we should go with cluster, as that's most appropriate (context is merely something the client uses to speak with a cluster). In tandem I also think we should populate KUBE_CLUSTER based on the context cluster name when deploying (rather than just passing the context name as-is, because they might be different).

Fizzadar commented Oct 6, 2020

Also, a thought for differentiating this from old configs:

name: my-project
configVersion: 1
requireKubetools: '~=12.0'  # optional, replacement for minKubetoolsVersion
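Version detection could then be a simple key check (a hypothetical sketch; it assumes legacy configs omit `configVersion` entirely, as proposed above):

```python
def config_version(config):
    # Hypothetical: legacy configs have no configVersion key, so they
    # default to 0; new-style configs declare 1 or above.
    return int(config.get('configVersion', 0))

def is_new_style(config):
    return config_version(config) >= 1
```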

gchazot commented Oct 6, 2020

One thing - we should sort out this mess of env vs cluster vs context. IMO we should go with cluster, as that's most appropriate (context is merely something the client uses to speak with a cluster).

As long as kubetools references kubectl's context, it should stick with context. I'm not sure, but I can imagine an end-user might have separate contexts defined (in .kube/config) to talk to the same cluster (maybe to represent separate "envs" 😆)

gchazot commented Oct 6, 2020

In tandem I also think we should populate KUBE_CLUSTER based on the context cluster name when deploying (rather than just passing the context name as-is, because they might be different).

I think more is better in this case, kept consistent with .kube/config:

  • KUBE_CLUSTER the corresponding cluster
  • KUBE_CONTEXT the corresponding context
  • KUBE_NAMESPACE the corresponding namespace

Now this poses a few issues:

  • I'm almost sure that context only makes sense from the deployment tooling side. Once the deployment is done, this notion doesn't exist any more in the k8s cluster.
  • The same might apply to cluster depending whether we're talking about the cluster name from .kube/config or a canonical name of the k8s cluster. My knowledge is short here for whether there is such a thing as the name of the cluster. If there is, then it would be best to use that.

Answers to these questions will define to what extent multiple users of the same cluster can customise their .kube/config, i.e. which values must be the same for all users.
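Resolving the cluster behind a context, rather than passing the context name through as-is, might look like this (a hypothetical sketch; it assumes the kubeconfig has already been parsed into a dict, e.g. with PyYAML, and the helper name is illustrative):

```python
import os

def export_kube_env(kubeconfig, context_name, namespace):
    # Hypothetical helper: look up the cluster referenced by the
    # chosen context and export the three variables discussed above.
    contexts = {c['name']: c['context'] for c in kubeconfig['contexts']}
    cluster = contexts[context_name]['cluster']
    os.environ['KUBE_CONTEXT'] = context_name
    os.environ['KUBE_CLUSTER'] = cluster  # cluster name, not context name
    os.environ['KUBE_NAMESPACE'] = namespace
    return cluster
```

Note this only resolves the cluster *name* as the user defined it in their .kube/config, which is exactly the customisation concern raised above.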

Fizzadar commented Oct 6, 2020

I'm almost sure that context only makes sense from the deployment tooling side. Once the deployment is done, this notion doesn't exist any more in the k8s cluster.

Spot on, context is really only a thing on the deployment (user/CI) side.

The same might apply to cluster depending whether we're talking about the cluster name from .kube/config or a canonical name of the k8s cluster. My knowledge is short here for whether there is such a thing as the name of the cluster. If there is, then it would be best to use that.

There isn't really any concept of a "cluster name" - https://stackoverflow.com/questions/38242062/how-to-get-kubernetes-cluster-name-from-k8s-api. So this all ends up being entirely dependent on the deployment environment, which means it's flexible but... complex!

Agree w/adding all three envvars, gives all the information to the underlying app and maximum flexibility! As for the conditions, context + namespace seem most sensible here to me. Ultimately the cluster AND context are up to the end user, and thus it's down to us/jenkins/wherever to enforce the correct values.

Fizzadar pushed a commit that referenced this issue Oct 8, 2020
Fizzadar linked a pull request Oct 8, 2020 that will close this issue