Modifying cloudLabels gives "Ignoring tag changes until we have #241" #1099

Closed
FrederikNJS opened this issue Dec 8, 2016 · 1 comment

@FrederikNJS (Contributor)

kops version: 1.4.1

I just tried to update the cloudLabels and nodeLabels on some of my instance groups, but the update reported that the tag changes were ignored, and I can also see that the new tags have not been applied to the Auto Scaling Groups.
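For context, the labels in question are set on the instance group specs (for example with kops edit ig nodes). A partial sketch of the relevant fields, with the values taken from the diff below and the rest of the spec omitted, looks roughly like this:

  spec:
    cloudLabels:          # intended to become tags on the Auto Scaling Group
      environment: prod
      team: site-reliability-enginering-cph
    nodeLabels:           # intended to become Kubernetes node labels
      environment: prod
      team: site-reliability-enginering-cph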

First I ran an update to see what would change:

> kops update cluster k8s.example.com
I1208 14:40:26.202385    5837 populate_cluster_spec.go:196] Defaulting DNS zone to: Z3E6RP0Q59CGJN
I1208 14:40:27.412810    5837 executor.go:68] Tasks: 0 done / 63 total; 30 can run
I1208 14:40:28.085793    5837 executor.go:68] Tasks: 30 done / 63 total; 12 can run
I1208 14:40:28.493765    5837 executor.go:68] Tasks: 42 done / 63 total; 17 can run
I1208 14:40:29.251172    5837 executor.go:68] Tasks: 59 done / 63 total; 4 can run
I1208 14:40:29.450190    5837 executor.go:68] Tasks: 63 done / 63 total; 0 can run
Will modify resources:
  AutoscalingGroup    	autoscalingGroup/master-eu-west-1a.masters.k8s.example.com
  	Tags                	 {KubernetesCluster: k8s.example.com, Name: master-eu-west-1a.masters.k8s.example.com, k8s.io/role/master: 1} -> {k8s.io/role/master: 1, team: site-reliability-enginering-cph, Name: master-eu-west-1a.masters.k8s.example.com, KubernetesCluster: k8s.example.com, environment: prod}

  AutoscalingGroup    	autoscalingGroup/master-eu-west-1c.masters.k8s.example.com
  	Tags                	 {k8s.io/role/master: 1, KubernetesCluster: k8s.example.com, Name: master-eu-west-1c.masters.k8s.example.com} -> {Name: master-eu-west-1c.masters.k8s.example.com, KubernetesCluster: k8s.example.com, environment: prod, k8s.io/role/master: 1, team: site-reliability-enginering-cph}

  AutoscalingGroup    	autoscalingGroup/nodes.k8s.example.com
  	Tags                	 {KubernetesCluster: k8s.example.com, Name: nodes.k8s.example.com, k8s.io/role/node: 1} -> {environment: prod, k8s.io/role/node: 1, team: site-reliability-enginering-cph, Name: nodes.k8s.example.com, KubernetesCluster: k8s.example.com}

  AutoscalingGroup    	autoscalingGroup/master-eu-west-1b.masters.k8s.example.com
  	Tags                	 {KubernetesCluster: k8s.example.com, Name: master-eu-west-1b.masters.k8s.example.com, k8s.io/role/master: 1} -> {k8s.io/role/master: 1, team: site-reliability-enginering-cph, Name: master-eu-west-1b.masters.k8s.example.com, KubernetesCluster: k8s.example.com, environment: prod}

Must specify --yes to apply changes

Then I actually applied the update:

> kops update cluster k8s.example.com --yes
I1208 14:41:12.434633    5975 populate_cluster_spec.go:196] Defaulting DNS zone to: Z3E6RP0Q59CGJN
I1208 14:41:13.390210    5975 executor.go:68] Tasks: 0 done / 63 total; 30 can run
I1208 14:41:13.876981    5975 executor.go:68] Tasks: 30 done / 63 total; 12 can run
I1208 14:41:15.037965    5975 executor.go:68] Tasks: 42 done / 63 total; 17 can run
I1208 14:41:15.819645    5975 executor.go:68] Tasks: 59 done / 63 total; 4 can run
W1208 14:41:15.884933    5975 autoscalinggroup.go:192] Ignoring tag changes until we have #241: %vmap[team:site-reliability-enginering-cph Name:master-eu-west-1c.masters.k8s.example.com KubernetesCluster:k8s.example.com environment:prod k8s.io/role/master:1]
W1208 14:41:15.888828    5975 autoscalinggroup.go:192] Ignoring tag changes until we have #241: %vmap[Name:master-eu-west-1b.masters.k8s.example.com KubernetesCluster:k8s.example.com environment:prod k8s.io/role/master:1 team:site-reliability-enginering-cph]
W1208 14:41:15.959747    5975 autoscalinggroup.go:192] Ignoring tag changes until we have #241: %vmap[environment:prod k8s.io/role/master:1 team:site-reliability-enginering-cph Name:master-eu-west-1a.masters.k8s.example.com KubernetesCluster:k8s.example.com]
W1208 14:41:15.975720    5975 autoscalinggroup.go:192] Ignoring tag changes until we have #241: %vmap[team:site-reliability-enginering-cph Name:nodes.k8s.example.com KubernetesCluster:k8s.example.com environment:prod k8s.io/role/node:1]
I1208 14:41:16.045292    5975 executor.go:68] Tasks: 63 done / 63 total; 0 can run
I1208 14:41:16.201218    5975 update_cluster.go:150] Exporting kubecfg for cluster
Wrote config for k8s.example.com to "/home/frederiknjs/.kube/config"

Cluster is starting.  It should be ready in a few minutes.

Suggestions:
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.k8s.example.com
 * read about installing addons: https://github.com/kubernetes/kops/blob/master/docs/addons.md

It seems that the mentioned issue (#241) was already closed before kops 1.4.1 was released, so shouldn't this have been fixed?

@krisnova (Contributor)

This is a one-liner - I just opened a PR that addresses it - thanks @FrederikNJS
