Updating Skaffold breaks redeploys since it changes labels
#3133
Comments
Thanks for opening @avloss - can you share more about your setup? I tried, but I can't reproduce it with our examples - Skaffold redeploys the resources nicely with a new label.
Hi, I cannot really reproduce it on demand, but I can say I hit the same issue while doing a GCP lab on Qwiklabs. The lab tells you to fetch the latest Skaffold, which fetches 0.41. We then clone https://github.com/blipzimmerman/microservices-demo-1 and run `skaffold run`, which works; then we modify the "image" in one of the YAML manifests, and `skaffold run` refuses to run because of the label change. I fetched an older version of Skaffold (I went down, somewhat arbitrarily, to 0.39) and it redeployed the change fine. So the problem does not seem to be tied to a Skaffold update as such, since I started with the latest version: only an image change was made, and only `skaffold run` was used.
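The reproduction described above amounts to roughly the following steps; the manifest path and the exact image edit are illustrative assumptions, not taken from the lab:

```bash
# Clone the lab's demo repo and deploy it.
git clone https://github.com/blipzimmerman/microservices-demo-1
cd microservices-demo-1
skaffold run   # first deploy succeeds

# Edit the "image" field in one of the Kubernetes manifests, e.g.
# kubernetes-manifests/frontend.yaml, then redeploy:
skaffold run   # on Skaffold 0.41 this fails with a label/immutable-field error
```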
With 0.41 I'm experiencing the same issue now. The first deploy worked, but the second deployment fails with this error. I'm using Azure AKS. I didn't have this issue on Skaffold 0.30, since it didn't add the run-id label. I noticed that the issue only occurs if the previous deployment started successfully; if it is stuck in, e.g., ImagePullBackOff, then a new deployment is allowed, even if the labels have changed.
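The error in question is the Kubernetes immutable-field rejection: once a run-scoped label ends up in a Deployment's selector, a later apply with a different value is refused outright. A minimal sketch of how it arises (the manifest names, deployment name, and label values are hypothetical):

```bash
# First deploy: the selector contains skaffold.dev/run-id=aaa, accepted.
kubectl apply -f my-app-run-aaa.yaml

# Redeploy with skaffold.dev/run-id=bbb in the selector, rejected:
kubectl apply -f my-app-run-bbb.yaml
# The Deployment "my-app" is invalid: spec.selector:
#   Invalid value: ... : field is immutable
```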
I am experiencing this too on the latest v1.1.0.
I have reproduced this on v1.8.0 and v1.9.1 too. I think the problem is the labels that Skaffold is creating:
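The label listing did not survive in the comment above, but the run-scoped labels in question can be inspected on any Skaffold-deployed object. A sketch, assuming a Deployment named `my-app`; the exact label set varies by Skaffold version:

```bash
# Show the labels Skaffold stamped onto a deployed object.
kubectl get deployment my-app -o jsonpath='{.metadata.labels}'
# Example output (the run-id is a placeholder UUID; some older versions
# also embedded the Skaffold version in app.kubernetes.io/managed-by):
#   {"app.kubernetes.io/managed-by":"skaffold",
#    "skaffold.dev/run-id":"0b2bd62d-6bbc-4516-a144-123456789abc"}
```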
This seems quite odd to me. It's not possible to apply a modification of a Deployment's selector labels, for reasons documented in kubernetes/kubernetes#26202. (In summary, modifying label selectors would break the Deployment's ability to clean up its old ReplicaSets, I believe.) So in order to apply such a Deployment change you need to delete and recreate it (e.g. with `kubectl replace --force`). I may be missing something fundamental, but I think the current Skaffold behaviour of putting a bunch of mutable state in the labels is broken; there have been similar bugs in many Helm templates as well (see helm/helm#1390 for example). My suggestions would be:
While my pods can be restarted at any time, I'd rather not have my RabbitMQ pod bounce on every deploy when there are no changes to the manifest. I'm guessing that 2. is working as designed, i.e. the run-id label is expected to change on every run. Would a better "out of the box" behaviour be to use a fixed run-id? (Note: I think you could use the same fix as #2752 for the original issue, but it seems hacky to have to override internal labels to get working deploys.)
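For reference, the override workaround alluded to above can be sketched as follows. The assumption here is that Skaffold's `-l/--label` flag, which attaches custom labels to deployed objects, also takes precedence over the internal run-id label; that is the mechanism the #2752 fix relies on, and it may depend on the Skaffold version:

```bash
# Pin the run-id to a constant so consecutive deploys carry
# identical labels (the value "static" is an arbitrary choice).
skaffold run -l skaffold.dev/run-id=static
```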
Note that even if you override the run-id label, …
Seconding this very much. In addition, when migrating to Skaffold from a previous kustomize-based setup (for example), one needs to rename services to avoid the immutable-field errors. Using an annotation instead of a label would make migration to Skaffold much easier.
If I now try to update this deployment with another version of Skaffold, I get an immutable-field exception; see kubernetes/kubernetes#26202.
Downgrading Skaffold to the version that was used for the original deploy does resolve the issue.