Update pods using podman play kube file.yml #4478
Comments
@yangm97 So the behaviour you would expect would be to replace the existing pod with a new pod, correct?
If we're replacing, I would think we'd want a flag to make it explicit.
SGTM
@rhatdan I think it would be nice to replicate the Kubernetes behavior, if possible, which I believe is updating a pod when a changed spec is applied.
I'm a bit torn on this. It is a departure from the regular behavior. The error message seems pretty clear; is deleting it too big of a leap?
I was looking for an equivalent option here. While I think it's important to replicate k8s behaviors, I also think it's worth replicating some of the Docker behaviors and, by extension, the docker-compose behaviors.
@yegle I think you got me wrong. I'm advocating for updating the pod, to follow the k8s convention when applying a spec. On a side note, if you're new to Kubernetes I recommend "forgetting" the docker/compose/swarm way of doing things for a moment, to ease your learning experience.
This is very conflicting: podman was presented to newcomers as a Docker CLI drop-in replacement (this is evident from the number of issues reporting incompatibility between Docker and Podman). But in many places the tool is trying to align with k8s, like in this issue. Perhaps the libpod project should clearly state the vision of podman, and the relationship between podman/docker/k8s. I'm probably not the only podman user who just assumes they can use podman with all of their knowledge from Docker.
We don't view podman as a replacement for docker-compose. We have other plans for supporting docker-compose.
Right, "Docker Compose" != Docker CLI. Our goal is to replace the Docker CLI for as much as we can, with the exception of docker-compose. There are some other features of Docker that we have either chosen not to implement (links), can't implement because of the lack of a daemon, or feel are bugs in Docker. Another difference can pop up when running rootless, since some things we simply are not allowed to do because of kernel security features that require full root.
This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days.
Currently I miss the "recreate-if-changed" semantic in podman play kube. To be clear: I don't want to recreate the pod every time I run it, only when the yaml file has changed.
That would mean we would have to squirrel away the yaml file, which does not feel like something podman should do, whereas it would be fairly easy for a script to do.
I am going to close this, since I think this should be scripted. Reopen if you disagree.
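The scripted approach described above is easy to sketch. The following is a hypothetical example, not podman functionality: the pod name `app`, the file name `app-pod.yml`, and the checksum-stamp convention are all assumptions. It replays the yaml only when the file's checksum has changed.

```shell
#!/bin/sh
# Hypothetical "recreate-if-changed" wrapper around `podman play kube`.
# Assumes the pod defined in the file is named "app" -- adjust for your spec.

YAML=app-pod.yml
STAMP="$YAML.sha256"

# True (exit 0) when no checksum is stored yet, or the stored one differs.
yaml_changed() {
    yaml=$1
    stamp=$2
    new=$(sha256sum "$yaml" | cut -d' ' -f1)
    [ ! -f "$stamp" ] || [ "$new" != "$(cat "$stamp")" ]
}

replay_if_changed() {
    if yaml_changed "$YAML" "$STAMP"; then
        podman pod rm -f app 2>/dev/null || true   # tear down the old pod, if any
        podman play kube "$YAML"
        sha256sum "$YAML" | cut -d' ' -f1 > "$STAMP"
    fi
}
```

Running `replay_if_changed` from cron or a systemd timer would give roughly the semantics asked for above, with podman itself staying stateless.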
Ok, I think this got a little off track. The purpose of this issue was to provide a way to replicate the k8s behaviour of mutating (not recreating) pods when given a bare pod spec. Since using bare pods isn't seen as a good practice, and podman 2.0 is going to support deployments, perhaps this should be reconsidered there.
I am actually not sure how k8s "mutates" a pod, but under the hood, I am fairly certain the pod is just recreated. There is no way to, for instance, change the immutable config.json for a container once the container has been created in the runtime. I think all the apiserver is doing (possibly with help from kubelet) is checking whether the spec has changed, and if so, creating a new pod with the same name and removing the old one. That's not arguing that it should or should not be handled by podman or a script, but CRI-O certainly doesn't have the capability to mutate a running container, meaning the CRI doesn't need support for it, meaning kubelet shouldn't expect it, and the kube-apiserver can't rely on it. Thus, it must be patched around by someone in kube, and would need to be similarly hacked around here.
I agree, this issue is about
That's right: when the pod template of a deployment changes, Kubernetes does a rolling redeploy of the deployment. The old pods go down one by one and get replaced by new pods from the new deployment.
The new pods don't even have the same names as the old ones. The deployment is able to keep track of them by labels. That's why you define a label selector in the deployment spec.
Right, this is totally implemented by Kubernetes, not the container runtime. Kubernetes keeps a copy of the currently deployed deployment spec and diffs it with the updated one at apply time. That said, it is a feature of Kubernetes that you can do this.
As for a way to find pods associated with a deployment, I opened a PR that adds labels to pods if they are defined in the deployment spec: #7648
Now that #7648 and #7759 are merged, it's easy to find all pods from a deployment and stop them. For any visitors from the future, this is how I do it.
It may even be possible to write a script that does a rolling deployment, but I haven't needed that yet.
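The "find all pods from a deployment and stop them" step could be sketched like this. This is a hypothetical example assuming the deployment spec labels its pods `app=<name>` (relying on the label propagation from #7648); the label key is an assumption, so adjust it to match your spec.

```shell
#!/bin/sh
# Hypothetical helpers: list and stop every pod carrying a given app label.
# Assumes pods were created with `podman play kube` from a deployment spec
# whose template sets the label app=<name>.

pods_for_app() {
    podman pod ps --filter "label=app=$1" --format '{{.Name}}'
}

stop_app_pods() {
    for pod in $(pods_for_app "$1"); do
        podman pod stop "$pod"
    done
}
```

With that in place, `stop_app_pods myapp` followed by `podman play kube myapp.yml` approximates a (non-rolling) redeploy.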
Is it possible to add a parameter similar to "--recreate"? It is a bit difficult for a shell script to read the yaml file.
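Reading the pod name out of the yaml is indeed the awkward part for a shell script. A minimal sketch, assuming the file follows the usual single-document `metadata:` / `name:` layout (it does not handle the full YAML spec, multi-document files, or anchors):

```shell
#!/bin/sh
# Hypothetical helper: pull metadata.name out of a simple pod yaml with awk,
# so a wrapper can `podman pod rm -f` the pod before replaying the file.

pod_name() {
    awk '
        /^metadata:/    { in_meta = 1; next }
        /^[^[:space:]]/ { in_meta = 0 }          # left the metadata block
        in_meta && $1 == "name:" { print $2; exit }
    ' "$1"
}
```

Usage would then be along the lines of `podman pod rm -f "$(pod_name app-pod.yml)"; podman play kube app-pod.yml`.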
Doesn't recreating pods cause issues if the user wants to use systemd units for pods and containers, since container IDs change on each recreation?
It's a needed feature.
I agree, especially since podman is being positioned as a "dev's tool" to play with pods and deployments that can be moved directly over to k8s. As it stands, it's honestly easier to just spin up minikube. Also, the man page on podman's website suggests there's an option for this.
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind feature
Description
Currently, you can create pods using
podman play kube app-pod.yml
but if you try running the same command again you get an error message instead.
kubectl handles updating bare pods, even though it is recommended that you use deployments instead, but that's the subject of another issue.
Steps to reproduce the issue:
podman play kube app-pod.yml
podman play kube app-pod.yml
Describe the results you received:
Describe the results you expected:
Updating pods generated from a kubernetes yaml file.
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
Output of podman info --debug:
Package info (e.g. output of rpm -q podman or apt list podman):
Additional environment details (AWS, VirtualBox, physical, etc.):