Unmarshalling error when using podman exec
#20821
Comments
This is because your 4.8 client talks to a 4.7 server and tries to call a new endpoint that does not exist there. When the server is updated it should work again. Regardless, this must be fixed on the client side: a new client should never call endpoints that do not exist on older server versions.
Commit f48a706 added a new API endpoint to remove exec sessions correctly, and the bindings try to call that endpoint on every exec. Since the client and server need not be the same version, this causes a problem when a new 4.8 client calls an older 4.7 server: the server has no idea about such an endpoint and throws an ugly error. This is a common scenario for podman machine setups. The client does know the server version, so it should make sure not to call that endpoint when the server is older than 4.8. I added an exec test to the machine tests, as this can be reproduced with podman machine (at the moment the VM image does not contain podman 4.8), and it should at least make sure podman exec keeps working for podman machine without regressions. Fixes containers#20821 Signed-off-by: Paul Holzinger <[email protected]>
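A minimal sketch of the version gating described above, assuming a hypothetical helper shouldCallRemoveExec and a 4.8.0 cutoff (the real bindings obtain the negotiated server version from the API connection); it uses github.com/blang/semver/v4, a version library the podman codebase already depends on:

package main

import (
	"fmt"

	"github.com/blang/semver/v4"
)

// removeExecEndpointMinimum is the first server version that exposes the
// exec-session removal endpoint added in commit f48a706.
// The exact cutoff here is an assumption for illustration.
var removeExecEndpointMinimum = semver.MustParse("4.8.0")

// shouldCallRemoveExec gates the new endpoint on the server version,
// so a 4.8 client never calls it against a 4.7 server.
// Hypothetical helper name, not the actual podman bindings API.
func shouldCallRemoveExec(serverVersion string) (bool, error) {
	v, err := semver.ParseTolerant(serverVersion)
	if err != nil {
		return false, fmt.Errorf("parsing server version %q: %w", serverVersion, err)
	}
	return v.GTE(removeExecEndpointMinimum), nil
}

func main() {
	for _, server := range []string{"4.7.2", "4.8.0"} {
		ok, _ := shouldCallRemoveExec(server)
		fmt.Printf("server %s: call remove-exec endpoint = %v\n", server, ok)
	}
}

With a check like this in place, a 4.8 client talking to a 4.7.2 server would simply skip the new removal endpoint instead of failing to unmarshal an error response.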
I guess I have to wait until the fedora package is out of testing and I can update the machine (or it does it automatically).
It will take a bit more time, as you need to wait for it to land in CoreOS, which takes another two weeks; see https://docs.podman.io/en/latest/markdown/podman-machine-init.1.html for how you can update. Thus it is likely much faster for us to merge my client-side fix and release 4.8.1 with it.
@jloewe you could also use
@baude unfortunately, that workaround doesn't work for the existing minikube/kind kubernetes frameworks, which just expect exec to work against the default machine image. Those frameworks expect "N-1" compatibility for podman client/server APIs, as that's standard for kubernetes and its client kubectl.
In the case of KIND, I wouldn't necessarily comment on whether podman should have skew support; that's out of scope for KIND. I could imagine another approach would be packaging podman and the VM implementation together and ensuring they don't skew, but regardless it's not really our concern, speaking for KIND.
Issue Description
When I use podman exec ... for any container, I get the following error:
Steps to reproduce the issue
1. podman run --name httpd -p 8080:80 -d -i -t docker.io/library/httpd
2. podman exec httpd pwd
Describe the results you received
An unexpected error message.
Describe the results you expected
No error message if the command runs correctly.
podman info output
Podman in a container
No
Privileged Or Rootless
None
Upstream Latest Release
Yes
Additional environment details
zsh on macOS
Additional information
When downgrading to v4.7.2 the issue disappeared.