podman run says the container name is already in use but podman ps --all does not show any container with that name #2553
Some more info: my containers.json in /var/lib/containers/storage/overlay-containers has a reference to this container:
{
  "id": "31d772490bd4f63019908d896a8d5d62ce4f7db78162db4c8faab60725dbd4e1",
  "names": [
    "nextcloud"
  ],
  "image": "dbcf87f7f2897ca0763ece1276172605bd18d00565f0b8a86ecfc2341e62a3f4",
  "layer": "5078a913609383e102745769c42090cb62c878780002adf133dfadf3ca9b0e55",
  "metadata": "{\"image-name\":\"docker.io/library/nextcloud:14.0.3\",\"image-id\":\"dbcf87f7f2897ca0763ece1276172605bd18d00565f0b8a86ecfc2341e62a3f4\",\"name\":\"nextcloud\",\"created-at\":1544648833,\"mountlabel\":\"system_u:object_r:container_file_t:s0:c151,c959\"}",
  "created": "2018-12-12T21:07:13.804209323Z"
}
But podman doesn't know about it. podman prune doesn't help either.
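A quick way to cross-check what containers/storage still holds is to read that containers.json directly; a minimal sketch, assuming jq is installed and the default root storage path shown above:
# list container names known to containers/storage, bypassing podman's database
jq -r '.[].names[]' /var/lib/containers/storage/overlay-containers/containers.json
|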
@Zokormazo I'm no podman dev, but maybe try adding
|
That container is probably a relic from a partially failed container delete, or was made by Buildah or CRI-O. You should be able to force its removal, even if we don't see it, with podman rm -f.
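A minimal sketch of that forced removal, using the container name from the report above:
# force removal by name, even though podman ps --all does not list it
podman rm -f nextcloud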
|
All my commands were run as root.
Can't remove it with podman rm -f. |
Oh, you're on 1.0 - damn. We added that in 1.1. If you have Buildah installed, it should be able to remove the container in the meantime - it operates at a lower level than us, and as such can see these containers.
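A sketch of the Buildah route, reusing the container name from earlier in the thread:
# Buildah works directly against containers/storage, so it can see and
# remove containers that podman 1.0's database has lost track of
buildah containers --all
buildah rm nextcloud
|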
Cleaned with podman 1.1.0: https://paste.fedoraproject.org/paste/qIQ9gu0DF6ZtN8fEwG5pYg |
Having the same issue on CentOS 7.6 with podman.x86_64 1.2-2.git3bd528e.el7:
systemd service:
Podman fails at "podman run":
May be related to moby/moby#34198 |
Well, it seems that was fixed in Moby around 2018. For podman, it can be fixed by using a "slave" mount.
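A hedged sketch of that "slave" propagation approach (the path and image here are placeholders, not from this thread):
# mount the volume with "slave" propagation instead of the default
# "rprivate", so host-side mount events propagate into the container
# without propagating back out
podman run -v /mnt/data:/data:slave docker.io/library/alpine ls /data
|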
I have the same issue on Fedora 31 with |
I saw this also yesterday (podman-1.4.4-3.fc30 as nonroot) but cannot reproduce it. The VM is still up, with one "container name already in use" stuck. Can provide login access on request. |
Try a 'podman rm --storage'.
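For reference, a minimal sketch (the container name is a placeholder):
# remove only the containers/storage container, leaving podman's
# database untouched
podman rm --storage mycontainer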
|
That did it. Since this seems to be a common problem, should the podman run error message perhaps be amended to include this hint?
|
The only issue with recommending it unconditionally is that it will quite happily destroy containers from Buildah/CRI-O as well. The overall recommendation works something like this: check CRI-O and Buildah to see if it's a container running there. If it is, we recommend deleting it through those tools.
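A sketch of that check, assuming crictl is available for CRI-O and using a placeholder container name:
# is it a Buildah or CRI-O container? if so, delete it with those tools
buildah containers --all | grep mycontainer
crictl ps --all | grep mycontainer
# only if neither tool claims it:
podman rm --storage mycontainer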
|
Managed to reproduce the issue accidentally by trying to Ctrl-C a container twice:
^CERRO[0014] Error removing container f9512f7b0b731324f5651e92af7e02910bf35b16d3f373d63fb6ebee27c22d32: error removing container f9512f7b0b731324f5651e92af7e02910bf35b16d3f373d63fb6ebee27c22d32 root filesystem: signal: interrupt: "/usr/sbin/zfs zfs destroy -r tank/containers/4834b4aa97d1a48a27f44c718241c2d786349eee9ab66c3d515339402e2ed1c9" =>
ERRO[0014] Error forwarding signal 2 to container f9512f7b0b731324f5651e92af7e02910bf35b16d3f373d63fb6ebee27c22d32: container has already been removed
And I fixed it with an ugly hack:
# zfs create tank/containers/4834b4aa97d1a48a27f44c718241c2d786349eee9ab66c3d515339402e2ed1c9
# podman rm --storage nginx
|
Sounds like a bug with the ZFS driver.
|
You might want to make a new issue for this
|
Also might be better to have this discussion in containers/storage, since podman is just using that library for management of its container images and graph drivers. |
Note: related to #3906 |
I'm having this same issue now, and neither podman rm -f nor podman rm --storage resolves it. |
Does |
both |
@mheon Ideas? Could this be an out-of-sync libpod database? |
@BBBosp If you have removed all containers, you could remove the bolt_state.db:
rm /home/dwalsh/.local/share/containers/storage/libpod/bolt_state.db
This will remove the database but leave your images; the next run of podman will recreate the database.
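A sketch of that sequence for a rootless user (the home path above is from the author's own machine; substitute your own):
# remove any remaining containers first, then drop the libpod database;
# images are untouched, and podman recreates the database on its next run
podman rm --all --force
rm ~/.local/share/containers/storage/libpod/bolt_state.db
podman ps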
@rhatdan I doubt it - transactions should ensure we never do a partial write to the DB. Do you have any pods with that name? |
Please open a fresh issue with the full issue template filled out - this is too in-depth to discuss here. |
Thanks for sharing the great idea; I could clear one such issue.
I just encountered this problem, but it seems like the recommended solution is obsolete?
|
Try |
Thanks, @rhatdan. Same error. I tried a handful of other things too, like using docker. Always the same error. I have a script that ran podman in a loop and piped its output to another program. I updated the script so that it only calls podman once before the start of the loop. This appears to have solved my problem. While podman is called in quick succession elsewhere in the script in several places, it appears that only the loop and/or the pipe were problematic.
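A hypothetical sketch of that restructuring (the command, format string, and downstream program are illustrative, not from the original script):
# before: podman was invoked and piped on every iteration of the loop
# after: call podman once up front, then loop over the captured output
names=$(podman ps --all --format '{{.Names}}')
for name in $names; do
  process "$name"   # placeholder for the downstream program
done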
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
I have this bug after a power outage. podman ps --all | grep nextcloud has no output.
Steps to reproduce the issue:
I don't know how to reproduce it; it appeared after a power outage and its abrupt shutdown.
Output of podman version:
Output of podman info --debug:
Additional environment details (AWS, VirtualBox, physical, etc.):
Bare metal, Fedora 29
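For anyone hitting the same symptom on a newer podman, containers that exist only in containers/storage can now be listed directly; a sketch, assuming podman 3.0 or later:
# --external (formerly --storage) also lists containers that exist only
# in containers/storage, e.g. leftovers from an abrupt shutdown
podman ps --all --external | grep nextcloud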