
Volume filters are exclusive rather than inclusive #6765

Closed
maybe-sybr opened this issue Jun 25, 2020 · 8 comments · Fixed by #8232
Labels:
- Good First Issue: This issue would be a good issue for a first time contributor to undertake.
- In Progress: This issue is actively being worked by the assignee, please do not work on this at this time.
- kind/bug: Categorizes issue or PR as related to a bug.
- locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@maybe-sybr
Contributor

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug
Arguable?

Description

In #6756, I pointed out that volume filters appear to be exclusive (AND) rather than inclusive (OR), which makes for confusing results when using APIv2 to, e.g., retrieve multiple volumes by name.

Steps to reproduce the issue:

  1. Make a request to APIv2 for volumes and provide a name filter with multiple entries. Notice that no results are returned.

Describe the results you received:

No results are returned, rather than one result for each volume whose name matches a specified filter value.

Describe the results you expected:

One result for each volume whose name matches any of the specified names (here, both `foo` and `bar`).

Additional information you deem important (e.g. issue happens only occasionally):

Shell transcript:

sh-5.0$ podman volume ls
DRIVER      VOLUME NAME
local       bar
local       foo
sh-5.0$ python -c 'import urllib.parse; import json; print(urllib.parse.quote(json.dumps({"name": ["foo", "bar"]})))'
%7B%22name%22%3A%20%5B%22foo%22%2C%20%22bar%22%5D%7D
sh-5.0$ curl --unix-socket /tmp/podman.sock -H "Content-Type: application/json" 'http://unixsocket/v1.40/volumes?filters=%7B%22name%22%3A%20%5B%22foo%22%2C%20%22bar%22%5D%7D' | jq
{
  "Volumes": [],
  "Warnings": []
}
sh-5.0$ curl --unix-socket /tmp/podman.sock -H "Content-Type: application/json" 'http://unixsocket/v1.40/volumes?filters=%7B%22name%22%3A%20%5B%22foo%22%5D%7D' | jq
{
  "Volumes": [
    {
      "CreatedAt": "2020-06-25T10:29:50+10:00",
      "Driver": "local",
      "Labels": {},
      "Mountpoint": "/home/user/.local/share/containers/storage/volumes/foo/_data",
      "Name": "foo",
      "Options": {},
      "Scope": "local"
    }
  ],
  "Warnings": []
}
sh-5.0$ curl --unix-socket /tmp/podman.sock -H "Content-Type: application/json" 'http://unixsocket/v1.40/volumes?filters=%7B%22name%22%3A%20%5B%22bar%22%5D%7D' | jq
{
  "Volumes": [
    {
      "CreatedAt": "2020-06-25T10:29:52+10:00",
      "Driver": "local",
      "Labels": {},
      "Mountpoint": "/home/user/.local/share/containers/storage/volumes/bar/_data",
      "Name": "bar",
      "Options": {},
      "Scope": "local"
    }
  ],
  "Warnings": []
}
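The `filters` query parameter in the transcript above is a JSON map that gets URL-encoded, as the `python -c` one-liner shows. A minimal helper that builds such a URL (the `volumes_url` name is hypothetical; the base URL and filter values are taken from the transcript):

```python
import json
import urllib.parse

def volumes_url(base, filters):
    # Docker-style filters are a JSON map of filter name -> list of values,
    # URL-encoded into a single "filters" query parameter.
    query = urllib.parse.urlencode(
        {"filters": json.dumps(filters)},
        quote_via=urllib.parse.quote,  # encode spaces as %20, as in the transcript
    )
    return f"{base}/volumes?{query}"

url = volumes_url("http://unixsocket/v1.40", {"name": ["foo", "bar"]})
# url can then be requested with curl --unix-socket as shown above
```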

Output of podman version:

sh-5.0$ podman version
Version:      2.0.0
API Version:  1
Go Version:   go1.14.3
Built:        Thu Jan  1 10:00:00 1970
OS/Arch:      linux/amd64

Output of podman info --debug:

sh-5.0$ podman info --debug
host:
  arch: amd64
  buildahVersion: 1.15.0
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.18-1.fc32.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.18, commit: 6e8799f576f11f902cd8a8d8b45b2b2caf636a85'
  cpus: 8
  distribution:
    distribution: fedora
    version: "32"
  eventLogger: file
  hostname: host
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 31337
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 5.6.14-300.fc32.x86_64
  linkmode: dynamic
  memFree: 1704017920
  memTotal: 8235126784
  ociRuntime:
    name: crun
    package: crun-0.13-2.fc32.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.13
      commit: e79e4de4ac16da0ce48777afb72c6241de870525
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1001/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.1-1.fc32.x86_64
    version: |-
      slirp4netns version 1.1.1
      commit: bbf27c5acd4356edb97fa639b4e15e0cd56a39d5
      libslirp: 4.2.0
      SLIRP_CONFIG_VERSION_MAX: 2
  swapFree: 7563182080
  swapTotal: 8392798208
  uptime: 49h 38m 43.69s (Approximately 2.04 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/user/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.0.0-1.fc32.x86_64
      Version: |-
        fusermount3 version: 3.9.1
        fuse-overlayfs: version 1.0.0
        FUSE library version 3.9.1
        using FUSE kernel interface version 7.31
  graphRoot: /home/user/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 51
  runRoot: /run/user/1001/containers
  volumePath: /home/user/.local/share/containers/storage/volumes
version:
  APIVersion: 1
  Built: 0
  BuiltTime: Thu Jan  1 10:00:00 1970
  GitCommit: ""
  GoVersion: go1.14.3
  OsArch: linux/amd64
  Version: 2.0.0

Package info (e.g. output of rpm -q podman or apt list podman):

sh-5.0$ rpm -q podman
podman-2.0.0-2.fc32.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 25, 2020
@maybe-sybr
Contributor Author

So we don't lose context on the discussion, from: #6756 (comment)

I briefly tested this against Docker on another machine and it looks like it supports multiple values per filter and combines them inclusively. So you can (weirdly) provide dangling=["true", "false"] to get all volumes.
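The difference between the two behaviours can be sketched in a few lines of Python (an illustration of the semantics only, not Podman's implementation; matching is simplified to exact name equality):

```python
def matches_any(name, wanted):
    # Inclusive (OR) semantics, as Docker behaves: the volume passes
    # if it matches ANY of the values given for the filter key.
    return any(name == w for w in wanted)

def matches_all(name, wanted):
    # Exclusive (AND) semantics, the behaviour reported here: the volume
    # must match EVERY value, so two different names can never both match.
    return all(name == w for w in wanted)

volumes = ["foo", "bar"]
inclusive = [v for v in volumes if matches_any(v, ["foo", "bar"])]   # both volumes
exclusive = [v for v in volumes if matches_all(v, ["foo", "bar"])]   # empty
```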

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@rhatdan
Member

rhatdan commented Jul 27, 2020

@mheon Can we close this issue?

@mheon
Member

mheon commented Jul 27, 2020

No, this is still an issue

@mheon
Member

mheon commented Sep 8, 2020

@ashley-cui PTAL

@rhatdan rhatdan added the Good First Issue This issue would be a good issue for a first time contributor to undertake. label Oct 5, 2020
@rhatdan
Member

rhatdan commented Oct 5, 2020

@ashley-cui Any movement on this?

ashley-cui added a commit to ashley-cui/podman that referenced this issue Nov 3, 2020
When using multiple filters, return a volume that matches any one of the used filters, rather than matching both of the filters.
This is for compatibility with docker's cli, and more importantly, the apiv2 compat endpoint
Closes containers#6765

Signed-off-by: Ashley Cui <[email protected]>
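Per the commit message, the fix makes a volume pass when it matches any one of the supplied filters. A rough sketch of that union semantics (hypothetical volume records and helper name; not the actual Go code in the referenced PR):

```python
def volume_matches(volume, filters):
    # Union semantics per the fix: the volume passes if it satisfies
    # ANY of the supplied filters (each filter key maps to allowed values).
    return any(volume.get(key) in values for key, values in filters.items())

volumes = [
    {"name": "foo", "driver": "local"},
    {"name": "bar", "driver": "local"},
]
# "foo" satisfies the name filter; "bar" satisfies neither filter.
matched = [v for v in volumes if volume_matches(v, {"name": ["foo"], "driver": ["nfs"]})]
```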
@ashley-cui ashley-cui added In Progress This issue is actively being worked by the assignee, please do not work on this at this time. and removed stale-issue labels Nov 3, 2020
@maybe-sybr
Contributor Author

Thanks for working on this @ashley-cui :)

@ashley-cui
Member

of course! :)

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 22, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023