lookup and query behavior differ when using resource_name vs label_selector #147

Closed
jmazzitelli opened this issue Jun 23, 2021 · 9 comments · Fixed by #155
Labels
has_pr, jira, type/bug (Something isn't working), verified (The issue is reproduced)

Comments

@jmazzitelli
Contributor

jmazzitelli commented Jun 23, 2021

Run this playbook:

- hosts: localhost
  gather_facts: no
  vars:
    ansible_python_interpreter: python3
  collections:
  - kubernetes.core
  tasks:
  - k8s:
      state: present
      definition:
        apiVersion: v1
        kind: Namespace
        metadata:
          name: testns
          labels:
            customLabel: test
  - assert:
      that: query('kubernetes.core.k8s', kind='Namespace', resource_name='testns')[0].metadata.name == "testns"
  - assert:
      that: query('kubernetes.core.k8s', kind='Namespace', label_selector='customLabel=test')[0].metadata.name == "testns"
    ignore_errors: yes

  - assert:
      that: lookup('kubernetes.core.k8s', kind='Namespace', resource_name='testns')[0].metadata.name == "testns"
    ignore_errors: yes
  - assert:
      that: lookup('kubernetes.core.k8s', kind='Namespace', label_selector='customLabel=test')[0].metadata.name == "testns"

Here I have a pair of query invocations and a pair of lookup invocations. In each pair, one looks up the namespace by name (using resource_name) and the other looks it up by label (using label_selector).

Each assert verifies the same thing: that the first element of the returned list has metadata.name == "testns" (i.e., that the results include the namespace resource as the first element of the list).

The asserts with "ignore_errors: yes" are the ones that fail (but I don't think they should).

When using query, looking up by resource_name works, but an error occurs with label_selector.
With lookup it is the reverse: looking up by label_selector works, but an error occurs with resource_name.

Here's the full results output:

PLAY [localhost] ***********************************************************************************************************************

TASK [k8s] *****************************************************************************************************************************
ok: [localhost]

TASK [assert] **************************************************************************************************************************
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [assert] **************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'query('kubernetes.core.k8s', kind='Namespace', label_selector='customLabel=test')[0].metadata.name == \"testns\"' failed. The error was: error while evaluating conditional (query('kubernetes.core.k8s', kind='Namespace', label_selector='customLabel=test')[0].metadata.name == \"testns\"): 'list object' has no attribute 'metadata'"}
...ignoring

TASK [assert] **************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'lookup('kubernetes.core.k8s', kind='Namespace', resource_name='testns')[0].metadata.name == \"testns\"' failed. The error was: error while evaluating conditional (lookup('kubernetes.core.k8s', kind='Namespace', resource_name='testns')[0].metadata.name == \"testns\"): dict object has no element 0"}
...ignoring

TASK [assert] **************************************************************************************************************************
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

PLAY RECAP *****************************************************************************************************************************
localhost                  : ok=5    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   

Ansible version:

$ ansible --version
ansible 2.9.9
  config file = None
  configured module search path = ['/home/jmazzite/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/jmazzite/source/ansible/lib/ansible
  executable location = /home/jmazzite/source/ansible/bin/ansible
  python version = 3.6.8 (default, Mar 18 2021, 08:58:41) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]

Collection version:

$ cat ~/.ansible/collections/ansible_collections/kubernetes/core/MANIFEST.json | grep version
  "version": "2.0.2",
@jmazzitelli
Contributor Author

jmazzitelli commented Jun 23, 2021

Here is what I see:

a. query when using label_selector returns a list-in-a-list (works incorrectly)
b. query when using resource_name returns a list (works correctly)
c. lookup when using label_selector returns a list (works incorrectly)
d. lookup when using resource_name returns a single item (works correctly)
d1. lookup when using resource_name but no resource exists with that name returns an empty list (works incorrectly)

Reading https://docs.ansible.com/ansible/latest/plugins/lookup.html#forcing-lookups-to-return-lists-query-and-wantlist-true I conclude:

  1. query should always return a list - I do not believe it should be returning a list-in-a-list; multiple resources found via label_selector should just be included in a simple list. This means a. above is wrong and b. is correct.
  2. lookup should not return a list unless "wantlist=True". This means c. above is wrong, d. is correct, and d1. is wrong (see the sketch below).
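
In other words, under that reading of the docs, assertions like the following should all pass (a hypothetical sketch, not part of the playbook above):

- assert:
    that:
    # query should yield a flat list of matching resources
    - query('kubernetes.core.k8s', kind='Namespace', label_selector='customLabel=test')[0].metadata.name == "testns"
    # lookup with wantlist=True should yield the same flat list
    - lookup('kubernetes.core.k8s', kind='Namespace', resource_name='testns', wantlist=True)[0].metadata.name == "testns"
    # a plain lookup by name should yield the single matching resource
    - lookup('kubernetes.core.k8s', kind='Namespace', resource_name='testns').metadata.name == "testns"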

I used this playbook to see this:

- hosts: localhost
  gather_facts: no
  vars:
    ansible_python_interpreter: python3
  collections:
  - kubernetes.core
  tasks:
  - k8s:
      state: present
      definition:
        apiVersion: v1
        kind: Namespace
        metadata:
          name: testns
          labels:
            customLabel: test
  - name: query with label_selector
    debug:
      msg: "{{ query('kubernetes.core.k8s', kind='Namespace', label_selector='customLabel=test') }}"
  - name: query with resource_name
    debug:
      msg: "{{ query('kubernetes.core.k8s', kind='Namespace', resource_name='testns') }}"
  - name: lookup with label_selector
    debug:
      msg: "{{ lookup('kubernetes.core.k8s', kind='Namespace', label_selector='customLabel=test') }}"
  - name: lookup with resource_name
    debug:
      msg: "{{ lookup('kubernetes.core.k8s', kind='Namespace', resource_name='testns') }}"
  - name: lookup with resource_name but resource does not exist
    debug:
      msg: "{{ lookup('kubernetes.core.k8s', kind='Namespace', resource_name='does-not-exist') }}"

Here are the results I see:

TASK [query with label_selector] *******************************************************************************************************
ok: [localhost] => {
    "msg": [
        [
            {
                "apiVersion": "v1",
                "kind": "Namespace",
                "metadata": {
                    "creationTimestamp": "2021-06-23T21:06:21Z",
                    "labels": {
                        "customLabel": "test"
                    },
                    "managedFields": [
                        {
                            "apiVersion": "v1",
                            "fieldsType": "FieldsV1",
                            "fieldsV1": {
                                "f:status": {
                                    "f:phase": {}
                                }
                            },
                            "manager": "kubectl-create",
                            "operation": "Update",
                            "time": "2021-06-23T21:06:21Z"
                        },
                        {
                            "apiVersion": "v1",
                            "fieldsType": "FieldsV1",
                            "fieldsV1": {
                                "f:metadata": {
                                    "f:labels": {
                                        ".": {},
                                        "f:customLabel": {}
                                    }
                                }
                            },
                            "manager": "kubectl-edit",
                            "operation": "Update",
                            "time": "2021-06-23T21:06:36Z"
                        }
                    ],
                    "name": "testns",
                    "resourceVersion": "65406",
                    "uid": "3da963e0-0358-4eba-8fc1-c10b9585717f"
                },
                "spec": {
                    "finalizers": [
                        "kubernetes"
                    ]
                },
                "status": {
                    "phase": "Active"
                }
            }
        ]
    ]
}

TASK [query with resource_name] ********************************************************************************************************
ok: [localhost] => {
    "msg": [
        {
            "apiVersion": "v1",
            "kind": "Namespace",
            "metadata": {
                "creationTimestamp": "2021-06-23T21:06:21Z",
                "labels": {
                    "customLabel": "test"
                },
                "managedFields": [
                    {
                        "apiVersion": "v1",
                        "fieldsType": "FieldsV1",
                        "fieldsV1": {
                            "f:status": {
                                "f:phase": {}
                            }
                        },
                        "manager": "kubectl-create",
                        "operation": "Update",
                        "time": "2021-06-23T21:06:21Z"
                    },
                    {
                        "apiVersion": "v1",
                        "fieldsType": "FieldsV1",
                        "fieldsV1": {
                            "f:metadata": {
                                "f:labels": {
                                    ".": {},
                                    "f:customLabel": {}
                                }
                            }
                        },
                        "manager": "kubectl-edit",
                        "operation": "Update",
                        "time": "2021-06-23T21:06:36Z"
                    }
                ],
                "name": "testns",
                "resourceVersion": "65406",
                "uid": "3da963e0-0358-4eba-8fc1-c10b9585717f"
            },
            "spec": {
                "finalizers": [
                    "kubernetes"
                ]
            },
            "status": {
                "phase": "Active"
            }
        }
    ]
}

TASK [lookup with label_selector] ******************************************************************************************************
ok: [localhost] => {
    "msg": [
        {
            "apiVersion": "v1",
            "kind": "Namespace",
            "metadata": {
                "creationTimestamp": "2021-06-23T21:06:21Z",
                "labels": {
                    "customLabel": "test"
                },
                "managedFields": [
                    {
                        "apiVersion": "v1",
                        "fieldsType": "FieldsV1",
                        "fieldsV1": {
                            "f:status": {
                                "f:phase": {}
                            }
                        },
                        "manager": "kubectl-create",
                        "operation": "Update",
                        "time": "2021-06-23T21:06:21Z"
                    },
                    {
                        "apiVersion": "v1",
                        "fieldsType": "FieldsV1",
                        "fieldsV1": {
                            "f:metadata": {
                                "f:labels": {
                                    ".": {},
                                    "f:customLabel": {}
                                }
                            }
                        },
                        "manager": "kubectl-edit",
                        "operation": "Update",
                        "time": "2021-06-23T21:06:36Z"
                    }
                ],
                "name": "testns",
                "resourceVersion": "65406",
                "uid": "3da963e0-0358-4eba-8fc1-c10b9585717f"
            },
            "spec": {
                "finalizers": [
                    "kubernetes"
                ]
            },
            "status": {
                "phase": "Active"
            }
        }
    ]
}

TASK [lookup with resource_name] *******************************************************************************************************
ok: [localhost] => {
    "msg": {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            "creationTimestamp": "2021-06-23T21:06:21Z",
            "labels": {
                "customLabel": "test"
            },
            "managedFields": [
                {
                    "apiVersion": "v1",
                    "fieldsType": "FieldsV1",
                    "fieldsV1": {
                        "f:status": {
                            "f:phase": {}
                        }
                    },
                    "manager": "kubectl-create",
                    "operation": "Update",
                    "time": "2021-06-23T21:06:21Z"
                },
                {
                    "apiVersion": "v1",
                    "fieldsType": "FieldsV1",
                    "fieldsV1": {
                        "f:metadata": {
                            "f:labels": {
                                ".": {},
                                "f:customLabel": {}
                            }
                        }
                    },
                    "manager": "kubectl-edit",
                    "operation": "Update",
                    "time": "2021-06-23T21:06:36Z"
                }
            ],
            "name": "testns",
            "resourceVersion": "65406",
            "uid": "3da963e0-0358-4eba-8fc1-c10b9585717f"
        },
        "spec": {
            "finalizers": [
                "kubernetes"
            ]
        },
        "status": {
            "phase": "Active"
        }
    }
}

TASK [lookup with resource_name but resource does not exist] ***************************************************************************
ok: [localhost] => {
    "msg": []
}

@jmazzitelli
Contributor Author

And I just realized I have code that depends upon this odd (probably incorrect) behavior - and I do not know how to re-implement it based on how I think it should work after this issue is "fixed". So maybe we should not fix this. LOL

Background: "Ingress" kind has changed in newer versions of kubernetes. If you use v1beta1, you will see warnings like this: [WARNING] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress. My ansible playbook needs to support kubernetes clusters v1.18 as well as v1.22. Thus, the ansible playbook needs to determine if v1 is available or not.

To see if Ingress v1 is available or not, I have this kind of code (this example is in a template, but similar lookups are done elsewhere, too):

apiVersion: "networking.k8s.io/{{ 'v1' if (lookup('kubernetes.core.k8s', kind='Ingress', api_version='networking.k8s.io/v1', errors='ignore') is iterable) else 'v1beta1' }}"
kind: Ingress
...
spec:
...
{% if lookup('kubernetes.core.k8s', kind='Ingress', api_version='networking.k8s.io/v1', errors='ignore') is iterable %}
        pathType: Prefix
        backend:
          service:
            name: {{ kiali_vars.deployment.instance_name }}
            port:
              number: {{ kiali_vars.server.port }}
{% else %}
...

This utilizes the "probably incorrect" behavior of lookup. If the cluster does not have that api_version at all, the lookup raises an error that is ignored and results in an empty string (""). But if the API exists (even if there are no Ingress resources), the lookup returns a list, which is iterable.

This can be made generic: it is a way to check for the presence of any API in a cluster (it doesn't have to be Ingress).

I do not know how to do this kind of check otherwise.

Here's a simple playbook that illustrates the behavior that currently happens with lookup. I think this is really incorrect because, according to the docs, lookup should always return a string. I can't switch to query because query (according to the docs) always results in an iterable array. The trick I use above is "if an iterable array is returned, the API exists; if a string is returned, the API does not exist". If this issue is fixed, I do not think this trick will work, and we would need some other way to perform such an API existence check:

- hosts: localhost
  gather_facts: no
  vars:
    ansible_python_interpreter: python3
  collections:
  - kubernetes.core
  tasks:
  - assert:
      that:
      -  lookup('kubernetes.core.k8s', kind='Ingress', api_version='networking.k8s.io/v1', errors='ignore') is iterable
      -  lookup('kubernetes.core.k8s', kind='Ingress', api_version='networking.k8s.io/v1_INVALID', errors='ignore') is not iterable

@abikouo
Contributor

abikouo commented Jun 24, 2021

Hi @jmazzitelli

Thanks for reporting this issue.
You can use the k8s_cluster_info module to determine whether the cluster has the v1 API version of Ingress; here is an example:

- k8s_cluster_info:
  register: cluster_info
    
- set_fact:
    api_version: "networking.k8s.io/v1beta1"

- set_fact:
    api_version: "networking.k8s.io/v1"
  when: '"networking.k8s.io/v1" in cluster_info.apis'

@abikouo added the needs_info (Needs additional information from original reporter) label on Jun 24, 2021
@gravesm
Member

gravesm commented Jun 25, 2021

@jmazzitelli Thanks for reporting this. I think you are mostly correct. The only points where I would argue otherwise are that lookup without wantlist=True may return a list, and that a lookup or query that finds nothing should return an empty list. Basically, if you need to ensure you get a list, use either query or lookup with wantlist=True. We're looking into fixing this to provide more consistent behavior and better documentation.

In most cases I would generally recommend using k8s_info or k8s_cluster_info instead. The info modules tend to be more readable IMO, offer more functionality (like waiting, for example), and most importantly run on the managed node rather than the controller. We'll still support the lookup plugin, of course, as there may be some cases where it's the better option.
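
For example, a rough k8s_info equivalent of the label-based lookup above might look like this (a sketch; note that the module takes label_selectors as a list and returns the matches under resources):

- name: Find the namespace by label on the managed node
  kubernetes.core.k8s_info:
    kind: Namespace
    label_selectors:
    - customLabel=test
  register: ns_info

- assert:
    that: ns_info.resources[0].metadata.name == "testns"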

Akasurde added a commit to Akasurde/kubernetes.core that referenced this issue Jun 28, 2021
To provide a consistent interface for the k8s lookup plugin,
add a recommendation to use `query` or `lookup` with `wantlist=True`.

Fixes: ansible-collections#147

Signed-off-by: Abhijeet Kasurde <[email protected]>
@Akasurde
Member

@jmazzitelli I updated the code and documentation to match query and wantlist=True expectations. Could you please check and let me know if it works for you? Thanks.

@Akasurde Akasurde self-assigned this Jun 28, 2021
@Akasurde added the type/bug (Something isn't working), has_pr, verified (The issue is reproduced), and jira labels and removed the needs_info (Needs additional information from original reporter) label on Jun 28, 2021
@jmazzitelli
Contributor Author

@Akasurde what is the behavior now if I use errors='ignore'? Perhaps you should add some tests that use that, as well?

This was my only concern regarding backwards-compatibility issues (at least for the code I have).

In other words, what will these two return now?

lookup('kubernetes.core.k8s', kind='Ingress', api_version='networking.k8s.io/v1beta1', errors='ignore')
lookup('kubernetes.core.k8s', kind='Ingress', api_version='networking.k8s.io/vINVALID', errors='ignore')

The alternative mentioned above in #147 (comment) would work; however, I'm concerned about the amount of time k8s_cluster_info takes - it seems to take at least several seconds to complete. It isn't "horrible", but it is something I would prefer to avoid if possible. Also, it would mean people would need to refactor their code, since the lookup behavior would change when using errors='ignore'. Again, it isn't horrible, but if it is avoidable, that's best.
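
For example, a minimal test might simply print what each call returns (a hypothetical sketch; it makes no assumption about the returned shapes):

- name: lookup an existing API version with errors=ignore
  debug:
    msg: "{{ lookup('kubernetes.core.k8s', kind='Ingress', api_version='networking.k8s.io/v1beta1', errors='ignore') }}"

- name: lookup a nonexistent API version with errors=ignore
  debug:
    msg: "{{ lookup('kubernetes.core.k8s', kind='Ingress', api_version='networking.k8s.io/vINVALID', errors='ignore') }}"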

@Akasurde
Member

Thanks for the update on errors. I will check the errors=ignore behavior and get back.

@abikouo
Contributor

abikouo commented Jun 28, 2021


The response time for k8s_cluster_info could be reduced if we add some filtering options, like kind or any other relevant parameter.

Akasurde added a commit to Akasurde/kubernetes.core that referenced this issue Jun 29, 2021
@Akasurde
Member

@jmazzitelli I checked; errors=ignore works as expected.

Akasurde added a commit to Akasurde/kubernetes.core that referenced this issue Jun 29, 2021
jmazzitelli added a commit to jmazzitelli/kiali-operator that referenced this issue Oct 8, 2021
jmazzitelli added a commit to jmazzitelli/kiali-operator that referenced this issue Oct 8, 2021
jmazzitelli added a commit to jmazzitelli/kiali-operator that referenced this issue Oct 8, 2021
jmazzitelli added a commit to jmazzitelli/kiali-operator that referenced this issue Oct 8, 2021
jmazzitelli added a commit to jmazzitelli/kiali-operator that referenced this issue Oct 9, 2021
jmazzitelli added a commit to jmazzitelli/kiali-operator that referenced this issue Oct 9, 2021
jmazzitelli added a commit to kiali/kiali-operator that referenced this issue Oct 11, 2021
jmazzitelli added a commit to kiali/kiali-operator that referenced this issue Oct 11, 2021