CRD status is not deployed #43

Open
leonp-c opened this issue Sep 2, 2024 · 1 comment
Labels
feature-request Requested new feature of Hikaru

Comments


leonp-c commented Sep 2, 2024

What happened:
Registering a CRD with:

subresources:
  scale:
    labelSelectorPath: .status.selector
    specReplicasPath: .spec.replicas
    statusReplicasPath: .status.replicas
  status: {}

does not register the status subresource in Kubernetes. After checking from the command line with kubectl get crd some.custom.crd.ai -o yaml, the resulting YAML is:

subresources:
  scale:
    labelSelectorPath: .status.selector
    specReplicasPath: .spec.replicas
    statusReplicasPath: .status.replicas

status is missing from the deployed resource.
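For reference, a minimal sketch of this kind of registration with hikaru (not the exact code from this report). It assumes hikaru's generated apiextensions classes (CustomResourceDefinition, CustomResourceSubresources, CustomResourceSubresourceStatus, etc.) from the rel_1_26 model package and the high-level create() method; the group, plural, and kind names are hypothetical placeholders, so adjust to your install:

```python
from kubernetes import config
# Assumption: hikaru 1.3.0 with a rel_1_26 model package; the class names
# below mirror the apiextensions.k8s.io/v1 API and may differ per release.
from hikaru.model.rel_1_26 import (
    CustomResourceDefinition, CustomResourceDefinitionSpec,
    CustomResourceDefinitionNames, CustomResourceDefinitionVersion,
    CustomResourceValidation, JSONSchemaProps,
    CustomResourceSubresources, CustomResourceSubresourceScale,
    CustomResourceSubresourceStatus, ObjectMeta,
)

config.load_kube_config()

# All names here (group, plural, kind) are hypothetical placeholders.
crd = CustomResourceDefinition(
    apiVersion="apiextensions.k8s.io/v1",
    kind="CustomResourceDefinition",
    metadata=ObjectMeta(name="somecrds.custom.crd.ai"),
    spec=CustomResourceDefinitionSpec(
        group="custom.crd.ai",
        scope="Namespaced",
        names=CustomResourceDefinitionNames(
            plural="somecrds", singular="somecrd", kind="SomeCrd"),
        versions=[CustomResourceDefinitionVersion(
            name="v1", served=True, storage=True,
            schema=CustomResourceValidation(
                openAPIV3Schema=JSONSchemaProps(type="object")),
            subresources=CustomResourceSubresources(
                scale=CustomResourceSubresourceScale(
                    labelSelectorPath=".status.selector",
                    specReplicasPath=".spec.replicas",
                    statusReplicasPath=".status.replicas"),
                # Serializes to {} and is currently dropped on the way out.
                status=CustomResourceSubresourceStatus()),
        )],
    ),
)
crd.create()  # the CRD that lands in the cluster has no subresources.status
```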

What you expected to happen:
The status subresource should exist so that this call (using the kubernetes package) works:

custom_objects_api.get_namespaced_custom_object(group=self.group, version=self.version, namespace=self.namespace, plural=self.plural, name=self.name)
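In self-contained form, the read in question looks like this; the group, version, namespace, plural, and name values are hypothetical placeholders:

```python
from kubernetes import client, config

config.load_kube_config()
custom_objects_api = client.CustomObjectsApi()

# Placeholder identifiers; substitute the real group/version/plural/name.
obj = custom_objects_api.get_namespaced_custom_object(
    group="custom.crd.ai",
    version="v1",
    namespace="default",
    plural="somecrds",
    name="example",
)
print(obj.get("status"))  # the 'status' content the reporter expects to read
```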

How to reproduce it (as minimally and precisely as possible):
Deploy a CustomResourceDefinition resource whose spec.versions.subresources.status is {} (an empty dict), then check the deployed CRD's YAML:

kubectl get crd some.resource.name.ai -o yaml

Anything else we need to know?:
Tried downgrading to kubernetes 28.1.0 (to match the hikaru 1.3.0 requirement); the result is the same.

Environment:

Kubernetes version (kubectl version):
  Client Version: v1.27.2
  Kustomize Version: v5.0.1
  Server Version: v1.27.14
OS (e.g., MacOS 10.13.6):
Python version (python --version): 3.10.12
Python client version (pip list | grep kubernetes): 30.1.0
hikaru version: 1.3.0

haxsaw added the feature-request label on Sep 4, 2024
haxsaw (Owner) commented Sep 4, 2024

This is similar to #41 where a user wasn't getting a field sent out when it contained an essentially empty value.

This is documented behaviour: when a field is null or contains some kind of empty container object ([], {}), that field is not included in what gets sent to the cluster. The reason for this is that I found early on that some parts of K8s would reject an object that included empty containers or would otherwise behave oddly, so this rule was implemented to keep 'empty' values from going out. There is no discrimination based on field name that results in this treatment; only the field's type drives this logic.
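To illustrate the rule (a simplified sketch of the documented behaviour, not hikaru's actual serialization code): any field whose value is None or an empty container is pruned from the dict before it goes to the cluster, which is why the empty status mapping disappears.

```python
def prune_empty(obj):
    """Recursively drop keys whose values are None or empty containers.

    A simplified illustration of the documented behaviour; the real logic
    lives inside hikaru's serialization code, not in this helper.
    """
    if isinstance(obj, dict):
        pruned = {k: prune_empty(v) for k, v in obj.items()}
        return {k: v for k, v in pruned.items() if v not in (None, [], {})}
    if isinstance(obj, list):
        return [prune_empty(v) for v in obj]
    return obj


subresources = {
    "scale": {
        "labelSelectorPath": ".status.selector",
        "specReplicasPath": ".spec.replicas",
        "statusReplicasPath": ".status.replicas",
    },
    "status": {},  # empty container, so it is dropped
}
print(prune_empty(subresources))  # no 'status' key in the output
```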

The only viable alternative that comes to mind is to provide a user-maintained exclusion list that would name the full path to any field that should be allowed to go out empty. It wouldn't be enough just to give the name of the field, as we can't be sure the name doesn't occur in other objects, and we don't want to mistreat fields that are expected to keep behaving as they have. I suppose the nicest thing would be to provide an alternative to the field() function in dataclasses that would allow you to mark such a field as 'empty-allowed'. That would provide metadata that general inspection code could consult when deciding how to handle 'empty' fields.
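Purely as a sketch of that idea and not an existing hikaru API: a hypothetical wrapper around dataclasses.field() could record an 'empty-allowed' flag in the field metadata, which general inspection code could consult before dropping an empty value.

```python
from dataclasses import dataclass, field, fields


def empty_allowed_field(**kwargs):
    """Hypothetical helper: like dataclasses.field(), but marks the field so
    that serialization keeps it even when its value is empty."""
    metadata = dict(kwargs.pop("metadata", {}) or {})
    metadata["empty_allowed"] = True
    return field(metadata=metadata, **kwargs)


@dataclass
class CustomResourceSubresources:  # illustrative stand-in, not hikaru's class
    scale: dict = field(default_factory=dict)
    status: dict = empty_allowed_field(default_factory=dict)


def should_keep(f, value):
    """Decide whether an 'empty' value still goes out to the cluster."""
    if value in (None, [], {}):
        return f.metadata.get("empty_allowed", False)
    return True


sub = CustomResourceSubresources(scale={"specReplicasPath": ".spec.replicas"})
kept = {f.name: getattr(sub, f.name)
        for f in fields(sub)
        if should_keep(f, getattr(sub, f.name))}
print(kept)  # {'scale': {...}, 'status': {}}: status survives because of the marker
```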

Given the above, I can't see classifying this as a bug, since it works as intended, documented, and needed. I've labelled it as a feature request and will look into how to address it.
