
Fallback specialized kubernetes config to vanilla kubernetes not working #35608

Closed
edeandrea opened this issue Aug 28, 2023 · 7 comments · Fixed by #43862

@edeandrea
Contributor

Hi @radcortez / @Sgitario / @iocanel, should this feature (see #34487 & #34025) have made it into the 3.3.0 release? If so, it does not seem to work fully.

If I specify the following (the kubernetes, openshift, knative, and minikube extensions are all present):

quarkus.kubernetes.part-of=villains-service
quarkus.kubernetes.annotations."app.openshift.io/connects-to"=villains-db,otel-collector
quarkus.kubernetes.env.configmaps=${quarkus.application.name}-config
quarkus.kubernetes.env.secrets=${quarkus.application.name}-config-creds
quarkus.kubernetes.labels.app=${quarkus.application.name}
quarkus.kubernetes.labels.application=${quarkus.kubernetes.part-of}
quarkus.kubernetes.labels.system=quarkus-super-heroes
quarkus.openshift.route.expose=true

then when the OpenShift & Knative resources are generated, they (the OpenShift DeploymentConfig and the Knative Service) do not have the expected annotations or labels. They do, however, have the expected ConfigMaps and Secrets.

The minikube Deployment, though, looks to be OK.

I'm not sure if this is because the resource types for OpenShift/Knative differ from vanilla Kubernetes (a Kubernetes Deployment vs. an OpenShift DeploymentConfig/Knative Service)?

In any event, it does not work as expected.

Originally posted by @edeandrea in #34487 (comment)

@quarkus-bot

quarkus-bot bot commented Aug 28, 2023

/cc @Sgitario (kubernetes), @geoand (knative,kubernetes,minikube,openshift), @iocanel (knative,kubernetes,openshift)

@radcortez
Member

I looked, and the issue is that the fallback mechanism is unsuitable for Map bindings.

The long explanation:

The fallback mechanism is a low-level API, intended to rewrite property names on value lookup. While this works well with fixed property names, a Map's dynamic property names make the mapping behave differently.

When mapping to something like String Foo.bar(), we know precisely that it refers to the property foo.bar, so we look up foo.bar and can fall back with a rule such as name.startsWith("foo.") -> "baz." + name.substring(4).
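
As an illustration, here is a minimal sketch of that fixed-name rule using SmallRye Config's FallbackConfigSourceInterceptor (the foo/baz names come from the example above; the wrapper class is hypothetical):

```java
import io.smallrye.config.FallbackConfigSourceInterceptor;

// A minimal sketch of the fixed-name rule: when a lookup under "foo." misses,
// the interceptor retries the same lookup under "baz.".
public class FooToBazFallback extends FallbackConfigSourceInterceptor {

    public FooToBazFallback() {
        super(name -> name.startsWith("foo.")
                ? "baz." + name.substring(4) // foo.bar -> baz.bar
                : name);                     // anything else: no fallback
    }
}
```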

When dealing with a Map, we don't know which properties are going to be mapped. In something like a Map-valued Foo.bar(), the property foo.one.bar contains the dynamic segment one, so we have to query the Config system for all keys that match foo.*.bar.
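
For illustration, a hypothetical Map binding where the middle segment is dynamic (all names here are made up for the example):

```java
import java.util.Map;

import io.smallrye.config.ConfigMapping;
import io.smallrye.config.WithParentName;

// A hypothetical Map binding: foo.one.bar and foo.two.bar both match, with
// "one" and "two" becoming Map keys. The Config system can only populate this
// by enumerating every known property name that matches foo.*.bar.
@ConfigMapping(prefix = "foo")
public interface Foo {

    @WithParentName
    Map<String, Item> items(); // keys are the dynamic segments

    interface Item {
        String bar(); // the fixed trailing segment
    }
}
```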

In this example, when we query for quarkus.openshift.labels.*, we get zero properties, so there is nothing to map; we don't know where to fall back, so we skip the mapping altogether. It is still possible to achieve the intended behavior: we also have to publish the Kubernetes keys in the OpenShift namespace so they are found and fall back correctly when mapping a Map. This works, but it also causes some additional issues.
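
A rough sketch of what that workaround could look like, assuming SmallRye Config's ConfigSourceInterceptor API (the interceptor class and prefix handling here are illustrative, not the actual Quarkus implementation):

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

import io.smallrye.config.ConfigSourceInterceptor;
import io.smallrye.config.ConfigSourceInterceptorContext;
import io.smallrye.config.ConfigValue;

// Hypothetical interceptor: single-value lookups fall back from the openshift
// namespace to the kubernetes one, and name iteration advertises the
// kubernetes keys under the openshift prefix so Map bindings can find them.
public class OpenshiftFallbackInterceptor implements ConfigSourceInterceptor {

    private static final String K8S = "quarkus.kubernetes.";
    private static final String OCP = "quarkus.openshift.";

    @Override
    public ConfigValue getValue(ConfigSourceInterceptorContext context, String name) {
        ConfigValue value = context.proceed(name);
        if (value == null && name.startsWith(OCP)) {
            // Fall back: quarkus.openshift.x -> quarkus.kubernetes.x
            value = context.proceed(K8S + name.substring(OCP.length()));
        }
        return value;
    }

    @Override
    public Iterator<String> iterateNames(ConfigSourceInterceptorContext context) {
        Set<String> names = new HashSet<>();
        Iterator<String> iterator = context.iterateNames();
        while (iterator.hasNext()) {
            String name = iterator.next();
            names.add(name);
            if (name.startsWith(K8S)) {
                // Re-publish kubernetes keys in the openshift namespace so a
                // query for quarkus.openshift.labels.* actually finds them.
                names.add(OCP + name.substring(K8S.length()));
            }
        }
        return names.iterator();
    }
}
```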

The config relationship between Kubernetes, OpenShift, and Knative is not a proper hierarchy. When putting Kubernetes on top, we expect OpenShift and Knative to inherit the values from Kubernetes. The issue is that the Kubernetes namespace contains a lot of other configuration that does not apply, so if we rewrite the Kubernetes namespace into either of the other two, we start getting nasty warnings and even errors, because that configuration is only valid for Kubernetes.

Proposals

I only see two options for supporting this fully:

  • We refactor the Kubernetes / OpenShift / Knative configuration to create a proper hierarchy between them. This may require moving configuration around. From what I could see, quarkus.kubernetes.replicas and quarkus.kubernetes.node-port will either need to be moved away or added to the complete hierarchy (see the sketch after this list).
  • I can get more creative and add filters when moving the config namespaces in the Fallback interceptor. The issue here is that I need the mapping metadata available to know where a property is being mapped, and the Kubernetes extensions still use the old Config mechanism, so I would need to move them to the new Config system first.
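
To make the first option concrete, here is a hypothetical sketch of such a hierarchy with @ConfigMapping interface inheritance (all names are illustrative, not the actual Quarkus config model):

```java
import java.util.Map;
import java.util.Optional;

import io.smallrye.config.ConfigMapping;

// Shared properties live in a common parent; each platform mapping extends it.
// Platform-specific properties (e.g. replicas) stay out of the parent.
interface CommonPlatformConfig {
    Optional<String> partOf();         // quarkus.<platform>.part-of
    Map<String, String> labels();      // quarkus.<platform>.labels.*
    Map<String, String> annotations(); // quarkus.<platform>.annotations.*
}

@ConfigMapping(prefix = "quarkus.kubernetes")
interface KubernetesConfig extends CommonPlatformConfig {
    Optional<Integer> replicas(); // Kubernetes/OpenShift only
}

@ConfigMapping(prefix = "quarkus.openshift")
interface OpenshiftConfig extends CommonPlatformConfig {
    Optional<Integer> replicas();
}

@ConfigMapping(prefix = "quarkus.knative")
interface KnativeConfig extends CommonPlatformConfig {
    // no replicas(): Knative manages scaling itself
}
```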

@edeandrea
Contributor Author

edeandrea commented Sep 1, 2023

My opinion here (others should feel free to weigh in, including @holly-cummins): the way it works now, how would the user know what is a map-based binding and what isn't? I think it's in a weird state where a user could easily get confused about what is included in the hierarchy and what isn't.

I can become more creative and add filters when moving the config namespaces in the Fallback interceptor. The issue here is that I need the mapping metadata available to know where a property is being mapped, and the Kubernetes extensions are still in the old Config mechanism, so I need to move them to the new Config.

If I'm understanding this correctly, this seems like the best & most generic solution, but also the one that involves the most work. I'd hate to get into a situation where we have to "know" or "cherry-pick" specific map-based key properties (like labels and annotations) and move them into the hierarchy. To me, that seems very fragile, as in the future someone could simply forget to do it for some new map-based key. My vote is for a generic and re-usable solution, but of course I'm saying this as someone who has no idea what the underlying implementation looks like :)

@radcortez
Member

My opinion here (others should feel free to weigh in, including @holly-cummins): the way it works now, how would the user know what is a map-based binding and what isn't? I think it's in a weird state where a user could easily get confused about what is included in the hierarchy and what isn't.

Correct. The user shouldn't have to know about any of this or how it all works. Unfortunately, things are not that simple, and the Config system has some limitations (which you already encountered). With the current specifications and APIs, dealing with Maps is a major pain. I'm sorry for the very long and technical explanation, but I wanted to state the problem clearly.

If I'm understanding this correctly, this seems like the best & most generic solution, but also the one that involves the most work. I'd hate to get into a situation where we have to "know" or "cherry-pick" specific map-based key properties (like labels and annotations) and move them into the hierarchy.

They are already in the hierarchy, so they are not the issue. The issue is that a few properties available in the Kubernetes config may or may not be available in the other configs. For instance, replicas is available in Kubernetes and OpenShift, but not in Knative.

I actually don't agree that this is the best & most generic solution. We are trying to impose a hierarchy where there is none, so we have to manually add a bunch of rules to keep it from breaking. For instance, if you have something like:

          Common Config
         /      |      \
Kubernetes  OpenShift  Knative

Everything is clear and the Config system supports this out of the box.

Instead, what we are trying to do is selectively take things from a configuration at the same layer (the three are not related at all). For instance, if I have quarkus.kubernetes.replicas, shouldn't I expect it to also be available in Knative (setting the business logic aside)?

@iocanel
Contributor

iocanel commented Sep 12, 2023

The model suggested by @radcortez is something we are using in Dekorate (for distantly related reasons), and it works well.
So I think this is the way to go.

@iocanel
Contributor

iocanel commented Sep 12, 2023

Even if we restructure the configuration to something like this:

          Common Config
         /      |      \
Kubernetes  OpenShift  Knative

the handling of Map properties still does not work.
I tried something like #35874, and the issue persists.

@radcortez
Member

It has to be done in the new Config system, using @ConfigMapping. I can look into it.
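
For context, a minimal sketch of what moving an extension to the new Config system could look like (illustrative names only, not the actual Quarkus Kubernetes config): the legacy class-based config root becomes an interface annotated with @ConfigMapping, which is what exposes the mapping metadata mentioned above.

```java
import java.util.Map;

import io.quarkus.runtime.annotations.ConfigPhase;
import io.quarkus.runtime.annotations.ConfigRoot;
import io.smallrye.config.ConfigMapping;

// New-style config root: an interface mapped through SmallRye Config, so the
// full set of mapped names (including Map segments) is known to the system.
@ConfigMapping(prefix = "quarkus.kubernetes")
@ConfigRoot(phase = ConfigPhase.BUILD_TIME)
public interface KubernetesConfig {

    Map<String, String> labels();      // quarkus.kubernetes.labels.*

    Map<String, String> annotations(); // quarkus.kubernetes.annotations.*
}
```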
