Fallback specialized kubernetes config to vanilla kubernetes not working #35608
Comments
I looked, and the issue is that the fallback mechanism is unsuitable for `Map`-based properties.

The long explanation: the fallback mechanism is a low-level API, intended to rewrite the property names on value lookup. While this works well with fixed property names, the nature of a `Map` is that its keys are dynamic, so the full property names are not known ahead of time. When mapping to something like a `Map`, the keys have to be discovered by iterating the available property names, not by looking up names we already know, so the fallback rewrite never fires for them. In this example, when we query for the specialized namespace, the `Map` keys set on the Kubernetes side are simply not found.

The config relationship between Kubernetes, OpenShift, and Knative is not a proper hierarchy. When putting Kubernetes on top, we expect OpenShift and Knative to inherit the values from Kubernetes. The issue is that the Kubernetes namespace contains several other configuration properties that do not apply to the other two, so if I rewrite the Kubernetes namespace into either of them, you will start getting nasty warnings and even errors, because that configuration is only available in Kubernetes.

Proposals: I only see two options to be able to support this fully:
This is just my opinion (others, including @holly-cummins, should feel free to weigh in), but the way it works now, how would the user know what is a map-based binding and what isn't? I think it's in a weird state where a user could easily get confused as to what is included in the hierarchy and what isn't.
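(As a hypothetical illustration of that confusing state, assuming the fallback as wired up today: the fixed-name property inherits, the map key silently does not.)

```properties
# Falls back: quarkus.openshift.replicas resolves to 3 as well.
quarkus.kubernetes.replicas=3

# Does not fall back: labels is a Map binding with dynamic keys,
# so the generated OpenShift resources never see this label.
quarkus.kubernetes.labels.app=my-app
```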
If I'm understanding this correctly, this seems like the best & most generic solution, but also the one that involves the most work. I'd hate to get into a situation where we have to "know" or "cherry-pick" specific map-based key properties (like `labels` and `annotations`).
Correct. The user shouldn't have to know about this or how any of it works. Unfortunately, things are not as simple, and the Config system has some limitations (which you have already encountered). With the current specifications and APIs, dealing with Maps is a major pain. I'm sorry for the very long and technical explanation, but I wanted to clearly state the problem.
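(To restate the problem concretely: a minimal sketch of the kind of low-level fallback described above, assuming it is SmallRye Config's `FallbackConfigSourceInterceptor`; the property names and values are illustrative.)

```java
import java.util.function.Function;

import io.smallrye.config.FallbackConfigSourceInterceptor;
import io.smallrye.config.SmallRyeConfig;
import io.smallrye.config.SmallRyeConfigBuilder;

public class FallbackSketch {
    public static void main(String[] args) {
        // Rewrite quarkus.openshift.* lookups to quarkus.kubernetes.* so the
        // OpenShift config falls back to the vanilla Kubernetes values.
        Function<String, String> fallback = name ->
                name.startsWith("quarkus.openshift.")
                        ? "quarkus.kubernetes." + name.substring("quarkus.openshift.".length())
                        : name;

        SmallRyeConfig config = new SmallRyeConfigBuilder()
                .withDefaultValue("quarkus.kubernetes.replicas", "3")
                .withDefaultValue("quarkus.kubernetes.labels.app", "my-app")
                .withInterceptors(new FallbackConfigSourceInterceptor(fallback))
                .build();

        // A fixed property name falls back as expected: prints "3".
        System.out.println(config.getRawValue("quarkus.openshift.replicas"));

        // A direct lookup of a key we already know is also rewritten: prints "my-app".
        System.out.println(config.getRawValue("quarkus.openshift.labels.app"));

        // The failure mode: a Map binding is populated by iterating the
        // *known* property names, and the rewrite only fires for names that
        // are looked up explicitly. quarkus.kubernetes.labels.* is therefore
        // never discovered when enumerating quarkus.openshift.labels.*.
    }
}
```

Direct lookups work; what fails is enumerating the dynamic keys of a `Map`, which is roughly how map-typed config gets populated.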
They are already in the hierarchy, so they are not the issue. The issue is that you have a few properties that are available in the Kubernetes config that may or may not be available in the other configs. For instance, some of them exist under `quarkus.kubernetes` with no equivalent under `quarkus.openshift` or `quarkus.knative`.

I actually don't agree this is the best & most generic solution. We are trying to push a hierarchy where there is none, so we have to manually add a bunch of rules to keep it from breaking. For instance, if you have something like the nested layout sketched below:
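(Illustrative property names, assuming a shape where the specialized namespace genuinely nests under the shared one.)

```properties
# The shared value at the root...
quarkus.kubernetes.labels.app=my-app

# ...and a specialized override nested underneath it:
quarkus.kubernetes.openshift.labels.app=my-openshift-app
```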
Everything is clear, and the Config system supports this out of the box. Instead, what we are trying to do is selectively take things from a configuration at the same layer (the three namespaces are not related at all). For instance, consider the sibling namespaces sketched below:
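(Illustrative, though properties of this kind do exist only in their respective extensions.)

```properties
# Only meaningful in the Kubernetes config:
quarkus.kubernetes.ingress.expose=true

# Only meaningful in the OpenShift config:
quarkus.openshift.route.expose=true
```

Rewriting the whole `quarkus.kubernetes` namespace into `quarkus.openshift` would drag `ingress.*` along with it, which is exactly the source of the unknown-property warnings and errors mentioned above.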
The model suggested by @radcortez is something that we are already using in dekorate (for distantly related reasons), and it works well.
Even if we re-structure the configuration to something like this:
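(One possible shape, for illustration; the exact restructuring under discussion may have differed.)

```properties
# Shared values at the common root...
quarkus.kubernetes.annotations.foo=bar

# ...with the specialized targets nested underneath:
quarkus.kubernetes.openshift.annotations.foo=openshift-bar
quarkus.kubernetes.knative.annotations.foo=knative-bar
```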
Handling of Map properties is still not working.
It has to be done in the new Config system, using the low-level API that rewrites property names on lookup (sketched above).
Hi @radcortez / @Sgitario / @iocanel, should this feature (see #34487 & #34025) have made it into the 3.3.0 release? If so, it does not seem to work fully. If I specify the following (the kubernetes, openshift, knative, and minikube extensions are all present):
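(The configuration would presumably have been along these lines; the key names and values here are illustrative, not the reporter's exact settings.)

```properties
# Set once at the kubernetes level, expecting the openshift, knative,
# and minikube resources to inherit them through the fallback:
quarkus.kubernetes.annotations.foo=bar
quarkus.kubernetes.labels.app=my-app
```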
then when the OpenShift & Knative resources are generated, the resources (the OpenShift `DeploymentConfig` and the Knative `Service`) do not have the expected annotations or labels. They do, however, have the expected `ConfigMap`s and `Secret`s. The minikube `Deployment`, though, does look to be OK. I'm not sure if this is because for OpenShift/Knative the resource types are different (kubernetes `Deployment` vs OpenShift `DeploymentConfig` / Knative `Service`)? In any event, it does not work as expected.
Originally posted by @edeandrea in #34487 (comment)