stub in a best guess at cluster level configuration #124
config/v1/types_cloudprovider.go
Status CloudProviderStatus `json:"status"`
}

type CloudProviderSpec struct {
The lowest common denominator I know of here would be:

Name string `json:"name"`

where `Name` maps to the kube-controller-manager `--cloud-provider` argument.
Or `Provider`.
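A minimal sketch of the suggestion in this thread; only the field itself comes from the comments, the rest is scaffolding:

```go
// Sketch of the lowest-common-denominator spec suggested above.
// "Name" (or "Provider", per the follow-up) feeds the
// kube-controller-manager --cloud-provider argument.
type CloudProviderSpec struct {
	// name identifies the cloud provider, e.g. "aws", "azure", "gce".
	Name string `json:"name"`
}
```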
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// DNS holds cluster-wide information about DNS. The canonical name is `cluster`
type DNS struct {
Does this represent cluster DNS config or external DNS config?
For cluster DNS, the minimal info would be:

ClusterDomain *string `json:"clusterDomain"`

where `ClusterDomain` maps to the kubelet `--cluster-domain` argument. This is almost certainly immutable for the foreseeable future.
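As a hedged sketch of that suggestion; the `DNSSpec` wrapper type is an assumption, only the field comes from the comment:

```go
// Sketch of the minimal cluster-DNS spec suggested above.
type DNSSpec struct {
	// clusterDomain maps to the kubelet --cluster-domain argument,
	// e.g. "cluster.local". A pointer distinguishes "unset" from "".
	// Expected to be immutable for the foreseeable future.
	ClusterDomain *string `json:"clusterDomain"`
}
```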
I think that if enough of this overlaps with Networking, we should consider putting it there. Service CIDR and internal network domain are fundamental network things that everyone must respect.
Status NetworkStatus `json:"status"`
}

type NetworkSpec struct {
@squeed was just talking about this...
}

type NetworkSpec struct {
	// serviceCIDR
Use case: cluster-dns-operator picks a static cluster IP from this range.
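For illustration, a minimal sketch of how an operator could derive a fixed service IP from that range. The function, the offset convention, and the example CIDR are assumptions, not the cluster-dns-operator's actual code:

```go
package main

import (
	"fmt"
	"net"
)

// nthIP returns the nth address inside cidr, counting from the network
// address. An offset like 10 is a common convention for the cluster DNS
// service IP, but the convention itself is an assumption here.
func nthIP(cidr string, n int) (net.IP, error) {
	_, subnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := subnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("IPv4 only in this sketch")
	}
	out := make(net.IP, len(ip))
	copy(out, ip)
	// add n to the address, byte by byte, carrying as needed
	for i := len(out) - 1; i >= 0 && n > 0; i-- {
		sum := int(out[i]) + n%256
		out[i] = byte(sum % 256)
		n = n/256 + sum/256
	}
	if !subnet.Contains(out) {
		return nil, fmt.Errorf("offset falls outside %s", subnet)
	}
	return out, nil
}

func main() {
	ip, err := nthIP("172.30.0.0/16", 10)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 172.30.0.10
}
```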
config/v1/types_routing.go
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// Routing holds cluster-wide information about Routing. The canonical name is `cluster`
type Routing struct {
My guess is this should be `Ingress` instead.
config/v1/types_routing.go
Status RoutingStatus `json:"status"`
}

type RoutingSpec struct {
Hm. So should we work towards deprecating the cluster-network-operator CRD and configure it exclusively via this API? Otherwise we have a nasty duplicated-data problem.
You can observe the value from here and bump spec in your operator resource, so there is a single configuration level that determines whether the operator has observed it. See https://github.com/openshift/cluster-kube-apiserver-operator/blob/master/pkg/operator/observe_config.go#L197-L258 for an example.
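A loose illustration of that observe-then-copy pattern; every name here is invented for the sketch, the real version lives in the linked observe_config.go:

```go
package main

import "fmt"

// Illustrative types; the real pattern uses the operator's actual API types.
type NetworkSpec struct{ ServiceCIDR string }

type ObservedConfig map[string]interface{}

// observeServiceCIDR copies a value from the cluster-scoped config
// object into the operator's own observed config, so there is a single
// place to check whether the operator has seen the current setting.
func observeServiceCIDR(spec NetworkSpec, observed ObservedConfig) ObservedConfig {
	observed["servicesSubnet"] = spec.ServiceCIDR
	return observed
}

func main() {
	o := observeServiceCIDR(NetworkSpec{ServiceCIDR: "172.30.0.0/16"}, ObservedConfig{})
	fmt.Println(o["servicesSubnet"]) // 172.30.0.0/16
}
```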
config/v1/types_cloudprovider.go
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// CloudProvider holds cluster-wide information about CloudProvider. The canonical name is `cluster`
type CloudProvider struct {
We might want to call this `InfrastructureProvider`.
Or just `Infrastructure`.
Force-pushed from 765113f to dbf2acc.
Names updated. @smarterclayton @ironcladlou I'm looking to get the categories merged, then have individual feature owners start filling them in.
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// Ingress holds cluster-wide information about Ingress. The canonical name is `cluster`
type Ingress struct {
We should definitely reserve this name.
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// OAuth holds cluster-wide information about OAuth. The canonical name is `cluster`
type OAuth struct {
Possibly this should be part of an "Authentication" object instead of separate. I don't want objects that are too deep/complex, but 3-7 feels like a nice total number to avoid user fatigue.
> Possibly this should be part of an "Authentication" object instead of separate. I don't want objects that are too deep/complex, but 3-7 feels like a nice total number to avoid user fatigue.

Authentication configuration is distinct from the configuration of the OAuth server. OAuth and IDP may be worth collapsing, but not with Authentication too.
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// Project holds cluster-wide information about Project. The canonical name is `cluster`
type Project struct {
Would probably say this becomes `SelfService`, but we don't have to create it yet. Need to think about it.
}

type SchedulingSpec struct {
	// default node selector (I would be happy to see this die....)
Node selector is the most useful security thing we have done so far, so while I know you hate it... :)
> Node selector is the most useful security thing we have done so far, so while I know you hate it... :)

I hate that we have two incompatible implementations that have existed for years and never been cleaned up.
Yeah, but both of those are global security config. Either way we agree.
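For context, a hedged sketch of what the stub under discussion could look like; the field name and its semantics are illustrative guesses, not this PR's final shape:

```go
// Sketch only: the default-node-selector stub debated above.
type SchedulingSpec struct {
	// defaultNodeSelector applies to pods created in namespaces that do
	// not carry their own node-selector annotation, e.g.
	// "node-role.kubernetes.io/worker=".
	DefaultNodeSelector string `json:"defaultNodeSelector,omitempty"`
}
```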
Probably should clearly mark "maybe objects" with a comment indicating that they are subject to change (anything outside of the ones we've decided on, like Build and Image).
Force-pushed from dbf2acc to a82fcd5.
done
/lgtm. We can iterate in practice.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: deads2k, smarterclayton
What happened to Cloud? It seemed useful as long as we handled the future case where “external” could be respected to only have meaning for kubelets. Are we just deferring to a future PR?
Cloud == Infrastructure.
Cluster level configuration is a stable, discoverable API in the `config.openshift.io` group that a cluster-admin will expect to use to interact with and configure their cluster. Placing it in one location allows multiple operators and binaries to depend on a single source of truth for information. It also enables doc-less discovery by a cluster-admin, and the divisions here allow individual teams to own their configuration inside of a cluster. Coupling across multiple processes and future potential for subdivision will become clear.

A cluster-admin will be able to go through a flow like:

- `oc api-resources --api-group=config.openshift.io` - produces a list of high level features like Images, Builds, Networking, IdentityProvider, etc.
- `oc explain networking.config.openshift.io` - produces a list of API fields and their documentation (pull open to kube)
- `oc edit networking.config.openshift.io` - to make a change

There is another set of actors as well. Many settings are actually observed from the cluster and cannot reasonably be provided or set by a cluster-admin. For instance, the `internalRegistryHostname` is known by the image-registry-operator, not the cluster-admin. To represent this, configuration objects have a spec/status split: controller/operator-maintained information lives in status, and cluster-admin-maintained information lives in spec (see the sketch after this description). You should not have a field with multiple writers. If multiple writers, especially one machine and one human, try to coordinate writes on a single field, someone will get confused.

The divisions will be along feature lines, not teams or binaries. An operator can observe changes to these types to drive behavior. That observation and wiring is expected to be performed by the feature owner in all the binaries that need to react to changes. For instance, if a change to the network configuration needs to be observed and handled by the kube-apiserver, openshift-apiserver, and openshift-controller-manager, the networking team will make the configuration available and manage the wiring in the individually affected processes.
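To make the spec/status split concrete, a hedged sketch follows. The status field matches the `internalRegistryHostname` point above; the spec field and everything around it are assumptions, not this PR's final types:

```go
package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Sketch only: cluster-admin intent lives in Spec, operator-observed
// facts live in Status, and no field has both a human and a machine writer.
type Image struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ImageSpec   `json:"spec"`
	Status ImageStatus `json:"status"`
}

type ImageSpec struct {
	// externalRegistryHostnames is the kind of value a cluster-admin
	// could reasonably set by hand. (Assumed field.)
	ExternalRegistryHostnames []string `json:"externalRegistryHostnames,omitempty"`
}

type ImageStatus struct {
	// internalRegistryHostname is known by the image-registry-operator,
	// not the cluster-admin, so it lives in status.
	InternalRegistryHostname string `json:"internalRegistryHostname,omitempty"`
}
```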
The expected flow goes something like this:
The API for these types will be the main entrypoint of choice for a cluster-admin and must remain stable across releases. This is in contrast to the on-disk formats for particular binaries, which will no longer need stability guarantees since they are operator managed.
This pull provides a first cut at the different buckets of configuration that are known today.
/assign @smarterclayton @jwforres @derekwaynecarr