implement v1alpha3 #373
Conversation
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: BenTheElder
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
```go
func Convert_v1alpha2_Config_To_config_Cluster(in *Config, out *config.Cluster, s conversion.Scope) error {
	// TODO(bentheelder): try to convert kubeadm config patches?
```
we should definitely convert them going in this direction
/hold
cc @munnerz , this file is the hairy part. Technically we don't do round-tripping 1:1, but that's probably OK since we don't actually ever convert back to an external type and the types are alpha.
/hold cancel
I think that's a prow flake actually??
confirmed locally that with multiple control planes in a v1alpha2 config an external LB is created and works.
LGTM thanks @BenTheElder
added minor comments.
```go
	out.KubeadmConfigPatches = append(out.KubeadmConfigPatches, patch)
}
for _, patch := range node.KubeadmConfigPatchesJSON6902 {
	out.KubeadmConfigPatchesJSON6902 = append(out.KubeadmConfigPatchesJSON6902, patch)
```
this will apply patches from multiple nodes that previously could have been unique, to all nodes in v1alpha3.
that said:
- this is probably better than discarding patches and e.g. taking only the patches from the first CP node.
- we probably don't have that many users that use patches.
this will apply patches from multiple nodes that previously could have been unique, to all nodes in v1alpha3.
we only generate one config file currently, so if anyone was using them they were already effectively global patches
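The flattening described above can be sketched with simplified stand-in types (these are not the real kind structs, just an illustration of the direction of the conversion): per-node patches from every v1alpha2 node are collected into a single cluster-wide list, so patches that were unique to one node become global.

```go
package main

import "fmt"

// Simplified stand-ins for the real config types.
type v1alpha2Node struct {
	KubeadmConfigPatches []string
}

type Cluster struct {
	KubeadmConfigPatches []string
}

// flattenPatches collects per-node patches from every v1alpha2 node into
// one cluster-wide list. Since only one config file was generated anyway,
// per-node patches were already effectively global.
func flattenPatches(nodes []v1alpha2Node) Cluster {
	var out Cluster
	for _, node := range nodes {
		out.KubeadmConfigPatches = append(out.KubeadmConfigPatches, node.KubeadmConfigPatches...)
	}
	return out
}

func main() {
	nodes := []v1alpha2Node{
		{KubeadmConfigPatches: []string{"patch-a"}},
		{KubeadmConfigPatches: []string{"patch-b"}},
	}
	fmt.Println(flattenPatches(nodes).KubeadmConfigPatches)
	// → [patch-a patch-b]
}
```

Note that this direction is lossy by design: the reverse conversion has no way to know which node a patch originally belonged to.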
```go
func Convert_config_Cluster_To_v1alpha2_Config(in *config.Cluster, out *Config, s conversion.Scope) error {
	// TODO(bentheelder): try to convert kubeadm config patches?
	// Not sure if that makes sense since these aren't cluster wide, and we don't
	// actually roundtrip anywho? :thinking:
```
seems OK to me to exclude the patches.
proper roundtrips would be nice to have in future versions though.
we could include them currently by shoe-horning them onto one of the nodes, but I'd rather revisit it later given we don't depend on this and that might not be the right answer 🤔
I have a WIP to generate a config file per node, for better version skew. I'll revisit per-node patches as an actually supported thing in addition to global patches as part of that.
```go
// ExternalEtcdRole identifies a node that hosts an external-etcd instance.
// WARNING: this node type is not yet implemented!
// Please note that `kind` nodes hosting external etcd are not kubernetes nodes
ExternalEtcdRole NodeRole = "external-etcd"
```
i don't think we should remove fields from alpha2 as part of the alpha3 introduction.
this is a value, not a field, and it is rejected during validation; we're exporting an invalid value and then validating that it is invalid currently 🤷‍♂️
pkg/cluster/config/v1alpha3/types.go
Outdated
```go
const (
	// ControlPlaneRole identifies a node that hosts a Kubernetes control-plane.
	//
	// NOTE: in single node clusters, control-plane nodes act also as a worker nodes
```
possibly use two lines only:

```go
// ControlPlaneRole identifies a node that hosts a Kubernetes control-plane.
// NOTE: in single node clusters, control-plane nodes will allow workloads.
```

^ proposed minor change to the note.
sure, SGTM
done
Looked quickly, it looks great to me. Thanks. 👍
/lgtm
This looks good to me 😄.
A few notes around the conversion code, and then a few other little nits here and there... lgtm!
```diff
-// SetDefaults_Config sets uninitialized fields to their default value.
-func SetDefaults_Config(obj *Config) {
+// SetDefaults_Cluster sets uninitialized fields to their default value.
+func SetDefaults_Cluster(obj *Cluster) {
```
🤔 I didn't think defaults were usually applied as part of the internal API version? Although I could definitely be wrong 😄 (/me goes away to dig into Kubernetes)
We have to default the internal type as well currently because it's used by library consumers 🤔
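A minimal sketch of this pattern, applying defaults to the internal type directly so library consumers that construct it get sane values (the field names and default values here are hypothetical, not kind's actual defaults):

```go
package main

import "fmt"

// Simplified stand-in for the internal Cluster type.
type Cluster struct {
	Nodes []Node
}

type Node struct {
	Role  string
	Image string
}

// SetDefaults_Cluster fills uninitialized fields on the internal type.
// Library consumers may build a Cluster directly rather than decoding an
// external version, so defaulting cannot live only in the versioned types.
func SetDefaults_Cluster(obj *Cluster) {
	if len(obj.Nodes) == 0 {
		// assume a single control-plane node when none are specified
		obj.Nodes = []Node{{Role: "control-plane"}}
	}
	for i := range obj.Nodes {
		if obj.Nodes[i].Image == "" {
			obj.Nodes[i].Image = "kindest/node:latest" // illustrative default
		}
	}
}

func main() {
	var cluster Cluster
	SetDefaults_Cluster(&cluster)
	fmt.Println(cluster.Nodes[0].Role, cluster.Nodes[0].Image)
	// → control-plane kindest/node:latest
}
```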
```diff
 // ExtraMounts describes additional mount points for the node container
-// These may be used to bind a hostpath
+// These may be used to bind a hostPath
```
bind a hostPath into a single node container?
Er, isn't it implied that they're bound to the current node?
```go
// skip now-implicit external load balancers
if node.Role == ExternalLoadBalancerRole {
	continue
```
I don't think we can just skip this altogether here - it will mean that a conversion from v1alpha2->internal->v1alpha3->internal->v1alpha2 will not be bijective, and will produce different results.
This is an important gotcha of writing conversions - in practice, I'm not sure what the effect of not doing it is, but the apimachinery assumes that this is the case, so we are breaking the rules here 🙄
Instead here, we would usually store some additional information on annotations on the new resource, so that we can convert back from that format.
Er so maybe this is the wrong place to do this but effectively I do want to drop this. Previously you needed to specify this and now it is implicit.
We don't support roundtripping properly because we did actually break between versions which is only allowed because the types are alpha by my understanding.
We also don't ever convert backwards yet. Only to internal.
something similar was discussed in one of my comments earlier.
we are indeed breaking the roundtrip rules a little here, so it's a question of whether we want full roundtrip support now, or close our eyes for the alpha2 -> alpha3 transition but consider proper conversions for, say, alpha3 -> next.
yeah, I think roundtrip support is important long term, but we're intentionally breaking compatibility between these now in alpha, so you cannot express a roundtrip. we could have just deleted all old alphas.
beta and onwards versions should be round trippable without issue.
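The lossy direction discussed in this thread can be illustrated with simplified stand-in types (the role strings and function name are assumptions for illustration, not the exact kind identifiers): explicit external load balancer nodes are dropped on conversion because v1alpha3 makes them implicit.

```go
package main

import "fmt"

const ExternalLoadBalancerRole = "external-load-balancer"

type v1alpha2Node struct{ Role string }
type Node struct{ Role string }

// convertNodes drops explicit external load balancer nodes. This makes
// the conversion lossy (not bijective), which is tolerated here only
// because the types are alpha and we never convert back to the external
// type; the load balancer is later recreated from the control-plane count.
func convertNodes(in []v1alpha2Node) []Node {
	var out []Node
	for _, n := range in {
		if n.Role == ExternalLoadBalancerRole {
			continue // now implicit in v1alpha3
		}
		out = append(out, Node{Role: n.Role})
	}
	return out
}

func main() {
	in := []v1alpha2Node{
		{Role: "control-plane"},
		{Role: ExternalLoadBalancerRole},
		{Role: "worker"},
	}
	fmt.Println(len(convertNodes(in)))
	// → 2
}
```

A fully round-trippable alternative would stash the dropped node in an annotation on the converted object, as suggested above; that bookkeeping is what beta versions would need.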
```go
if err := Convert_v1alpha2_Node_To_config_Node(&node, &convertedNode, s); err != nil {
	return err
}
out.Nodes = append(out.Nodes, convertedNode)
```
What if someone specified `replicas: 0`?
What a jerk /s
We should test how 0.1.0 behaves specifically and reflect that.
i did test it at some point and i think it was silently exiting. :)
```
$ kind version
0.1.0
$ cat $HOME/kind-config.yaml
kind: Config
apiVersion: kind.sigs.k8s.io/v1alpha2
nodes:
- role: control-plane
  replicas: 0
$ kind create cluster --config=$HOME/kind-config.yaml
Error: failed to create cluster: please add at least one node with role "control-plane"
Usage:
  kind create cluster [flags]

Flags:
      --config string    path to a kind config file
  -h, --help             help for cluster
      --image string     node docker image to use for booting the cluster
      --name string      cluster context name (default "1")
      --retain           retain nodes for debugging when cluster creation fails
      --wait duration    Wait for control plane node to be ready (default 0s)

Global Flags:
      --loglevel string   logrus log level [panic, fatal, error, warning, info, debug] (default "warning")

failed to create cluster: please add at least one node with role "control-plane"
```
so we should actually treat it as 0 and skip the node I guess.
will handle this in a follow-up once we decide
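One possible treatment, sketched with simplified stand-in types (the type and function names are illustrative, not the actual kind code), that mirrors the 0.1.0 behavior shown in the transcript above: `replicas: 0` yields no nodes, which validation then rejects for control planes, while an unset `replicas` means exactly one node.

```go
package main

import "fmt"

type v1alpha2Node struct {
	Role     string
	Replicas *int32 // nil means "unspecified"
}

type Node struct{ Role string }

// expandReplicas converts one v1alpha2 node into zero or more v1alpha3
// nodes. An unset Replicas expands to a single node; an explicit 0
// expands to no nodes, leaving later validation to reject configs that
// end up without a control plane.
func expandReplicas(n v1alpha2Node) []Node {
	count := int32(1)
	if n.Replicas != nil {
		count = *n.Replicas
	}
	out := []Node{}
	for i := int32(0); i < count; i++ {
		out = append(out, Node{Role: n.Role})
	}
	return out
}

func main() {
	zero := int32(0)
	fmt.Println(len(expandReplicas(v1alpha2Node{Role: "control-plane", Replicas: &zero})))
	fmt.Println(len(expandReplicas(v1alpha2Node{Role: "worker"})))
	// → 0, then 1
}
```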
```go
// ExtraMounts describes additional mount points for the node container
// These may be used to bind a hostPath
ExtraMounts []cri.Mount `json:"extraMounts,omitempty"`
```
Not part of this PR, but it's just highlighted it to me... we don't currently version the `cri.Mount` structure. We should be careful here, as a change to the `cri` package could inadvertently affect users and sneak through review (as it's not in the `config` package).
Yes. At this point that structure is expected to be stable, but we should be explicit about that.
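One way to be explicit about that, sketched with hypothetical types rather than the actual kind code: keep a versioned copy of the mount struct inside the config package and convert field by field, so any drift in the shared `cri` type surfaces as a compile error or conversion change in review.

```go
package main

import "fmt"

// Stand-in for the shared cri.Mount structure referenced above.
type criMount struct {
	ContainerPath string
	HostPath      string
	Readonly      bool
}

// Mount is a hypothetical versioned copy owned by the config package.
// Because the serialized API surface is defined here, a change to the
// shared cri type cannot silently alter it; the conversion below must
// be updated explicitly.
type Mount struct {
	ContainerPath string `json:"containerPath"`
	HostPath      string `json:"hostPath"`
	Readonly      bool   `json:"readOnly,omitempty"`
}

// convertMount copies the shared type into the versioned one.
func convertMount(in criMount) Mount {
	return Mount{
		ContainerPath: in.ContainerPath,
		HostPath:      in.HostPath,
		Readonly:      in.Readonly,
	}
}

func main() {
	m := convertMount(criMount{ContainerPath: "/var/lib", HostPath: "/tmp/data"})
	fmt.Println(m.ContainerPath, m.HostPath)
	// → /var/lib /tmp/data
}
```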
```diff
@@ -209,7 +214,16 @@ func nodesToCreate(cfg *config.Config, clusterName string) []nodeSpec {
 		})
 	}
 
+	// TODO(bentheelder): handle implicit nodes as well
+	// add an external load balancer if there are multiple control planes
+	if controlPlanes > 1 {
```
There should be a note, probably on the Role field or on the Nodes field, to explain that if you specify more than 1 control plane node then a load balancer will be provisioned.
IMO, API docs should themselves serve users quite well as a reference for how things work (even if we also have additional proper documentation for this too).
done. the API docs definitely need work
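The implicit provisioning rule discussed here can be sketched as follows (the role string and helper name are illustrative, not the actual kind code): a load balancer is provisioned exactly when more than one control-plane node is specified.

```go
package main

import "fmt"

type Node struct{ Role string }

// needsLoadBalancer reports whether an external load balancer should be
// provisioned implicitly: more than one control-plane node implies a
// load balancer in front of them, which is the behavior the API docs
// should spell out on the Role/Nodes fields.
func needsLoadBalancer(nodes []Node) bool {
	controlPlanes := 0
	for _, n := range nodes {
		if n.Role == "control-plane" {
			controlPlanes++
		}
	}
	return controlPlanes > 1
}

func main() {
	single := []Node{{Role: "control-plane"}, {Role: "worker"}}
	ha := []Node{{Role: "control-plane"}, {Role: "control-plane"}}
	fmt.Println(needsLoadBalancer(single), needsLoadBalancer(ha))
	// → false true
}
```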
/hold cancel
/lgtm
* [0.3] Move upgrade command
* Change k8s version in patch
TODO:
/assign @munnerz