Separate subnets for machine pools #491
Comments
Apart from trying to change the CAPA default, would it make sense to supply this configuration as part of our future GS release concept? That way, we would not care about the CAPA default. The templates in our release would contain the recommended configuration.
The main problem is that this is not configurable or achievable with CAPA: CAPA either uses the control plane subnets or you need to provide subnet IDs for the machine pools. No default will save us or any other template. We cannot even build a simple operator that would create the subnets for us, because the subnets must exist at the time you create a machine pool; the operator would have to know them beforehand, create them, and then inject their IDs during machine pool CR creation via a webhook, which I have no idea how to achieve.
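A minimal sketch of the constraint, assuming the `AWSMachinePool` API as CAPA ships it (the API version and subnet ID below are placeholders): the `subnets` field only accepts references to subnets that already exist, and omitting it makes CAPA fall back to the control plane subnets.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachinePool
metadata:
  name: machine-pool0
spec:
  minSize: 3
  maxSize: 10
  # The referenced subnets must already exist when the machine pool is created;
  # if this list is omitted, CAPA places the nodes into the control plane subnets.
  subnets:
    - id: subnet-0123456789abcdef0  # placeholder ID
  awsLaunchTemplate:
    instanceType: m5.xlarge
```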
Blocked by upstream PR kubernetes-sigs/cluster-api-provider-aws#2634. Nobody has responded and it's waiting for merge.
The upstream PR is merged; we need to wait for a new release and test this with the new subnet operator.
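Assuming the upstream change makes it possible to reference machine pool subnets by AWS filters rather than hard-coded IDs, selection by tag might look roughly like this (the tag key follows the `subnet.giantswarm.io/role` convention used later in this thread):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachinePool
metadata:
  name: machine-pool0
spec:
  minSize: 3
  maxSize: 10
  # Resolve subnets by tag at reconcile time instead of pinning IDs up front,
  # so the subnet IDs no longer need to be known before the pool is created.
  subnets:
    - filters:
        - name: tag:subnet.giantswarm.io/role
          values:
            - worker
  awsLaunchTemplate:
    instanceType: m5.xlarge
```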
@alex-dabija What is the desired "user interface" for this?

No, I think that it's more important now to allow separate subnets to be configured for each machine pool and availability zone. We already have this on AWS vintage. Yes, it's true that on vintage we are very inflexible: CIDRs are chosen automatically from a larger one, etc.

Subnets are per availability zone, which means we need 3 for a highly-available cluster.

What do you mean by "use own subnets"? Is it CAPA creating the subnets? I was thinking that in […]

Yes, overall this would not be the best UX and the user could make lots of mistakes, but it's going to allow us to be very responsive to future customer configurations. The UX limitations could be mitigated by: […]
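For context on the per-availability-zone layout discussed above, here is a minimal sketch of how CAPA models subnets on the `AWSCluster` object; the AZ names and CIDRs are illustrative only:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
  name: example
spec:
  network:
    vpc:
      cidrBlock: 10.228.0.0/16
    # One subnet per availability zone: three subnets for a highly-available cluster.
    subnets:
      - cidrBlock: 10.228.0.0/20
        availabilityZone: eu-west-1a
        isPublic: false
      - cidrBlock: 10.228.16.0/20
        availabilityZone: eu-west-1b
        isPublic: false
      - cidrBlock: 10.228.32.0/20
        availabilityZone: eu-west-1c
        isPublic: false
```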
Work-in-progress implementation: giantswarm/cluster-aws#196

Current state: […]

Some things still need to be handled: […]
Current state of things:

The changes work if only creating one grouping of subnets (the same number of subnets as availability zones). When creating multiple groupings of subnets, the control plane nodes fail to come up correctly.

Example configuration snippet:

```yaml
controlPlane:
  replicas: 3
  subnetTags:
    - subnet.giantswarm.io/role: control-plane
bastion:
  subnetTags:
    - subnet.giantswarm.io/role: bastion
machinePools:
  - instanceType: m5.xlarge
    maxSize: 10
    minSize: 3
    name: machine-pool0
    rootVolumeSizeGB: 300
    subnetTags:
      - subnet.giantswarm.io/role: worker
network:
  topologyMode: GiantSwarmManaged
  vpcMode: private
  apiMode: private
  dnsMode: private
  vpcCIDR: 10.228.0.0/16
  availabilityZoneUsageLimit: 3
  subnets:
    - cidrBlocks:
        - cidr: 10.228.48.0/20
          availabilityZone: a
        - cidr: 10.228.64.0/20
          availabilityZone: b
        - cidr: 10.228.80.0/20
          availabilityZone: c
      isPublic: false
      tags:
        subnet.giantswarm.io/role: control-plane
    - cidrBlocks:
        - cidr: 10.228.0.0/20
          availabilityZone: a
        - cidr: 10.228.16.0/20
          availabilityZone: b
        - cidr: 10.228.32.0/20
          availabilityZone: c
      isPublic: false
      tags:
        subnet.giantswarm.io/role: worker
    - cidrBlocks:
        - cidr: 10.228.96.0/20
          availabilityZone: a
        - cidr: 10.228.112.0/20
          availabilityZone: b
        - cidr: 10.228.128.0/20
          availabilityZone: c
      isPublic: false
      tags:
        subnet.giantswarm.io/role: bastion
```

This created 3 groupings of subnets, each with a specific `subnet.giantswarm.io/role` tag.

The bastion instance correctly launches into one of the 3 subnets tagged with `subnet.giantswarm.io/role: bastion`.

The first control plane node is correctly launched into one of the 3 subnets tagged with `subnet.giantswarm.io/role: control-plane`.
There are no entries in the syslog related to the setup or starting of Kubernetes.

Some observations: […]
**Update**

With the release of aws-vpc-operator v0.2.2, this PR now allows for the successful creation of private workload clusters with multiple subnet groupings.

TODO: […]
Fixed in giantswarm/cluster-aws#205 (v0.21.0)

kubectl-gs change to support the new layout when using […]
User Story
- As a cluster admin, I want the machine pool nodes to be on separate AWS subnets (one per availability zone) in order to have clear network boundaries.
Details, Background
Cluster API for AWS (CAPA) adds the nodes of a new machine pool to the same AWS subnets that are used by the cluster's control plane. In contrast, Giant Swarm clusters create separate subnets (one per availability zone) for each machine pool.
The gap between Cluster API and Giant Swarm clusters can be reduced by changing the default Cluster API behavior to create subnets (one per availability zone) for each machine pool.
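As a worked example of that target layout (all values hypothetical, in the style of the configuration discussed in the comments above), a /16 VPC CIDR can be carved into /20 subnets, one per availability zone, for each machine pool:

```yaml
# Hypothetical CIDR plan: a 10.228.0.0/16 VPC split into /20 subnets,
# one per availability zone, dedicated to a single machine pool.
network:
  vpcCIDR: 10.228.0.0/16
  subnets:
    - cidrBlocks:
        - cidr: 10.228.0.0/20   # machine-pool0, zone a
          availabilityZone: a
        - cidr: 10.228.16.0/20  # machine-pool0, zone b
          availabilityZone: b
        - cidr: 10.228.32.0/20  # machine-pool0, zone c
          availabilityZone: c
      isPublic: false
      tags:
        subnet.giantswarm.io/role: worker
```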
Blocked by / depends on
None