aws: Create subnets for additional network CIDRs #15805

Merged · 2 commits · Aug 26, 2023
11 changes: 7 additions & 4 deletions cmd/kops/create_cluster.go
@@ -68,7 +68,6 @@ type CreateClusterOptions struct {
 	ContainerRuntime string
 	OutDir string
 	DisableSubnetTags bool
-	NetworkCIDR string
 	DNSZone string
 	NodeSecurityGroups []string
 	ControlPlaneSecurityGroups []string
@@ -300,7 +299,7 @@ func NewCmdCreateCluster(f *util.Factory, out io.Writer) *cobra.Command {
 	cmd.RegisterFlagCompletionFunc("subnets", completeSubnetID(options))
 	cmd.Flags().StringSliceVar(&options.UtilitySubnetIDs, "utility-subnets", options.UtilitySubnetIDs, "Shared utility subnets to use")
 	cmd.RegisterFlagCompletionFunc("utility-subnets", completeSubnetID(options))
-	cmd.Flags().StringVar(&options.NetworkCIDR, "network-cidr", options.NetworkCIDR, "Network CIDR to use")
+	cmd.Flags().StringSliceVar(&options.NetworkCIDRs, "network-cidr", options.NetworkCIDRs, "Network CIDR(s) to use")

Member Author:
I don't see any mention of network-cidr there.

Contributor:
IIUC, kubetest2 uses that interface which I linked earlier and we would need to provide this flag there as well.

Member:
I think we can pass them via the "passthrough" CreateArgs:

if d.CreateArgs != "" {
	if strings.Contains(d.CreateArgs, "arm64") {
		isArm = true
	}
	createArgs, err := shlex.Split(d.CreateArgs)
	if err != nil {
		return err
	}
	args = append(args, createArgs...)
}
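
For example (assuming the kubetest2 kops deployer exposes CreateArgs through a --create-args flag, which is an assumption here rather than something shown in this thread), the additional CIDRs could be forwarded along the lines of:

--create-args="--network-cidr=10.0.0.0/16,10.1.0.0/16"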

Contributor:
My suggestion would be to add a new flag called additional-network-cidrs to be backward compatible.

Member Author:
It is already backwards compatible. Nothing changes from the user's point of view.

Contributor:
Sorry, I might be missing something here, but won't going from StringVar to StringSliceVar require users to make a change to pass a slice of strings now?

Member Author (@hakman, Aug 23, 2023):
You can pass a slice as multiple --network-cidr flags or as a comma-separated list. For example, with this change, one can just do:

--network-cidr=10.0.0.0/16 --network-cidr=10.1.0.0/16 --network-cidr=10.2.0.0/16 --network-cidr=10.3.0.0/16 --network-cidr=10.4.0.0/16 --network-cidr=10.5.0.0/16
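
As a standalone illustration of that flag behavior (a minimal cobra/pflag sketch, not code from this PR), both the repeated-flag form and the comma-separated form fill the same slice:

package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	var networkCIDRs []string

	cmd := &cobra.Command{
		Use: "demo",
		Run: func(cmd *cobra.Command, args []string) {
			// Prints [10.0.0.0/16 10.1.0.0/16] for either invocation style below.
			fmt.Println(networkCIDRs)
		},
	}
	// StringSliceVar accepts the flag repeated multiple times as well as a
	// single comma-separated value.
	cmd.Flags().StringSliceVar(&networkCIDRs, "network-cidr", nil, "Network CIDR(s) to use")

	// Equivalent invocations:
	//   demo --network-cidr=10.0.0.0/16 --network-cidr=10.1.0.0/16
	//   demo --network-cidr=10.0.0.0/16,10.1.0.0/16
	cmd.SetArgs([]string{"--network-cidr=10.0.0.0/16,10.1.0.0/16"})
	_ = cmd.Execute()
}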

Member:
Right, it's a nice feature of the flags library!

cmd.RegisterFlagCompletionFunc("network-cidr", func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return nil, cobra.ShellCompDirectiveNoFileComp
})
@@ -614,8 +613,12 @@ func RunCreateCluster(ctx context.Context, f *util.Factory, out io.Writer, c *Cr
 		cluster.Spec.ContainerRuntime = c.ContainerRuntime
 	}

-	if c.NetworkCIDR != "" {
-		cluster.Spec.Networking.NetworkCIDR = c.NetworkCIDR
+	for i, cidr := range c.NetworkCIDRs {
+		if i == 0 {
+			cluster.Spec.Networking.NetworkCIDR = cidr
+		} else {
+			cluster.Spec.Networking.AdditionalNetworkCIDRs = append(cluster.Spec.Networking.AdditionalNetworkCIDRs, cidr)
+		}
 	}

 	if c.DisableSubnetTags {

5 changes: 5 additions & 0 deletions cmd/kops/create_cluster_integration_test.go
@@ -97,6 +97,11 @@ func TestCreateClusterComplex(t *testing.T) {
 	runCreateClusterIntegrationTest(t, "../../tests/integration/create_cluster/complex", "v1alpha2")
 }

+// TestCreateClusterComplexPrivate runs kops create cluster, with a grab-bag of edge cases
+func TestCreateClusterComplexPrivate(t *testing.T) {
+	runCreateClusterIntegrationTest(t, "../../tests/integration/create_cluster/complex-private", "v1alpha2")
+}
+
 // TestCreateClusterHA runs kops create cluster ha.example.com --zones us-test-1a,us-test-1b,us-test-1c --master-zones us-test-1a,us-test-1b,us-test-1c
 func TestCreateClusterHA(t *testing.T) {
 	runCreateClusterIntegrationTest(t, "../../tests/integration/create_cluster/ha", "v1alpha2")

2 changes: 1 addition & 1 deletion docs/cli/kops_create_cluster.md

Some generated files are not rendered by default.

@@ -0,0 +1,287 @@
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
creationTimestamp: "2017-01-01T00:00:00Z"
name: complex.example.com
spec:
additionalNetworkCIDRs:
- 10.1.0.0/16
- 10.2.0.0/16
- 10.3.0.0/16
- 10.4.0.0/16
api:
loadBalancer:
class: Network
type: Public
authorization:
rbac: {}
channel: stable
cloudProvider: aws
configBase: memfs://tests/complex.example.com
etcdClusters:
- cpuRequest: 200m
etcdMembers:
- encryptedVolume: true
instanceGroup: control-plane-us-test-1a
name: a
- encryptedVolume: true
instanceGroup: control-plane-us-test-1b
name: b
- encryptedVolume: true
instanceGroup: control-plane-us-test-1c
name: c
manager:
backupRetentionDays: 90
memoryRequest: 100Mi
name: main
- cpuRequest: 100m
etcdMembers:
- encryptedVolume: true
instanceGroup: control-plane-us-test-1a
name: a
- encryptedVolume: true
instanceGroup: control-plane-us-test-1b
name: b
- encryptedVolume: true
instanceGroup: control-plane-us-test-1c
name: c
manager:
backupRetentionDays: 90
memoryRequest: 100Mi
name: events
iam:
allowContainerRegistry: true
legacy: false
kubelet:
anonymousAuth: false
kubernetesApiAccess:
- 0.0.0.0/0
- ::/0
kubernetesVersion: v1.26.0
masterPublicName: api.complex.example.com
networkCIDR: 10.0.0.0/16
networking:
cni: {}
nonMasqueradeCIDR: 100.64.0.0/10
sshAccess:
- 1.2.3.4/32
subnets:
- cidr: 10.0.64.0/18
name: us-test-1a
type: Private
zone: us-test-1a
- cidr: 10.0.128.0/18
name: us-test-1b
type: Private
zone: us-test-1b
- cidr: 10.0.192.0/18
name: us-test-1c
type: Private
zone: us-test-1c
- cidr: 10.1.0.0/16
name: us-test-1a-1
type: Private
zone: us-test-1a
- cidr: 10.2.0.0/16
name: us-test-1b-2
type: Private
zone: us-test-1b
- cidr: 10.3.0.0/16
name: us-test-1c-3
type: Private
zone: us-test-1c
- cidr: 10.4.0.0/16
name: us-test-1a-4
type: Private
zone: us-test-1a
- cidr: 10.0.0.0/21
name: utility-us-test-1a
type: Utility
zone: us-test-1a
- cidr: 10.0.24.0/21
name: utility-us-test-1b
type: Utility
zone: us-test-1b
- cidr: 10.0.40.0/21
name: utility-us-test-1c
type: Utility
zone: us-test-1c
- cidr: 10.0.8.0/21
name: utility-us-test-1a-1
type: Utility
zone: us-test-1a
- cidr: 10.0.32.0/21
name: utility-us-test-1b-2
type: Utility
zone: us-test-1b
- cidr: 10.0.48.0/21
name: utility-us-test-1c-3
type: Utility
zone: us-test-1c
- cidr: 10.0.16.0/21
name: utility-us-test-1a-4
type: Utility
zone: us-test-1a
topology:
bastion:
bastionPublicName: bastion.complex.example.com
dns:
type: Public

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: "2017-01-01T00:00:00Z"
labels:
kops.k8s.io/cluster: complex.example.com
name: bastions
spec:
image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20230814
instanceMetadata:
httpPutResponseHopLimit: 1
httpTokens: required
machineType: t2.micro
maxSize: 1
minSize: 1
role: Bastion
subnets:
- us-test-1a
- us-test-1b
- us-test-1c
- us-test-1a-1
- us-test-1b-2
- us-test-1c-3
- us-test-1a-4

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: "2017-01-01T00:00:00Z"
labels:
kops.k8s.io/cluster: complex.example.com
name: control-plane-us-test-1a
spec:
image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20230814
instanceMetadata:
httpTokens: required
machineType: m3.medium
maxSize: 1
minSize: 1
role: Master
subnets:
- us-test-1a
- us-test-1a-1
- us-test-1a-4

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: "2017-01-01T00:00:00Z"
labels:
kops.k8s.io/cluster: complex.example.com
name: control-plane-us-test-1b
spec:
image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20230814
instanceMetadata:
httpTokens: required
machineType: m3.medium
maxSize: 1
minSize: 1
role: Master
subnets:
- us-test-1b
- us-test-1b-2

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: "2017-01-01T00:00:00Z"
labels:
kops.k8s.io/cluster: complex.example.com
name: control-plane-us-test-1c
spec:
image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20230814
instanceMetadata:
httpTokens: required
machineType: m3.medium
maxSize: 1
minSize: 1
role: Master
subnets:
- us-test-1c
- us-test-1c-3

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: "2017-01-01T00:00:00Z"
labels:
kops.k8s.io/cluster: complex.example.com
name: nodes-us-test-1a
spec:
image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20230814
instanceMetadata:
httpPutResponseHopLimit: 1
httpTokens: required
machineType: t2.medium
maxSize: 4
minSize: 4
role: Node
subnets:
- us-test-1a
- us-test-1a-1
- us-test-1a-4

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: "2017-01-01T00:00:00Z"
labels:
kops.k8s.io/cluster: complex.example.com
name: nodes-us-test-1b
spec:
image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20230814
instanceMetadata:
httpPutResponseHopLimit: 1
httpTokens: required
machineType: t2.medium
maxSize: 3
minSize: 3
role: Node
subnets:
- us-test-1b
- us-test-1b-2

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: "2017-01-01T00:00:00Z"
labels:
kops.k8s.io/cluster: complex.example.com
name: nodes-us-test-1c
spec:
image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20230814
instanceMetadata:
httpPutResponseHopLimit: 1
httpTokens: required
machineType: t2.medium
maxSize: 3
minSize: 3
role: Node
subnets:
- us-test-1c
- us-test-1c-3
21 changes: 21 additions & 0 deletions tests/integration/create_cluster/complex-private/options.yaml
@@ -0,0 +1,21 @@
ClusterName: complex.example.com
Zones:
- us-test-1a
- us-test-1b
- us-test-1c
CloudProvider: aws
NetworkCIDRs:
- 10.0.0.0/16
- 10.1.0.0/16
- 10.2.0.0/16
- 10.3.0.0/16
- 10.4.0.0/16
Networking: cni
Topology: private
Bastion: true
ControlPlaneCount: 3
NodeCount: 10
KubernetesVersion: v1.26.0
# We specify SSHAccess but _not_ AdminAccess
SSHAccess:
- 1.2.3.4/32
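
For reference, a roughly equivalent kops create cluster invocation for this fixture might look like the following (a sketch only; the mapping from test options to flags is assumed here, not taken from this PR):

kops create cluster complex.example.com \
  --cloud aws \
  --zones us-test-1a,us-test-1b,us-test-1c \
  --networking cni \
  --topology private \
  --bastion \
  --control-plane-count 3 \
  --node-count 10 \
  --kubernetes-version v1.26.0 \
  --ssh-access 1.2.3.4/32 \
  --network-cidr 10.0.0.0/16,10.1.0.0/16,10.2.0.0/16,10.3.0.0/16,10.4.0.0/16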