
Fix issue preventing to create single-AZ nodegroups #435

Merged: 4 commits merged into master from fix-nodegroup-az-selection on Jan 15, 2019

Conversation

@errordeveloper (Contributor) commented on Jan 15, 2019

Description

Make sure to fetch both public and private subnet IDs when getting all VPC info. This was addressed in #

Fixes #432. This is an improvement on what #429 did, but it uses CloudFormation stack outputs instead, since they give us the subnet IDs grouped by topology easily.
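
For illustration, here is a minimal sketch (in Go, not the actual eksctl code) of the idea behind the change: read the subnet IDs back out of the cluster's CloudFormation stack outputs, grouped by topology, so a nodegroup created later can pick subnets for specific zones. The output key names ("SubnetsPublic", "SubnetsPrivate") and the comma-separated encoding are assumptions for this sketch; the stack name matches the one created in the log below.

```go
package main

import (
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

// importSubnetsFromStack reads a cluster stack's outputs and groups subnet IDs
// by topology ("Public"/"Private"). The output key names and value encoding
// are assumptions for this sketch, not eksctl's actual template contract.
func importSubnetsFromStack(cfn *cloudformation.CloudFormation, stackName string) (map[string][]string, error) {
	out, err := cfn.DescribeStacks(&cloudformation.DescribeStacksInput{
		StackName: aws.String(stackName),
	})
	if err != nil {
		return nil, err
	}
	if len(out.Stacks) == 0 {
		return nil, fmt.Errorf("stack %q not found", stackName)
	}
	subnets := map[string][]string{}
	for _, o := range out.Stacks[0].Outputs {
		switch aws.StringValue(o.OutputKey) {
		case "SubnetsPublic":
			subnets["Public"] = strings.Split(aws.StringValue(o.OutputValue), ",")
		case "SubnetsPrivate":
			subnets["Private"] = strings.Split(aws.StringValue(o.OutputValue), ",")
		}
	}
	return subnets, nil
}

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("eu-north-1")))
	subnets, err := importSubnetsFromStack(cloudformation.New(sess), "eksctl-test-node-az-1-cluster")
	if err != nil {
		panic(err)
	}
	fmt.Printf("public: %v\nprivate: %v\n", subnets["Public"], subnets["Private"])
}
```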

Checklist

  • Code compiles correctly (i.e. make build)
  • All tests passing (i.e. make test)

@errordeveloper (Contributor, Author) commented:

 [0] >> ./eksctl create cluster --name=test-node-az-1 --nodes=1 --region=eu-north-1
[ℹ]  using region eu-north-1
[ℹ]  setting availability zones to [eu-north-1a eu-north-1c eu-north-1b]
[ℹ]  subnets for eu-north-1a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for eu-north-1c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for eu-north-1b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  nodegroup "ng-83dabd5f" will use "ami-06ee67302ab7cf838" [AmazonLinux2/1.11]
[ℹ]  creating EKS cluster "test-node-az-1" in "eu-north-1" region
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-north-1 --name=test-node-az-1'
[ℹ]  creating cluster stack "eksctl-test-node-az-1-cluster"
[ℹ]  creating nodegroup stack "eksctl-test-node-az-1-nodegroup-ng-83dabd5f"
[✔]  all EKS cluster resource for "test-node-az-1" had been created
[✔]  saved kubeconfig as "/Users/ilya/.kube/config"
[ℹ]  nodegroup "ng-83dabd5f" has 0 node(s)
[ℹ]  waiting for at least 1 node(s) to become ready in "ng-83dabd5f"
[ℹ]  nodegroup "ng-83dabd5f" has 1 node(s)
[ℹ]  node "ip-192-168-94-149.eu-north-1.compute.internal" is ready
[ℹ]  kubectl command should work with "/Users/ilya/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "test-node-az-1" in "eu-north-1" region is ready
 [0] >> ./eksctl create nodegroup --cluster=test-node-az-1 --nodes=1 --region=eu-north-1 --node-zones=eu-north-1c
[ℹ]  using region eu-north-1
[ℹ]  nodegroup "ng-47ff879e" will use "ami-06ee67302ab7cf838" [AmazonLinux2/1.11]
[ℹ]  will create a Cloudformation stack for nodegroup ng-47ff879e in cluster test-node-az-1
[ℹ]  creating nodegroup stack "eksctl-test-node-az-1-nodegroup-ng-47ff879e"
[ℹ]  nodegroup "ng-47ff879e" has 0 node(s)
[ℹ]  waiting for at least 1 node(s) to become ready in "ng-47ff879e"
[ℹ]  nodegroup "ng-47ff879e" has 1 node(s)
[ℹ]  node "ip-192-168-47-78.eu-north-1.compute.internal" is ready
[✔]  created nodegroup "ng-47ff879e" in cluster "test-node-az-1"
[ℹ]  will inspect security group configuration for all nodegroups
 [0] >> ./eksctl create nodegroup --cluster=test-node-az-1 --nodes=1 --region=eu-north-1 --node-zones=eu-north-1c -P
[ℹ]  using region eu-north-1
[ℹ]  nodegroup "ng-f7d7b06a" will use "ami-06ee67302ab7cf838" [AmazonLinux2/1.11]
[ℹ]  will create a Cloudformation stack for nodegroup ng-f7d7b06a in cluster test-node-az-1
[ℹ]  creating nodegroup stack "eksctl-test-node-az-1-nodegroup-ng-f7d7b06a"
[ℹ]  nodegroup "ng-f7d7b06a" has 0 node(s)
[ℹ]  waiting for at least 1 node(s) to become ready in "ng-f7d7b06a"

[ℹ]  nodegroup "ng-f7d7b06a" has 1 node(s)
[ℹ]  node "ip-192-168-139-140.eu-north-1.compute.internal" is ready
[✔]  created nodegroup "ng-f7d7b06a" in cluster "test-node-az-1"
[ℹ]  will inspect security group configuration for all nodegroups
 [0] >> kubectl get nodes -o wide
NAME                                             STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP     OS-IMAGE         KERNEL-VERSION               CONTAINER-RUNTIME
ip-192-168-139-140.eu-north-1.compute.internal   Ready    <none>   55s   v1.11.5   192.168.139.140   <none>          Amazon Linux 2   4.14.88-88.76.amzn2.x86_64   docker://17.6.2
ip-192-168-47-78.eu-north-1.compute.internal     Ready    <none>   6m    v1.11.5   192.168.47.78     13.53.171.172   Amazon Linux 2   4.14.88-88.76.amzn2.x86_64   docker://17.6.2
ip-192-168-94-149.eu-north-1.compute.internal    Ready    <none>   18m   v1.11.5   192.168.94.149    13.53.132.191   Amazon Linux 2   4.14.88-88.76.amzn2.x86_64   docker://17.6.2
 [0] >> kubectl get nodes -o wide -l failure-domain.beta.kubernetes.io/zone=eu-north-1c
NAME                                             STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP     OS-IMAGE         KERNEL-VERSION               CONTAINER-RUNTIME
ip-192-168-139-140.eu-north-1.compute.internal   Ready    <none>   1m    v1.11.5   192.168.139.140   <none>          Amazon Linux 2   4.14.88-88.76.amzn2.x86_64   docker://17.6.2
ip-192-168-47-78.eu-north-1.compute.internal     Ready    <none>   6m    v1.11.5   192.168.47.78     13.53.171.172   Amazon Linux 2   4.14.88-88.76.amzn2.x86_64   docker://17.6.2
 [0] >> kubectl get nodes -o wide -l failure-domain.beta.kubernetes.io/zone=eu-north-1c -l alpha.eksctl.io/nodegroup-name=ng-47ff879e 
NAME                                           STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP     OS-IMAGE         KERNEL-VERSION               CONTAINER-RUNTIME
ip-192-168-47-78.eu-north-1.compute.internal   Ready    <none>   7m    v1.11.5   192.168.47.78   13.53.171.172   Amazon Linux 2   4.14.88-88.76.amzn2.x86_64   docker://17.6.2
 [0] >> kubectl get nodes -o wide -l failure-domain.beta.kubernetes.io/zone=eu-north-1c -l alpha.eksctl.io/nodegroup-name=ng-f7d7b06a
NAME                                             STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION               CONTAINER-RUNTIME
ip-192-168-139-140.eu-north-1.compute.internal   Ready    <none>   2m    v1.11.5   192.168.139.140   <none>        Amazon Linux 2   4.14.88-88.76.amzn2.x86_64   docker://17.6.2
 [0] >> 

@errordeveloper changed the title from "WIP: Fix nodegroup AZ selection" to "Fix nodegroup AZ selection" on Jan 15, 2019
@errordeveloper changed the title from "Fix nodegroup AZ selection" to "Fix issue preventing to create single-AZ nodegroups" on Jan 15, 2019
This allows us to also obtain public and private subnets separately,
so that we have a complete set of identifiers to use when building
the nodegroup stack.
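
A hypothetical sketch of how those identifiers can then be used for a single-AZ nodegroup: filter the imported subnets down to the zones requested with --node-zones (the flag is shown in the logs above; the map layout, function name, and placeholder subnet IDs below are illustrative, not eksctl's actual API).

```go
package main

import "fmt"

// selectSubnetsForZones picks the subnets for the requested zones,
// failing loudly if a requested zone has no subnet in the VPC.
func selectSubnetsForZones(subnetsByAZ map[string]string, zones []string) ([]string, error) {
	var selected []string
	for _, zone := range zones {
		id, ok := subnetsByAZ[zone]
		if !ok {
			return nil, fmt.Errorf("no subnet found in availability zone %q", zone)
		}
		selected = append(selected, id)
	}
	return selected, nil
}

func main() {
	// Placeholder subnet IDs, keyed by AZ as in the cluster above.
	publicSubnets := map[string]string{
		"eu-north-1a": "subnet-aaaa",
		"eu-north-1b": "subnet-bbbb",
		"eu-north-1c": "subnet-cccc",
	}
	subnets, err := selectSubnetsForZones(publicSubnets, []string{"eu-north-1c"})
	if err != nil {
		panic(err)
	}
	fmt.Println(subnets) // [subnet-cccc]
}
```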
@dlespiau (Contributor) left a comment:

\o/

@errordeveloper (Contributor, Author) commented:

thanks @dlespiau!

@errordeveloper merged commit 7d763b8 into master on Jan 15, 2019
@errordeveloper deleted the fix-nodegroup-az-selection branch on January 15, 2019 at 15:00