
[E2E] EKS tests are failing #4574

Closed
richardcase opened this issue Oct 12, 2023 · 6 comments · Fixed by #4575
Labels: kind/bug · kind/release-blocking · priority/critical-urgent · triage/accepted
Milestone: v2.3.0

Comments

@richardcase (Member)

/kind bug

What steps did you take and what happened:

Looking at testgrid, the different EKS test suites have been failing consistently since the 3rd/4th of October. For example:

Looking at some of the logs for the failures we see errors like this:

I1011 03:29:27.276687       1 recorder.go:104] "events: Failed to initiate creation of a new EKS control plane: InvalidParameterException: The subnet ID 'eks-extresgc-384bwp-subnet-public-us-west-2a' does not exist (Service: AmazonEC2; Status Code: 400; Error Code: InvalidSubnetID.NotFound

Looking further at the logs for the reconciliation of the subnets, we see this for the mentioned subnet:

    {
        "id": "eks-extresgc-384bwp-subnet-public-us-west-2a",
        "resourceID": "subnet-0b94dc61d85f0193d",
        "cidrBlock": "10.0.0.0/20",
        "availabilityZone": "us-west-2a",
        "isPublic": true,
        "routeTableId": "rtb-00a1670a0816f30c6",
        "natGatewayId": "nat-0f38df4fc916ed6c4",
        "tags": {
            "Name": "eks-extresgc-384bwp-subnet-public-us-west-2a",
            "kubernetes.io/cluster/eks-extresgc-o755gy_eks-extresgc-384bwp-control-plane": "shared",
            "kubernetes.io/role/elb": "1",
            "sigs.k8s.io/cluster-api-provider-aws/cluster/eks-extresgc-384bwp": "owned",
            "sigs.k8s.io/cluster-api-provider-aws/role": "public"
        }
    },

We should be passing subnet-0b94dc61d85f0193d and not eks-extresgc-384bwp-subnet-public-us-west-2a when creating the EKS cluster.

Looking at the code here, we can see that it is passing the value of ID rather than the value of ResourceID.

ResourceID is a new field introduced as part of #4474. We need to update the EKS code to use ResourceID instead.
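For illustration, here is a minimal sketch of the intended selection logic. The types and helper names below are simplified assumptions (the real SubnetSpec lives in CAPA's API package, and the actual fix is in #4575): prefer the AWS-assigned ResourceID when set, falling back to ID for subnets that predate the new field.

```go
// Hypothetical sketch: when building the subnet list for the EKS
// CreateCluster call, use the AWS-assigned ResourceID (e.g.
// "subnet-0b94dc61d85f0193d") rather than the user-facing ID, which for
// managed subnets may be a name like
// "eks-extresgc-384bwp-subnet-public-us-west-2a".
package eks

// SubnetSpec mirrors the two fields discussed in this issue; the real
// definition lives in the CAPA API types.
type SubnetSpec struct {
	ID         string // user-facing identifier; may be a name for managed subnets
	ResourceID string // AWS-assigned subnet ID, introduced in #4474
}

// effectiveID returns the identifier that is safe to pass to the EKS API.
func effectiveID(s SubnetSpec) string {
	if s.ResourceID != "" {
		return s.ResourceID
	}
	return s.ID // fall back for subnets created before ResourceID existed
}

// subnetIDs collects the EKS-safe identifier for every subnet.
func subnetIDs(subnets []SubnetSpec) []string {
	ids := make([]string, 0, len(subnets))
	for _, s := range subnets {
		ids = append(ids, effectiveID(s))
	}
	return ids
}
```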

What did you expect to happen:

The EKS e2e tests to not fail.

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Cluster-api-provider-aws version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):
@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. needs-priority needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Oct 12, 2023
@richardcase (Member Author)

/triage accepted
/priority critical-urgent

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Oct 12, 2023
@richardcase (Member Author)

/assign

@richardcase (Member Author)

Additional error:

I1012 11:35:17.560593       1 recorder.go:104] "events: Failed to create managed RouteTable: RouteTableLimitExceeded: The maximum number of route tables has been reached.\n\tstatus code: 400, request id: f54b2239-642b-467e-adc5-166279cf98ff" type="Warning" object={"kind":"AWSManagedControlPlane","namespace":"eks-nodes-ji4qro","name":"eks-nodes-e2szrt-control-plane","uid":"8dddca77-4fb2-4415-aa47-efb68b2ba26b","apiVersion":"controlplane.cluster.x-k8s.io/v1beta2","resourceVersion":"20880"} reason="FailedCreateRouteTable"

Looks like we may need to increase the RT limits.
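If the limit does need raising, that goes through AWS Service Quotas. A hedged Go sketch using aws-sdk-go v1, where the quota code for "Route tables per VPC" is an assumption that should be verified in the Service Quotas console:

```go
// Hedged sketch: request an increase of the "Route tables per VPC" quota
// via Service Quotas. The quota code below is an assumption; verify it in
// the Service Quotas console before use.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/servicequotas"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
	sq := servicequotas.New(sess)

	out, err := sq.RequestServiceQuotaIncrease(&servicequotas.RequestServiceQuotaIncreaseInput{
		ServiceCode:  aws.String("vpc"),
		QuotaCode:    aws.String("L-589F43AA"), // assumed code for "Route tables per VPC"
		DesiredValue: aws.Float64(400),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("request status:", aws.StringValue(out.RequestedQuota.Status))
}
```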

@richardcase (Member Author)

The limit is 200 route tables per VPC (source). We don't create anywhere near that many as part of creating a cluster... so something weird must be going on; perhaps we create another route table on every reconciliation loop. Investigating.

@richardcase (Member Author)

We are creating a new route table on every reconciliation loop :( And we hit the limit. Searching for "Created route table" in the logs of a failure yields 200+ entries...each with a different route table ID.
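The usual guard against this is an idempotent find-or-create: look the route table up before creating one. A minimal sketch with aws-sdk-go v1, where the tag keys and matching scheme are illustrative rather than CAPA's exact behaviour:

```go
// Hedged sketch of an idempotent reconcile step: reuse an existing route
// table (matched by VPC and Name tag) instead of creating a new one on
// every loop. Tag keys here are illustrative, not CAPA's exact scheme.
package network

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
)

func reconcileRouteTable(client ec2iface.EC2API, vpcID, name string) (*ec2.RouteTable, error) {
	// First, look for a route table we already created in an earlier loop.
	out, err := client.DescribeRouteTables(&ec2.DescribeRouteTablesInput{
		Filters: []*ec2.Filter{
			{Name: aws.String("vpc-id"), Values: aws.StringSlice([]string{vpcID})},
			{Name: aws.String("tag:Name"), Values: aws.StringSlice([]string{name})},
		},
	})
	if err != nil {
		return nil, err
	}
	if len(out.RouteTables) > 0 {
		return out.RouteTables[0], nil // already exists: do not create another
	}

	// Nothing found: create and tag it so the next loop finds it.
	created, err := client.CreateRouteTable(&ec2.CreateRouteTableInput{
		VpcId: aws.String(vpcID),
		TagSpecifications: []*ec2.TagSpecification{{
			ResourceType: aws.String(ec2.ResourceTypeRouteTable),
			Tags:         []*ec2.Tag{{Key: aws.String("Name"), Value: aws.String(name)}},
		}},
	})
	if err != nil {
		return nil, err
	}
	return created.RouteTable, nil
}
```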

@richardcase richardcase added this to the v2.3.0 milestone Oct 16, 2023
@richardcase richardcase added the kind/release-blocking Issues or PRs that need to be closed before the next release label Oct 16, 2023
@richardcase (Member Author)

I haven't seen the 200+ route table issue again. I suspect that now the EKS cluster is creating properly, we aren't looping around and creating route tables on every reconcile. We can confirm this on another PR if we run the e2e tests.
