
Inconsistent kubernetes.io/cluster/ tag usage with EKS #3572

Closed
richardcase opened this issue Jul 4, 2022 · 8 comments · Fixed by #3573
Labels: good first issue, help wanted, kind/bug, priority/backlog, triage/accepted

Comments

@richardcase
Member

/kind bug
/help
/good-first-issue
/priority backlog
/triage accepted

What steps did you take and what happened:
When creating an EKS cluster that then has Services of type LoadBalancer within it, the kubernetes.io/cluster/ tags are not consistent between the EKS cluster and the resources created by the CCM, such as ELBs/NLBs.

Create a cluster with the following manifest:

---
apiVersion: cluster.x-k8s.io/v1alpha4
kind: Cluster
metadata:
  name: "capi-managed-test"
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    kind: AWSManagedControlPlane
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    name: "capi-managed-test-control-plane"
  controlPlaneRef:
    kind: AWSManagedControlPlane
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    name: "capi-managed-test-control-plane"
---
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
metadata:
  name: "capi-managed-test-control-plane"
spec:
  region: "eu-west-2"
  sshKeyName: "capi-management"
  version: "v1.21.0"

When the Cluster is Ready, look at the tags for the EKS cluster in the AWS console and you'll see:

kubernetes.io/cluster/capi-managed-test=owned

Now create a Service of type LoadBalancer:

kubectl get secrets capi-managed-test-user-kubeconfig -o jsonpath={.data.value} | base64 -d > test.kubeconfig
kubectl apply --kubeconfig test.kubeconfig -f https://raw.githubusercontent.com/richardcase/cluster-api-provider-aws/1718_cleanup_lb/test/e2e/data/gcworkload.yaml

Go to the EC2 service in the AWS console and look at the tags for the load balancer that was created; you'll see:

kubernetes.io/cluster/default_capi-managed-test-control-plane=owned

We should make the tag name on the EKS cluster use the "eksclustername" so both keys match.
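For illustration, a minimal Go sketch (not the actual cluster-api-provider-aws code; the helper name and variables are assumptions) of how the two tag keys diverge today and what aligning the tag on the EKS cluster name would give:

package main

import "fmt"

// clusterTagKey builds the well-known AWS cloud provider ownership tag key,
// kubernetes.io/cluster/<name>. The helper name is an assumption for this
// sketch, not necessarily the function used in the provider.
func clusterTagKey(name string) string {
	return "kubernetes.io/cluster/" + name
}

func main() {
	capiClusterName := "capi-managed-test"                      // name of the CAPI Cluster object
	eksClusterName := "default_capi-managed-test-control-plane" // generated EKS cluster name the CCM uses

	// Today: the EKS cluster itself is tagged with the CAPI cluster name...
	fmt.Println("EKS cluster tag:  " + clusterTagKey(capiClusterName) + "=owned")
	// ...while the CCM tags the ELBs/NLBs it creates with the EKS cluster name.
	fmt.Println("ELB/NLB tag:      " + clusterTagKey(eksClusterName) + "=owned")

	// Proposed: tag the EKS cluster with the EKS cluster name as well, so both keys match.
	fmt.Println("proposed EKS tag: " + clusterTagKey(eksClusterName) + "=owned")
}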

What did you expect to happen:
I expect the kubernetes.io/cluster/ tag name to be consistent.

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Cluster-api-provider-aws version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):
@k8s-ci-robot
Contributor

@richardcase:
This request has been marked as suitable for new contributors.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.

In response to this:

(the issue description above, quoted in full)
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the kind/bug, priority/backlog, good first issue, triage/accepted and help wanted labels on Jul 4, 2022
@knabben
Member

knabben commented Jul 4, 2022

I will give it a try!

/assign

@knabben
Member

knabben commented Jul 4, 2022

[screenshot]
This seems to happen with the security groups and EC2 instances as well. They're using the cluster name from the configuration instead... tracking it down...

@knabben
Member

knabben commented Jul 5, 2022

#3329
This was changed to allow LBs to work with the correct tagging; renaming this function would probably reintroduce the regression.

@faiq
Contributor

faiq commented Jul 5, 2022

Yes, that PR was needed to make LBs work on EKS. It looks like I missed this inconsistency with the cluster name.

@richardcase
Member Author

Could we change the tag when we create the EKS cluster in AWS?

@faiq
Contributor

faiq commented Jul 5, 2022

I shared what I think should fix it with @knabben in their PR here: #3573 (comment)

@knabben
Member

knabben commented Jul 5, 2022

If the agreed final tag is the EKS cluster name, the change in the EKS createCluster will work; I'm updating the PR.
Still wondering whether there's a need for parity with the unmanaged cluster tags.
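If that's the direction, a rough sketch (assumed names and wiring, not the actual change in #3573) of adding the tag at EKS cluster creation time with aws-sdk-go v1 could look like:

package eksclustertags

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/eks"
)

// createEKSClusterWithOwnedTag tags the EKS control plane at creation time with
// kubernetes.io/cluster/<eksClusterName>=owned, the same key the CCM later puts
// on the load balancers it creates. The function name, role ARN and subnet
// wiring here are illustrative assumptions.
func createEKSClusterWithOwnedTag(sess *session.Session, eksClusterName, roleARN string, subnetIDs []string) error {
	client := eks.New(sess)

	input := &eks.CreateClusterInput{
		Name:    aws.String(eksClusterName),
		RoleArn: aws.String(roleARN),
		ResourcesVpcConfig: &eks.VpcConfigRequest{
			SubnetIds: aws.StringSlice(subnetIDs),
		},
		// Tag the control plane with the same ownership key the CCM will use.
		Tags: map[string]*string{
			"kubernetes.io/cluster/" + eksClusterName: aws.String("owned"),
		},
	}

	if _, err := client.CreateCluster(input); err != nil {
		return fmt.Errorf("creating EKS cluster %q: %w", eksClusterName, err)
	}
	return nil
}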
