
🐛 Standardize cluster name in Fargate role names to avoid errors on mismatch between Cluster CR and EKS cluster name #5111

Merged · 2 commits merged into kubernetes-sigs:main on Oct 15, 2024

Conversation

@alam0rt (Contributor) commented Sep 4, 2024

What type of PR is this?
/kind bug

What this PR does / why we need it:

The validating webhook

// Spec is immutable, but if the new RoleName is the generated one(or default if EnableIAM is disabled) and
// the old RoleName is nil, then ignore checking that field.
if old.Spec.RoleName == "" {
	roleName, err := eks.GenerateEKSName(
		"fargate",
		fmt.Sprintf("%s-%s", r.Spec.ClusterName, r.Spec.ProfileName),
		maxIAMRoleNameLength,
	)

uses the Fargate profile's spec.clusterName in order to generate the role name.

This differs from the EKS role service, which instead uses scope.KubernetesClusterName():

s.scope.Info("no EKS fargate role specified, using role based on fargate profile name")
roleName, err = eks.GenerateEKSName(
	"fargate",
	fmt.Sprintf("%s-%s", s.scope.KubernetesClusterName(), s.scope.FargateProfile.Spec.ProfileName),
	maxIAMRoleNameLength,
)

The problem arises when the cluster resource is named one thing (e.g. foo) but was created in EKS with a prefixed name (e.g. default_foo-control-plane). In that case, the CAPA controller fails to validate its own change.

E0904 05:13:30.268899       1 controller.go:329] "Reconciler error" err="failed to patch AWSFargateProfile default/foo-fargate-0: admission webhook \"validation.awsfargateprofile.infrastructure.cluster.x-k8s.io\" denied the request: AWSFargateProfile.infrastructure.cluster.x-k8s.io \"foo-fargate-0\" is invalid: spec: Invalid value: v1beta2.FargateProfileSpec{ClusterName:\"foo\", ProfileName:\"default_foo-fargate-0\", SubnetIDs:[]string(nil), AdditionalTags:v1beta2.Tags(nil), RoleName:\"default_foo-control-plane-default_foo-fargate-0_fargate\", Selectors:[]v1beta2.FargateSelector{v1beta2.FargateSelector{Labels:map[string]string(nil), Namespace:\"bar\"}}}: is immutable" controller="awsfargateprofile" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="AWSFargateProfile" AWSFargateProfile="default/foo-fargate-0" namespace="default" name="foo-fargate-0" reconcileID="f043997e-5aaf-4b20-a810-7d84e957dd71"

Note that the RoleName looks off: default_foo-control-plane-default_foo-fargate-0_fargate comes from calling eks.GenerateEKSName() with the EKS cluster's actual name (scope.KubernetesClusterName()).

Also, changing the Fargate profile's .spec.clusterName to match scope.KubernetesClusterName() fails if that value does not match the Cluster custom resource's metadata.name:

capa-controller-manager-6644fb894c-nvqwd manager I0904 05:43:23.906142       1 awsfargatepool_controller.go:87] "Failed to retrieve Cluster from AWSFargateProfile" controller="awsfargateprofile" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="AWSFargateProfile" AWSFargateProfile="default/manager-fargate-2" namespace="default" name="manager-fargate-2" reconcileID="557ca8d1-9108-4c81-89f5-86989f90811f"
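
To make the role-name mismatch concrete, here is a minimal sketch of what each code path feeds into eks.GenerateEKSName, using hypothetical values from the scenario above (the suffix handling follows the RoleName pattern visible in the log):

package main

import "fmt"

func main() {
	// Hypothetical names matching the scenario described above.
	crClusterName := "foo"                        // r.Spec.ClusterName (webhook path)
	eksClusterName := "default_foo-control-plane" // s.scope.KubernetesClusterName() (service path)
	profileName := "default_foo-fargate-0"

	// Input the validating webhook hands to eks.GenerateEKSName:
	fmt.Printf("%s-%s\n", crClusterName, profileName) // foo-default_foo-fargate-0

	// Input the EKS role service hands to eks.GenerateEKSName:
	fmt.Printf("%s-%s\n", eksClusterName, profileName) // default_foo-control-plane-default_foo-fargate-0

	// After the "fargate" suffix is appended, the two generated role names
	// differ, so the webhook rejects the patch the service itself produced.
}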

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

Special notes for your reviewer:

Checklist:

  • squashed commits
  • includes documentation
  • includes emojis
  • adds unit tests
  • adds or updates e2e tests

Release note:

fix: Fargate: Standardize cluster name in role names to avoid errors on mismatch between Cluster CR and EKS cluster name

@k8s-ci-robot added the do-not-merge/work-in-progress, do-not-merge/release-note-label-needed, and cncf-cla: yes labels (Sep 4, 2024)
@k8s-ci-robot (Contributor)

Welcome @alam0rt!

It looks like this is your first PR to kubernetes-sigs/cluster-api-provider-aws 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/cluster-api-provider-aws has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot added the needs-ok-to-test and size/XS labels (Sep 4, 2024)
@k8s-ci-robot (Contributor)

Hi @alam0rt. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@@ -183,7 +183,7 @@ func (s *NodegroupService) reconcileNodegroupIAMRole() error {
s.scope.Info("no EKS nodegroup role specified, using role based on nodegroup name")
roleName, err = eks.GenerateEKSName(
"nodegroup-iam-service-role",
fmt.Sprintf("%s-%s", s.scope.KubernetesClusterName(), s.scope.NodegroupName()),
@alam0rt (Contributor, Author) commented:

Will confirm shortly whether this issue also exists in the webhook for nodegroup IAM roles.

@@ -304,7 +304,7 @@ func (s *FargateService) reconcileFargateIAMRole() (requeue bool, err error) {
s.scope.Info("no EKS fargate role specified, using role based on fargate profile name")
roleName, err = eks.GenerateEKSName(
"fargate",
fmt.Sprintf("%s-%s", s.scope.KubernetesClusterName(), s.scope.FargateProfile.Spec.ProfileName),
@alam0rt (Contributor, Author) commented:

Not sure if we should use s.scope.FargateProfile.Spec.ClusterName instead - unsure of the conventions at play.

@alam0rt changed the title from "fix: issue with validating webhook and role names" to "🐛 ⚠️ issue with validating webhook and role names" (Sep 4, 2024)
@alam0rt changed the title to "🐛 issue with validating webhook and role names" (Sep 4, 2024)
@alam0rt changed the title to "🐛 issue with validating webhook and Fargate role names" (Sep 4, 2024)
@alam0rt marked this pull request as ready for review (September 4, 2024 06:27)
@k8s-ci-robot removed the do-not-merge/work-in-progress label (Sep 4, 2024)
@AndiDog (Contributor) commented Sep 4, 2024

/retitle 🐛 Standardize cluster name in Fargate role names to avoid errors on mismatch between Cluster CR and EKS cluster name

@k8s-ci-robot changed the title to "🐛 Standardize cluster name in Fargate role names to avoid errors on mismatch between Cluster CR and EKS cluster name" (Sep 4, 2024)
@AndiDog (Contributor) commented Sep 4, 2024

I briefly looked into this. Following the two s.scope.ClusterName() calls leads to these implementations:

// ClusterName returns the cluster name.
func (s *ManagedMachinePoolScope) ClusterName() string {
	return s.ControlPlane.Spec.EKSClusterName
}
// ClusterName returns the cluster name.
func (s *FargateProfileScope) ClusterName() string {
	return s.Cluster.Name
}

So these functions are inconsistent.

If I do the same go-to-definition without your PR changes, I get return s.ControlPlane.Spec.EKSClusterName for both. So I think it already is consistent right now, without changes. But that's from a five-minute look. @alam0rt, as author, please take the time to check this in depth to be really sure we have a bug here, and also whether a test covers it.

@alam0rt (Contributor, Author) commented Sep 4, 2024

> I briefly looked into this. Following the two s.scope.ClusterName() calls leads to these implementations:
>
> // ClusterName returns the cluster name.
> func (s *ManagedMachinePoolScope) ClusterName() string {
> 	return s.ControlPlane.Spec.EKSClusterName
> }
> // ClusterName returns the cluster name.
> func (s *FargateProfileScope) ClusterName() string {
> 	return s.Cluster.Name
> }
>
> So these functions are inconsistent.
>
> If I do the same go-to-definition without your PR changes, I get return s.ControlPlane.Spec.EKSClusterName for both. So I think it already is consistent right now, without changes. But that's from a five-minute look. @alam0rt, as author, please take the time to check this in depth to be really sure we have a bug here, and also whether a test covers it.

I'll write up a test tomorrow to hopefully cover the issue we are facing.

The issue is not consistency between the Fargate / ManagedMachinePool scopes, but rather how the awsfargateprofile webhook generates the role name.

The validating webhook has

roleName, err := eks.GenerateEKSName(
	"fargate",
	fmt.Sprintf("%s-%s", r.Spec.ClusterName, r.Spec.ProfileName),
	maxIAMRoleNameLength,
)

The EKS role service has

roleName, err = eks.GenerateEKSName(
	"fargate",
	fmt.Sprintf("%s-%s", s.scope.KubernetesClusterName(), s.scope.FargateProfile.Spec.ProfileName),
	maxIAMRoleNameLength,
)

In the scope of Fargate, KubernetesClusterName() is defined as:

// KubernetesClusterName is the name of the EKS cluster name.
func (s *FargateProfileScope) KubernetesClusterName() string {
	return s.ControlPlane.Spec.EKSClusterName
}

In my case, this would return default_foo-control-plane, and as a result the generated role name would be different from the one the validating webhook expects.
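
For reference, IAM role names are capped at 64 characters, which is why these call sites pass maxIAMRoleNameLength. Below is a minimal sketch of a generator in this style, purely illustrative and not CAPA's actual eks.GenerateEKSName implementation (the truncation/hash strategy here is an assumption):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// generateName mimics the shape of eks.GenerateEKSName: append the suffix and,
// if the result would exceed maxLength, truncate and disambiguate with a short
// hash. Assumed behavior for illustration only.
func generateName(suffix, name string, maxLength int) string {
	full := fmt.Sprintf("%s_%s", name, suffix)
	if len(full) <= maxLength {
		return full
	}
	sum := sha256.Sum256([]byte(full))
	hash := hex.EncodeToString(sum[:])[:8]
	return fmt.Sprintf("%s-%s", full[:maxLength-len(hash)-1], hash)
}

func main() {
	// 55 characters, so it fits within IAM's 64-character limit unchanged:
	fmt.Println(generateName("fargate", "default_foo-control-plane-default_foo-fargate-0", 64))
}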

@alam0rt (Contributor, Author) commented Sep 5, 2024

Alright, I ran this code and can confirm that it fixes the issue.

So given

apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSFargateProfile
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: manager
  name: manager-fargate-test
  namespace: default
spec:
  clusterName: manager
  selectors:
  - namespace: foo

The resource is successfully mutated by the fargate controller and passes the validation webhook update check.

apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSFargateProfile
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"infrastructure.cluster.x-k8s.io/v1beta2","kind":"AWSFargateProfile","metadata":{"annotations":{},"labels":{"cluster.x-k8s.io/cluster-name":"manager"},"name":"manager-fargate-test","namespace":"default"},"spec":{"clusterName":"manager","selectors":[{"namespace":"foo"}]}}
  creationTimestamp: "2024-09-05T00:34:15Z"
  finalizers:
  - awsfargateprofile.infrastructure.cluster.x-k8s.io
  generation: 2
  labels:
    cluster.x-k8s.io/cluster-name: manager
  name: manager-fargate-test
  namespace: default
  resourceVersion: "517679349"
  uid: 2445c25c-6b6d-4787-a6a8-ef4d22f89a7d
spec:
  clusterName: manager
  profileName: default_manager-fargate-test
  roleName: manager-default_manager-fargate-test_fargate
  selectors:
  - namespace: foo

Whereas before, it was failing because the roleName was being generated as default_manager-control-plane-default_manager-fargate-test_fargate, as evidenced by the log line:

E0904 05:13:30.268899       1 controller.go:329] "Reconciler error" err="failed to patch AWSFargateProfile default/manager-fargate-test: admission webhook \"validation.awsfargateprofile.infrastructure.cluster.x-k8s.io\" denied the request: AWSFargateProfile.infrastructure.cluster.x-k8s.io \"manager-fargate-test\" is invalid: spec: Invalid value: v1beta2.FargateProfileSpec{ClusterName:\"manager\", ProfileName:\"default_manager-fargate-test\", SubnetIDs:[]string(nil), AdditionalTags:v1beta2.Tags(nil), RoleName:\"default_manager-control-plane-default_manager-fargate-test_fargate\", Selectors:[]v1beta2.FargateSelector{v1beta2.FargateSelector{Labels:map[string]string(nil), Namespace:\"bar\"}}}: is immutable" controller="awsfargateprofile" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="AWSFargateProfile" AWSFargateProfile="default/manager-fargate-test" namespace="default" name="manager-fargate-test" reconcileID="f043997e-5aaf-4b20-a810-7d84e957dd71"

Let me try writing up a test to capture this.
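
For what it's worth, here is a self-contained sketch of the check such a test could make. The types and helpers below are simplified stand-ins (the generateRoleName pattern follows the RoleName format observed in this PR; CAPA's real webhook and eks.GenerateEKSName differ in detail):

package main

import (
	"fmt"
	"testing"
)

// fargateProfileSpec is a stand-in for v1beta2.FargateProfileSpec.
type fargateProfileSpec struct {
	ClusterName string
	ProfileName string
	RoleName    string
}

// generateRoleName follows the "<clusterName>-<profileName>_fargate" pattern
// observed in this PR; it stands in for eks.GenerateEKSName.
func generateRoleName(spec fargateProfileSpec) string {
	return fmt.Sprintf("%s-%s_fargate", spec.ClusterName, spec.ProfileName)
}

// validateUpdate mimics the webhook's immutability check: an update is allowed
// when the old RoleName was empty and the new one equals the generated default.
func validateUpdate(old, updated fargateProfileSpec) error {
	if old.RoleName == "" && updated.RoleName == generateRoleName(updated) {
		return nil
	}
	if old.RoleName != updated.RoleName {
		return fmt.Errorf("spec is immutable")
	}
	return nil
}

func TestGeneratedRoleNamePassesWebhook(t *testing.T) {
	old := fargateProfileSpec{ClusterName: "manager", ProfileName: "default_manager-fargate-test"}
	updated := old
	updated.RoleName = "manager-default_manager-fargate-test_fargate"

	if err := validateUpdate(old, updated); err != nil {
		t.Fatalf("expected the generated role name to be accepted: %v", err)
	}
}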

@alam0rt (Contributor, Author) commented Sep 5, 2024

> Let me try writing up a test to capture this.

So I've noticed that there aren't any tests (at least not obviously) covering the AWSFargateProfileReconciler. Am I looking in the wrong place, or are there no tests?

@AndiDog (Contributor) commented Sep 30, 2024

> Let me try writing up a test to capture this.
>
> So I've noticed that there aren't any tests (at least not obviously) covering the AWSFargateProfileReconciler. Am I looking in the wrong place, or are there no tests?

Indeed, no tests. I guess Fargate is too rarely used for such contributions, so I'm fine to go ahead without them, since the fix is small and well-scoped.

/ok-to-test
/lgtm

@k8s-ci-robot added the ok-to-test label and removed the needs-ok-to-test label (Sep 30, 2024)
@k8s-ci-robot added the lgtm label (Sep 30, 2024)
@richardcase (Member)

Thanks for this @alam0rt. The Fargate functionality needs some love; it's still experimental and could do with e2e tests and other such things.

/approve

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: richardcase

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label (Sep 30, 2024)
@richardcase (Member)

/cherrypick release-2.6

@k8s-infra-cherrypick-robot

@richardcase: once the present PR merges, I will cherry-pick it on top of release-2.6 in a new PR and assign it to you.

In response to this:

> /cherrypick release-2.6

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the release-note label and removed the do-not-merge/release-note-label-needed label (Oct 15, 2024)
@damdo (Member) commented Oct 15, 2024

@richardcase I updated the release notes for this PR as it didn't have them.

@k8s-ci-robot merged commit 06dd716 into kubernetes-sigs:main on Oct 15, 2024
24 checks passed
@k8s-infra-cherrypick-robot

@richardcase: new pull request created: #5158

In response to this:

> /cherrypick release-2.6

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Labels
  • approved - Indicates a PR has been approved by an approver from all required OWNERS files.
  • cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
  • lgtm - "Looks good to me", indicates that a PR is ready to be merged.
  • needs-priority
  • ok-to-test - Indicates a non-member PR verified by an org member that is safe to test.
  • release-note - Denotes a PR that will be considered when it comes time to generate release notes.
  • size/XS - Denotes a PR that changes 0-9 lines, ignoring generated files.