
OCM-2641, OCM-17: Specify worker disk size for machine pools #244

Merged

merged 3 commits into terraform-redhat:main from root-disk-size on Oct 29, 2023

Conversation

Member

@JohnStrunk JohnStrunk commented Aug 17, 2023

  • OCM-2641: Adds worker_disk_size to the cluster resource. It is used only at cluster creation to set the disk size of the default machine pool (a brief usage sketch follows below).
  • OCM-17: Adds disk_size to the machinepool resource. It is used only at pool creation time.

Supersedes #100
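
For illustration, a minimal sketch of the create-only behavior described above. The helper name and wiring are assumptions, not this PR's actual code; the builder calls mirror the diff shown later in this thread.

import cmv1 "github.com/openshift-online/ocm-sdk-go/clustersmgmt/v1"

// setWorkerDiskSize is a hypothetical helper: it applies worker_disk_size to the
// cluster's compute (default machine pool) root volume only when the user set a
// value. When unset, the OCM backend's flavor default is used.
func setWorkerDiskSize(nodes *cmv1.ClusterNodesBuilder, workerDiskSize *int64) {
	if workerDiskSize == nil {
		return // not set: let the backend pick the default size
	}
	nodes.ComputeRootVolume(
		cmv1.NewRootVolume().AWS(
			cmv1.NewAWSVolume().Size(int(*workerDiskSize)),
		),
	)
}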

@openshift-ci-robot

openshift-ci-robot commented Aug 17, 2023

@JohnStrunk: This pull request references OCM-17 which is a valid jira issue.

In response to this:

  • Adds worker_disk_size to the cluster resource. This is used only at cluster creation to set the disk size of the default machine pool.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci

openshift-ci bot commented Aug 17, 2023

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@JohnStrunk JohnStrunk force-pushed the root-disk-size branch 2 times, most recently from cdff057 to 0907376 on August 17, 2023 18:00
@openshift-ci-robot

openshift-ci-robot commented Aug 17, 2023

@JohnStrunk: This pull request references OCM-17 which is a valid jira issue.

In response to this:

  • OCM-2641: Adds worker_disk_size to the cluster resource. It is used only at cluster creation to set the disk size of the default machine pool.
  • OCM-17: Adds disk_size to the machinepool resource. It is used only at pool creation time.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

1 similar comment

@JohnStrunk JohnStrunk marked this pull request as ready for review August 17, 2023 18:06
@openshift-ci openshift-ci bot requested review from sagidayan and vkareh August 17, 2023 18:06
@JohnStrunk
Member Author

JohnStrunk commented Aug 17, 2023

/hold
There seems to be a bug affecting modification of clusters with non-standard root disk sizes (OCM-3323).

@JohnStrunk
Member Author

@bardielle This is ready for a first look, though I want to do some more testing once the above bug is resolved.

@openshift-ci-robot

openshift-ci-robot commented Aug 17, 2023

@JohnStrunk: This pull request references OCM-17 which is a valid jira issue.

In response to this:

  • OCM-2641: Adds worker_disk_size to the cluster resource. It is used only at cluster creation to set the disk size of the default machine pool.
  • OCM-17: Adds disk_size to the machinepool resource. It is used only at pool creation time.

Supersedes #100

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Optional:      true,
Computed:      true,
PlanModifiers: []tfsdk.AttributePlanModifier{
	tfsdk.UseStateForUnknown(),
Member

I think we can add a validation here on the disk size value
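
One possible shape for that check, sketched as a plain Go helper rather than a specific framework validator; the bounds are placeholders, not values from this PR.

import "fmt"

// validateDiskSize is a hypothetical range check of the kind suggested above.
// minGiB and maxGiB are illustrative bounds only.
func validateDiskSize(sizeGiB int64) error {
	const minGiB, maxGiB = 128, 16384
	if sizeGiB < minGiB || sizeGiB > maxGiB {
		return fmt.Errorf("disk size %d GiB must be between %d and %d GiB", sizeGiB, minGiB, maxGiB)
	}
	return nil
}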

@JohnStrunk
Member Author

/unhold
This is ready

@amalykhi
Contributor

/retest

3 similar comments

@JohnStrunk
Member Author

/retest

@@ -15,22 +15,22 @@ var _ = Describe("Cluster", func() {
 	})
 	Context("CreateNodes validation", func() {
 		It("Autoscaling disabled minReplicas set - failure", func() {
-			err := cluster.CreateNodes(false, nil, pointer(int64(2)), nil, nil, nil, nil, false)
+			err := cluster.CreateNodes(false, nil, pointer(int64(2)), nil, nil, nil, nil, false, nil)
Member

Please add one test with a non-empty diskSize value.

Member Author

done
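
A rough sketch of the kind of test asked for, meant to sit inside the existing "CreateNodes validation" context. The argument order follows the diff above; the concrete value and expectation are assumptions and depend on the other validation rules.

It("Accepts a non-nil disk size", func() {
	diskSize := int64(400) // illustrative size in GiB
	err := cluster.CreateNodes(false, nil, nil, nil, nil, nil, nil, false, &diskSize)
	Expect(err).NotTo(HaveOccurred()) // assumes the all-nil argument combination is otherwise valid
})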

if workerDiskSize != nil {
	nodes.ComputeRootVolume(
		cmv1.NewRootVolume().AWS(
			cmv1.NewAWSVolume().Size(int(*workerDiskSize)),
Member

Do we need to set a default value here, or just use the default value in the OCM backend?
What is the behavior in the CLI?


In the CLI, if the size is not set or identical to the flavor's default size, we do nothing and let the backend handle the defaults.
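
Expressed as a condition (names are illustrative, not taken from the CLI code):

// shouldSendRootVolume mirrors the CLI behavior described above: only send an
// explicit root volume size when the user asked for one and it differs from
// the flavor's default.
func shouldSendRootVolume(requested *int64, flavorDefaultGiB int64) bool {
	return requested != nil && *requested != flavorDefaultGiB
}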

@@ -184,6 +184,16 @@ func (t *MachinePoolResourceType) GetSchema(ctx context.Context) (result tfsdk.S
ValueCannotBeChangedModifier(),
},
},
"disk_size": {
Member

Please generate the attributes in the documentation

Member Author

done

Member

regenerate it please

Member Author

done

@nirarg
Member

nirarg commented Sep 6, 2023

/hold Will be merged only after v1.3.0 release

@nirarg
Member

nirarg commented Sep 6, 2023

/hold
This feature is on hold
Will be merged only after v1.3.0 release

@JohnStrunk
Member Author

CI failure during cluster destroy:

Error: deleting IAM Role (rhcsci-soph7-ControlPlane-Role): unable to detach instance profiles: AccessDenied: User: arn:aws:iam::425464789085:user/terraform-provider-minimal is not authorized to perform: iam:RemoveRoleFromInstanceProfile on resource: instance profile rhcsci-soph7-v9jl4-master-profile because no identity-based policy allows the iam:RemoveRoleFromInstanceProfile action
	status code: 403, request id: 33115eec-f165-4a5b-a4c7-3d908af11df0

Error: deleting IAM Role (rhcsci-soph7-Worker-Role): unable to detach instance profiles: AccessDenied: User: arn:aws:iam::425464789085:user/terraform-provider-minimal is not authorized to perform: iam:RemoveRoleFromInstanceProfile on resource: instance profile rhcsci-soph7-v9jl4-worker-profile because no identity-based policy allows the iam:RemoveRoleFromInstanceProfile action
	status code: 403, request id: cd877cb3-7361-4d5d-9628-5afdeeca52bd

Seems to be an AWS permission problem.

@JohnStrunk
Member Author

/retest

Adds `worker_disk_size` to the cluster resource. This is used only at
cluster creation to set the disk size of the default MP.

Signed-off-by: John Strunk <[email protected]>
@@ -252,6 +252,10 @@ func (r *ClusterRosaClassicResource) Schema(ctx context.Context, req resource.Sc
stringplanmodifier.RequiresReplace(),
},
},
"worker_disk_size": schema.Int64Attribute{
Member

we should throw an error if the user tries to update it

Contributor

I think we can add a plan modifier that prevents this value being changed.

Member Author

This option needs to be treated identically to what's being done in #228 since this setting is configuring the default MP.

Since the framework upgrade, we have removed all of the ValueCannotBeChanged checks. Should I look into re-introducing that?
And what about #228? Should that be changed as well?
If so, I'd prefer to move these checks to another PR.

Member

I think it should also be changed in #228
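
For reference, a rough sketch of what such a change-blocking modifier could look like with the current plugin framework. This is not the repo's ValueCannotBeChangedModifier; the type name and messages are illustrative.

import (
	"context"

	"github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
)

// immutableInt64 is a hypothetical plan modifier that rejects in-place changes
// to an int64 attribute once the resource exists.
type immutableInt64 struct{}

func (m immutableInt64) Description(context.Context) string {
	return "The value of this attribute cannot be changed after creation."
}

func (m immutableInt64) MarkdownDescription(ctx context.Context) string {
	return m.Description(ctx)
}

func (m immutableInt64) PlanModifyInt64(ctx context.Context, req planmodifier.Int64Request, resp *planmodifier.Int64Response) {
	if req.State.Raw.IsNull() {
		return // create: nothing to compare against
	}
	if !req.PlanValue.Equal(req.StateValue) {
		resp.Diagnostics.AddError(
			"Attribute cannot be changed",
			"worker_disk_size can only be set when the cluster is created.",
		)
	}
}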

@bardielle
Member

I had two comments, but overall it looks good.

Signed-off-by: John Strunk <[email protected]>
@bardielle
Member

/retest

@bardielle
Member

/lgtm

@bardielle
Member

/unhold

@JohnStrunk
Member Author

/retest

@JohnStrunk
Member Author

Looks like CI troubles again:

2023-10-24T13:48:04.183Z [ERROR] provider.terraform-provider-aws_v5.22.0_x5: Response contains error diagnostic: @caller=github.com/hashicorp/[email protected]/tfprotov5/internal/diag/diagnostics.go:58 tf_proto_version=5.4 tf_resource_type=aws_s3_bucket_policy tf_provider_addr=registry.terraform.io/hashicorp/aws @module=sdk.proto diagnostic_detail="" diagnostic_severity=ERROR tf_rpc=ApplyResourceChange diagnostic_summary="putting S3 Bucket (oidc-l1x6) Policy: operation error S3: PutBucketPolicy, https response error StatusCode: 403, RequestID: XVKX5BJF6YSHM3FF, HostID: m20I89LXf7YOjhYoGQoJpiVBtKn9wZMY0PyljlpKjMkTklb300pcS/J5j2mfCsPjrXnpTKwSwxdiTKp9vVz/ag==, api error AccessDenied: Access Denied" tf_req_id=b88605aa-841c-fb26-e157-5ed8fce16572 timestamp=2023-10-24T13:48:04.182Z
2023-10-24T13:48:04.185Z [ERROR] vertex "module.oidc_config_input_resources.module.rosa_oidc_config_resources[0].aws_s3_bucket_policy.allow_access_from_another_account" error: putting S3 Bucket (oidc-l1x6) Policy: operation error S3: PutBucketPolicy, https response error StatusCode: 403, RequestID: XVKX5BJF6YSHM3FF, HostID: m20I89LXf7YOjhYoGQoJpiVBtKn9wZMY0PyljlpKjMkTklb300pcS/J5j2mfCsPjrXnpTKwSwxdiTKp9vVz/ag==, api error AccessDenied: Access Denied
putting S3 Bucket (oidc-l1x6) Policy: operation error S3: PutBucketPolicy, https response error StatusCode: 403, RequestID: XVKX5BJF6YSHM3FF, HostID: m20I89LXf7YOjhYoGQoJpiVBtKn9wZMY0PyljlpKjMkTklb300pcS/J5j2mfCsPjrXnpTKwSwxdiTKp9vVz/ag==, api error AccessDenied: Access Denied

with module.oidc_config_input_resources.module.rosa_oidc_config_resources[0].aws_s3_bucket_policy.allow_access_from_another_account,
 on .terraform/modules/oidc_config_input_resources/oidc_config_resources/create_s3_bucket.tf line 21, in resource "aws_s3_bucket_policy" "allow_access_from_another_account":
  21: resource "aws_s3_bucket_policy" "allow_access_from_another_account"

@amalykhi
Contributor

/retest

@sagidayan
Contributor

/approve

@openshift-ci

openshift-ci bot commented Oct 29, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sagidayan

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot merged commit 5b6927c into terraform-redhat:main Oct 29, 2023
3 checks passed