
✨ enables storage policy in failure domain #3219

Open

RenilRaj-BR wants to merge 1 commit into main

Conversation

@RenilRaj-BR (Contributor) commented Oct 8, 2024

What this PR does:
Currently, a failure domain supports specifying only a datastore, not a storage policy. This PR adds the ability to specify a storage policy for a failure domain. This gives more flexibility: the exact datastore name no longer has to be specified, and a storage policy can target multiple datastores instead of a single one.
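
For illustration only (not part of the PR description), a minimal Go sketch of a failure-domain topology that references a storage policy instead of a named datastore. It assumes the existing v1beta1 Topology/VSphereFailureDomainSpec types plus the StoragePolicy field added by this PR; the datacenter and policy names are made up.

package main

import (
	"fmt"

	infrav1 "sigs.k8s.io/cluster-api-provider-vsphere/apis/v1beta1"
)

func main() {
	fd := infrav1.VSphereFailureDomain{
		Spec: infrav1.VSphereFailureDomainSpec{
			Topology: infrav1.Topology{
				Datacenter: "dc0",
				// Instead of pinning a single datastore via Datastore, reference a
				// storage policy; any datastore compatible with the policy can then
				// be selected at VM creation time.
				StoragePolicy: "zone-a-storage-policy",
			},
		},
	}
	fmt.Println(fd.Spec.Topology.StoragePolicy)
}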


linux-foundation-easycla bot commented Oct 8, 2024

CLA Signed

The committers listed above are authorized under a signed CLA.

  • ✅ login: RenilRaj-BR / name: Renil Raj (ff15ea9)

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign killianmuldoon for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the cncf-cla: no label (Indicates the PR's author has not signed the CNCF CLA.) Oct 8, 2024
@k8s-ci-robot (Contributor)

Welcome @RenilRaj-BR!

It looks like this is your first PR to kubernetes-sigs/cluster-api-provider-vsphere 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/cluster-api-provider-vsphere has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot added the needs-ok-to-test label (Indicates a PR that requires an org member to verify it is safe to test.) Oct 8, 2024
@k8s-ci-robot (Contributor)

Hi @RenilRaj-BR. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the size/M label (Denotes a PR that changes 30-99 lines, ignoring generated files.) Oct 8, 2024
@RenilRaj-BR changed the title from "❇️ enables storage policy in failure domain and disables cluster module creation when failure domain is used" to "✨ enables storage policy in failure domain and disables cluster module creation when failure domain is used" Oct 9, 2024
Comment on lines 81 to 85

// StoragePolicy is the name of the policy that is used to target a datastore
// in which the virtual machine is created/located
// +optional
StoragePolicy string `json:"storagePolicy,omitempty"`
@chrischdi (Member)

We should only add this to v1beta1.

@RenilRaj-BR (Contributor, Author)

@chrischdi,
I tried adding the storage policy only in v1beta1, but conversion-gen then cannot generate zz_generated.conversion.go without a compilation error caused by the missing field.
Could you please suggest how to handle this?

@RenilRaj-BR (Contributor, Author)

@chrischdi,
I have removed storagePolicy from v1alpha3 and v1alpha4 and introduced a manual conversion for it.

@chrischdi (Member)

@RenilRaj-BR: you need to have signed the CLA, otherwise we will not be able to merge this.

@chrischdi (Member)

/ok-to-test

@k8s-ci-robot added the ok-to-test label (Indicates a non-member PR verified by an org member that is safe to test.) and removed the needs-ok-to-test label (Indicates a PR that requires an org member to verify it is safe to test.) Oct 9, 2024
@chrischdi (Member)

/test help

@k8s-ci-robot (Contributor)

@chrischdi: The specified target(s) for /test were not found.
The following commands are available to trigger required jobs:

  • /test pull-cluster-api-provider-vsphere-e2e-vcsim-govmomi-main
  • /test pull-cluster-api-provider-vsphere-e2e-vcsim-supervisor-main
  • /test pull-cluster-api-provider-vsphere-test-main
  • /test pull-cluster-api-provider-vsphere-verify-main

The following commands are available to trigger optional jobs:

  • /test pull-cluster-api-provider-vsphere-apidiff-main

Use /test all to run the following jobs that were automatically triggered:

  • pull-cluster-api-provider-vsphere-apidiff-main
  • pull-cluster-api-provider-vsphere-test-main
  • pull-cluster-api-provider-vsphere-verify-main

In response to this:

/test help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@chrischdi (Member)

/test pull-cluster-api-provider-vsphere-e2e-vcsim-govmomi-main
/test pull-cluster-api-provider-vsphere-e2e-vcsim-supervisor-main

@chrischdi (Member)

CC @lubronzhan, @neolit123, as this falls into govmomi territory :-)

@k8s-ci-robot added the cncf-cla: yes label (Indicates the PR's author has signed the CNCF CLA.) and removed the cncf-cla: no label (Indicates the PR's author has not signed the CNCF CLA.) Oct 9, 2024
@RenilRaj-BR changed the title from "✨ enables storage policy in failure domain and disables cluster module creation when failure domain is used" to "✨ enables storage policy in failure domain" Oct 10, 2024
@@ -61,6 +61,11 @@ func (webhook *VSphereFailureDomainWebhook) ValidateCreate(_ context.Context, ra
allErrs = append(allErrs, field.Forbidden(field.NewPath("spec", "Topology", "ComputeCluster"), "cannot be empty if Hosts is not empty"))
}

// We should either pass a datastore or a storage policy, not both at the same time
if obj.Spec.Topology.Datastore != "" && obj.Spec.Topology.StoragePolicy != "" {
@rikatz (Contributor)

Hmm, why?

I am thinking of a situation where you want a failure domain to use a datastore but also be compliant with storage policy rules. As an example, let's say:

  • Failure domain A has DS1 and Storage Policy X, which selects DS1 but adds something like "replicate just 1 time" (see vsanDatastore)
  • Failure domain B has DS2 and Storage Policy Y, the same as above, but for SP Y I want to establish some other rules, like replicate 2 times

I think having both datastores and storage policies is a valid scenario (unless I forgot something!)

Contributor

Yeah, it looks like the priority of these two is handled at the VM creation level:

var datastoreRef *types.ManagedObjectReference
if vmCtx.VSphereVM.Spec.Datastore != "" {
	datastore, err := vmCtx.Session.Finder.Datastore(ctx, vmCtx.VSphereVM.Spec.Datastore)
	if err != nil {
		return errors.Wrapf(err, "unable to get datastore %s for %q", vmCtx.VSphereVM.Spec.Datastore, ctx)
	}
	datastoreRef = types.NewReference(datastore.Reference())
	spec.Location.Datastore = datastoreRef
}
var storageProfileID string
if vmCtx.VSphereVM.Spec.StoragePolicyName != "" {
	pbmClient, err := pbm.NewClient(ctx, vmCtx.Session.Client.Client)
	if err != nil {
		return errors.Wrapf(err, "unable to create pbm client for %q", ctx)
	}
	storageProfileID, err = pbmClient.ProfileIDByName(ctx, vmCtx.VSphereVM.Spec.StoragePolicyName)
	if err != nil {
		return errors.Wrapf(err, "unable to get storageProfileID from name %s for %q", vmCtx.VSphereVM.Spec.StoragePolicyName, ctx)
	}
	var hubs []pbmTypes.PbmPlacementHub
	// If there's a Datastore configured, it should be the only one for which we check if it matches the requirements of the Storage Policy
	if datastoreRef != nil {
		hubs = append(hubs, pbmTypes.PbmPlacementHub{
			HubType: datastoreRef.Type,
			HubId:   datastoreRef.Value,
		})
	} else {
		// Otherwise we should get just the Datastores connected to our pool
		cluster, err := pool.Owner(ctx)
		if err != nil {
			return errors.Wrapf(err, "failed to get owning cluster of resourcepool %q to calculate datastore based on storage policy", pool)
		}
		dsGetter := object.NewComputeResource(vmCtx.Session.Client.Client, cluster.Reference())
		datastores, err := dsGetter.Datastores(ctx)
		if err != nil {
			return errors.Wrapf(err, "unable to list datastores from owning cluster of requested resourcepool")
		}
		for _, ds := range datastores {
			hubs = append(hubs, pbmTypes.PbmPlacementHub{
				HubType: ds.Reference().Type,
				HubId:   ds.Reference().Value,
			})
		}
	}
	var constraints []pbmTypes.BasePbmPlacementRequirement
	constraints = append(constraints, &pbmTypes.PbmPlacementCapabilityProfileRequirement{ProfileId: pbmTypes.PbmProfileId{UniqueId: storageProfileID}})
	result, err := pbmClient.CheckRequirements(ctx, hubs, nil, constraints)
	if err != nil {
		return errors.Wrapf(err, "unable to check requirements for storage policy")
	}
	if len(result.CompatibleDatastores()) == 0 {
		return fmt.Errorf("no compatible datastores found for storage policy: %s", vmCtx.VSphereVM.Spec.StoragePolicyName)
	}
	// If datastoreRef is nil here it means that the user didn't specify a Datastore. So we should
	// select one of the datastores of the owning cluster of the resource pool that matched the
	// requirements of the storage policy.
	if datastoreRef == nil {
		r := rand.New(rand.NewSource(time.Now().UnixNano())) //nolint:gosec // We won't need cryptographically secure randomness here.
		ds := result.CompatibleDatastores()[r.Intn(len(result.CompatibleDatastores()))]
		datastoreRef = &types.ManagedObjectReference{Type: ds.HubType, Value: ds.HubId}
	}
}

@RenilRaj-BR (Contributor, Author) commented Oct 14, 2024

@rikatz
Removed the validation, as it is handled in the VM creation phase.

@k8s-ci-robot added the size/L label (Denotes a PR that changes 100-499 lines, ignoring generated files.) and removed the size/M label (Denotes a PR that changes 30-99 lines, ignoring generated files.) Oct 14, 2024
@rikatz (Contributor) commented Oct 14, 2024

/lgtm

From a provisioning perspective, looks good :)

Will leave for CAPV approvers to do the final review.

Thanks!

@k8s-ci-robot added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged.) Oct 14, 2024
@k8s-ci-robot (Contributor)

LGTM label has been added.

Git tree hash: 87891601235f4276d92098b972a0b1061b38cbd5

@lubronzhan (Contributor)

LGTM

@k8s-ci-robot removed the lgtm label ("Looks good to me", indicates that a PR is ready to be merged.) Oct 15, 2024
@neolit123 (Member) left a comment

This is a small change, so please squash the commits to 1.

@neolit123 (Member) left a comment

/lgtm

@k8s-ci-robot added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged.) Oct 15, 2024
@k8s-ci-robot (Contributor)

LGTM label has been added.

Git tree hash: d1d412901bfeec0911d17feac7450b4890c55136

@neolit123 (Member)

/assign @chrischdi

@rikatz (Contributor) commented Oct 15, 2024

/lgtm
Thanks

@@ -896,14 +896,10 @@ func autoConvert_v1beta1_Topology_To_v1alpha4_Topology(in *v1beta1.Topology, out
out.Hosts = (*FailureDomainHosts)(unsafe.Pointer(in.Hosts))
out.Networks = *(*[]string)(unsafe.Pointer(&in.Networks))
out.Datastore = in.Datastore
// WARNING: in.StoragePolicy requires manual conversion: does not exist in peer-type
Member

Don't we need to do the manual conversion for this? (in both api versions)

In both vspherefailuredomain_conversion.go functions we have to do things similar to:

For the ConvertTo function:

https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/main/apis/v1alpha4/vspheremachine_conversion.go#L33-L48

For the ConvertFrom function:

https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/main/apis/v1alpha4/vspheremachine_conversion.go#L61-L64
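
For reference, a minimal sketch (not taken from this PR) of what such a manual conversion could look like for VSphereFailureDomain, following the MarshalData/UnmarshalData pattern in the linked vspheremachine_conversion.go. The Convert_* function names are assumed to match what conversion-gen generates.

// apis/v1alpha4/vspherefailuredomain_conversion.go (sketch)
package v1alpha4

import (
	utilconversion "sigs.k8s.io/cluster-api/util/conversion"
	"sigs.k8s.io/controller-runtime/pkg/conversion"

	infrav1 "sigs.k8s.io/cluster-api-provider-vsphere/apis/v1beta1"
)

// ConvertTo converts this VSphereFailureDomain to the Hub version (v1beta1).
func (src *VSphereFailureDomain) ConvertTo(dstRaw conversion.Hub) error {
	dst := dstRaw.(*infrav1.VSphereFailureDomain)
	if err := Convert_v1alpha4_VSphereFailureDomain_To_v1beta1_VSphereFailureDomain(src, dst, nil); err != nil {
		return err
	}

	// Manually restore fields that do not exist in v1alpha4, such as
	// spec.topology.storagePolicy, from the annotation written on down-conversion.
	restored := &infrav1.VSphereFailureDomain{}
	if ok, err := utilconversion.UnmarshalData(src, restored); err != nil || !ok {
		return err
	}
	dst.Spec.Topology.StoragePolicy = restored.Spec.Topology.StoragePolicy
	return nil
}

// ConvertFrom converts the Hub version (v1beta1) to this VSphereFailureDomain.
func (dst *VSphereFailureDomain) ConvertFrom(srcRaw conversion.Hub) error {
	src := srcRaw.(*infrav1.VSphereFailureDomain)
	if err := Convert_v1beta1_VSphereFailureDomain_To_v1alpha4_VSphereFailureDomain(src, dst, nil); err != nil {
		return err
	}

	// Preserve Hub-only fields (including storagePolicy) in an annotation so
	// they survive a v1beta1 -> v1alpha4 -> v1beta1 round trip.
	return utilconversion.MarshalData(src, dst)
}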

@fabriziopandini (Member)

/hold

Currently failure domain doesn't support storage policy and it supports specifying a datastore only.

It seems that it is not required to have a different storagePolicy on each failure domain, and the storagePolicy setting in the VSphere machine template already covers this use case.

@k8s-ci-robot added the do-not-merge/hold label (Indicates that a PR should not merge because someone has issued a /hold command.) Oct 28, 2024
Labels
  • cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.)
  • do-not-merge/hold (Indicates that a PR should not merge because someone has issued a /hold command.)
  • lgtm ("Looks good to me", indicates that a PR is ready to be merged.)
  • ok-to-test (Indicates a non-member PR verified by an org member that is safe to test.)
  • size/L (Denotes a PR that changes 100-499 lines, ignoring generated files.)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

7 participants