✨ enables storage policy in failure domain #3219
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
Welcome @RenilRaj-BR!
Hi @RenilRaj-BR. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
// StoragePolicy is the name of the policy that is used to target a datastore
// in which the virtual machine is created/located
// +optional
StoragePolicy string `json:"storagePolicy,omitempty"`
We should only add this to v1beta1.
@chrischdi,
I tried adding the storage policy only in v1beta1, but then conversion-gen fails: the generated zz_generated.conversion.go does not compile because the field is missing from the older API versions.
Could you please suggest how to handle this?
@chrischdi,
I have removed storagePolicy from v1alpha3 and v1alpha4 and introduced the manual conversion for it.
@RenilRaj-BR: you need to have signed the CLA, otherwise we will not be able to merge this.
/ok-to-test
/test help
@chrischdi: The specified target(s) for /test could not be found. The following commands are available to trigger optional jobs (list omitted).
In response to this: /test help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/test pull-cluster-api-provider-vsphere-e2e-vcsim-govmomi-main
CC @lubronzhan, @neolit123, as this falls into govmomi territory :-)
@@ -61,6 +61,11 @@ func (webhook *VSphereFailureDomainWebhook) ValidateCreate(_ context.Context, ra
		allErrs = append(allErrs, field.Forbidden(field.NewPath("spec", "Topology", "ComputeCluster"), "cannot be empty if Hosts is not empty"))
	}

	// We should either pass a datastore or a storage policy, not both at the same time
	if obj.Spec.Topology.Datastore != "" && obj.Spec.Topology.StoragePolicy != "" {
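The quoted diff cuts off after the condition; presumably its body appends a Forbidden error before the closing brace. A minimal hypothetical completion, reusing the field helpers already visible above (the actual field path and message in the PR may differ):

		// Hypothetical body of the check above; message and path are illustrative only.
		allErrs = append(allErrs, field.Forbidden(
			field.NewPath("spec", "Topology", "StoragePolicy"),
			"cannot be set when a Datastore is also specified"))
	}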
Hmm, why?
I'm thinking of a situation where you want a failure domain to use a specific datastore but also be compliant with storage policy rules. As an example:
- Failure domain A has DS1 and Storage Policy X, which selects DS1 but adds something like "replicate just 1 time" (see vsanDatastore).
- Failure domain B has DS2 and Storage Policy Y, same as above, but for SP Y I want to establish other rules, like replicate 2 times.
I think having both a datastore and a storage policy is a valid scenario (unless I forgot something!)
Yeah, it looks like the priority of these two is already handled at the VM creation level:
cluster-api-provider-vsphere/pkg/services/govmomi/vcenter/clone.go
Lines 206 to 274 in a2bac7b
var datastoreRef *types.ManagedObjectReference
if vmCtx.VSphereVM.Spec.Datastore != "" {
	datastore, err := vmCtx.Session.Finder.Datastore(ctx, vmCtx.VSphereVM.Spec.Datastore)
	if err != nil {
		return errors.Wrapf(err, "unable to get datastore %s for %q", vmCtx.VSphereVM.Spec.Datastore, ctx)
	}
	datastoreRef = types.NewReference(datastore.Reference())
	spec.Location.Datastore = datastoreRef
}

var storageProfileID string
if vmCtx.VSphereVM.Spec.StoragePolicyName != "" {
	pbmClient, err := pbm.NewClient(ctx, vmCtx.Session.Client.Client)
	if err != nil {
		return errors.Wrapf(err, "unable to create pbm client for %q", ctx)
	}

	storageProfileID, err = pbmClient.ProfileIDByName(ctx, vmCtx.VSphereVM.Spec.StoragePolicyName)
	if err != nil {
		return errors.Wrapf(err, "unable to get storageProfileID from name %s for %q", vmCtx.VSphereVM.Spec.StoragePolicyName, ctx)
	}

	var hubs []pbmTypes.PbmPlacementHub

	// If there's a Datastore configured, it should be the only one for which we check if it matches the requirements of the Storage Policy
	if datastoreRef != nil {
		hubs = append(hubs, pbmTypes.PbmPlacementHub{
			HubType: datastoreRef.Type,
			HubId:   datastoreRef.Value,
		})
	} else {
		// Otherwise we should get just the Datastores connected to our pool
		cluster, err := pool.Owner(ctx)
		if err != nil {
			return errors.Wrapf(err, "failed to get owning cluster of resourcepool %q to calculate datastore based on storage policy", pool)
		}
		dsGetter := object.NewComputeResource(vmCtx.Session.Client.Client, cluster.Reference())
		datastores, err := dsGetter.Datastores(ctx)
		if err != nil {
			return errors.Wrapf(err, "unable to list datastores from owning cluster of requested resourcepool")
		}
		for _, ds := range datastores {
			hubs = append(hubs, pbmTypes.PbmPlacementHub{
				HubType: ds.Reference().Type,
				HubId:   ds.Reference().Value,
			})
		}
	}

	var constraints []pbmTypes.BasePbmPlacementRequirement
	constraints = append(constraints, &pbmTypes.PbmPlacementCapabilityProfileRequirement{ProfileId: pbmTypes.PbmProfileId{UniqueId: storageProfileID}})
	result, err := pbmClient.CheckRequirements(ctx, hubs, nil, constraints)
	if err != nil {
		return errors.Wrapf(err, "unable to check requirements for storage policy")
	}

	if len(result.CompatibleDatastores()) == 0 {
		return fmt.Errorf("no compatible datastores found for storage policy: %s", vmCtx.VSphereVM.Spec.StoragePolicyName)
	}

	// If datastoreRef is nil here it means that the user didn't specify a Datastore. So we should
	// select one of the datastores of the owning cluster of the resource pool that matched the
	// requirements of the storage policy.
	if datastoreRef == nil {
		r := rand.New(rand.NewSource(time.Now().UnixNano())) //nolint:gosec // We won't need cryptographically secure randomness here.
		ds := result.CompatibleDatastores()[r.Intn(len(result.CompatibleDatastores()))]
		datastoreRef = &types.ManagedObjectReference{Type: ds.HubType, Value: ds.HubId}
	}
}
@rikatz
Removed the validation, as this is handled in the VM creation phase.
/lgtm
From a provisioning perspective, looks good :) Will leave it to the CAPV approvers to do the final review. Thanks!
LGTM label has been added. Git tree hash: 87891601235f4276d92098b972a0b1061b38cbd5
LGTM
config/default/crd/bases/infrastructure.cluster.x-k8s.io_vspherefailuredomains.yaml
This is a small change, so please squash the commits into one.
/lgtm
LGTM label has been added. Git tree hash: d1d412901bfeec0911d17feac7450b4890c55136
/assign @chrischdi
/lgtm
@@ -896,14 +896,10 @@ func autoConvert_v1beta1_Topology_To_v1alpha4_Topology(in *v1beta1.Topology, out
	out.Hosts = (*FailureDomainHosts)(unsafe.Pointer(in.Hosts))
	out.Networks = *(*[]string)(unsafe.Pointer(&in.Networks))
	out.Datastore = in.Datastore
	// WARNING: in.StoragePolicy requires manual conversion: does not exist in peer-type
Don't we need to do the manual conversion for this (in both API versions)?
In both vspherefailuredomain_conversion.go files we have to do something along the lines of the sketch below, for both the ConvertTo and the ConvertFrom function.
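A minimal sketch of that pattern, assuming the usual Cluster API up/down-conversion helpers (sigs.k8s.io/cluster-api/util/conversion); the function and type names below are illustrative, not taken from this PR:

// Hypothetical sketch for apis/v1alpha4/vspherefailuredomain_conversion.go.
package v1alpha4

import (
	infrav1 "sigs.k8s.io/cluster-api-provider-vsphere/apis/v1beta1"
	utilconversion "sigs.k8s.io/cluster-api/util/conversion"
	"sigs.k8s.io/controller-runtime/pkg/conversion"
)

// ConvertTo converts this v1alpha4 VSphereFailureDomain to the v1beta1 hub version.
func (src *VSphereFailureDomain) ConvertTo(dstRaw conversion.Hub) error {
	dst := dstRaw.(*infrav1.VSphereFailureDomain)
	if err := Convert_v1alpha4_VSphereFailureDomain_To_v1beta1_VSphereFailureDomain(src, dst, nil); err != nil {
		return err
	}

	// Restore fields that only exist in v1beta1 (e.g. spec.topology.storagePolicy)
	// from the data annotation written by a previous ConvertFrom call.
	restored := &infrav1.VSphereFailureDomain{}
	if ok, err := utilconversion.UnmarshalData(src, restored); err != nil || !ok {
		return err
	}
	dst.Spec.Topology.StoragePolicy = restored.Spec.Topology.StoragePolicy
	return nil
}

// ConvertFrom converts the v1beta1 hub version to this v1alpha4 VSphereFailureDomain.
func (dst *VSphereFailureDomain) ConvertFrom(srcRaw conversion.Hub) error {
	src := srcRaw.(*infrav1.VSphereFailureDomain)
	if err := Convert_v1beta1_VSphereFailureDomain_To_v1alpha4_VSphereFailureDomain(src, dst, nil); err != nil {
		return err
	}

	// Preserve the v1beta1-only fields in an annotation so they survive a round trip.
	return utilconversion.MarshalData(src, dst)
}

The same would apply to the v1alpha3 conversion file.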
/hold
It seems it is not required to have a different storagePolicy on each failure domain, and the storagePolicy setting in the VSphereMachineTemplate already covers this use case.
What this PR does:
Currently a failure domain does not support a storage policy; it only supports specifying a datastore. This PR adds the ability to specify a storage policy for the failure domain. This provides flexibility, since we no longer have to specify an exact datastore name, and a storage policy can target multiple datastores instead of a single specific one.
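For illustration, a failure domain topology using the new field might look like the following sketch (Go, assuming the CAPV v1beta1 API package; "dc0" and "gold-storage-policy" are placeholder names):

package main

import (
	"fmt"

	infrav1 "sigs.k8s.io/cluster-api-provider-vsphere/apis/v1beta1"
)

func main() {
	// Topology that selects storage via a policy instead of a fixed datastore.
	// With StoragePolicy set, any datastore compatible with the policy can be
	// chosen at VM creation time, so Datastore is left empty here.
	topology := infrav1.Topology{
		Datacenter:    "dc0",
		StoragePolicy: "gold-storage-policy",
	}
	fmt.Printf("%+v\n", topology)
}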