
Consume Topology CR #670

Open · wants to merge 3 commits into main from topology-1

Conversation

@fmount fmount commented Dec 11, 2024

No description provided.


openshift-ci bot commented Dec 11, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: fmount

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@fmount fmount force-pushed the topology-1 branch 4 times, most recently from 2c2d3f8 to 01614fb Compare December 16, 2024 12:01
@fmount fmount changed the title from "POC - Consume Topology CR" to "Consume Topology CR" Dec 16, 2024
@@ -866,6 +866,13 @@ func (r *GlanceReconciler) apiDeploymentCreateOrUpdate(
if instance.Spec.KeystoneEndpoint == apiName {
apiAnnotations[glance.KeystoneEndpoint] = "true"
}

Reviewer (Contributor):

Can the topology of a deployment be updated after it gets created, or is it immutable? We should probably watch the topology, like we do with the secrets. If it can be updated, all good; if not, we have to think about what we should do in that case.

fmount (Author):

It's not immutable, and I'm able to see reconciliation loops being triggered by a change in the topology.
Adding the same watching logic here would simply be redundant, because glanceapi_controller (where I pass the topology reference) already has such a watcher, and it triggers a reconciliation if something changes even when the topologyRef is a glance top-level parameter (tested locally).

Reviewer (Contributor):

Ah right, I have not looked closely at the API controller.


Comment on lines 348 to 376
topologyFn := func(ctx context.Context, o client.Object) []reconcile.Request {
	result := []reconcile.Request{}

	// get all GlanceAPI CRs
	glanceAPIs := &glancev1.GlanceAPIList{}
	listOpts := []client.ListOption{
		client.InNamespace(o.GetNamespace()),
	}
	if err := r.Client.List(ctx, glanceAPIs, listOpts...); err != nil {
		r.Log.Error(err, "Unable to retrieve GlanceAPI CRs")
		return nil
	}

	for _, cr := range glanceAPIs.Items {
		if cr.Spec.Topology != nil {
			if o.GetName() == cr.Spec.Topology.Name {
				name := client.ObjectKey{
					Namespace: o.GetNamespace(),
					Name:      cr.Name,
				}
				r.Log.Info(fmt.Sprintf("Topology %s is used by GlanceAPI CR %s", o.GetName(), cr.Name))
				result = append(result, reconcile.Request{NamespacedName: name})
			}
		}
	}
	if len(result) > 0 {
		return result
	}
	return nil
}
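Stripped of the controller-runtime types, the mapping function above boils down to a filter-and-map over the listed CRs: for every GlanceAPI whose spec references the changed Topology by name, emit one reconcile request. A minimal, self-contained sketch of that core logic (the type and field names below are simplified stand-ins, not the operator's actual API types):

```go
package main

import "fmt"

// Request stands in for reconcile.Request (illustrative only).
type Request struct{ Namespace, Name string }

// glanceAPI mirrors just the fields the mapping function inspects.
type glanceAPI struct {
	Name     string
	Topology *string // nil when no topologyRef is set
}

// requestsForTopology returns one request per GlanceAPI that references
// the given topology name, or nil when none do.
func requestsForTopology(topologyName, namespace string, apis []glanceAPI) []Request {
	var result []Request
	for _, cr := range apis {
		if cr.Topology != nil && *cr.Topology == topologyName {
			result = append(result, Request{Namespace: namespace, Name: cr.Name})
		}
	}
	return result
}

func main() {
	topo := "topology-a"
	apis := []glanceAPI{
		{Name: "glance-api-internal", Topology: &topo},
		{Name: "glance-api-external", Topology: nil},
	}
	fmt.Println(requestsForTopology("topology-a", "openstack", apis))
}
```

Returning a nil slice when nothing matches tells the handler there is nothing to enqueue, which matches the early `return nil` in the function above.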
Reviewer (Contributor):

Wondering if we should align all those watch funcs and instead use IndexField where we have named objects to watch, and use findObjectsForSrc. I could see that indexing the field is more efficient, but I cannot confirm.

fmount (Author):

Let me try this one and I will get back with an update.

fmount (Author):

I did a few local tests; here are my considerations:
We use a glanceAPIWatchFields []string, which is not a const but points to some static fields we expect to be defined.
findObjectsForSrc might just work fine as long as I add some logic to inject the dynamic field I want to check, with something like:

glanceWatchFields = addFieldToWatchList(glanceAPIWatchFields, fmt.Sprintf(".spec.%s.topology", instance.APIName()))

where addFieldToWatchList would be something like:

// addFieldToWatchList - utility function to add a field to the watch list if it is not already present
func addFieldToWatchList(glanceAPIWatchFields []string, field string) []string {
	for _, v := range glanceAPIWatchFields {
		if v == field {
			return glanceAPIWatchFields
		}
	}
	return append(glanceAPIWatchFields, field)
}

However, considering that TopologyRef might not exist at any point in time (because I simply edited my ctlplane CR and removed the reference), I need to pop that value from the glanceAPIWatchList. I can also avoid doing that by simply reassigning glanceWatchFields to the original slice, but I do see that concurrent reconciliation loops might not see the latest version of this glanceWatchFields []string variable, leading to errors because we would try to get a field that does not exist.
I also have concerns about the way we IndexField for a dynamic one (based on the CR name).
For this reason, and to keep things simple, I would leave Topology to its dedicated function, but I agree (looking at the current code) that we can potentially map a few other parameters to this existing function, simplifying the code further. That would be part of a dedicated change.
If you think there's a better way to watch a dynamic field that can be created/updated/removed during the lifecycle of a GlanceAPI object, I can experiment more to find the right solution.
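One way to address the stale-slice concern above would be to guard the watch list with a mutex-protected set instead of reassigning a shared slice, so concurrent reconciliation loops always see a consistent view. This is only a sketch under assumed names (watchFields and the field strings below are illustrative, not part of the PR):

```go
package main

import (
	"fmt"
	"sync"
)

// watchFields is a hypothetical, concurrency-safe alternative to a shared
// glanceAPIWatchFields []string: reconcilers can add or remove a dynamic
// field (such as a per-API topology path) without racing on a slice.
type watchFields struct {
	mu     sync.Mutex
	fields map[string]struct{}
}

func newWatchFields(initial ...string) *watchFields {
	w := &watchFields{fields: make(map[string]struct{})}
	for _, f := range initial {
		w.fields[f] = struct{}{}
	}
	return w
}

// Add inserts a field; adding an existing field is a no-op.
func (w *watchFields) Add(field string) {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.fields[field] = struct{}{}
}

// Remove drops a field, e.g. when the topologyRef is removed from the CR.
func (w *watchFields) Remove(field string) {
	w.mu.Lock()
	defer w.mu.Unlock()
	delete(w.fields, field)
}

// Has reports whether a field is currently watched.
func (w *watchFields) Has(field string) bool {
	w.mu.Lock()
	defer w.mu.Unlock()
	_, ok := w.fields[field]
	return ok
}

func main() {
	w := newWatchFields(".spec.staticField") // illustrative static field
	topologyField := fmt.Sprintf(".spec.%s.topology", "default")
	w.Add(topologyField)
	w.Add(topologyField) // idempotent, like addFieldToWatchList above
	fmt.Println(w.Has(topologyField))
	w.Remove(topologyField)
	fmt.Println(w.Has(topologyField))
}
```

This does not remove the separate concern about indexing a dynamic field name; it only makes the bookkeeping of the watch list safe under concurrent reconciles.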

api/v1beta1/common_types.go (comment resolved)
@openshift-ci openshift-ci bot requested a review from dprince January 9, 2025 12:47
Signed-off-by: Francesco Pantano <[email protected]>
@fmount fmount force-pushed the topology-1 branch 3 times, most recently from b9b8ef3 to 8d8a8a0 Compare January 10, 2025 10:42
@fmount fmount requested a review from stuggi January 10, 2025 12:13
When a GlanceAPI references a Topology, a finalizer is added to that
resource. By doing this, we can clearly identify which topology is
applied to which API. For this reason, and because Glance is potentially
made up of a list of APIs that can independently reference the same
Topology CR, the finalizer string is based on the concatenation of the
Glance service (prefix) and the API name. It is possible, as a day-2
operation, to add, update or remove a Topology reference. When doing
this, especially for update and removal operations, a pointer to the CR
must be updated. To properly support this action, the LastAppliedTopology
information is saved in the Status, which allows us to properly clean up
unused resources without locking them indefinitely.
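The per-API finalizer naming described above (service prefix concatenated with the API name) can be sketched as plain string handling. The exact prefix format the operator uses is not shown in this PR excerpt, so the domain-style prefix below is an assumption for illustration:

```go
package main

import "fmt"

// apiFinalizer builds a finalizer unique to one GlanceAPI, so several
// APIs can hold independent finalizers on the same Topology CR.
// The "openstack.org/" domain prefix is an assumption, not the
// operator's actual format.
func apiFinalizer(service, apiName string) string {
	return fmt.Sprintf("openstack.org/%s-%s", service, apiName)
}

// addFinalizer appends f only if it is not already present; finalizer
// entries in a Kubernetes object's metadata must be unique.
func addFinalizer(finalizers []string, f string) []string {
	for _, v := range finalizers {
		if v == f {
			return finalizers
		}
	}
	return append(finalizers, f)
}

func main() {
	f := apiFinalizer("glance", "glance-api-external")
	fin := addFinalizer(nil, f)
	fin = addFinalizer(fin, f) // idempotent on repeated reconciles
	fmt.Println(fin)
}
```

Because each API contributes a distinct finalizer, removing one API's reference releases only that API's hold on the Topology CR, which is what lets the LastAppliedTopology-based cleanup avoid locking the resource indefinitely.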

Signed-off-by: Francesco Pantano <[email protected]>
With the goal of deduplicating the current code as much as possible and
helping spread the pattern across the operators, this patch adds the
topologyRef dynamic field to the existing glanceAPIWatchList. This
allows us to both index the field (and use a fieldSelector when the
GlanceAPI CR is retrieved) and remove the topologyFn function.

Signed-off-by: Francesco Pantano <[email protected]>

openshift-ci bot commented Jan 10, 2025

@fmount: The following tests failed; say /retest to rerun all failed tests, or /retest-required to rerun all mandatory failed tests:

Test name                                     Commit   Required  Rerun command
ci/prow/glance-operator-build-deploy-tempest  3f8d466  true      /test glance-operator-build-deploy-tempest
ci/prow/glance-operator-build-deploy-kuttl    3f8d466  true      /test glance-operator-build-deploy-kuttl

