Create several services routes based on annotations on the svc endpoint #457
Conversation
Merge Failed. This change or one of its cross-repo dependencies was unable to be automatically merged with the current state of its repository. Please rebase the change and upload a new patchset.
ctx,
instance,
helper,
placementAPI,
I'm wondering whether we need the service CR to own the Route. I don't see a technical issue with this right now; it just feels like we are reaching over the head of the service operator and forcing it to "adopt" a route. What are the pros of having the owner reference point to the service CR instead of simply letting the OpenStackControlPlane CR own the Route it creates?
The ctlplane CR still owns it as the controller. As mentioned above, we just add the service CR to the ownerRef list so that we block early deletion and make sure the Route stays until the service CR is gone.
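As a rough sketch of the resulting Route metadata (names, UIDs, and API versions here are illustrative, not taken from this PR): Kubernetes garbage collection only deletes a dependent once all of its owners are gone, so listing the service CR as an additional, non-controller owner keeps the Route around until that CR is deleted.

```yaml
# Hypothetical Route metadata; names and UIDs are illustrative.
metadata:
  name: keystone-public
  ownerReferences:
    # The OpenStackControlPlane CR remains the managing controller.
    - apiVersion: core.openstack.org/v1beta1
      kind: OpenStackControlPlane
      name: openstack
      uid: aaaaaaaa-1111-2222-3333-444444444444
      controller: true
      blockOwnerDeletion: true
    # The service CR is added as a plain (non-controller) owner so the
    # Route is not garbage-collected before the service CR is gone.
    - apiVersion: keystone.openstack.org/v1beta1
      kind: KeystoneAPI
      name: keystone
      uid: bbbbbbbb-5555-6666-7777-888888888888
```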
I see. Is there a specific need for the route to outlive the OpenStackControlPlane CR? I guess the PlacementAPI's only owner is also the OpenStackControlPlane CR? So during OpenStackControlPlane deletion both will be deleted anyhow.
The case where this is required is keystone. When the osctlplane CR gets deleted, it might immediately start cleaning the resources owned by the osctlplane. If the route for keystone is gone, the cleanup of services/endpoints might fail. Therefore the routes should be cleaned up after the service is gone.
Ohh. Where do we rely on the public keystone endpoint to clean up resources? That seems like a bug. I would assume that the operators use the internal keystone endpoint to do the cleanup.
The admin service client from [1] is using the public endpoint. It's probably OK to switch that to internal, but in general it might be better to keep the route available for a service, and the way it registered it, until the service is really gone?
I would switch [1] to internal just to be on the safe side. But obviously that is a separate change.
I have no hard opinion about whether to keep or remove the owner ref.
I'll propose a separate PR to the keystone operator to change it to internal
I really like how this consolidates Route creation logic. One hypothetical concern I can think of is if we were to split out certain services from the OpenStackControlPlane (making it more of a core or enabledByDefault set of controlplane services). For example, if we were to make the AdvancedNetworking services like Octavia or Designate top level, or something like that, to make the OpenStackControlPlane smaller. If we did that, then perhaps we'd make an AdvancedNetworking umbrella CRD in the openstack-operator and could create routes there similarly. Again, all this is hypothetical, sort of a what-if. I'd vote we start landing the parts and pieces here in the dependent PRs.
Force-pushed 1e4dc67 to eed7ae8.
Merge Failed. This change or one of its cross-repo dependencies was unable to be automatically merged with the current state of its repository. Please rebase the change and upload a new patchset.
Yes, controllers handling additional CRDs with split-out/additional services would then just do the same.
Force-pushed eed7ae8 to 74db956.
Merge Failed. This change or one of its cross-repo dependencies was unable to be automatically merged with the current state of its repository. Please rebase the change and upload a new patchset.
Force-pushed 74db956 to 2535ec8.
Merge Failed. This change or one of its cross-repo dependencies was unable to be automatically merged with the current state of its repository. Please rebase the change and upload a new patchset.
As we discussed, we don't want to spend effort creating an abstraction over the Route object due to the cost of maintaining such an abstraction. I feel that we will pay that cost in a different form: maintaining nova (and potentially other service operator) logic in openstack-operator.
Force-pushed 2535ec8 to 0c40184.
Merge Failed. This change or one of its cross-repo dependencies was unable to be automatically merged with the current state of its repository. Please rebase the change and upload a new patchset.
Force-pushed 0c40184 to 78cceec.
Merge Failed. This change or one of its cross-repo dependencies was unable to be automatically merged with the current state of its repository. Please rebase the change and upload a new patchset.
Force-pushed 78cceec to dea12b8.
Merge Failed. This change or one of its cross-repo dependencies was unable to be automatically merged with the current state of its repository. Please rebase the change and upload a new patchset.
Force-pushed dea12b8 to ac146cf.
Merge Failed. This change or one of its cross-repo dependencies was unable to be automatically merged with the current state of its repository. Please rebase the change and upload a new patchset.
New changes are detected. LGTM label has been removed.
rebased
Build failed (check pipeline). Post https://review.rdoproject.org/zuul/buildset/1b51d98b72c94eeaa7a87a3d600a4ed8 ✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 30m 25s
/test openstack-operator-build-deploy-kuttl
recheck. Jobs are fixed.
Build failed (check pipeline). Post https://review.rdoproject.org/zuul/buildset/34f5467eba9b4e78a0a78727114fb69b ✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 33m 09s
recheck
Restoring lgtm from @abays, just rebased.
/test openstack-operator-build-deploy-kuttl
Hiccup
/test openstack-operator-build-deploy-kuttl
Same transient error as last time. /test openstack-operator-build-deploy-kuttl
Merged 168cbd1 into openstack-k8s-operators:main
Jira: OSP-26690 Depends-On: openstack-k8s-operators/openstack-operator#457
Update samples to support the old and new configuration used to setup the services. This allows ci jobs to properly configure the new services with their service overrides. Cleanup of the samples will then be done in the openstack-operator PR openstack-k8s-operators#457. Jira: OSP-26690
openstack-k8s-operators/openstack-operator#457 Signed-off-by: Fabricio Aguiar <[email protected]>
@@ -152,9 +159,6 @@ func (r *OpenStackControlPlaneReconciler) Reconcile(ctx context.Context, req ctr
	return ctrl.Result{}, nil
}

// Reset all ReadyConditions to 'Unknown'
What's the reason for this change? Shouldn't we set conditions to unknown at the beginning of each reconcile? Won't conditions give a confusing picture if reconcile fails earlier than the last failure?
Creates the route for the keystoneapi, glance, placement, cinder, neutron, nova, heat, horizon, manila and swift services based on the annotations set by the service operator on their service.
For the public endpoint, the service operator sets the following ingress_create and ingress_name annotations on the service:
core.openstack.org/ingress_create
core.openstack.org/ingress_name
The ingress_name annotation also allows customization of the route prefix via the service override for the component. The default applied by the service operators is the service name specified in the service operator, e.g. to get a route for the cinder api such as my-cinder-openstack.apps-crc.testing.
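A minimal sketch of what such an annotated Service might look like; the service name, port, and annotation values are illustrative, not copied from this PR:

```yaml
# Hypothetical Service manifest; the annotation keys follow the
# core.openstack.org/ingress_* convention described above.
apiVersion: v1
kind: Service
metadata:
  name: cinder-public
  annotations:
    # Tells openstack-operator to create a Route for this service.
    core.openstack.org/ingress_create: "true"
    # Optional override of the route name prefix; defaults to the
    # service name set by the service operator.
    core.openstack.org/ingress_name: my-cinder
spec:
  ports:
    - port: 8776
```

With an ingress_name of my-cinder, the resulting route host would look like my-cinder-openstack.apps-crc.testing, as in the example above.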
Depends-On: openstack-k8s-operators/lib-common#332
Depends-On: openstack-k8s-operators/keystone-operator#307
Depends-On: openstack-k8s-operators/glance-operator#309
Depends-On: openstack-k8s-operators/placement-operator#61
Depends-On: openstack-k8s-operators/cinder-operator#262
Depends-On: openstack-k8s-operators/neutron-operator#204
Depends-On: openstack-k8s-operators/nova-operator#522
Depends-On: openstack-k8s-operators/heat-operator#238
Depends-On: openstack-k8s-operators/horizon-operator#212
Depends-On: openstack-k8s-operators/manila-operator#130
Depends-On: openstack-k8s-operators/swift-operator#49
Depends-On: openstack-k8s-operators/infra-operator#119
Jira: OSP-26690