
[spec]Grouping nodes #42

Closed
wants to merge 2 commits

Conversation

gibizer
Contributor

@gibizer gibizer commented Jun 13, 2023

Depends-on: #41

@gibizer gibizer marked this pull request as draft June 13, 2023 04:31
@dprince
Contributor

dprince commented Jun 13, 2023

@gibizer This came across my path today and I was a bit confused why it exists. We have at least 3 or 4 google docs discussing architectural options for Dataplane services. Is this meant to be a consolidation of the Google docs perhaps and you only want comments here now?

@gibizer
Contributor Author

gibizer commented Jun 13, 2023

> @gibizer This came across my path today and I was a bit confused why it exists. We have at least 3 or 4 google docs discussing architectural options for Dataplane services. Is this meant to be a consolidation of the Google docs perhaps and you only want comments here now?

I'm trying to follow the pattern started in #31, where, after the initial discussion started to settle in the google docs, the actual proposal is cleaned up and moved to this repo. This is intentionally in Draft state, as I just finished writing it up as a cleanup of the google doc where the questions and comments seem to be settled.

I'm traveling this week, so this and #41 will remain in Draft state at least until next week. If you have comments, feel free to provide them here or in the google docs (whichever is more convenient for you) in the meantime. I will sync things up next week and we can also chat about the way forward.

@gibizer gibizer changed the title Grouping nodes [spec]Grouping nodes Jun 20, 2023
@gibizer gibizer marked this pull request as ready for review June 20, 2023 08:29
@gibizer
Contributor Author

gibizer commented Jun 20, 2023

/hold we need to land #41 first


@kk7ds kk7ds left a comment


This makes sense to me and addresses concerns we've had in previous discussions. I've left a few supporting comments inline that may help with clarity in the long term.



Note to others: this is actually not part of this PR, it's inherited from #41, so we can ignore this file.

Contributor Author


yes.

Each service operator that has EDPM specific service CRD (e.g.
NovaExternalCompute, OVNMetadataAgent) can define a Profile CRD (e.g.
ComputeProfile, NetworkProfile) to describe the service specific configuration
of a set of EDPM nodes. Then the human deployer can use annotations on the
Contributor


I wonder if this is still too restrictive. The names used here suggest that there will be a single profile per operator, not per service, while I could imagine that e.g. NeutronOVNAgent may have a different scheme for mapping nodes to configuration than NeutronSRIOVAgent does.

Of course, instead of a single NetworkProfile, we could have NeutronOVNAgentProfile and NeutronSRIOVAgentProfile. Even then, I wonder if there's space for a set of service specific profiles to be applied to a node.

(I am not ready to give a concrete example from Neutron or elsewhere for the scenario above, but I wonder if it would be wise to leave the interface slightly more generic, by using a set / list of profiles - global or per-service - as the interface here.)

(There may even be a scenario where the same profile makes sense for all services as a whole, e.g. for common oslo.* library configurations. You could e.g. mark a node with a "profile" called "Debug" and get all services deployed on the node configured with debug=True in their config files. Same for oslo.db and other tunings.)
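A set-of-profiles interface like the one floated above could be as simple as a list-valued annotation that each service operator parses for itself. The sketch below is illustrative only, assuming a hypothetical annotation key (`neutron.openstack.org/profiles`) and a comma-separated encoding; neither is defined by the spec:

```go
package main

import (
	"fmt"
	"strings"
)

// Assumed annotation key; the real key (if any) would be defined by
// the service operator, not by this spec.
const profileAnnotation = "neutron.openstack.org/profiles"

// profilesFor parses a comma-separated profile list from a node's
// annotations, e.g. "ovn,sriov,debug". A missing or empty annotation
// yields nil, i.e. "no profiles selected".
func profilesFor(annotations map[string]string) []string {
	raw, ok := annotations[profileAnnotation]
	if !ok || raw == "" {
		return nil
	}
	var names []string
	for _, n := range strings.Split(raw, ",") {
		if n = strings.TrimSpace(n); n != "" {
			names = append(names, n)
		}
	}
	return names
}

func main() {
	node := map[string]string{profileAnnotation: "ovn, sriov, debug"}
	fmt.Println(profilesFor(node)) // [ovn sriov debug]
}
```

A global "Debug" profile would then just be one more entry in the list that every service operator chooses to honor.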


Well, the annotation and its type would technically be up to the service operator to define, so if Neutron wanted to support a list of profiles, the neutron operator is free to do that.

We wanted to start with the simple case: for your node with both OVN and SRIOV you would create a third SRIOV_OVN profile that has the config for both. Granted, that can lead to a large number of profiles if you need multiple combinations.

The current motivation for only having one profile is to avoid having to define any precedence relation or complex merging semantics. That does not mean the neutron operator cannot support that; we just didn't want to start from that state.
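The precedence problem being avoided can be sketched concretely. This is illustrative only (the spec deliberately does not do multi-profile merging); the function name and the "last profile wins" rule are assumptions, shown just to make the cost of the list-of-profiles alternative visible:

```go
package main

import "fmt"

// mergeProfiles applies profiles left to right, with later entries
// overriding earlier ones. This "last wins" rule is exactly the kind of
// precedence semantics a list-of-profiles interface would force the
// operator to define and document.
func mergeProfiles(profiles ...map[string]string) map[string]string {
	merged := map[string]string{}
	for _, p := range profiles {
		for k, v := range p {
			merged[k] = v // later profiles override earlier ones
		}
	}
	return merged
}

func main() {
	ovn := map[string]string{"agent": "ovn", "debug": "false"}
	sriov := map[string]string{"agent": "sriov"} // conflicts with ovn on "agent"
	fmt.Println(mergeProfiles(ovn, sriov))
	// map[agent:sriov debug:false]
}
```

Under the single-profile design the conflict never arises: the deployer creates one combined SRIOV_OVN profile holding both configurations, at the cost of more profiles.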

@gibizer gibizer marked this pull request as draft July 12, 2023 18:09
@gibizer
Contributor Author

gibizer commented Jul 14, 2023

This is no longer applicable, based on the decision that service operators should not run or trigger any ansible execution; instead, DataPlaneService CRs will do that. Such CRs force the use of DataPlaneRole for any kind of compute grouping. We might want to revisit the resulting Role explosion later.

@gibizer gibizer closed this Jul 14, 2023
5 participants