
✨ getOrCreatePort: add support to configure QoS profile ID on ports #1008

Closed · EmilienM wants to merge 1 commit

Conversation

EmilienM (Contributor)

What this PR does / why we need it:

Adding support to configure Neutron ports with a specific QoS profile
ID:
https://docs.openstack.org/api-ref/network/v2/?expanded=show-port-details-detail#id54

This feature is already supported by Gophercloud:
https://github.com/gophercloud/gophercloud/blob/master/openstack/networking/v2/extensions/qos/policies/requests.go

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #1007

netlify bot commented Sep 23, 2021

✔️ Deploy Preview for kubernetes-sigs-cluster-api-openstack ready!

🔨 Explore the source changes: 49f70c4

🔍 Inspect the deploy log: https://app.netlify.com/sites/kubernetes-sigs-cluster-api-openstack/deploys/615cbedb68868200070f2b28

😎 Browse the preview: https://deploy-preview-1008--kubernetes-sigs-cluster-api-openstack.netlify.app

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Sep 23, 2021
@k8s-ci-robot (Contributor)

Hi @EmilienM. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Sep 23, 2021
@EmilienM EmilienM changed the title getOrCreatePort: add support to configure QoS profile ID on ports ✨ getOrCreatePort: add support to configure QoS profile ID on ports Sep 23, 2021
@EmilienM (Contributor, Author)

/ok-to-test

@k8s-ci-robot (Contributor)

@EmilienM: Cannot trigger testing until a trusted user reviews the PR and leaves an /ok-to-test message.

In response to this:

> /ok-to-test

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mdbooth (Contributor) commented Sep 23, 2021

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Sep 23, 2021
@mdbooth (Contributor) commented Sep 23, 2021

/assign

@jichenjc (Contributor)

/approve

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 23, 2021
@jichenjc (Contributor)

/approve cancel

Oops, the failure is valid; it needs an update:
pkg/cloud/services/networking/port.go:145:26: invalid operation: portOpts.QoSPolicyID != nil (mismatched types string and nil)
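For context, the compile error above arises because a plain string field cannot be compared with nil; the check either needs to compare against the empty string, or the field needs to be a pointer. A minimal sketch of the pointer approach, using a hypothetical simplified type rather than the actual CRD:

```go
package main

import "fmt"

// PortOpts is a hypothetical, simplified stand-in for the options type
// under discussion. With *string, "no policy requested" (nil) is
// distinguishable from an explicitly empty ID, and `!= nil` compiles.
type PortOpts struct {
	QoSPolicyID *string
}

// resolveQoSPolicyID returns the requested policy ID, or "" when unset.
func resolveQoSPolicyID(opts PortOpts) string {
	if opts.QoSPolicyID != nil { // pointer compared with nil: valid
		return *opts.QoSPolicyID
	}
	return ""
}

func main() {
	id := "qos-policy-uuid"
	fmt.Println(resolveQoSPolicyID(PortOpts{QoSPolicyID: &id})) // qos-policy-uuid
	fmt.Printf("%q\n", resolveQoSPolicyID(PortOpts{}))          // ""
}
```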

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: EmilienM
To complete the pull request process, please assign neolit123 after the PR has been reviewed.
You can assign the PR to them by writing /assign @neolit123 in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot removed the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 23, 2021
@mdbooth (Contributor) left a comment


@EmilienM Can you think of a way we could reasonably test this? Perhaps creating a QoS policy in an E2E test and creating a port to use it?

Review thread on pkg/cloud/services/networking/port.go (resolved)
@mdbooth (Contributor) commented Sep 24, 2021

@EmilienM You need to run make generate

@EmilienM (Contributor, Author)

> @EmilienM Can you think of a way we could reasonably test this? Perhaps creating a QoS policy in an E2E test and creating a port to use it?

Yes, this will require the qos extension to be enabled in Neutron. I can look at that at some point, certainly.

@jichenjc (Contributor)

> @EmilienM Can you think of a way we could reasonably test this? Perhaps creating a QoS policy in an E2E test and creating a port to use it?
>
> Yes, this will require the qos extension to be enabled in Neutron. I can look at that at some point, certainly.

We can customize the devstack; it should not be that hard. Some changes were already done here, which you can refer to:
https://github.com/kubernetes-sigs/cluster-api-provider-openstack/pull/1009/files

@mdbooth (Contributor) commented Sep 27, 2021

> @EmilienM Can you think of a way we could reasonably test this? Perhaps creating a QoS policy in an E2E test and creating a port to use it?
>
> Yes, this will require the qos extension to be enabled in Neutron. I can look at that at some point, certainly.
>
> We can customize the devstack; it should not be that hard. Some changes were already done here, which you can refer to:
> https://github.com/kubernetes-sigs/cluster-api-provider-openstack/pull/1009/files

Looks like QoS is already in there.

@EmilienM (Contributor, Author)

I'm looking at e2e testing today.

@EmilienM (Contributor, Author)

/test pull-cluster-api-provider-openstack-e2e-test

@k8s-ci-robot k8s-ci-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Oct 4, 2021
@EmilienM (Contributor, Author) commented Oct 4, 2021

TODO: I need to finish the e2e test to confirm that we actually get the port with the right QoS policy ID. Will do ASAP.

@mdbooth (Contributor) commented Oct 5, 2021

@EmilienM re the e2e test, could it live with the other custom ports test instead? I was hoping that test might grow into a mega test of everything.

@mdbooth (Contributor) commented Oct 5, 2021

That test failure 🤔

    Request forbidden: [POST http://10.0.2.15:9696/v2.0/qos/policies], error message: {"NeutronError": {"type": "PolicyNotAuthorized", "message": "rule:create_policy is disallowed by policy", "detail": ""}}

Is policy creation admin-only by any chance? That shouldn't be an insurmountable problem, but it might make it a bit different.

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Oct 5, 2021
policyID, err = shared.GetOpenStackQosPolicyID(e2eCtx, policyName)
Expect(err).To(BeNil())
Expect(policyID).NotTo(BeNil(), "Neutron QoS policy does not exist: %s", policyName)
os.Setenv("CLUSTER_QOS_POLICY_ID", policyID)
@EmilienM (Contributor, Author) commented:

I'm missing the bits later in this test to confirm that the port has been created with a specific policy ID.
@mdbooth I might need your help on this one.

@iamemilio (Contributor) commented Oct 12, 2021

@EmilienM you should be able to unpack the port into a struct that embeds the port struct together with the QoS policy extension. I know OpenStack supports returning the QoS policy for ports, and this is the gophercloud way to do it. It would look something like this:

type qosPort struct {
  ports.Port
  policies.QoSPolicyExt
}

opts := ports.ListOpts{ /* filters */ }
allPages, err := ports.List(client, opts).AllPages()
if err != nil {
  ...
}
portList := []qosPort{}
if err := ports.ExtractPortsInto(allPages, &portList); err != nil {
  ...
}

Expect(portList[0].QoSPolicyExt.QoSPolicyID).To(Equal(<uuid>))

@k8s-ci-robot (Contributor)

@EmilienM: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 27, 2021
@hidekazuna (Contributor)

@EmilienM The v1beta1 API has been merged (#1047). Please update this PR to use the v1beta1 API.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 16, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 18, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closed this PR.

In response to this:

> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@EmilienM EmilienM deleted the neutron_qos branch April 19, 2022 18:10
@EmilienM EmilienM restored the neutron_qos branch April 19, 2022 18:10
Labels
cncf-cla: yes · lifecycle/rotten · needs-rebase · ok-to-test · size/M
Development

Successfully merging this pull request may close these issues.

Adding support to create Neutron ports with QoS policy ID
7 participants