
Allow automatic recreation on LB ERROR state to be disabled #2596

Open

wants to merge 1 commit into base: master

Conversation

baurmatt

What this PR does / why we need it:
Allow automatic recreation on LB ERROR state to be disabled. This is helpful, for example, when you are trying to debug an LB in the ERROR state.

Which issue this PR fixes (if applicable):
fixes #

Special notes for reviewers:

Release note:

NONE

@k8s-ci-robot added the release-note-none (Denotes a PR that doesn't merit a release note.) and cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.) labels on May 15, 2024
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign anguslees for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot
Contributor

Welcome @baurmatt!

It looks like this is your first PR to kubernetes/cloud-provider-openstack 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/cloud-provider-openstack has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @baurmatt. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the needs-ok-to-test (Indicates a PR that requires an org member to verify it is safe to test.) and size/S (Denotes a PR that changes 10-29 lines, ignoring generated files.) labels on May 15, 2024
@baurmatt
Author

@dulek Mentioning you, as you initially implemented this functionality in #2341. Thanks btw, it helps in all other cases - just debugging got really hard :D

Comment on lines +310 to +311
// Allow users to disable automatic recreation on Octavia ERROR state
recreateOnError := getBoolFromServiceAnnotation(service, ServiceAnnotationLoadBalancerRecreateOnError, true)
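
For illustration, here is a rough sketch of how such an annotation could gate the existing delete-and-recreate path. Only getBoolFromServiceAnnotation and ServiceAnnotationLoadBalancerRecreateOnError come from the PR; the helper function, its signature, and its call site are hypothetical and assume the package's existing imports (corev1 for the Service type, Gophercloud's Octavia loadbalancers package for the LB type).

// Hypothetical helper, not the PR's actual diff: decides whether an LB in
// Octavia's ERROR state should be deleted and recreated.
func shouldRecreateErroredLB(service *corev1.Service, lb *loadbalancers.LoadBalancer) (bool, error) {
    // Annotation helper and constant from the PR; the default stays true so
    // existing behaviour is unchanged unless the user opts out.
    recreateOnError := getBoolFromServiceAnnotation(service, ServiceAnnotationLoadBalancerRecreateOnError, true)
    if lb.ProvisioningStatus != "ERROR" {
        return false, nil
    }
    if !recreateOnError {
        // Keep the broken LB around so operators can inspect it in Octavia.
        return false, fmt.Errorf("load balancer %s is in ERROR state and automatic recreation is disabled by annotation", lb.ID)
    }
    // Caller deletes the LB and lets the next reconciliation recreate it.
    return true, nil
}
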
Contributor

This is exposing a debugging function to the end users. I think I'd rather make it an option in the OCCM configuration, so that the administrator can turn it on and investigate what's happening. What do you think?
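
For comparison, a rough sketch of what that cloud-config alternative could look like; the field and option names below are purely illustrative and do not exist in OCCM today.

// Illustrative addition to OCCM's existing [LoadBalancer] options; an
// administrator would set something like "recreate-on-error = false" in
// cloud-config. Field and tag names are hypothetical.
type LoadBalancerOpts struct {
    // ... existing OCCM load balancer options ...

    // RecreateOnError controls whether a load balancer stuck in Octavia's
    // ERROR state is automatically deleted and recreated. Defaults to true.
    RecreateOnError bool `gcfg:"recreate-on-error"`
}
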

Author

@dulek Thanks for your review! :) In general I agree, but in our Managed Kubernetes setup users wouldn't be able to change the OCCM configuration, because it isn't exposed to them in an editable way. Implementing it as an OCCM configuration option would also affect all LBs, while my implementation limits the behaviour to a single LB.

Contributor

um.. does this mean every configuration option we have for OCCM will also not be available for managed k8s?

Not sure what other situations like this we have, and how we handle them?

Author (@baurmatt, May 23, 2024)

um.. does this mean every configuration option we have for OCCM will also not be available for managed k8s?

Just to clarify: when I'm talking about the OCCM config, I mean the cloud-config file/secret. In our managed k8s setup we don't have (persistent) writable access to it, because the cloud provider runs on the master nodes, which we don't have access to. cloud-config is accessible read-only on the worker nodes for csi-cinder-nodeplugin. So yes, other options aren't (freely) configurable for us either.

Contributor

@baurmatt: I don't believe users of managed K8s should really be trying to debug things on the Octavia side. Can you provide an example where keeping the Octavia LB in ERROR state aids debugging? Regular users should not have access to amphora resources (admin-only API) or to the Nova VMs backing the amphoras (these should live in a service tenant). The LB itself does not expose any debugging information. The Nova VM does expose the error, but most of the time it's a NoValidHost anyway, so scheduler logs are required to do the debugging.

Author

@dulek For background: I created a LoadBalancer Service with loadbalancer.openstack.org/network-id: $uuid and loadbalancer.openstack.org/member-subnet-id: $uuid, which failed because one of the UUIDs was wrong. As a result, the LB was recreated over and over. This was hard for the OpenStack team to debug, because they only had seconds to look at the Octavia LB before it was deleted by the cloud provider. Keeping it in ERROR state allowed for easier debugging on both my side and the OpenStack team's side.

Contributor

Was the error missing from kubectl describe <svc-name>? We emit events that should be enough to debug such problems. What was Octavia returning? Just a normal 201?

Author

@dulek It only shows that the load balancer went into ERROR state:

$ kubectl describe service cloudnative-pg-cluster-primary2
...
Events:
  Type     Reason                  Age                   From                Message
  ----     ------                  ----                  ----                -------
  Warning  SyncLoadBalancerFailed  29m                   service-controller  Error syncing load balancer: failed to ensure load balancer: error creating loadbalancer kube_service_2mqlgjjphg_cloudnative-pg-cluster_cloudnative-pg-cluster-primary2: loadbalancer has gone into ERROR state
  Normal   EnsuringLoadBalancer    24m (x7 over 31m)     service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed  24m (x6 over 29m)     service-controller  Error syncing load balancer: failed to ensure load balancer: load balancer 71f3ac5c-6740-4fca-8d57-a62d30697629 is not ACTIVE, current provisioning status: ERROR
  Normal   EnsuringLoadBalancer    2m34s (x10 over 22m)  service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed  2m34s (x10 over 22m)  service-controller  Error syncing load balancer: failed to ensure load balancer: load balancer 71f3ac5c-6740-4fca-8d57-a62d30697629 is not ACTIVE, current provisioning status: ERROR

Contributor

Hm, I see, though in this case you end up with an LB in ERROR state anyway. I still don't see how keeping it there helps with debugging. Maybe seeing the full LB resource helps, since then you can spot the wrong ID, but we could solve that use case by making sure that, at some more granular log level, we log the full request Gophercloud makes to Octavia, instead of adding a new option.

I can also see value in CPO validating network and subnet IDs before creating the LB.
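
A rough sketch of what such a pre-flight validation could look like, assuming Gophercloud v1-style networking calls; the function and its error messages are illustrative and not part of CPO today.

import (
    "fmt"

    "github.com/gophercloud/gophercloud"
    "github.com/gophercloud/gophercloud/openstack/networking/v2/networks"
    "github.com/gophercloud/gophercloud/openstack/networking/v2/subnets"
)

// validateNetworkAnnotations is hypothetical: it checks that the network and
// subnet IDs taken from the Service annotations actually exist in Neutron
// before the LB is created, so a typo fails fast instead of producing an LB
// in ERROR state.
func validateNetworkAnnotations(netClient *gophercloud.ServiceClient, networkID, subnetID string) error {
    if networkID != "" {
        if _, err := networks.Get(netClient, networkID).Extract(); err != nil {
            return fmt.Errorf("network-id %s from Service annotation appears invalid: %v", networkID, err)
        }
    }
    if subnetID != "" {
        if _, err := subnets.Get(netClient, subnetID).Extract(); err != nil {
            return fmt.Errorf("member-subnet-id %s from Service annotation appears invalid: %v", subnetID, err)
        }
    }
    return nil
}
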

Author

Sorry for the late reply, I've been on vacation. It didn't help me directly, because as a user I still wasn't able to get more information. But once I was able to give our cloud operations team the ID, they were able to debug the problem and tell me the reason.

@jichenjc
Contributor

/ok-to-test

@k8s-ci-robot added the ok-to-test label (Indicates a non-member PR verified by an org member that is safe to test.) and removed the needs-ok-to-test label on May 23, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Sep 29, 2024
@baurmatt
Author

@dulek @jichenjc How should we proceed from here? :)

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Sep 30, 2024
Labels

  • cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.)
  • ok-to-test (Indicates a non-member PR verified by an org member that is safe to test.)
  • release-note-none (Denotes a PR that doesn't merit a release note.)
  • size/S (Denotes a PR that changes 10-29 lines, ignoring generated files.)
5 participants