🌱 Adding logging verbosity levels for pkg/cloud/services/loadbalancer/loadbalancer.go #1636
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: anastaruno
```diff
@@ -169,7 +169,7 @@ func (s *Service) getOrCreateLoadBalancer(openStackCluster *infrav1.OpenStackClu
 		return lb, nil
 	}
 
-	s.scope.Logger().Info(fmt.Sprintf("Creating load balancer in subnet: %q", subnetID), "name", loadBalancerName)
+	s.scope.Logger().V(2).Info("Creating load balancer in subnet", "subnetID", subnetID, "name", loadBalancerName)
```
This can be just "id" here.
But subnet is not the main subject of the message, load balancer is. Hence, no prefixes for load balancer, and "subnetID" for subnet. That was my logic
Yep. We should also update the documentation with the upstream links we used, plus any supplemental conventions we've decided on such as this.
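The key-naming convention being agreed on here (the message's main subject gets unprefixed keys such as "name", while related objects get prefixed keys such as "subnetID") can be sketched with a minimal logr-style formatter. This is an illustrative stand-in only: `formatLog` is a hypothetical helper, not CAPO's real `s.scope.Logger()`, and the ID values are made up.

```go
package main

import (
	"fmt"
	"strings"
)

// formatLog renders a logr-style structured message if the requested level
// is within the configured verbosity; ok reports whether it would be emitted.
// (Hypothetical helper for illustration; CAPO's real logger is a logr.Logger.)
func formatLog(level, verbosity int, msg string, keysAndValues ...interface{}) (string, bool) {
	if level > verbosity {
		return "", false
	}
	var b strings.Builder
	b.WriteString(msg)
	for i := 0; i+1 < len(keysAndValues); i += 2 {
		fmt.Fprintf(&b, " %v=%q", keysAndValues[i], keysAndValues[i+1])
	}
	return b.String(), true
}

func main() {
	const verbosity = 2 // e.g. running the controller with --v=2

	// The load balancer is the message's subject, so its name is unprefixed;
	// the subnet is a related object, so its ID gets the prefixed "subnetID" key.
	if s, ok := formatLog(2, verbosity, "Creating load balancer in subnet",
		"subnetID", "05e36a0c", "name", "k8s-clusterapi-lb"); ok {
		fmt.Println(s)
	}

	// A V(3) message is suppressed at verbosity 2.
	if _, ok := formatLog(3, verbosity, "Waiting for load balancer", "name", "k8s-clusterapi-lb"); !ok {
		fmt.Println("V(3) message suppressed at --v=2")
	}
}
```

With this convention a reader grepping logs can always find the subject under "name" and disambiguate related objects by their prefixed keys.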
```diff
@@ -199,7 +199,7 @@ func (s *Service) getOrCreateListener(openStackCluster *infrav1.OpenStackCluster
 		return listener, nil
 	}
 
-	s.scope.Logger().Info("Creating load balancer listener", "name", listenerName, "lb-id", lbID)
+	s.scope.Logger().V(2).Info("Creating load balancer listener", "name", listenerName, "loadBalancerID", lbID)
```
Just "id" here too.
Same logic here. Load balancer is not the main subject
```diff
@@ -267,7 +267,7 @@ func (s *Service) getOrUpdateAllowedCIDRS(openStackCluster *infrav1.OpenStackClu
 	listener.AllowedCIDRs = capostrings.Unique(listener.AllowedCIDRs)
 
 	if !reflect.DeepEqual(allowedCIDRs, listener.AllowedCIDRs) {
-		s.scope.Logger().Info("CIDRs do not match, start to update listener", "expected CIDRs", allowedCIDRs, "load balancer existing CIDR", listener.AllowedCIDRs)
+		s.scope.Logger().V(3).Info("CIDRs do not match, updating listener", "expectedCIDRs", allowedCIDRs, "currentCIDRs", listener.AllowedCIDRs)
```
If V(2) informs about changes in the system, it's reasonable this should be V(2) too.
Is it important or useful enough to be V(2)? Seems like something that the system does on its own without impacting the user much
I agree with @dulek here. This branch is making an OpenStack API call and reconfiguring the listener (see line 275). This should probably log at V(2).
```diff
@@ -539,7 +539,7 @@ func (s *Service) DeleteLoadBalancerMember(openStackCluster *infrav1.OpenStackCl
 		return err
 	}
 	if pool == nil {
-		s.scope.Logger().Info("Load balancer pool does not exist", "name", lbPortObjectsName)
+		s.scope.Logger().V(2).Info("Load balancer pool does not exist", "name", lbPortObjectsName)
```
Seems like V(3) or even V(4).
Agreed, missed that one
Nice, this is really thoughtful. I have some queries for @dulek inline as he knows OpenStack load balancers much better than I do.
```diff
@@ -539,7 +539,7 @@ func (s *Service) DeleteLoadBalancerMember(openStackCluster *infrav1.OpenStackCl
 		return err
 	}
```
@dulek Not related to this PR, but this looks pretty janky. Does this need an issue to track it?
We should probably just make a call and ignore 404's? Is this what you mean here?
```diff
@@ -539,7 +539,7 @@ func (s *Service) DeleteLoadBalancerMember(openStackCluster *infrav1.OpenStackCl
 		return err
 	}
 	if pool == nil {
-		s.scope.Logger().Info("Load balancer pool does not exist", "name", lbPortObjectsName)
+		s.scope.Logger().V(4).Info("Load balancer pool does not exist", "name", lbPortObjectsName)
```
V(4) looks correct to me, but I'm interested to know what @dulek thinks. I don't know this code super well: would it be 'normal' for the lb pool to not exist, i.e. an expected state of affairs, even if only briefly? If so, V(4) is appropriate. If it would be unusual then perhaps V(3) would be more appropriate.
Nah, you're right, in this context the pool not existing at all is pretty unusual.
PR needs rebase.
@anastaruno This needs a rebase.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. /lifecycle stale
/lifecycle rotten
I'm convinced that we need to fix logging, but not just for this file; it should be done consistently across all the files.
/close
@k8s-triage-robot: Closed this PR.
What this PR does / why we need it:
This PR is part of the process to improve logging in CAPO (issue #1314). It adds the proper verbosity levels to log messages in loadbalancer.go, as described in these guidelines.
Special notes for your reviewer:
In this file, I mainly used verbosity levels 2 and 3. Following the guidelines, I used level 2 for the creation/deletion of important objects (load balancer, listener, pool, monitor).
I used level 3 for 'reconciling' and 'waiting for' messages. Those are slightly less important for understanding the process flow or debugging, but they still signify changes and deserve developers' attention.
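The level scheme described above could be summarized as a small lookup, purely as an illustration of the convention stated in this PR (the `verbosityFor` helper and its message kinds are hypothetical, not part of the actual change):

```go
package main

import "fmt"

// verbosityFor sketches the level scheme used in this PR: V(2) for
// creation/deletion of important objects, V(3) for "reconciling" and
// "waiting for" progress messages, and a higher level for routine
// no-op states (the exact level for those was still under review).
func verbosityFor(kind string) int {
	switch kind {
	case "create", "delete":
		return 2
	case "reconcile", "wait":
		return 3
	default:
		return 4 // e.g. "resource does not exist" in a delete path
	}
}

func main() {
	for _, kind := range []string{"create", "reconcile", "wait", "already-absent"} {
		fmt.Printf("%s -> V(%d)\n", kind, verbosityFor(kind))
	}
}
```

Running a controller with `--v=2` would then show only object creations and deletions, while `--v=3` adds the reconcile/wait progress messages.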
/hold