loadbalancingexporter causes the collector to accept data and then reject it, producing otelcol_receiver_refused_spans #32482
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
I printed the error log in the OTLP receiver export path:
lb.ring.item is empty
k8sResolver.resolve backends id empty
/label exporter/loadbalancing waiting-for-author
Pinging code owners for exporter/loadbalancing: @jpkrohling. See Adding Labels via Comments if you do not have permissions to add labels yourself.
The backends become empty because the k8s resolver handles a Pod update event as a delete followed by an add. In the window between the delete and the add, no backend can be found for the exporter, resulting in refused spans.
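To illustrate the race described above, here is a minimal sketch (not the actual resolver code; the type and function names are invented for illustration) of why handling an update as delete-then-add leaves a window with zero backends, while an in-place swap under a single lock does not:

```go
package main

import (
	"fmt"
	"sync"
)

// resolver loosely mimics a k8s-based resolver that tracks ready pod endpoints.
type resolver struct {
	mu       sync.Mutex
	backends []string
}

// onUpdateDeleteThenAdd models the problematic handling: the update event is
// processed as a delete followed by an add, so between the two critical
// sections a concurrent export can observe an empty backend list and the
// span is refused.
func (r *resolver) onUpdateDeleteThenAdd(oldEP, newEP string) {
	r.mu.Lock()
	r.backends = removeEndpoint(r.backends, oldEP) // delete step: list may become empty
	r.mu.Unlock()
	// <-- window in which len(r.backends) can be 0
	r.mu.Lock()
	r.backends = append(r.backends, newEP) // add step
	r.mu.Unlock()
}

// onUpdateInPlace swaps the endpoint under one lock, so readers never
// observe an empty list during an update.
func (r *resolver) onUpdateInPlace(oldEP, newEP string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	for i, b := range r.backends {
		if b == oldEP {
			r.backends[i] = newEP
			return
		}
	}
	r.backends = append(r.backends, newEP)
}

func removeEndpoint(s []string, v string) []string {
	out := make([]string, 0, len(s))
	for _, b := range s {
		if b != v {
			out = append(out, b)
		}
	}
	return out
}

func main() {
	r := &resolver{backends: []string{"10.0.0.1:4317"}}
	r.onUpdateInPlace("10.0.0.1:4317", "10.0.0.2:4317")
	fmt.Println(r.backends)
}
```

The fix discussed in this thread amounts to moving from the first pattern to something like the second, so the exporter's hash ring is never rebuilt from an empty endpoint set during a routine Pod update.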
@linliaoy Can you set the collector's log level to debug, capture some logs, and paste them here?
@JaredTan95 Is it OK if I print the log with fmt?
@JaredTan95 I have changed the k8s update-event handling, and I have not seen a rejected span for two days now.
Thank you! I left a couple of questions on the PR. |
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure which component this issue relates to, please ping the code owners. See Adding Labels via Comments if you do not have permissions to add labels yourself.
This issue has been closed as inactive because it has been stale for 120 days with no activity. |
Component(s)
exporter/loadbalancing
What happened?
Description
A lot of receiver-refused spans occur when I use the Kubernetes service (svc) resolver of loadbalancingexporter. I added a groupbytrace processor to fix this temporarily, but the collector now has a much larger CPU and memory footprint, and exporter_send_failed_spans also increases. Another collector that does not use loadbalancingexporter works fine.
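For context (the reporter left the configuration section empty), a loadbalancingexporter setup using the Kubernetes service resolver looks roughly like the sketch below; the service name, namespace, and port are placeholders, not taken from the reporter's environment:

```yaml
exporters:
  loadbalancing:
    routing_key: "traceID"
    protocol:
      otlp:
        tls:
          insecure: true
    resolver:
      k8s:
        # placeholder: <service>.<namespace> of the backend collectors
        service: otel-backend.observability
        ports:
          - 4317
```

With this resolver, the exporter rebuilds its consistent-hash ring from the endpoints the k8s resolver reports, which is why an empty backend list during a Pod update event surfaces as refused spans at the receiver.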
Steps to Reproduce
Expected Result
Actual Result
Collector version
0.97.0
Environment information
Environment
OS: kernel 4.19.91
Compiler (if manually compiled): go 1.21
OpenTelemetry Collector configuration
Log output
Additional context
No response