Add HTTP exporting to OpenTelemetry Tracer #28454
Conversation
Signed-off-by: Alex Ellis <[email protected]>
CC @envoyproxy/api-shepherds: Your approval is needed for changes made to the API.
/retest
Looks like the only failing presubmit was Verify/examples.
api review
// See https://opentelemetry.io/docs/specs/otlp/#otlphttp.
HttpFormat http_format = 2;

// Optional path. If omitted, the default path of "/v1/traces" will be used.
The HttpUri contains path and host as part of URI, so just derive from there?
BINARY_PROTOBUF = 0;

// JSON Protobuf Encoding.
JSON_PROTOBUF = 1;
Mark this as not implemented, since it is not in the implementation?
@@ -17,6 +18,31 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// Configuration for the OpenTelemetry tracer.
// [#extension: envoy.tracers.opentelemetry]
message OpenTelemetryConfig {
// Configuration for sending traces to the collector over HTTP. Note that
Would it make sense to also refactor the config in the same way as was done for the exporters? For example, keeping the base config in here, but then adding new types for OtlpHttpExporterConfig and OtlpGrpcExporterConfig? That's also what OTel C++ does (Http, gRPC).
Otherwise we will have a config with HTTP-related things (HTTP format, HTTP headers) that are not relevant for gRPC. The same would apply to possible future gRPC-only config.
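A rough sketch of what that split could look like (all message and field names below are illustrative, mirroring the suggestion above; they are not the actual Envoy API):

```proto
// Illustrative sketch only, not the real Envoy proto definitions.
message OtlpHttpExporterConfig {
  // Collector endpoint reached over HTTP.
  core.v3.HttpUri http_uri = 1;
  // Encoding used for the OTLP payload.
  HttpFormat http_format = 2;
  // Extra headers (e.g. authentication) sent with each export request.
  repeated core.v3.HeaderValue headers = 3;
}

message OtlpGrpcExporterConfig {
  // Collector gRPC service.
  core.v3.GrpcService grpc_service = 1;
}

message OpenTelemetryConfig {
  string service_name = 1;
  // Exactly one exporter; HTTP-only knobs stay out of the gRPC path.
  oneof exporter {
    OtlpGrpcExporterConfig grpc_exporter = 2;
    OtlpHttpExporterConfig http_exporter = 3;
  }
}
```

The upside of the oneof shape is that gRPC users never see HTTP-only fields; the downside, as discussed elsewhere in this thread, is that restructuring the existing field is a breaking change.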
}

// The upstream cluster that will receive OTLP traces over HTTP.
core.v3.HttpUri http_uri = 1;
Not sure if this is appropriate for this PR, but the OTel spec defines default URL/port values for the gRPC and HTTP endpoints of the OTel collector: https://github.com/open-telemetry/opentelemetry-proto/blob/main/docs/specification.md#otlphttp-default-port.
OTel users using these exporters in Envoy might be surprised that they have to configure these, since in other OTel components they are set out of the box.
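For context, the spec defaults referenced above are port 4317 for OTLP/gRPC and port 4318 for OTLP/HTTP, with traces at /v1/traces. A sketch of what a user of this PR currently has to spell out explicitly (the URI and cluster name here are made-up examples, not defaults applied by Envoy):

```yaml
# HttpUri for the HTTP exporter; all values must be set by hand today.
http_uri:
  uri: "http://opentelemetry-collector:4318/v1/traces"  # 4318 = OTLP/HTTP spec default port
  cluster: opentelemetry_collector                      # user-defined upstream cluster name
  timeout: 0.250s
```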
// Optional path. If omitted, the default path of "/v1/traces" will be used.
string collector_path = 3;
I'm not sure it helps with the design/impl of the HTTP headers configuration for the HTTP exporter here, but OTel C++ defines it like this: https://github.com/open-telemetry/opentelemetry-cpp/blob/0c5f90dea5147461df4ff7d32cbb5d42e5011daa/exporters/otlp/include/opentelemetry/exporters/otlp/otlp_environment.h#L141, which is part of this struct: https://github.com/open-telemetry/opentelemetry-cpp/blob/main/exporters/otlp/include/opentelemetry/exporters/otlp/otlp_http_exporter_options.h#L49
@AlexanderEllis Hi! I did some experimentation with the changes here. I basically used the sandbox from https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/opentelemetry#step-1-build-the-sandbox and replaced the Envoy binary on the containers with the one built locally. I struggled a bit to configure it, but after some trial and error I got it to work. Here are some points which are unclear to me:
But it seems it's not being used at all. I realized that the one coming from the cluster
General question: If all the above is true (the
@AlexanderEllis what are the next steps for this PR? (friendly maintainer on-call stale PR ping ;)
Hi @htuch I'm interested in getting this feature into Envoy, but I have some questions. I asked over on Slack but didn't get much response there, so maybe you can help clarify? I could also work on it, if @AlexanderEllis doesn't have the cycles. Basically it's my question from the last comment: the HTTP exporter here depends on a cluster in order to obtain an HTTP client that does the POST request with the OTLP data. Is there another way to obtain an HTTP client in Envoy, other than via the cluster "route"? I'm a beginner in Envoy, so most of the concepts are new to me. I have read the docs but it's not clear if this is the way it works. The reason I'm asking is that maybe people have an "agent" running locally, like the OTel collector, but they might not have the collector and instead want to send the telemetry data directly to their back-end that's available publicly on the internet. I think that's the main goal of having an HTTP exporter like this. Does it still make sense to add the public back-end host name in a
For example, is it possible to have a config like this, without a
You can easily point clusters at Internet hosts with something like strict/logical DNS clusters.
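A sketch of what such a cluster could look like for a public OTLP back-end (the cluster name and host name are made-up examples):

```yaml
clusters:
- name: otlp_backend
  # STRICT_DNS resolves the host name and load-balances across all returned
  # addresses; LOGICAL_DNS is an alternative for hosts behind changing DNS.
  type: STRICT_DNS
  connect_timeout: 1s
  dns_lookup_family: V4_ONLY
  load_assignment:
    cluster_name: otlp_backend
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: otlp.example.com   # public collector/back-end host
              port_value: 443
  # TLS for an Internet-facing endpoint.
  transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
      sni: otlp.example.com
```

With such a cluster in place, the exporter's HttpUri can simply reference it by name, so a local agent is not required.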
friendly ping @AlexanderEllis I think this is pending changes on your end. /wait
Just to reiterate, I could take this over and continue the work if @AlexanderEllis does not have the cycles now. We are interested in getting this in, especially because of #28851
Would be great to hear from @AlexanderEllis on how we can collaborate with @joaopgrassi here. If @AlexanderEllis doesn't have cycles to push this PR right now, we can have @joaopgrassi take it over.
Hey @htuch and @joaopgrassi ! Sorry for the delay here, I ended up extremely busy between work and personal life, and my OSS Envoy hobby work fell off the map. Maybe some day I'll be doing it for work again :) @joaopgrassi if you'd like to take over this PR, that would be totally fine with me! That would also be a great introduction for you to an Envoy PR; I know you mentioned in the other issue that you're new to making changes to the project. I'm also happy to advise as possible, with the caveat that I may be in and out. I think it's pretty much ready to go as is, with the comments to address around the proto config.
@AlexanderEllis totally understandable, no problem! Alright! If it's OK with you, I will fork your PR and submit a new one then. It would be awesome to get your feedback and be able to ask you things, since you wrote the initial OTel things in Envoy. Are you on the Envoy CNCF Slack? Would be great to have a channel to chat, of course with no expectations on your side :). Thanks again
Hi all, the new PR is open! #29207
This pull request has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in 7 days if no further activity occurs. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!
This pull request has been automatically closed because it has not had activity in the last 37 days. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!
Commit Message: Add HTTP exporting to OpenTelemetry Tracer
Additional Description: This PR refactors the OpenTelemetry exporting to allow for a second option: OTLP spans over HTTP, according to the OTLP HTTP spec. It refactors the existing gRPC exporter into a general exporter that is then used by the gRPC and the HTTP exporters. The HTTP exporter is relatively simple: it accesses the thread local cluster for the collector, then sends it a request.
Risk Level: Low (new, optional functionality for existing tracing extension)
Testing: Unit test and manual testing with OpenTelemetry collector
Fixes #26352
Meta notes about the PR:
- oneof, with either gRPC or HTTP, but I hesitate to make a breaking change, opting instead for an optional new field that is ignored if the gRPC config is present. Happy to hear suggestions there.
- getThreadLocalCluster, but would love recommendations there as well.
- Authorization headers to the request to the collector? Header mutation as part of the upstream_http_protocol_options in the collector cluster config? Something new here? Wasn't sure, but open to ideas.
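For readers unfamiliar with what the HTTP exporter emits on the wire: per the OTLP/HTTP spec, an export is a single POST of the serialized trace payload to the /v1/traces path, with the Content-Type header indicating the encoding (application/x-protobuf for binary, application/json for JSON). A minimal standalone sketch in Python, independent of Envoy's implementation; the endpoint is a made-up example and the payload is a placeholder, not a real protobuf message:

```python
import urllib.request

def build_otlp_http_request(base_url: str, payload: bytes,
                            json_encoding: bool = False) -> urllib.request.Request:
    """Build (but do not send) an OTLP/HTTP trace export request.

    Per the OTLP spec, traces go to POST <base>/v1/traces with
    Content-Type application/x-protobuf (binary) or application/json.
    """
    content_type = "application/json" if json_encoding else "application/x-protobuf"
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/v1/traces",  # "/v1/traces" is the spec default path
        data=payload,
        method="POST",
        headers={"Content-Type": content_type},
    )

req = build_otlp_http_request("http://collector.example:4318", b"\x00")
print(req.full_url)     # -> http://collector.example:4318/v1/traces
print(req.get_method()) # -> POST
```

Envoy's exporter does the equivalent through the thread-local cluster's async HTTP client rather than a blocking library call, but the request shape on the wire is the same.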