refactor(http1): Quiet trace in opentelemetry contexts #2671
Conversation
Setting up tracing+opentelemetry+jaeger, with only a root span, produces no trace data except for output produced by these two `trace_span!` usages. The test stack was an application using Tokio+Hyper, so there should either have been a lot of trace output or none at all.
I should note I've proposed this change because it appears to be the only place in hyper where the `*_span!` flavor of these macros is used. However, it would, IMO, be preferable to settle on one consistent style across hyper rather than mixing spans and events. Unless there is a trick I'm missing in setting up tracing+opentelemetry+jaeger?
Use the correct trace! syntax not span! syntax
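A sketch of the kind of change the commit message describes; the identifiers are assumed for illustration, not copied from hyper's source:

```rust
// Before (sketch; span name assumed): a Span is created, so span-based
// exporters such as Jaeger see it and subscribers can time it.
let _s = tracing::trace_span!("parse_headers").entered();

// After: a one-shot TRACE-level Event, matching the event style used
// elsewhere in hyper; span-based exporters no longer see anything here.
tracing::trace!("parse_headers");
```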
I'm not familiar with how tracing works with Jaeger, but the point of these spans is to allow people to view the timing of parsing and encoding headers. The change proposed here removes that.
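To illustrate the timing point, a minimal sketch (the span name is assumed): with `FmtSpan::CLOSE`, `tracing-subscriber`'s fmt layer prints each span's elapsed time when it closes, which is exactly the information a plain `trace!` event cannot carry.

```rust
use tracing_subscriber::fmt::format::FmtSpan;

fn main() {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::TRACE)
        .with_span_events(FmtSpan::CLOSE) // log busy/idle time on span close
        .init();

    let span = tracing::trace_span!("parse_headers"); // name assumed
    let _enter = span.enter();
    // ... the work being timed ...
    // when `_enter` and `span` drop, the subscriber logs the elapsed time
}
```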
@seanmonstar I found this comment from Eliza Weisman interesting on Discord.
Perhaps a better avenue to explore is use of `.instrument()`?
Two other examples from Eliza on Discord regarding spans across await points:

```rust
let span = debug_span!("request", ?request).entered();
debug!("sending request");
client.send_request(request)
    .instrument(span.exit())
    .await;
```

And you could also write:

```rust
let span = debug_span!("request", ?request);
debug!(parent: &span, "sending request");
client.send_request(request)
    .instrument(span)
    .await;
```

Apologies for formatting, sent from my phone sans glasses :)
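The `entered()`/`exit()` choreography in those examples exists to avoid a well-known pitfall; a minimal sketch of the anti-pattern they sidestep, using `tokio::time::sleep` as a stand-in for real work:

```rust
use std::time::Duration;

// Anti-pattern: holding the entered-span guard across an await point. While
// the task is suspended, the span remains "entered" on the current thread,
// so events from other tasks polled there get attributed to the wrong span
// (and the guard is !Send, so the future can't be spawned on a
// work-stealing runtime).
async fn not_like_this() {
    let _guard = tracing::debug_span!("request").entered();
    tokio::time::sleep(Duration::from_millis(10)).await; // guard held: bad
}
```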
Also possibly of interest going forward.
I wasn't either until yesterday ;)
Yes, that is true, but also: in a distributed tracing context (OpenTelemetry plus a tool such as Jaeger), spans are the only output visible from Hyper, and currently we see only these two.
Yes, but only because I assumed the absence of any other span output meant something was wrong. To be clear: my preference is to switch from events everywhere (info/warn/debug/trace) to using spans. You may need a tracing crate guru to give you advice on best practices around how to set up Hyper to support distributed and localhost debugging scenarios. However, my understanding is that you can only set up a root span once you have initialized some subscriber, hence it isn't possible for Hyper to ship with a root span in place under which all info/warn/debug/trace output would fall. Instead, it seems to me, the root span (and hence the subscriber choice) is the responsibility of the application, and libraries that want to be async-tracing-friendly use `.instrument()`. My understanding is that if Hyper used the `.instrument()` approach, then:

```rust
fn main() {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::TRACE)
        .try_init()
        .expect("Global default subscriber");
    trace!("my_app_start");
    // ...
}
```

will produce the same 'volume' of output that greets me when I have:

```rust
#[tokio::main]
async fn main() {
    // Build a Jaeger batch span processor
    // Set up a TracerProvider
    // Get a new Tracer from the TracerProvider
    // Create a layer (`layer`) with the configured Tracer
    tracing_subscriber::registry()
        .with(layer)
        .try_init()
        .expect("Global default subscriber.");
    let root = trace_span!("my_app_start").entered();
    // ...
}
```

Right now, in the first case I see lots of data. In the second case I see only output from the two spans that are the subject of this PR, nothing else. Not sure if that helps or makes sense?
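A sketch filling in the setup those comments summarize, and showing why only spans survive it. This assumes the `opentelemetry-jaeger` pipeline API of this period and `tracing-opentelemetry`'s behavior of exporting events only as annotations on an enclosing span; the service and span names are placeholders.

```rust
use tracing::{trace, trace_span};
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;

fn main() {
    // Build a Tracer that exports spans to a local Jaeger agent.
    let tracer = opentelemetry_jaeger::new_pipeline()
        .with_service_name("my_app") // placeholder name
        .install_simple()
        .expect("jaeger pipeline");

    // Bridge `tracing` data into OpenTelemetry via a subscriber layer.
    let layer = tracing_opentelemetry::layer().with_tracer(tracer);

    tracing_subscriber::registry()
        .with(layer)
        .try_init()
        .expect("Global default subscriber.");

    // A bare event has no span to attach to, so Jaeger never sees it.
    trace!("invisible to Jaeger");

    // Events emitted inside a span ride along as annotations on that span.
    let root = trace_span!("my_app_start").entered();
    trace!("recorded on the my_app_start span");
    drop(root);
}
```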
@seanmonstar, to offer some positive feedback: Hyper's current debugging story isn't exactly terrible. In order to get visibility into Hyper events we have to […]; right now this is what adding […] looks like. Obviously, […].
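A minimal sketch, assuming one common approach (the exact mechanics the author had in mind are not preserved above): opting into hyper's TRACE-level events per target with `tracing-subscriber`'s `EnvFilter`.

```rust
use tracing_subscriber::EnvFilter;

fn main() {
    tracing_subscriber::fmt()
        // Default everything to INFO, but surface hyper's internal
        // events at TRACE; "my_app" is a placeholder crate name.
        .with_env_filter(EnvFilter::new("info,my_app=debug,hyper=trace"))
        .init();
}
```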