Support "Tracing" / Spans #9415
Comments
The way we get spans in InfluxDB is to use metrics, which have the start/stop times as well as counters of what rows have been produced, when execution started and stopped, and various other things. The DataFusion metrics --> Jaeger span exporter for InfluxDB 3.0 can be found here: https://github.com/influxdata/influxdb/blob/8fec1d636e82720389d06355d93245a06a8d90ad/iox_query/src/exec/query_tracing.rs#L93-L112 It uses our own span generation system, as the Rust ecosystem didn't have a standard library that we could find when we originally wrote it. One possibility might be to refactor the code out of …
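For readers who haven't looked at that exporter, the rough idea is: once a query has run, read each operator's aggregated `MetricsSet`, pull out the start/end timestamps and counters it already records, and hand them to whatever tracing backend you use. Below is a minimal sketch of that conversion, not the actual IOx code linked above; `OperatorSpan` and `span_from_metrics` are hypothetical names invented for illustration.

```rust
use chrono::{DateTime, Utc};
use datafusion::physical_plan::metrics::{MetricValue, MetricsSet};

/// Span-like record built from one operator's metrics (hypothetical type,
/// standing in for whatever span type a tracing backend expects).
struct OperatorSpan {
    name: String,
    start: Option<DateTime<Utc>>,
    end: Option<DateTime<Utc>>,
    elapsed_compute_nanos: Option<usize>,
    output_rows: Option<usize>,
}

fn span_from_metrics(name: &str, metrics: &MetricsSet) -> OperatorSpan {
    let mut start = None;
    let mut end = None;
    // Operators that record `MetricValue::StartTimestamp` / `EndTimestamp`
    // will populate `start` / `end`. For simplicity this keeps the last value
    // seen; a real exporter would aggregate across partitions
    // (e.g. min start / max end).
    for metric in metrics.iter() {
        match metric.value() {
            MetricValue::StartTimestamp(ts) => start = ts.value(),
            MetricValue::EndTimestamp(ts) => end = ts.value(),
            _ => {}
        }
    }
    OperatorSpan {
        name: name.to_string(),
        start,
        end,
        elapsed_compute_nanos: metrics.elapsed_compute(),
        output_rows: metrics.output_rows(),
    }
}
```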
I think we could get most of the way there by implementing the following changes:
With these changes, all the data of the existing … for compatibility with code that is reading metrics from the existing …

Some benefits compared to the current metrics implementation:
Downsides:
Another downside of … But otherwise the high-level idea seems plausible, if a bunch of work. Maybe a quick POC could be done to see what it might look like in practice / how much work / change would be required.
I think this is where custom …
Agreed. What would be a good scope for a POC to be both quick to implement and broad enough to cover all the related cases? Maybe a subset of operators/streams that are used by a specific, but non-trivial, query. The existing metrics code can be left as-is until we reach a point in the implementation where we're confident that tracing can replace that functionality. Also, I think a lot of it will continue to be used for generating DataFusion-native metrics.
After spending some time with the optimizer, I think it would be a good candidate to PoC the …
Hi all, thanks for adding this and investigating the tracing crate. I'd like to suggest being a bit more specific about the goals of adding tracing before jumping in with both feet :). Maybe I can pitch in some use cases to help with this.

My team is prototyping a distributed engine on top of Ballista. Since Ballista doesn't yet have a great UI, we started to look at adding some end-to-end tracing (think external client -> Flight SQL query -> scheduler -> enqueue job -> executors -> DF engine). As we realised there is currently no tracing in either project, we quickly found this issue.

I think the tracing crate, together with some of the community subscribers (e.g. the OpenTelemetry stack), can solve this problem, even though there are a number of challenges:
To that end, I'd like to understand whether reimplementing metrics on top of tracing is really what this issue is about, or just an attempt at consolidating some of the timing / metrics bookkeeping. Based on my experience with other systems (mostly on the JVM, building and tuning Spark / Kafka deployments), tracing and metrics work really well together, but they are rarely conflated. My suggestion would be to decouple adding tracing (as a tool for people who are monitoring / optimizing engines built on top of DF) from the core metrics refactoring.

Lastly, if there is not a lot of work started here: I've already started to play around with some of the suggestions on this thread (add instrument to execute, instrument streams and async blocks, etc.) and I'd be interested in contributing to this track, especially some of the lessons learned around tracing async code and streams.
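To make the "instrument streams and async blocks" idea concrete, here is one way it could look with the tracing crate: `#[tracing::instrument]` on an async entry point, plus a small wrapper stream that enters a span on every poll so per-batch work is attributed to the right operator. This is a sketch of the approach being discussed, not existing DataFusion code; `SpannedStream`, `wrap_stream`, and `execute_partition` are hypothetical names.

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

use futures::Stream;
use tracing::{info_span, Span};

/// Wraps any stream so each `poll_next` call runs inside `span`.
/// Hypothetical helper, not an existing DataFusion or tracing API.
struct SpannedStream<S> {
    inner: S,
    span: Span,
}

impl<S: Stream + Unpin> Stream for SpannedStream<S> {
    type Item = S::Item;

    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        let this = self.get_mut();
        // Entering the span attributes the time spent producing the next
        // item (e.g. a RecordBatch) to this operator's span.
        let _guard = this.span.enter();
        Pin::new(&mut this.inner).poll_next(cx)
    }
}

/// Wrap an operator's output stream in a span carrying its name/partition.
fn wrap_stream<S: Stream + Unpin>(
    operator_name: &str,
    partition: usize,
    inner: S,
) -> impl Stream<Item = S::Item> {
    let span = info_span!("operator_stream", operator = operator_name, partition);
    SpannedStream { inner, span }
}

/// Instrumenting an async entry point; span name and fields are illustrative.
#[tracing::instrument(skip_all, fields(partition = partition))]
async fn execute_partition(partition: usize) {
    // build the operator's stream here and drive it to completion
}
```

With an opentelemetry-compatible subscriber installed, spans like these would show up in the same end-to-end trace as the scheduler/executor spans mentioned above.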
Since the internal metrics in DataFusion -- aka https://docs.rs/datafusion/latest/datafusion/physical_plan/metrics/index.html -- already have start/stop timestamps on them, we have found it relatively straightforward to convert them to "tracing" spans (the link to do so is above). I am not clear what additional benefit a more direct tracing integration in DataFusion would provide, but I may be missing something.
Is your feature request related to a problem or challenge?
"Tracing" is a visualization technique for understanding how potentially concurrent operations happen, and is common in distributed systems.
You can also visualize DataFusion executions using traces. Here is an example trace we have at InfluxData that is integrated into the rest of our system and visualized using https://www.jaegertracing.io/
This visualization shows when each operator started and stopped (and the operators are annotated with how much CPU time is spent, etc). These spans are integrated into the overall trace of a request through our cloud service, which allows us to understand where a request's time is spent, both across services as well as within our DataFusion based engine.
For more background on tracing, this blog seems to give a reasonable overview: https://signoz.io/blog/distributed-tracing-span/
Describe the solution you'd like
I would like to make it easy for people to add DataFusion ExecutionPlan level tracing to their systems as well.
Given the various libraries for generating traces, I don't think picking any particular one to build into DataFusion is a good idea. However, adding some way to walk the ExecutionPlan metrics and emit information that can be turned into traces would, I think, be very helpful.
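As a sketch of what "walk the ExecutionPlan metrics" could look like using APIs that already exist (`accept`, `ExecutionPlanVisitor`, `ExecutionPlan::metrics`), here is a visitor that prints per-operator metrics after execution. A real integration would hand this data to a span builder for its chosen tracing backend instead of printing; `TraceVisitor` is a hypothetical name.

```rust
use datafusion::error::DataFusionError;
use datafusion::physical_plan::{accept, displayable, ExecutionPlan, ExecutionPlanVisitor};

/// Walks an executed plan and prints the per-operator data a tracing
/// exporter would need.
struct TraceVisitor {
    depth: usize,
}

impl ExecutionPlanVisitor for TraceVisitor {
    type Error = DataFusionError;

    fn pre_visit(&mut self, plan: &dyn ExecutionPlan) -> Result<bool, Self::Error> {
        let name = displayable(plan).one_line().to_string();
        if let Some(metrics) = plan.metrics() {
            println!(
                "{:indent$}{name}: rows={:?} elapsed_compute_ns={:?}",
                "",
                metrics.output_rows(),
                metrics.elapsed_compute(),
                indent = self.depth * 2,
            );
        }
        self.depth += 1;
        Ok(true) // keep descending into children
    }

    fn post_visit(&mut self, _plan: &dyn ExecutionPlan) -> Result<bool, Self::Error> {
        self.depth -= 1;
        Ok(true)
    }
}

// Usage, once the query has run to completion so metrics are populated:
// accept(physical_plan.as_ref(), &mut TraceVisitor { depth: 0 })?;
```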
This came up twice recently, so I wanted to get it filed as a ticket.
Describe alternatives you've considered
No response
Additional context
@simonvandel noted in Discord
It also came up in Slack: https://the-asf.slack.com/archives/C04RJ0C85UZ/p1709051125059619