
Support "Tracing" / Spans #9415

Open
alamb opened this issue Mar 1, 2024 · 8 comments
Labels
enhancement New feature or request

Comments

@alamb
Contributor

alamb commented Mar 1, 2024

Is your feature request related to a problem or challenge?

"Tracing" is a visualization technique for understanding how potentially concurrent operations happen, and is common in distributed systems.

You can also visualize DataFusion executions using traces. Here is an example trace we have at InfluxData that is integrated into the rest of our system and visualized using https://www.jaegertracing.io/

[Screenshot: Jaeger trace of a DataFusion query at InfluxData]

This visualization shows when each operator started and stopped (and the operators are annotated with how much CPU time is spent, etc). These spans are integrated into the overall trace of a request through our cloud service, which allows us to understand where a request's time is spent, both across services as well as within our DataFusion based engine.

For more background on tracing, this blog seems to give a reasonable overview: https://signoz.io/blog/distributed-tracing-span/

Describe the solution you'd like

I would like to make it easy for people to add DataFusion ExecutionPlan level tracing to their systems as well.

Given the variety of libraries for generating traces, I don't think picking any particular one to build into DataFusion is a good idea. However, I think adding some way to walk the ExecutionPlan metrics and emit information that can be turned into traces would be very helpful (see the sketch below).
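A minimal sketch of what that walk could look like, assuming a recent DataFusion API (ExecutionPlan::metrics() returning an Option<MetricsSet>, children() exposing the operator tree); the SpanEmitter trait is hypothetical, standing in for whatever span library a given system uses:

```rust
use std::sync::Arc;
use datafusion::physical_plan::{displayable, metrics::MetricsSet, ExecutionPlan};

// Hypothetical trait: each system implements this for its own tracing
// library (Jaeger, `tracing`, OpenTelemetry, ...).
trait SpanEmitter {
    fn emit(&mut self, operator: &str, depth: usize, metrics: Option<MetricsSet>);
}

// Recursively visit each operator in the plan and hand its metrics to
// the emitter, which can turn them into spans.
fn emit_plan_spans(
    plan: &Arc<dyn ExecutionPlan>,
    depth: usize,
    emitter: &mut dyn SpanEmitter,
) {
    let name = displayable(plan.as_ref()).one_line().to_string();
    emitter.emit(&name, depth, plan.metrics());
    // Assumes the newer API where children() returns Vec<&Arc<dyn ExecutionPlan>>
    for child in plan.children() {
        emit_plan_spans(child, depth + 1, emitter);
    }
}
```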

This came up twice recently, so I wanted to get it filed as a ticket.

Describe alternatives you've considered

No response

Additional context

@simonvandel noted in Discord

I have been playing a bit with https://github.com/wolfpld/tracy and https://docs.rs/tracing-tracy/latest/tracing_tracy/ and thought it would be nice to have some visibility into ExecutionPlans being executed.

It also came up in slack: https://the-asf.slack.com/archives/C04RJ0C85UZ/p1709051125059619

But I want to see traces from which I can calculate how much time was spent executing the queries and how much time was spent on data transfer from the target storage system. It would be good if you could guide me on how to get these metrics.

@alamb alamb added the enhancement New feature or request label Mar 1, 2024
@alamb
Contributor Author

alamb commented Mar 1, 2024

The way we get spans in InfluxDB is to use the metrics, which have the start/stop times as well as counters of the rows produced, the times execution started and stopped, and various other things.

The DataFusion metrics --> Jaeger span exporter for InfluxDB 3.0 can be found here: https://github.com/influxdata/influxdb/blob/8fec1d636e82720389d06355d93245a06a8d90ad/iox_query/src/exec/query_tracing.rs#L93-L112

It uses our own span generation system, as the Rust ecosystem didn't have a standard library that we could find when we originally wrote it.

One possibility might be to refactor the code out of InfluxDB to make it generic, so it doesn't depend on Span directly but is instead implemented in terms of some trait (which we could then impl for Span).
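For illustration, a hypothetical shape for such a trait (all names made up; InfluxDB's Span would get one impl, and other systems could provide their own):

```rust
use std::time::SystemTime;

// Hypothetical abstraction the metrics-to-span conversion could be
// written against, instead of InfluxDB's concrete `Span` type.
trait SpanRecorder {
    // Open a child span for an operator, with its wall-clock start time.
    fn start_child(&mut self, name: &str, start: SystemTime) -> Box<dyn SpanRecorder>;
    // Attach a named metric value (row counts, CPU time, etc.).
    fn record_metric(&mut self, name: &str, value: u64);
    // Close the span at the operator's wall-clock end time.
    fn end(self: Box<Self>, end: SystemTime);
}
```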

@erratic-pattern
Contributor

erratic-pattern commented Apr 5, 2024

The tracing crate and related ecosystem would be a good choice for this. It is full-featured with an easy-to-use and unobtrusive API, and there is a lot of community support for tooling integration and export formats.

I think we could get most of the way there by implementing the following changes:

  • add the #[instrument] macro to all execute calls (see the sketch after this list)
  • call the with_current_subscriber method from the tracing_futures crate on all relevant streams and futures (replacing ObservedStream)
  • call the instrument method from the tracing_futures crate on all relevant streams and futures (EDIT: alternatively we might be able to attach the #[instrument] macro to Stream poll_next functions, but this needs testing)
  • record when rows are added via a tracing event, e.g. info!(num_rows = 1, "row added"); the tracing Subscriber can aggregate these events to obtain the total number of rows added
  • similar to num_rows, other metrics (spill count, memory used, etc.) can be calculated from trace events aggregated to produce the metric
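A rough sketch of the first and fourth bullets using the tracing crate's actual API (#[instrument], info!, and the Instrument combinator, which re-enters a span on every poll); run_partition and record_batch_rows are made-up stand-ins for operator code:

```rust
use tracing::{info, info_span, instrument, Instrument};

// Hypothetical stand-in for the point in operator code where a batch is
// produced; the event carries a field a Subscriber can aggregate.
fn record_batch_rows(num_rows: usize) {
    info!(num_rows, "batch produced");
}

// `#[instrument]` opens a span covering this call; wrapping the inner
// future with `.instrument(...)` re-enters a child span on each poll,
// roughly what replacing `ObservedStream` would need to preserve.
#[instrument]
async fn run_partition(partition: usize) {
    async {
        record_batch_rows(1024);
    }
    .instrument(info_span!("poll_stream", partition))
    .await;
}
```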

With these changes, all the data of the existing MetricsSet code can be captured:

  • start time (wall clock) corresponds to the first "enter" event of the Span
  • end time (wall clock) corresponds to the "close" event of the Span
  • compute time can be calculated by aggregating all the intervals between "enter" and "exit" events on a Span (Spans can be entered and exited multiple times, as is the case with polling streams). The tracing_timing crate provides a Subscriber to calculate this information.
  • other metrics can be calculated by aggregating all of the events containing field name-value pairs that the metric is interested in

For compatibility with code that reads metrics from the existing MetricsSet, we could also implement a Subscriber and/or Layer that calculates these metrics and produces a MetricsSet (see the sketch below). However, a better approach for most users would be to simply install one of the many tracing ecosystem crates, which provide subscribers for a number of tools and data formats.
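A minimal sketch of that Subscriber/Layer side: a tracing_subscriber Layer that sums num_rows event fields into a single counter. A real MetricsSet-producing implementation would do this kind of aggregation per span/operator:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use tracing::field::{Field, Visit};
use tracing::{Event, Subscriber};
use tracing_subscriber::layer::{Context, Layer};

// Minimal sketch: accumulates every `num_rows` event field into one
// total, standing in for a MetricsSet-producing subscriber.
#[derive(Default)]
pub struct RowCountLayer {
    pub total_rows: AtomicU64,
}

impl<S: Subscriber> Layer<S> for RowCountLayer {
    fn on_event(&self, event: &Event<'_>, _ctx: Context<'_, S>) {
        struct Adder<'a>(&'a AtomicU64);
        impl Visit for Adder<'_> {
            fn record_u64(&mut self, field: &Field, value: u64) {
                if field.name() == "num_rows" {
                    self.0.fetch_add(value, Ordering::Relaxed);
                }
            }
            // Required catch-all; non-u64 fields are ignored here.
            fn record_debug(&mut self, _: &Field, _: &dyn std::fmt::Debug) {}
        }
        event.record(&mut Adder(&self.total_rows));
    }
}
```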

Some benefits compared to the current metrics implementation:

  • event-based tracing allows for more detailed information to be captured than the current metrics-based approach
  • tracing of individual streams/futures provides more information for debugging async code
  • less API surface area since we don't need to include metrics in the ExecutionPlan trait
  • overall, less code is needed to implement tracing, as a lot of the boilerplate is handled automatically via tracing macros and implicit thread-local state; the tracing API is very easy to use from the event-producing side
  • access to a wide variety of ecosystem crates for exporting traces
  • doubles as a logging framework. DataFusion users can register a FmtSubscriber to get formatted logs (which we could also use in the DataFusion CLI), and there are many crates with additional logging subscribers for standardized logging formats and third-party logging tools.
  • tests can implement custom Subscribers to validate low-level events
  • tracing can be disabled when the overhead is not wanted

Downsides:

  • tracing event fields are free-form, so there are fewer static typing guarantees around the correctness of event fields; things like typos will not be caught as easily. this could potentially be remedied by writing custom macros that enforce and/or automatically include a standardized set of fields, or accept well-formed structs instead of free-form key-value pairs (see the macro sketch after this list)
  • tracing API is a bit less "explicit" than the current metrics code, because of the implicit thread-local state. if you don't include the relevant macros and function calls, the compiler won't complain about it. this can make it easier for programmers to forget about tracing when developing custom ExecutionPlans, and makes it more difficult to enforce consistent field name conventions and practices. again, I think this could be remedied to some degree by custom macros (and some good documentation)
  • a bit more complexity on the subscriber-side to aggregate events within custom Subscriber implementations
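To illustrate the custom-macro remedy mentioned in the first downside above, a tiny hypothetical wrapper that forces a standardized operator field so that at least that name can't be misspelled:

```rust
// Hypothetical macro enforcing a standardized `operator` field on every
// DataFusion trace event; additional fields pass through to `info!`.
macro_rules! df_event {
    ($operator:expr $(, $field:ident = $value:expr)* $(,)?) => {
        tracing::info!(operator = $operator $(, $field = $value)*)
    };
}

// Usage: the field names after `operator` are still free-form.
fn example() {
    df_event!("HashAggregateExec", num_rows = 512);
}
```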

@alamb
Contributor Author

alamb commented Apr 5, 2024

Another downside of tracing is that not all systems use that ecosystem, so using tracing might require some extra logic compared to metrics.

But otherwise the high-level idea seems plausible, if a bunch of work. Maybe a quick POC could be done to see what it would look like in practice / how much work and change is required.

@erratic-pattern
Contributor

Another downside of tracing is that not all systems use that ecosystem, so using tracing might require some extra logic compared to metrics.

I think this is where custom Subscribers could be useful for generating DataFusion-native reporting and metrics. Having a Subscriber that generates the existing MetricsSet should be possible. On the user side, there would be no need to include tracing-related dependencies if DataFusion is configured with built-in Subscribers.

But otherwise the high-level idea seems plausible, if a bunch of work. Maybe a quick POC could be done to see what it would look like in practice / how much work and change is required.

Agreed. What would be a good scope for a POC to be both quick to implement and broad enough to cover all related cases? Maybe a subset of operators/streams that are used by a specific, but non-trivial, query.

The existing metrics code can be left as-is until we reach a point in the implementation where we're confident that tracing can replace those functionalities. Also, I think a lot of it will continue to be used for generating DataFusion-native metrics.

@erratic-pattern
Contributor

After spending some time with the optimizer, I think it would be a good candidate to PoC the tracing crate.

@erratic-pattern
Contributor

create_physical_plan would be a good follow-up, to PoC instrumentation of async code.

@aditanase
Contributor

Hi all, thanks for adding this and investigating the tracing crate. I'd like to suggest being a bit more specific about the goals of adding tracing, before jumping in with both feet :). Maybe I can pitch in some use cases to help with this.

My team is prototyping a distributed engine on top of ballista. Since ballista doesn't yet have a great UI, we started to look at adding some end-to-end tracing (think external client -> flight SQL query -> scheduler -> enqueue job -> executors -> DF engine). As we realised there is currently no tracing in either project, we quickly found this issue.

I think the tracing crate, together with some of the community subscribers (e.g. opentelemetry stack) can solve this problem, even though there are a number of challenges:

  • correctly instrumenting streams and async blocks
  • using or_current consistently to make sure that changing the log level between trace, debug and info doesn't throw away good information (see the sketch after this list)
  • tracing across service boundaries (for ballista especially, but also for an app embedding datafusion)
  • tracing pub/sub patterns (in ballista we have a job queue through which scheduler and workers are decoupled)
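For the or_current point, a small sketch (transfer_batches is a made-up name) of how it keeps a future attached to the caller's span when a fine-grained span is filtered out:

```rust
use tracing::{trace_span, Instrument};

// Sketch: if the `trace`-level span is disabled by the level filter,
// `or_current()` falls back to the caller's span instead of detaching
// the future from the trace entirely.
async fn transfer_batches() {
    let span = trace_span!("data_transfer");
    async {
        // ... stream batches from the storage system ...
    }
    .instrument(span.or_current())
    .await;
}
```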

To that end, I'd like to understand if reimplementing metrics on top of tracing is really what this issue is about, or just an attempt at consolidating some of the timing / metrics bookkeeping?

Based on my experience with other systems (mostly on the JVM, building and tuning Spark / Kafka deployments), tracing and metrics work really well together, but they are rarely conflated.
This goes especially for core internal metrics that are used by the query engine (e.g. explain analyse, cost-based optimizations, building a nice UI for the scheduler), as opposed to tracing, which is typically sampled (tracing everything at a certain level quickly becomes expensive), carries a lot of user input (extra context combined with app metrics), and is configurable through the log level.

My suggestion would be to decouple adding tracing (as a tool for people that are monitoring / optimizing engines built on top of DF) from the core metrics refactoring.
For metrics specifically, there are other crates with more targeted concepts (counters, gauges, histograms) that have some integration with tracing, in order to propagate the current span context as metric labels.

Lastly, if there is not a lot of work already started here: I've been playing around with some of the suggestions in this thread (adding instrument to execute, instrumenting streams and async blocks, etc.) and I'd be interested in contributing to this track, especially some of the lessons learned around tracing async code and streams.

@alamb
Contributor Author

alamb commented May 29, 2024

Since the internal metrics in DataFusion -- aka https://docs.rs/datafusion/latest/datafusion/physical_plan/metrics/index.html -- already have start/stop timestamps on them, we have found it relatively straightforward to convert them to "tracing" spans -- the link showing how to do so is above:

#9415 (comment)

I am not clear on what additional benefit a more direct tracing integration in DataFusion would provide, but I may be missing something.
