Update lib and tracing module docs with examples (#563)
jtescher authored Jun 6, 2021
1 parent 1ca62d3 commit 99e51c1
Showing 14 changed files with 270 additions and 153 deletions.
12 changes: 6 additions & 6 deletions README.md
@@ -34,17 +34,18 @@ observability tools.
## Getting Started

```rust
use opentelemetry::{sdk::export::trace::stdout, trace::Tracer};
use opentelemetry::{global, sdk::export::trace::stdout, trace::Tracer};

fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
// Create a new instrumentation pipeline
fn main() {
// Create a new trace pipeline that prints to stdout
let tracer = stdout::new_pipeline().install_simple();

tracer.in_span("doing_work", |cx| {
// Traced app logic here...
});

Ok(())
// Shutdown trace pipeline
global::shutdown_tracer_provider();
}
```

@@ -76,7 +77,7 @@ In particular, the following crates are likely to be of interest:
otel conventions.
- [`opentelemetry-stackdriver`] provides an exporter for Google's [Cloud Trace]
(which used to be called StackDriver).

Additionally, there are also several third-party crates which are not
maintained by the `opentelemetry` project. These include:

@@ -89,7 +90,6 @@ maintained by the `opentelemetry` project. These include:
- [`opentelemetry-tide`] provides integration for the [`Tide`] web server and
ecosystem.


If you're the maintainer of an `opentelemetry` ecosystem crate not listed
above, please let us know! We'd love to add your project to the list!

2 changes: 1 addition & 1 deletion opentelemetry-otlp/src/lib.rs
@@ -394,7 +394,7 @@ impl TonicPipelineBuilder {
Ok(build_simple_with_exporter(exporter, self.trace_config))
}

/// Install a trace exporter using [tonic] as grpc lazer and a batch span processor using the
/// Install a trace exporter using [tonic] as grpc layer and a batch span processor using the
/// specified runtime.
///
/// Returns a [`Tracer`] with the name `opentelemetry-otlp` and current crate version.
2 changes: 0 additions & 2 deletions opentelemetry-semantic-conventions/src/lib.rs
@@ -1,5 +1,3 @@
//! # OpenTelemetry Semantic Conventions
//!
//! OpenTelemetry semantic conventions are agreed standardized naming patterns
//! for OpenTelemetry things. This crate aims to be the centralized place to
//! interact with these conventions.
9 changes: 5 additions & 4 deletions opentelemetry/README.md
@@ -33,17 +33,18 @@ observability tools.
## Getting Started

```rust
use opentelemetry::{sdk::export::trace::stdout, trace::Tracer};
use opentelemetry::{global, sdk::export::trace::stdout, trace::Tracer};

fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
// Create a new instrumentation pipeline
fn main() {
// Create a new trace pipeline that prints to stdout
let tracer = stdout::new_pipeline().install_simple();

tracer.in_span("doing_work", |cx| {
// Traced app logic here...
});

Ok(())
// Shutdown trace pipeline
global::shutdown_tracer_provider();
}
```

11 changes: 5 additions & 6 deletions opentelemetry/src/global/mod.rs
@@ -64,7 +64,7 @@
//! # }
//! ```
//!
//! [installing a trace pipeline]: crate::sdk::export::trace::stdout::PipelineBuilder::install
//! [installing a trace pipeline]: crate::sdk::export::trace::stdout::PipelineBuilder::install_simple
//! [`TracerProvider`]: crate::trace::TracerProvider
//! [`Span`]: crate::trace::Span
//!
@@ -80,7 +80,7 @@
//!
//! ### Usage in Applications
//!
//! Applications configure their tracer either by [installing a metrics pipeline],
//! Applications configure their meter either by [installing a metrics pipeline],
//! or calling [`set_meter_provider`].
//!
//! ```
@@ -99,8 +99,8 @@
//!
//! fn do_something_instrumented() {
//! // Then you can get a named tracer instance anywhere in your codebase.
//! let tracer = global::meter("my-component");
//! let counter = tracer.u64_counter("my_counter").init();
//! let meter = global::meter("my-component");
//! let counter = meter.u64_counter("my_counter").init();
//!
//! // record metrics
//! counter.add(1, &[KeyValue::new("mykey", "myvalue")]);
@@ -117,7 +117,6 @@
//! ```
//! # #[cfg(feature="metrics")]
//! # {
//! use opentelemetry::trace::Tracer;
//! use opentelemetry::{global, KeyValue};
//!
//! pub fn my_traced_library_function() {
@@ -132,7 +131,7 @@
//! # }
//! ```
//!
//! [installing a metrics pipeline]: crate::sdk::export::metrics::stdout::StdoutExporterBuilder::try_init
//! [installing a metrics pipeline]: crate::sdk::export::metrics::stdout::StdoutExporterBuilder::init
//! [`MeterProvider`]: crate::metrics::MeterProvider
//! [`set_meter_provider`]: crate::global::set_meter_provider
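The body of `my_traced_library_function` in the hunks above is collapsed by the diff view. A minimal, purely illustrative sketch of that pattern — built only from calls that appear elsewhere in these docs (`global::meter`, `u64_counter`, `add`), not from the actual collapsed lines — could read:

```rust
use opentelemetry::{global, KeyValue};

pub fn my_traced_library_function() {
    // Libraries read the globally installed provider rather than
    // configuring their own, leaving that choice to the application.
    let meter = global::meter("my-component");
    let counter = meter.u64_counter("my_counter").init();

    // record a measurement with an example attribute
    counter.add(1, &[KeyValue::new("mykey", "myvalue")]);
}
```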
14 changes: 10 additions & 4 deletions opentelemetry/src/global/trace.rs
@@ -328,11 +328,11 @@ pub fn force_flush_tracer_provider() {
// threads Use cargo test -- --ignored --test-threads=1 to run those tests.
mod tests {
use super::*;
#[cfg(any(feature = "rt-tokio", feature = "rt-tokio-current-thread"))]
use crate::runtime;
#[cfg(any(feature = "rt-tokio", feature = "rt-tokio-current-thread"))]
use crate::sdk::trace::TraceRuntime;
use crate::{
runtime,
trace::{NoopTracer, Tracer},
};
use crate::trace::{NoopTracer, Tracer};
use std::{
fmt::Debug,
io::Write,
@@ -480,6 +480,7 @@ mod tests {
assert!(second_resp.contains("thread 2"));
}

#[cfg(any(feature = "rt-tokio", feature = "rt-tokio-current-thread"))]
fn build_batch_tracer_provider<R: TraceRuntime>(
assert_writer: AssertWriter,
runtime: R,
@@ -501,6 +502,7 @@
.build()
}

#[cfg(any(feature = "rt-tokio", feature = "rt-tokio-current-thread"))]
async fn test_set_provider_in_tokio<R: TraceRuntime>(runtime: R) -> AssertWriter {
let buffer = AssertWriter::new();
let _ = set_tracer_provider(build_batch_tracer_provider(buffer.clone(), runtime));
@@ -529,6 +531,7 @@
// Test if the multiple thread tokio runtime could exit successfully when not force flushing spans
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
#[ignore = "requires --test-threads=1"]
#[cfg(feature = "rt-tokio")]
async fn test_set_provider_multiple_thread_tokio() {
let assert_writer = test_set_provider_in_tokio(runtime::Tokio).await;
assert_eq!(assert_writer.len(), 0);
@@ -537,6 +540,7 @@
// Test if the multiple thread tokio runtime could exit successfully when force flushing spans
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
#[ignore = "requires --test-threads=1"]
#[cfg(feature = "rt-tokio")]
async fn test_set_provider_multiple_thread_tokio_shutdown() {
let assert_writer = test_set_provider_in_tokio(runtime::Tokio).await;
shutdown_tracer_provider();
@@ -562,6 +566,7 @@
// Test if the single thread tokio runtime could exit successfully when not force flushing spans
#[tokio::test]
#[ignore = "requires --test-threads=1"]
#[cfg(feature = "rt-tokio-current-thread")]
async fn test_set_provider_single_thread_tokio() {
let assert_writer = test_set_provider_in_tokio(runtime::TokioCurrentThread).await;
assert_eq!(assert_writer.len(), 0)
@@ -570,6 +575,7 @@
// Test if the single thread tokio runtime could exit successfully when force flushing spans.
#[tokio::test]
#[ignore = "requires --test-threads=1"]
#[cfg(feature = "rt-tokio-current-thread")]
async fn test_set_provider_single_thread_tokio_shutdown() {
let assert_writer = test_set_provider_in_tokio(runtime::TokioCurrentThread).await;
shutdown_tracer_provider();
111 changes: 76 additions & 35 deletions opentelemetry/src/lib.rs
@@ -1,5 +1,3 @@
//! The Rust [OpenTelemetry](https://opentelemetry.io/) implementation.
//!
//! OpenTelemetry provides a single set of APIs, libraries, agents, and collector
//! services to capture distributed traces and metrics from your application. You
//! can analyze them using [Prometheus], [Jaeger], and other observability tools.
@@ -10,30 +8,95 @@
//! [Jaeger]: https://www.jaegertracing.io
//! [msrv]: #supported-rust-versions
//!
//! ## Getting Started
//! # Getting Started
//!
//! ```no_run
//! # #[cfg(feature = "trace")]
//! # {
//! use opentelemetry::{sdk::export::trace::stdout, trace::Tracer, global};
//! use opentelemetry::{global, sdk::export::trace::stdout, trace::Tracer};
//!
//! fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
//! // Create a new instrumentation pipeline
//! fn main() {
//! // Create a new trace pipeline that prints to stdout
//! let tracer = stdout::new_pipeline().install_simple();
//!
//! tracer.in_span("doing_work", |cx| {
//! // Traced app logic here...
//! });
//!
//! global::shutdown_tracer_provider(); // sending remaining spans
//!
//! Ok(())
//! }
//! // Shutdown trace pipeline
//! global::shutdown_tracer_provider();
//! }
//! # }
//! ```
//!
//! See the [examples] directory for different integration patterns.
//!
//! [examples]: https://github.com/open-telemetry/opentelemetry-rust/tree/main/examples
//!
//! # Traces
//!
//! The [`trace`] module includes types for tracking the progression of a single
//! request while it is handled by services that make up an application. A trace
//! is a tree of [`Span`]s which are objects that represent the work being done
//! by individual services or components involved in a request as it flows
//! through a system.
//!
//! ### Creating and exporting spans
//!
//! ```
//! # #[cfg(feature = "trace")]
//! # {
//! use opentelemetry::{global, trace::{Span, Tracer}, KeyValue};
//!
//! // get a tracer from a provider
//! let tracer = global::tracer("my_service");
//!
//! // start a new span
//! let mut span = tracer.start("my_span");
//!
//! // set some attributes
//! span.set_attribute(KeyValue::new("http.client_ip", "83.164.160.102"));
//!
//! // perform some more work...
//!
//! // end or drop the span to export
//! span.end();
//! # }
//! ```
//!
//! See the [`trace`] module docs for more information on creating and managing
//! spans.
//!
//! [`Span`]: crate::trace::Span
//!
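The paragraph above describes a trace as a tree of spans. As an illustrative sketch (not taken verbatim from the committed docs), the same parent/child structure can be produced with the `in_span` API from the Getting Started example, since each closure runs with its span active in the current context:

```rust
use opentelemetry::{global, sdk::export::trace::stdout, trace::Tracer};

fn main() {
    let tracer = stdout::new_pipeline().install_simple();

    // "handle_request" becomes the parent: it is the active span in the
    // current context while the nested closures run.
    tracer.in_span("handle_request", |_cx| {
        tracer.in_span("load_from_db", |_cx| {
            // child span: traced database work here...
        });
        tracer.in_span("render_response", |_cx| {
            // sibling child span: traced rendering work here...
        });
    });

    global::shutdown_tracer_provider();
}
```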
//! # Metrics
//!
//! Note: the metrics specification is **still in progress** and **subject to major
//! changes**.
//!
//! The [`metrics`] module includes types for recording measurements about a
//! service at runtime.
//!
//! ### Creating instruments and recording measurements
//!
//! ```
//! # #[cfg(feature = "metrics")]
//! # {
//! use opentelemetry::{global, KeyValue};
//!
//! See the [examples](https://github.com/open-telemetry/opentelemetry-rust/tree/main/examples)
//! directory for different integration patterns.
//! // get a meter from a provider
//! let meter = global::meter("my_service");
//!
//! // create an instrument
//! let counter = meter.u64_counter("my_counter").init();
//!
//! // record a measurement
//! counter.add(1, &[KeyValue::new("http.client_ip", "83.164.160.102")]);
//! # }
//! ```
//!
//! See the [`metrics`] module docs for more information on creating and
//! managing instruments.
//!
//! ## Crate Feature Flags
//!
@@ -54,28 +117,6 @@
//! [async-std]: https://crates.io/crates/async-std
//! [serde]: https://crates.io/crates/serde
//!
//! ## Working with runtimes
//!
//! Opentelemetry API & SDK supports different runtimes. When working with async runtime, we recommend
//! to use batch span processors where the spans will be sent in batch, reducing the number of requests
//! and resource needed.
//!
//! Batch span processors need to run a background task to collect and send spans. Different runtimes
//! need different ways to handle the background task. Using a `Runtime` that's not compatible with the
//! underlying runtime can cause deadlock.
//!
//! ### Tokio
//!
//! Tokio currently offers two different schedulers. One is `current_thread_scheduler`, the other is
//! `multiple_thread_scheduler`. Both of them default to use batch span processors to install span exporters.
//!
//! But for `current_thread_scheduler`. It can cause the program to hang forever if we schedule the backgroud
//! task with other tasks in the same runtime. Thus, users should enable `rt-tokio-current-thread` feature
//! to ask the background task be scheduled on a different runtime on a different thread.
//!
//! Note that by default `#[tokio::test]` uses `current_thread_scheduler` and should use `rt-tokio-current-thread`
//! feature.
//!
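The runtime notes removed above concern batch span processors, which hand export work to a background task and therefore need a compatible `TraceRuntime`. A minimal sketch of the contrasting simple-processor setup inside a Tokio application (this assumes a `tokio` dependency and does not demonstrate batch installation, which goes through an exporter's runtime-aware install method):

```rust
use opentelemetry::{global, sdk::export::trace::stdout, trace::Tracer};

// The simple processor exports each span synchronously as it ends, so it
// does not care which Tokio scheduler is running. A batch processor would
// instead be installed with a runtime such as `opentelemetry::runtime::Tokio`
// (behind the `rt-tokio` feature) or `TokioCurrentThread`.
#[tokio::main]
async fn main() {
    let tracer = stdout::new_pipeline().install_simple();

    tracer.in_span("async_request", |_cx| {
        // Traced application logic here...
    });

    global::shutdown_tracer_provider();
}
```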
//! ## Related Crates
//!
//! In addition to `opentelemetry`, the [`open-telemetry/opentelemetry-rust`]
@@ -180,7 +221,7 @@ pub mod global;
pub mod sdk;

#[cfg(feature = "testing")]
#[allow(missing_docs)]
#[doc(hidden)]
pub mod testing;

pub mod baggage;
2 changes: 1 addition & 1 deletion opentelemetry/src/sdk/metrics/aggregators/ddsketch.rs
@@ -7,7 +7,7 @@
//!
//! DDSKetch, on the contrary, employs relative error rate that could work well on long tail dataset.
//!
//! The detail of this algorithm can be found in https://arxiv.org/pdf/1908.10693
//! The detail of this algorithm can be found in <https://arxiv.org/pdf/1908.10693>
use std::{
any::Any,
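As context for the relative-error remark in the doc comment above, the guarantee described in the linked DDSketch paper is that a returned quantile estimate x̂_q stays within a configured relative accuracy α of the true quantile x_q:

```latex
\lvert \hat{x}_q - x_q \rvert \le \alpha \, x_q
```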
4 changes: 2 additions & 2 deletions opentelemetry/src/sdk/trace/runtime.rs
@@ -2,7 +2,7 @@
//! Trace runtime is an extension to [`Runtime`]. Currently it provides a channel that used
//! by [`BatchSpanProcessor`].
//!
//! [`BatchSpanProcessor`]: crate::sdk::trace::span_processor::BatchSpanProcessor
//! [`BatchSpanProcessor`]: crate::sdk::trace::BatchSpanProcessor
//! [`Runtime`]: crate::runtime::Runtime
#[cfg(feature = "rt-async-std")]
use crate::runtime::AsyncStd;
@@ -34,7 +34,7 @@ const CHANNEL_CLOSED_ERROR: &str =
/// Trace runtime is an extension to [`Runtime`]. Currently it provides a channel that used
/// by [`BatchSpanProcessor`].
///
/// [`BatchSpanProcessor`]: crate::sdk::trace::span_processor::BatchSpanProcessor
/// [`BatchSpanProcessor`]: crate::sdk::trace::BatchSpanProcessor
/// [`Runtime`]: crate::runtime::Runtime
pub trait TraceRuntime: Runtime {
/// A future stream to receive the batch messages from channels.