From c60aa693be7814571fa332039d77dcf4cc7585ef Mon Sep 17 00:00:00 2001
From: snyder114 <31932630+snyder114@users.noreply.github.com>
Date: Thu, 19 Nov 2020 08:55:21 -0800
Subject: [PATCH] Edits to getting started docs (#1368)
---
docs/getting-started.rst | 120 +++++++++++++++++++--------------------
1 file changed, 59 insertions(+), 61 deletions(-)
diff --git a/docs/getting-started.rst b/docs/getting-started.rst
index 8129dfeac60..7e0ab3c312a 100644
--- a/docs/getting-started.rst
+++ b/docs/getting-started.rst
@@ -1,11 +1,11 @@
Getting Started with OpenTelemetry Python
=========================================
-This guide will walk you through instrumenting a Python application with ``opentelemetry-python``.
+This guide walks you through instrumenting a Python application with ``opentelemetry-python``.
-For more elaborate examples, see `examples`.
+For more elaborate examples, see `examples `_.
-Hello world: emitting a trace to your console
+Hello world: emit a trace to your console
---------------------------------------------
To get started, install both the opentelemetry API and SDK:
@@ -18,21 +18,21 @@ To get started, install both the opentelemetry API and SDK:
The API package provides the interfaces required by the application owner, as well
as some helper logic to load implementations.
-The SDK provides an implementation of those interfaces, designed to be generic and extensible enough
-that in many situations, the SDK will be sufficient.
+The SDK provides an implementation of those interfaces. The implementation is designed to be generic and extensible enough
+that in many situations, the SDK is sufficient.
-Once installed, we can now utilize the packages to emit spans from your application. A span
+Once installed, you can use the packages to emit spans from your application. A span
represents an action within your application that you want to instrument, such as an HTTP request
-or a database call. Once instrumented, the application owner can extract helpful information such as
-how long the action took, or add arbitrary attributes to the span that may provide more insight for debugging.
+or a database call. Once instrumented, you can extract helpful information such as
+how long the action took. You can also add arbitrary attributes to the span that provide more insight for debugging.
-Here's an example of a script that emits a trace containing three named spans: "foo", "bar", and "baz":
+The following example script emits a trace containing three named spans: "foo", "bar", and "baz":
.. literalinclude:: getting_started/tracing_example.py
:language: python
:lines: 15-
-We can run it, and see the traces print to your console:
+When you run the script, the traces are printed to your console:
.. code-block:: sh
@@ -94,65 +94,64 @@ We can run it, and see the traces print to your console:
Each span typically represents a single operation or unit of work.
Spans can be nested, and have a parent-child relationship with other spans.
-While a given span is active, newly-created spans will inherit the active span's trace ID, options, and other attributes of its context.
-A span without a parent is called the "root span", and a trace is comprised of one root span and its descendants.
+While a given span is active, newly-created spans inherit the active span's trace ID, options, and other attributes of its context.
+A span without a parent is called the root span, and a trace comprises one root span and its descendants.
-In the example above, the OpenTelemetry Python library creates one trace containing three spans and prints it to STDOUT.
+In this example, the OpenTelemetry Python library creates one trace containing three spans and prints it to STDOUT.
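The parent-child relationship can be illustrated with a toy sketch (this is not the real SDK, just a minimal model of the rule described above): a child span inherits the active span's trace ID, while a root span starts a new trace.

```python
# Toy sketch (NOT the OpenTelemetry SDK): models how nested spans
# inherit the active span's trace ID, and how a span with no parent
# (the root span) starts a new trace.
import contextvars
import random
from contextlib import contextmanager

_current = contextvars.ContextVar("current_span", default=None)

class Span:
    def __init__(self, name, parent):
        self.name = name
        self.parent = parent
        # Child spans join the parent's trace; a root span starts a new one.
        self.trace_id = parent.trace_id if parent else random.getrandbits(128)

@contextmanager
def start_as_current_span(name):
    span = Span(name, _current.get())
    token = _current.set(span)
    try:
        yield span
    finally:
        _current.reset(token)

with start_as_current_span("foo") as foo:
    with start_as_current_span("bar") as bar:
        with start_as_current_span("baz") as baz:
            # All three spans share one trace; "foo" is the root span.
            assert foo.trace_id == bar.trace_id == baz.trace_id
            assert baz.parent is bar and foo.parent is None
```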
Configure exporters to emit spans elsewhere
-------------------------------------------
-The example above does emit information about all spans, but the output is a bit hard to read.
-In common cases, you would instead *export* this data to an application performance monitoring backend, to be visualized and queried.
-It is also common to aggregate span and trace information from multiple services into a single database, so that actions that require multiple services can still all be visualized together.
+The previous example emits information about all spans, but the output is a bit hard to read.
+In most cases, you can instead *export* this data to an application performance monitoring backend to be visualized and queried.
+It's also common to aggregate span and trace information from multiple services into a single database, so that actions requiring multiple services can still all be visualized together.
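The exporter pattern can be sketched with a toy in-memory sink (hypothetical names, not the SDK's exporter interface): finished spans are handed to an exporter, which forwards them to a backend.

```python
# Toy sketch (hypothetical, NOT the SDK's SpanExporter interface):
# an exporter receives batches of finished spans and sends them to a
# backend -- here an in-memory list stands in for that backend.
class InMemoryExporter:
    def __init__(self):
        self.received = []

    def export(self, spans):
        # A real exporter would serialize and ship these over the network.
        self.received.extend(spans)
        return "SUCCESS"

exporter = InMemoryExporter()
exporter.export([{"name": "foo"}, {"name": "bar"}])
print(len(exporter.received))  # 2
```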
-This concept is known as distributed tracing. One such distributed tracing backend is known as Jaeger.
+This concept of aggregating span and trace information is known as distributed tracing. Jaeger is one such distributed tracing backend. The Jaeger project provides an all-in-one Docker container with a UI, database, and consumer.
-The Jaeger project provides an all-in-one docker container that provides a UI, database, and consumer. Let's bring
-it up now:
+Run the following command to start Jaeger:
.. code-block:: sh
docker run -p 16686:16686 -p 6831:6831/udp jaegertracing/all-in-one
-This will start Jaeger on port 16686 locally, and expose Jaeger thrift agent on port 6831. You can visit it at http://localhost:16686.
+This command starts Jaeger locally on port 16686 and exposes the Jaeger Thrift agent on port 6831. You can visit Jaeger at http://localhost:16686.
-With this backend up, your application will now need to export traces to this system. ``opentelemetry-sdk`` does not provide an exporter
-for Jaeger, but you can install that as a separate package:
+After you spin up the backend, your application needs to export traces to it. Although ``opentelemetry-sdk`` doesn't provide an exporter
+for Jaeger, you can install one as a separate package with the following command:
.. code-block:: sh
pip install opentelemetry-exporter-jaeger
-Once installed, update your code to import the Jaeger exporter, and use that instead:
+After you install the exporter, update your code to import the Jaeger exporter and use that instead:
.. literalinclude:: getting_started/jaeger_example.py
:language: python
:lines: 15-
-Run the script:
+Finally, run the Python script:
-.. code-block:: python
+.. code-block:: sh
python jaeger_example.py
-You can then visit the jaeger UI, see you service under "services", and find your traces!
+You can then visit the Jaeger UI, see your service under "services", and find your traces!
.. image:: images/jaeger_trace.png
-Integrations example with Flask
--------------------------------
+Instrumentation example with Flask
+------------------------------------
-The above is a great example, but it's very manual. Within the telemetry space, there are common actions that one wants to instrument:
+While the example in the previous section is great, it's very manual. The following are common actions you might want to track and include as part of your distributed tracing:
* HTTP responses from web services
* HTTP requests from clients
* Database calls
-To help instrument common scenarios, opentelemetry also has the concept of "instrumentations": packages that are designed to interface
-with a specific framework or library, such as Flask and psycopg2. A list of the currently curated extension packages can be found `at the Contrib repo `_.
+To track these common actions, OpenTelemetry has the concept of instrumentations. Instrumentations are packages designed to interface
+with a specific framework or library, such as Flask and psycopg2. You can find a list of the currently curated extension packages in the `Contrib repository `_.
-We will now instrument a basic Flask application that uses the requests library to send HTTP requests. First, install the instrumentation packages themselves:
+The following example instruments a basic Flask application that uses the ``requests`` library to send HTTP requests. First, install the instrumentation packages:
.. code-block:: sh
@@ -160,33 +159,32 @@ We will now instrument a basic Flask application that uses the requests library
pip install opentelemetry-instrumentation-requests
-And let's write a small Flask application that sends an HTTP request, activating each instrumentation during the initialization:
+The following small Flask application sends an HTTP request and also activates each instrumentation during its initialization:
.. literalinclude:: getting_started/flask_example.py
:language: python
:lines: 15-
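What an instrumentation package does under the hood can be sketched with a toy wrapper (hypothetical names, not the real ``opentelemetry-instrumentation-flask`` API): it wraps each handler so a span is recorded around the call without the application code changing.

```python
# Toy sketch (hypothetical, NOT the real instrumentation API): wrap a
# handler so timing information is recorded around every call, the way
# an instrumentation package records a span for each request.
import time
from functools import wraps

RECORDED = []  # stands in for the span processor/exporter pipeline

def instrument(handler):
    @wraps(handler)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return handler(*args, **kwargs)
        finally:
            RECORDED.append({"span": handler.__name__,
                             "duration_s": time.time() - start})
    return wrapper

@instrument
def hello():
    return "hello"

hello()
print(RECORDED[0]["span"])  # hello
```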
-Now run the above script, hit the root url (http://localhost:5000/) a few times, and watch your spans be emitted!
+Now run the script, hit the root URL (http://localhost:5000/) a few times, and watch your spans be emitted!
.. code-block:: sh
python flask_example.py
-Configure Your HTTP Propagator (b3, Baggage)
+Configure your HTTP propagator (b3, Baggage)
-------------------------------------------------------
A major feature of distributed tracing is the ability to correlate a trace across
multiple services. However, those services need to propagate information about a
trace from one service to the other.
-To enable this, OpenTelemetry has the concept of `propagators `_,
-which provide a common method to encode and decode span information from a request and response,
-respectively.
+To enable this propagation, OpenTelemetry has the concept of `propagators `_,
+which provide a common method to encode and decode span information from a request and response, respectively.
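The encode/decode job a propagator performs can be illustrated with a toy parse of a W3C Trace Context ``traceparent`` header (a simplified sketch, not the SDK's propagator API): the header carries the trace ID, parent span ID, and sampling flag between services.

```python
# Toy sketch (NOT the SDK's propagator API): parse a W3C Trace Context
# "traceparent" header of the form version-trace_id-parent_id-flags.
def parse_traceparent(header: str) -> dict:
    version, trace_id, parent_id, flags = header.split("-")
    return {
        "version": version,
        "trace_id": trace_id,    # 16-byte trace ID, hex-encoded
        "parent_id": parent_id,  # 8-byte span ID, hex-encoded
        "sampled": bool(int(flags, 16) & 0x01),
    }

ctx = parse_traceparent(
    "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
print(ctx["trace_id"], ctx["sampled"])
```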
-By default, opentelemetry-python is configured to use the `W3C Trace Context `_
-HTTP headers for HTTP requests. This can be configured to leverage different propagators. Here's
+By default, ``opentelemetry-python`` is configured to use the `W3C Trace Context `_
+HTTP headers for HTTP requests, but you can configure it to leverage different propagators. Here's
an example using Zipkin's `b3 propagation `_:
.. code-block:: python
@@ -197,24 +195,24 @@ an example using Zipkin's `b3 propagation `_.
+It's valuable to have a data store for metrics so you can visualize and query the data. A common solution is
+`Prometheus `_, which provides a server to scrape and store time series data.
-Let's start by bringing up a Prometheus instance ourselves, to scrape our application. Write the following configuration:
+Start by bringing up a Prometheus instance to scrape your application. Write the following configuration:
.. code-block:: yaml
@@ -239,7 +237,7 @@ Let's start by bringing up a Prometheus instance ourselves, to scrape our applic
static_configs:
- targets: ['localhost:8000']
-And start a docker container for it:
+Then start a Docker container for the instance:
.. code-block:: sh
@@ -247,35 +245,35 @@ And start a docker container for it:
docker run --net=host -v /tmp/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus \
--log.level=debug --config.file=/etc/prometheus/prometheus.yml
-For our Python application, we will need to install an exporter specific to Prometheus:
+Install an exporter specific to Prometheus for your Python application:
.. code-block:: sh
pip install opentelemetry-exporter-prometheus
-And use that instead of the `ConsoleMetricsExporter`:
+Use that exporter instead of the ``ConsoleMetricsExporter``:
.. literalinclude:: getting_started/prometheus_example.py
:language: python
:lines: 15-
-The Prometheus server will run locally on port 8000, and the instrumented code will make metrics available to Prometheus via the `PrometheusMetricsExporter`.
+The Prometheus server runs locally on port 8000. The instrumented code makes metrics available to Prometheus via the ``PrometheusMetricsExporter``.
Visit the Prometheus UI (http://localhost:9090) to view your metrics.
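What Prometheus scrapes from that endpoint is plain text in its exposition format; a toy renderer for one counter line sketches the idea (simplified -- real exporters also emit ``HELP``/``TYPE`` metadata and handle label escaping):

```python
# Toy sketch of one line of the Prometheus text exposition format:
#   metric_name{label="value",...} sample_value
def render_counter(name, labels, value):
    # Prometheus output conventionally sorts labels for stable scrapes.
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"

line = render_counter("http_requests_total",
                      {"method": "GET", "code": "200"}, 42)
print(line)  # http_requests_total{code="200",method="GET"} 42
```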
-Using the OpenTelemetry Collector for traces and metrics
+Use the OpenTelemetry Collector for traces and metrics
--------------------------------------------------------
-Although it's possible to directly export your telemetry data to specific backends, you may more complex use cases, including:
+Although it's possible to directly export your telemetry data to specific backends, you might have more complex use cases such as the following:
-* having a single telemetry sink shared by multiple services, to reduce overhead of switching exporters
-* aggregating metrics or traces across multiple services, running on multiple hosts
+* A single telemetry sink shared by multiple services, to reduce the overhead of switching exporters.
+* Aggregating metrics or traces across multiple services, running on multiple hosts.
To enable a broad range of aggregation strategies, OpenTelemetry provides the `opentelemetry-collector `_.
The Collector is a flexible application that can consume trace and metric data and export to multiple other backends, including to another instance of the Collector.
-To see how this works in practice, let's start the Collector locally. Write the following file:
+Start the Collector locally to see how it works in practice. Write the following file:
.. code-block:: yaml
@@ -299,7 +297,7 @@ To see how this works in practice, let's start the Collector locally. Write the
receivers: [opencensus]
exporters: [logging]
-Start the docker container:
+Then start the Docker container:
.. code-block:: sh
@@ -314,7 +312,7 @@ Install the OpenTelemetry Collector exporter:
pip install opentelemetry-exporter-otlp
-And execute the following script:
+Finally, execute the following script:
.. literalinclude:: getting_started/otlpcollector_example.py
:language: python