From c432da344f5bd542af28f46b18ca58d8aead106b Mon Sep 17 00:00:00 2001
From: Hector Hernandez Guzman
Date: Thu, 26 Sep 2019 13:40:42 -0700
Subject: [PATCH] Span add override parameters for start_time and end_time

Make lint happy
Addressing comments
Addressing comments
Allowing 0 as start and end time
Fix lint issues

Metrics API RFC 0003 cont'd (#136)

* Create functions
  Comments for Meter
  More comments
  Add more comments
  Fix typos
* fix lint
* Fix lint
* fix typing
* Remove options, constructors, separate labels
* Consistent naming for float and int
* Abstract time series
* Use ABC
* Fix typo
* Fix docs
* separate measure classes
* Add examples
* fix lint
* Update to RFC 0003
* Add spancontext, measurebatch
* Fix docs
* Fix comments
* fix lint
* fix lint
* fix lint
* skip examples
* white space
* fix spacing
* fix imports
* fix imports
* LabelValues to str
* Black formatting
* fix isort
* Remove aggregation
* Fix names
* Remove aggregation from docs
* Fix lint
* metric changes
* Typing
* Fix lint
* Fix lint
* Add space
* Fix lint
* fix comments
* address comments
* fix comments

Adding a working propagator, adding to integrations and example (#137)

Adding a full, end-to-end example of propagation at work in the example application, including a test. Adding the use of propagators into the integrations.

Metrics API RFC 0009 (#140)

* Create functions
  Comments for Meter
  More comments
  Add more comments
  Fix typos
* fix lint
* Fix lint
* fix typing
* Remove options, constructors, separate labels
* Consistent naming for float and int
* Abstract time series
* Use ABC
* Fix typo
* Fix docs
* separate measure classes
* Add examples
* fix lint
* Update to RFC 0003
* Add spancontext, measurebatch
* Fix docs
* Fix comments
* fix lint
* fix lint
* fix lint
* skip examples
* white space
* fix spacing
* fix imports
* fix imports
* LabelValues to str
* Black formatting
* fix isort
* Remove aggregation
* Fix names
* Remove aggregation from docs
* Fix lint
* metric changes
* Typing
* Fix lint
* Fix lint
* Add space
* Fix lint
* fix comments
* handle, recordbatch
* docs
* Update recordbatch
* black
* Fix typo
* remove ValueType
* fix lint

Console exporter (#156)

Make use_span more flexible (closes #147). (#154)

Co-Authored-By: Reiley Yang
Co-Authored-By: Chris Kleinknecht

WSGI fixes (#148)

Fix http.url. Don't delay calling wrapped app.

Skeleton for azure monitor exporters (#151)

Add link to docs to README (#170)

Move example app to the examples folder (#172)

WSGI: Fix port 80 always appended in http.host (#173)

Build and host docs via github action (#167)

Add missing license boilerplate to a few files (#176)

sdk/trace/exporters: add batch span processor exporter (#153)

The exporters specification states that two built-in span processors should be implemented: the simple span processor and the batch span processor. This commit implements the latter; it is mainly based on the opentelemetry/java one.

The algorithm implements the following logic:
- a condition variable is used to notify the worker thread in case the queue is half full, so that exporting can start before the queue gets full and spans are dropped.
- export is called every schedule_delay_millis if there is at least one new span to export.
- when the processor is shut down, all remaining spans are exported.

Implementing W3C TraceContext (fixes #116) (#180)

* Implementing TraceContext (fixes #116)

This introduces a w3c TraceContext propagator, primarily inspired by opencensus.
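The batch processor logic described for #153 above can be pictured with a short sketch. This is only an illustration of the queue/condition-variable interplay, not the SDK's actual class; the class name, the max_queue_size default, and the bare exporter.export() call are placeholders.

    import collections
    import threading


    class BatchProcessorSketch:
        """Collects finished spans and exports them from a worker thread."""

        def __init__(self, exporter, max_queue_size=2048, schedule_delay_millis=5000):
            self.exporter = exporter
            self.queue = collections.deque(maxlen=max_queue_size)
            self.condition = threading.Condition()
            self.schedule_delay = schedule_delay_millis / 1000.0
            self.done = False
            self.worker = threading.Thread(target=self._worker, daemon=True)
            self.worker.start()

        def on_end(self, span):
            if self.done:
                return
            self.queue.appendleft(span)  # oldest spans drop off the right when full
            if len(self.queue) >= self.queue.maxlen // 2:
                # Queue is half full: wake the worker so exporting starts
                # before the queue fills up and spans get dropped.
                with self.condition:
                    self.condition.notify()

        def _worker(self):
            while not self.done:
                with self.condition:
                    # Wake up either on notify() or after schedule_delay.
                    self.condition.wait(self.schedule_delay)
                self._export_batch()

        def _export_batch(self):
            batch = []
            while self.queue:
                batch.append(self.queue.pop())
            if batch:  # only export if there is at least one new span
                self.exporter.export(batch)

        def shutdown(self):
            # Export whatever is left, then stop the worker.
            self.done = True
            with self.condition:
                self.condition.notify_all()
            self.worker.join()
            self._export_batch()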
fix time conversion bug (#182)

Introduce Context.suppress_instrumentation (#181)

Metrics Implementation (#160)

* Create functions
  Comments for Meter
  More comments
  Add more comments
  Fix typos
* fix lint
* Fix lint
* fix typing
* Remove options, constructors, separate labels
* Consistent naming for float and int
* Abstract time series
* Use ABC
* Fix typo
* Fix docs
* separate measure classes
* Add examples
* fix lint
* Update to RFC 0003
* Add spancontext, measurebatch
* Fix docs
* Fix comments
* fix lint
* fix lint
* fix lint
* skip examples
* white space
* fix spacing
* fix imports
* fix imports
* LabelValues to str
* Black formatting
* fix isort
* Remove aggregation
* Fix names
* Remove aggregation from docs
* Fix lint
* metric changes
* Typing
* Fix lint
* Fix lint
* Add space
* Fix lint
* fix comments
* handle, recordbatch
* docs
* Update recordbatch
* black
* Fix typo
* remove ValueType
* fix lint
* sdk
* metrics
* example
* counter
* Tests
* Address comments
* Add tests
* Fix typing and examples
* black
* fix lint
* remove override
* Fix tests
* mypy
* fix lint
* fix type
* fix typing
* fix tests
* isort
* isort
* isort
* isort
* noop
* lint
* lint
* fix tuple typing
* fix type
* black
* address comments
* fix type
* fix lint
* remove imports
* default tests
* fix lint
* use sequence
* remove ellipses
* remove ellipses
* black
* Fix typo
* fix example
* fix type
* fix type
* address comments

Implement Azure Monitor Exporter (#175)

Span add override parameters for start_time and end_time (#179)

CONTRIBUTING.md: Fix clone URL (#177)

Add B3 exporter to alpha release table (#164)

Update README for alpha release (#189)

Update Contributing.md doc (#194)

Add **simple** client/server examples (#191)

Remove unused dev-requirements.txt (#200)

The requirements are contained in tox.ini now.

Fix bug in BoundedList for Python 3.4 and add tests (#199)

* fix bug in BoundedList for Python 3.4 and add tests

collections.deque.copy() was introduced in Python 3.5; this commit replaces it with the deque constructor and adds some tests to BoundedList and BoundedDict to avoid similar problems in the future. Also, improve the docstrings of the BoundedList and BoundedDict classes.

Move util.time_ns to API. (#205)

Add Jaeger exporter (#174)

This adds a Jaeger exporter for OpenTelemetry. This exporter is based on https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-jaeger.

The exporter uses Thrift and can be configured to send data to the agent and also to a remote collector. There is a long discussion going on about how to include generated files in the repo, so for now just put them here.

Add code coverage
Revert latest commit

Fix some "errors" found by mypy. (#204)

Fix some errors found by mypy (split from #201).

Update README for new milestones (#218)

Refactor current span handling for newly created spans. (#198)

1. Make Tracer.start_span() simply create and start the Span, without setting it as the current instance.
2. Add an extra Tracer.start_as_current_span() to create the Span and set it as the current instance automatically.
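The compatibility issue behind the BoundedList fix in #199 above comes down to how a deque is copied; a minimal before/after illustration (BoundedList's real attribute names are not shown here):

    from collections import deque

    original = deque([1, 2, 3], maxlen=10)

    # deque.copy() only exists on Python >= 3.5 ...
    # copied = original.copy()

    # ... so copy through the constructor instead, which also works on
    # Python 3.4 and preserves the maxlen bound.
    copied = deque(original, maxlen=original.maxlen)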
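And a rough usage sketch of the #198 split, assuming the SDK Tracer is configured as in the README example further down; the span names are placeholders and the exact call signatures may differ:

    from opentelemetry import trace
    from opentelemetry.sdk.trace import Tracer

    trace.set_preferred_tracer_implementation(lambda T: Tracer())
    tracer = trace.tracer()

    # start_span() only creates and starts the span; the caller decides
    # whether and when to make it current, and must end it explicitly.
    span = tracer.start_span("background-work")
    span.end()

    # start_as_current_span() creates the span and also installs it as the
    # current span for the duration of the block.
    with tracer.start_as_current_span("request"):
        child = tracer.start_span("db-query")  # started while "request" is current
        child.end()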
Co-Authored-By: Chris Kleinknecht Add set_status to Span (#213) Initial commit Initial version --- .flake8 | 1 + .github/workflows/docs.yml | 23 + .isort.cfg | 1 + .pylintrc | 2 +- CONTRIBUTING.md | 36 +- README.md | 99 ++- dev-requirements.txt | 1 - docs/conf.py | 3 + docs/opentelemetry.metrics.rst | 7 - docs/opentelemetry.metrics.time_series.rst | 5 - docs/opentelemetry.trace.rst | 12 +- docs/opentelemetry.trace.status.rst | 7 + .../opentelemetry-example-app}/README.rst | 0 .../opentelemetry-example-app}/setup.py | 0 .../src/opentelemetry_example_app/__init__.py | 0 .../flask_example.py | 18 +- .../metrics_example.py | 39 + .../tests/__init__.py | 0 .../tests/test_flask_example.py | 73 ++ examples/trace/client.py | 42 + examples/trace/server.py | 55 ++ .../README.rst | 17 + .../examples/client.py | 30 + .../examples/server.py | 44 + .../examples/trace.py | 27 + ext/opentelemetry-ext-azure-monitor/setup.cfg | 46 + ext/opentelemetry-ext-azure-monitor/setup.py | 26 + .../ext/azure_monitor/__init__.py | 23 + .../ext/azure_monitor/protocol.py | 201 +++++ .../opentelemetry/ext/azure_monitor/trace.py | 173 ++++ .../opentelemetry/ext/azure_monitor/util.py | 42 + .../ext/azure_monitor/version.py | 15 + .../tests/__init__.py | 0 .../ext/http_requests/__init__.py | 8 +- .../tests/test_requests_integration.py | 14 + ext/opentelemetry-ext-jaeger/README.rst | 67 ++ .../examples/jaeger_exporter_example.py | 51 ++ ext/opentelemetry-ext-jaeger/setup.cfg | 47 + ext/opentelemetry-ext-jaeger/setup.py | 26 + .../src/opentelemetry/ext/jaeger/__init__.py | 363 ++++++++ .../opentelemetry/ext/jaeger/gen/__init__.py | 4 + .../ext/jaeger/gen/agent/Agent-remote | 124 +++ .../ext/jaeger/gen/agent/Agent.py | 246 ++++++ .../ext/jaeger/gen/agent/__init__.py | 1 + .../ext/jaeger/gen/agent/constants.py | 12 + .../ext/jaeger/gen/agent/ttypes.py | 15 + .../ext/jaeger/gen/jaeger/Collector-remote | 117 +++ .../ext/jaeger/gen/jaeger/Collector.py | 243 +++++ .../ext/jaeger/gen/jaeger/__init__.py | 1 + .../ext/jaeger/gen/jaeger/constants.py | 12 + .../ext/jaeger/gen/jaeger/ttypes.py | 831 ++++++++++++++++++ .../gen/zipkincore/ZipkinCollector-remote | 117 +++ .../jaeger/gen/zipkincore/ZipkinCollector.py | 243 +++++ .../ext/jaeger/gen/zipkincore/__init__.py | 1 + .../ext/jaeger/gen/zipkincore/constants.py | 28 + .../ext/jaeger/gen/zipkincore/ttypes.py | 647 ++++++++++++++ .../src/opentelemetry/ext/jaeger/version.py | 16 + .../tests/__init__.py | 0 .../tests/test_jaeger_exporter.py | 254 ++++++ .../thrift/agent.thrift | 27 + .../thrift/jaeger.thrift | 85 ++ .../thrift/zipkincore.thrift | 346 ++++++++ ext/opentelemetry-ext-pymongo/README.rst | 27 + ext/opentelemetry-ext-pymongo/setup.cfg | 46 + ext/opentelemetry-ext-pymongo/setup.py | 26 + .../src/opentelemetry/ext/pymongo/__init__.py | 80 ++ .../src/opentelemetry/ext/pymongo/version.py | 15 + .../tests/__init__.py | 0 .../tests/test_pymongo_integration.py | 144 +++ .../src/opentelemetry/ext/wsgi/__init__.py | 100 ++- .../tests/test_wsgi_middleware.py | 135 ++- .../src/opentelemetry/context/__init__.py | 7 +- .../context/propagation/__init__.py | 14 + .../context/propagation/httptextformat.py | 10 +- .../propagation/tracecontexthttptextformat.py | 162 ++++ .../src/opentelemetry/metrics/__init__.py | 359 +++----- .../metrics/examples/pre_aggregated.py | 36 - .../src/opentelemetry/metrics/examples/raw.py | 45 - .../src/opentelemetry/metrics/time_series.py | 39 - .../src/opentelemetry/propagators/__init__.py | 84 ++ .../src/opentelemetry/trace/__init__.py | 115 ++- 
.../src/opentelemetry/trace/status.py | 185 ++++ .../src/opentelemetry/util/__init__.py | 12 + opentelemetry-api/tests/context/__init__.py | 0 .../tests/context/propagation/__init__.py | 0 .../test_tracecontexthttptextformat.py | 215 +++++ opentelemetry-api/tests/metrics/__init__.py | 13 + .../tests/metrics/test_metrics.py | 68 ++ opentelemetry-api/tests/mypysmoke.py | 14 + .../tests/test_flask_example.py | 14 - .../src/opentelemetry/sdk/__init__.py | 4 +- .../sdk/context/propagation/b3_format.py | 22 +- .../src/opentelemetry/sdk/metrics/__init__.py | 247 ++++++ .../src/opentelemetry/sdk/trace/__init__.py | 224 ++--- .../sdk/trace/export/__init__.py | 150 ++++ .../trace/export/in_memory_span_exporter.py | 2 +- .../src/opentelemetry/sdk/util.py | 131 ++- opentelemetry-sdk/tests/metrics/__init__.py | 13 + .../tests/metrics/test_metrics.py | 189 ++++ .../tests/resources/test_init.py | 14 + opentelemetry-sdk/tests/test_util.py | 213 +++++ .../tests/trace/export/test_export.py | 167 +++- opentelemetry-sdk/tests/trace/test_trace.py | 165 +++- pyproject.toml | 19 + tox.ini | 35 +- 105 files changed, 7843 insertions(+), 731 deletions(-) create mode 100644 .github/workflows/docs.yml delete mode 100644 dev-requirements.txt delete mode 100644 docs/opentelemetry.metrics.time_series.rst create mode 100644 docs/opentelemetry.trace.status.rst rename {opentelemetry-example-app => examples/opentelemetry-example-app}/README.rst (100%) rename {opentelemetry-example-app => examples/opentelemetry-example-app}/setup.py (100%) rename {opentelemetry-example-app => examples/opentelemetry-example-app}/src/opentelemetry_example_app/__init__.py (100%) rename {opentelemetry-example-app => examples/opentelemetry-example-app}/src/opentelemetry_example_app/flask_example.py (79%) create mode 100644 examples/opentelemetry-example-app/src/opentelemetry_example_app/metrics_example.py rename {opentelemetry-example-app => examples/opentelemetry-example-app}/tests/__init__.py (100%) create mode 100644 examples/opentelemetry-example-app/tests/test_flask_example.py create mode 100755 examples/trace/client.py create mode 100755 examples/trace/server.py create mode 100644 ext/opentelemetry-ext-azure-monitor/README.rst create mode 100644 ext/opentelemetry-ext-azure-monitor/examples/client.py create mode 100644 ext/opentelemetry-ext-azure-monitor/examples/server.py create mode 100644 ext/opentelemetry-ext-azure-monitor/examples/trace.py create mode 100644 ext/opentelemetry-ext-azure-monitor/setup.cfg create mode 100644 ext/opentelemetry-ext-azure-monitor/setup.py create mode 100644 ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/__init__.py create mode 100644 ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/protocol.py create mode 100644 ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/trace.py create mode 100644 ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/util.py create mode 100644 ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/version.py create mode 100644 ext/opentelemetry-ext-azure-monitor/tests/__init__.py create mode 100644 ext/opentelemetry-ext-jaeger/README.rst create mode 100644 ext/opentelemetry-ext-jaeger/examples/jaeger_exporter_example.py create mode 100644 ext/opentelemetry-ext-jaeger/setup.cfg create mode 100644 ext/opentelemetry-ext-jaeger/setup.py create mode 100644 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/__init__.py create mode 100644 
ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/__init__.py create mode 100755 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/Agent-remote create mode 100644 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/Agent.py create mode 100644 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/__init__.py create mode 100644 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/constants.py create mode 100644 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/ttypes.py create mode 100755 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/Collector-remote create mode 100644 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/Collector.py create mode 100644 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/__init__.py create mode 100644 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/constants.py create mode 100644 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/ttypes.py create mode 100755 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/ZipkinCollector-remote create mode 100644 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/ZipkinCollector.py create mode 100644 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/__init__.py create mode 100644 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/constants.py create mode 100644 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/ttypes.py create mode 100644 ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/version.py create mode 100644 ext/opentelemetry-ext-jaeger/tests/__init__.py create mode 100644 ext/opentelemetry-ext-jaeger/tests/test_jaeger_exporter.py create mode 100644 ext/opentelemetry-ext-jaeger/thrift/agent.thrift create mode 100644 ext/opentelemetry-ext-jaeger/thrift/jaeger.thrift create mode 100644 ext/opentelemetry-ext-jaeger/thrift/zipkincore.thrift create mode 100644 ext/opentelemetry-ext-pymongo/README.rst create mode 100644 ext/opentelemetry-ext-pymongo/setup.cfg create mode 100644 ext/opentelemetry-ext-pymongo/setup.py create mode 100644 ext/opentelemetry-ext-pymongo/src/opentelemetry/ext/pymongo/__init__.py create mode 100644 ext/opentelemetry-ext-pymongo/src/opentelemetry/ext/pymongo/version.py create mode 100644 ext/opentelemetry-ext-pymongo/tests/__init__.py create mode 100644 ext/opentelemetry-ext-pymongo/tests/test_pymongo_integration.py create mode 100644 opentelemetry-api/src/opentelemetry/context/propagation/tracecontexthttptextformat.py delete mode 100644 opentelemetry-api/src/opentelemetry/metrics/examples/pre_aggregated.py delete mode 100644 opentelemetry-api/src/opentelemetry/metrics/examples/raw.py delete mode 100644 opentelemetry-api/src/opentelemetry/metrics/time_series.py create mode 100644 opentelemetry-api/src/opentelemetry/propagators/__init__.py create mode 100644 opentelemetry-api/src/opentelemetry/trace/status.py create mode 100644 opentelemetry-api/src/opentelemetry/util/__init__.py create mode 100644 opentelemetry-api/tests/context/__init__.py create mode 100644 opentelemetry-api/tests/context/propagation/__init__.py create mode 100644 opentelemetry-api/tests/context/propagation/test_tracecontexthttptextformat.py create mode 100644 opentelemetry-api/tests/metrics/__init__.py create mode 100644 opentelemetry-api/tests/metrics/test_metrics.py delete mode 100644 
opentelemetry-example-app/tests/test_flask_example.py create mode 100644 opentelemetry-sdk/src/opentelemetry/sdk/metrics/__init__.py create mode 100644 opentelemetry-sdk/tests/metrics/__init__.py create mode 100644 opentelemetry-sdk/tests/metrics/test_metrics.py create mode 100644 opentelemetry-sdk/tests/test_util.py diff --git a/.flake8 b/.flake8 index 5384053b3bc..a3411a16147 100644 --- a/.flake8 +++ b/.flake8 @@ -1,2 +1,3 @@ [flake8] ignore = E501,W503,E203 +exclude = .svn,CVS,.bzr,.hg,.git,__pycache__,.tox,ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/,ext/opentelemetry-ext-jaeger/build/* diff --git a/.github/workflows/docs.yml b/.github/workflows/docs.yml new file mode 100644 index 00000000000..423396ea8d7 --- /dev/null +++ b/.github/workflows/docs.yml @@ -0,0 +1,23 @@ +name: Docs + +on: + push: + branches: + - master + +jobs: + docs: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v1 + - uses: actions/setup-python@v1 + - name: Build docs + run: | + pip install --upgrade tox + tox -e docs + - name: Publish to gh-pages + uses: JamesIves/github-pages-deploy-action@2.0.2 + env: + ACCESS_TOKEN: ${{ secrets.DocsPushToken }} + BRANCH: gh-pages + FOLDER: docs/_build/html/ diff --git a/.isort.cfg b/.isort.cfg index 43cafae1971..4bf64a34f1a 100644 --- a/.isort.cfg +++ b/.isort.cfg @@ -12,3 +12,4 @@ line_length=79 ; ) ; docs: https://github.com/timothycrosley/isort#multi-line-output-modes multi_line_output=3 +skip_glob=ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/* diff --git a/.pylintrc b/.pylintrc index 8130305d70d..782fc58700e 100644 --- a/.pylintrc +++ b/.pylintrc @@ -7,7 +7,7 @@ extension-pkg-whitelist= # Add files or directories to the blacklist. They should be base names, not # paths. -ignore=CVS +ignore=CVS,gen # Add files or directories matching the regex patterns to the blacklist. The # regex matches against base names, not paths. diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index b327ce96e1c..9bb81d71652 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -8,6 +8,11 @@ See the [public meeting notes](https://docs.google.com/document/d/1CIMGoIOZ-c3-i for a summary description of past meetings. To request edit access, join the meeting or get in touch on [Gitter](https://gitter.im/open-telemetry/opentelemetry-python). +See to the [community membership document](https://github.com/open-telemetry/community/blob/master/community-membership.md) +on how to become a [**Member**](https://github.com/open-telemetry/community/blob/master/community-membership.md#member), +[**Approver**](https://github.com/open-telemetry/community/blob/master/community-membership.md#approver) +and [**Maintainer**](https://github.com/open-telemetry/community/blob/master/community-membership.md#maintainer). + ## Development This project uses [`tox`](https://tox.readthedocs.io) to automate some aspects @@ -37,7 +42,7 @@ pull requests (PRs). To create a new PR, fork the project in GitHub and clone the upstream repo: ```sh -$ git clone https://https://github.com/open-telemetry/opentelemetry-python.git +$ git clone https://github.com/open-telemetry/opentelemetry-python.git ``` Add your fork as an origin: @@ -74,15 +79,16 @@ Open a pull request against the main `opentelemetry-python` repo. ### How to Get PRs Merged A PR is considered to be **ready to merge** when: -* It has received two approvals from Collaborators/Maintainers (at different - companies). 
+* It has received two approvals from [Approvers](https://github.com/open-telemetry/community/blob/master/community-membership.md#approver) + / [Maintainers](https://github.com/open-telemetry/community/blob/master/community-membership.md#maintainer) + (at different companies). * Major feedbacks are resolved. * It has been open for review for at least one working day. This gives people reasonable time to review. * Trivial change (typo, cosmetic, doc, etc.) doesn't have to wait for one day. * Urgent fix can take exception as long as it has been actively communicated. -Any Collaborator/Maintainer can merge the PR once it is **ready to merge**. +Any Approver / Maintainer can merge the PR once it is **ready to merge**. ## Design Choices @@ -111,25 +117,3 @@ For a deeper discussion, see: https://github.com/open-telemetry/opentelemetry-sp as specified with the [napolean extension](http://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html#google-vs-numpy) extension in [Sphinx](http://www.sphinx-doc.org/en/master/index.html). - -## Become a Collaborator - -Collaborators have write access to the repo. - -To become a Collaborator: -* Become an active Contributor by working on PRs. -* Actively participate in the community meeting, design discussion, PR review - and issue discussion. -* Contact the Maintainers, express the willingness and commitment. -* Acknowledged and approved by two Maintainers (at different companies). - -## Become a Maintainer - -Maintainers have admin access to the repo. - -To become a Maintainer: -* Become a [member of OpenTelemetry organization](https://github.com/orgs/open-telemetry/people). -* Become a Collaborator. -* Demonstrate the ability and commitment. -* Contact the Maintainers, express the willingness and commitment. -* Acknowledged and approved by all the current Maintainers. diff --git a/README.md b/README.md index b597d6f6119..6edad08a2bc 100644 --- a/README.md +++ b/README.md @@ -16,16 +16,16 @@ The `opentelemetry-sdk` package is the reference implementation of the API. Libraries that produce telemetry data should only depend on `opentelemetry-api`, and defer the choice of the SDK to the application developer. Applications may -depend on `opentelemetry-sdk` or another package that implements the API. +depend on `opentelemetry-sdk` or another package that implements the API. -To install the API and SDK packages, fork or clone this repo and do an -[editable -install](https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs) -via `pip`: +**Please note** that this library is currently in _alpha_, and shouldn't be +used in production environments. 
+ +The API and SDK packages are available on PyPI, and can installed via `pip`: ```sh -pip install -e ./opentelemetry-api -pip install -e ./opentelemetry-sdk +pip install opentelemetry-api +pip install opentelemetry-sdk ``` The @@ -34,6 +34,16 @@ directory includes OpenTelemetry integration packages, which can be installed separately as: ```sh +pip install opentelemetry-ext-{integration} +``` + +To install the development versions of these packages instead, clone or fork +this repo and do an [editable +install](https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs): + +```sh +pip install -e ./opentelemetry-api +pip install -e ./opentelemetry-sdk pip install -e ./ext/opentelemetry-ext-{integration} ``` @@ -43,20 +53,25 @@ pip install -e ./ext/opentelemetry-ext-{integration} from opentelemetry import trace from opentelemetry.context import Context from opentelemetry.sdk.trace import Tracer +from opentelemetry.sdk.trace.export import ConsoleSpanExporter +from opentelemetry.sdk.trace.export import SimpleExportSpanProcessor trace.set_preferred_tracer_implementation(lambda T: Tracer()) tracer = trace.tracer() +tracer.add_span_processor( + SimpleExportSpanProcessor(ConsoleSpanExporter()) +) with tracer.start_span('foo'): - print(Context) with tracer.start_span('bar'): - print(Context) with tracer.start_span('baz'): print(Context) - print(Context) - print(Context) ``` -See [opentelemetry-example-app](./opentelemetry-example-app/README.rst) for a complete example. +See the [API +documentation](https://open-telemetry.github.io/opentelemetry-python/) for more +detail, and the +[opentelemetry-example-app](./examples/opentelemetry-example-app/README.rst) +for a complete example. ## Contributing @@ -64,20 +79,46 @@ See [CONTRIBUTING.md](CONTRIBUTING.md) ## Release Schedule -OpenTelemetry Python is under active development. Our goal is to release an -_alpha_ version of the library at the end of September 2019. This release isn't -guaranteed to conform to a specific version of the specification, and future -releases will not attempt to maintain backwards compatibility with the alpha -release. - -| Component | Version | Target Date | -| --------------------------- | ------- | ----------------- | -| Tracing API | Alpha | September 30 2019 | -| Tracing SDK | Alpha | September 30 2019 | -| Metrics API | Alpha | September 30 2019 | -| Metrics SDK | Alpha | September 30 2019 | -| Jaeger Trace Exporter | Alpha | Unknown | -| Prometheus Metrics Exporter | Alpha | Unknown | -| Context Propagation | Alpha | September 30 2019 | -| OpenTracing Bridge | Alpha | Unknown | -| OpenCensus Bridge | Alpha | Unknown | +OpenTelemetry Python is under active development. + +The library is not yet _generally available_, and releases aren't guaranteed to +conform to a specific version of the specification. Future releases will not +attempt to maintain backwards compatibility with previous releases. + +The [v0.1 alpha +release](https://github.com/open-telemetry/opentelemetry-python/releases/tag/v0.1.0) +includes: + +- Tracing API +- Tracing SDK +- Metrics API +- Metrics SDK (Partial) +- W3C Trace Context Propagation +- B3 Context Propagation +- HTTP Integrations + +See the [project +milestones](https://github.com/open-telemetry/opentelemetry-python/milestones) +for details on upcoming releases. The dates and features described here are +estimates, and subject to change. 
+ +Future releases targets include: + +| Component | Version | Target Date | +| --------------------------- | ---------- | --------------- | +| Jaeger Trace Exporter | Alpha v0.2 | October 28 2019 | +| Metrics SDK (Complete) | Alpha v0.2 | October 28 2019 | +| Prometheus Metrics Exporter | Alpha v0.2 | October 28 2019 | +| OpenTracing Bridge | Alpha v0.2 | October 28 2019 | + +| Component | Version | Target Date | +| ----------------------------------- | ---------- | ---------------- | +| Zipkin Trace Exporter | Alpha v0.3 | November 15 2019 | +| W3C Correlation Context Propagation | Alpha v0.3 | November 15 2019 | +| Support for Tags/Baggage | Alpha v0.3 | November 15 2019 | +| Metrics Aggregation | Alpha v0.3 | November 15 2019 | +| gRPC Integrations | Alpha v0.3 | November 15 2019 | + +| Component | Version | Target Date | +| ----------------- | ---------- | ---------------- | +| OpenCensus Bridge | Alpha v0.4 | December 31 2019 | diff --git a/dev-requirements.txt b/dev-requirements.txt deleted file mode 100644 index c326b529fde..00000000000 --- a/dev-requirements.txt +++ /dev/null @@ -1 +0,0 @@ -pylint~=2.3 diff --git a/docs/conf.py b/docs/conf.py index d719ebe9311..694ba7f0056 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -42,6 +42,9 @@ "sphinx.ext.viewcode", # Link to other sphinx docs "sphinx.ext.intersphinx", + # Add a .nojekyll file to the generated HTML docs + # https://help.github.com/en/articles/files-that-start-with-an-underscore-are-missing + "sphinx.ext.githubpages", ] intersphinx_mapping = {"python": ("https://docs.python.org/3/", None)} diff --git a/docs/opentelemetry.metrics.rst b/docs/opentelemetry.metrics.rst index 2d025d3197f..358d5491a69 100644 --- a/docs/opentelemetry.metrics.rst +++ b/docs/opentelemetry.metrics.rst @@ -1,13 +1,6 @@ opentelemetry.metrics package ============================= -Submodules ----------- - -.. toctree:: - - opentelemetry.metrics.time_series - Module contents --------------- diff --git a/docs/opentelemetry.metrics.time_series.rst b/docs/opentelemetry.metrics.time_series.rst deleted file mode 100644 index 16297d7eacd..00000000000 --- a/docs/opentelemetry.metrics.time_series.rst +++ /dev/null @@ -1,5 +0,0 @@ -opentelemetry.metrics.time\_series module -========================================== - -.. automodule:: opentelemetry.metrics.time_series - diff --git a/docs/opentelemetry.trace.rst b/docs/opentelemetry.trace.rst index cec44bd8178..a57b5dcbff8 100644 --- a/docs/opentelemetry.trace.rst +++ b/docs/opentelemetry.trace.rst @@ -1,4 +1,14 @@ opentelemetry.trace package =========================== -.. automodule:: opentelemetry.trace +Submodules +---------- + +.. toctree:: + + opentelemetry.trace.status + +Module contents +--------------- + +.. automodule:: opentelemetry.trace \ No newline at end of file diff --git a/docs/opentelemetry.trace.status.rst b/docs/opentelemetry.trace.status.rst new file mode 100644 index 00000000000..0205446c808 --- /dev/null +++ b/docs/opentelemetry.trace.status.rst @@ -0,0 +1,7 @@ +opentelemetry.trace.status +========================== + +.. 
automodule:: opentelemetry.trace.status + :members: + :undoc-members: + :show-inheritance: diff --git a/opentelemetry-example-app/README.rst b/examples/opentelemetry-example-app/README.rst similarity index 100% rename from opentelemetry-example-app/README.rst rename to examples/opentelemetry-example-app/README.rst diff --git a/opentelemetry-example-app/setup.py b/examples/opentelemetry-example-app/setup.py similarity index 100% rename from opentelemetry-example-app/setup.py rename to examples/opentelemetry-example-app/setup.py diff --git a/opentelemetry-example-app/src/opentelemetry_example_app/__init__.py b/examples/opentelemetry-example-app/src/opentelemetry_example_app/__init__.py similarity index 100% rename from opentelemetry-example-app/src/opentelemetry_example_app/__init__.py rename to examples/opentelemetry-example-app/src/opentelemetry_example_app/__init__.py diff --git a/opentelemetry-example-app/src/opentelemetry_example_app/flask_example.py b/examples/opentelemetry-example-app/src/opentelemetry_example_app/flask_example.py similarity index 79% rename from opentelemetry-example-app/src/opentelemetry_example_app/flask_example.py rename to examples/opentelemetry-example-app/src/opentelemetry_example_app/flask_example.py index 9568f270c98..229acdfb435 100644 --- a/opentelemetry-example-app/src/opentelemetry_example_app/flask_example.py +++ b/examples/opentelemetry-example-app/src/opentelemetry_example_app/flask_example.py @@ -16,13 +16,13 @@ This module serves as an example to integrate with flask, using the requests library to perform downstream requests """ -import time - import flask +import requests import opentelemetry.ext.http_requests -from opentelemetry import trace +from opentelemetry import propagators, trace from opentelemetry.ext.wsgi import OpenTelemetryMiddleware +from opentelemetry.sdk.context.propagation.b3_format import B3Format from opentelemetry.sdk.trace import Tracer @@ -39,7 +39,6 @@ def configure_opentelemetry(flask_app: flask.Flask): * processors? * exporters? - * propagators? """ # Start by configuring all objects required to ensure # a complete end to end workflow. @@ -47,6 +46,13 @@ def configure_opentelemetry(flask_app: flask.Flask): # as the opentelemetry-api defines the interface with a no-op # implementation. trace.set_preferred_tracer_implementation(lambda _: Tracer()) + # Next, we need to configure how the values that are used by + # traces and metrics are propagated (such as what specific headers + # carry this value). + + # TBD: can remove once default TraceContext propagators are installed. + propagators.set_global_httptextformat(B3Format()) + # Integrations are the glue that binds the OpenTelemetry API # and the frameworks and libraries that are used together, automatically # creating Spans and propagating context as appropriate. 
@@ -61,8 +67,8 @@ def configure_opentelemetry(flask_app: flask.Flask): def hello(): # emit a trace that measures how long the # sleep takes - with trace.tracer().start_span("sleep"): - time.sleep(0.001) + with trace.tracer().start_span("example-request"): + requests.get("http://www.example.com") return "hello" diff --git a/examples/opentelemetry-example-app/src/opentelemetry_example_app/metrics_example.py b/examples/opentelemetry-example-app/src/opentelemetry_example_app/metrics_example.py new file mode 100644 index 00000000000..dd9509feb2b --- /dev/null +++ b/examples/opentelemetry-example-app/src/opentelemetry_example_app/metrics_example.py @@ -0,0 +1,39 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +""" +This module serves as an example for a simple application using metrics +""" + +from opentelemetry import metrics +from opentelemetry.sdk.metrics import Counter, Meter + +metrics.set_preferred_meter_implementation(lambda _: Meter()) +meter = metrics.meter() +counter = meter.create_metric( + "available memory", + "available memory", + "bytes", + int, + Counter, + ("environment",), +) + +label_values = ("staging",) +counter_handle = counter.get_handle(label_values) +counter_handle.add(100) +meter.record_batch(label_values, [(counter, 50)]) +print(counter_handle.data) + +# TODO: exporters diff --git a/opentelemetry-example-app/tests/__init__.py b/examples/opentelemetry-example-app/tests/__init__.py similarity index 100% rename from opentelemetry-example-app/tests/__init__.py rename to examples/opentelemetry-example-app/tests/__init__.py diff --git a/examples/opentelemetry-example-app/tests/test_flask_example.py b/examples/opentelemetry-example-app/tests/test_flask_example.py new file mode 100644 index 00000000000..fd0b89e98c3 --- /dev/null +++ b/examples/opentelemetry-example-app/tests/test_flask_example.py @@ -0,0 +1,73 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import unittest +from unittest import mock + +import requests +from werkzeug.test import Client +from werkzeug.wrappers import BaseResponse + +import opentelemetry_example_app.flask_example as flask_example +from opentelemetry.sdk import trace +from opentelemetry.sdk.context.propagation import b3_format + + +class TestFlaskExample(unittest.TestCase): + @classmethod + def setUpClass(cls): + cls.app = flask_example.app + + def setUp(self): + mocked_response = requests.models.Response() + mocked_response.status_code = 200 + mocked_response.reason = "Roger that!" + self.send_patcher = mock.patch.object( + requests.Session, + "send", + autospec=True, + spec_set=True, + return_value=mocked_response, + ) + self.send = self.send_patcher.start() + + def tearDown(self): + self.send_patcher.stop() + + def test_full_path(self): + trace_id = trace.generate_trace_id() + # We need to use the Werkzeug test app because + # The headers are injected at the wsgi layer. + # The flask test app will not include these, and + # result in the values not propagated. + client = Client(self.app.wsgi_app, BaseResponse) + # emulate b3 headers + client.get( + "/", + headers={ + "x-b3-traceid": b3_format.format_trace_id(trace_id), + "x-b3-spanid": b3_format.format_span_id( + trace.generate_span_id() + ), + "x-b3-sampled": "1", + }, + ) + # assert the http request header was propagated through. + prepared_request = self.send.call_args[0][1] + headers = prepared_request.headers + for required_header in {"x-b3-traceid", "x-b3-spanid", "x-b3-sampled"}: + self.assertIn(required_header, headers) + self.assertEqual( + headers["x-b3-traceid"], b3_format.format_trace_id(trace_id) + ) diff --git a/examples/trace/client.py b/examples/trace/client.py new file mode 100755 index 00000000000..662cea8d969 --- /dev/null +++ b/examples/trace/client.py @@ -0,0 +1,42 @@ +#!/usr/bin/env python3 +# +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import requests + +from opentelemetry import trace +from opentelemetry.ext import http_requests +from opentelemetry.sdk.trace import Tracer +from opentelemetry.sdk.trace.export import ( + ConsoleSpanExporter, + SimpleExportSpanProcessor, +) + +# The preferred tracer implementation must be set, as the opentelemetry-api +# defines the interface with a no-op implementation. +trace.set_preferred_tracer_implementation(lambda T: Tracer()) +tracer = trace.tracer() + +# Integrations are the glue that binds the OpenTelemetry API and the +# frameworks and libraries that are used together, automatically creating +# Spans and propagating context as appropriate. +http_requests.enable(tracer) + +# SpanExporter receives the spans and send them to the target location. 
+span_processor = SimpleExportSpanProcessor(ConsoleSpanExporter()) +tracer.add_span_processor(span_processor) + +response = requests.get(url="http://127.0.0.1:5000/") +span_processor.shutdown() diff --git a/examples/trace/server.py b/examples/trace/server.py new file mode 100755 index 00000000000..878898593c4 --- /dev/null +++ b/examples/trace/server.py @@ -0,0 +1,55 @@ +#!/usr/bin/env python3 +# +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import flask +import requests + +from opentelemetry import trace +from opentelemetry.ext import http_requests +from opentelemetry.ext.wsgi import OpenTelemetryMiddleware +from opentelemetry.sdk.trace import Tracer +from opentelemetry.sdk.trace.export import ( + ConsoleSpanExporter, + SimpleExportSpanProcessor, +) + +# The preferred tracer implementation must be set, as the opentelemetry-api +# defines the interface with a no-op implementation. +trace.set_preferred_tracer_implementation(lambda T: Tracer()) + +# Integrations are the glue that binds the OpenTelemetry API and the +# frameworks and libraries that are used together, automatically creating +# Spans and propagating context as appropriate. +http_requests.enable(trace.tracer()) + +# SpanExporter receives the spans and send them to the target location. +span_processor = SimpleExportSpanProcessor(ConsoleSpanExporter()) +trace.tracer().add_span_processor(span_processor) + +app = flask.Flask(__name__) +app.wsgi_app = OpenTelemetryMiddleware(app.wsgi_app) + + +@app.route("/") +def hello(): + with trace.tracer().start_span("parent"): + requests.get("https://www.wikipedia.org/wiki/Rabbit") + return "hello" + + +if __name__ == "__main__": + app.run(debug=True) + span_processor.shutdown() diff --git a/ext/opentelemetry-ext-azure-monitor/README.rst b/ext/opentelemetry-ext-azure-monitor/README.rst new file mode 100644 index 00000000000..976d9a531e0 --- /dev/null +++ b/ext/opentelemetry-ext-azure-monitor/README.rst @@ -0,0 +1,17 @@ +OpenTelemetry Azure Monitor Exporters +===================================== + +This library provides integration with Microsoft Azure Monitor. + +Installation +------------ + +:: + + pip install opentelemetry-ext-azure-monitor + +References +---------- + +* `Azure Monitor `_ +* `OpenTelemetry Project `_ diff --git a/ext/opentelemetry-ext-azure-monitor/examples/client.py b/ext/opentelemetry-ext-azure-monitor/examples/client.py new file mode 100644 index 00000000000..ff954788e66 --- /dev/null +++ b/ext/opentelemetry-ext-azure-monitor/examples/client.py @@ -0,0 +1,30 @@ +# Copyright 2019, OpenCensus Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import requests + +from opentelemetry import trace +from opentelemetry.ext import http_requests +from opentelemetry.ext.azure_monitor import AzureMonitorSpanExporter +from opentelemetry.sdk.trace import Tracer +from opentelemetry.sdk.trace.export import BatchExportSpanProcessor + +trace.set_preferred_tracer_implementation(lambda T: Tracer()) +tracer = trace.tracer() +http_requests.enable(tracer) +span_processor = BatchExportSpanProcessor(AzureMonitorSpanExporter()) +tracer.add_span_processor(span_processor) + +response = requests.get(url="http://127.0.0.1:5000/") +span_processor.shutdown() diff --git a/ext/opentelemetry-ext-azure-monitor/examples/server.py b/ext/opentelemetry-ext-azure-monitor/examples/server.py new file mode 100644 index 00000000000..54727ef737b --- /dev/null +++ b/ext/opentelemetry-ext-azure-monitor/examples/server.py @@ -0,0 +1,44 @@ +# Copyright 2019, OpenCensus Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import flask +import requests + +from opentelemetry import trace +from opentelemetry.ext import http_requests +from opentelemetry.ext.azure_monitor import AzureMonitorSpanExporter +from opentelemetry.ext.wsgi import OpenTelemetryMiddleware +from opentelemetry.sdk.trace import Tracer +from opentelemetry.sdk.trace.export import BatchExportSpanProcessor + +trace.set_preferred_tracer_implementation(lambda T: Tracer()) + +http_requests.enable(trace.tracer()) +span_processor = BatchExportSpanProcessor(AzureMonitorSpanExporter()) +trace.tracer().add_span_processor(span_processor) + +app = flask.Flask(__name__) +app.wsgi_app = OpenTelemetryMiddleware(app.wsgi_app) + + +@app.route("/") +def hello(): + with trace.tracer().start_span("parent"): + requests.get("https://www.wikipedia.org/wiki/Rabbit") + return "hello" + + +if __name__ == "__main__": + app.run(debug=True) + span_processor.shutdown() diff --git a/ext/opentelemetry-ext-azure-monitor/examples/trace.py b/ext/opentelemetry-ext-azure-monitor/examples/trace.py new file mode 100644 index 00000000000..8e8f887aa11 --- /dev/null +++ b/ext/opentelemetry-ext-azure-monitor/examples/trace.py @@ -0,0 +1,27 @@ +# Copyright 2019, OpenCensus Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +from opentelemetry import trace +from opentelemetry.ext.azure_monitor import AzureMonitorSpanExporter +from opentelemetry.sdk.trace import Tracer +from opentelemetry.sdk.trace.export import SimpleExportSpanProcessor + +trace.set_preferred_tracer_implementation(lambda T: Tracer()) +tracer = trace.tracer() +tracer.add_span_processor( + SimpleExportSpanProcessor(AzureMonitorSpanExporter()) +) + +with tracer.start_span("hello") as span: + print("Hello, World!") diff --git a/ext/opentelemetry-ext-azure-monitor/setup.cfg b/ext/opentelemetry-ext-azure-monitor/setup.cfg new file mode 100644 index 00000000000..3110ed35312 --- /dev/null +++ b/ext/opentelemetry-ext-azure-monitor/setup.cfg @@ -0,0 +1,46 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +[metadata] +name = opentelemetry-ext-azure-monitor +description = Azure Monitor integration for OpenTelemetry +long_description = file: README.rst +long_description_content_type = text/x-rst +author = OpenTelemetry Authors +author_email = cncf-opentelemetry-contributors@lists.cncf.io +url = https://github.com/open-telemetry/opentelemetry-python/ext/opentelemetry-ext-azure-monitor +platforms = any +license = Apache-2.0 +classifiers = + Development Status :: 3 - Alpha + Intended Audience :: Developers + License :: OSI Approved :: Apache Software License + Programming Language :: Python + Programming Language :: Python :: 3 + Programming Language :: Python :: 3.4 + Programming Language :: Python :: 3.5 + Programming Language :: Python :: 3.6 + Programming Language :: Python :: 3.7 + +[options] +python_requires = >=3.4 +package_dir= + =src +packages=find_namespace: +install_requires = + opentelemetry-api + opentelemetry-sdk + +[options.packages.find] +where = src diff --git a/ext/opentelemetry-ext-azure-monitor/setup.py b/ext/opentelemetry-ext-azure-monitor/setup.py new file mode 100644 index 00000000000..5f8afcb23a2 --- /dev/null +++ b/ext/opentelemetry-ext-azure-monitor/setup.py @@ -0,0 +1,26 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+import os + +import setuptools + +BASE_DIR = os.path.dirname(__file__) +VERSION_FILENAME = os.path.join( + BASE_DIR, "src", "opentelemetry", "ext", "azure_monitor", "version.py" +) +PACKAGE_INFO = {} +with open(VERSION_FILENAME) as f: + exec(f.read(), PACKAGE_INFO) + +setuptools.setup(version=PACKAGE_INFO["__version__"]) diff --git a/ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/__init__.py b/ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/__init__.py new file mode 100644 index 00000000000..81222c546eb --- /dev/null +++ b/ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/__init__.py @@ -0,0 +1,23 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +The opentelemetry-ext-azure-monitor package provides integration with +Microsoft Azure Monitor. +""" + +from opentelemetry.ext.azure_monitor.trace import AzureMonitorSpanExporter +from opentelemetry.ext.azure_monitor.version import __version__ # noqa + +__all__ = ["AzureMonitorSpanExporter"] diff --git a/ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/protocol.py b/ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/protocol.py new file mode 100644 index 00000000000..ccdf5eef8d2 --- /dev/null +++ b/ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/protocol.py @@ -0,0 +1,201 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ + +class BaseObject(dict): + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + for key in kwargs: + self[key] = kwargs[key] + + def __repr__(self): + tmp = {} + current = self + while True: + for item in current.items(): + if item[0] not in tmp: + tmp[item[0]] = item[1] + if ( + current._default # noqa pylint: disable=protected-access + == current + ): + break + current = current._default # noqa pylint: disable=protected-access + return repr(tmp) + + def __setattr__(self, name, value): + self[name] = value + + def __getattr__(self, name): + try: + return self[name] + except KeyError: + raise AttributeError( + "'{}' object has no attribute {}".format( + type(self).__name__, name + ) + ) + + def __getitem__(self, key): + if self._default is self: + return super().__getitem__(key) + if key in self: + return super().__getitem__(key) + return self._default[key] + + +BaseObject._default = BaseObject() # noqa pylint: disable=protected-access + + +class Data(BaseObject): + _default = BaseObject(baseData=None, baseType=None) + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.baseData = self.baseData # noqa pylint: disable=invalid-name + self.baseType = self.baseType # noqa pylint: disable=invalid-name + + +class DataPoint(BaseObject): + _default = BaseObject( + ns="", + name="", + kind=None, + value=0.0, + count=None, + min=None, + max=None, + stdDev=None, + ) + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.name = self.name + self.value = self.value + + +class Envelope(BaseObject): + _default = BaseObject( + ver=1, + name="", + time="", + sampleRate=None, + seq=None, + iKey=None, + flags=None, + tags=None, + data=None, + ) + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.name = self.name + self.time = self.time + + +class Event(BaseObject): + _default = BaseObject(ver=2, name="", properties=None, measurements=None) + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.ver = self.ver + self.name = self.name + + +class ExceptionData(BaseObject): + _default = BaseObject( + ver=2, + exceptions=[], + severityLevel=None, + problemId=None, + properties=None, + measurements=None, + ) + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.ver = self.ver + self.exceptions = self.exceptions + + +class Message(BaseObject): + _default = BaseObject( + ver=2, + message="", + severityLevel=None, + properties=None, + measurements=None, + ) + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.ver = self.ver + self.message = self.message + + +class MetricData(BaseObject): + _default = BaseObject(ver=2, metrics=[], properties=None) + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.ver = self.ver + self.metrics = self.metrics + + +class RemoteDependency(BaseObject): + _default = BaseObject( + ver=2, + name="", + id="", + resultCode="", + duration="", + success=True, + data=None, + type=None, + target=None, + properties=None, + measurements=None, + ) + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.ver = self.ver + self.name = self.name + self.resultCode = self.resultCode # noqa pylint: disable=invalid-name + self.duration = self.duration + + +class Request(BaseObject): + _default = BaseObject( + ver=2, + id="", + duration="", + responseCode="", + success=True, + source=None, + name=None, + url=None, + properties=None, + 
measurements=None, + ) + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.ver = self.ver + self.id = self.id # noqa pylint: disable=invalid-name + self.duration = self.duration + self.responseCode = ( # noqa pylint: disable=invalid-name + self.responseCode + ) + self.success = self.success diff --git a/ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/trace.py b/ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/trace.py new file mode 100644 index 00000000000..16f9252fd01 --- /dev/null +++ b/ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/trace.py @@ -0,0 +1,173 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import json +import logging +from urllib.parse import urlparse + +import requests + +from opentelemetry.ext.azure_monitor import protocol, util +from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult +from opentelemetry.sdk.util import ns_to_iso_str +from opentelemetry.trace import Span, SpanKind + +logger = logging.getLogger(__name__) + + +class AzureMonitorSpanExporter(SpanExporter): + def __init__(self, **options): + self.options = util.Options(**options) + if not self.options.instrumentation_key: + raise ValueError("The instrumentation_key is not provided.") + + def export(self, spans): + envelopes = tuple(map(self.span_to_envelope, spans)) + + try: + response = requests.post( + url=self.options.endpoint, + data=json.dumps(envelopes), + headers={ + "Accept": "application/json", + "Content-Type": "application/json; charset=utf-8", + }, + timeout=self.options.timeout, + ) + except requests.RequestException as ex: + logger.warning("Transient client side error %s.", ex) + return SpanExportResult.FAILED_RETRYABLE + + text = "N/A" + data = None # noqa pylint: disable=unused-variable + try: + text = response.text + except Exception as ex: # noqa pylint: disable=broad-except + logger.warning("Error while reading response body %s.", ex) + else: + try: + data = json.loads(text) # noqa pylint: disable=unused-variable + except Exception: # noqa pylint: disable=broad-except + pass + + if response.status_code == 200: + logger.info("Transmission succeeded: %s.", text) + return SpanExportResult.SUCCESS + + if response.status_code in ( + 206, # Partial Content + 429, # Too Many Requests + 500, # Internal Server Error + 503, # Service Unavailable + ): + return SpanExportResult.FAILED_RETRYABLE + + return SpanExportResult.FAILED_NOT_RETRYABLE + + @staticmethod + def ns_to_duration(nanoseconds): + value = (nanoseconds + 500000) // 1000000 # duration in milliseconds + value, microseconds = divmod(value, 1000) + value, seconds = divmod(value, 60) + value, minutes = divmod(value, 60) + days, hours = divmod(value, 24) + return "{:d}.{:02d}:{:02d}:{:02d}.{:03d}".format( + days, hours, minutes, seconds, microseconds + ) + + def span_to_envelope(self, span): # noqa pylint: disable=too-many-branches + envelope = protocol.Envelope( + 
iKey=self.options.instrumentation_key, + tags=dict(util.azure_monitor_context), + time=ns_to_iso_str(span.start_time), + ) + envelope.tags["ai.operation.id"] = "{:032x}".format( + span.context.trace_id + ) + parent = span.parent + if isinstance(parent, Span): + parent = parent.context + if parent: + envelope.tags[ + "ai.operation.parentId" + ] = "|{:032x}.{:016x}.".format(parent.trace_id, parent.span_id) + if span.kind in (SpanKind.CONSUMER, SpanKind.SERVER): + envelope.name = "Microsoft.ApplicationInsights.Request" + data = protocol.Request( + id="|{:032x}.{:016x}.".format( + span.context.trace_id, span.context.span_id + ), + duration=self.ns_to_duration(span.end_time - span.start_time), + responseCode="0", + success=False, + properties={}, + ) + envelope.data = protocol.Data( + baseData=data, baseType="RequestData" + ) + if "http.method" in span.attributes: + data.name = span.attributes["http.method"] + if "http.route" in span.attributes: + data.name = data.name + " " + span.attributes["http.route"] + envelope.tags["ai.operation.name"] = data.name + if "http.url" in span.attributes: + data.url = span.attributes["http.url"] + if "http.status_code" in span.attributes: + status_code = span.attributes["http.status_code"] + data.responseCode = str(status_code) + data.success = 200 <= status_code < 400 + else: + envelope.name = "Microsoft.ApplicationInsights.RemoteDependency" + data = protocol.RemoteDependency( + name=span.name, + id="|{:032x}.{:016x}.".format( + span.context.trace_id, span.context.span_id + ), + resultCode="0", # TODO + duration=self.ns_to_duration(span.end_time - span.start_time), + success=True, # TODO + properties={}, + ) + envelope.data = protocol.Data( + baseData=data, baseType="RemoteDependencyData" + ) + if span.kind in (SpanKind.CLIENT, SpanKind.PRODUCER): + data.type = "HTTP" # TODO + if "http.url" in span.attributes: + url = span.attributes["http.url"] + # TODO: error handling, probably put scheme as well + data.name = urlparse(url).netloc + if "http.status_code" in span.attributes: + data.resultCode = str(span.attributes["http.status_code"]) + else: # SpanKind.INTERNAL + data.type = "InProc" + for key in span.attributes: + data.properties[key] = span.attributes[key] + if span.links: + links = [] + for link in span.links: + links.append( + { + "operation_Id": "{:032x}".format( + link.context.trace_id + ), + "id": "|{:032x}.{:016x}.".format( + link.context.trace_id, link.context.span_id + ), + } + ) + data.properties["_MS.links"] = json.dumps(links) + print(data.properties["_MS.links"]) + # TODO: tracestate, tags + return envelope diff --git a/ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/util.py b/ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/util.py new file mode 100644 index 00000000000..f97dbd3e33c --- /dev/null +++ b/ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/util.py @@ -0,0 +1,42 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
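For reference, a minimal standalone sketch of the duration formatting performed by AzureMonitorSpanExporter.ns_to_duration above (not part of this patch). The helper rounds a span's nanosecond duration to whole milliseconds and renders the D.HH:MM:SS.mmm string Application Insights expects; note that the local variable named "microseconds" in the patch actually holds milliseconds:

    def ns_to_duration(nanoseconds):
        value = (nanoseconds + 500000) // 1000000  # round to whole milliseconds
        value, milliseconds = divmod(value, 1000)
        value, seconds = divmod(value, 60)
        value, minutes = divmod(value, 60)
        days, hours = divmod(value, 24)
        return "{:d}.{:02d}:{:02d}:{:02d}.{:03d}".format(
            days, hours, minutes, seconds, milliseconds
        )

    assert ns_to_duration(0) == "0.00:00:00.000"
    assert ns_to_duration(90500000000) == "0.00:01:30.500"  # 90.5 seconds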
+ +import locale +import os +import platform +import sys + +from opentelemetry.ext.azure_monitor.protocol import BaseObject +from opentelemetry.ext.azure_monitor.version import __version__ as ext_version +from opentelemetry.sdk.version import __version__ as opentelemetry_version + +azure_monitor_context = { + "ai.cloud.role": os.path.basename(sys.argv[0]) or "Python Application", + "ai.cloud.roleInstance": platform.node(), + "ai.device.id": platform.node(), + "ai.device.locale": locale.getdefaultlocale()[0], + "ai.device.osVersion": platform.version(), + "ai.device.type": "Other", + "ai.internal.sdkVersion": "py{}:ot{}:ext{}".format( + platform.python_version(), opentelemetry_version, ext_version + ), +} + + +class Options(BaseObject): + _default = BaseObject( + endpoint="https://dc.services.visualstudio.com/v2/track", + instrumentation_key=os.getenv("APPINSIGHTS_INSTRUMENTATIONKEY", None), + timeout=10.0, # networking timeout in seconds + ) diff --git a/ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/version.py b/ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/version.py new file mode 100644 index 00000000000..a457c2b6651 --- /dev/null +++ b/ext/opentelemetry-ext-azure-monitor/src/opentelemetry/ext/azure_monitor/version.py @@ -0,0 +1,15 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +__version__ = "0.1.dev0" diff --git a/ext/opentelemetry-ext-azure-monitor/tests/__init__.py b/ext/opentelemetry-ext-azure-monitor/tests/__init__.py new file mode 100644 index 00000000000..e69de29bb2d diff --git a/ext/opentelemetry-ext-http-requests/src/opentelemetry/ext/http_requests/__init__.py b/ext/opentelemetry-ext-http-requests/src/opentelemetry/ext/http_requests/__init__.py index d06e78b0520..a117da5daa0 100644 --- a/ext/opentelemetry-ext-http-requests/src/opentelemetry/ext/http_requests/__init__.py +++ b/ext/opentelemetry-ext-http-requests/src/opentelemetry/ext/http_requests/__init__.py @@ -22,6 +22,8 @@ from requests.sessions import Session +from opentelemetry import propagators +from opentelemetry.context import Context from opentelemetry.trace import SpanKind @@ -49,8 +51,8 @@ def enable(tracer): @functools.wraps(wrapped) def instrumented_request(self, method, url, *args, **kwargs): - # TODO: Check if we are in an exporter, cf. OpenCensus - # execution_context.is_exporter() + if Context.suppress_instrumentation: + return wrapped(self, method, url, *args, **kwargs) # See # https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/data-semantic-conventions.md#http-client @@ -71,6 +73,8 @@ def instrumented_request(self, method, url, *args, **kwargs): # TODO: Propagate the trace context via headers once we have a way # to access propagators. 
+ headers = kwargs.setdefault("headers", {}) + propagators.inject(tracer, type(headers).__setitem__, headers) result = wrapped(self, method, url, *args, **kwargs) # *** PROCEED span.set_attribute("http.status_code", result.status_code) diff --git a/ext/opentelemetry-ext-http-requests/tests/test_requests_integration.py b/ext/opentelemetry-ext-http-requests/tests/test_requests_integration.py index d2624f63d1a..fe0cd886078 100644 --- a/ext/opentelemetry-ext-http-requests/tests/test_requests_integration.py +++ b/ext/opentelemetry-ext-http-requests/tests/test_requests_integration.py @@ -1,3 +1,17 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + import sys import unittest from unittest import mock diff --git a/ext/opentelemetry-ext-jaeger/README.rst b/ext/opentelemetry-ext-jaeger/README.rst new file mode 100644 index 00000000000..2c2e94cd9f0 --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/README.rst @@ -0,0 +1,67 @@ +OpenTelemetry Jaeger Exporter +============================= + +Installation +------------ + +:: + + pip install opentelemetry-ext-jaeger + + +Usage +----- + +The **OpenTelemetry Jaeger Exporter** allows to export `OpenTelemetry`_ traces to `Jaeger`_. +This exporter always send traces to the configured agent using Thrift compact protocol over UDP. +An optional collector can be configured, in this case Thrift binary protocol over HTTP is used. +gRPC is still not supported by this implementation. + + +.. _Jaeger: https://www.jaegertracing.io/ +.. _OpenTelemetry: https://github.com/opentelemetry/opentelemetry-python/ + +.. code:: python + + from opentelemetry import trace + from opentelemetry.ext import jaeger + from opentelemetry.sdk.trace import Tracer + from opentelemetry.sdk.trace.export import BatchExportSpanProcessor + + trace.set_preferred_tracer_implementation(lambda T: Tracer()) + tracer = trace.tracer() + + # create a JaegerSpanExporter + jaeger_exporter = jaeger.JaegerSpanExporter( + service_name='my-helloworld-service', + # configure agent + agent_host_name='localhost', + agent_port=6831, + # optional: configure also collector + # collector_host_name='localhost', + # collector_port=14268, + # collector_endpoint='/api/traces?format=jaeger.thrift', + # username=xxxx, # optional + # password=xxxx, # optional + ) + + # Create a BatchExportSpanProcessor and add the exporter to it + span_processor = BatchExportSpanProcessor(jaeger_exporter) + + # add to the tracer + tracer.add_span_processor(span_processor) + + with tracer.start_span('foo'): + print('Hello world!') + + # shutdown the span processor + # TODO: this has to be improved so user doesn't need to call it manually + span_processor.shutdown() + +The `examples <./examples>`_ folder contains more elaborated examples. 
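The span processor buffers spans, so ``shutdown()`` must currently be called before the process exits (see the TODO above). One possible workaround, not provided by this package, is to register the call with ``atexit`` so any remaining spans are flushed automatically:

.. code:: python

    import atexit

    # Flush any buffered spans when the interpreter exits.
    atexit.register(span_processor.shutdown)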
+ +References +---------- + +* `Jaeger `_ +* `OpenTelemetry Project `_ diff --git a/ext/opentelemetry-ext-jaeger/examples/jaeger_exporter_example.py b/ext/opentelemetry-ext-jaeger/examples/jaeger_exporter_example.py new file mode 100644 index 00000000000..b43b158e853 --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/examples/jaeger_exporter_example.py @@ -0,0 +1,51 @@ +import time + +from opentelemetry import trace +from opentelemetry.ext import jaeger +from opentelemetry.sdk.trace import Tracer +from opentelemetry.sdk.trace.export import BatchExportSpanProcessor + +trace.set_preferred_tracer_implementation(lambda T: Tracer()) +tracer = trace.tracer() + +# create a JaegerSpanExporter +jaeger_exporter = jaeger.JaegerSpanExporter( + service_name="my-helloworld-service", + # configure agent + agent_host_name="localhost", + agent_port=6831, + # optional: configure also collector + # collector_host_name="localhost", + # collector_port=14268, + # collector_endpoint="/api/traces?format=jaeger.thrift", + # username=xxxx, # optional + # password=xxxx, # optional +) + +# create a BatchExportSpanProcessor and add the exporter to it +span_processor = BatchExportSpanProcessor(jaeger_exporter) + +# add to the tracer +tracer.add_span_processor(span_processor) + +# create some spans for testing +with tracer.start_span("foo") as foo: + time.sleep(0.1) + foo.set_attribute("my_atribbute", True) + foo.add_event("event in foo", {"name": "foo1"}) + with tracer.start_span("bar") as bar: + time.sleep(0.2) + bar.set_attribute("speed", 100.0) + bar.add_link(foo.get_context()) + + with tracer.start_span("baz") as baz: + time.sleep(0.3) + baz.set_attribute("name", "mauricio") + + time.sleep(0.2) + + time.sleep(0.1) + +# shutdown the span processor +# TODO: this has to be improved so user doesn't need to call it manually +span_processor.shutdown() diff --git a/ext/opentelemetry-ext-jaeger/setup.cfg b/ext/opentelemetry-ext-jaeger/setup.cfg new file mode 100644 index 00000000000..a5f04f1e9b3 --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/setup.cfg @@ -0,0 +1,47 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +[metadata] +name = opentelemetry-ext-jaeger +description = Jaeger Exporter for OpenTelemetry +long_description = file: README.rst +long_description_content_type = text/x-rst +author = OpenTelemetry Authors +author_email = cncf-opentelemetry-contributors@lists.cncf.io +url = https://github.com/open-telemetry/opentelemetry-python/ext/opentelemetry-ext-jaeger +platforms = any +license = Apache-2.0 +classifiers = + Development Status :: 3 - Alpha + Intended Audience :: Developers + License :: OSI Approved :: Apache Software License + Programming Language :: Python + Programming Language :: Python :: 3 + Programming Language :: Python :: 3.4 + Programming Language :: Python :: 3.5 + Programming Language :: Python :: 3.6 + Programming Language :: Python :: 3.7 + +[options] +python_requires = >=3.4 +package_dir= + =src +packages=find_namespace: +install_requires = + thrift >= 0.10.0 + opentelemetry-api + opentelemetry-sdk + +[options.packages.find] +where = src diff --git a/ext/opentelemetry-ext-jaeger/setup.py b/ext/opentelemetry-ext-jaeger/setup.py new file mode 100644 index 00000000000..44f6eb32b1d --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/setup.py @@ -0,0 +1,26 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import os + +import setuptools + +BASE_DIR = os.path.dirname(__file__) +VERSION_FILENAME = os.path.join( + BASE_DIR, "src", "opentelemetry", "ext", "jaeger", "version.py" +) +PACKAGE_INFO = {} +with open(VERSION_FILENAME) as f: + exec(f.read(), PACKAGE_INFO) + +setuptools.setup(version=PACKAGE_INFO["__version__"]) diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/__init__.py b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/__init__.py new file mode 100644 index 00000000000..b824c1a51bb --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/__init__.py @@ -0,0 +1,363 @@ +# Copyright 2018, OpenCensus Authors +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
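The exporter defined in this module splits the 128-bit OpenTelemetry trace ID into two signed 64-bit halves (traceIdHigh/traceIdLow), because the Thrift model only exposes i64 fields. A standalone sketch of that conversion, mirroring the _convert_int_to_i64 helper further down (not part of the patch):

    def convert_int_to_i64(val):
        # Wrap an unsigned 64-bit value into the signed range Thrift expects.
        if val > 0x7FFFFFFFFFFFFFFF:
            val -= 0x10000000000000000
        return val

    trace_id = 0x6E0C63257DE34C926F9EFCD03927272E  # example 128-bit trace ID
    low = convert_int_to_i64(trace_id & 0xFFFFFFFFFFFFFFFF)
    high = convert_int_to_i64((trace_id >> 64) & 0xFFFFFFFFFFFFFFFF)

    assert (high << 64) | (low & 0xFFFFFFFFFFFFFFFF) == trace_id
    assert convert_int_to_i64(0xFFFFFFFFFFFFFFFF) == -1  # top bit set wraps negative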
+ +"""Jaeger Span Exporter for OpenTelemetry.""" + +import base64 +import logging +import socket + +from thrift.protocol import TBinaryProtocol, TCompactProtocol +from thrift.transport import THttpClient, TTransport + +import opentelemetry.trace as trace_api +from opentelemetry.ext.jaeger.gen.agent import Agent as agent +from opentelemetry.ext.jaeger.gen.jaeger import Collector as jaeger +from opentelemetry.sdk.trace.export import Span, SpanExporter, SpanExportResult + +DEFAULT_AGENT_HOST_NAME = "localhost" +DEFAULT_AGENT_PORT = 6831 +DEFAULT_COLLECTOR_ENDPOINT = "/api/traces?format=jaeger.thrift" + +UDP_PACKET_MAX_LENGTH = 65000 + +logger = logging.getLogger(__name__) + + +class JaegerSpanExporter(SpanExporter): + """Jaeger span exporter for OpenTelemetry. + + Args: + service_name: Service that logged an annotation in a trace.Classifier + when query for spans. + agent_host_name: The host name of the Jaeger-Agent. + agent_port: The port of the Jaeger-Agent. + collector_host_name: The host name of the Jaeger-Collector HTTP + Thrift. + collector_port: The port of the Jaeger-Collector HTTP Thrift. + collector_endpoint: The endpoint of the Jaeger-Collector HTTP Thrift. + username: The user name of the Basic Auth if authentication is + required. + password: The password of the Basic Auth if authentication is + required. + """ + + def __init__( + self, + service_name, + agent_host_name=DEFAULT_AGENT_HOST_NAME, + agent_port=DEFAULT_AGENT_PORT, + collector_host_name=None, + collector_port=None, + collector_endpoint=DEFAULT_COLLECTOR_ENDPOINT, + username=None, + password=None, + ): + self.service_name = service_name + self.agent_host_name = agent_host_name + self.agent_port = agent_port + self._agent_client = None + self.collector_host_name = collector_host_name + self.collector_port = collector_port + self.collector_endpoint = collector_endpoint + self.username = username + self.password = password + self._collector = None + + @property + def agent_client(self): + if self._agent_client is None: + self._agent_client = AgentClientUDP( + host_name=self.agent_host_name, port=self.agent_port + ) + return self._agent_client + + @property + def collector(self): + if self._collector is not None: + return self._collector + + if self.collector_host_name is None or self.collector_port is None: + return None + + thrift_url = "http://{}:{}{}".format( + self.collector_host_name, + self.collector_port, + self.collector_endpoint, + ) + + auth = None + if self.username is not None and self.password is not None: + auth = (self.username, self.password) + + self._collector = Collector(thrift_url=thrift_url, auth=auth) + return self._collector + + def export(self, spans): + jaeger_spans = _translate_to_jaeger(spans) + + batch = jaeger.Batch( + spans=jaeger_spans, + process=jaeger.Process(serviceName=self.service_name), + ) + + if self.collector is not None: + self.collector.submit(batch) + self.agent_client.emit(batch) + + return SpanExportResult.SUCCESS + + def shutdown(self): + pass + + +def _translate_to_jaeger(spans: Span): + """Translate the spans to Jaeger format. 
+ + Args: + spans: Tuple of spans to convert + """ + + jaeger_spans = [] + + for span in spans: + ctx = span.get_context() + trace_id = ctx.trace_id + span_id = ctx.span_id + + start_time_us = span.start_time // 1e3 + duration_us = (span.end_time - span.start_time) // 1e3 + + parent_id = 0 + if isinstance(span.parent, trace_api.Span): + parent_id = span.parent.get_context().span_id + elif isinstance(span.parent, trace_api.SpanContext): + parent_id = span.parent.span_id + + tags = _extract_tags(span.attributes) + + # TODO: status is missing: + # https://github.com/open-telemetry/opentelemetry-python/issues/98 + + refs = _extract_refs_from_span(span) + logs = _extract_logs_from_span(span) + + flags = int(ctx.trace_options) + + jaeger_span = jaeger.Span( + traceIdHigh=_get_trace_id_high(trace_id), + traceIdLow=_get_trace_id_low(trace_id), + # generated code expects i64 + spanId=_convert_int_to_i64(span_id), + operationName=span.name, + startTime=start_time_us, + duration=duration_us, + tags=tags, + logs=logs, + references=refs, + flags=flags, + parentSpanId=_convert_int_to_i64(parent_id), + ) + + jaeger_spans.append(jaeger_span) + + return jaeger_spans + + +def _extract_refs_from_span(span): + if not span.links: + return None + + refs = [] + for link in span.links: + trace_id = link.context.trace_id + span_id = link.context.span_id + refs.append( + jaeger.SpanRef( + refType=jaeger.SpanRefType.FOLLOWS_FROM, + traceIdHigh=_get_trace_id_high(trace_id), + traceIdLow=_get_trace_id_low(trace_id), + spanId=_convert_int_to_i64(span_id), + ) + ) + return refs + + +def _convert_int_to_i64(val): + """Convert integer to signed int64 (i64)""" + if val > 0x7FFFFFFFFFFFFFFF: + val -= 0x10000000000000000 + return val + + +def _get_trace_id_low(trace_id): + return _convert_int_to_i64(trace_id & 0xFFFFFFFFFFFFFFFF) + + +def _get_trace_id_high(trace_id): + return _convert_int_to_i64((trace_id >> 64) & 0xFFFFFFFFFFFFFFFF) + + +def _extract_logs_from_span(span): + if not span.events: + return None + + logs = [] + + for event in span.events: + fields = [] + if event.attributes is not None: + fields = _extract_tags(event.attributes) + + fields.append( + jaeger.Tag( + key="message", vType=jaeger.TagType.STRING, vStr=event.name + ) + ) + + event_timestamp_us = event.timestamp // 1e3 + logs.append( + jaeger.Log(timestamp=int(event_timestamp_us), fields=fields) + ) + return logs + + +def _extract_tags(attr): + if not attr: + return None + tags = [] + for attribute_key, attribute_value in attr.items(): + tag = _convert_attribute_to_tag(attribute_key, attribute_value) + if tag is None: + continue + tags.append(tag) + return tags + + +def _convert_attribute_to_tag(key, attr): + """Convert the attributes to jaeger tags.""" + if isinstance(attr, bool): + return jaeger.Tag(key=key, vBool=attr, vType=jaeger.TagType.BOOL) + if isinstance(attr, str): + return jaeger.Tag(key=key, vStr=attr, vType=jaeger.TagType.STRING) + if isinstance(attr, int): + return jaeger.Tag(key=key, vLong=attr, vType=jaeger.TagType.LONG) + if isinstance(attr, float): + return jaeger.Tag(key=key, vDouble=attr, vType=jaeger.TagType.DOUBLE) + logger.warning("Could not serialize attribute %s:%r to tag", key, attr) + return None + + +class AgentClientUDP: + """Implement a UDP client to agent. + + Args: + host_name: The host name of the Jaeger server. + port: The port of the Jaeger server. + max_packet_size: Maximum size of UDP packet. + client: Class for creating new client objects for agencies. 
+ """ + + def __init__( + self, + host_name, + port, + max_packet_size=UDP_PACKET_MAX_LENGTH, + client=agent.Client, + ): + self.address = (host_name, port) + self.max_packet_size = max_packet_size + self.buffer = TTransport.TMemoryBuffer() + self.client = client( + iprot=TCompactProtocol.TCompactProtocol(trans=self.buffer) + ) + + def emit(self, batch: jaeger.Batch): + """ + Args: + batch: Object to emit Jaeger spans. + """ + + # pylint: disable=protected-access + self.client._seqid = 0 + # truncate and reset the position of BytesIO object + self.buffer._buffer.truncate(0) + self.buffer._buffer.seek(0) + self.client.emitBatch(batch) + buff = self.buffer.getvalue() + if len(buff) > self.max_packet_size: + logger.warning( + "Data exceeds the max UDP packet size; size %r, max %r", + len(buff), + self.max_packet_size, + ) + return + + with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp_socket: + udp_socket.sendto(buff, self.address) + + +class Collector: + """Submits collected spans to Thrift HTTP server. + + Args: + thrift_url: URL of the Jaeger HTTP Thrift. + auth: Auth tuple that contains username and password for Basic Auth. + client: Class for creating a Jaeger collector client. + http_transport: Class for creating new client for Thrift HTTP server. + """ + + def __init__( + self, + thrift_url="", + auth=None, + client=jaeger.Client, + http_transport=THttpClient.THttpClient, + ): + self.thrift_url = thrift_url + self.auth = auth + self.http_transport = http_transport(uri_or_host=thrift_url) + self.client = client( + iprot=TBinaryProtocol.TBinaryProtocol(trans=self.http_transport) + ) + + # set basic auth header + if auth is not None: + auth_header = "{}:{}".format(*auth) + decoded = base64.b64encode(auth_header.encode()).decode("ascii") + basic_auth = dict(Authorization="Basic {}".format(decoded)) + self.http_transport.setCustomHeaders(basic_auth) + + def submit(self, batch: jaeger.Batch): + """Submits batches to Thrift HTTP Server through Binary Protocol. + + Args: + batch: Object to emit Jaeger spans. 
+ """ + try: + self.client.submitBatches([batch]) + # it will call http_transport.flush() and + # status code and message will be updated + code = self.http_transport.code + msg = self.http_transport.message + if code >= 300 or code < 200: + logger.error( + "Traces cannot be uploaded; HTTP status code: %s, message %s", + code, + msg, + ) + finally: + if self.http_transport.isOpen(): + self.http_transport.close() diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/__init__.py b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/__init__.py new file mode 100644 index 00000000000..52b3cfb3e9c --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/__init__.py @@ -0,0 +1,4 @@ + +import sys +from os.path import dirname +sys.path.append(dirname(__file__)) diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/Agent-remote b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/Agent-remote new file mode 100755 index 00000000000..5db3d20804a --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/Agent-remote @@ -0,0 +1,124 @@ +#!/usr/bin/env python +# +# Autogenerated by Thrift Compiler (0.10.0) +# +# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING +# +# options string: py +# + +import sys +import pprint +if sys.version_info[0] > 2: + from urllib.parse import urlparse +else: + from urlparse import urlparse +from thrift.transport import TTransport, TSocket, TSSLSocket, THttpClient +from thrift.protocol.TBinaryProtocol import TBinaryProtocol + +from agent import Agent +from agent.ttypes import * + +if len(sys.argv) <= 1 or sys.argv[1] == '--help': + print('') + print('Usage: ' + sys.argv[0] + ' [-h host[:port]] [-u url] [-f[ramed]] [-s[sl]] [-novalidate] [-ca_certs certs] [-keyfile keyfile] [-certfile certfile] function [arg1 [arg2...]]') + print('') + print('Functions:') + print(' void emitZipkinBatch( spans)') + print(' void emitBatch(Batch batch)') + print('') + sys.exit(0) + +pp = pprint.PrettyPrinter(indent=2) +host = 'localhost' +port = 9090 +uri = '' +framed = False +ssl = False +validate = True +ca_certs = None +keyfile = None +certfile = None +http = False +argi = 1 + +if sys.argv[argi] == '-h': + parts = sys.argv[argi + 1].split(':') + host = parts[0] + if len(parts) > 1: + port = int(parts[1]) + argi += 2 + +if sys.argv[argi] == '-u': + url = urlparse(sys.argv[argi + 1]) + parts = url[1].split(':') + host = parts[0] + if len(parts) > 1: + port = int(parts[1]) + else: + port = 80 + uri = url[2] + if url[4]: + uri += '?%s' % url[4] + http = True + argi += 2 + +if sys.argv[argi] == '-f' or sys.argv[argi] == '-framed': + framed = True + argi += 1 + +if sys.argv[argi] == '-s' or sys.argv[argi] == '-ssl': + ssl = True + argi += 1 + +if sys.argv[argi] == '-novalidate': + validate = False + argi += 1 + +if sys.argv[argi] == '-ca_certs': + ca_certs = sys.argv[argi+1] + argi += 2 + +if sys.argv[argi] == '-keyfile': + keyfile = sys.argv[argi+1] + argi += 2 + +if sys.argv[argi] == '-certfile': + certfile = sys.argv[argi+1] + argi += 2 + +cmd = sys.argv[argi] +args = sys.argv[argi + 1:] + +if http: + transport = THttpClient.THttpClient(host, port, uri) +else: + if ssl: + socket = TSSLSocket.TSSLSocket(host, port, validate=validate, ca_certs=ca_certs, keyfile=keyfile, certfile=certfile) + else: + socket = TSocket.TSocket(host, port) + if framed: + transport = TTransport.TFramedTransport(socket) + else: + transport = TTransport.TBufferedTransport(socket) 
+protocol = TBinaryProtocol(transport) +client = Agent.Client(protocol) +transport.open() + +if cmd == 'emitZipkinBatch': + if len(args) != 1: + print('emitZipkinBatch requires 1 args') + sys.exit(1) + pp.pprint(client.emitZipkinBatch(eval(args[0]),)) + +elif cmd == 'emitBatch': + if len(args) != 1: + print('emitBatch requires 1 args') + sys.exit(1) + pp.pprint(client.emitBatch(eval(args[0]),)) + +else: + print('Unrecognized method %s' % cmd) + sys.exit(1) + +transport.close() diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/Agent.py b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/Agent.py new file mode 100644 index 00000000000..e8e0fe662ed --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/Agent.py @@ -0,0 +1,246 @@ +# +# Autogenerated by Thrift Compiler (0.10.0) +# +# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING +# +# options string: py +# + +from thrift.Thrift import TType, TMessageType, TFrozenDict, TException, TApplicationException +from thrift.protocol.TProtocol import TProtocolException +import sys +import logging +from .ttypes import * +from thrift.Thrift import TProcessor +from thrift.transport import TTransport + + +class Iface(object): + def emitZipkinBatch(self, spans): + """ + Parameters: + - spans + """ + pass + + def emitBatch(self, batch): + """ + Parameters: + - batch + """ + pass + + +class Client(Iface): + def __init__(self, iprot, oprot=None): + self._iprot = self._oprot = iprot + if oprot is not None: + self._oprot = oprot + self._seqid = 0 + + def emitZipkinBatch(self, spans): + """ + Parameters: + - spans + """ + self.send_emitZipkinBatch(spans) + + def send_emitZipkinBatch(self, spans): + self._oprot.writeMessageBegin('emitZipkinBatch', TMessageType.ONEWAY, self._seqid) + args = emitZipkinBatch_args() + args.spans = spans + args.write(self._oprot) + self._oprot.writeMessageEnd() + self._oprot.trans.flush() + + def emitBatch(self, batch): + """ + Parameters: + - batch + """ + self.send_emitBatch(batch) + + def send_emitBatch(self, batch): + self._oprot.writeMessageBegin('emitBatch', TMessageType.ONEWAY, self._seqid) + args = emitBatch_args() + args.batch = batch + args.write(self._oprot) + self._oprot.writeMessageEnd() + self._oprot.trans.flush() + + +class Processor(Iface, TProcessor): + def __init__(self, handler): + self._handler = handler + self._processMap = {} + self._processMap["emitZipkinBatch"] = Processor.process_emitZipkinBatch + self._processMap["emitBatch"] = Processor.process_emitBatch + + def process(self, iprot, oprot): + (name, type, seqid) = iprot.readMessageBegin() + if name not in self._processMap: + iprot.skip(TType.STRUCT) + iprot.readMessageEnd() + x = TApplicationException(TApplicationException.UNKNOWN_METHOD, 'Unknown function %s' % (name)) + oprot.writeMessageBegin(name, TMessageType.EXCEPTION, seqid) + x.write(oprot) + oprot.writeMessageEnd() + oprot.trans.flush() + return + else: + self._processMap[name](self, seqid, iprot, oprot) + return True + + def process_emitZipkinBatch(self, seqid, iprot, oprot): + args = emitZipkinBatch_args() + args.read(iprot) + iprot.readMessageEnd() + try: + self._handler.emitZipkinBatch(args.spans) + except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + raise + except: + pass + + def process_emitBatch(self, seqid, iprot, oprot): + args = emitBatch_args() + args.read(iprot) + iprot.readMessageEnd() + try: + self._handler.emitBatch(args.batch) + except 
(TTransport.TTransportException, KeyboardInterrupt, SystemExit): + raise + except: + pass + +# HELPER FUNCTIONS AND STRUCTURES + + +class emitZipkinBatch_args(object): + """ + Attributes: + - spans + """ + + thrift_spec = ( + None, # 0 + (1, TType.LIST, 'spans', (TType.STRUCT, (zipkincore.ttypes.Span, zipkincore.ttypes.Span.thrift_spec), False), None, ), # 1 + ) + + def __init__(self, spans=None,): + self.spans = spans + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 1: + if ftype == TType.LIST: + self.spans = [] + (_etype3, _size0) = iprot.readListBegin() + for _i4 in range(_size0): + _elem5 = zipkincore.ttypes.Span() + _elem5.read(iprot) + self.spans.append(_elem5) + iprot.readListEnd() + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('emitZipkinBatch_args') + if self.spans is not None: + oprot.writeFieldBegin('spans', TType.LIST, 1) + oprot.writeListBegin(TType.STRUCT, len(self.spans)) + for iter6 in self.spans: + iter6.write(oprot) + oprot.writeListEnd() + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) + + +class emitBatch_args(object): + """ + Attributes: + - batch + """ + + thrift_spec = ( + None, # 0 + (1, TType.STRUCT, 'batch', (jaeger.ttypes.Batch, jaeger.ttypes.Batch.thrift_spec), None, ), # 1 + ) + + def __init__(self, batch=None,): + self.batch = batch + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 1: + if ftype == TType.STRUCT: + self.batch = jaeger.ttypes.Batch() + self.batch.read(iprot) + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('emitBatch_args') + if self.batch is not None: + oprot.writeFieldBegin('batch', TType.STRUCT, 1) + self.batch.write(oprot) + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): 
+ return not (self == other) diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/__init__.py b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/__init__.py new file mode 100644 index 00000000000..1059cfbc013 --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/__init__.py @@ -0,0 +1 @@ +__all__ = ['ttypes', 'constants', 'Agent'] diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/constants.py b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/constants.py new file mode 100644 index 00000000000..eb0d35aa127 --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/constants.py @@ -0,0 +1,12 @@ +# +# Autogenerated by Thrift Compiler (0.10.0) +# +# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING +# +# options string: py +# + +from thrift.Thrift import TType, TMessageType, TFrozenDict, TException, TApplicationException +from thrift.protocol.TProtocol import TProtocolException +import sys +from .ttypes import * diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/ttypes.py b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/ttypes.py new file mode 100644 index 00000000000..fc8743cba9f --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/agent/ttypes.py @@ -0,0 +1,15 @@ +# +# Autogenerated by Thrift Compiler (0.10.0) +# +# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING +# +# options string: py +# + +from thrift.Thrift import TType, TMessageType, TFrozenDict, TException, TApplicationException +from thrift.protocol.TProtocol import TProtocolException +import sys +import jaeger.ttypes +import zipkincore.ttypes + +from thrift.transport import TTransport diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/Collector-remote b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/Collector-remote new file mode 100755 index 00000000000..5903f023603 --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/Collector-remote @@ -0,0 +1,117 @@ +#!/usr/bin/env python +# +# Autogenerated by Thrift Compiler (0.10.0) +# +# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING +# +# options string: py +# + +import sys +import pprint +if sys.version_info[0] > 2: + from urllib.parse import urlparse +else: + from urlparse import urlparse +from thrift.transport import TTransport, TSocket, TSSLSocket, THttpClient +from thrift.protocol.TBinaryProtocol import TBinaryProtocol + +from jaeger import Collector +from jaeger.ttypes import * + +if len(sys.argv) <= 1 or sys.argv[1] == '--help': + print('') + print('Usage: ' + sys.argv[0] + ' [-h host[:port]] [-u url] [-f[ramed]] [-s[sl]] [-novalidate] [-ca_certs certs] [-keyfile keyfile] [-certfile certfile] function [arg1 [arg2...]]') + print('') + print('Functions:') + print(' submitBatches( batches)') + print('') + sys.exit(0) + +pp = pprint.PrettyPrinter(indent=2) +host = 'localhost' +port = 9090 +uri = '' +framed = False +ssl = False +validate = True +ca_certs = None +keyfile = None +certfile = None +http = False +argi = 1 + +if sys.argv[argi] == '-h': + parts = sys.argv[argi + 1].split(':') + host = parts[0] + if len(parts) > 1: + port = int(parts[1]) + argi += 2 + +if sys.argv[argi] == '-u': + url = urlparse(sys.argv[argi + 1]) + parts = url[1].split(':') + host = parts[0] + if len(parts) > 1: + port = int(parts[1]) + 
else: + port = 80 + uri = url[2] + if url[4]: + uri += '?%s' % url[4] + http = True + argi += 2 + +if sys.argv[argi] == '-f' or sys.argv[argi] == '-framed': + framed = True + argi += 1 + +if sys.argv[argi] == '-s' or sys.argv[argi] == '-ssl': + ssl = True + argi += 1 + +if sys.argv[argi] == '-novalidate': + validate = False + argi += 1 + +if sys.argv[argi] == '-ca_certs': + ca_certs = sys.argv[argi+1] + argi += 2 + +if sys.argv[argi] == '-keyfile': + keyfile = sys.argv[argi+1] + argi += 2 + +if sys.argv[argi] == '-certfile': + certfile = sys.argv[argi+1] + argi += 2 + +cmd = sys.argv[argi] +args = sys.argv[argi + 1:] + +if http: + transport = THttpClient.THttpClient(host, port, uri) +else: + if ssl: + socket = TSSLSocket.TSSLSocket(host, port, validate=validate, ca_certs=ca_certs, keyfile=keyfile, certfile=certfile) + else: + socket = TSocket.TSocket(host, port) + if framed: + transport = TTransport.TFramedTransport(socket) + else: + transport = TTransport.TBufferedTransport(socket) +protocol = TBinaryProtocol(transport) +client = Collector.Client(protocol) +transport.open() + +if cmd == 'submitBatches': + if len(args) != 1: + print('submitBatches requires 1 args') + sys.exit(1) + pp.pprint(client.submitBatches(eval(args[0]),)) + +else: + print('Unrecognized method %s' % cmd) + sys.exit(1) + +transport.close() diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/Collector.py b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/Collector.py new file mode 100644 index 00000000000..f6f809b0894 --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/Collector.py @@ -0,0 +1,243 @@ +# +# Autogenerated by Thrift Compiler (0.10.0) +# +# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING +# +# options string: py +# + +from thrift.Thrift import TType, TMessageType, TFrozenDict, TException, TApplicationException +from thrift.protocol.TProtocol import TProtocolException +import sys +import logging +from .ttypes import * +from thrift.Thrift import TProcessor +from thrift.transport import TTransport + + +class Iface(object): + def submitBatches(self, batches): + """ + Parameters: + - batches + """ + pass + + +class Client(Iface): + def __init__(self, iprot, oprot=None): + self._iprot = self._oprot = iprot + if oprot is not None: + self._oprot = oprot + self._seqid = 0 + + def submitBatches(self, batches): + """ + Parameters: + - batches + """ + self.send_submitBatches(batches) + return self.recv_submitBatches() + + def send_submitBatches(self, batches): + self._oprot.writeMessageBegin('submitBatches', TMessageType.CALL, self._seqid) + args = submitBatches_args() + args.batches = batches + args.write(self._oprot) + self._oprot.writeMessageEnd() + self._oprot.trans.flush() + + def recv_submitBatches(self): + iprot = self._iprot + (fname, mtype, rseqid) = iprot.readMessageBegin() + if mtype == TMessageType.EXCEPTION: + x = TApplicationException() + x.read(iprot) + iprot.readMessageEnd() + raise x + result = submitBatches_result() + result.read(iprot) + iprot.readMessageEnd() + if result.success is not None: + return result.success + raise TApplicationException(TApplicationException.MISSING_RESULT, "submitBatches failed: unknown result") + + +class Processor(Iface, TProcessor): + def __init__(self, handler): + self._handler = handler + self._processMap = {} + self._processMap["submitBatches"] = Processor.process_submitBatches + + def process(self, iprot, oprot): + (name, type, seqid) = iprot.readMessageBegin() + if 
name not in self._processMap: + iprot.skip(TType.STRUCT) + iprot.readMessageEnd() + x = TApplicationException(TApplicationException.UNKNOWN_METHOD, 'Unknown function %s' % (name)) + oprot.writeMessageBegin(name, TMessageType.EXCEPTION, seqid) + x.write(oprot) + oprot.writeMessageEnd() + oprot.trans.flush() + return + else: + self._processMap[name](self, seqid, iprot, oprot) + return True + + def process_submitBatches(self, seqid, iprot, oprot): + args = submitBatches_args() + args.read(iprot) + iprot.readMessageEnd() + result = submitBatches_result() + try: + result.success = self._handler.submitBatches(args.batches) + msg_type = TMessageType.REPLY + except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + raise + except Exception as ex: + msg_type = TMessageType.EXCEPTION + logging.exception(ex) + result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') + oprot.writeMessageBegin("submitBatches", msg_type, seqid) + result.write(oprot) + oprot.writeMessageEnd() + oprot.trans.flush() + +# HELPER FUNCTIONS AND STRUCTURES + + +class submitBatches_args(object): + """ + Attributes: + - batches + """ + + thrift_spec = ( + None, # 0 + (1, TType.LIST, 'batches', (TType.STRUCT, (Batch, Batch.thrift_spec), False), None, ), # 1 + ) + + def __init__(self, batches=None,): + self.batches = batches + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 1: + if ftype == TType.LIST: + self.batches = [] + (_etype45, _size42) = iprot.readListBegin() + for _i46 in range(_size42): + _elem47 = Batch() + _elem47.read(iprot) + self.batches.append(_elem47) + iprot.readListEnd() + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('submitBatches_args') + if self.batches is not None: + oprot.writeFieldBegin('batches', TType.LIST, 1) + oprot.writeListBegin(TType.STRUCT, len(self.batches)) + for iter48 in self.batches: + iter48.write(oprot) + oprot.writeListEnd() + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) + + +class submitBatches_result(object): + """ + Attributes: + - success + """ + + thrift_spec = ( + (0, TType.LIST, 'success', (TType.STRUCT, (BatchSubmitResponse, BatchSubmitResponse.thrift_spec), False), None, ), # 0 + ) + + def __init__(self, success=None,): + self.success = success + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == 
TType.STOP: + break + if fid == 0: + if ftype == TType.LIST: + self.success = [] + (_etype52, _size49) = iprot.readListBegin() + for _i53 in range(_size49): + _elem54 = BatchSubmitResponse() + _elem54.read(iprot) + self.success.append(_elem54) + iprot.readListEnd() + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('submitBatches_result') + if self.success is not None: + oprot.writeFieldBegin('success', TType.LIST, 0) + oprot.writeListBegin(TType.STRUCT, len(self.success)) + for iter55 in self.success: + iter55.write(oprot) + oprot.writeListEnd() + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/__init__.py b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/__init__.py new file mode 100644 index 00000000000..515d97d6727 --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/__init__.py @@ -0,0 +1 @@ +__all__ = ['ttypes', 'constants', 'Collector'] diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/constants.py b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/constants.py new file mode 100644 index 00000000000..eb0d35aa127 --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/constants.py @@ -0,0 +1,12 @@ +# +# Autogenerated by Thrift Compiler (0.10.0) +# +# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING +# +# options string: py +# + +from thrift.Thrift import TType, TMessageType, TFrozenDict, TException, TApplicationException +from thrift.protocol.TProtocol import TProtocolException +import sys +from .ttypes import * diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/ttypes.py b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/ttypes.py new file mode 100644 index 00000000000..a43252b79dc --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/jaeger/ttypes.py @@ -0,0 +1,831 @@ +# +# Autogenerated by Thrift Compiler (0.10.0) +# +# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING +# +# options string: py +# + +from thrift.Thrift import TType, TMessageType, TFrozenDict, TException, TApplicationException +from thrift.protocol.TProtocol import TProtocolException +import sys + +from thrift.transport import TTransport + + +class TagType(object): + STRING = 0 + DOUBLE = 1 + BOOL = 2 + LONG = 3 + BINARY = 4 + + _VALUES_TO_NAMES = { + 0: "STRING", + 1: "DOUBLE", + 2: "BOOL", + 3: "LONG", + 4: "BINARY", + } + + _NAMES_TO_VALUES = { + "STRING": 0, + "DOUBLE": 1, + "BOOL": 2, + "LONG": 3, + "BINARY": 4, + } + + +class SpanRefType(object): + CHILD_OF = 0 + FOLLOWS_FROM = 1 + + _VALUES_TO_NAMES = { + 0: "CHILD_OF", + 1: "FOLLOWS_FROM", + } + + _NAMES_TO_VALUES = { + "CHILD_OF": 0, + "FOLLOWS_FROM": 1, + } + + +class Tag(object): + """ + 
Attributes: + - key + - vType + - vStr + - vDouble + - vBool + - vLong + - vBinary + """ + + thrift_spec = ( + None, # 0 + (1, TType.STRING, 'key', 'UTF8', None, ), # 1 + (2, TType.I32, 'vType', None, None, ), # 2 + (3, TType.STRING, 'vStr', 'UTF8', None, ), # 3 + (4, TType.DOUBLE, 'vDouble', None, None, ), # 4 + (5, TType.BOOL, 'vBool', None, None, ), # 5 + (6, TType.I64, 'vLong', None, None, ), # 6 + (7, TType.STRING, 'vBinary', 'BINARY', None, ), # 7 + ) + + def __init__(self, key=None, vType=None, vStr=None, vDouble=None, vBool=None, vLong=None, vBinary=None,): + self.key = key + self.vType = vType + self.vStr = vStr + self.vDouble = vDouble + self.vBool = vBool + self.vLong = vLong + self.vBinary = vBinary + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 1: + if ftype == TType.STRING: + self.key = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString() + else: + iprot.skip(ftype) + elif fid == 2: + if ftype == TType.I32: + self.vType = iprot.readI32() + else: + iprot.skip(ftype) + elif fid == 3: + if ftype == TType.STRING: + self.vStr = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString() + else: + iprot.skip(ftype) + elif fid == 4: + if ftype == TType.DOUBLE: + self.vDouble = iprot.readDouble() + else: + iprot.skip(ftype) + elif fid == 5: + if ftype == TType.BOOL: + self.vBool = iprot.readBool() + else: + iprot.skip(ftype) + elif fid == 6: + if ftype == TType.I64: + self.vLong = iprot.readI64() + else: + iprot.skip(ftype) + elif fid == 7: + if ftype == TType.STRING: + self.vBinary = iprot.readBinary() + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('Tag') + if self.key is not None: + oprot.writeFieldBegin('key', TType.STRING, 1) + oprot.writeString(self.key.encode('utf-8') if sys.version_info[0] == 2 else self.key) + oprot.writeFieldEnd() + if self.vType is not None: + oprot.writeFieldBegin('vType', TType.I32, 2) + oprot.writeI32(self.vType) + oprot.writeFieldEnd() + if self.vStr is not None: + oprot.writeFieldBegin('vStr', TType.STRING, 3) + oprot.writeString(self.vStr.encode('utf-8') if sys.version_info[0] == 2 else self.vStr) + oprot.writeFieldEnd() + if self.vDouble is not None: + oprot.writeFieldBegin('vDouble', TType.DOUBLE, 4) + oprot.writeDouble(self.vDouble) + oprot.writeFieldEnd() + if self.vBool is not None: + oprot.writeFieldBegin('vBool', TType.BOOL, 5) + oprot.writeBool(self.vBool) + oprot.writeFieldEnd() + if self.vLong is not None: + oprot.writeFieldBegin('vLong', TType.I64, 6) + oprot.writeI64(self.vLong) + oprot.writeFieldEnd() + if self.vBinary is not None: + oprot.writeFieldBegin('vBinary', TType.STRING, 7) + oprot.writeBinary(self.vBinary) + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + if self.key is None: + raise TProtocolException(message='Required field key is unset!') + if self.vType is None: + raise TProtocolException(message='Required field vType is unset!') + 
return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) + + +class Log(object): + """ + Attributes: + - timestamp + - fields + """ + + thrift_spec = ( + None, # 0 + (1, TType.I64, 'timestamp', None, None, ), # 1 + (2, TType.LIST, 'fields', (TType.STRUCT, (Tag, Tag.thrift_spec), False), None, ), # 2 + ) + + def __init__(self, timestamp=None, fields=None,): + self.timestamp = timestamp + self.fields = fields + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 1: + if ftype == TType.I64: + self.timestamp = iprot.readI64() + else: + iprot.skip(ftype) + elif fid == 2: + if ftype == TType.LIST: + self.fields = [] + (_etype3, _size0) = iprot.readListBegin() + for _i4 in range(_size0): + _elem5 = Tag() + _elem5.read(iprot) + self.fields.append(_elem5) + iprot.readListEnd() + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('Log') + if self.timestamp is not None: + oprot.writeFieldBegin('timestamp', TType.I64, 1) + oprot.writeI64(self.timestamp) + oprot.writeFieldEnd() + if self.fields is not None: + oprot.writeFieldBegin('fields', TType.LIST, 2) + oprot.writeListBegin(TType.STRUCT, len(self.fields)) + for iter6 in self.fields: + iter6.write(oprot) + oprot.writeListEnd() + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + if self.timestamp is None: + raise TProtocolException(message='Required field timestamp is unset!') + if self.fields is None: + raise TProtocolException(message='Required field fields is unset!') + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) + + +class SpanRef(object): + """ + Attributes: + - refType + - traceIdLow + - traceIdHigh + - spanId + """ + + thrift_spec = ( + None, # 0 + (1, TType.I32, 'refType', None, None, ), # 1 + (2, TType.I64, 'traceIdLow', None, None, ), # 2 + (3, TType.I64, 'traceIdHigh', None, None, ), # 3 + (4, TType.I64, 'spanId', None, None, ), # 4 + ) + + def __init__(self, refType=None, traceIdLow=None, traceIdHigh=None, spanId=None,): + self.refType = refType + self.traceIdLow = traceIdLow + self.traceIdHigh = traceIdHigh + self.spanId = spanId + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + 
break + if fid == 1: + if ftype == TType.I32: + self.refType = iprot.readI32() + else: + iprot.skip(ftype) + elif fid == 2: + if ftype == TType.I64: + self.traceIdLow = iprot.readI64() + else: + iprot.skip(ftype) + elif fid == 3: + if ftype == TType.I64: + self.traceIdHigh = iprot.readI64() + else: + iprot.skip(ftype) + elif fid == 4: + if ftype == TType.I64: + self.spanId = iprot.readI64() + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('SpanRef') + if self.refType is not None: + oprot.writeFieldBegin('refType', TType.I32, 1) + oprot.writeI32(self.refType) + oprot.writeFieldEnd() + if self.traceIdLow is not None: + oprot.writeFieldBegin('traceIdLow', TType.I64, 2) + oprot.writeI64(self.traceIdLow) + oprot.writeFieldEnd() + if self.traceIdHigh is not None: + oprot.writeFieldBegin('traceIdHigh', TType.I64, 3) + oprot.writeI64(self.traceIdHigh) + oprot.writeFieldEnd() + if self.spanId is not None: + oprot.writeFieldBegin('spanId', TType.I64, 4) + oprot.writeI64(self.spanId) + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + if self.refType is None: + raise TProtocolException(message='Required field refType is unset!') + if self.traceIdLow is None: + raise TProtocolException(message='Required field traceIdLow is unset!') + if self.traceIdHigh is None: + raise TProtocolException(message='Required field traceIdHigh is unset!') + if self.spanId is None: + raise TProtocolException(message='Required field spanId is unset!') + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) + + +class Span(object): + """ + Attributes: + - traceIdLow + - traceIdHigh + - spanId + - parentSpanId + - operationName + - references + - flags + - startTime + - duration + - tags + - logs + """ + + thrift_spec = ( + None, # 0 + (1, TType.I64, 'traceIdLow', None, None, ), # 1 + (2, TType.I64, 'traceIdHigh', None, None, ), # 2 + (3, TType.I64, 'spanId', None, None, ), # 3 + (4, TType.I64, 'parentSpanId', None, None, ), # 4 + (5, TType.STRING, 'operationName', 'UTF8', None, ), # 5 + (6, TType.LIST, 'references', (TType.STRUCT, (SpanRef, SpanRef.thrift_spec), False), None, ), # 6 + (7, TType.I32, 'flags', None, None, ), # 7 + (8, TType.I64, 'startTime', None, None, ), # 8 + (9, TType.I64, 'duration', None, None, ), # 9 + (10, TType.LIST, 'tags', (TType.STRUCT, (Tag, Tag.thrift_spec), False), None, ), # 10 + (11, TType.LIST, 'logs', (TType.STRUCT, (Log, Log.thrift_spec), False), None, ), # 11 + ) + + def __init__(self, traceIdLow=None, traceIdHigh=None, spanId=None, parentSpanId=None, operationName=None, references=None, flags=None, startTime=None, duration=None, tags=None, logs=None,): + self.traceIdLow = traceIdLow + self.traceIdHigh = traceIdHigh + self.spanId = spanId + self.parentSpanId = parentSpanId + self.operationName = operationName + self.references = references + self.flags = flags + self.startTime = startTime + self.duration = duration + self.tags = tags + self.logs = logs + + def read(self, iprot): + if iprot._fast_decode is 
not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 1: + if ftype == TType.I64: + self.traceIdLow = iprot.readI64() + else: + iprot.skip(ftype) + elif fid == 2: + if ftype == TType.I64: + self.traceIdHigh = iprot.readI64() + else: + iprot.skip(ftype) + elif fid == 3: + if ftype == TType.I64: + self.spanId = iprot.readI64() + else: + iprot.skip(ftype) + elif fid == 4: + if ftype == TType.I64: + self.parentSpanId = iprot.readI64() + else: + iprot.skip(ftype) + elif fid == 5: + if ftype == TType.STRING: + self.operationName = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString() + else: + iprot.skip(ftype) + elif fid == 6: + if ftype == TType.LIST: + self.references = [] + (_etype10, _size7) = iprot.readListBegin() + for _i11 in range(_size7): + _elem12 = SpanRef() + _elem12.read(iprot) + self.references.append(_elem12) + iprot.readListEnd() + else: + iprot.skip(ftype) + elif fid == 7: + if ftype == TType.I32: + self.flags = iprot.readI32() + else: + iprot.skip(ftype) + elif fid == 8: + if ftype == TType.I64: + self.startTime = iprot.readI64() + else: + iprot.skip(ftype) + elif fid == 9: + if ftype == TType.I64: + self.duration = iprot.readI64() + else: + iprot.skip(ftype) + elif fid == 10: + if ftype == TType.LIST: + self.tags = [] + (_etype16, _size13) = iprot.readListBegin() + for _i17 in range(_size13): + _elem18 = Tag() + _elem18.read(iprot) + self.tags.append(_elem18) + iprot.readListEnd() + else: + iprot.skip(ftype) + elif fid == 11: + if ftype == TType.LIST: + self.logs = [] + (_etype22, _size19) = iprot.readListBegin() + for _i23 in range(_size19): + _elem24 = Log() + _elem24.read(iprot) + self.logs.append(_elem24) + iprot.readListEnd() + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('Span') + if self.traceIdLow is not None: + oprot.writeFieldBegin('traceIdLow', TType.I64, 1) + oprot.writeI64(self.traceIdLow) + oprot.writeFieldEnd() + if self.traceIdHigh is not None: + oprot.writeFieldBegin('traceIdHigh', TType.I64, 2) + oprot.writeI64(self.traceIdHigh) + oprot.writeFieldEnd() + if self.spanId is not None: + oprot.writeFieldBegin('spanId', TType.I64, 3) + oprot.writeI64(self.spanId) + oprot.writeFieldEnd() + if self.parentSpanId is not None: + oprot.writeFieldBegin('parentSpanId', TType.I64, 4) + oprot.writeI64(self.parentSpanId) + oprot.writeFieldEnd() + if self.operationName is not None: + oprot.writeFieldBegin('operationName', TType.STRING, 5) + oprot.writeString(self.operationName.encode('utf-8') if sys.version_info[0] == 2 else self.operationName) + oprot.writeFieldEnd() + if self.references is not None: + oprot.writeFieldBegin('references', TType.LIST, 6) + oprot.writeListBegin(TType.STRUCT, len(self.references)) + for iter25 in self.references: + iter25.write(oprot) + oprot.writeListEnd() + oprot.writeFieldEnd() + if self.flags is not None: + oprot.writeFieldBegin('flags', TType.I32, 7) + oprot.writeI32(self.flags) + oprot.writeFieldEnd() + if self.startTime is not None: + oprot.writeFieldBegin('startTime', TType.I64, 8) + 
oprot.writeI64(self.startTime) + oprot.writeFieldEnd() + if self.duration is not None: + oprot.writeFieldBegin('duration', TType.I64, 9) + oprot.writeI64(self.duration) + oprot.writeFieldEnd() + if self.tags is not None: + oprot.writeFieldBegin('tags', TType.LIST, 10) + oprot.writeListBegin(TType.STRUCT, len(self.tags)) + for iter26 in self.tags: + iter26.write(oprot) + oprot.writeListEnd() + oprot.writeFieldEnd() + if self.logs is not None: + oprot.writeFieldBegin('logs', TType.LIST, 11) + oprot.writeListBegin(TType.STRUCT, len(self.logs)) + for iter27 in self.logs: + iter27.write(oprot) + oprot.writeListEnd() + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + if self.traceIdLow is None: + raise TProtocolException(message='Required field traceIdLow is unset!') + if self.traceIdHigh is None: + raise TProtocolException(message='Required field traceIdHigh is unset!') + if self.spanId is None: + raise TProtocolException(message='Required field spanId is unset!') + if self.parentSpanId is None: + raise TProtocolException(message='Required field parentSpanId is unset!') + if self.operationName is None: + raise TProtocolException(message='Required field operationName is unset!') + if self.flags is None: + raise TProtocolException(message='Required field flags is unset!') + if self.startTime is None: + raise TProtocolException(message='Required field startTime is unset!') + if self.duration is None: + raise TProtocolException(message='Required field duration is unset!') + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) + + +class Process(object): + """ + Attributes: + - serviceName + - tags + """ + + thrift_spec = ( + None, # 0 + (1, TType.STRING, 'serviceName', 'UTF8', None, ), # 1 + (2, TType.LIST, 'tags', (TType.STRUCT, (Tag, Tag.thrift_spec), False), None, ), # 2 + ) + + def __init__(self, serviceName=None, tags=None,): + self.serviceName = serviceName + self.tags = tags + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 1: + if ftype == TType.STRING: + self.serviceName = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString() + else: + iprot.skip(ftype) + elif fid == 2: + if ftype == TType.LIST: + self.tags = [] + (_etype31, _size28) = iprot.readListBegin() + for _i32 in range(_size28): + _elem33 = Tag() + _elem33.read(iprot) + self.tags.append(_elem33) + iprot.readListEnd() + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('Process') + if self.serviceName is not None: + oprot.writeFieldBegin('serviceName', TType.STRING, 1) + oprot.writeString(self.serviceName.encode('utf-8') if sys.version_info[0] == 2 else self.serviceName) + oprot.writeFieldEnd() + if self.tags 
is not None: + oprot.writeFieldBegin('tags', TType.LIST, 2) + oprot.writeListBegin(TType.STRUCT, len(self.tags)) + for iter34 in self.tags: + iter34.write(oprot) + oprot.writeListEnd() + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + if self.serviceName is None: + raise TProtocolException(message='Required field serviceName is unset!') + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) + + +class Batch(object): + """ + Attributes: + - process + - spans + """ + + thrift_spec = ( + None, # 0 + (1, TType.STRUCT, 'process', (Process, Process.thrift_spec), None, ), # 1 + (2, TType.LIST, 'spans', (TType.STRUCT, (Span, Span.thrift_spec), False), None, ), # 2 + ) + + def __init__(self, process=None, spans=None,): + self.process = process + self.spans = spans + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 1: + if ftype == TType.STRUCT: + self.process = Process() + self.process.read(iprot) + else: + iprot.skip(ftype) + elif fid == 2: + if ftype == TType.LIST: + self.spans = [] + (_etype38, _size35) = iprot.readListBegin() + for _i39 in range(_size35): + _elem40 = Span() + _elem40.read(iprot) + self.spans.append(_elem40) + iprot.readListEnd() + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('Batch') + if self.process is not None: + oprot.writeFieldBegin('process', TType.STRUCT, 1) + self.process.write(oprot) + oprot.writeFieldEnd() + if self.spans is not None: + oprot.writeFieldBegin('spans', TType.LIST, 2) + oprot.writeListBegin(TType.STRUCT, len(self.spans)) + for iter41 in self.spans: + iter41.write(oprot) + oprot.writeListEnd() + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + if self.process is None: + raise TProtocolException(message='Required field process is unset!') + if self.spans is None: + raise TProtocolException(message='Required field spans is unset!') + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) + + +class BatchSubmitResponse(object): + """ + Attributes: + - ok + """ + + thrift_spec = ( + None, # 0 + (1, TType.BOOL, 'ok', None, None, ), # 1 + ) + + def __init__(self, ok=None,): + self.ok = ok + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + 
while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 1: + if ftype == TType.BOOL: + self.ok = iprot.readBool() + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('BatchSubmitResponse') + if self.ok is not None: + oprot.writeFieldBegin('ok', TType.BOOL, 1) + oprot.writeBool(self.ok) + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + if self.ok is None: + raise TProtocolException(message='Required field ok is unset!') + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/ZipkinCollector-remote b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/ZipkinCollector-remote new file mode 100755 index 00000000000..2b59c3275d1 --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/ZipkinCollector-remote @@ -0,0 +1,117 @@ +#!/usr/bin/env python +# +# Autogenerated by Thrift Compiler (0.10.0) +# +# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING +# +# options string: py +# + +import sys +import pprint +if sys.version_info[0] > 2: + from urllib.parse import urlparse +else: + from urlparse import urlparse +from thrift.transport import TTransport, TSocket, TSSLSocket, THttpClient +from thrift.protocol.TBinaryProtocol import TBinaryProtocol + +from zipkincore import ZipkinCollector +from zipkincore.ttypes import * + +if len(sys.argv) <= 1 or sys.argv[1] == '--help': + print('') + print('Usage: ' + sys.argv[0] + ' [-h host[:port]] [-u url] [-f[ramed]] [-s[sl]] [-novalidate] [-ca_certs certs] [-keyfile keyfile] [-certfile certfile] function [arg1 [arg2...]]') + print('') + print('Functions:') + print(' submitZipkinBatch( spans)') + print('') + sys.exit(0) + +pp = pprint.PrettyPrinter(indent=2) +host = 'localhost' +port = 9090 +uri = '' +framed = False +ssl = False +validate = True +ca_certs = None +keyfile = None +certfile = None +http = False +argi = 1 + +if sys.argv[argi] == '-h': + parts = sys.argv[argi + 1].split(':') + host = parts[0] + if len(parts) > 1: + port = int(parts[1]) + argi += 2 + +if sys.argv[argi] == '-u': + url = urlparse(sys.argv[argi + 1]) + parts = url[1].split(':') + host = parts[0] + if len(parts) > 1: + port = int(parts[1]) + else: + port = 80 + uri = url[2] + if url[4]: + uri += '?%s' % url[4] + http = True + argi += 2 + +if sys.argv[argi] == '-f' or sys.argv[argi] == '-framed': + framed = True + argi += 1 + +if sys.argv[argi] == '-s' or sys.argv[argi] == '-ssl': + ssl = True + argi += 1 + +if sys.argv[argi] == '-novalidate': + validate = False + argi += 1 + +if sys.argv[argi] == '-ca_certs': + ca_certs = sys.argv[argi+1] + argi += 2 + +if sys.argv[argi] == '-keyfile': + keyfile = sys.argv[argi+1] + argi += 2 + +if sys.argv[argi] == '-certfile': + certfile = sys.argv[argi+1] + argi += 2 + +cmd = sys.argv[argi] +args = sys.argv[argi + 1:] + +if http: + transport = 
THttpClient.THttpClient(host, port, uri) +else: + if ssl: + socket = TSSLSocket.TSSLSocket(host, port, validate=validate, ca_certs=ca_certs, keyfile=keyfile, certfile=certfile) + else: + socket = TSocket.TSocket(host, port) + if framed: + transport = TTransport.TFramedTransport(socket) + else: + transport = TTransport.TBufferedTransport(socket) +protocol = TBinaryProtocol(transport) +client = ZipkinCollector.Client(protocol) +transport.open() + +if cmd == 'submitZipkinBatch': + if len(args) != 1: + print('submitZipkinBatch requires 1 args') + sys.exit(1) + pp.pprint(client.submitZipkinBatch(eval(args[0]),)) + +else: + print('Unrecognized method %s' % cmd) + sys.exit(1) + +transport.close() diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/ZipkinCollector.py b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/ZipkinCollector.py new file mode 100644 index 00000000000..6167a8e9f1d --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/ZipkinCollector.py @@ -0,0 +1,243 @@ +# +# Autogenerated by Thrift Compiler (0.10.0) +# +# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING +# +# options string: py +# + +from thrift.Thrift import TType, TMessageType, TFrozenDict, TException, TApplicationException +from thrift.protocol.TProtocol import TProtocolException +import sys +import logging +from .ttypes import * +from thrift.Thrift import TProcessor +from thrift.transport import TTransport + + +class Iface(object): + def submitZipkinBatch(self, spans): + """ + Parameters: + - spans + """ + pass + + +class Client(Iface): + def __init__(self, iprot, oprot=None): + self._iprot = self._oprot = iprot + if oprot is not None: + self._oprot = oprot + self._seqid = 0 + + def submitZipkinBatch(self, spans): + """ + Parameters: + - spans + """ + self.send_submitZipkinBatch(spans) + return self.recv_submitZipkinBatch() + + def send_submitZipkinBatch(self, spans): + self._oprot.writeMessageBegin('submitZipkinBatch', TMessageType.CALL, self._seqid) + args = submitZipkinBatch_args() + args.spans = spans + args.write(self._oprot) + self._oprot.writeMessageEnd() + self._oprot.trans.flush() + + def recv_submitZipkinBatch(self): + iprot = self._iprot + (fname, mtype, rseqid) = iprot.readMessageBegin() + if mtype == TMessageType.EXCEPTION: + x = TApplicationException() + x.read(iprot) + iprot.readMessageEnd() + raise x + result = submitZipkinBatch_result() + result.read(iprot) + iprot.readMessageEnd() + if result.success is not None: + return result.success + raise TApplicationException(TApplicationException.MISSING_RESULT, "submitZipkinBatch failed: unknown result") + + +class Processor(Iface, TProcessor): + def __init__(self, handler): + self._handler = handler + self._processMap = {} + self._processMap["submitZipkinBatch"] = Processor.process_submitZipkinBatch + + def process(self, iprot, oprot): + (name, type, seqid) = iprot.readMessageBegin() + if name not in self._processMap: + iprot.skip(TType.STRUCT) + iprot.readMessageEnd() + x = TApplicationException(TApplicationException.UNKNOWN_METHOD, 'Unknown function %s' % (name)) + oprot.writeMessageBegin(name, TMessageType.EXCEPTION, seqid) + x.write(oprot) + oprot.writeMessageEnd() + oprot.trans.flush() + return + else: + self._processMap[name](self, seqid, iprot, oprot) + return True + + def process_submitZipkinBatch(self, seqid, iprot, oprot): + args = submitZipkinBatch_args() + args.read(iprot) + iprot.readMessageEnd() + result = submitZipkinBatch_result() + 
try: + result.success = self._handler.submitZipkinBatch(args.spans) + msg_type = TMessageType.REPLY + except (TTransport.TTransportException, KeyboardInterrupt, SystemExit): + raise + except Exception as ex: + msg_type = TMessageType.EXCEPTION + logging.exception(ex) + result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error') + oprot.writeMessageBegin("submitZipkinBatch", msg_type, seqid) + result.write(oprot) + oprot.writeMessageEnd() + oprot.trans.flush() + +# HELPER FUNCTIONS AND STRUCTURES + + +class submitZipkinBatch_args(object): + """ + Attributes: + - spans + """ + + thrift_spec = ( + None, # 0 + (1, TType.LIST, 'spans', (TType.STRUCT, (Span, Span.thrift_spec), False), None, ), # 1 + ) + + def __init__(self, spans=None,): + self.spans = spans + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 1: + if ftype == TType.LIST: + self.spans = [] + (_etype17, _size14) = iprot.readListBegin() + for _i18 in range(_size14): + _elem19 = Span() + _elem19.read(iprot) + self.spans.append(_elem19) + iprot.readListEnd() + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('submitZipkinBatch_args') + if self.spans is not None: + oprot.writeFieldBegin('spans', TType.LIST, 1) + oprot.writeListBegin(TType.STRUCT, len(self.spans)) + for iter20 in self.spans: + iter20.write(oprot) + oprot.writeListEnd() + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) + + +class submitZipkinBatch_result(object): + """ + Attributes: + - success + """ + + thrift_spec = ( + (0, TType.LIST, 'success', (TType.STRUCT, (Response, Response.thrift_spec), False), None, ), # 0 + ) + + def __init__(self, success=None,): + self.success = success + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 0: + if ftype == TType.LIST: + self.success = [] + (_etype24, _size21) = iprot.readListBegin() + for _i25 in range(_size21): + _elem26 = Response() + _elem26.read(iprot) + self.success.append(_elem26) + iprot.readListEnd() + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + 
oprot.writeStructBegin('submitZipkinBatch_result') + if self.success is not None: + oprot.writeFieldBegin('success', TType.LIST, 0) + oprot.writeListBegin(TType.STRUCT, len(self.success)) + for iter27 in self.success: + iter27.write(oprot) + oprot.writeListEnd() + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/__init__.py b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/__init__.py new file mode 100644 index 00000000000..90e4f9d9c7f --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/__init__.py @@ -0,0 +1 @@ +__all__ = ['ttypes', 'constants', 'ZipkinCollector'] diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/constants.py b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/constants.py new file mode 100644 index 00000000000..d66961b02bc --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/constants.py @@ -0,0 +1,28 @@ +# +# Autogenerated by Thrift Compiler (0.10.0) +# +# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING +# +# options string: py +# + +from thrift.Thrift import TType, TMessageType, TFrozenDict, TException, TApplicationException +from thrift.protocol.TProtocol import TProtocolException +import sys +from .ttypes import * +CLIENT_SEND = "cs" +CLIENT_RECV = "cr" +SERVER_SEND = "ss" +SERVER_RECV = "sr" +MESSAGE_SEND = "ms" +MESSAGE_RECV = "mr" +WIRE_SEND = "ws" +WIRE_RECV = "wr" +CLIENT_SEND_FRAGMENT = "csf" +CLIENT_RECV_FRAGMENT = "crf" +SERVER_SEND_FRAGMENT = "ssf" +SERVER_RECV_FRAGMENT = "srf" +LOCAL_COMPONENT = "lc" +CLIENT_ADDR = "ca" +SERVER_ADDR = "sa" +MESSAGE_ADDR = "ma" diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/ttypes.py b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/ttypes.py new file mode 100644 index 00000000000..251c5a36944 --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen/zipkincore/ttypes.py @@ -0,0 +1,647 @@ +# +# Autogenerated by Thrift Compiler (0.10.0) +# +# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING +# +# options string: py +# + +from thrift.Thrift import TType, TMessageType, TFrozenDict, TException, TApplicationException +from thrift.protocol.TProtocol import TProtocolException +import sys + +from thrift.transport import TTransport + + +class AnnotationType(object): + BOOL = 0 + BYTES = 1 + I16 = 2 + I32 = 3 + I64 = 4 + DOUBLE = 5 + STRING = 6 + + _VALUES_TO_NAMES = { + 0: "BOOL", + 1: "BYTES", + 2: "I16", + 3: "I32", + 4: "I64", + 5: "DOUBLE", + 6: "STRING", + } + + _NAMES_TO_VALUES = { + "BOOL": 0, + "BYTES": 1, + "I16": 2, + "I32": 3, + "I64": 4, + "DOUBLE": 5, + "STRING": 6, + } + + +class Endpoint(object): + """ + Indicates the network context of a service recording an annotation with two + exceptions. + + When a BinaryAnnotation, and key is CLIENT_ADDR or SERVER_ADDR, + the endpoint indicates the source or destination of an RPC. 
This exception + allows zipkin to display network context of uninstrumented services, or + clients such as web browsers. + + Attributes: + - ipv4: IPv4 host address packed into 4 bytes. + + Ex for the ip 1.2.3.4, it would be (1 << 24) | (2 << 16) | (3 << 8) | 4 + - port: IPv4 port + + Note: this is to be treated as an unsigned integer, so watch for negatives. + + Conventionally, when the port isn't known, port = 0. + - service_name: Service name in lowercase, such as "memcache" or "zipkin-web" + + Conventionally, when the service name isn't known, service_name = "unknown". + - ipv6: IPv6 host address packed into 16 bytes. Ex Inet6Address.getBytes() + """ + + thrift_spec = ( + None, # 0 + (1, TType.I32, 'ipv4', None, None, ), # 1 + (2, TType.I16, 'port', None, None, ), # 2 + (3, TType.STRING, 'service_name', 'UTF8', None, ), # 3 + (4, TType.STRING, 'ipv6', 'BINARY', None, ), # 4 + ) + + def __init__(self, ipv4=None, port=None, service_name=None, ipv6=None,): + self.ipv4 = ipv4 + self.port = port + self.service_name = service_name + self.ipv6 = ipv6 + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 1: + if ftype == TType.I32: + self.ipv4 = iprot.readI32() + else: + iprot.skip(ftype) + elif fid == 2: + if ftype == TType.I16: + self.port = iprot.readI16() + else: + iprot.skip(ftype) + elif fid == 3: + if ftype == TType.STRING: + self.service_name = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString() + else: + iprot.skip(ftype) + elif fid == 4: + if ftype == TType.STRING: + self.ipv6 = iprot.readBinary() + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('Endpoint') + if self.ipv4 is not None: + oprot.writeFieldBegin('ipv4', TType.I32, 1) + oprot.writeI32(self.ipv4) + oprot.writeFieldEnd() + if self.port is not None: + oprot.writeFieldBegin('port', TType.I16, 2) + oprot.writeI16(self.port) + oprot.writeFieldEnd() + if self.service_name is not None: + oprot.writeFieldBegin('service_name', TType.STRING, 3) + oprot.writeString(self.service_name.encode('utf-8') if sys.version_info[0] == 2 else self.service_name) + oprot.writeFieldEnd() + if self.ipv6 is not None: + oprot.writeFieldBegin('ipv6', TType.STRING, 4) + oprot.writeBinary(self.ipv6) + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) + + +class Annotation(object): + """ + An annotation is similar to a log statement. It includes a host field which + allows these events to be attributed properly, and also aggregatable. + + Attributes: + - timestamp: Microseconds from epoch. + + This value should use the most precise value possible. 
For example, + gettimeofday or syncing nanoTime against a tick of currentTimeMillis. + - value + - host: Always the host that recorded the event. By specifying the host you allow + rollup of all events (such as client requests to a service) by IP address. + """ + + thrift_spec = ( + None, # 0 + (1, TType.I64, 'timestamp', None, None, ), # 1 + (2, TType.STRING, 'value', 'UTF8', None, ), # 2 + (3, TType.STRUCT, 'host', (Endpoint, Endpoint.thrift_spec), None, ), # 3 + ) + + def __init__(self, timestamp=None, value=None, host=None,): + self.timestamp = timestamp + self.value = value + self.host = host + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 1: + if ftype == TType.I64: + self.timestamp = iprot.readI64() + else: + iprot.skip(ftype) + elif fid == 2: + if ftype == TType.STRING: + self.value = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString() + else: + iprot.skip(ftype) + elif fid == 3: + if ftype == TType.STRUCT: + self.host = Endpoint() + self.host.read(iprot) + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('Annotation') + if self.timestamp is not None: + oprot.writeFieldBegin('timestamp', TType.I64, 1) + oprot.writeI64(self.timestamp) + oprot.writeFieldEnd() + if self.value is not None: + oprot.writeFieldBegin('value', TType.STRING, 2) + oprot.writeString(self.value.encode('utf-8') if sys.version_info[0] == 2 else self.value) + oprot.writeFieldEnd() + if self.host is not None: + oprot.writeFieldBegin('host', TType.STRUCT, 3) + self.host.write(oprot) + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) + + +class BinaryAnnotation(object): + """ + Binary annotations are tags applied to a Span to give it context. For + example, a binary annotation of "http.uri" could the path to a resource in a + RPC call. + + Binary annotations of type STRING are always queryable, though more a + historical implementation detail than a structural concern. + + Binary annotations can repeat, and vary on the host. Similar to Annotation, + the host indicates who logged the event. This allows you to tell the + difference between the client and server side of the same key. For example, + the key "http.uri" might be different on the client and server side due to + rewriting, like "/api/v1/myresource" vs "/myresource. Via the host field, + you can see the different points of view, which often help in debugging. + + Attributes: + - key + - value + - annotation_type + - host: The host that recorded tag, which allows you to differentiate between + multiple tags with the same key. There are two exceptions to this. 
+ + When the key is CLIENT_ADDR or SERVER_ADDR, host indicates the source or + destination of an RPC. This exception allows zipkin to display network + context of uninstrumented services, or clients such as web browsers. + """ + + thrift_spec = ( + None, # 0 + (1, TType.STRING, 'key', 'UTF8', None, ), # 1 + (2, TType.STRING, 'value', 'BINARY', None, ), # 2 + (3, TType.I32, 'annotation_type', None, None, ), # 3 + (4, TType.STRUCT, 'host', (Endpoint, Endpoint.thrift_spec), None, ), # 4 + ) + + def __init__(self, key=None, value=None, annotation_type=None, host=None,): + self.key = key + self.value = value + self.annotation_type = annotation_type + self.host = host + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 1: + if ftype == TType.STRING: + self.key = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString() + else: + iprot.skip(ftype) + elif fid == 2: + if ftype == TType.STRING: + self.value = iprot.readBinary() + else: + iprot.skip(ftype) + elif fid == 3: + if ftype == TType.I32: + self.annotation_type = iprot.readI32() + else: + iprot.skip(ftype) + elif fid == 4: + if ftype == TType.STRUCT: + self.host = Endpoint() + self.host.read(iprot) + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('BinaryAnnotation') + if self.key is not None: + oprot.writeFieldBegin('key', TType.STRING, 1) + oprot.writeString(self.key.encode('utf-8') if sys.version_info[0] == 2 else self.key) + oprot.writeFieldEnd() + if self.value is not None: + oprot.writeFieldBegin('value', TType.STRING, 2) + oprot.writeBinary(self.value) + oprot.writeFieldEnd() + if self.annotation_type is not None: + oprot.writeFieldBegin('annotation_type', TType.I32, 3) + oprot.writeI32(self.annotation_type) + oprot.writeFieldEnd() + if self.host is not None: + oprot.writeFieldBegin('host', TType.STRUCT, 4) + self.host.write(oprot) + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) + + +class Span(object): + """ + A trace is a series of spans (often RPC calls) which form a latency tree. + + The root span is where trace_id = id and parent_id = Nil. The root span is + usually the longest interval in the trace, starting with a SERVER_RECV + annotation and ending with a SERVER_SEND. + + Attributes: + - trace_id + - name: Span name in lowercase, rpc method for example + + Conventionally, when the span name isn't known, name = "unknown". + - id + - parent_id + - annotations + - binary_annotations + - debug + - timestamp: Microseconds from epoch of the creation of this span. 
+ + This value should be set directly by instrumentation, using the most + precise value possible. For example, gettimeofday or syncing nanoTime + against a tick of currentTimeMillis. + + For compatibilty with instrumentation that precede this field, collectors + or span stores can derive this via Annotation.timestamp. + For example, SERVER_RECV.timestamp or CLIENT_SEND.timestamp. + + This field is optional for compatibility with old data: first-party span + stores are expected to support this at time of introduction. + - duration: Measurement of duration in microseconds, used to support queries. + + This value should be set directly, where possible. Doing so encourages + precise measurement decoupled from problems of clocks, such as skew or NTP + updates causing time to move backwards. + + For compatibilty with instrumentation that precede this field, collectors + or span stores can derive this by subtracting Annotation.timestamp. + For example, SERVER_SEND.timestamp - SERVER_RECV.timestamp. + + If this field is persisted as unset, zipkin will continue to work, except + duration query support will be implementation-specific. Similarly, setting + this field non-atomically is implementation-specific. + + This field is i64 vs i32 to support spans longer than 35 minutes. + - trace_id_high: Optional unique 8-byte additional identifier for a trace. If non zero, this + means the trace uses 128 bit traceIds instead of 64 bit. + """ + + thrift_spec = ( + None, # 0 + (1, TType.I64, 'trace_id', None, None, ), # 1 + None, # 2 + (3, TType.STRING, 'name', 'UTF8', None, ), # 3 + (4, TType.I64, 'id', None, None, ), # 4 + (5, TType.I64, 'parent_id', None, None, ), # 5 + (6, TType.LIST, 'annotations', (TType.STRUCT, (Annotation, Annotation.thrift_spec), False), None, ), # 6 + None, # 7 + (8, TType.LIST, 'binary_annotations', (TType.STRUCT, (BinaryAnnotation, BinaryAnnotation.thrift_spec), False), None, ), # 8 + (9, TType.BOOL, 'debug', None, False, ), # 9 + (10, TType.I64, 'timestamp', None, None, ), # 10 + (11, TType.I64, 'duration', None, None, ), # 11 + (12, TType.I64, 'trace_id_high', None, None, ), # 12 + ) + + def __init__(self, trace_id=None, name=None, id=None, parent_id=None, annotations=None, binary_annotations=None, debug=thrift_spec[9][4], timestamp=None, duration=None, trace_id_high=None,): + self.trace_id = trace_id + self.name = name + self.id = id + self.parent_id = parent_id + self.annotations = annotations + self.binary_annotations = binary_annotations + self.debug = debug + self.timestamp = timestamp + self.duration = duration + self.trace_id_high = trace_id_high + + def read(self, iprot): + if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 1: + if ftype == TType.I64: + self.trace_id = iprot.readI64() + else: + iprot.skip(ftype) + elif fid == 3: + if ftype == TType.STRING: + self.name = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString() + else: + iprot.skip(ftype) + elif fid == 4: + if ftype == TType.I64: + self.id = iprot.readI64() + else: + iprot.skip(ftype) + elif fid == 5: + if ftype == TType.I64: + self.parent_id = iprot.readI64() + else: + iprot.skip(ftype) + elif fid == 6: + if ftype == TType.LIST: + self.annotations = [] + (_etype3, _size0) = 
iprot.readListBegin() + for _i4 in range(_size0): + _elem5 = Annotation() + _elem5.read(iprot) + self.annotations.append(_elem5) + iprot.readListEnd() + else: + iprot.skip(ftype) + elif fid == 8: + if ftype == TType.LIST: + self.binary_annotations = [] + (_etype9, _size6) = iprot.readListBegin() + for _i10 in range(_size6): + _elem11 = BinaryAnnotation() + _elem11.read(iprot) + self.binary_annotations.append(_elem11) + iprot.readListEnd() + else: + iprot.skip(ftype) + elif fid == 9: + if ftype == TType.BOOL: + self.debug = iprot.readBool() + else: + iprot.skip(ftype) + elif fid == 10: + if ftype == TType.I64: + self.timestamp = iprot.readI64() + else: + iprot.skip(ftype) + elif fid == 11: + if ftype == TType.I64: + self.duration = iprot.readI64() + else: + iprot.skip(ftype) + elif fid == 12: + if ftype == TType.I64: + self.trace_id_high = iprot.readI64() + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('Span') + if self.trace_id is not None: + oprot.writeFieldBegin('trace_id', TType.I64, 1) + oprot.writeI64(self.trace_id) + oprot.writeFieldEnd() + if self.name is not None: + oprot.writeFieldBegin('name', TType.STRING, 3) + oprot.writeString(self.name.encode('utf-8') if sys.version_info[0] == 2 else self.name) + oprot.writeFieldEnd() + if self.id is not None: + oprot.writeFieldBegin('id', TType.I64, 4) + oprot.writeI64(self.id) + oprot.writeFieldEnd() + if self.parent_id is not None: + oprot.writeFieldBegin('parent_id', TType.I64, 5) + oprot.writeI64(self.parent_id) + oprot.writeFieldEnd() + if self.annotations is not None: + oprot.writeFieldBegin('annotations', TType.LIST, 6) + oprot.writeListBegin(TType.STRUCT, len(self.annotations)) + for iter12 in self.annotations: + iter12.write(oprot) + oprot.writeListEnd() + oprot.writeFieldEnd() + if self.binary_annotations is not None: + oprot.writeFieldBegin('binary_annotations', TType.LIST, 8) + oprot.writeListBegin(TType.STRUCT, len(self.binary_annotations)) + for iter13 in self.binary_annotations: + iter13.write(oprot) + oprot.writeListEnd() + oprot.writeFieldEnd() + if self.debug is not None: + oprot.writeFieldBegin('debug', TType.BOOL, 9) + oprot.writeBool(self.debug) + oprot.writeFieldEnd() + if self.timestamp is not None: + oprot.writeFieldBegin('timestamp', TType.I64, 10) + oprot.writeI64(self.timestamp) + oprot.writeFieldEnd() + if self.duration is not None: + oprot.writeFieldBegin('duration', TType.I64, 11) + oprot.writeI64(self.duration) + oprot.writeFieldEnd() + if self.trace_id_high is not None: + oprot.writeFieldBegin('trace_id_high', TType.I64, 12) + oprot.writeI64(self.trace_id_high) + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) + + +class Response(object): + """ + Attributes: + - ok + """ + + thrift_spec = ( + None, # 0 + (1, TType.BOOL, 'ok', None, None, ), # 1 + ) + + def __init__(self, ok=None,): + self.ok = ok + + def read(self, iprot): + if iprot._fast_decode is not None and 
isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None: + iprot._fast_decode(self, iprot, (self.__class__, self.thrift_spec)) + return + iprot.readStructBegin() + while True: + (fname, ftype, fid) = iprot.readFieldBegin() + if ftype == TType.STOP: + break + if fid == 1: + if ftype == TType.BOOL: + self.ok = iprot.readBool() + else: + iprot.skip(ftype) + else: + iprot.skip(ftype) + iprot.readFieldEnd() + iprot.readStructEnd() + + def write(self, oprot): + if oprot._fast_encode is not None and self.thrift_spec is not None: + oprot.trans.write(oprot._fast_encode(self, (self.__class__, self.thrift_spec))) + return + oprot.writeStructBegin('Response') + if self.ok is not None: + oprot.writeFieldBegin('ok', TType.BOOL, 1) + oprot.writeBool(self.ok) + oprot.writeFieldEnd() + oprot.writeFieldStop() + oprot.writeStructEnd() + + def validate(self): + if self.ok is None: + raise TProtocolException(message='Required field ok is unset!') + return + + def __repr__(self): + L = ['%s=%r' % (key, value) + for key, value in self.__dict__.items()] + return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ + + def __ne__(self, other): + return not (self == other) diff --git a/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/version.py b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/version.py new file mode 100644 index 00000000000..262f2467144 --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/version.py @@ -0,0 +1,16 @@ +# Copyright 2019, OpenCensus Authors +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +__version__ = "0.1.dev0" diff --git a/ext/opentelemetry-ext-jaeger/tests/__init__.py b/ext/opentelemetry-ext-jaeger/tests/__init__.py new file mode 100644 index 00000000000..e69de29bb2d diff --git a/ext/opentelemetry-ext-jaeger/tests/test_jaeger_exporter.py b/ext/opentelemetry-ext-jaeger/tests/test_jaeger_exporter.py new file mode 100644 index 00000000000..3fb14cd3548 --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/tests/test_jaeger_exporter.py @@ -0,0 +1,254 @@ +# Copyright 2018, OpenCensus Authors +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import unittest + +# pylint:disable=no-name-in-module +# pylint:disable=import-error +import opentelemetry.ext.jaeger as jaeger_exporter +from opentelemetry import trace as trace_api +from opentelemetry.ext.jaeger.gen.jaeger import ttypes as jaeger +from opentelemetry.sdk import trace + + +class TestJaegerSpanExporter(unittest.TestCase): + def test_constructor_default(self): + service_name = "my-service-name" + host_name = "localhost" + thrift_port = None + agent_port = 6831 + collector_endpoint = "/api/traces?format=jaeger.thrift" + exporter = jaeger_exporter.JaegerSpanExporter(service_name) + + self.assertEqual(exporter.service_name, service_name) + self.assertEqual(exporter.collector_host_name, None) + self.assertEqual(exporter.agent_host_name, host_name) + self.assertEqual(exporter.agent_port, agent_port) + self.assertEqual(exporter.collector_port, thrift_port) + self.assertEqual(exporter.collector_endpoint, collector_endpoint) + self.assertEqual(exporter.username, None) + self.assertEqual(exporter.password, None) + self.assertTrue(exporter.collector is None) + self.assertTrue(exporter.agent_client is not None) + + def test_constructor_explicit(self): + service = "my-opentelemetry-jaeger" + collector_host_name = "opentelemetry.io" + collector_port = 15875 + collector_endpoint = "/myapi/traces?format=jaeger.thrift" + + agent_port = 14268 + agent_host_name = "opentelemetry.com" + + username = "username" + password = "password" + auth = (username, password) + + exporter = jaeger_exporter.JaegerSpanExporter( + service_name=service, + collector_host_name=collector_host_name, + collector_port=collector_port, + collector_endpoint=collector_endpoint, + agent_host_name=agent_host_name, + agent_port=agent_port, + username=username, + password=password, + ) + self.assertEqual(exporter.service_name, service) + self.assertEqual(exporter.agent_host_name, agent_host_name) + self.assertEqual(exporter.agent_port, agent_port) + self.assertEqual(exporter.collector_host_name, collector_host_name) + self.assertEqual(exporter.collector_port, collector_port) + self.assertTrue(exporter.collector is not None) + self.assertEqual(exporter.collector.auth, auth) + # property should not construct new object + collector = exporter.collector + self.assertEqual(exporter.collector, collector) + # property should construct new object + # pylint: disable=protected-access + exporter._collector = None + exporter.username = None + exporter.password = None + self.assertNotEqual(exporter.collector, collector) + self.assertTrue(exporter.collector.auth is None) + + # pylint: disable=too-many-locals + def test_translate_to_jaeger(self): + # pylint: disable=invalid-name + self.maxDiff = None + + span_names = ("test1", "test2", "test3") + trace_id = 0x6E0C63257DE34C926F9EFCD03927272E + trace_id_high = 0x6E0C63257DE34C92 + trace_id_low = 0x6F9EFCD03927272E + span_id = 0x34BF92DEEFC58C92 + parent_id = 0x1111111111111111 + other_id = 0x2222222222222222 + + base_time = 683647322 * 1e9 # in ns + start_times = (base_time, base_time + 150 * 1e6, base_time + 300 * 1e6) + durations = (50 * 1e6, 100 * 1e6, 200 * 1e6) + end_times = ( + start_times[0] + durations[0], + start_times[1] + durations[1], + start_times[2] + durations[2], + ) + + span_context = trace_api.SpanContext(trace_id, span_id) + parent_context = trace_api.SpanContext(trace_id, parent_id) + other_context = trace_api.SpanContext(trace_id, other_id) + + event_attributes = { + "annotation_bool": True, + "annotation_string": "annotation_test", + "key_float": 0.3, + } + + 
event_timestamp = base_time + 50e6 + event = trace_api.Event( + name="event0", + timestamp=event_timestamp, + attributes=event_attributes, + ) + + link_attributes = {"key_bool": True} + + link = trace_api.Link( + context=other_context, attributes=link_attributes + ) + + otel_spans = [ + trace.Span( + name=span_names[0], + context=span_context, + parent=parent_context, + events=(event,), + links=(link,), + ), + trace.Span( + name=span_names[1], context=parent_context, parent=None + ), + trace.Span(name=span_names[2], context=other_context, parent=None), + ] + + otel_spans[0].start_time = start_times[0] + # added here to preserve order + otel_spans[0].set_attribute("key_bool", False) + otel_spans[0].set_attribute("key_string", "hello_world") + otel_spans[0].set_attribute("key_float", 111.22) + otel_spans[0].end_time = end_times[0] + + otel_spans[1].start_time = start_times[1] + otel_spans[1].end_time = end_times[1] + + otel_spans[2].start_time = start_times[2] + otel_spans[2].end_time = end_times[2] + + # pylint: disable=protected-access + spans = jaeger_exporter._translate_to_jaeger(otel_spans) + + expected_spans = [ + jaeger.Span( + operationName=span_names[0], + traceIdHigh=trace_id_high, + traceIdLow=trace_id_low, + spanId=span_id, + parentSpanId=parent_id, + startTime=start_times[0] / 1e3, + duration=durations[0] / 1e3, + flags=0, + tags=[ + jaeger.Tag( + key="key_bool", vType=jaeger.TagType.BOOL, vBool=False + ), + jaeger.Tag( + key="key_string", + vType=jaeger.TagType.STRING, + vStr="hello_world", + ), + jaeger.Tag( + key="key_float", + vType=jaeger.TagType.DOUBLE, + vDouble=111.22, + ), + ], + references=[ + jaeger.SpanRef( + refType=jaeger.SpanRefType.FOLLOWS_FROM, + traceIdHigh=trace_id_high, + traceIdLow=trace_id_low, + spanId=other_id, + ) + ], + logs=[ + jaeger.Log( + timestamp=event_timestamp / 1e3, + fields=[ + jaeger.Tag( + key="annotation_bool", + vType=jaeger.TagType.BOOL, + vBool=True, + ), + jaeger.Tag( + key="annotation_string", + vType=jaeger.TagType.STRING, + vStr="annotation_test", + ), + jaeger.Tag( + key="key_float", + vType=jaeger.TagType.DOUBLE, + vDouble=0.3, + ), + jaeger.Tag( + key="message", + vType=jaeger.TagType.STRING, + vStr="event0", + ), + ], + ) + ], + ), + jaeger.Span( + operationName=span_names[1], + traceIdHigh=trace_id_high, + traceIdLow=trace_id_low, + spanId=parent_id, + parentSpanId=0, + startTime=int(start_times[1] // 1e3), + duration=int(durations[1] // 1e3), + flags=0, + ), + jaeger.Span( + operationName=span_names[2], + traceIdHigh=trace_id_high, + traceIdLow=trace_id_low, + spanId=other_id, + parentSpanId=0, + startTime=int(start_times[2] // 1e3), + duration=int(durations[2] // 1e3), + flags=0, + ), + ] + + # events are complicated to compare because order of fields + # (attributes) is otel is not important but in jeager it is + self.assertCountEqual( + spans[0].logs[0].fields, expected_spans[0].logs[0].fields + ) + # get rid of fields to be able to compare the whole spans + spans[0].logs[0].fields = None + expected_spans[0].logs[0].fields = None + + self.assertEqual(spans, expected_spans) diff --git a/ext/opentelemetry-ext-jaeger/thrift/agent.thrift b/ext/opentelemetry-ext-jaeger/thrift/agent.thrift new file mode 100644 index 00000000000..5d3c9201b67 --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/thrift/agent.thrift @@ -0,0 +1,27 @@ +# Copyright (c) 2016 Uber Technologies, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +include "jaeger.thrift" +include "zipkincore.thrift" + +namespace cpp jaegertracing.agent.thrift +namespace java io.jaegertracing.agent.thrift +namespace php Jaeger.Thrift.Agent +namespace netcore Jaeger.Thrift.Agent +namespace lua jaeger.thrift.agent + +service Agent { + oneway void emitZipkinBatch(1: list<zipkincore.Span> spans) + oneway void emitBatch(1: jaeger.Batch batch) +} diff --git a/ext/opentelemetry-ext-jaeger/thrift/jaeger.thrift b/ext/opentelemetry-ext-jaeger/thrift/jaeger.thrift new file mode 100644 index 00000000000..ae9fcaa0145 --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/thrift/jaeger.thrift @@ -0,0 +1,85 @@ +# Copyright (c) 2016 Uber Technologies, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +namespace cpp jaegertracing.thrift +namespace java io.jaegertracing.thriftjava +namespace php Jaeger.Thrift +namespace netcore Jaeger.Thrift +namespace lua jaeger.thrift + +# TagType denotes the type of a Tag's value. +enum TagType { STRING, DOUBLE, BOOL, LONG, BINARY } + +# Tag is a basic strongly typed key/value pair. It has been flattened to reduce the use of pointers in golang +struct Tag { + 1: required string key + 2: required TagType vType + 3: optional string vStr + 4: optional double vDouble + 5: optional bool vBool + 6: optional i64 vLong + 7: optional binary vBinary +} + +# Log is a timed event with an arbitrary set of tags. +struct Log { + 1: required i64 timestamp + 2: required list<Tag> fields +} + +enum SpanRefType { CHILD_OF, FOLLOWS_FROM } + +# SpanRef describes causal relationship of the current span to another span (e.g. 'child-of') +struct SpanRef { + 1: required SpanRefType refType + 2: required i64 traceIdLow + 3: required i64 traceIdHigh + 4: required i64 spanId +} + +# Span represents a named unit of work performed by a service. +struct Span { + 1: required i64 traceIdLow # the least significant 64 bits of a traceID + 2: required i64 traceIdHigh # the most significant 64 bits of a traceID; 0 when only 64bit IDs are used + 3: required i64 spanId # unique span id (only unique within a given trace) + 4: required i64 parentSpanId # since nearly all spans will have parent spans, CHILD_OF refs do not have to be explicit + 5: required string operationName + 6: optional list<SpanRef> references # causal references to other spans + 7: required i32 flags # a bit field used to propagate sampling decisions. 1 signifies a SAMPLED span, 2 signifies a DEBUG span. + 8: required i64 startTime + 9: required i64 duration + 10: optional list<Tag> tags + 11: optional list<Log> logs +} + +# Process describes the traced process/service that emits spans. 
+struct Process { + 1: required string serviceName + 2: optional list<Tag> tags +} + +# Batch is a collection of spans reported out of process. +struct Batch { + 1: required Process process + 2: required list<Span> spans +} + +# BatchSubmitResponse is the response on submitting a batch. +struct BatchSubmitResponse { + 1: required bool ok # The Collector's client is expected to only log (or emit a counter) when not ok equals false +} + +service Collector { + list<BatchSubmitResponse> submitBatches(1: list<Batch> batches) +} diff --git a/ext/opentelemetry-ext-jaeger/thrift/zipkincore.thrift b/ext/opentelemetry-ext-jaeger/thrift/zipkincore.thrift new file mode 100644 index 00000000000..d5259e78b95 --- /dev/null +++ b/ext/opentelemetry-ext-jaeger/thrift/zipkincore.thrift @@ -0,0 +1,346 @@ +# Copyright 2012 Twitter Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +namespace cpp twitter.zipkin.thrift +namespace java com.twitter.zipkin.thriftjava +#@namespace scala com.twitter.zipkin.thriftscala +namespace rb Zipkin +namespace php Jaeger.Thrift.Agent.Zipkin +namespace netcore Jaeger.Thrift.Agent.Zipkin +namespace lua jaeger.thrift.agent + + +#************** Annotation.value ************** +/** + * The client sent ("cs") a request to a server. There is only one send per + * span. For example, if there's a transport error, each attempt can be logged + * as a WIRE_SEND annotation. + * + * If chunking is involved, each chunk could be logged as a separate + * CLIENT_SEND_FRAGMENT in the same span. + * + * Annotation.host is not the server. It is the host which logged the send + * event, almost always the client. When logging CLIENT_SEND, instrumentation + * should also log the SERVER_ADDR. + */ +const string CLIENT_SEND = "cs" +/** + * The client received ("cr") a response from a server. There is only one + * receive per span. For example, if duplicate responses were received, each + * can be logged as a WIRE_RECV annotation. + * + * If chunking is involved, each chunk could be logged as a separate + * CLIENT_RECV_FRAGMENT in the same span. + * + * Annotation.host is not the server. It is the host which logged the receive + * event, almost always the client. The actual endpoint of the server is + * recorded separately as SERVER_ADDR when CLIENT_SEND is logged. + */ +const string CLIENT_RECV = "cr" +/** + * The server sent ("ss") a response to a client. There is only one response + * per span. If there's a transport error, each attempt can be logged as a + * WIRE_SEND annotation. + * + * Typically, a trace ends with a server send, so the last timestamp of a trace + * is often the timestamp of the root span's server send. + * + * If chunking is involved, each chunk could be logged as a separate + * SERVER_SEND_FRAGMENT in the same span. + * + * Annotation.host is not the client. It is the host which logged the send + * event, almost always the server. The actual endpoint of the client is + * recorded separately as CLIENT_ADDR when SERVER_RECV is logged. 
+ */ +const string SERVER_SEND = "ss" +/** + * The server received ("sr") a request from a client. There is only one + * request per span. For example, if duplicate responses were received, each + * can be logged as a WIRE_RECV annotation. + * + * Typically, a trace starts with a server receive, so the first timestamp of a + * trace is often the timestamp of the root span's server receive. + * + * If chunking is involved, each chunk could be logged as a separate + * SERVER_RECV_FRAGMENT in the same span. + * + * Annotation.host is not the client. It is the host which logged the receive + * event, almost always the server. When logging SERVER_RECV, instrumentation + * should also log the CLIENT_ADDR. + */ +const string SERVER_RECV = "sr" +/** + * Message send ("ms") is a request to send a message to a destination, usually + * a broker. This may be the only annotation in a messaging span. If WIRE_SEND + * exists in the same span, it follows this moment and clarifies delays sending + * the message, such as batching. + * + * Unlike RPC annotations like CLIENT_SEND, messaging spans never share a span + * ID. For example, "ms" should always be the parent of "mr". + * + * Annotation.host is not the destination, it is the host which logged the send + * event: the producer. When annotating MESSAGE_SEND, instrumentation should + * also tag the MESSAGE_ADDR. + */ +const string MESSAGE_SEND = "ms" +/** + * A consumer received ("mr") a message from a broker. This may be the only + * annotation in a messaging span. If WIRE_RECV exists in the same span, it + * precedes this moment and clarifies any local queuing delay. + * + * Unlike RPC annotations like SERVER_RECV, messaging spans never share a span + * ID. For example, "mr" should always be a child of "ms" unless it is a root + * span. + * + * Annotation.host is not the broker, it is the host which logged the receive + * event: the consumer. When annotating MESSAGE_RECV, instrumentation should + * also tag the MESSAGE_ADDR. + */ +const string MESSAGE_RECV = "mr" +/** + * Optionally logs an attempt to send a message on the wire. Multiple wire send + * events could indicate network retries. A lag between client or server send + * and wire send might indicate queuing or processing delay. + */ +const string WIRE_SEND = "ws" +/** + * Optionally logs an attempt to receive a message from the wire. Multiple wire + * receive events could indicate network retries. A lag between wire receive + * and client or server receive might indicate queuing or processing delay. + */ +const string WIRE_RECV = "wr" +/** + * Optionally logs progress of a (CLIENT_SEND, WIRE_SEND). For example, this + * could be one chunk in a chunked request. + */ +const string CLIENT_SEND_FRAGMENT = "csf" +/** + * Optionally logs progress of a (CLIENT_RECV, WIRE_RECV). For example, this + * could be one chunk in a chunked response. + */ +const string CLIENT_RECV_FRAGMENT = "crf" +/** + * Optionally logs progress of a (SERVER_SEND, WIRE_SEND). For example, this + * could be one chunk in a chunked response. + */ +const string SERVER_SEND_FRAGMENT = "ssf" +/** + * Optionally logs progress of a (SERVER_RECV, WIRE_RECV). For example, this + * could be one chunk in a chunked request. + */ +const string SERVER_RECV_FRAGMENT = "srf" + +#***** BinaryAnnotation.key ****** +/** + * The value of "lc" is the component or namespace of a local span. + * + * BinaryAnnotation.host adds service context needed to support queries. 
+ * + * Local Component("lc") supports three key features: flagging, query by + * service and filtering Span.name by namespace. + * + * While structurally the same, local spans are fundamentally different than + * RPC spans in how they should be interpreted. For example, zipkin v1 tools + * center on RPC latency and service graphs. Root local-spans are neither + * indicative of critical path RPC latency, nor have impact on the shape of a + * service graph. By flagging with "lc", tools can special-case local spans. + * + * Zipkin v1 Spans are unqueryable unless they can be indexed by service name. + * The only path to a service name is by (Binary)?Annotation.host.serviceName. + * By logging "lc", a local span can be queried even if no other annotations + * are logged. + * + * The value of "lc" is the namespace of Span.name. For example, it might be + * "finatra2", for a span named "bootstrap". "lc" allows you to resolves + * conflicts for the same Span.name, for example "finatra/bootstrap" vs + * "finch/bootstrap". Using local component, you'd search for spans named + * "bootstrap" where "lc=finch" + */ +const string LOCAL_COMPONENT = "lc" + +#***** BinaryAnnotation.key where value = [1] and annotation_type = BOOL ****** +/** + * Indicates a client address ("ca") in a span. Most likely, there's only one. + * Multiple addresses are possible when a client changes its ip or port within + * a span. + */ +const string CLIENT_ADDR = "ca" +/** + * Indicates a server address ("sa") in a span. Most likely, there's only one. + * Multiple addresses are possible when a client is redirected, or fails to a + * different server ip or port. + */ +const string SERVER_ADDR = "sa" +/** + * Indicates the remote address of a messaging span, usually the broker. + */ +const string MESSAGE_ADDR = "ma" + +/** + * Indicates the network context of a service recording an annotation with two + * exceptions. + * + * When a BinaryAnnotation, and key is CLIENT_ADDR or SERVER_ADDR, + * the endpoint indicates the source or destination of an RPC. This exception + * allows zipkin to display network context of uninstrumented services, or + * clients such as web browsers. + */ +struct Endpoint { + /** + * IPv4 host address packed into 4 bytes. + * + * Ex for the ip 1.2.3.4, it would be (1 << 24) | (2 << 16) | (3 << 8) | 4 + */ + 1: i32 ipv4 + /** + * IPv4 port + * + * Note: this is to be treated as an unsigned integer, so watch for negatives. + * + * Conventionally, when the port isn't known, port = 0. + */ + 2: i16 port + /** + * Service name in lowercase, such as "memcache" or "zipkin-web" + * + * Conventionally, when the service name isn't known, service_name = "unknown". + */ + 3: string service_name + /** + * IPv6 host address packed into 16 bytes. Ex Inet6Address.getBytes() + */ + 4: optional binary ipv6 +} + +/** + * An annotation is similar to a log statement. It includes a host field which + * allows these events to be attributed properly, and also aggregatable. + */ +struct Annotation { + /** + * Microseconds from epoch. + * + * This value should use the most precise value possible. For example, + * gettimeofday or syncing nanoTime against a tick of currentTimeMillis. + */ + 1: i64 timestamp + 2: string value // what happened at the timestamp? + /** + * Always the host that recorded the event. By specifying the host you allow + * rollup of all events (such as client requests to a service) by IP address. 
+ */ + 3: optional Endpoint host + // don't reuse 4: optional i32 OBSOLETE_duration // how long did the operation take? microseconds +} + +enum AnnotationType { BOOL, BYTES, I16, I32, I64, DOUBLE, STRING } + +/** + * Binary annotations are tags applied to a Span to give it context. For + * example, a binary annotation of "http.uri" could the path to a resource in a + * RPC call. + * + * Binary annotations of type STRING are always queryable, though more a + * historical implementation detail than a structural concern. + * + * Binary annotations can repeat, and vary on the host. Similar to Annotation, + * the host indicates who logged the event. This allows you to tell the + * difference between the client and server side of the same key. For example, + * the key "http.uri" might be different on the client and server side due to + * rewriting, like "/api/v1/myresource" vs "/myresource. Via the host field, + * you can see the different points of view, which often help in debugging. + */ +struct BinaryAnnotation { + 1: string key, + 2: binary value, + 3: AnnotationType annotation_type, + /** + * The host that recorded tag, which allows you to differentiate between + * multiple tags with the same key. There are two exceptions to this. + * + * When the key is CLIENT_ADDR or SERVER_ADDR, host indicates the source or + * destination of an RPC. This exception allows zipkin to display network + * context of uninstrumented services, or clients such as web browsers. + */ + 4: optional Endpoint host +} + +/** + * A trace is a series of spans (often RPC calls) which form a latency tree. + * + * The root span is where trace_id = id and parent_id = Nil. The root span is + * usually the longest interval in the trace, starting with a SERVER_RECV + * annotation and ending with a SERVER_SEND. + */ +struct Span { + 1: i64 trace_id # unique trace id, use for all spans in trace + /** + * Span name in lowercase, rpc method for example + * + * Conventionally, when the span name isn't known, name = "unknown". + */ + 3: string name, + 4: i64 id, # unique span id, only used for this span + 5: optional i64 parent_id, # parent span id + 6: list annotations, # all annotations/events that occured, sorted by timestamp + 8: list binary_annotations # any binary annotations + 9: optional bool debug = 0 # if true, we DEMAND that this span passes all samplers + /** + * Microseconds from epoch of the creation of this span. + * + * This value should be set directly by instrumentation, using the most + * precise value possible. For example, gettimeofday or syncing nanoTime + * against a tick of currentTimeMillis. + * + * For compatibilty with instrumentation that precede this field, collectors + * or span stores can derive this via Annotation.timestamp. + * For example, SERVER_RECV.timestamp or CLIENT_SEND.timestamp. + * + * This field is optional for compatibility with old data: first-party span + * stores are expected to support this at time of introduction. + */ + 10: optional i64 timestamp, + /** + * Measurement of duration in microseconds, used to support queries. + * + * This value should be set directly, where possible. Doing so encourages + * precise measurement decoupled from problems of clocks, such as skew or NTP + * updates causing time to move backwards. + * + * For compatibilty with instrumentation that precede this field, collectors + * or span stores can derive this by subtracting Annotation.timestamp. + * For example, SERVER_SEND.timestamp - SERVER_RECV.timestamp. 
+ * + * If this field is persisted as unset, zipkin will continue to work, except + * duration query support will be implementation-specific. Similarly, setting + * this field non-atomically is implementation-specific. + * + * This field is i64 vs i32 to support spans longer than 35 minutes. + */ + 11: optional i64 duration + /** + * Optional unique 8-byte additional identifier for a trace. If non zero, this + * means the trace uses 128 bit traceIds instead of 64 bit. + */ + 12: optional i64 trace_id_high +} + +# define TChannel service + +struct Response { + 1: required bool ok +} + +service ZipkinCollector { + list submitZipkinBatch(1: list spans) +} diff --git a/ext/opentelemetry-ext-pymongo/README.rst b/ext/opentelemetry-ext-pymongo/README.rst new file mode 100644 index 00000000000..1e8011f4c22 --- /dev/null +++ b/ext/opentelemetry-ext-pymongo/README.rst @@ -0,0 +1,27 @@ +OpenTelemetry pymongo integration +================================= + +The integration with MongoDB supports the `pymongo`_ library and is specified +to ``trace_integration`` using ``'pymongo'``. + +.. _pymongo: https://pypi.org/project/pymongo + +Usage +----- + +.. code:: python + + from pymongo import MongoClient + from opentelemetry.trace import tracer + from opentelemetry.trace.ext.pymongo import trace_integration + + trace_integration(tracer()) + client = MongoClient() + db = client["MongoDB_Database"] + collection = db["MongoDB_Collection"] + collection.find_one() + +References +---------- + +* `OpenTelemetry Project `_ diff --git a/ext/opentelemetry-ext-pymongo/setup.cfg b/ext/opentelemetry-ext-pymongo/setup.cfg new file mode 100644 index 00000000000..821dfd1b0b9 --- /dev/null +++ b/ext/opentelemetry-ext-pymongo/setup.cfg @@ -0,0 +1,46 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +[metadata] +name = opentelemetry-ext-pymongo +description = OpenTelemetry pymongo integration +long_description = file: README.rst +long_description_content_type = text/x-rst +author = OpenTelemetry Authors +author_email = cncf-opentelemetry-contributors@lists.cncf.io +url = https://github.com/open-telemetry/opentelemetry-python/ext/opentelemetry-ext-pymongo +platforms = any +license = Apache-2.0 +classifiers = + Development Status :: 3 - Alpha + Intended Audience :: Developers + License :: OSI Approved :: Apache Software License + Programming Language :: Python + Programming Language :: Python :: 3 + Programming Language :: Python :: 3.4 + Programming Language :: Python :: 3.5 + Programming Language :: Python :: 3.6 + Programming Language :: Python :: 3.7 + +[options] +python_requires = >=3.4 +package_dir= + =src +packages=find_namespace: +install_requires = + opentelemetry-api >= 0.1.dev0 + pymongo >= 3.1.0 + +[options.packages.find] +where = src diff --git a/ext/opentelemetry-ext-pymongo/setup.py b/ext/opentelemetry-ext-pymongo/setup.py new file mode 100644 index 00000000000..ed63ddf42da --- /dev/null +++ b/ext/opentelemetry-ext-pymongo/setup.py @@ -0,0 +1,26 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import os + +import setuptools + +BASE_DIR = os.path.dirname(__file__) +VERSION_FILENAME = os.path.join( + BASE_DIR, "src", "opentelemetry", "ext", "pymongo", "version.py" +) +PACKAGE_INFO = {} +with open(VERSION_FILENAME) as f: + exec(f.read(), PACKAGE_INFO) + +setuptools.setup(version=PACKAGE_INFO["__version__"]) diff --git a/ext/opentelemetry-ext-pymongo/src/opentelemetry/ext/pymongo/__init__.py b/ext/opentelemetry-ext-pymongo/src/opentelemetry/ext/pymongo/__init__.py new file mode 100644 index 00000000000..088f89c7bd2 --- /dev/null +++ b/ext/opentelemetry-ext-pymongo/src/opentelemetry/ext/pymongo/__init__.py @@ -0,0 +1,80 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +The opentelemetry-ext-pymongo package allows tracing commands made by the +pymongo library. +""" + +from pymongo import monitoring +from opentelemetry.trace import SpanKind, Span +from opentelemetry.trace.status import Status, StatusCanonicalCode + +DATA_BASE_TYPE = "mongodb" +COMMAND_ATTRIBUTES = ["filter", "sort", "skip", "limit", "pipeline"] + + +def trace_integration(tracer=None): + """Integrate with pymongo to trace it using event listener. 
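+
+    A minimal usage sketch (assumes a tracer implementation has been
+    configured; the client, database and collection names below are
+    illustrative only)::
+
+        from pymongo import MongoClient
+        from opentelemetry.trace import tracer
+        from opentelemetry.ext.pymongo import trace_integration
+
+        trace_integration(tracer())
+        MongoClient()["db"]["collection"].find_one()
+
+    Event listener reference: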
+ https://api.mongodb.com/python/current/api/pymongo/monitoring.html + """ + monitoring.register(CommandTracer(tracer)) + + +class CommandTracer(monitoring.CommandListener): + def __init__(self, tracer): + self._tracer = tracer + + def started(self, event: monitoring.CommandStartedEvent): + name = ( + DATA_BASE_TYPE + + "." + + event.command_name + + "." + + event.command.get(event.command_name) + ) + with self._tracer.start_span(name, kind=SpanKind.CLIENT) as span: + span.set_attribute("component", DATA_BASE_TYPE) + span.set_attribute("db.type", DATA_BASE_TYPE) + span.set_attribute("db.instance", event.database_name) + span.set_attribute( + "db.statement", + event.command_name + + " " + + event.command.get(event.command_name), + ) + if event.connection_id is not None: + span.set_attribute("peer.address", str(event.connection_id)) + span.set_attribute("peer.hostname", event.connection_id[0]) + span.set_attribute("peer.port", event.connection_id[1]) + + span.set_attribute("operation_id", event.operation_id) + span.set_attribute("request_id", event.request_id) + + for attr in COMMAND_ATTRIBUTES: + _attr = event.command.get(attr) + if _attr is not None: + span.set_attribute(attr, str(_attr)) + + def succeeded(self, event: monitoring.CommandSucceededEvent): + span = self._tracer.get_current_span() + span.set_attribute("duration_micros", event.duration_micros) + span.set_status(Status(StatusCanonicalCode.OK, event.reply)) + span.end() + + def failed(self, event: monitoring.CommandFailedEvent): + span = self._tracer.get_current_span() + span.set_attribute("duration_micros", event.duration_micros) + span.set_status(Status(StatusCanonicalCode.UNKNOWN, event.failure)) + span.end() diff --git a/ext/opentelemetry-ext-pymongo/src/opentelemetry/ext/pymongo/version.py b/ext/opentelemetry-ext-pymongo/src/opentelemetry/ext/pymongo/version.py new file mode 100644 index 00000000000..a457c2b6651 --- /dev/null +++ b/ext/opentelemetry-ext-pymongo/src/opentelemetry/ext/pymongo/version.py @@ -0,0 +1,15 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +__version__ = "0.1.dev0" diff --git a/ext/opentelemetry-ext-pymongo/tests/__init__.py b/ext/opentelemetry-ext-pymongo/tests/__init__.py new file mode 100644 index 00000000000..e69de29bb2d diff --git a/ext/opentelemetry-ext-pymongo/tests/test_pymongo_integration.py b/ext/opentelemetry-ext-pymongo/tests/test_pymongo_integration.py new file mode 100644 index 00000000000..fa5d2716fbc --- /dev/null +++ b/ext/opentelemetry-ext-pymongo/tests/test_pymongo_integration.py @@ -0,0 +1,144 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import unittest +from unittest import mock + +from opentelemetry import trace as trace_api +from opentelemetry.ext.pymongo import trace_integration, CommandTracer +from opentelemetry.util import time_ns + + +class TestPymongoIntegration(unittest.TestCase): + def test_trace_integration(self): + mock_register = mock.Mock() + patch = mock.patch( + "pymongo.monitoring.register", side_effect=mock_register + ) + with patch: + trace_integration() + + self.assertTrue(mock_register.called) + + def test_started(self): + command_attrs = { + "filter": "filter", + "sort": "sort", + "limit": "limit", + "pipeline": "pipeline", + "command_name": "find", + } + mock_tracer = MockTracer() + CommandTracer(mock_tracer).started( + event=MockEvent(command_attrs, ("test.com","1234")) + ) + span = mock_tracer.get_current_span() + self.assertIs(span.kind, trace_api.SpanKind.CLIENT) + self.assertEqual(span.name, "mongodb.command_name.find") + self.assertEqual(span.attributes["component"], "mongodb") + self.assertEqual(span.attributes["db.type"], "mongodb") + self.assertEqual(span.attributes["db.instance"], "database_name") + self.assertEqual(span.attributes["db.statement"], "command_name find") + self.assertEqual( + span.attributes["peer.address"], "('test.com', '1234')" + ) + self.assertEqual(span.attributes["peer.hostname"], "test.com") + self.assertEqual(span.attributes["peer.port"], "1234") + self.assertEqual(span.attributes["operation_id"], "operation_id") + self.assertEqual(span.attributes["request_id"], "request_id") + + self.assertEqual(span.attributes["filter"], "filter") + self.assertEqual(span.attributes["sort"], "sort") + self.assertEqual(span.attributes["limit"], "limit") + self.assertEqual(span.attributes["pipeline"], "pipeline") + + def test_succeeded(self): + mock_tracer = MockTracer() + CommandTracer(mock_tracer).succeeded(event=MockEvent(None)) + span = mock_tracer.get_current_span() + self.assertEqual(span.attributes["duration_micros"], "duration_micros") + self.assertIs( + span.status.canonical_code, trace_api.status.StatusCanonicalCode.OK + ) + self.assertEqual(span.status.description, "reply") + self.assertIsNotNone(span.end_time) + + def test_failed(self): + mock_tracer = MockTracer() + CommandTracer(mock_tracer).failed(event=MockEvent(None)) + span = mock_tracer.get_current_span() + self.assertEqual(span.attributes["duration_micros"], "duration_micros") + self.assertIs( + span.status.canonical_code, + trace_api.status.StatusCanonicalCode.UNKNOWN, + ) + self.assertEqual(span.status.description, "failure") + self.assertIsNotNone(span.end_time) + + +class MockCommand(object): + def __init__(self, command_attrs): + self.command_attrs = command_attrs + + def get(self, key): + return self.command_attrs.get(key) + + +class MockEvent(object): + def __init__(self, command_attrs, connection_id=""): + self.command = MockCommand(command_attrs) + self.connection_id = connection_id + + def __getattr__(self, item): + return item + + +class MockSpan(object): + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_val, exc_tb): + return False + + def __init__(self): + self.status 
= None + self.name = "" + self.kind = trace_api.SpanKind.INTERNAL + self.attributes = None + self.end_time = None + + def set_attribute(self, key, value): + self.attributes[key] = value + + def set_status(self, status): + self.status = status + + def end(self, end_time=None): + self.end_time = end_time if end_time is not None else time_ns() + + +class MockTracer(object): + def __init__(self): + self.span = MockSpan() + self.end_span = mock.Mock() + self.span.attributes = {} + self.span.status = None + + def start_span(self, name, kind): + self.span.name = name + self.span.kind = kind + return self.span + + def get_current_span(self): + return self.span diff --git a/ext/opentelemetry-ext-wsgi/src/opentelemetry/ext/wsgi/__init__.py b/ext/opentelemetry-ext-wsgi/src/opentelemetry/ext/wsgi/__init__.py index a3120c2c0a6..5e619eb7c6a 100644 --- a/ext/opentelemetry-ext-wsgi/src/opentelemetry/ext/wsgi/__init__.py +++ b/ext/opentelemetry-ext-wsgi/src/opentelemetry/ext/wsgi/__init__.py @@ -19,9 +19,10 @@ """ import functools +import typing import wsgiref.util as wsgiref_util -from opentelemetry import trace +from opentelemetry import propagators, trace from opentelemetry.ext.wsgi.version import __version__ # noqa @@ -35,25 +36,44 @@ class OpenTelemetryMiddleware: wsgi: The WSGI application callable. """ - def __init__(self, wsgi, propagators=None): + def __init__(self, wsgi): self.wsgi = wsgi - # TODO: implement context propagation - self.propagators = propagators - @staticmethod def _add_request_attributes(span, environ): span.set_attribute("component", "http") span.set_attribute("http.method", environ["REQUEST_METHOD"]) - host = environ.get("HTTP_HOST") or environ["SERVER_NAME"] + host = environ.get("HTTP_HOST") + if not host: + host = environ["SERVER_NAME"] + port = environ["SERVER_PORT"] + scheme = environ["wsgi.url_scheme"] + if ( + scheme == "http" + and port != "80" + or scheme == "https" + and port != "443" + ): + host += ":" + port + + # NOTE: Nonstandard span.set_attribute("http.host", host) - url = ( - environ.get("REQUEST_URI") - or environ.get("RAW_URI") - or wsgiref_util.request_uri(environ, include_query=False) - ) + url = environ.get("REQUEST_URI") or environ.get("RAW_URI") + + if url: + if url[0] == "/": + # We assume that no scheme-relative URLs will be in url here. + # After all, if a request is made to http://myserver//foo, we may get + # //foo which looks like scheme-relative but isn't. + url = environ["wsgi.url_scheme"] + "://" + host + url + elif not url.startswith(environ["wsgi.url_scheme"] + ":"): + # Something fishy is in RAW_URL. 
Let's fall back to request_uri() + url = wsgiref_util.request_uri(environ) + else: + url = wsgiref_util.request_uri(environ) + span.set_attribute("http.url", url) @staticmethod @@ -87,15 +107,51 @@ def __call__(self, environ, start_response): tracer = trace.tracer() path_info = environ["PATH_INFO"] or "/" + parent_span = propagators.extract(_get_header_from_environ, environ) - with tracer.start_span(path_info, kind=trace.SpanKind.SERVER) as span: - self._add_request_attributes(span, environ) - start_response = self._create_start_response(span, start_response) - - iterable = self.wsgi(environ, start_response) - try: - for yielded in iterable: - yield yielded - finally: - if hasattr(iterable, "close"): - iterable.close() + span = tracer.create_span( + path_info, parent_span, kind=trace.SpanKind.SERVER + ) + span.start() + try: + with tracer.use_span(span): + self._add_request_attributes(span, environ) + start_response = self._create_start_response( + span, start_response + ) + + iterable = self.wsgi(environ, start_response) + return _end_span_after_iterating(iterable, span, tracer) + except: # noqa + span.end() + raise + + +def _get_header_from_environ( + environ: dict, header_name: str +) -> typing.List[str]: + """Retrieve the header value from the wsgi environ dictionary. + + Returns: + A string with the header value if it exists, else None. + """ + environ_key = "HTTP_" + header_name.upper().replace("-", "_") + value = environ.get(environ_key) + if value: + return [value] + return [] + + +# Put this in a subfunction to not delay the call to the wrapped +# WSGI application (instrumentation should change the application +# behavior as little as possible). +def _end_span_after_iterating(iterable, span, tracer): + try: + with tracer.use_span(span): + for yielded in iterable: + yield yielded + finally: + close = getattr(iterable, "close", None) + if close: + close() + span.end() diff --git a/ext/opentelemetry-ext-wsgi/tests/test_wsgi_middleware.py b/ext/opentelemetry-ext-wsgi/tests/test_wsgi_middleware.py index 73bba504523..a88782d6428 100644 --- a/ext/opentelemetry-ext-wsgi/tests/test_wsgi_middleware.py +++ b/ext/opentelemetry-ext-wsgi/tests/test_wsgi_middleware.py @@ -17,6 +17,7 @@ import unittest import unittest.mock as mock import wsgiref.util as wsgiref_util +from urllib.parse import urlparse from opentelemetry import trace as trace_api from opentelemetry.ext.wsgi import OpenTelemetryMiddleware @@ -52,6 +53,15 @@ def iter_wsgi(environ, start_response): return iter_wsgi +def create_gen_wsgi(response): + def gen_wsgi(environ, start_response): + result = create_iter_wsgi(response)(environ, start_response) + yield from result + getattr(result, "close", lambda: None)() + + return gen_wsgi + + def error_wsgi(environ, start_response): assert isinstance(environ, dict) try: @@ -66,18 +76,15 @@ def error_wsgi(environ, start_response): class TestWsgiApplication(unittest.TestCase): def setUp(self): tracer = trace_api.tracer() - self.span_context_manager = mock.MagicMock() - self.span_context_manager.__enter__.return_value = mock.create_autospec( - trace_api.Span, spec_set=True - ) - self.patcher = mock.patch.object( + self.span = mock.create_autospec(trace_api.Span, spec_set=True) + self.create_span_patcher = mock.patch.object( tracer, - "start_span", + "create_span", autospec=True, spec_set=True, - return_value=self.span_context_manager, + return_value=self.span, ) - self.start_span = self.patcher.start() + self.create_span = self.create_span_patcher.start() self.write_buffer = io.BytesIO() 
self.write = self.write_buffer.write @@ -90,11 +97,11 @@ def setUp(self): self.exc_info = None def tearDown(self): - self.patcher.stop() + self.create_span_patcher.stop() def start_response(self, status, response_headers, exc_info=None): # The span should have started already - self.span_context_manager.__enter__.assert_called_with() + self.span.start.assert_called_once_with() self.status = status self.response_headers = response_headers @@ -105,12 +112,10 @@ def validate_response(self, response, error=None): while True: try: value = next(response) - self.span_context_manager.__exit__.assert_not_called() + self.assertEqual(0, self.span.end.call_count) self.assertEqual(value, b"*") except StopIteration: - self.span_context_manager.__exit__.assert_called_with( - None, None, None - ) + self.span.end.assert_called_once_with() break self.assertEqual(self.status, "200 OK") @@ -125,9 +130,10 @@ def validate_response(self, response, error=None): self.assertIsNone(self.exc_info) # Verify that start_span has been called - self.start_span.assert_called_once_with( - "/", kind=trace_api.SpanKind.SERVER + self.create_span.assert_called_with( + "/", trace_api.INVALID_SPAN_CONTEXT, kind=trace_api.SpanKind.SERVER ) + self.span.start.assert_called_with() def test_basic_wsgi_call(self): app = OpenTelemetryMiddleware(simple_wsgi) @@ -139,12 +145,24 @@ def test_wsgi_iterable(self): iter_wsgi = create_iter_wsgi(original_response) app = OpenTelemetryMiddleware(iter_wsgi) response = app(self.environ, self.start_response) - # Verify that start_response has not been called yet + # Verify that start_response has been called + self.assertTrue(self.status) + self.validate_response(response) + + # Verify that close has been called exactly once + self.assertEqual(original_response.close_calls, 1) + + def test_wsgi_generator(self): + original_response = Response() + gen_wsgi = create_gen_wsgi(original_response) + app = OpenTelemetryMiddleware(gen_wsgi) + response = app(self.environ, self.start_response) + # Verify that start_response has not been called self.assertIsNone(self.status) self.validate_response(response) # Verify that close has been called exactly once - assert original_response.close_calls == 1 + self.assertEqual(original_response.close_calls, 1) def test_wsgi_exc_info(self): app = OpenTelemetryMiddleware(error_wsgi) @@ -159,18 +177,97 @@ def setUp(self): self.span = mock.create_autospec(trace_api.Span, spec_set=True) def test_request_attributes(self): + self.environ["QUERY_STRING"] = "foo=bar" + OpenTelemetryMiddleware._add_request_attributes( # noqa pylint: disable=protected-access self.span, self.environ ) + expected = ( mock.call("component", "http"), mock.call("http.method", "GET"), mock.call("http.host", "127.0.0.1"), - mock.call("http.url", "http://127.0.0.1/"), + mock.call("http.url", "http://127.0.0.1/?foo=bar"), ) self.assertEqual(self.span.set_attribute.call_count, len(expected)) self.span.set_attribute.assert_has_calls(expected, any_order=True) + def validate_url(self, expected_url): + OpenTelemetryMiddleware._add_request_attributes( # noqa pylint: disable=protected-access + self.span, self.environ + ) + attrs = { + args[0][0]: args[0][1] + for args in self.span.set_attribute.call_args_list + } + self.assertIn("http.url", attrs) + self.assertEqual(attrs["http.url"], expected_url) + self.assertIn("http.host", attrs) + self.assertEqual(attrs["http.host"], urlparse(expected_url).netloc) + + def test_request_attributes_with_partial_raw_uri(self): + self.environ["RAW_URI"] = "/#top" + 
self.validate_url("http://127.0.0.1/#top") + + def test_request_attributes_with_partial_raw_uri_and_nonstandard_port( + self + ): + self.environ["RAW_URI"] = "/?" + del self.environ["HTTP_HOST"] + self.environ["SERVER_PORT"] = "8080" + self.validate_url("http://127.0.0.1:8080/?") + + def test_https_uri_port(self): + del self.environ["HTTP_HOST"] + self.environ["SERVER_PORT"] = "443" + self.environ["wsgi.url_scheme"] = "https" + self.validate_url("https://127.0.0.1/") + + self.environ["SERVER_PORT"] = "8080" + self.validate_url("https://127.0.0.1:8080/") + + self.environ["SERVER_PORT"] = "80" + self.validate_url("https://127.0.0.1:80/") + + def test_http_uri_port(self): + del self.environ["HTTP_HOST"] + self.environ["SERVER_PORT"] = "80" + self.environ["wsgi.url_scheme"] = "http" + self.validate_url("http://127.0.0.1/") + + self.environ["SERVER_PORT"] = "8080" + self.validate_url("http://127.0.0.1:8080/") + + self.environ["SERVER_PORT"] = "443" + self.validate_url("http://127.0.0.1:443/") + + def test_request_attributes_with_nonstandard_port_and_no_host(self): + del self.environ["HTTP_HOST"] + self.environ["SERVER_PORT"] = "8080" + self.validate_url("http://127.0.0.1:8080/") + + self.environ["SERVER_PORT"] = "443" + self.validate_url("http://127.0.0.1:443/") + + def test_request_attributes_with_nonstandard_port(self): + self.environ["HTTP_HOST"] += ":8080" + self.validate_url("http://127.0.0.1:8080/") + + def test_request_attributes_with_faux_scheme_relative_raw_uri(self): + self.environ["RAW_URI"] = "//127.0.0.1/?" + self.validate_url("http://127.0.0.1//127.0.0.1/?") + + def test_request_attributes_with_pathless_raw_uri(self): + self.environ["PATH_INFO"] = "" + self.environ["RAW_URI"] = "http://hello" + self.environ["HTTP_HOST"] = "hello" + self.validate_url("http://hello") + + def test_request_attributes_with_full_request_uri(self): + self.environ["HTTP_HOST"] = "127.0.0.1:8080" + self.environ["REQUEST_URI"] = "http://127.0.0.1:8080/?foo=bar#top" + self.validate_url("http://127.0.0.1:8080/?foo=bar#top") + def test_response_attributes(self): OpenTelemetryMiddleware._add_response_attributes( # noqa pylint: disable=protected-access self.span, "404 Not Found" diff --git a/opentelemetry-api/src/opentelemetry/context/__init__.py b/opentelemetry-api/src/opentelemetry/context/__init__.py index cf6c72dd8da..43a7722f885 100644 --- a/opentelemetry-api/src/opentelemetry/context/__init__.py +++ b/opentelemetry-api/src/opentelemetry/context/__init__.py @@ -138,19 +138,14 @@ async def main(): asyncio.run(main()) """ -import typing - from .base_context import BaseRuntimeContext __all__ = ["Context"] - -Context = None # type: typing.Optional[BaseRuntimeContext] - try: from .async_context import AsyncRuntimeContext - Context = AsyncRuntimeContext() + Context = AsyncRuntimeContext() # type: BaseRuntimeContext except ImportError: from .thread_local_context import ThreadLocalRuntimeContext diff --git a/opentelemetry-api/src/opentelemetry/context/propagation/__init__.py b/opentelemetry-api/src/opentelemetry/context/propagation/__init__.py index b964c2a968d..c8706281ad7 100644 --- a/opentelemetry-api/src/opentelemetry/context/propagation/__init__.py +++ b/opentelemetry-api/src/opentelemetry/context/propagation/__init__.py @@ -1,3 +1,17 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + from .binaryformat import BinaryFormat from .httptextformat import HTTPTextFormat diff --git a/opentelemetry-api/src/opentelemetry/context/propagation/httptextformat.py b/opentelemetry-api/src/opentelemetry/context/propagation/httptextformat.py index f3823a86d17..9b6098a9a42 100644 --- a/opentelemetry-api/src/opentelemetry/context/propagation/httptextformat.py +++ b/opentelemetry-api/src/opentelemetry/context/propagation/httptextformat.py @@ -17,8 +17,10 @@ from opentelemetry.trace import SpanContext -Setter = typing.Callable[[object, str, str], None] -Getter = typing.Callable[[object, str], typing.List[str]] +_T = typing.TypeVar("_T") + +Setter = typing.Callable[[_T, str, str], None] +Getter = typing.Callable[[_T, str], typing.List[str]] class HTTPTextFormat(abc.ABC): @@ -70,7 +72,7 @@ def example_route(): @abc.abstractmethod def extract( - self, get_from_carrier: Getter, carrier: object + self, get_from_carrier: Getter[_T], carrier: _T ) -> SpanContext: """Create a SpanContext from values in the carrier. @@ -93,7 +95,7 @@ def extract( @abc.abstractmethod def inject( - self, context: SpanContext, set_in_carrier: Setter, carrier: object + self, context: SpanContext, set_in_carrier: Setter[_T], carrier: _T ) -> None: """Inject values from a SpanContext into a carrier. diff --git a/opentelemetry-api/src/opentelemetry/context/propagation/tracecontexthttptextformat.py b/opentelemetry-api/src/opentelemetry/context/propagation/tracecontexthttptextformat.py new file mode 100644 index 00000000000..abe778db953 --- /dev/null +++ b/opentelemetry-api/src/opentelemetry/context/propagation/tracecontexthttptextformat.py @@ -0,0 +1,162 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +import re +import typing + +import opentelemetry.trace as trace +from opentelemetry.context.propagation import httptextformat + +_T = typing.TypeVar("_T") + +# Keys and values are strings of up to 256 printable US-ASCII characters. +# Implementations should conform to the the `W3C Trace Context - Tracestate`_ +# spec, which describes additional restrictions on valid field values. +# +# .. 
_W3C Trace Context - Tracestate: +# https://www.w3.org/TR/trace-context/#tracestate-field + + +_KEY_WITHOUT_VENDOR_FORMAT = r"[a-z][_0-9a-z\-\*\/]{0,255}" +_KEY_WITH_VENDOR_FORMAT = ( + r"[a-z][_0-9a-z\-\*\/]{0,240}@[a-z][_0-9a-z\-\*\/]{0,13}" +) + +_KEY_FORMAT = _KEY_WITHOUT_VENDOR_FORMAT + "|" + _KEY_WITH_VENDOR_FORMAT +_VALUE_FORMAT = ( + r"[\x20-\x2b\x2d-\x3c\x3e-\x7e]{0,255}[\x21-\x2b\x2d-\x3c\x3e-\x7e]" +) + +_DELIMITER_FORMAT = "[ \t]*,[ \t]*" +_MEMBER_FORMAT = "({})(=)({})".format(_KEY_FORMAT, _VALUE_FORMAT) + +_DELIMITER_FORMAT_RE = re.compile(_DELIMITER_FORMAT) +_MEMBER_FORMAT_RE = re.compile(_MEMBER_FORMAT) + + +class TraceContextHTTPTextFormat(httptextformat.HTTPTextFormat): + """Extracts and injects using w3c TraceContext's headers. + """ + + _TRACEPARENT_HEADER_NAME = "traceparent" + _TRACESTATE_HEADER_NAME = "tracestate" + _TRACEPARENT_HEADER_FORMAT = ( + "^[ \t]*([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})" + + "(-.*)?[ \t]*$" + ) + _TRACEPARENT_HEADER_FORMAT_RE = re.compile(_TRACEPARENT_HEADER_FORMAT) + + @classmethod + def extract( + cls, get_from_carrier: httptextformat.Getter[_T], carrier: _T + ) -> trace.SpanContext: + """Extracts a valid SpanContext from the carrier. + """ + header = get_from_carrier(carrier, cls._TRACEPARENT_HEADER_NAME) + + if not header: + return trace.INVALID_SPAN_CONTEXT + + match = re.search(cls._TRACEPARENT_HEADER_FORMAT_RE, header[0]) + if not match: + return trace.INVALID_SPAN_CONTEXT + + version = match.group(1) + trace_id = match.group(2) + span_id = match.group(3) + trace_options = match.group(4) + + if trace_id == "0" * 32 or span_id == "0" * 16: + return trace.INVALID_SPAN_CONTEXT + + if version == "00": + if match.group(5): + return trace.INVALID_SPAN_CONTEXT + if version == "ff": + return trace.INVALID_SPAN_CONTEXT + + tracestate = trace.TraceState() + for tracestate_header in get_from_carrier( + carrier, cls._TRACESTATE_HEADER_NAME + ): + # typing.Dict's update is not recognized by pylint: + # https://github.com/PyCQA/pylint/issues/2420 + tracestate.update( # pylint:disable=E1101 + _parse_tracestate(tracestate_header) + ) + + span_context = trace.SpanContext( + trace_id=int(trace_id, 16), + span_id=int(span_id, 16), + trace_options=trace.TraceOptions(trace_options), + trace_state=tracestate, + ) + + return span_context + + @classmethod + def inject( + cls, + context: trace.SpanContext, + set_in_carrier: httptextformat.Setter[_T], + carrier: _T, + ) -> None: + if context == trace.INVALID_SPAN_CONTEXT: + return + traceparent_string = "00-{:032x}-{:016x}-{:02x}".format( + context.trace_id, context.span_id, context.trace_options + ) + set_in_carrier( + carrier, cls._TRACEPARENT_HEADER_NAME, traceparent_string + ) + if context.trace_state: + tracestate_string = _format_tracestate(context.trace_state) + set_in_carrier( + carrier, cls._TRACESTATE_HEADER_NAME, tracestate_string + ) + + +def _parse_tracestate(string: str) -> trace.TraceState: + """Parse a w3c tracestate header into a TraceState. + + Args: + string: the value of the tracestate header. + + Returns: + A valid TraceState that contains values extracted from + the tracestate header. 
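+
+    Example (the header value is taken from the W3C Trace Context
+    examples and is shown only to illustrate the expected format)::
+
+        _parse_tracestate("rojo=00f067aa0ba902b7,congo=t61rcWkgMzE")
+        # -> a TraceState containing the keys "rojo" and "congo"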
+ """ + tracestate = trace.TraceState() + for member in re.split(_DELIMITER_FORMAT_RE, string): + match = _MEMBER_FORMAT_RE.match(member) + if not match: + raise ValueError("illegal key-value format %r" % (member)) + key, _eq, value = match.groups() + # typing.Dict's update is not recognized by pylint: + # https://github.com/PyCQA/pylint/issues/2420 + tracestate[key] = value # pylint:disable=E1137 + return tracestate + + +def _format_tracestate(tracestate: trace.TraceState) -> str: + """Parse a w3c tracestate header into a TraceState. + + Args: + tracestate: the tracestate header to write + + Returns: + A string that adheres to the w3c tracestate + header format. + """ + return ",".join(key + "=" + value for key, value in tracestate.items()) diff --git a/opentelemetry-api/src/opentelemetry/metrics/__init__.py b/opentelemetry-api/src/opentelemetry/metrics/__init__.py index 68563bd4928..61cc8bdfac7 100644 --- a/opentelemetry-api/src/opentelemetry/metrics/__init__.py +++ b/opentelemetry-api/src/opentelemetry/metrics/__init__.py @@ -26,292 +26,199 @@ """ - from abc import ABC, abstractmethod -from typing import List +from typing import Callable, Optional, Sequence, Tuple, Type, TypeVar -from opentelemetry.metrics.time_series import ( - CounterTimeSeries, - GaugeTimeSeries, - MeasureTimeSeries, -) -from opentelemetry.trace import SpanContext +from opentelemetry.util import loader -LabelKeys = List["LabelKey"] -LabelValues = List[str] +ValueT = TypeVar("ValueT", int, float) -class Meter: - """An interface to allow the recording of metrics. +class DefaultMetricHandle: + """The default MetricHandle. - `Metric` s are used for recording pre-defined aggregation (gauge and - counter), or raw values (measure) in which the aggregation and labels - for the exported metric are deferred. + Used when no MetricHandle implementation is available. """ - def create_float_counter( - self, - name: str, - description: str, - unit: str, - label_keys: LabelKeys, - span_context: SpanContext = None, - ) -> "FloatCounter": - """Creates a counter type metric that contains float values. - - Args: - name: The name of the counter. - description: Human readable description of the metric. - unit: Unit of the metric values. - label_keys: list of keys for the labels with dynamic values. - Order of the list is important as the same order MUST be used - on recording when suppling values for these labels. - span_context: The `SpanContext` that identifies the `Span` - that the metric is associated with. - - Returns: A new `FloatCounter` - """ - def create_int_counter( - self, - name: str, - description: str, - unit: str, - label_keys: LabelKeys, - span_context: SpanContext = None, - ) -> "IntCounter": - """Creates a counter type metric that contains int values. +class CounterHandle: + def add(self, value: ValueT) -> None: + """Increases the value of the handle by ``value``""" - Args: - name: The name of the counter. - description: Human readable description of the metric. - unit: Unit of the metric values. - label_keys: list of keys for the labels with dynamic values. - Order of the list is important as the same order MUST be used - on recording when suppling values for these labels. - span_context: The `SpanContext` that identifies the `Span` - that the metric is associated with. 
- - Returns: - A new `IntCounter` - """ - def create_float_gauge( - self, - name: str, - description: str, - unit: str, - label_keys: LabelKeys, - span_context: SpanContext = None, - ) -> "FloatGauge": - """Creates a gauge type metric that contains float values. +class GaugeHandle: + def set(self, value: ValueT) -> None: + """Sets the current value of the handle to ``value``.""" - Args: - name: The name of the counter. - description: Human readable description of the metric. - unit: Unit of the metric values. - label_keys: list of keys for the labels with dynamic values. - Order of the list is important as the same order MUST be used - on recording when suppling values for these labels. - span_context: The `SpanContext` that identifies the `Span` - that the metric is associated with. - - Returns: - A new `FloatGauge` - """ - def create_int_gauge( - self, - name: str, - description: str, - unit: str, - label_keys: LabelKeys, - span_context: SpanContext = None, - ) -> "IntGauge": - """Creates a gauge type metric that contains int values. - - Args: - name: The name of the counter. - description: Human readable description of the metric. - unit: Unit of the metric values. - label_keys: list of keys for the labels with dynamic values. - Order of the list is important as the same order MUST be used - on recording when suppling values for these labels. - span_context: The `SpanContext` that identifies the `Span` - that the metric is associated with. - - Returns: - A new `IntGauge` - """ - - def create_int_measure( - self, - name: str, - description: str, - unit: str, - label_keys: LabelKeys, - span_context: SpanContext = None, - ) -> "IntMeasure": - """Creates a measure used to record raw int values. - - Args: - name: The name of the measure. - description: Human readable description of this measure. - unit: Unit of the measure values. - label_keys: list of keys for the labels with dynamic values. - Order of the list is important as the same order MUST be used - on recording when suppling values for these labels. - span_context: The `SpanContext` that identifies the `Span` - that the metric is associated with. - - Returns: - A new `IntMeasure` - """ - - def create_float_measure( - self, - name: str, - description: str, - unit: str, - label_keys: LabelKeys, - span_context: SpanContext = None, - ) -> "FloatMeasure": - """Creates a Measure used to record raw float values. - - Args: - name: the name of the measure - description: Human readable description of this measure. - unit: Unit of the measure values. - label_keys: list of keys for the labels with dynamic values. - Order of the list is important as the same order MUST be used - on recording when suppling values for these labels. - span_context: The `SpanContext` that identifies the `Span` - that the metric is associated with. - - Returns: - A new `FloatMeasure` - """ +class MeasureHandle: + def record(self, value: ValueT) -> None: + """Records the given ``value`` to this handle.""" class Metric(ABC): """Base class for various types of metrics. Metric class that inherit from this class are specialized with the type of - time series that the metric holds. Metric is constructed from the meter. + handle that the metric holds. """ @abstractmethod - def get_or_create_time_series(self, label_values: LabelValues) -> "object": - """Gets and returns a timeseries, a container for a cumulative value. + def get_handle(self, label_values: Sequence[str]) -> "object": + """Gets a handle, used for repeated-use of metrics instruments. 
- If the provided label values are not already associated with this - metric, a new timeseries is returned, otherwise it returns the existing - timeseries with the exact label values. The timeseries returned - contains logic and behaviour specific to the type of metric that - overrides this function. + Handles are useful to reduce the cost of repeatedly recording a metric + with a pre-defined set of label values. All metric kinds (counter, + gauge, measure) support declaring a set of required label keys. The + values corresponding to these keys should be specified in every handle. + "Unspecified" label values, in cases where a handle is requested but + a value was not provided are permitted. Args: - label_values: A list of label values that will be associated - with the return timeseries. + label_values: Values to associate with the returned handle. """ - def remove_time_series(self, label_values: LabelValues) -> None: - """Removes the timeseries from the Metric, if present. - The timeseries with matching label values will be removed. +class DefaultMetric(Metric): + """The default Metric used when no Metric implementation is available.""" + + def get_handle(self, label_values: Sequence[str]) -> "DefaultMetricHandle": + return DefaultMetricHandle() - args: - label_values: The list of label values to match against. - """ - def clear(self) -> None: - """Removes all timeseries from the `Metric`.""" +class Counter(Metric): + """A counter type metric that expresses the computation of a sum.""" + def get_handle(self, label_values: Sequence[str]) -> "CounterHandle": + """Gets a `CounterHandle`.""" + return CounterHandle() -class FloatCounter(Metric): - """A counter type metric that holds float values. - Cumulative values can go up or stay the same, but can never go down. - Cumulative values cannot be negative. +class Gauge(Metric): + """A gauge type metric that expresses a pre-calculated value. + + Gauge metrics have a value that is either ``Set`` by explicit + instrumentation or observed through a callback. This kind of metric + should be used when the metric cannot be expressed as a sum or because + the measurement interval is arbitrary. """ - def get_or_create_time_series( - self, label_values: LabelValues - ) -> "CounterTimeSeries": - """Gets a `CounterTimeSeries` with a cumulative float value.""" + def get_handle(self, label_values: Sequence[str]) -> "GaugeHandle": + """Gets a `GaugeHandle`.""" + return GaugeHandle() -class IntCounter(Metric): - """A counter type metric that holds int values. +class Measure(Metric): + """A measure type metric that represent raw stats that are recorded. - Cumulative values can go up or stay the same, but can never go down. - Cumulative values cannot be negative. + Measure metrics represent raw statistics that are recorded. By + default, measure metrics can accept both positive and negatives. + Negative inputs will be discarded when monotonic is True. """ - def get_or_create_time_series( - self, label_values: LabelValues - ) -> "CounterTimeSeries": - """Gets a `CounterTimeSeries` with a cumulative int value.""" + def get_handle(self, label_values: Sequence[str]) -> "MeasureHandle": + """Gets a `MeasureHandle` with a float value.""" + return MeasureHandle() -class FloatGauge(Metric): - """A gauge type metric that holds float values. +MetricT = TypeVar("MetricT", Counter, Gauge, Measure) - Cumulative value can go both up and down. Values can be negative. 
- """ - def get_or_create_time_series( - self, label_values: LabelValues - ) -> "GaugeTimeSeries": - """Gets a `GaugeTimeSeries` with a cumulative float value.""" +# pylint: disable=unused-argument +class Meter: + """An interface to allow the recording of metrics. + `Metric` s are used for recording pre-defined aggregation (gauge and + counter), or raw values (measure) in which the aggregation and labels + for the exported metric are deferred. + """ -class IntGauge(Metric): - """A gauge type metric that holds int values. + def record_batch( + self, + label_values: Sequence[str], + record_tuples: Sequence[Tuple["Metric", ValueT]], + ) -> None: + """Atomically records a batch of `Metric` and value pairs. - Cumulative value can go both up and down. Values can be negative. - """ + Allows the functionality of acting upon multiple metrics with + a single API call. Implementations should find metric and handles that + match the key-value pairs in the label tuples. - def get_or_create_time_series( - self, label_values: LabelValues - ) -> "GaugeTimeSeries": - """Gets a `GaugeTimeSeries` with a cumulative int value.""" + Args: + label_values: The values that will be matched against to record for + the handles under each metric that has those labels. + record_tuples: A sequence of pairs of `Metric` s and the + corresponding value to record for that metric. + """ + def create_metric( + self, + name: str, + description: str, + unit: str, + value_type: Type[ValueT], + metric_type: Type[MetricT], + label_keys: Sequence[str] = None, + enabled: bool = True, + monotonic: bool = False, + ) -> "Metric": + """Creates a ``metric_kind`` metric with type ``value_type``. -class FloatMeasure(Metric): - """A measure type metric that holds float values. + Args: + name: The name of the metric. + description: Human-readable description of the metric. + unit: Unit of the metric values. + value_type: The type of values being recorded by the metric. + metric_type: The type of metric being created. + label_keys: The keys for the labels with dynamic values. + Order of the sequence is important as the same order must be + used on recording when suppling values for these labels. + enabled: Whether to report the metric by default. + monotonic: Whether to only allow non-negative values. + + Returns: A new ``metric_type`` metric with values of ``value_type``. + """ + # pylint: disable=no-self-use + return DefaultMetric() - Measure metrics represent raw statistics that are recorded. - """ - def get_or_create_time_series( - self, label_values: LabelValues - ) -> "MeasureTimeSeries": - """Gets a `MeasureTimeSeries` with a cumulated float value.""" +# Once https://github.com/python/mypy/issues/7092 is resolved, +# the following type definition should be replaced with +# from opentelemetry.util.loader import ImplementationFactory +ImplementationFactory = Callable[[Type[Meter]], Optional[Meter]] +_METER = None +_METER_FACTORY = None -class IntMeasure(Metric): - """A measure type metric that holds int values. - Measure metrics represent raw statistics that are recorded. +def meter() -> Meter: + """Gets the current global :class:`~.Meter` object. + + If there isn't one set yet, a default will be loaded. 
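+
+    A minimal usage sketch (assumes an SDK implementation of the metrics
+    API is installed; the metric name and label value are illustrative)::
+
+        from opentelemetry import metrics
+
+        meter = metrics.meter()
+        counter = meter.create_metric(
+            "requests", "number of requests", "1", int, metrics.Counter
+        )
+        counter.get_handle(("staging",)).add(1)
+        meter.record_batch(("staging",), [(counter, 1)])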
""" + global _METER, _METER_FACTORY # pylint:disable=global-statement + + if _METER is None: + # pylint:disable=protected-access + _METER = loader._load_impl(Meter, _METER_FACTORY) + del _METER_FACTORY - def get_or_create_time_series( - self, label_values: LabelValues - ) -> "MeasureTimeSeries": - """Gets a `MeasureTimeSeries` with a cumulated int value.""" + return _METER -class LabelKey: - """The label keys associated with the metric. +def set_preferred_meter_implementation(factory: ImplementationFactory) -> None: + """Set the factory to be used to create the meter. - :type key: str - :param key: the key for the label + See :mod:`opentelemetry.util.loader` for details. - :type description: str - :param description: description of the label + This function may not be called after a meter is already loaded. + + Args: + factory: Callback that should create a new :class:`Meter` instance. """ + global _METER, _METER_FACTORY # pylint:disable=global-statement + + if _METER: + raise RuntimeError("Meter already loaded.") - def __init__(self, key: str, description: str) -> None: - self.key = key - self.description = description + _METER_FACTORY = factory diff --git a/opentelemetry-api/src/opentelemetry/metrics/examples/pre_aggregated.py b/opentelemetry-api/src/opentelemetry/metrics/examples/pre_aggregated.py deleted file mode 100644 index c9c55f01b81..00000000000 --- a/opentelemetry-api/src/opentelemetry/metrics/examples/pre_aggregated.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright 2019, OpenTelemetry Authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# pylint: skip-file -from opentelemetry.metrics import LabelKey, LabelValue, Meter - -METER = Meter() -LABEL_KEYS = [ - LabelKey("environment", "the environment the application is running in") -] -COUNTER = METER.create_int_counter( - "sum numbers", # pragma: no cover - "sum numbers over time", - "number", - LABEL_KEYS, -) -LABEL_VALUE_TESTING = [LabelValue("Testing")] -LABEL_VALUE_STAGING = [LabelValue("Staging")] - -# Metrics sent to some exporter -METRIC_TESTING = COUNTER.get_or_create_time_series(LABEL_VALUE_TESTING) -METRIC_STAGING = COUNTER.get_or_create_time_series(LABEL_VALUE_STAGING) - -for i in range(100): - METRIC_STAGING.add(i) diff --git a/opentelemetry-api/src/opentelemetry/metrics/examples/raw.py b/opentelemetry-api/src/opentelemetry/metrics/examples/raw.py deleted file mode 100644 index 3c82e14d536..00000000000 --- a/opentelemetry-api/src/opentelemetry/metrics/examples/raw.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright 2019, OpenTelemetry Authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - -# pylint: skip-file -from opentelemetry.metrics import LabelKey, LabelValue, MeasureBatch, Meter -from opentelemetry.metrics.aggregation import LastValueAggregation - -METER = Meter() -LABEL_KEYS = [ - LabelKey("environment", "the environment the application is running in") -] -MEASURE = METER.create_float_measure( - "idle_cpu_percentage", - "cpu idle over time", - "percentage", - LABEL_KEYS, - LastValueAggregation, -) -LABEL_VALUE_TESTING = [LabelValue("Testing")] -LABEL_VALUE_STAGING = [LabelValue("Staging")] - -# Metrics sent to some exporter -MEASURE_METRIC_TESTING = MEASURE.get_or_create_time_series(LABEL_VALUE_TESTING) -MEASURE_METRIC_STAGING = MEASURE.get_or_create_time_series(LABEL_VALUE_STAGING) - -# record individual measures -STATISTIC = 100 -MEASURE_METRIC_STAGING.record(STATISTIC) - -# record multiple observed values -BATCH = MeasureBatch() -BATCH.record( - [(MEASURE_METRIC_TESTING, STATISTIC), (MEASURE_METRIC_STAGING, STATISTIC)] -) diff --git a/opentelemetry-api/src/opentelemetry/metrics/time_series.py b/opentelemetry-api/src/opentelemetry/metrics/time_series.py deleted file mode 100644 index b14ef973ad4..00000000000 --- a/opentelemetry-api/src/opentelemetry/metrics/time_series.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright 2019, OpenTelemetry Authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import typing - - -class CounterTimeSeries: - def add(self, value: typing.Union[float, int]) -> None: - """Adds the given value to the current value. Cannot be negative.""" - - def set(self, value: typing.Union[float, int]) -> None: - """Sets the current value to the given value. - - The given value must be larger than the current recorded value. - """ - - -class GaugeTimeSeries: - def set(self, value: typing.Union[float, int]) -> None: - """Sets the current value to the given value. Can be negative.""" - - -class MeasureTimeSeries: - def record(self, value: typing.Union[float, int]) -> None: - """Records the given value to this measure. - - Logic depends on type of aggregation used for this measure. - """ diff --git a/opentelemetry-api/src/opentelemetry/propagators/__init__.py b/opentelemetry-api/src/opentelemetry/propagators/__init__.py new file mode 100644 index 00000000000..5b71e8785a9 --- /dev/null +++ b/opentelemetry-api/src/opentelemetry/propagators/__init__.py @@ -0,0 +1,84 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
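The new opentelemetry.propagators module that begins here exposes module-level extract and inject helpers (defined below) on top of the W3C TraceContextHTTPTextFormat. A minimal usage sketch with a plain dict as the carrier; the get_as_list helper mirrors the one used in the tests later in this patch, the traceparent value is the one used there, and the no-op API Tracer is only a stand-in for a real SDK tracer::

    import typing

    from opentelemetry import propagators, trace

    def get_as_list(
        carrier: typing.Dict[str, str], key: str
    ) -> typing.List[str]:
        # Return zero or more values for the given header name.
        value = carrier.get(key)
        return [value] if value is not None else []

    # Extract the parent SpanContext from incoming request headers.
    headers = {
        "traceparent": "00-12345678901234567890123456789012-1234567890123456-00"
    }
    parent_context = propagators.extract(get_as_list, headers)

    # Inject the current span's context into outgoing request headers.
    outgoing = {}  # type: typing.Dict[str, str]
    propagators.inject(trace.Tracer(), dict.__setitem__, outgoing)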
+ +import typing + +import opentelemetry.context.propagation.httptextformat as httptextformat +import opentelemetry.trace as trace +from opentelemetry.context.propagation.tracecontexthttptextformat import ( + TraceContextHTTPTextFormat, +) + +_T = typing.TypeVar("_T") + + +def extract( + get_from_carrier: httptextformat.Getter[_T], carrier: _T +) -> trace.SpanContext: + """Load the parent SpanContext from values in the carrier. + + Using the specified HTTPTextFormatter, the propagator will + extract a SpanContext from the carrier. If one is found, + it will be set as the parent context of the current span. + + Args: + get_from_carrier: a function that can retrieve zero + or more values from the carrier. In the case that + the value does not exist, return an empty list. + carrier: and object which contains values that are + used to construct a SpanContext. This object + must be paired with an appropriate get_from_carrier + which understands how to extract a value from it. + """ + return get_global_httptextformat().extract(get_from_carrier, carrier) + + +def inject( + tracer: trace.Tracer, + set_in_carrier: httptextformat.Setter[_T], + carrier: _T, +) -> None: + """Inject values from the current context into the carrier. + + inject enables the propagation of values into HTTP clients or + other objects which perform an HTTP request. Implementations + should use the set_in_carrier method to set values on the + carrier. + + Args: + set_in_carrier: A setter function that can set values + on the carrier. + carrier: An object that contains a representation of HTTP + headers. Should be paired with set_in_carrier, which + should know how to set header values on the carrier. + """ + get_global_httptextformat().inject( + tracer.get_current_span().get_context(), set_in_carrier, carrier + ) + + +_HTTP_TEXT_FORMAT = ( + TraceContextHTTPTextFormat() +) # type: httptextformat.HTTPTextFormat + + +def get_global_httptextformat() -> httptextformat.HTTPTextFormat: + return _HTTP_TEXT_FORMAT + + +def set_global_httptextformat( + http_text_format: httptextformat.HTTPTextFormat +) -> None: + global _HTTP_TEXT_FORMAT # pylint:disable=global-statement + _HTTP_TEXT_FORMAT = http_text_format diff --git a/opentelemetry-api/src/opentelemetry/trace/__init__.py b/opentelemetry-api/src/opentelemetry/trace/__init__.py index b79cdeb4df5..fac9f55da74 100644 --- a/opentelemetry-api/src/opentelemetry/trace/__init__.py +++ b/opentelemetry-api/src/opentelemetry/trace/__init__.py @@ -26,16 +26,16 @@ to use the API package alone without a supporting implementation. The tracer supports creating spans that are "attached" or "detached" from the -context. By default, new spans are "attached" to the context in that they are +context. 
New spans are "attached" to the context in that they are created as children of the currently active span, and the newly-created span -becomes the new active span:: +can optionally become the new active span:: from opentelemetry.trace import tracer # Create a new root span, set it as the current span in context - with tracer.start_span("parent"): + with tracer.start_as_current_span("parent"): # Attach a new child and update the current span - with tracer.start_span("child"): + with tracer.start_as_current_span("child"): do_work(): # Close child span, set parent as current # Close parent span, set default span as current @@ -62,9 +62,11 @@ """ import enum +import types as python_types import typing from contextlib import contextmanager +from opentelemetry.trace.status import Status from opentelemetry.util import loader, types # TODO: quarantine @@ -142,7 +144,7 @@ class SpanKind(enum.Enum): class Span: """A span represents a single operation within a trace.""" - def start(self) -> None: + def start(self, start_time: typing.Optional[int] = None) -> None: """Sets the current time as the span's start time. Each span represents a single operation. The span's start time is the @@ -152,7 +154,7 @@ def start(self) -> None: implementations are free to ignore or raise on further calls. """ - def end(self) -> None: + def end(self, end_time: int = None) -> None: """Sets the current time as the span's end time. The span's end time is the wall time at which the operation finished. @@ -226,6 +228,31 @@ def is_recording_events(self) -> bool: events with the add_event operation and attributes using set_attribute. """ + def set_status(self, status: Status) -> None: + """Sets the Status of the Span. If used, this will override the default + Span status, which is OK. + """ + + def __enter__(self) -> "Span": + """Invoked when `Span` is used as a context manager. + + Returns the `Span` itself. + """ + return self + + def __exit__( + self, + exc_type: typing.Optional[typing.Type[BaseException]], + exc_val: typing.Optional[BaseException], + exc_tb: typing.Optional[python_types.TracebackType], + ) -> typing.Optional[bool]: + """Ends context manager and calls `end` on the `Span`. + + Returns False. + """ + self.end() + return False + class TraceOptions(int): """A bitmask that represents options specific to the trace. @@ -376,30 +403,64 @@ def get_current_span(self) -> "Span": # pylint: disable=no-self-use return INVALID_SPAN - @contextmanager # type: ignore def start_span( self, name: str, parent: ParentSpan = CURRENT_SPAN, kind: SpanKind = SpanKind.INTERNAL, - ) -> typing.Iterator["Span"]: - """Context manager for span creation. + ) -> "Span": + """Starts a span. - Create a new span. Start the span and set it as the current span in - this tracer's context. + Create a new span. Start the span without setting it as the current + span in this tracer's context. By default the current span will be used as parent, but an explicit parent can also be specified, either a `Span` or a `SpanContext`. If the specified value is `None`, the created span will be a root span. - On exiting the context manager stop the span and set its parent as the + The span can be used as context manager. On exiting, the span will be + ended. + + Example:: + + # tracer.get_current_span() will be used as the implicit parent. + # If none is found, the created span will be a root instance. 
+ with tracer.start_span("one") as child: + child.add_event("child's event") + + Applications that need to set the newly created span as the current + instance should use :meth:`start_as_current_span` instead. + + Args: + name: The name of the span to be created. + parent: The span's parent. Defaults to the current span. + kind: The span's kind (relationship to parent). Note that is + meaningful even if there is no parent. + + Returns: + The newly-created span. + """ + # pylint: disable=unused-argument,no-self-use + return INVALID_SPAN + + @contextmanager # type: ignore + def start_as_current_span( + self, + name: str, + parent: ParentSpan = CURRENT_SPAN, + kind: SpanKind = SpanKind.INTERNAL, + ) -> typing.Iterator["Span"]: + """Context manager for creating a new span and set it + as the current span in this tracer's context. + + On exiting the context manager stops the span and set its parent as the current span. Example:: - with tracer.start_span("one") as parent: + with tracer.start_as_current_span("one") as parent: parent.add_event("parent's event") - with tracer.start_span("two") as child: + with tracer.start_as_current_span("two") as child: child.add_event("child's event") tracer.get_current_span() # returns child tracer.get_current_span() # returns parent @@ -407,15 +468,15 @@ def start_span( This is a convenience method for creating spans attached to the tracer's context. Applications that need more control over the span - lifetime should use :meth:`create_span` instead. For example:: + lifetime should use :meth:`start_span` instead. For example:: - with tracer.start_span(name) as span: + with tracer.start_as_current_span(name) as span: do_work() is equivalent to:: - span = tracer.create_span(name) - with tracer.use_span(span): + span = tracer.start_span(name) + with tracer.use_span(span, end_on_exit=True): do_work() Args: @@ -427,6 +488,7 @@ def start_span( Yields: The newly-created span. """ + # pylint: disable=unused-argument,no-self-use yield INVALID_SPAN @@ -450,7 +512,7 @@ def create_span( Applications that need to create spans detached from the tracer's context should use this method. - with tracer.start_span(name) as span: + with tracer.start_as_current_span(name) as span: do_work() This is equivalent to:: @@ -472,17 +534,22 @@ def create_span( return INVALID_SPAN @contextmanager # type: ignore - def use_span(self, span: "Span") -> typing.Iterator[None]: + def use_span( + self, span: "Span", end_on_exit: bool = False + ) -> typing.Iterator[None]: """Context manager for controlling a span's lifetime. - Start the given span and set it as the current span in this tracer's - context. + Set the given span as the current span in this tracer's context. - On exiting the context manager stop the span and set its parent as the - current span. + On exiting the context manager set the span that was previously active + as the current span (this is usually but not necessarily the parent of + the given span). If ``end_on_exit`` is ``True``, then the span is also + ended when exiting the context manager. Args: span: The span to start and make current. + end_on_exit: Whether to end the span automatically when leaving the + context manager. 
""" # pylint: disable=unused-argument,no-self-use yield diff --git a/opentelemetry-api/src/opentelemetry/trace/status.py b/opentelemetry-api/src/opentelemetry/trace/status.py new file mode 100644 index 00000000000..4fc50b33e56 --- /dev/null +++ b/opentelemetry-api/src/opentelemetry/trace/status.py @@ -0,0 +1,185 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import enum +import typing + + +class StatusCanonicalCode(enum.Enum): + """Represents the canonical set of status codes of a finished Span.""" + + OK = 0 + """Not an error, returned on success.""" + + CANCELLED = 1 + """The operation was cancelled, typically by the caller.""" + + UNKNOWN = 2 + """Unknown error. + + For example, this error may be returned when a Status value received from + another address space belongs to an error space that is not known in this + address space. Also errors raised by APIs that do not return enough error + information may be converted to this error. + """ + + INVALID_ARGUMENT = 3 + """The client specified an invalid argument. + + Note that this differs from FAILED_PRECONDITION. INVALID_ARGUMENT indicates + arguments that are problematic regardless of the state of the system (e.g., + a malformed file name). + """ + + DEADLINE_EXCEEDED = 4 + """The deadline expired before the operation could complete. + + For operations that change the state of the system, this error may be + returned even if the operation has completed successfully. For example, a + successful response from a server could have been delayed long + """ + + NOT_FOUND = 5 + """Some requested entity (e.g., file or directory) was not found. + + Note to server developers: if a request is denied for an entire class of + users, such as gradual feature rollout or undocumented whitelist, NOT_FOUND + may be used. If a request is denied for some users within a class of users, + such as user-based access control, PERMISSION_DENIED must be used. + """ + + ALREADY_EXISTS = 6 + """The entity that a client attempted to create (e.g., file or directory) + already exists. + """ + + PERMISSION_DENIED = 7 + """The caller does not have permission to execute the specified operation. + + PERMISSION_DENIED must not be used for rejections caused by exhausting some + resource (use RESOURCE_EXHAUSTED instead for those errors). + PERMISSION_DENIED must not be used if the caller can not be identified (use + UNAUTHENTICATED instead for those errors). This error code does not imply + the request is valid or the requested entity exists or satisfies other + pre-conditions. + """ + + RESOURCE_EXHAUSTED = 8 + """Some resource has been exhausted, perhaps a per-user quota, or perhaps + the entire file system is out of space. + """ + + FAILED_PRECONDITION = 9 + """The operation was rejected because the system is not in a state required + for the operation's execution. + + For example, the directory to be deleted is non-empty, an rmdir operation + is applied to a non-directory, etc. 
Service implementors can use the + following guidelines to decide between FAILED_PRECONDITION, ABORTED, and + UNAVAILABLE: + + (a) Use UNAVAILABLE if the client can retry just the failing call. + (b) Use ABORTED if the client should retry at a higher level (e.g., + when a client-specified test-and-set fails, indicating the client + should restart a read-modify-write sequence). + (c) Use FAILED_PRECONDITION if the client should not retry until the + system state has been explicitly fixed. + + E.g., if an "rmdir" fails because the directory is non-empty, + FAILED_PRECONDITION should be returned since the client should not retry + unless the files are deleted from the directory. + """ + + ABORTED = 10 + """The operation was aborted, typically due to a concurrency issue such as a + sequencer check failure or transaction abort. + + See the guidelines above for deciding between FAILED_PRECONDITION, ABORTED, + and UNAVAILABLE. + """ + + OUT_OF_RANGE = 11 + """The operation was attempted past the valid range. + + E.g., seeking or reading past end-of-file. Unlike INVALID_ARGUMENT, this + error indicates a problem that may be fixed if the system state changes. + For example, a 32-bit file system will generate INVALID_ARGUMENT if asked + to read at an offset that is not in the range [0,2^32-1],but it will + generate OUT_OF_RANGE if asked to read from an offset past the current file + size. There is a fair bit of overlap between FAILED_PRECONDITION and + OUT_OF_RANGE. We recommend using OUT_OF_RANGE (the more specific error) + when it applies so that callers who are iterating through a space can + easily look for an OUT_OF_RANGE error to detect when they are done. + """ + + UNIMPLEMENTED = 12 + """The operation is not implemented or is not supported/enabled in this + service. + """ + + INTERNAL = 13 + """Internal errors. + + This means that some invariants expected by the underlying system have been + broken. This error code is reserved for serious errors. + """ + + UNAVAILABLE = 14 + """The service is currently unavailable. + + This is most likely a transient condition, which can be corrected by + retrying with a backoff. Note that it is not always safe to retry + non-idempotent operations. + """ + + DATA_LOSS = 15 + """Unrecoverable data loss or corruption.""" + + UNAUTHENTICATED = 16 + """The request does not have valid authentication credentials for the + operation. + """ + + +class Status: + """Represents the status of a finished Span. + + Args: + canonical_code: The canonical status code that describes the result + status of the operation. + description: An optional description of the status. 
+ """ + + def __init__( + self, + canonical_code: "StatusCanonicalCode" = StatusCanonicalCode.OK, + description: typing.Optional[str] = None, + ): + self._canonical_code = canonical_code + self._description = description + + @property + def canonical_code(self) -> "StatusCanonicalCode": + """Represents the canonical status code of a finished Span.""" + return self._canonical_code + + @property + def description(self) -> typing.Optional[str]: + """Status description""" + return self._description + + @property + def is_ok(self) -> bool: + """Returns false if this represents an error, true otherwise.""" + return self._canonical_code is StatusCanonicalCode.OK diff --git a/opentelemetry-api/src/opentelemetry/util/__init__.py b/opentelemetry-api/src/opentelemetry/util/__init__.py new file mode 100644 index 00000000000..cbf36d4c05a --- /dev/null +++ b/opentelemetry-api/src/opentelemetry/util/__init__.py @@ -0,0 +1,12 @@ +import time + +# Since we want API users to be able to provide timestamps, +# this needs to be in the API. + +try: + time_ns = time.time_ns +# Python versions < 3.7 +except AttributeError: + + def time_ns() -> int: + return int(time.time() * 1e9) diff --git a/opentelemetry-api/tests/context/__init__.py b/opentelemetry-api/tests/context/__init__.py new file mode 100644 index 00000000000..e69de29bb2d diff --git a/opentelemetry-api/tests/context/propagation/__init__.py b/opentelemetry-api/tests/context/propagation/__init__.py new file mode 100644 index 00000000000..e69de29bb2d diff --git a/opentelemetry-api/tests/context/propagation/test_tracecontexthttptextformat.py b/opentelemetry-api/tests/context/propagation/test_tracecontexthttptextformat.py new file mode 100644 index 00000000000..aaf392be248 --- /dev/null +++ b/opentelemetry-api/tests/context/propagation/test_tracecontexthttptextformat.py @@ -0,0 +1,215 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import typing +import unittest + +from opentelemetry import trace +from opentelemetry.context.propagation import tracecontexthttptextformat + +FORMAT = tracecontexthttptextformat.TraceContextHTTPTextFormat() + + +def get_as_list( + dict_object: typing.Dict[str, str], key: str +) -> typing.List[str]: + value = dict_object.get(key) + return [value] if value is not None else [] + + +class TestTraceContextFormat(unittest.TestCase): + TRACE_ID = int("12345678901234567890123456789012", 16) # type:int + SPAN_ID = int("1234567890123456", 16) # type:int + + def test_no_traceparent_header(self): + """When tracecontext headers are not present, a new SpanContext + should be created. + + RFC 4.2.2: + + If no traceparent header is received, the vendor creates a new trace-id and parent-id that represents the current request. + """ + output = {} # type:typing.Dict[str, str] + span_context = FORMAT.extract(get_as_list, output) + self.assertTrue(isinstance(span_context, trace.SpanContext)) + + def test_from_headers_tracestate_entry_limit(self): + """If more than 33 entries are passed, allow them. 
+ + We are explicitly choosing not to limit the list members + as outlined in RFC 3.3.1.1 + + RFC 3.3.1.1 + + There can be a maximum of 32 list-members in a list. + """ + + span_context = FORMAT.extract( + get_as_list, + { + "traceparent": "00-12345678901234567890123456789012-1234567890123456-00", + "tracestate": ",".join( + [ + "a00=0,a01=1,a02=2,a03=3,a04=4,a05=5,a06=6,a07=7,a08=8,a09=9", + "b00=0,b01=1,b02=2,b03=3,b04=4,b05=5,b06=6,b07=7,b08=8,b09=9", + "c00=0,c01=1,c02=2,c03=3,c04=4,c05=5,c06=6,c07=7,c08=8,c09=9", + "d00=0,d01=1,d02=2", + ] + ), + }, + ) + self.assertEqual(len(span_context.trace_state), 33) + + def test_from_headers_tracestate_duplicated_keys(self): + """If a duplicate tracestate header is present, the most recent entry + is used. + + RFC 3.3.1.4 + + Only one entry per key is allowed because the entry represents that last position in the trace. + Hence vendors must overwrite their entry upon reentry to their tracing system. + + For example, if a vendor name is Congo and a trace started in their system and then went through + a system named Rojo and later returned to Congo, the tracestate value would not be: + + congo=congosFirstPosition,rojo=rojosFirstPosition,congo=congosSecondPosition + + Instead, the entry would be rewritten to only include the most recent position: + + congo=congosSecondPosition,rojo=rojosFirstPosition + """ + span_context = FORMAT.extract( + get_as_list, + { + "traceparent": "00-12345678901234567890123456789012-1234567890123456-00", + "tracestate": "foo=1,bar=2,foo=3", + }, + ) + self.assertEqual(span_context.trace_state, {"foo": "3", "bar": "2"}) + + def test_headers_with_tracestate(self): + """When there is a traceparent and tracestate header, data from + both should be addded to the SpanContext. + """ + traceparent_value = "00-{trace_id}-{span_id}-00".format( + trace_id=format(self.TRACE_ID, "032x"), + span_id=format(self.SPAN_ID, "016x"), + ) + tracestate_value = "foo=1,bar=2,baz=3" + span_context = FORMAT.extract( + get_as_list, + {"traceparent": traceparent_value, "tracestate": tracestate_value}, + ) + self.assertEqual(span_context.trace_id, self.TRACE_ID) + self.assertEqual(span_context.span_id, self.SPAN_ID) + self.assertEqual( + span_context.trace_state, {"foo": "1", "bar": "2", "baz": "3"} + ) + + output = {} # type:typing.Dict[str, str] + FORMAT.inject(span_context, dict.__setitem__, output) + self.assertEqual(output["traceparent"], traceparent_value) + for pair in ["foo=1", "bar=2", "baz=3"]: + self.assertIn(pair, output["tracestate"]) + self.assertEqual(output["tracestate"].count(","), 2) + + def test_invalid_trace_id(self): + """If the trace id is invalid, we must ignore the full traceparent header. + + Also ignore any tracestate. + + RFC 3.2.2.3 + + If the trace-id value is invalid (for example if it contains non-allowed characters or all + zeros), vendors MUST ignore the traceparent. + + RFC 3.3 + + If the vendor failed to parse traceparent, it MUST NOT attempt to parse tracestate. + Note that the opposite is not true: failure to parse tracestate MUST NOT affect the parsing of traceparent. + """ + span_context = FORMAT.extract( + get_as_list, + { + "traceparent": "00-00000000000000000000000000000000-1234567890123456-00", + "tracestate": "foo=1,bar=2,foo=3", + }, + ) + self.assertEqual(span_context, trace.INVALID_SPAN_CONTEXT) + + def test_invalid_parent_id(self): + """If the parent id is invalid, we must ignore the full traceparent header. + + Also ignore any tracestate. 
+ + RFC 3.2.2.3 + + Vendors MUST ignore the traceparent when the parent-id is invalid (for example, + if it contains non-lowercase hex characters). + + RFC 3.3 + + If the vendor failed to parse traceparent, it MUST NOT attempt to parse tracestate. + Note that the opposite is not true: failure to parse tracestate MUST NOT affect the parsing of traceparent. + """ + span_context = FORMAT.extract( + get_as_list, + { + "traceparent": "00-00000000000000000000000000000000-0000000000000000-00", + "tracestate": "foo=1,bar=2,foo=3", + }, + ) + self.assertEqual(span_context, trace.INVALID_SPAN_CONTEXT) + + def test_no_send_empty_tracestate(self): + """If the tracestate is empty, do not set the header. + + RFC 3.3.1.1 + + Empty and whitespace-only list members are allowed. Vendors MUST accept empty + tracestate headers but SHOULD avoid sending them. + """ + output = {} # type:typing.Dict[str, str] + FORMAT.inject( + trace.SpanContext(self.TRACE_ID, self.SPAN_ID), + dict.__setitem__, + output, + ) + self.assertTrue("traceparent" in output) + self.assertFalse("tracestate" in output) + + def test_format_not_supported(self): + """If the traceparent does not adhere to the supported format, discard it and + create a new tracecontext. + + RFC 4.3 + + If the version cannot be parsed, the vendor creates a new traceparent header and + deletes tracestate. + """ + span_context = FORMAT.extract( + get_as_list, + { + "traceparent": "00-12345678901234567890123456789012-1234567890123456-00-residue", + "tracestate": "foo=1,bar=2,foo=3", + }, + ) + self.assertEqual(span_context, trace.INVALID_SPAN_CONTEXT) + + def test_propagate_invalid_context(self): + """Do not propagate invalid trace context. + """ + output = {} # type:typing.Dict[str, str] + FORMAT.inject(trace.INVALID_SPAN_CONTEXT, dict.__setitem__, output) + self.assertFalse("traceparent" in output) diff --git a/opentelemetry-api/tests/metrics/__init__.py b/opentelemetry-api/tests/metrics/__init__.py new file mode 100644 index 00000000000..d853a7bcf65 --- /dev/null +++ b/opentelemetry-api/tests/metrics/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/opentelemetry-api/tests/metrics/test_metrics.py b/opentelemetry-api/tests/metrics/test_metrics.py new file mode 100644 index 00000000000..14667f62eaa --- /dev/null +++ b/opentelemetry-api/tests/metrics/test_metrics.py @@ -0,0 +1,68 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
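The tests above exercise the W3C traceparent header, whose value is four dash-separated fields: version, a 128-bit trace-id as 32 lowercase hex digits, a 64-bit parent-id (span id) as 16 hex digits, and 2 hex digits of trace-flags. A short sketch of how such a value is assembled, mirroring the formatting used in test_headers_with_tracestate::

    TRACE_ID = int("12345678901234567890123456789012", 16)
    SPAN_ID = int("1234567890123456", 16)

    # version "00", trace id, parent (span) id, trace flags
    traceparent = "00-{trace_id}-{span_id}-00".format(
        trace_id=format(TRACE_ID, "032x"),
        span_id=format(SPAN_ID, "016x"),
    )
    # -> "00-12345678901234567890123456789012-1234567890123456-00"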
+ +import unittest + +from opentelemetry import metrics + + +# pylint: disable=no-self-use +class TestMeter(unittest.TestCase): + def setUp(self): + self.meter = metrics.Meter() + + def test_record_batch(self): + counter = metrics.Counter() + self.meter.record_batch(("values"), ((counter, 1))) + + def test_create_metric(self): + metric = self.meter.create_metric("", "", "", float, metrics.Counter) + self.assertIsInstance(metric, metrics.DefaultMetric) + + +class TestMetrics(unittest.TestCase): + def test_default(self): + default = metrics.DefaultMetric() + handle = default.get_handle(("test", "test1")) + self.assertIsInstance(handle, metrics.DefaultMetricHandle) + + def test_counter(self): + counter = metrics.Counter() + handle = counter.get_handle(("test", "test1")) + self.assertIsInstance(handle, metrics.CounterHandle) + + def test_gauge(self): + gauge = metrics.Gauge() + handle = gauge.get_handle(("test", "test1")) + self.assertIsInstance(handle, metrics.GaugeHandle) + + def test_measure(self): + measure = metrics.Measure() + handle = measure.get_handle(("test", "test1")) + self.assertIsInstance(handle, metrics.MeasureHandle) + + def test_default_handle(self): + metrics.DefaultMetricHandle() + + def test_counter_handle(self): + handle = metrics.CounterHandle() + handle.add(1) + + def test_gauge_handle(self): + handle = metrics.GaugeHandle() + handle.set(1) + + def test_measure_handle(self): + handle = metrics.MeasureHandle() + handle.record(1) diff --git a/opentelemetry-api/tests/mypysmoke.py b/opentelemetry-api/tests/mypysmoke.py index 25601ebf840..7badc13b696 100644 --- a/opentelemetry-api/tests/mypysmoke.py +++ b/opentelemetry-api/tests/mypysmoke.py @@ -1,3 +1,17 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + import opentelemetry.trace diff --git a/opentelemetry-example-app/tests/test_flask_example.py b/opentelemetry-example-app/tests/test_flask_example.py deleted file mode 100644 index ca2a237a60f..00000000000 --- a/opentelemetry-example-app/tests/test_flask_example.py +++ /dev/null @@ -1,14 +0,0 @@ -import unittest - -import opentelemetry_example_app.flask_example as flask_example - - -class TestFlaskExample(unittest.TestCase): - @classmethod - def setUpClass(cls): - cls.app = flask_example.app - - def test_full_path(self): - with self.app.test_client() as client: - response = client.get("/") - assert response.data.decode() == "hello" diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/__init__.py index 81366d9d47d..0f3bff4571e 100644 --- a/opentelemetry-sdk/src/opentelemetry/sdk/__init__.py +++ b/opentelemetry-sdk/src/opentelemetry/sdk/__init__.py @@ -12,6 +12,6 @@ # See the License for the specific language governing permissions and # limitations under the License. -from . import trace, util +from . 
import metrics, trace, util -__all__ = ["trace", "util"] +__all__ = ["metrics", "trace", "util"] diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/context/propagation/b3_format.py b/opentelemetry-sdk/src/opentelemetry/sdk/context/propagation/b3_format.py index 72d02d60700..2eca8afaa1b 100644 --- a/opentelemetry-sdk/src/opentelemetry/sdk/context/propagation/b3_format.py +++ b/opentelemetry-sdk/src/opentelemetry/sdk/context/propagation/b3_format.py @@ -35,7 +35,7 @@ class B3Format(HTTPTextFormat): def extract(cls, get_from_carrier, carrier): trace_id = format_trace_id(trace.INVALID_TRACE_ID) span_id = format_span_id(trace.INVALID_SPAN_ID) - sampled = 0 + sampled = "0" flags = None single_header = _extract_first_element( @@ -91,13 +91,12 @@ def extract(cls, get_from_carrier, carrier): # header is set to allow. if sampled in cls._SAMPLE_PROPAGATE_VALUES or flags == "1": options |= trace.TraceOptions.RECORDED - return trace.SpanContext( # trace an span ids are encoded in hex, so must be converted trace_id=int(trace_id, 16), span_id=int(span_id, 16), - trace_options=options, - trace_state={}, + trace_options=trace.TraceOptions(options), + trace_state=trace.TraceState(), ) @classmethod @@ -112,17 +111,20 @@ def inject(cls, context, set_in_carrier, carrier): set_in_carrier(carrier, cls.SAMPLED_KEY, "1" if sampled else "0") -def format_trace_id(trace_id: int): +def format_trace_id(trace_id: int) -> str: """Format the trace id according to b3 specification.""" return format(trace_id, "032x") -def format_span_id(span_id: int): +def format_span_id(span_id: int) -> str: """Format the span id according to b3 specification.""" return format(span_id, "016x") -def _extract_first_element(list_object: list) -> typing.Optional[object]: - if list_object: - return list_object[0] - return None +_T = typing.TypeVar("_T") + + +def _extract_first_element(items: typing.Iterable[_T]) -> typing.Optional[_T]: + if items is None: + return None + return next(iter(items), None) diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/__init__.py new file mode 100644 index 00000000000..041d0e5dcdf --- /dev/null +++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/__init__.py @@ -0,0 +1,247 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
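The SDK metrics implementation that begins below binds the API's Meter, Metric and handle types to concrete classes. A minimal end-to-end sketch under the API defined in this patch; the metric name, unit and label values echo the removed pre_aggregated example and are purely illustrative::

    from opentelemetry.sdk import metrics

    meter = metrics.Meter()

    # create_metric takes the name, description, unit, value type, metric
    # type and the label keys whose values are supplied at record time.
    counter = meter.create_metric(
        "sum_numbers",
        "sum numbers over time",
        "number",
        int,
        metrics.Counter,
        ("environment",),
    )

    # A handle is bound to one set of label values and holds the data.
    staging_handle = counter.get_handle(("Staging",))
    staging_handle.add(100)

    # Several metrics can also be recorded in a single call.
    meter.record_batch(("Staging",), [(counter, 5)])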
+ +import logging +from typing import Sequence, Tuple, Type + +from opentelemetry import metrics as metrics_api + +logger = logging.getLogger(__name__) + + +class BaseHandle: + def __init__( + self, + value_type: Type[metrics_api.ValueT], + enabled: bool, + monotonic: bool, + ): + self.data = value_type() + self.value_type = value_type + self.enabled = enabled + self.monotonic = monotonic + + def _validate_update(self, value: metrics_api.ValueT) -> bool: + if not self.enabled: + return False + if not isinstance(value, self.value_type): + logger.warning( + "Invalid value passed for %s.", self.value_type.__name__ + ) + return False + return True + + +class CounterHandle(metrics_api.CounterHandle, BaseHandle): + def update(self, value: metrics_api.ValueT) -> None: + if self._validate_update(value): + if self.monotonic and value < 0: + logger.warning("Monotonic counter cannot descend.") + return + self.data += value + + def add(self, value: metrics_api.ValueT) -> None: + """See `opentelemetry.metrics.CounterHandle._add`.""" + self.update(value) + + +class GaugeHandle(metrics_api.GaugeHandle, BaseHandle): + def update(self, value: metrics_api.ValueT) -> None: + if self._validate_update(value): + if self.monotonic and value < self.data: + logger.warning("Monotonic gauge cannot descend.") + return + self.data = value + + def set(self, value: metrics_api.ValueT) -> None: + """See `opentelemetry.metrics.GaugeHandle._set`.""" + self.update(value) + + +class MeasureHandle(metrics_api.MeasureHandle, BaseHandle): + def update(self, value: metrics_api.ValueT) -> None: + if self._validate_update(value): + if self.monotonic and value < 0: + logger.warning("Monotonic measure cannot accept negatives.") + return + # TODO: record + + def record(self, value: metrics_api.ValueT) -> None: + """See `opentelemetry.metrics.MeasureHandle._record`.""" + self.update(value) + + +class Metric(metrics_api.Metric): + """See `opentelemetry.metrics.Metric`.""" + + HANDLE_TYPE = BaseHandle + + def __init__( + self, + name: str, + description: str, + unit: str, + value_type: Type[metrics_api.ValueT], + label_keys: Sequence[str] = None, + enabled: bool = True, + monotonic: bool = False, + ): + self.name = name + self.description = description + self.unit = unit + self.value_type = value_type + self.label_keys = label_keys + self.enabled = enabled + self.monotonic = monotonic + self.handles = {} + + def get_handle(self, label_values: Sequence[str]) -> BaseHandle: + """See `opentelemetry.metrics.Metric.get_handle`.""" + handle = self.handles.get(label_values) + if not handle: + handle = self.HANDLE_TYPE( + self.value_type, self.enabled, self.monotonic + ) + self.handles[label_values] = handle + return handle + + +class Counter(Metric): + """See `opentelemetry.metrics.Counter`. + + By default, counter values can only go up (monotonic). Negative inputs + will be discarded for monotonic counter metrics. Counter metrics that + have a monotonic option set to False allows negative inputs. + """ + + HANDLE_TYPE = CounterHandle + + def __init__( + self, + name: str, + description: str, + unit: str, + value_type: Type[metrics_api.ValueT], + label_keys: Sequence[str] = None, + enabled: bool = True, + monotonic: bool = True, + ): + super().__init__( + name, + description, + unit, + value_type, + label_keys=label_keys, + enabled=enabled, + monotonic=monotonic, + ) + + +class Gauge(Metric): + """See `opentelemetry.metrics.Gauge`. + + By default, gauge values can go both up and down (non-monotonic). 
+ Negative inputs will be discarded for monotonic gauge metrics. + """ + + HANDLE_TYPE = GaugeHandle + + def __init__( + self, + name: str, + description: str, + unit: str, + value_type: Type[metrics_api.ValueT], + label_keys: Sequence[str] = None, + enabled: bool = True, + monotonic: bool = False, + ): + super().__init__( + name, + description, + unit, + value_type, + label_keys=label_keys, + enabled=enabled, + monotonic=monotonic, + ) + + +class Measure(Metric): + """See `opentelemetry.metrics.Measure`. + + By default, measure metrics can accept both positive and negatives. + Negative inputs will be discarded when monotonic is True. + """ + + HANDLE_TYPE = MeasureHandle + + def __init__( + self, + name: str, + description: str, + unit: str, + value_type: Type[metrics_api.ValueT], + label_keys: Sequence[str] = None, + enabled: bool = False, + monotonic: bool = False, + ): + super().__init__( + name, + description, + unit, + value_type, + label_keys=label_keys, + enabled=enabled, + monotonic=monotonic, + ) + + +class Meter(metrics_api.Meter): + """See `opentelemetry.metrics.Meter`.""" + + def record_batch( + self, + label_values: Sequence[str], + record_tuples: Sequence[Tuple[metrics_api.Metric, metrics_api.ValueT]], + ) -> None: + """See `opentelemetry.metrics.Meter.record_batch`.""" + for metric, value in record_tuples: + metric.get_handle(label_values).update(value) + + def create_metric( + self, + name: str, + description: str, + unit: str, + value_type: Type[metrics_api.ValueT], + metric_type: Type[metrics_api.MetricT], + label_keys: Sequence[str] = None, + enabled: bool = True, + monotonic: bool = False, + ) -> metrics_api.MetricT: + """See `opentelemetry.metrics.Meter.create_metric`.""" + # Ignore type b/c of mypy bug in addition to missing annotations + return metric_type( # type: ignore + name, + description, + unit, + value_type, + label_keys=label_keys, + enabled=enabled, + monotonic=monotonic, + ) + + +meter = Meter() diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/trace/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/trace/__init__.py index a694476e1fe..7b274f852f4 100644 --- a/opentelemetry-sdk/src/opentelemetry/sdk/trace/__init__.py +++ b/opentelemetry-sdk/src/opentelemetry/sdk/trace/__init__.py @@ -17,132 +17,21 @@ import random import threading import typing -from collections import OrderedDict, deque from contextlib import contextmanager from opentelemetry import trace as trace_api from opentelemetry.context import Context from opentelemetry.sdk import util -from opentelemetry.util import types +from opentelemetry.sdk.util import BoundedDict, BoundedList +from opentelemetry.util import time_ns, types logger = logging.getLogger(__name__) -try: - # pylint: disable=ungrouped-imports - from collections.abc import MutableMapping - from collections.abc import Sequence -except ImportError: - # pylint: disable=no-name-in-module,ungrouped-imports - from collections import MutableMapping - from collections import Sequence - MAX_NUM_ATTRIBUTES = 32 MAX_NUM_EVENTS = 128 MAX_NUM_LINKS = 32 -class BoundedList(Sequence): - """An append only list with a fixed max size.""" - - def __init__(self, maxlen): - self.dropped = 0 - self._dq = deque(maxlen=maxlen) - self._lock = threading.Lock() - - def __repr__(self): - return "{}({}, maxlen={})".format( - type(self).__name__, list(self._dq), self._dq.maxlen - ) - - def __getitem__(self, index): - return self._dq[index] - - def __len__(self): - return len(self._dq) - - def __iter__(self): - with self._lock: - return 
iter(self._dq.copy()) - - def append(self, item): - with self._lock: - if len(self._dq) == self._dq.maxlen: - self.dropped += 1 - self._dq.append(item) - - def extend(self, seq): - with self._lock: - to_drop = len(seq) + len(self._dq) - self._dq.maxlen - if to_drop > 0: - self.dropped += to_drop - self._dq.extend(seq) - - @classmethod - def from_seq(cls, maxlen, seq): - seq = tuple(seq) - if len(seq) > maxlen: - raise ValueError - bounded_list = cls(maxlen) - # pylint: disable=protected-access - bounded_list._dq = deque(seq, maxlen=maxlen) - return bounded_list - - -class BoundedDict(MutableMapping): - """A dict with a fixed max capacity.""" - - def __init__(self, maxlen): - if not isinstance(maxlen, int): - raise ValueError - if maxlen < 0: - raise ValueError - self.maxlen = maxlen - self.dropped = 0 - self._dict = OrderedDict() - self._lock = threading.Lock() - - def __repr__(self): - return "{}({}, maxlen={})".format( - type(self).__name__, dict(self._dict), self.maxlen - ) - - def __getitem__(self, key): - return self._dict[key] - - def __setitem__(self, key, value): - with self._lock: - if self.maxlen == 0: - self.dropped += 1 - return - - if key in self._dict: - del self._dict[key] - elif len(self._dict) == self.maxlen: - del self._dict[next(iter(self._dict.keys()))] - self.dropped += 1 - self._dict[key] = value - - def __delitem__(self, key): - del self._dict[key] - - def __iter__(self): - with self._lock: - return iter(self._dict.copy()) - - def __len__(self): - return len(self._dict) - - @classmethod - def from_map(cls, maxlen, mapping): - mapping = OrderedDict(mapping) - if len(mapping) > maxlen: - raise ValueError - bounded_dict = cls(maxlen) - # pylint: disable=protected-access - bounded_dict._dict = mapping - return bounded_dict - - class SpanProcessor: """Interface which allows hooks for SDK's `Span`s start and end method invocations. @@ -184,10 +73,10 @@ class MultiSpanProcessor(SpanProcessor): def __init__(self): # use a tuple to avoid race conditions when adding a new span and # iterating through it on "on_start" and "on_end". - self._span_processors = () + self._span_processors = () # type: typing.Tuple[SpanProcessor, ...] 
self._lock = threading.Lock() - def add_span_processor(self, span_processor: SpanProcessor): + def add_span_processor(self, span_processor: SpanProcessor) -> None: """Adds a SpanProcessor to the list handled by this instance.""" with self._lock: self._span_processors = self._span_processors + (span_processor,) @@ -233,11 +122,11 @@ class Span(trace_api.Span): def __init__( self, name: str, - context: "trace_api.SpanContext", + context: trace_api.SpanContext, parent: trace_api.ParentSpan = None, - sampler=None, # TODO - trace_config=None, # TODO - resource=None, # TODO + sampler: None = None, # TODO + trace_config: None = None, # TODO + resource: None = None, # TODO attributes: types.Attributes = None, # TODO events: typing.Sequence[trace_api.Event] = None, # TODO links: typing.Sequence[trace_api.Link] = None, # TODO @@ -251,12 +140,10 @@ def __init__( self.sampler = sampler self.trace_config = trace_config self.resource = resource - self.attributes = attributes - self.events = events - self.links = links self.kind = kind self.span_processor = span_processor + self.status = trace_api.Status() self._lock = threading.Lock() if attributes is None: @@ -276,11 +163,24 @@ def __init__( else: self.links = BoundedList.from_seq(MAX_NUM_LINKS, links) - self.end_time = None - self.start_time = None + self.end_time = None # type: typing.Optional[int] + self.start_time = None # type: typing.Optional[int] def __repr__(self): - return '{}(name="{}")'.format(type(self).__name__, self.name) + return '{}(name="{}", context={})'.format( + type(self).__name__, self.name, self.context + ) + + def __str__(self): + return '{}(name="{}", context={}, kind={}, parent={}, start_time={}, end_time={})'.format( + type(self).__name__, + self.name, + self.context, + self.kind, + repr(self.parent), + util.ns_to_iso_str(self.start_time) if self.start_time else "None", + util.ns_to_iso_str(self.end_time) if self.end_time else "None", + ) def get_context(self): return self.context @@ -301,9 +201,13 @@ def set_attribute(self, key: str, value: types.AttributeValue) -> None: def add_event( self, name: str, attributes: types.Attributes = None ) -> None: - if attributes is None: - attributes = Span.empty_attributes - self.add_lazy_event(trace_api.Event(name, util.time_ns(), attributes)) + self.add_lazy_event( + trace_api.Event( + name, + time_ns(), + Span.empty_attributes if attributes is None else attributes, + ) + ) def add_lazy_event(self, event: trace_api.Event) -> None: with self._lock: @@ -324,7 +228,9 @@ def add_link( attributes: types.Attributes = None, ) -> None: if attributes is None: - attributes = Span.empty_attributes + attributes = ( + Span.empty_attributes + ) # TODO: empty_attributes is not a Dict. Use Mapping? 
self.add_lazy_link(trace_api.Link(link_target_context, attributes)) def add_lazy_link(self, link: "trace_api.Link") -> None: @@ -340,19 +246,21 @@ def add_lazy_link(self, link: "trace_api.Link") -> None: return self.links.append(link) - def start(self): + def start(self, start_time: typing.Optional[int] = None) -> None: with self._lock: if not self.is_recording_events(): return has_started = self.start_time is not None if not has_started: - self.start_time = util.time_ns() + self.start_time = ( + start_time if start_time is not None else time_ns() + ) if has_started: logger.warning("Calling start() on a started span.") return self.span_processor.on_start(self) - def end(self): + def end(self, end_time: int = None) -> None: with self._lock: if not self.is_recording_events(): return @@ -360,7 +268,7 @@ def end(self): raise RuntimeError("Calling end() on a not started span.") has_ended = self.end_time is not None if not has_ended: - self.end_time = util.time_ns() + self.end_time = end_time if end_time is not None else time_ns() if has_ended: logger.warning("Calling end() on an ended span.") return @@ -378,8 +286,16 @@ def update_name(self, name: str) -> None: def is_recording_events(self) -> bool: return True + def set_status(self, status: trace_api.Status) -> None: + with self._lock: + has_ended = self.end_time is not None + if has_ended: + logger.warning("Calling set_status() on an ended span.") + return + self.status = status + -def generate_span_id(): +def generate_span_id() -> int: """Get a new random span ID. Returns: @@ -388,7 +304,7 @@ def generate_span_id(): return random.getrandbits(64) -def generate_trace_id(): +def generate_trace_id() -> int: """Get a new random trace ID. Returns: @@ -415,16 +331,28 @@ def get_current_span(self): """See `opentelemetry.trace.Tracer.get_current_span`.""" return self._current_span_slot.get() - @contextmanager def start_span( self, name: str, parent: trace_api.ParentSpan = trace_api.Tracer.CURRENT_SPAN, kind: trace_api.SpanKind = trace_api.SpanKind.INTERNAL, - ) -> typing.Iterator["Span"]: + ) -> "Span": """See `opentelemetry.trace.Tracer.start_span`.""" - with self.use_span(self.create_span(name, parent, kind)) as span: - yield span + + span = self.create_span(name, parent, kind) + span.start() + return span + + def start_as_current_span( + self, + name: str, + parent: trace_api.ParentSpan = trace_api.Tracer.CURRENT_SPAN, + kind: trace_api.SpanKind = trace_api.SpanKind.INTERNAL, + ) -> typing.Iterator[trace_api.Span]: + """See `opentelemetry.trace.Tracer.start_as_current_span`.""" + + span = self.start_span(name, parent, kind) + return self.use_span(span, end_on_exit=True) def create_span( self, @@ -460,16 +388,20 @@ def create_span( ) @contextmanager - def use_span(self, span: "Span") -> typing.Iterator["Span"]: + def use_span( + self, span: trace_api.Span, end_on_exit: bool = False + ) -> typing.Iterator[trace_api.Span]: """See `opentelemetry.trace.Tracer.use_span`.""" - span.start() - span_snapshot = self._current_span_slot.get() - self._current_span_slot.set(span) try: - yield span + span_snapshot = self._current_span_slot.get() + self._current_span_slot.set(span) + try: + yield span + finally: + self._current_span_slot.set(span_snapshot) finally: - self._current_span_slot.set(span_snapshot) - span.end() + if end_on_exit: + span.end() def add_span_processor(self, span_processor: SpanProcessor) -> None: """Registers a new :class:`SpanProcessor` for this `Tracer`. 
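With these Tracer changes, creating a span and making it current are separate steps: start_span only creates and starts the span, use_span(..., end_on_exit=True) manages the current-span slot, and start_as_current_span combines the two. A brief sketch, assuming the SDK Tracer can be constructed with defaults (construction is not shown in this hunk) and using illustrative span names::

    from opentelemetry.sdk import trace
    from opentelemetry.util import time_ns

    tracer = trace.Tracer()  # assumed default construction

    # Detached: the span is started but not made current.
    span = tracer.start_span("query")
    with tracer.use_span(span, end_on_exit=True):
        ...  # span is current here and is ended when the block exits

    # Attached: create, start, make current and end in one step.
    with tracer.start_as_current_span("request") as request_span:
        request_span.add_event("handling request")

    # Start and end timestamps (in nanoseconds) can also be supplied
    # explicitly, which is the override this patch introduces.
    manual = tracer.create_span("manual")
    manual.start(start_time=time_ns())
    manual.end(end_time=time_ns())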
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/__init__.py index 0c011f99761..a76a658b3a3 100644 --- a/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/__init__.py +++ b/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/__init__.py @@ -12,10 +12,15 @@ # See the License for the specific language governing permissions and # limitations under the License. +import collections import logging +import threading import typing from enum import Enum +from opentelemetry.context import Context +from opentelemetry.util import time_ns + from .. import Span, SpanProcessor logger = logging.getLogger(__name__) @@ -68,11 +73,156 @@ def on_start(self, span: Span) -> None: pass def on_end(self, span: Span) -> None: + suppress_instrumentation = Context.suppress_instrumentation try: + Context.suppress_instrumentation = True self.span_exporter.export((span,)) # pylint: disable=broad-except except Exception as exc: logger.warning("Exception while exporting data: %s", exc) + finally: + Context.suppress_instrumentation = suppress_instrumentation def shutdown(self) -> None: self.span_exporter.shutdown() + + +class BatchExportSpanProcessor(SpanProcessor): + """Batch span processor implementation. + + BatchExportSpanProcessor is an implementation of `SpanProcessor` that + batches ended spans and pushes them to the configured `SpanExporter`. + """ + + def __init__( + self, + span_exporter: SpanExporter, + max_queue_size: int = 2048, + schedule_delay_millis: float = 5000, + max_export_batch_size: int = 512, + ): + if max_queue_size <= 0: + raise ValueError("max_queue_size must be a positive integer.") + + if schedule_delay_millis <= 0: + raise ValueError("schedule_delay_millis must be positive.") + + if max_export_batch_size <= 0: + raise ValueError( + "max_export_batch_size must be a positive integer." + ) + + if max_export_batch_size > max_queue_size: + raise ValueError( + "max_export_batch_size must be less than and equal to max_export_batch_size." 
+ ) + + self.span_exporter = span_exporter + self.queue = collections.deque( + [], max_queue_size + ) # type: typing.Deque[Span] + self.worker_thread = threading.Thread(target=self.worker, daemon=True) + self.condition = threading.Condition(threading.Lock()) + self.schedule_delay_millis = schedule_delay_millis + self.max_export_batch_size = max_export_batch_size + self.max_queue_size = max_queue_size + self.done = False + # flag that indicates that spans are being dropped + self._spans_dropped = False + # precallocated list to send spans to exporter + self.spans_list = [ + None + ] * self.max_export_batch_size # type: typing.List[typing.Optional[Span]] + self.worker_thread.start() + + def on_start(self, span: Span) -> None: + pass + + def on_end(self, span: Span) -> None: + if self.done: + logging.warning("Already shutdown, dropping span.") + return + if len(self.queue) == self.max_queue_size: + if not self._spans_dropped: + logging.warning("Queue is full, likely spans will be dropped.") + self._spans_dropped = True + + self.queue.appendleft(span) + + if len(self.queue) >= self.max_queue_size // 2: + with self.condition: + self.condition.notify() + + def worker(self): + timeout = self.schedule_delay_millis / 1e3 + while not self.done: + if len(self.queue) < self.max_export_batch_size: + with self.condition: + self.condition.wait(timeout) + if not self.queue: + # spurious notification, let's wait again + continue + if self.done: + # missing spans will be sent when calling flush + break + + # substract the duration of this export call to the next timeout + start = time_ns() + self.export() + end = time_ns() + duration = (end - start) / 1e9 + timeout = self.schedule_delay_millis / 1e3 - duration + + # be sure that all spans are sent + self._flush() + + def export(self) -> None: + """Exports at most max_export_batch_size spans.""" + idx = 0 + + # currently only a single thread acts as consumer, so queue.pop() will + # not raise an exception + while idx < self.max_export_batch_size and self.queue: + self.spans_list[idx] = self.queue.pop() + idx += 1 + suppress_instrumentation = Context.suppress_instrumentation + try: + Context.suppress_instrumentation = True + # Ignore type b/c the Optional[None]+slicing is too "clever" for mypy + self.span_exporter.export(self.spans_list[:idx]) # type: ignore + # pylint: disable=broad-except + except Exception: + logger.exception("Exception while exporting data.") + finally: + Context.suppress_instrumentation = suppress_instrumentation + + # clean up list + for index in range(idx): + self.spans_list[index] = None + + def _flush(self): + # export all elements until queue is empty + while self.queue: + self.export() + + def shutdown(self) -> None: + # signal the worker thread to finish and then wait for it + self.done = True + with self.condition: + self.condition.notify_all() + self.worker_thread.join() + self.span_exporter.shutdown() + + +class ConsoleSpanExporter(SpanExporter): + """Implementation of :class:`SpanExporter` that prints spans to the + console. + + This class can be used for diagnostic purposes. It prints the exported + spans to the console STDOUT. 
+ """ + + def export(self, spans: typing.Sequence[Span]) -> SpanExportResult: + for span in spans: + print(span) + return SpanExportResult.SUCCESS diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/in_memory_span_exporter.py b/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/in_memory_span_exporter.py index b5361205602..0818805f39c 100644 --- a/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/in_memory_span_exporter.py +++ b/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/in_memory_span_exporter.py @@ -20,7 +20,7 @@ class InMemorySpanExporter(SpanExporter): - """Implementation of :class:`.Exporter` that stores spans in memory. + """Implementation of :class:`.SpanExporter` that stores spans in memory. This class can be used for testing purposes. It stores the exported spans in a list in memory that can be retrieved using the diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/util.py b/opentelemetry-sdk/src/opentelemetry/sdk/util.py index d814433eeec..2265c29460b 100644 --- a/opentelemetry-sdk/src/opentelemetry/sdk/util.py +++ b/opentelemetry-sdk/src/opentelemetry/sdk/util.py @@ -12,12 +12,131 @@ # See the License for the specific language governing permissions and # limitations under the License. -import time +import datetime +import threading +from collections import OrderedDict, deque try: - time_ns = time.time_ns -# Python versions < 3.7 -except AttributeError: + # pylint: disable=ungrouped-imports + from collections.abc import MutableMapping + from collections.abc import Sequence +except ImportError: + # pylint: disable=no-name-in-module,ungrouped-imports + from collections import MutableMapping + from collections import Sequence - def time_ns(): - return int(time.time() * 1e9) + +def ns_to_iso_str(nanoseconds): + """Get an ISO 8601 string from time_ns value.""" + ts = datetime.datetime.utcfromtimestamp(nanoseconds / 1e9) + return ts.strftime("%Y-%m-%dT%H:%M:%S.%fZ") + + +class BoundedList(Sequence): + """An append only list with a fixed max size. + + Calls to `append` and `extend` will drop the oldest elements if there is + not enough room. + """ + + def __init__(self, maxlen): + self.dropped = 0 + self._dq = deque(maxlen=maxlen) # type: deque + self._lock = threading.Lock() + + def __repr__(self): + return "{}({}, maxlen={})".format( + type(self).__name__, list(self._dq), self._dq.maxlen + ) + + def __getitem__(self, index): + return self._dq[index] + + def __len__(self): + return len(self._dq) + + def __iter__(self): + with self._lock: + return iter(deque(self._dq)) + + def append(self, item): + with self._lock: + if len(self._dq) == self._dq.maxlen: + self.dropped += 1 + self._dq.append(item) + + def extend(self, seq): + with self._lock: + to_drop = len(seq) + len(self._dq) - self._dq.maxlen + if to_drop > 0: + self.dropped += to_drop + self._dq.extend(seq) + + @classmethod + def from_seq(cls, maxlen, seq): + seq = tuple(seq) + if len(seq) > maxlen: + raise ValueError + bounded_list = cls(maxlen) + # pylint: disable=protected-access + bounded_list._dq = deque(seq, maxlen=maxlen) + return bounded_list + + +class BoundedDict(MutableMapping): + """An ordered dict with a fixed max capacity. + + Oldest elements are dropped when the dict is full and a new element is + added. 
+ """ + + def __init__(self, maxlen): + if not isinstance(maxlen, int): + raise ValueError + if maxlen < 0: + raise ValueError + self.maxlen = maxlen + self.dropped = 0 + self._dict = OrderedDict() # type: OrderedDict + self._lock = threading.Lock() # type: threading.Lock + + def __repr__(self): + return "{}({}, maxlen={})".format( + type(self).__name__, dict(self._dict), self.maxlen + ) + + def __getitem__(self, key): + return self._dict[key] + + def __setitem__(self, key, value): + with self._lock: + if self.maxlen == 0: + self.dropped += 1 + return + + if key in self._dict: + del self._dict[key] + elif len(self._dict) == self.maxlen: + del self._dict[next(iter(self._dict.keys()))] + self.dropped += 1 + self._dict[key] = value + + def __delitem__(self, key): + del self._dict[key] + + def __iter__(self): + with self._lock: + return iter(self._dict.copy()) + + def __len__(self): + return len(self._dict) + + @classmethod + def from_map(cls, maxlen, mapping): + mapping = OrderedDict(mapping) + if len(mapping) > maxlen: + raise ValueError + bounded_dict = cls(maxlen) + # pylint: disable=protected-access + bounded_dict._dict = mapping + return bounded_dict diff --git a/opentelemetry-sdk/tests/metrics/__init__.py b/opentelemetry-sdk/tests/metrics/__init__.py new file mode 100644 index 00000000000..d853a7bcf65 --- /dev/null +++ b/opentelemetry-sdk/tests/metrics/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/opentelemetry-sdk/tests/metrics/test_metrics.py b/opentelemetry-sdk/tests/metrics/test_metrics.py new file mode 100644 index 00000000000..dc4151c4ee2 --- /dev/null +++ b/opentelemetry-sdk/tests/metrics/test_metrics.py @@ -0,0 +1,189 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
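The BoundedList and BoundedDict helpers added above both keep a dropped counter and evict the oldest entries once maxlen is reached. A minimal sketch of that behaviour, based on the sdk/util.py implementation in this patch (the concrete values are illustrative only):

from opentelemetry.sdk.util import BoundedDict, BoundedList

blist = BoundedList(2)
blist.extend([1, 2, 3])  # one element over capacity: the oldest (1) is dropped
assert list(blist) == [2, 3]
assert blist.dropped == 1

bdict = BoundedDict(2)
bdict["a"] = 1
bdict["b"] = 2
bdict["c"] = 3  # capacity reached: the oldest key ("a") is evicted
assert list(bdict) == ["b", "c"]
assert bdict.dropped == 1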
+ +import unittest +from unittest import mock + +from opentelemetry import metrics as metrics_api +from opentelemetry.sdk import metrics + + +class TestMeter(unittest.TestCase): + def test_extends_api(self): + meter = metrics.Meter() + self.assertIsInstance(meter, metrics_api.Meter) + + def test_record_batch(self): + meter = metrics.Meter() + label_keys = ("key1",) + label_values = ("value1",) + counter = metrics.Counter("name", "desc", "unit", float, label_keys) + record_tuples = [(counter, 1.0)] + meter.record_batch(label_values, record_tuples) + self.assertEqual(counter.get_handle(label_values).data, 1.0) + + def test_record_batch_multiple(self): + meter = metrics.Meter() + label_keys = ("key1", "key2", "key3") + label_values = ("value1", "value2", "value3") + counter = metrics.Counter("name", "desc", "unit", float, label_keys) + gauge = metrics.Gauge("name", "desc", "unit", int, label_keys) + measure = metrics.Measure("name", "desc", "unit", float, label_keys) + record_tuples = [(counter, 1.0), (gauge, 5), (measure, 3.0)] + meter.record_batch(label_values, record_tuples) + self.assertEqual(counter.get_handle(label_values).data, 1.0) + self.assertEqual(gauge.get_handle(label_values).data, 5) + self.assertEqual(measure.get_handle(label_values).data, 0) + + def test_record_batch_exists(self): + meter = metrics.Meter() + label_keys = ("key1",) + label_values = ("value1",) + counter = metrics.Counter("name", "desc", "unit", float, label_keys) + handle = counter.get_handle(label_values) + handle.update(1.0) + record_tuples = [(counter, 1.0)] + meter.record_batch(label_values, record_tuples) + self.assertEqual(counter.get_handle(label_values), handle) + self.assertEqual(handle.data, 2.0) + + def test_create_metric(self): + meter = metrics.Meter() + counter = meter.create_metric( + "name", "desc", "unit", int, metrics.Counter, () + ) + self.assertTrue(isinstance(counter, metrics.Counter)) + self.assertEqual(counter.value_type, int) + self.assertEqual(counter.name, "name") + + def test_create_gauge(self): + meter = metrics.Meter() + gauge = meter.create_metric( + "name", "desc", "unit", float, metrics.Gauge, () + ) + self.assertTrue(isinstance(gauge, metrics.Gauge)) + self.assertEqual(gauge.value_type, float) + self.assertEqual(gauge.name, "name") + + def test_create_measure(self): + meter = metrics.Meter() + measure = meter.create_metric( + "name", "desc", "unit", float, metrics.Measure, () + ) + self.assertTrue(isinstance(measure, metrics.Measure)) + self.assertEqual(measure.value_type, float) + self.assertEqual(measure.name, "name") + + +class TestMetric(unittest.TestCase): + def test_get_handle(self): + metric_types = [metrics.Counter, metrics.Gauge, metrics.Measure] + for _type in metric_types: + metric = _type("name", "desc", "unit", int, ("key",)) + label_values = ("value",) + handle = metric.get_handle(label_values) + self.assertEqual(metric.handles.get(label_values), handle) + + +class TestCounterHandle(unittest.TestCase): + def test_update(self): + handle = metrics.CounterHandle(float, True, False) + handle.update(2.0) + self.assertEqual(handle.data, 2.0) + + def test_add(self): + handle = metrics.CounterHandle(int, True, False) + handle.add(3) + self.assertEqual(handle.data, 3) + + def test_add_disabled(self): + handle = metrics.CounterHandle(int, False, False) + handle.add(3) + self.assertEqual(handle.data, 0) + + @mock.patch("opentelemetry.sdk.metrics.logger") + def test_add_monotonic(self, logger_mock): + handle = metrics.CounterHandle(int, True, True) + handle.add(-3) + 
self.assertEqual(handle.data, 0) + self.assertTrue(logger_mock.warning.called) + + @mock.patch("opentelemetry.sdk.metrics.logger") + def test_add_incorrect_type(self, logger_mock): + handle = metrics.CounterHandle(int, True, False) + handle.add(3.0) + self.assertEqual(handle.data, 0) + self.assertTrue(logger_mock.warning.called) + + +class TestGaugeHandle(unittest.TestCase): + def test_update(self): + handle = metrics.GaugeHandle(float, True, False) + handle.update(2.0) + self.assertEqual(handle.data, 2.0) + + def test_set(self): + handle = metrics.GaugeHandle(int, True, False) + handle.set(3) + self.assertEqual(handle.data, 3) + + def test_set_disabled(self): + handle = metrics.GaugeHandle(int, False, False) + handle.set(3) + self.assertEqual(handle.data, 0) + + @mock.patch("opentelemetry.sdk.metrics.logger") + def test_set_monotonic(self, logger_mock): + handle = metrics.GaugeHandle(int, True, True) + handle.set(-3) + self.assertEqual(handle.data, 0) + self.assertTrue(logger_mock.warning.called) + + @mock.patch("opentelemetry.sdk.metrics.logger") + def test_set_incorrect_type(self, logger_mock): + handle = metrics.GaugeHandle(int, True, False) + handle.set(3.0) + self.assertEqual(handle.data, 0) + self.assertTrue(logger_mock.warning.called) + + +class TestMeasureHandle(unittest.TestCase): + def test_update(self): + handle = metrics.MeasureHandle(float, False, False) + handle.update(2.0) + self.assertEqual(handle.data, 0) + + def test_record(self): + handle = metrics.MeasureHandle(int, False, False) + handle.record(3) + self.assertEqual(handle.data, 0) + + def test_record_disabled(self): + handle = metrics.MeasureHandle(int, False, False) + handle.record(3) + self.assertEqual(handle.data, 0) + + @mock.patch("opentelemetry.sdk.metrics.logger") + def test_record_monotonic(self, logger_mock): + handle = metrics.MeasureHandle(int, True, True) + handle.record(-3) + self.assertEqual(handle.data, 0) + self.assertTrue(logger_mock.warning.called) + + @mock.patch("opentelemetry.sdk.metrics.logger") + def test_record_incorrect_type(self, logger_mock): + handle = metrics.MeasureHandle(int, True, False) + handle.record(3.0) + self.assertEqual(handle.data, 0) + self.assertTrue(logger_mock.warning.called) diff --git a/opentelemetry-sdk/tests/resources/test_init.py b/opentelemetry-sdk/tests/resources/test_init.py index 47e986bd35d..2afe17e5633 100644 --- a/opentelemetry-sdk/tests/resources/test_init.py +++ b/opentelemetry-sdk/tests/resources/test_init.py @@ -1,3 +1,17 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + import unittest from opentelemetry.sdk import resources diff --git a/opentelemetry-sdk/tests/test_util.py b/opentelemetry-sdk/tests/test_util.py new file mode 100644 index 00000000000..ead310bd8d4 --- /dev/null +++ b/opentelemetry-sdk/tests/test_util.py @@ -0,0 +1,213 @@ +# Copyright 2019, OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import collections +import unittest + +from opentelemetry.sdk.util import BoundedDict, BoundedList + + +class TestBoundedList(unittest.TestCase): + base = [52, 36, 53, 29, 54, 99, 56, 48, 22, 35, 21, 65, 10, 95, 42, 60] + + def test_raises(self): + """Test corner cases + + - negative list size + - access out of range indexes + """ + with self.assertRaises(ValueError): + BoundedList(-1) + + blist = BoundedList(4) + blist.append(37) + blist.append(13) + + with self.assertRaises(IndexError): + _ = blist[2] + + with self.assertRaises(IndexError): + _ = blist[4] + + with self.assertRaises(IndexError): + _ = blist[-3] + + def test_from_seq(self): + list_len = len(self.base) + base_copy = list(self.base) + blist = BoundedList.from_seq(list_len, base_copy) + + self.assertEqual(len(blist), list_len) + + # modify base_copy and test that blist is not changed + for idx in range(list_len): + base_copy[idx] = idx * base_copy[idx] + + for idx in range(list_len): + self.assertEqual(blist[idx], self.base[idx]) + + # test that iter yields the correct number of elements + self.assertEqual(len(tuple(blist)), list_len) + + # sequence too big + with self.assertRaises(ValueError): + BoundedList.from_seq(list_len / 2, self.base) + + def test_append_no_drop(self): + """Append max capacity elements to the list without dropping elements.""" + # create empty list + list_len = len(self.base) + blist = BoundedList(list_len) + self.assertEqual(len(blist), 0) + + # fill list + for item in self.base: + blist.append(item) + + self.assertEqual(len(blist), list_len) + self.assertEqual(blist.dropped, 0) + + for idx in range(list_len): + self.assertEqual(blist[idx], self.base[idx]) + + # test __iter__ in BoundedList + for idx, val in enumerate(blist): + self.assertEqual(val, self.base[idx]) + + def test_append_drop(self): + """Append more than max capacity elements and test that oldest ones are dropped.""" + list_len = len(self.base) + # create full BoundedList + blist = BoundedList.from_seq(list_len, self.base) + + # try to append more items + for val in self.base: + # should drop the element without raising exceptions + blist.append(2 * val) + + self.assertEqual(len(blist), list_len) + self.assertEqual(blist.dropped, list_len) + + # test that new elements are in the list + for idx in range(list_len): + self.assertEqual(blist[idx], 2 * self.base[idx]) + + def test_extend_no_drop(self): + # create empty list + list_len = len(self.base) + blist = BoundedList(list_len) + self.assertEqual(len(blist), 0) + + # fill list + blist.extend(self.base) + + self.assertEqual(len(blist), list_len) + self.assertEqual(blist.dropped, 0) + + for idx in range(list_len): + self.assertEqual(blist[idx], self.base[idx]) + + # test __iter__ in BoundedList + for idx, val in enumerate(blist): + self.assertEqual(val, self.base[idx]) + + def test_extend_drop(self): + list_len = len(self.base) + # create full BoundedList + blist = BoundedList.from_seq(list_len, self.base) + other_list = [13, 37, 51, 91] + + # try to extend with more elements + blist.extend(other_list) + + self.assertEqual(len(blist), list_len) + self.assertEqual(blist.dropped, 
len(other_list)) + + +class TestBoundedDict(unittest.TestCase): + base = collections.OrderedDict( + [ + ("name", "Firulais"), + ("age", 7), + ("weight", 13), + ("vaccinated", True), + ] + ) + + def test_negative_maxlen(self): + with self.assertRaises(ValueError): + BoundedDict(-1) + + def test_from_map(self): + dic_len = len(self.base) + base_copy = collections.OrderedDict(self.base) + bdict = BoundedDict.from_map(dic_len, base_copy) + + self.assertEqual(len(bdict), dic_len) + + # modify base_copy and test that bdict is not changed + base_copy["name"] = "Bruno" + base_copy["age"] = 3 + + for key in self.base: + self.assertEqual(bdict[key], self.base[key]) + + # test that iter yields the correct number of elements + self.assertEqual(len(tuple(bdict)), dic_len) + + # map too big + with self.assertRaises(ValueError): + BoundedDict.from_map(dic_len / 2, self.base) + + def test_bounded_dict(self): + # create empty dict + dic_len = len(self.base) + bdict = BoundedDict(dic_len) + self.assertEqual(len(bdict), 0) + + # fill dict + for key in self.base: + bdict[key] = self.base[key] + + self.assertEqual(len(bdict), dic_len) + self.assertEqual(bdict.dropped, 0) + + for key in self.base: + self.assertEqual(bdict[key], self.base[key]) + + # test __iter__ in BoundedDict + for key in bdict: + self.assertEqual(bdict[key], self.base[key]) + + # updating an existing element should not drop + bdict["name"] = "Bruno" + self.assertEqual(bdict.dropped, 0) + + # try to append more elements + for key in self.base: + bdict["new-" + key] = self.base[key] + + self.assertEqual(len(bdict), dic_len) + self.assertEqual(bdict.dropped, dic_len) + + # test that elements in the dict are the new ones + for key in self.base: + self.assertEqual(bdict["new-" + key], self.base[key]) + + # delete an element + del bdict["new-name"] + self.assertEqual(len(bdict), dic_len - 1) + + with self.assertRaises(KeyError): + _ = bdict["new-name"] diff --git a/opentelemetry-sdk/tests/trace/export/test_export.py b/opentelemetry-sdk/tests/trace/export/test_export.py index ef9786ca63b..de7a5cd9d7e 100644 --- a/opentelemetry-sdk/tests/trace/export/test_export.py +++ b/opentelemetry-sdk/tests/trace/export/test_export.py @@ -12,22 +12,34 @@ # See the License for the specific language governing permissions and # limitations under the License. 
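The tests that follow exercise the BatchExportSpanProcessor introduced earlier in this patch. As a rough usage sketch (the processor and ConsoleSpanExporter names come from the SDK changes above; the queue, batch, and delay values here are illustrative, not necessarily the defaults):

from opentelemetry.sdk import trace
from opentelemetry.sdk.trace import export

tracer = trace.Tracer()

# Spans are buffered in a queue and exported in batches from a background
# worker thread; shutdown() flushes whatever is still queued.
span_processor = export.BatchExportSpanProcessor(
    export.ConsoleSpanExporter(),
    max_queue_size=2048,
    max_export_batch_size=512,
    schedule_delay_millis=5000,
)
tracer.add_span_processor(span_processor)

with tracer.start_as_current_span("example-span"):
    pass  # on_end() enqueues the finished span for the worker to export

span_processor.shutdown()  # drains the queue and shuts the exporter down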
+import time import unittest +from unittest import mock +from opentelemetry import trace as trace_api from opentelemetry.sdk import trace from opentelemetry.sdk.trace import export -class TestSimpleExportSpanProcessor(unittest.TestCase): - def test_simple_span_processor(self): - class MySpanExporter(export.SpanExporter): - def __init__(self, destination): - self.destination = destination +class MySpanExporter(export.SpanExporter): + """Very simple span exporter used for testing.""" + + def __init__(self, destination, max_export_batch_size=None): + self.destination = destination + self.max_export_batch_size = max_export_batch_size - def export(self, spans: trace.Span) -> export.SpanExportResult: - self.destination.extend(span.name for span in spans) - return export.SpanExportResult.SUCCESS + def export(self, spans: trace.Span) -> export.SpanExportResult: + if ( + self.max_export_batch_size is not None + and len(spans) > self.max_export_batch_size + ): + raise ValueError("Batch is too big") + self.destination.extend(span.name for span in spans) + return export.SpanExportResult.SUCCESS + +class TestSimpleExportSpanProcessor(unittest.TestCase): + def test_simple_span_processor(self): tracer = trace.Tracer() spans_names_list = [] @@ -42,3 +54,142 @@ def export(self, spans: trace.Span) -> export.SpanExportResult: pass self.assertListEqual(["xxx", "bar", "foo"], spans_names_list) + + +def _create_start_and_end_span(name, span_processor): + span = trace.Span( + name, + mock.Mock(spec=trace_api.SpanContext), + span_processor=span_processor, + ) + span.start() + span.end() + + +class TestBatchExportSpanProcessor(unittest.TestCase): + def test_batch_span_processor(self): + spans_names_list = [] + + my_exporter = MySpanExporter(destination=spans_names_list) + span_processor = export.BatchExportSpanProcessor(my_exporter) + + span_names = ["xxx", "bar", "foo"] + + for name in span_names: + _create_start_and_end_span(name, span_processor) + + span_processor.shutdown() + self.assertListEqual(span_names, spans_names_list) + + def test_batch_span_processor_lossless(self): + """Test that no spans are lost when sending max_queue_size spans""" + spans_names_list = [] + + my_exporter = MySpanExporter( + destination=spans_names_list, max_export_batch_size=128 + ) + span_processor = export.BatchExportSpanProcessor( + my_exporter, max_queue_size=512, max_export_batch_size=128 + ) + + for _ in range(512): + _create_start_and_end_span("foo", span_processor) + + span_processor.shutdown() + self.assertEqual(len(spans_names_list), 512) + + def test_batch_span_processor_many_spans(self): + """Test that no spans are lost when sending many spans""" + spans_names_list = [] + + my_exporter = MySpanExporter( + destination=spans_names_list, max_export_batch_size=128 + ) + span_processor = export.BatchExportSpanProcessor( + my_exporter, + max_queue_size=256, + max_export_batch_size=64, + schedule_delay_millis=100, + ) + + for _ in range(4): + for _ in range(256): + _create_start_and_end_span("foo", span_processor) + + time.sleep(0.05) # give some time for the exporter to upload spans + + span_processor.shutdown() + self.assertEqual(len(spans_names_list), 1024) + + def test_batch_span_processor_scheduled_delay(self): + """Test that spans are exported each schedule_delay_millis""" + spans_names_list = [] + + my_exporter = MySpanExporter(destination=spans_names_list) + span_processor = export.BatchExportSpanProcessor( + my_exporter, schedule_delay_millis=50 + ) + + # create single span + _create_start_and_end_span("foo", 
span_processor) + + time.sleep(0.05 + 0.02) + # span should be already exported + self.assertEqual(len(spans_names_list), 1) + + span_processor.shutdown() + + def test_batch_span_processor_parameters(self): + # zero max_queue_size + self.assertRaises( + ValueError, export.BatchExportSpanProcessor, None, max_queue_size=0 + ) + + # negative max_queue_size + self.assertRaises( + ValueError, + export.BatchExportSpanProcessor, + None, + max_queue_size=-500, + ) + + # zero schedule_delay_millis + self.assertRaises( + ValueError, + export.BatchExportSpanProcessor, + None, + schedule_delay_millis=0, + ) + + # negative schedule_delay_millis + self.assertRaises( + ValueError, + export.BatchExportSpanProcessor, + None, + schedule_delay_millis=-500, + ) + + # zero max_export_batch_size + self.assertRaises( + ValueError, + export.BatchExportSpanProcessor, + None, + max_export_batch_size=0, + ) + + # negative max_export_batch_size + self.assertRaises( + ValueError, + export.BatchExportSpanProcessor, + None, + max_export_batch_size=-500, + ) + + # max_export_batch_size > max_queue_size: + self.assertRaises( + ValueError, + export.BatchExportSpanProcessor, + None, + max_queue_size=256, + max_export_batch_size=512, + ) diff --git a/opentelemetry-sdk/tests/trace/test_trace.py b/opentelemetry-sdk/tests/trace/test_trace.py index 0570affc411..fa8547a0a5f 100644 --- a/opentelemetry-sdk/tests/trace/test_trace.py +++ b/opentelemetry-sdk/tests/trace/test_trace.py @@ -16,7 +16,8 @@ from unittest import mock from opentelemetry import trace as trace_api -from opentelemetry.sdk import trace, util +from opentelemetry.sdk import trace +from opentelemetry.util import time_ns class TestTracer(unittest.TestCase): @@ -31,17 +32,17 @@ def test_start_span_implicit(self): self.assertIsNone(tracer.get_current_span()) - with tracer.start_span("root") as root: - self.assertIs(tracer.get_current_span(), root) + root = tracer.start_span("root") + self.assertIsNotNone(root.start_time) + self.assertIsNone(root.end_time) + self.assertEqual(root.kind, trace_api.SpanKind.INTERNAL) - self.assertIsNotNone(root.start_time) - self.assertIsNone(root.end_time) - self.assertEqual(root.kind, trace_api.SpanKind.INTERNAL) + with tracer.use_span(root, True): + self.assertIs(tracer.get_current_span(), root) with tracer.start_span( "child", kind=trace_api.SpanKind.CLIENT ) as child: - self.assertIs(tracer.get_current_span(), child) self.assertIs(child.parent, root) self.assertEqual(child.kind, trace_api.SpanKind.CLIENT) @@ -63,9 +64,9 @@ def test_start_span_implicit(self): root_context.trace_options, child_context.trace_options ) - # After exiting the child's scope the parent should become the - # current span again. - self.assertIs(tracer.get_current_span(), root) + # Verify start_span() did not set the current span. 
+ self.assertIs(tracer.get_current_span(), root) + self.assertIsNotNone(child.end_time) self.assertIsNone(tracer.get_current_span()) @@ -81,26 +82,25 @@ def test_start_span_explicit(self): self.assertIsNone(tracer.get_current_span()) + root = tracer.start_span("root") + self.assertIsNotNone(root.start_time) + self.assertIsNone(root.end_time) + # Test with the implicit root span - with tracer.start_span("root") as root: + with tracer.use_span(root, True): self.assertIs(tracer.get_current_span(), root) - self.assertIsNotNone(root.start_time) - self.assertIsNone(root.end_time) - with tracer.start_span("stepchild", other_parent) as child: - # The child should become the current span as usual, but its - # parent should be the one passed in, not the - # previously-current span. - self.assertIs(tracer.get_current_span(), child) + # The child's parent should be the one passed in, + # not the current span. self.assertNotEqual(child.parent, root) self.assertIs(child.parent, other_parent) self.assertIsNotNone(child.start_time) self.assertIsNone(child.end_time) - # The child should inherit its context fromr the explicit - # parent, not the previously-current span. + # The child should inherit its context from the explicit + # parent, not the current span. child_context = child.get_context() self.assertEqual(other_parent.trace_id, child_context.trace_id) self.assertNotEqual( @@ -113,6 +113,60 @@ def test_start_span_explicit(self): other_parent.trace_options, child_context.trace_options ) + # Verify start_span() did not set the current span. + self.assertIs(tracer.get_current_span(), root) + + # Verify ending the child did not set the current span. + self.assertIs(tracer.get_current_span(), root) + self.assertIsNotNone(child.end_time) + + def test_start_as_current_span_implicit(self): + tracer = trace.Tracer("test_start_as_current_span_implicit") + + self.assertIsNone(tracer.get_current_span()) + + with tracer.start_as_current_span("root") as root: + self.assertIs(tracer.get_current_span(), root) + + with tracer.start_as_current_span("child") as child: + self.assertIs(tracer.get_current_span(), child) + self.assertIs(child.parent, root) + + # After exiting the child's scope the parent should become the + # current span again. + self.assertIs(tracer.get_current_span(), root) + self.assertIsNotNone(child.end_time) + + self.assertIsNone(tracer.get_current_span()) + self.assertIsNotNone(root.end_time) + + def test_start_as_current_span_explicit(self): + tracer = trace.Tracer("test_start_as_current_span_explicit") + + other_parent = trace_api.SpanContext( + trace_id=0x000000000000000000000000DEADBEEF, + span_id=0x00000000DEADBEF0, + ) + + self.assertIsNone(tracer.get_current_span()) + + # Test with the implicit root span + with tracer.start_as_current_span("root") as root: + self.assertIs(tracer.get_current_span(), root) + + self.assertIsNotNone(root.start_time) + self.assertIsNone(root.end_time) + + with tracer.start_as_current_span( + "stepchild", other_parent + ) as child: + # The child should become the current span as usual, but its + # parent should be the one passed in, not the + # previously-current span. + self.assertIs(tracer.get_current_span(), child) + self.assertNotEqual(child.parent, root) + self.assertIs(child.parent, other_parent) + # After exiting the child's scope the last span on the stack should # become current, not the child's parent. 
self.assertNotEqual(tracer.get_current_span(), other_parent) @@ -143,7 +197,7 @@ def test_span_members(self): self.assertIsNone(tracer.get_current_span()) - with tracer.start_span("root") as root: + with tracer.start_as_current_span("root") as root: # attributes root.set_attribute("component", "http") root.set_attribute("http.method", "GET") @@ -174,7 +228,7 @@ def test_span_members(self): # events root.add_event("event0") root.add_event("event1", {"name": "birthday"}) - now = util.time_ns() + now = time_ns() root.add_lazy_event( trace_api.Event("event2", now, {"name": "hello"}) ) @@ -234,6 +288,34 @@ def test_start_span(self): span.start() self.assertEqual(start_time, span.start_time) + # default status + self.assertTrue(span.status.is_ok) + self.assertIs( + span.status.canonical_code, trace_api.status.StatusCanonicalCode.OK + ) + self.assertIs(span.status.description, None) + + # status + new_status = trace_api.status.Status( + trace_api.status.StatusCanonicalCode.CANCELLED, "Test description" + ) + span.set_status(new_status) + self.assertIs( + span.status.canonical_code, + trace_api.status.StatusCanonicalCode.CANCELLED, + ) + self.assertIs(span.status.description, "Test description") + + def test_span_override_start_and_end_time(self): + """Span sending custom start_time and end_time values""" + span = trace.Span("name", mock.Mock(spec=trace_api.SpanContext)) + start_time = 123 + span.start(start_time) + self.assertEqual(start_time, span.start_time) + end_time = 456 + span.end(end_time) + self.assertEqual(end_time, span.end_time) + def test_ended_span(self): """"Events, attributes are not allowed after span is ended""" tracer = trace.Tracer("test_ended_span") @@ -243,7 +325,7 @@ def test_ended_span(self): span_id=trace.generate_span_id(), ) - with tracer.start_span("root") as root: + with tracer.start_as_current_span("root") as root: # everything should be empty at the beginning self.assertEqual(len(root.attributes), 0) self.assertEqual(len(root.events), 0) @@ -270,6 +352,19 @@ def test_ended_span(self): root.update_name("xxx") self.assertEqual(root.name, "root") + new_status = trace_api.status.Status( + trace_api.status.StatusCanonicalCode.CANCELLED, + "Test description", + ) + root.set_status(new_status) + # default status + self.assertTrue(root.status.is_ok) + self.assertEqual( + root.status.canonical_code, + trace_api.status.StatusCanonicalCode.OK, + ) + self.assertIs(root.status.description, None) + def span_event_start_fmt(span_processor_name, span_name): return span_processor_name + ":" + span_name + ":start" @@ -302,9 +397,9 @@ def test_span_processor(self): sp1 = MySpanProcessor("SP1", spans_calls_list) sp2 = MySpanProcessor("SP2", spans_calls_list) - with tracer.start_span("foo"): - with tracer.start_span("bar"): - with tracer.start_span("baz"): + with tracer.start_as_current_span("foo"): + with tracer.start_as_current_span("bar"): + with tracer.start_as_current_span("baz"): pass # at this point lists must be empty @@ -313,13 +408,13 @@ def test_span_processor(self): # add single span processor tracer.add_span_processor(sp1) - with tracer.start_span("foo"): + with tracer.start_as_current_span("foo"): expected_list.append(span_event_start_fmt("SP1", "foo")) - with tracer.start_span("bar"): + with tracer.start_as_current_span("bar"): expected_list.append(span_event_start_fmt("SP1", "bar")) - with tracer.start_span("baz"): + with tracer.start_as_current_span("baz"): expected_list.append(span_event_start_fmt("SP1", "baz")) expected_list.append(span_event_end_fmt("SP1", "baz")) @@ 
-336,15 +431,15 @@ def test_span_processor(self): # go for multiple span processors tracer.add_span_processor(sp2) - with tracer.start_span("foo"): + with tracer.start_as_current_span("foo"): expected_list.append(span_event_start_fmt("SP1", "foo")) expected_list.append(span_event_start_fmt("SP2", "foo")) - with tracer.start_span("bar"): + with tracer.start_as_current_span("bar"): expected_list.append(span_event_start_fmt("SP1", "bar")) expected_list.append(span_event_start_fmt("SP2", "bar")) - with tracer.start_span("baz"): + with tracer.start_as_current_span("baz"): expected_list.append(span_event_start_fmt("SP1", "baz")) expected_list.append(span_event_start_fmt("SP2", "baz")) @@ -369,9 +464,9 @@ def test_add_span_processor_after_span_creation(self): # Span processors are created but not added to the tracer yet sp = MySpanProcessor("SP1", spans_calls_list) - with tracer.start_span("foo"): - with tracer.start_span("bar"): - with tracer.start_span("baz"): + with tracer.start_as_current_span("foo"): + with tracer.start_as_current_span("bar"): + with tracer.start_as_current_span("baz"): # add span processor after spans have been created tracer.add_span_processor(sp) diff --git a/pyproject.toml b/pyproject.toml index a8f43fefdf1..eff7e2e3ec6 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,2 +1,21 @@ [tool.black] line-length = 79 +exclude = ''' +( + /( + \.eggs # exclude a few common directories in the + | \.git # root of the project + | \.hg + | \.mypy_cache + | \.tox + | \.venv + | _build + | buck-out + | build + | dist + | ext/opentelemetry-ext-jaeger/src/opentelemetry/ext/jaeger/gen # generated files + )/ + | foo.py # also separately exclude a file named foo.py in + # the root of the project +) +''' diff --git a/tox.ini b/tox.ini index d0fb6cc2dc4..56249e74a20 100644 --- a/tox.ini +++ b/tox.ini @@ -2,8 +2,8 @@ skipsdist = True skip_missing_interpreters = True envlist = - py3{4,5,6,7,8}-test-{api,sdk,example-app,ext-wsgi,ext-http-requests} - pypy3-test-{api,sdk,example-app,ext-wsgi,ext-http-requests} + py3{4,5,6,7,8}-test-{api,sdk,example-app,ext-wsgi,ext-http-requests,ext-jaeger,ext-pymongo} + pypy3-test-{api,sdk,example-app,ext-wsgi,ext-http-requests,ext-jaeger,ext-pymongo} lint py37-{mypy,mypyinstalled} docs @@ -22,9 +22,11 @@ setenv = changedir = test-api: opentelemetry-api/tests test-sdk: opentelemetry-sdk/tests - test-example-app: opentelemetry-example-app/tests - test-ext-wsgi: ext/opentelemetry-ext-wsgi/tests test-ext-http-requests: ext/opentelemetry-ext-http-requests/tests + test-ext-jaeger: ext/opentelemetry-ext-jaeger/tests + test-ext-wsgi: ext/opentelemetry-ext-wsgi/tests + test-ext-pymongo: ext/opentelemetry-ext-pymongo/tests + test-example-app: examples/opentelemetry-example-app/tests commands_pre = ; Install without -e to test the actual installation @@ -32,12 +34,15 @@ commands_pre = test: pip install {toxinidir}/opentelemetry-api test-sdk: pip install {toxinidir}/opentelemetry-sdk example-app: pip install {toxinidir}/opentelemetry-sdk - example-app: pip install {toxinidir}/ext/opentelemetry-ext-wsgi example-app: pip install {toxinidir}/ext/opentelemetry-ext-http-requests - example-app: pip install {toxinidir}/opentelemetry-example-app + example-app: pip install {toxinidir}/ext/opentelemetry-ext-wsgi + example-app: pip install {toxinidir}/examples/opentelemetry-example-app ext: pip install {toxinidir}/opentelemetry-api wsgi: pip install {toxinidir}/ext/opentelemetry-ext-wsgi http-requests: pip install {toxinidir}/ext/opentelemetry-ext-http-requests + pymongo: pip 
install {toxinidir}/ext/opentelemetry-ext-pymongo + jaeger: pip install {toxinidir}/opentelemetry-sdk + jaeger: pip install {toxinidir}/ext/opentelemetry-ext-jaeger ; Using file:// here because otherwise tox invokes just "pip install ; opentelemetry-api", leading to an error @@ -66,9 +71,12 @@ deps = commands_pre = pip install -e {toxinidir}/opentelemetry-api pip install -e {toxinidir}/opentelemetry-sdk - pip install -e {toxinidir}/ext/opentelemetry-ext-wsgi + pip install -e {toxinidir}/ext/opentelemetry-ext-azure-monitor pip install -e {toxinidir}/ext/opentelemetry-ext-http-requests - pip install -e {toxinidir}/opentelemetry-example-app + pip install -e {toxinidir}/ext/opentelemetry-ext-jaeger + pip install -e {toxinidir}/ext/opentelemetry-ext-wsgi + pip install -e {toxinidir}/ext/opentelemetry-ext-pymongo + pip install -e {toxinidir}/examples/opentelemetry-example-app commands = ; Prefer putting everything in one pylint command to profit from duplication @@ -78,11 +86,18 @@ commands = opentelemetry-api/tests/ \ opentelemetry-sdk/src/opentelemetry \ opentelemetry-sdk/tests/ \ + ext/opentelemetry-ext-azure-monitor/examples/ \ + ext/opentelemetry-ext-azure-monitor/src/ \ + ext/opentelemetry-ext-azure-monitor/tests/ \ ext/opentelemetry-ext-http-requests/src/ \ ext/opentelemetry-ext-http-requests/tests/ \ + ext/opentelemetry-ext-jaeger/src/opentelemetry \ + ext/opentelemetry-ext-jaeger/tests/ \ ext/opentelemetry-ext-wsgi/tests/ \ - opentelemetry-example-app/src/opentelemetry_example_app/ \ - opentelemetry-example-app/tests/ + ext/opentelemetry-ext-pymongo/src/ \ + ext/opentelemetry-ext-pymongo/tests/ \ + examples/opentelemetry-example-app/src/opentelemetry_example_app/ \ + examples/opentelemetry-example-app/tests/ flake8 . isort --check-only --diff --recursive .
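The trace SDK tests earlier in this patch pin down two behaviours worth calling out: Tracer.start_span() now only creates and starts a span without making it current (use_span() or start_as_current_span() handle that), and Span.start()/Span.end() accept explicit timestamps. A short sketch, using only calls that appear in those tests:

from unittest import mock

from opentelemetry import trace as trace_api
from opentelemetry.sdk import trace
from opentelemetry.util import time_ns

tracer = trace.Tracer()

root = tracer.start_span("root")  # started, but not set as the current span
assert tracer.get_current_span() is None

with tracer.use_span(root, True):  # True asks use_span to end the span on exit
    assert tracer.get_current_span() is root
    with tracer.start_as_current_span("child") as child:
        assert tracer.get_current_span() is child

# Explicit start and end timestamps (nanoseconds), as in the override test above
span = trace.Span("timed", mock.Mock(spec=trace_api.SpanContext))
span.start(time_ns())
span.end(time_ns())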