
sdk: Implement force_flush for span processors #389

Merged

Conversation

mauriciovasquezbernal
Member

open-telemetry/opentelemetry-specification#370 added the requirement to have a "force_flush" method in the span processors.

This commit exposes an already existing internal method on the batch span processor that does exactly that; it also adds force_flush to the span processor interface and as a no-op to the simple span processor.
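For context, a minimal sketch of what the described interface change could look like. The timeout parameter, return value, and docstrings here are assumptions rather than the literal diff, and the other SpanProcessor methods (on_start, on_end, shutdown) are omitted for brevity:

```python
import abc


class SpanProcessor(abc.ABC):
    @abc.abstractmethod
    def force_flush(self, timeout_millis: int = 30000) -> bool:
        """Export all ended spans that have not yet been exported.

        Returns True if the flush completed within the timeout.
        """


class SimpleExportSpanProcessor(SpanProcessor):
    def force_flush(self, timeout_millis: int = 30000) -> bool:
        # Spans are exported synchronously as they end, so nothing is
        # buffered and flushing is a no-op.
        return True
```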

@mauriciovasquezbernal mauriciovasquezbernal requested a review from a team January 29, 2020 16:44
@codecov-io

codecov-io commented Jan 29, 2020

Codecov Report

Merging #389 into master will decrease coverage by 0.2%.
The diff coverage is 80%.


@@            Coverage Diff             @@
##           master     #389      +/-   ##
==========================================
- Coverage   85.35%   85.14%   -0.21%     
==========================================
  Files          38       38              
  Lines        1925     1912      -13     
  Branches      226      226              
==========================================
- Hits         1643     1628      -15     
- Misses        218      219       +1     
- Partials       64       65       +1
Impacted Files Coverage Δ
...emetry-sdk/src/opentelemetry/sdk/trace/__init__.py 90.67% <100%> (-0.28%) ⬇️
...sdk/src/opentelemetry/sdk/trace/export/__init__.py 86.11% <75%> (-0.69%) ⬇️
...ry-sdk/src/opentelemetry/sdk/resources/__init__.py 68.18% <0%> (-2.66%) ⬇️
...ry-ext-wsgi/src/opentelemetry/ext/wsgi/__init__.py 67.59% <0%> (-0.59%) ⬇️
...opentelemetry/sdk/context/propagation/b3_format.py 86.79% <0%> (-0.49%) ⬇️
opentelemetry-sdk/src/opentelemetry/sdk/util.py 85.54% <0%> (-0.35%) ⬇️
...ts/src/opentelemetry/ext/http_requests/__init__.py 89.47% <0%> (-0.27%) ⬇️
...src/opentelemetry/ext/opentracing_shim/__init__.py 95.76% <0%> (-0.18%) ⬇️
...xt-jaeger/src/opentelemetry/ext/jaeger/__init__.py 86.39% <0%> (-0.16%) ⬇️
... and 2 more

Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@c24t c24t (Member) left a comment

LGTM after updating the tests for https://github.com/open-telemetry/opentelemetry-python/pull/389/files#r372660672.

It sounds like it'd be sufficient to check that calling shutdown results in a flush even without calling force_flush first.

@codeboten codeboten (Contributor) left a comment

Thanks for updating the code to match the spec!

Test that:
- the processor keeps working after a call to force_flush() (sketched below)
- shutdown() flushes the processor
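A hedged sketch of the two suggested tests, written against the SDK class names of that era (BatchExportSpanProcessor, InMemorySpanExporter). The exact module paths, class names, and force_flush return value may differ between releases and are assumptions here:

```python
import unittest

from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchExportSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter


class TestForceFlush(unittest.TestCase):
    def _make(self):
        exporter = InMemorySpanExporter()
        processor = BatchExportSpanProcessor(exporter)
        provider = TracerProvider()
        provider.add_span_processor(processor)
        return provider.get_tracer(__name__), processor, exporter

    def test_processor_works_after_force_flush(self):
        tracer, processor, exporter = self._make()
        with tracer.start_as_current_span("before-flush"):
            pass
        processor.force_flush()
        # The processor must keep accepting and exporting spans afterwards.
        with tracer.start_as_current_span("after-flush"):
            pass
        processor.force_flush()
        self.assertEqual(len(exporter.get_finished_spans()), 2)

    def test_shutdown_flushes_without_force_flush(self):
        tracer, processor, exporter = self._make()
        with tracer.start_as_current_span("span"):
            pass
        # shutdown() alone should drain the queue, with no prior force_flush().
        processor.shutdown()
        self.assertEqual(len(exporter.get_finished_spans()), 1)
```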
@toumorokoshi toumorokoshi merged commit 9ec1f1f into open-telemetry:master Feb 4, 2020
mauriciovasquezbernal added a commit to kinvolk/opentelemetry-python that referenced this pull request Feb 6, 2020
open-telemetry#389 implemented force_flush() for the span processors. For BatchSpanProcessor it was implemented by exposing an already existing _flush() method. This created a race condition, because _flush() was intended to be called only from the worker thread: it uses the export() method, which is not thread safe.

As a result, after that PR some tests were failing randomly because export() was being executed on two different threads: the worker thread and the user thread calling force_flush().

This commit fixes the problem by implementing a more sophisticated flush mechanism. When a flush is requested, a special span token is inserted into the spans queue, a flag indicating that a flush operation is in progress is set, and the worker thread is woken up. The caller then waits on a condition variable until the worker thread signals that the token has been processed.

The worker thread has new logic to avoid sleeping (waiting on the condition variable) while a flush operation is in progress, and it notifies the caller (using another condition variable) once the token has been processed.
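For illustration, here is a simplified, self-contained sketch of the token plus condition-variable mechanism described above. It is not the SDK's actual BatchSpanProcessor code; the class and attribute names are invented for the example, and batching limits, error handling, and timeout tuning are omitted:

```python
import collections
import threading

# Sentinel object inserted into the queue to mark where a flush was requested.
_FLUSH_TOKEN = object()


class BatchProcessorSketch:
    """Toy batch processor illustrating the flush-token mechanism."""

    def __init__(self, export):
        self._export = export                          # callable taking a list of spans
        self._queue = collections.deque()
        self._condition = threading.Condition()        # wakes up the worker thread
        self._flush_condition = threading.Condition()  # signals flush completion to the caller
        self._flush_done = False
        self._stopping = False
        self._worker = threading.Thread(target=self._worker_loop, daemon=True)
        self._worker.start()

    def on_end(self, span):
        with self._condition:
            self._queue.append(span)
            self._condition.notify()

    def force_flush(self, timeout=30.0):
        with self._flush_condition:
            self._flush_done = False
            with self._condition:
                # The token marks the point in the queue up to which we must export.
                self._queue.append(_FLUSH_TOKEN)
                self._condition.notify()               # wake the worker immediately
            # Wait until the worker reports that the token has been processed.
            self._flush_condition.wait_for(lambda: self._flush_done, timeout)
            return self._flush_done

    def shutdown(self):
        with self._condition:
            self._stopping = True
            self._condition.notify()
        self._worker.join()

    def _worker_loop(self):
        while True:
            with self._condition:
                if not self._queue and not self._stopping:
                    self._condition.wait()             # sleep until there is work
                batch, flush_requested = [], False
                while self._queue:
                    item = self._queue.popleft()
                    if item is _FLUSH_TOKEN:
                        flush_requested = True
                    else:
                        batch.append(item)
                stopping = self._stopping
            if batch:
                self._export(batch)                    # export() runs only on this thread
            if flush_requested:
                with self._flush_condition:
                    self._flush_done = True
                    self._flush_condition.notify_all()
            if stopping:
                return
```

With this structure a caller can invoke force_flush() from any thread, while export() still runs only on the worker thread.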
c24t pushed a commit that referenced this pull request Feb 7, 2020
toumorokoshi pushed a commit to toumorokoshi/opentelemetry-python that referenced this pull request Feb 17, 2020
toumorokoshi pushed a commit to toumorokoshi/opentelemetry-python that referenced this pull request Feb 17, 2020
@mauriciovasquezbernal mauriciovasquezbernal deleted the mauricio/implement-force-flush branch April 14, 2020 21:51