sdk: make test_batch_span_processor_scheduled_delay a bit more robust #3938
Conversation
Force-pushed from 2e6fe14 to 8b2cd41
It happened that tests failed because the delay fired some microseconds early:

> self.assertGreaterEqual((export_time - start_time) * 1e3, 500)
> E AssertionError: 499.9737739562988 not greater than or equal to 500

For the purposes of this test, I think a few microseconds early or late should be perfectly acceptable. Can we instead just update L489 to use assertAlmostEqual() with a reasonable delta?
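A minimal sketch of what the suggested change could look like. The 499.97 ms value is taken from the failure quoted above; the 25 ms delta and the test name are illustrative, not the actual test code:

```python
import unittest


class ScheduledDelayExample(unittest.TestCase):
    def test_delay_within_tolerance(self):
        # Stand-in values for the measured export delay from the failure above.
        start_time = 0.0                   # seconds
        export_time = 0.4999737739562988   # seconds, i.e. ~499.97 ms
        elapsed_ms = (export_time - start_time) * 1e3
        # assertGreaterEqual(elapsed_ms, 500) fails when the timer fires a few
        # microseconds early; a delta tolerates jitter in either direction.
        self.assertAlmostEqual(elapsed_ms, 500, delta=25)


result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ScheduledDelayExample)
)
```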
Ah, I see you mentioned this in the issue, @xrmx. That would be my preferred fix.
The first version was:

but I was asked to change it.
Thanks. @ocelotl, I see your comment above. I imagine this test can be flaky on any platform since it depends on real-world timing. IMO the PyPy behavior is working as intended.

The other issue: if the actual time is way larger than 500, the test still passes. IMO assertAlmostEqual would be an improvement across the board. Wdyt?
Force-pushed from 762f4da to b115232
Force-pushed from b115232 to dcec3b0
Updated following @aabmass's suggestion.
@ocelotl PTAL
I was thinking the same thing, approved.
Well, still not enough:
OK, two things, in my opinion:
So:
Force-pushed from ce74344 to fc24d14
me: Bumped the delta to 16 ms as seen on CI.
pypy: hold my beer:
Force-pushed from 57be3e0 to eca91f4
It happened that tests failed because the delay was fired some microseconds earlier:

> self.assertGreaterEqual((export_time - start_time) * 1e3, 500)
> E AssertionError: 499.9737739562988 not greater than or equal to 500

Use assertAlmostEqual to accept a similar enough value (delta=25) and avoid too-big values. Skip tests on Windows PyPy because of random huge spikes:

> E AssertionError: 2253.103017807007 != 500 within 25 delta (1744.1030178070068 difference)

Fix open-telemetry#3911
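The shape of the final fix described in this commit message can be sketched like so. The skip condition and the delta=25 come from the commit text; the class name, test body, and the 503.2 ms measurement are illustrative stand-ins, not the actual test:

```python
import platform
import sys
import unittest

# Skip condition from the commit message: random huge timing spikes were
# observed only on Windows PyPy.
IS_WINDOWS_PYPY = (
    sys.platform == "win32" and platform.python_implementation() == "PyPy"
)


class ScheduledDelaySketch(unittest.TestCase):
    @unittest.skipIf(IS_WINDOWS_PYPY, "random huge spikes on Windows PyPy")
    def test_scheduled_delay(self):
        measured_ms = 503.2  # placeholder for the measured export delay
        # delta=25 accepts a value a bit early or late, but still rejects
        # outliers such as the 2253 ms spike quoted above.
        self.assertAlmostEqual(measured_ms, 500, delta=25)
```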
The last metric collection, after the thread has been notified to shut down, does not handle a MetricsTimeoutError raised by the submission. Handle it to match what we already do in the regular loop collection. Seen while TestBatchSpanProcessor.test_batch_span_processor_scheduled_delay was failing with:

opentelemetry-sdk/tests/metrics/test_periodic_exporting_metric_reader.py::TestPeriodicExportingMetricReader::test_metric_timeout_does_not_kill_worker_thread
\_pytest\threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread OtelPeriodicExportingMetricReader
Traceback (most recent call last):
  File "C:\hostedtoolcache\windows\Python\3.11.9\x64\Lib\threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "C:\hostedtoolcache\windows\Python\3.11.9\x64\Lib\threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "D:\a\opentelemetry-python\opentelemetry-python\opentelemetry-sdk\src\opentelemetry\sdk\metrics\_internal\export\__init__.py", line 522, in _ticker
    self.collect(timeout_millis=self._export_interval_millis)
  File "D:\a\opentelemetry-python\opentelemetry-python\opentelemetry-sdk\tests\metrics\test_periodic_exporting_metric_reader.py", line 87, in collect
    raise self._collect_exception
opentelemetry.sdk.metrics._internal.exceptions.MetricsTimeoutError: test timeout
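The fix can be sketched as a standalone mock, not the actual SDK code; the class, method names, and exception class here are simplified stand-ins for the SDK's internals:

```python
# Standalone sketch: the ticker thread already catches MetricsTimeoutError
# inside its periodic loop; the fix applies the same handling to the one
# final collect() issued after shutdown is signalled.


class MetricsTimeoutError(Exception):
    """Stand-in for the SDK's MetricsTimeoutError."""


class PeriodicReaderSketch:
    def __init__(self, collect_fn):
        self._collect_fn = collect_fn

    def _collect_safely(self):
        try:
            self._collect_fn()
        except MetricsTimeoutError:
            pass  # logged and swallowed, as in the periodic loop

    def _ticker(self, shutdown):
        while not shutdown():
            self._collect_safely()
        # Before the fix this last collection was unguarded, so a timeout
        # here escaped the thread and triggered the
        # PytestUnhandledThreadExceptionWarning shown above.
        self._collect_safely()


def timing_out_collect():
    raise MetricsTimeoutError("test timeout")


reader = PeriodicReaderSketch(timing_out_collect)
reader._ticker(shutdown=lambda: True)  # returns cleanly instead of raising
```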
Force-pushed from 3704cfb to d600e87
Opened #3967 for revising the skipped tests on PyPy / Windows. Today I added another commit to this PR after noticing that in some failing PyPy tests the MetricsTimeoutError was raised during the last collection. Since we are catching it in the loop, I think we should catch it there too. You can find the stack trace here:
Description
It happened that tests failed because the delay fired some microseconds early:

We should probably revise all these Windows PyPy skips once we have a Python 3.9 baseline and PyPy >= 7.3.12.
Fix #3911
Type of change
How Has This Been Tested?
Does This PR Require a Contrib Repo Change?
Checklist: