Enable metric integration test for req-blocking #2445
Conversation
```diff
@@ -293,7 +293,7 @@ mod tests {
         // Set up the exporter
         let exporter = create_exporter();
         let reader = PeriodicReader::builder(exporter)
-            .with_interval(Duration::from_millis(100))
+            .with_interval(Duration::from_secs(30))
```
Putting a large interval here to make sure the test is actually testing shutdown-triggered exporting, not timer-triggered exporting.
Good idea
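The distinction being tested can be sketched with a self-contained mock (the names below are hypothetical stand-ins, not the opentelemetry-sdk API): when the interval is far longer than the test runs, any export that happens must have been triggered by shutdown, not by the timer.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::time::Duration;

// Hypothetical stand-in for a metrics exporter: just counts export calls.
struct CountingExporter {
    exports: Arc<AtomicUsize>,
}

impl CountingExporter {
    fn export(&self) {
        self.exports.fetch_add(1, Ordering::SeqCst);
    }
}

// Minimal sketch of a periodic reader: exports on each timer tick, and once on shutdown.
struct PeriodicReaderSketch {
    exporter: CountingExporter,
    interval: Duration,
}

impl PeriodicReaderSketch {
    fn run_for(&self, test_duration: Duration) {
        // Number of timer ticks that fit into the test run.
        let ticks = test_duration.as_millis() / self.interval.as_millis().max(1);
        for _ in 0..ticks {
            self.exporter.export(); // timer-triggered export
        }
    }

    fn shutdown(&self) {
        // Shutdown drains pending metrics with one final export.
        self.exporter.export();
    }
}

fn main() {
    let exports = Arc::new(AtomicUsize::new(0));
    let reader = PeriodicReaderSketch {
        exporter: CountingExporter { exports: exports.clone() },
        // 30 s interval, as in the diff: far longer than the test runs.
        interval: Duration::from_secs(30),
    };

    reader.run_for(Duration::from_millis(200)); // no timer tick fires
    assert_eq!(exports.load(Ordering::SeqCst), 0);

    reader.shutdown(); // the only export comes from shutdown
    assert_eq!(exports.load(Ordering::SeqCst), 1);
    println!("exports after shutdown: {}", exports.load(Ordering::SeqCst));
}
```

With the old 100 ms interval, the timer could fire during the test window and mask a broken shutdown path; with 30 s it cannot.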
```diff
@@ -30,7 +30,7 @@ async fn init_metrics() -> SdkMeterProvider {
     let exporter = create_exporter();

     let reader = PeriodicReader::builder(exporter)
-        .with_interval(Duration::from_millis(100))
+        .with_interval(Duration::from_millis(500))
```
The sleep is around 5 seconds, so a 500 ms interval is fine, and it reduces the amount of internal logs.
Also, I am wondering whether we should rely more on force_flush/shutdown in integration tests, to avoid this sleep requirement. We should still keep one set of tests that exercises interval-triggered export.
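The force_flush idea can be illustrated abstractly (a minimal sketch with hypothetical types, not the SDK API): instead of sleeping past the export interval, the test drains the pipeline deterministically and asserts immediately.

```rust
// Hypothetical metrics pipeline: recordings queue up until a timer tick
// or an explicit force_flush drains them to the exporter.
struct Pipeline {
    queue: Vec<&'static str>,
    exported: Vec<&'static str>,
}

impl Pipeline {
    fn record(&mut self, metric: &'static str) {
        self.queue.push(metric);
    }

    // force_flush drains everything immediately; no sleep is needed.
    fn force_flush(&mut self) {
        self.exported.append(&mut self.queue);
    }
}

fn main() {
    let mut p = Pipeline { queue: vec![], exported: vec![] };
    p.record("request_count");
    // Instead of `sleep(interval + slack)`, flush deterministically:
    p.force_flush();
    assert!(p.queue.is_empty());
    println!("exported: {:?}", p.exported);
}
```

The trade-off noted above still holds: at least one test should keep the sleep so the interval-triggered path itself stays covered.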
I think we'll still need a bit of a buffer because of the flushing on the otlp-collector side. What I think would be nicest is to set up a simple, short exponential backoff mechanism in here; the function knows what it is looking for in a particular test, so it's well positioned to decide to wait a bit and look again:

pub fn fetch_latest_metrics_for_scope(scope_name: &str) -> Result<Value> {

so that we can use tighter timing in the best case.
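A short exponential backoff along those lines could look like this (a self-contained sketch; `retry_with_backoff` and the closure are hypothetical, not code from this repo):

```rust
use std::time::Duration;

// Hypothetical retry helper: polls `check` with a short exponential backoff,
// returning as soon as it succeeds, so the best case stays tight.
fn retry_with_backoff<T>(
    mut check: impl FnMut() -> Option<T>,
    mut delay: Duration,
    max_attempts: u32,
) -> Option<T> {
    for attempt in 0..max_attempts {
        if let Some(value) = check() {
            return Some(value);
        }
        if attempt + 1 < max_attempts {
            std::thread::sleep(delay);
            delay *= 2; // double the wait each round: 50 ms, 100 ms, 200 ms, ...
        }
    }
    None
}

fn main() {
    // Simulate a collector that only has the metric on the third poll.
    let mut polls = 0;
    let result = retry_with_backoff(
        || {
            polls += 1;
            if polls >= 3 { Some("scope_metrics") } else { None }
        },
        Duration::from_millis(10),
        5,
    );
    println!("found after {} polls: {:?}", polls, result);
}
```

A fetch helper like `fetch_latest_metrics_for_scope` could wrap its file read in such a loop: if the metric for the scope is already exported, it returns immediately; otherwise it waits briefly and retries instead of relying on one long fixed sleep.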
I've also added this rotation setting for logs, which I believe will decrease buffering on the collector side; it doesn't help us with the other signals, though:

opentelemetry-rust/opentelemetry-otlp/tests/integration_test/otel-collector-config.yaml
Lines 12 to 14 in 9011f63

```yaml
file/logs:
  path: /testresults/logs.json
  rotation:
```
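For context, the collector's file exporter lets the `rotation` block cap file size and the number of rotated backups. A hypothetical fuller version of the snippet above might look like the following; the field names come from the file-exporter documentation, but the specific values are illustrative and worth double-checking against the actual config in the repo:

```yaml
file/logs:
  path: /testresults/logs.json
  rotation:
    max_megabytes: 10  # rotate once the file reaches ~10 MB
    max_backups: 1     # keep only one rotated file around
```

Smaller, more frequent rotations mean the test harness sees recent log lines on disk sooner instead of waiting for a large buffer to fill.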
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@ Coverage Diff @@
##            main   #2445   +/- ##
===================================
  Coverage   76.8%   76.8%
===================================
  Files        122     122
  Lines      21823   21823
===================================
  Hits       16772   16772
  Misses      5051    5051
```

☔ View full report in Codecov by Sentry.
```diff
@@ -146,7 +146,7 @@ async fn setup_metrics_test() -> Result<()> {
     println!("Running setup before any tests...");
     *done = true; // Mark setup as done

-    // Initialise the metrics subsystem
+    // Initialize the metrics subsystem
```
US-English spelling strikes again 🤣
Nice - lgtm!
The reqwest-blocking client should work for Metrics, so it is now enabled.
Also changed the verbosity of internal logs to avoid an overwhelming amount of debug logs; we still get debug-level logs from OpenTelemetry.
Also enabled internal-logs from the OTLP exporter itself.