StackdriverExporter is broken #490
Comments
Fixes #445, #158

This PR addresses some Jaeger receiver config cleanup and makes some breaking changes to the way the config is handled. See below for details.

**Fixes/Updates**

- The disabled flag is respected per protocol.
- Unspecified protocols will no longer be started.
- Empty protocol configs can now be specified to start the protocol with defaults, e.g.

  ```
  jaeger:
    protocols:
      grpc:
  ```

- Updated readmes.
- Naming and behavior of the per-protocol Addr/Enabled functions in `trace_receiver.go` have been standardized.
- Added a Thrift TChannel test to meet code coverage.

**Breaking Change**

Changed the way an empty `jaeger:` config is handled: an empty/default config does not start any Jaeger protocols, whereas previously it started all three collector protocols. This is a consequence of not starting unspecified protocols.
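To make the new behavior concrete, here is a minimal sketch of a per-protocol Jaeger receiver config under this change. It is based only on the behavior described in the comment above; the `thrift_http` endpoint value and the exact spelling of the per-protocol disabled flag are assumptions, not taken from the PR.

```yaml
jaeger:
  protocols:
    # Empty body: the protocol is started with its default settings.
    grpc:
    # Explicit settings; this endpoint value is only illustrative.
    thrift_http:
      endpoint: "0.0.0.0:14268"
    # Assumed spelling of the per-protocol disabled flag mentioned above.
    thrift_tchannel:
      disabled: true
    # Any protocol not listed at all is no longer started.
```

In particular, an empty `jaeger:` block with no `protocols:` entries would now start nothing at all.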
@ocervell can you try using the package from my branch https://github.com/aabmass/opentelemetry-python/tree/fix-oc-exporter and see if that fixes it? It will log the whole metric proto it is sending for debugging, including the new …

```diff
index 840e74b..647274f 100644
--- a/custom-metrics-example/requirements.txt
+++ b/custom-metrics-example/requirements.txt
@@ -1,3 +1,3 @@
 opentelemetry-api
 opentelemetry-sdk
-opentelemetry-ext-opencensusexporter
+-e git+https://github.com/aabmass/opentelemetry-python.git@fix-oc-exporter#egg=opentelemetry-ext-opencensusexporter&subdirectory=ext/opentelemetry-ext-opencensusexporter
```
Sure, will try this and let you know! Thanks for finding the problem 👍
@aabmass it's working with the custom-metrics-example now! Your fix works, no more API errors when writing timeseries to Cloud Monitoring. In the Flask context, though, the …
@ocervell I took a look; the issue is the … Since these are all Python bugs, can you open an issue in https://github.com/open-telemetry/opentelemetry-python and tag me, then close this one?
**Describe the bug**
Deployed on Kubernetes using the following setup:
[OT SDK + OpenCensusMetricsExporter] --> [OT Collector + Cloud Monitoring Exporter] --> Cloud Monitoring API
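To illustrate the collector side of that pipeline, a minimal agent config of this shape would look roughly like the sketch below. The endpoint and project ID are placeholders, not taken from the linked setup (the actual config is linked under "What config did you use?" further down).

```yaml
receivers:
  opencensus:
    endpoint: "0.0.0.0:55678"    # default OpenCensus receiver port

exporters:
  stackdriver:
    project: "my-gcp-project"    # placeholder GCP project ID

service:
  pipelines:
    metrics:
      receivers: [opencensus]
      exporters: [stackdriver]
```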
**Steps to reproduce**
https://github.com/ocervell/gunicorn-opentelemetry-poc
Deploy the `custom-metrics-example/` and the OT agent in `ops/opentelemetry` on GKE. No need to deploy the whole Flask application.

**What did you expect to see?**
Timeseries populated in Cloud Monitoring UI.
**What did you see instead?**
The metric descriptor is created correctly in Cloud Monitoring API, but there are errors while writing timeseries to Cloud Monitoring API:
**What version did you use?**
v0.6.0
**What config did you use?**
https://github.com/ocervell/gunicorn-opentelemetry-poc/blob/master/ops/opentelemetry/ot-agent.yaml
**Environment**
GKE
**Additional context**
It seems like certain API calls for writing timeseries are going through, but there is no data in Cloud Monitoring Metrics Explorer.
I tried adding a `batch` processor but was running into another issue.
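For reference, wiring a `batch` processor into such a pipeline is normally just a matter of declaring it and adding it to the pipeline's processor list; a minimal sketch follows, with the receiver and exporter names carried over from the placeholder config shown earlier.

```yaml
processors:
  batch:                          # groups data before it is exported

service:
  pipelines:
    metrics:
      receivers: [opencensus]
      processors: [batch]         # the processor must also be listed here
      exporters: [stackdriver]
```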