Sinatra tracing not working with 0.35+ #1035
Hmm .. I just tried with 0.34.2 and it seems to be working, perhaps a regression?
Thanks for the report @mscrivo! I'll try to take a look at this soon, see if I can replicate and understand why this isn't working. Are you able to replicate this in an environment where you can safely test it?
Thanks @delner, yeah, I'm able to deploy a single pod and watch how it's working.
@mscrivo So I did some testing for this on 0.35.0 and another branch I had based on 0.35.2. The spec is as follows:

```ruby
it do
  Datadog.configure { |c| c.use :sequel }

  sequel = Sequel.sqlite(':memory:').tap do |db|
    db.create_table(:table) do
      String :name
    end
  end

  sequel_two = Sequel.sqlite(':memory:').tap do |db|
    db.create_table(:table) do
      String :name
    end
  end

  Datadog.configure(sequel, service_name: 'one')
  Datadog.configure(sequel_two, service_name: 'two')

  @traces = []
  allow(Datadog.tracer.writer).to receive(:write) do |trace|
    @traces << trace
  end

  sequel[:table].insert(name: 'data1')
  sequel_two[:table].insert(name: 'data2')

  @traces = @traces.flatten
  expect(@traces.first.service).to eq('one')
  expect(@traces.last.service).to eq('two')
end
```

This test fails on 0.35.0, because it would appear the Sequel integration was eagerly assigning a tracer before it was replaced... I think we fixed this in #1034. I then tried this on #1037 (which is effectively 0.35.2 plus some changes) and this seemed to work just fine. So I think the latest release should have fixed this.

On your test pod, can you try 0.35.2? (Maybe #1037 too if 0.35.2 doesn't work.) Let me know if the problem is still present.
Thanks for looking into that @delner ... for whatever reason, after I upgrade to 0.35.2, I'm not getting any traces at all. Going back to 0.34.2 makes them all come back. I'm still trying to figure out why, perhaps it's something wrong with our config, but I'm not sure yet. The logs on the pod don't seem to indicate any problem, so it's a bit of a mystery.
Does your tracer point to a non-default host or port? If you reconfigured the tracer in this way, this was the scenario that had the potential to cause the bug prior to 0.35.2. Basically the instrumentation was using the wrong tracer, and it would be good to confirm that's still not the case in your application. A simple way to check this is to compare the object IDs of the tracer after all your configuration is complete:

```ruby
# These values should be identical
Datadog.tracer.object_id
Datadog.configuration[:sequel][:tracer].object_id
Datadog::Pin.get_from(sequel).object_id # Your Sequel database object
```

If they are different objects, then that would be a concern.
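As an aside, the reason this check works is that `object_id` is unique per live object, so a component holding a tracer captured before reconfiguration will report a different ID. A pure-Ruby sketch (the `FakeTracer` class is a stand-in, not the real ddtrace class):

```ruby
# Stand-in class purely to illustrate the identity check.
class FakeTracer; end

current = FakeTracer.new
pin_ref = current         # a reference that captured the current tracer
stale   = FakeTracer.new  # a reference captured before reconfiguration

puts current.object_id == pin_ref.object_id # true: healthy reference
puts current.object_id == stale.object_id   # false: stale reference
```

If the last comparison is false in a real app, some component is still holding the tracer instance that existed before configuration completed.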
The first 2 are identical, the last one is different. Here's how we have our config set up:

```ruby
TRACING_ENABLED = ENV['Prod'] && !ENV['ENABLE_DATADOG_TRACER'].nil?

Datadog.configure do |conf|
  if TRACING_ENABLED
    conf.use :aws
    conf.use :http
    conf.use :rack
    conf.use :redis
    conf.use :sidekiq
    conf.use :sinatra
    conf.use :sequel
  end
end

Datadog.tracer.configure(
  enabled: TRACING_ENABLED,
  hostname: ENV['DATADOG_APM_AGENT_SERVICE_SERVICE_HOST'] || 'localhost',
  sampler: Datadog::RateSampler.new(0.5),
  priority_sampling: true,
  port: 8126,
)
```

Then, to be able to identify primary vs. replica calls, we're patching Sequel's `synchronize` like this:

```ruby
def synchronize(server = nil)
  if !Datadog.tracer.enabled
    return synchronize_without_timing(server) { |conn| yield conn }
  end

  # Set up databases in Datadog so we know which one queries are being run on.
  if server == :replica
    Datadog.configure(DB, service_name: 'postgres/replica')
  else
    Datadog.configure(DB, service_name: 'postgres/primary')
  end

  span = Datadog.tracer.trace('sequel.synchronize')
  span.service = DATADOG_SERVICE
  span.resource = 'sequel.synchronize'
  span.span_type = Datadog::Ext::AppTypes::CUSTOM

  finished = false
  begin
    @pool.hold(server || :default) do |conn|
      span.finish(Time.now)
      finished = true
      yield conn
    end
  rescue => e
    span.finish(Time.now) if !finished
    raise e
  end
end
```
Okay, so from what I can tell, I'd suggest rewriting your configuration a bit. Remove the `Datadog.tracer.configure` call and move those tracer settings into the `Datadog.configure` block instead:

```ruby
Datadog.configure do |c|
  c.tracer.enabled = TRACING_ENABLED
  c.tracer.hostname = ENV['DATADOG_APM_AGENT_SERVICE_SERVICE_HOST'] || 'localhost'
  c.tracer.sampler = Datadog::RateSampler.new(0.5)
  c.tracer.priority_sampling = true
  c.tracer.port = 8126

  # Activate your integrations here...
end
```
Even with your suggested configuration change, I'm unable to get our tracing going with 0.35.2. Anything else I could look at to debug?
@mscrivo It's going to be hard to suggest a clear solution without being able to reproduce the problem on my end. If you can reproduce the issue in an RSpec test or sample app that I can run on my end, then I should be able to debug it for you. Otherwise, I think you might have to poke around a bit at the internals of your application with breakpoints or log statements.

If I were debugging it, I'd look closely at how the application initializes, and at which tracer instances the integrations end up holding. My suspicion is the Sequel instrumentation is somehow using a stale tracer instance with default trace settings, thus causing traces to be swallowed. Some things to keep in mind: the order in which you configure the tracer and activate the integrations matters, so it could be a side effect of how your configuration calls are sequenced.
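One way to check which tracer the integrations ended up holding is to log the object IDs once boot is complete. This is a sketch reusing the same three calls shown earlier in this thread; it assumes ddtrace 0.x and that `DB` is the app's Sequel database constant:

```ruby
# Diagnostic fragment for the end of app boot (assumes ddtrace 0.x and an
# app-level DB constant). Per the check above, all three IDs should match.
warn "global tracer:      #{Datadog.tracer.object_id}"
warn "sequel integration: #{Datadog.configuration[:sequel][:tracer].object_id}"
warn "sequel pin:         #{Datadog::Pin.get_from(DB).object_id}"
```

Running this both before and after the full configuration sequence can show at which point a stale reference is captured.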
Thanks @delner, that's super helpful. Feel free to close this issue; I'll report back when I figure out what it was.
Would love to try. Is there an easy way to do that without having a gem available?

Also, I should note that I'm not getting any tracing at all, not just the Sequel stuff. I even removed our custom `synchronize` patch.

Btw, the way I'm trying to see the traces in the DD dashboard is by env. We set DD_ENV as an env var in our app pods; did anything happen to change with that in this release?
@mscrivo Yes, you can try it with:
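A common way to use an unreleased gem is to point Bundler at the git branch directly; a sketch (the repo URL is the real dd-trace-rb repository, but the branch name here is a placeholder, not the actual PR branch):

```ruby
# Gemfile fragment: pull ddtrace from a git branch instead of a released gem.
# 'fix-branch' is a placeholder name for the branch under test.
gem 'ddtrace', git: 'https://github.com/DataDog/dd-trace-rb.git', branch: 'fix-branch'
```

After editing the Gemfile, `bundle install` fetches and builds the gem from that branch.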
Yes, this is a known issue. It's because when the first integration is configured, it applies the patch to the integration, which causes it to capture a reference to the tracer at that point in time.

Overall though, I don't think it should be harmful as long as the integration is using the correct tracer instance.

If you're in a safe environment, you can also try activating:

```ruby
Datadog.configure do |c|
  c.diagnostics.debug = true
end
```

which will produce log output for every span in every trace produced by the application. I recommend against running it in any production or other sensitive environments because it is extremely verbose and can easily overwhelm logs. It's best to try this in a test environment where you can control the load yourself.
This is probably all tied in with the same problem: trace settings are not being used because the tracer is likely misconfigured or the correct tracer is not being used.
Trying it locally with the test adapter on and debug on, all I see is these over and over:

The only traces that seem to work are these:
@mscrivo I just wanted to update you that we found an issue with the new Sinatra integration when it's used with another Sinatra/Rack middleware that is also instrumented.

Despite that, I wasn't able to reproduce the error as originally reported. I want to make sure our changes actually fix your issue, so I'd like to ask you for a few pieces of information regarding your application.
```ruby
class Base < Sinatra::Base
  # Configuration for all environments (prod/dev/test).
  configure do
    set :root, File.expand_path('..', __dir__)
    use Rack::Parser
  end

  # Track exceptions with Rollbar in production.
  configure :production do
    # Rollbar configuration takes place in config/rollbar.rb
    use Rollbar::Middleware::Sinatra
  end

  register Sinatra::Hashfix

  include Helpers

  # Set up debug probes (as middleware).
  use Controllers::DebugProbes

  # Log passed args to the logs/[environment].log file.
  def self.log(*args)
    if !@logger
      environment = ENV['RACK_ENV'] || 'development'
      log_path = File.expand_path("../../logs/#{environment}.log", __FILE__)
      @logger = ::Logger.new(log_path)
    end
    @logger.info(*args)
  end
end
```

We're using Sinatra 2.0.3.
I'm trying to upgrade our Sinatra/Rack versions to see if that helps at all, but there are breaking changes to our app, so it might take some time.
Sinatra 2.0.8.1/Rack 2.1.2 upgrade complete. Unfortunately, that did not fix the issue; still getting no traces.
@delner I went back to 0.34.2 for now, but wanted to keep the configuration changes you suggested:

```ruby
Datadog.configure do |c|
  c.tracer.enabled = TRACING_ENABLED
  c.tracer.hostname = ENV['DATADOG_APM_AGENT_SERVICE_SERVICE_HOST'] || 'localhost'
  c.tracer.sampler = Datadog::RateSampler.new(0.5)
  c.tracer.priority_sampling = true
  c.tracer.port = 8126

  c.use :aws
  c.use :http
  c.use :rack
  c.use :redis
  c.use :sidekiq
  c.use :sinatra
  c.use :sequel
end
```

But when running with 0.34.2, it complains on startup with:
I'm not sure what's going on here ... but I'm pretty sure it's from my lack of Ruby knowledge, as I only have about 3 months of working experience in Ruby. Any help you can provide would be greatly appreciated.
Hey @mscrivo, because you are using 0.34.2, the `c.tracer.enabled = ...` assignment style isn't available on that version; the tracer options are passed as a hash to `c.tracer` instead.

Here's how it would look:

```ruby
Datadog.configure do |c|
  c.tracer enabled: TRACING_ENABLED,
           hostname: ENV['DATADOG_APM_AGENT_SERVICE_SERVICE_HOST'] || 'localhost',
           sampler: Datadog::RateSampler.new(0.5),
           priority_sampling: true,
           port: 8126

  c.use :aws
  c.use :http
  c.use :rack
  c.use :redis
  c.use :sidekiq
  c.use :sinatra
  c.use :sequel
end
```

Let us know if this configuration still doesn't work for you.
🤦 Thanks, that worked!
Just to circle back on this, I tried 0.37 to no avail. So I went back over to the docs to see if I was missing anything, as 0.34 and lower work perfectly fine with our code. I saw that for the new modular Sinatra apps, you're supposed to add the middleware. So I added:

```ruby
register Datadog::Contrib::Sinatra::Tracer
```

right below `class Base < Sinatra::Base` in our main app base class. Note: we already had the Rack middleware set up using:

```ruby
use Datadog::Contrib::Rack::TraceMiddleware
```

I also moved our Datadog config setup before any of the routes are defined, as it wasn't before. And now I am getting traces, but they look messed up, sort of double counted. I can't figure out what I'm missing or doing wrong with the new version.
Ah ok, thanks @delner ... good to know, I was convinced it was something I was doing wrong.
That's awesome to hear! I'll close this issue as resolved.
We currently have Sequel tracing working with the default service_name of "postgres", but want to split it out so that we have separate reporting on each of our 3 databases. I'm trying to follow the guide and use:
and when I enable the test adapter and watch the traces locally, I can see the traces have "service=primary-db" for example, but when we deploy to prod, the traces are missing postgres and these new service names altogether.
I'm not sure what I'm missing or doing wrong.
Using version 0.35.1
Also, how does one separate databases when you use Sequel's sharding mechanism and don't actually create a separate connection for the replica as described here?
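For context on that last question: Sequel's sharding lets a single database object route queries to named servers via the `:servers` connect option and `Dataset#server`, without creating a separate connection object for the replica. A minimal sketch, with in-memory SQLite standing in for real primary/replica hosts (note that each in-memory shard is a separate database, so the table has to be created on both):

```ruby
require 'sequel'

# One database object, two shards. The empty hash for :replica reuses the
# default connection options; a real app would override :host etc. here.
DB = Sequel.sqlite(servers: { replica: {} })

DB.create_table(:items) { String :name }                      # on :default
DB.run('CREATE TABLE items (name varchar(255))', server: :replica)

DB[:items].insert(name: 'a')                  # default server (the primary)
DB[:items].server(:replica).insert(name: 'b') # routed to the :replica shard
```

This is why the `synchronize` patch earlier in this thread switches the service name based on the `server` argument: with sharding, that argument is the only place the shard identity is visible.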