
Add Microbenchmarking #1043

Merged: 2 commits, May 28, 2020
Conversation

marcotc (Member) commented on May 16, 2020

This PR adds microbenchmarks for core components of ddtrace.
Currently we test:

  • Trace creation (tracer.trace)
  • Writer trace recording (writer.write)
  • Baseline benchmark overhead: times an empty method call, to validate that the benchmark environment is stable.

All test cases are measured for execution time, memory usage, and GC impact.

The output looks like this (for testing Datadog::Tracer nested traces in this example):

Test:Microbenchmark Datadog::Tracer nested traces memory
Calculating -------------------------------------
                   1   253.544k memsize (    91.440k retained)
                         2.830k objects (   704.000  retained)
                         4.000  strings (     1.000  retained)
                  10     1.811M memsize (   862.280k retained)
                        14.488k objects (     5.661k retained)
                         3.000  strings (     0.000  retained)
                 100    17.340M memsize (     8.508M retained)
                       131.485k objects (    55.171k retained)
                         3.000  strings (     0.000  retained)

Comparison:
                   1:     253544 allocated
                  10:    1810560 allocated - 7.14x more
                 100:   17339976 allocated - 68.39x more

Test:Microbenchmark Datadog::Tracer nested traces timing
Warming up --------------------------------------
                   1     4.176k i/100ms
                  10     1.342k i/100ms
                 100   192.000  i/100ms
Calculating -------------------------------------
                   1     39.479k (±13.5%) i/s -     41.760k in   1.080583s
                  10      9.552k (±24.4%) i/s -     10.736k in   1.202755s
                 100      1.199k (±21.5%) i/s -      1.152k in   1.006056s

Comparison:
                   1:    39479.1 i/s
                  10:     9552.2 i/s - 4.13x  (± 0.00) slower
                 100:     1198.8 i/s - 32.93x  (± 0.00) slower

There's also a detailed, human-readable memory report under detailed report tests, which reports on every memory allocation site, like this:

allocated memory by location
-----------------------------------
     24000  lib/ddtrace/sampler.rb:188
     19200  lib/ddtrace/span.rb:69
     19200  lib/ddtrace/tracer.rb:193
     19200  lib/ddtrace/tracer.rb:259
     18400  lib/ddtrace/tracer.rb:201

This should give us a good starting point to measure small units of the tracer for performance impact and objectively measure future improvements.

@marcotc marcotc added the dev/testing Involves testing processes (e.g. RSpec) label May 16, 2020
@marcotc marcotc requested a review from a team May 16, 2020 00:21
@marcotc marcotc self-assigned this May 16, 2020
@marcotc marcotc added the performance Involves performance (e.g. CPU, memory, etc) label May 22, 2020
@marcotc marcotc merged commit 2bbbd07 into master May 28, 2020
@marcotc marcotc deleted the feat/micro-benchmarking branch May 28, 2020 18:02
@marcotc marcotc added this to the 0.37.0 milestone Jun 24, 2020