
Commit

mention quarkus insights #22
maxandersen committed Dec 11, 2023
1 parent 76471cf commit a445d5d
43 changes: 22 additions & 21 deletions docs/src/main/asciidoc/performance-measure.adoc
@@ -11,34 +11,14 @@ include::_attributes.adoc[]

This guide covers:

* how we measure memory usage
* how we measure startup time
* which additional flags will Quarkus apply to native-image by default
* the coordinated omission problem in tools

All of our tests are run on the same hardware for a given batch.
It goes without saying, but it's better when you say it.

== How do we measure memory usage

When measuring the footprint of a Quarkus application, we measure https://en.wikipedia.org/wiki/Resident_set_size[Resident Set Size (RSS)]
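
On Linux, RSS for a running process can be read from `/proc` (or with `ps -o rss`). A minimal sketch, assuming a Linux host; the helper name is our own invention, not part of any Quarkus tooling:

```python
import os

def rss_kb(pid=None):
    """Resident set size of a process in kB, parsed from /proc/<pid>/status (Linux only)."""
    pid = os.getpid() if pid is None else pid
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # VmRSS is reported in kB
    raise RuntimeError("VmRSS not found in /proc status")
```

For a Quarkus application you would point `pid` at the JVM or native executable under test, once the application has settled.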
@@ -258,3 +238,24 @@ circumstances one could observe non-negligible impact from the other flags too.
If you want to investigate some differences in detail, verify exactly what Quarkus is invoking: when the build
plugin is producing a native image, the full command lines are logged.


== Coordinated omission problem in tools

When measuring the performance of a framework like Quarkus, the latency experienced by users is especially interesting, and there are many different tools for measuring it. Unfortunately, many of them fail to measure latency correctly and instead fall victim to the coordinated omission problem: the tool fails to accommodate delays in submitting new requests while the system is under load, and then aggregates these numbers, making the reported latency and throughput figures very misleading.
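
The distortion is easy to reproduce in a toy simulation (all numbers below are made up for illustration): a closed-loop generator that intends to send a request every 10 ms hits a server that stalls for one second. The naively recorded percentiles barely notice, while accounting from the intended send time, as wrk2 and Hyperfoil do, exposes the latency users would actually have seen:

```python
# Toy simulation of coordinated omission (all numbers hypothetical).
# A closed-loop load generator intends to fire one request every 10 ms;
# the simulated server normally answers in 1 ms but freezes for 1 s
# (think GC pause) between t = 2.0 s and t = 3.0 s.

INTERVAL = 0.010   # intended gap between requests, in seconds
N = 500            # 5 s worth of intended requests

def service_time(t):
    """Time the server takes for a request arriving at time t."""
    if 2.0 <= t < 3.0:
        return (3.0 - t) + 0.001   # stuck until the stall ends
    return 0.001

naive = []      # what a coordinating tool records: completion - actual send
corrected = []  # completion - *intended* send (what a user experiences)
now = 0.0
for i in range(N):
    intended = i * INTERVAL
    send = max(now, intended)      # closed loop: wait for previous response
    done = send + service_time(send)
    naive.append(done - send)
    corrected.append(done - intended)
    now = done

def p99(samples):
    return sorted(samples)[int(len(samples) * 0.99)]

print(f"p99 naive:     {p99(naive) * 1000:.1f} ms")
print(f"p99 corrected: {p99(corrected) * 1000:.1f} ms")
```

Here the naive p99 stays near the 1 ms service time, because the whole stall collapses into a single slow sample, while the corrected p99 comes out close to a full second.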

A good walkthrough of the issue is https://www.youtube.com/watch?v=lJ8ydIuPFeU[this video], in which Gil Tene, the author of wrk2, explains the problem; in https://www.youtube.com/watch?v=xdG8b9iDYbE[Quarkus Insights #22], John O'Hara from the Quarkus performance team shows how it can show up in practice.

Although that video and the related papers and articles all date back to 2015, even today you will find tools that fall short because of the coordinated omission problem.

Tools that, at the time of writing, are known to exhibit the problem and should NOT be used for measuring latency/throughput (they can be used for other things):

* JMeter
* wrk

Tools that are known not to be affected are:

* https://github.com/giltene/wrk2[wrk2]
* https://hyperfoil.io[HyperFoil]

Mind you, the tools are no better than your own understanding of what they measure, so even when using `wrk2` or `hyperfoil`, verify that you understand what the numbers mean.
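
If you already have latency data recorded by a coordinating tool, it can be partially repaired after the fact. HdrHistogram (also by Gil Tene) provides `recordValueWithExpectedInterval` for this; the following is a simplified Python sketch of the idea, not the actual API:

```python
def record_with_expected_interval(samples, value, expected_interval):
    """Record a latency; if it exceeds the expected interval between requests,
    also back-fill the latencies of the requests that a coordinating load
    generator silently failed to send during the stall."""
    samples.append(value)
    missing = value - expected_interval
    while missing >= expected_interval:
        samples.append(missing)
        missing -= expected_interval

samples = []
# one 1000 ms stall observed while issuing a request every 10 ms:
record_with_expected_interval(samples, 1000, 10)
```

The single 1000 ms observation expands into 100 samples, standing in for the requests the generator never sent during the stall; this is damage control, not a substitute for measuring correctly in the first place.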
