Unstable Performance Among Some Java Test Implementations #5612
Comments
The same pattern exists in PHP and nginx, and perhaps in more languages: https://tfb-status.techempower.com/timeline/php/plaintext also shows a big drop on June 18, 2019. The Framework Timeline is a really good tool 👋. It would be even better with annotated marks for the big changes in the benchmark.
I have been investigating a strange problem for some time: in the last runs, Kumbiaphp-raw is slower than Kumbiaphp with the ORM. That does not make any sense, and I think it affects plain PHP as well.
https://tfb-status.techempower.com/timeline/php/fortune It should be impossible for the raw version to be slower than the ORM version, yet it is in every run after June 18. I was thinking it was a bad PHP stack config, but after reading this issue, I think the problem may be in the benchmark stack instead.
@joanhey Below is the graph for Kumbiaphp, for reference, and it does indeed show that dip on June 18, 2019. Curiously, it seems to recover on Nov 20, 2019.
I have edited the original post to indicate that on Jun 18, 2019, @nbrady-techempower applied the Spectre/Meltdown kernel patches, and we believe that those account for the dip.
Yes, it recovers on Nov 20, like plain PHP, but I can't understand the reason. Curiously, nginx alone drops on Nov 20, 2019.
I believe we have an answer to that now. Nov 20 is when we switched back from CentOS to Ubuntu, and we did not apply [this iptables rule](https://news.ycombinator.com/item?id=20205566), which was previously applied on the CentOS install. The dip from Jun 18 to Nov 20 appears to be directly related to that particular rule being in place.
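For context, rules of that kind generally bypass Linux connection tracking for the benchmark traffic, so the kernel does not track the flood of short-lived connections the load generator opens. The exact rule from the CentOS install is only in the linked thread, so the following is just a generic sketch, with port 8080 assumed for the application under test:

```sh
# Sketch only: skip conntrack for traffic on the (assumed) benchmark port.
# NOTRACK in the raw table stops conntrack entries from being created at all,
# which avoids conntrack table pressure under very high connection rates.
iptables -t raw -A PREROUTING -p tcp --dport 8080 -j NOTRACK
iptables -t raw -A OUTPUT     -p tcp --sport 8080 -j NOTRACK
```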
I think there should be a timeline of all these changes somewhere: a chronological history of the changes, on a web page.
TechEmpower/tfb-status#21 Yes, I want that. |
I was troubleshooting what I believed to be a performance degradation in Gemini (and spent a lot of time doing so) when I believe I came to the realization that the problem is not in Gemini proper. This issue will lay out all the information we have gathered.

For those unfamiliar, it is my pleasure to introduce the Framework Timeline, which graphs the continuous benchmark results over time. This tool is great for illustrating the arguments that I will be laying out. This link is to the `plaintext` results for `gemini`.

The following is an annotated graph from `gemini`'s Framework Timeline:

0a. Dockerify #3292 was merged and the project was officially Dockerified
Our best guess is that this is a dip from Java 11 - Update Docker images to the jdk variant #4850, which changed the base image of many Java test implementations. The timing lines up pretty much exactly, though it is a bit of a mystery as to why moving from `openjdk-11.0.3-jre-slim` to `openjdk-11.0.3-jdk-slim` would have a performance impact (a quick way to sanity-check this is sketched below). Found an email chain wherein @nbrady-techempower confirmed that he once again applied Spectre/Meltdown patches and an `iptables` rule from this.

`gemini` on Citrine (Ubuntu) - roughly 1.2M `plaintext` RPS

`gemini` on Citrine (Ubuntu) - roughly 700K `plaintext` RPS

The following shows the data table for Servlet frameworks written in Java for Round 18, published July 9, 2019, which falls between numbers 6 and 7 on the above graph.
Comparing that with the data table for the same test implementations from the run completed on April 1, 2020 (the last graphed day, as of this writing, on `gemini`'s Framework Timeline) shows degradation across the board for Java applications, but some are impacted more than others.
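On the `jre-slim` versus `jdk-slim` mystery noted in the annotations above: if those two images really ship the same runtime, swapping between them should not change performance, and that is cheap to sanity-check. This is only a sketch, and it assumes the images in question are the public Docker Hub tags `openjdk:11.0.3-jre-slim` and `openjdk:11.0.3-jdk-slim`:

```sh
# Both images should report the same OpenJDK build if the underlying runtime is identical.
docker run --rm openjdk:11.0.3-jre-slim java -version
docker run --rm openjdk:11.0.3-jdk-slim java -version

# Diffing the resolved JVM flags catches subtler differences (ergonomics, GC
# selection) that -version alone would not reveal.
docker run --rm openjdk:11.0.3-jre-slim java -XX:+PrintFlagsFinal -version > jre-flags.txt
docker run --rm openjdk:11.0.3-jdk-slim java -XX:+PrintFlagsFinal -version > jdk-flags.txt
diff jre-flags.txt jdk-flags.txt
```

If the outputs match, the regression is more likely in the image layers around the JVM, or elsewhere in the environment, than in the Java runtime itself.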
For comparison, the following is `servlet`'s `plaintext` Framework Timeline:

We merged in some updates to Gemini today which included updating the Java base image to `openjdk-11.0.7-slim`, which should be the same as `openjdk-11.0.7-jdk-slim`. So, if there was some weirdness with `openjdk-11.0.3-jdk-slim` from #4850, then the next run will show improved `plaintext` numbers for Gemini. However, that may be unrelated, so other tests I will probably do in the next hour or two:
- [ ] Downgrade `tapestry` to `openjdk:11.0.3-jre-stretch`, which was the version prior to #4850
- [ ] Upgrade `wicket` to `openjdk:11.0.7-slim`, which would eliminate any question if `gemini` improves and `wicket` improves
- [x] Verify versions of `openjdk:11.0.3-jre-stretch` and `openjdk:11.0.3-jdk-stretch` have the same underlying JRE (see below; a rough sketch of this check follows the list)
- [x] Verify `gemini` `plaintext` is not leaking connections (see below; likewise sketched after the list)
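The first verification item can be checked the same way as the `-slim` sketch earlier in this issue, just substituting the `-stretch` tags. For the connection-leak item, here is a rough sketch of one way to watch for leaks on the application server; the port is an assumption (substitute whatever port the `gemini` `plaintext` implementation actually listens on):

```sh
# After the plaintext run ends, the number of established connections to the
# app port should quickly fall back toward zero. A count that stays high, or
# a pile of lingering CLOSE-WAIT sockets, suggests connections are being leaked.
# Port 8080 is assumed here.
watch -n 5 "ss -tan state established '( sport = :8080 )' | wc -l"

# Host-wide socket-state summary (watch for CLOSE-WAIT growth):
ss -s
```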