Execute (and analyze) single Bench.measure #7323
Comments
Pavel Marek reports a new STANDUP for yesterday (2023-07-17): Progress: - Working on the prototype of JMH benchmarks for Enso libs
Pavel Marek reports a new STANDUP for today (2023-07-18): Progress: - Discussion about bench API specification - how to specify |
Designing a new `Bench` API to _collect benchmarks_ first and only then execute them. This is a minimal change to allow the implementation of #7323, i.e. the ability to invoke a _single benchmark_ via the JMH harness.
# Important Notes
This is just the basic API skeleton. It can be enhanced, as long as the basic properties (allowing integration with JMH) are kept. It is not the intent of this PR to make the API 100% perfect and usable, nor is it the goal of this PR to update existing benchmarks to use it (74ac8d7 changes only one of them to demonstrate that _it all works_ somehow). It is, however, expected that once this PR is integrated, newly written benchmarks (like the ones from #7270) will use (or even enhance) the new API.
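For illustration only, here is a minimal Java sketch of the kind of JMH class such a collect-then-generate integration could emit for one collected benchmark. The class name, method names, and the way the Enso benchmark body would be invoked are hypothetical and are not taken from this PR:

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

// Hypothetical shape of a generated JMH class for one collected Enso benchmark.
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class VectorSumBench {

  @Setup
  public void setup() {
    // Hypothetically: start an Enso context and look up the benchmark body
    // that was registered ("collected") via the new Bench API.
  }

  @Benchmark
  public void run() {
    // Hypothetically: invoke the collected Enso benchmark body here,
    // so JMH measures exactly one Bench.measure-style workload.
  }
}
```

The point of the collect-first design is that each collected benchmark becomes an addressable JMH method like the one above, so a single benchmark can be selected and measured by standard JMH tooling.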
Pavel Marek reports a new STANDUP for yesterday (2023-07-20): Progress: - The CLI of the custom JMH runner conforms to the standard JMH CLI
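A plausible way (not necessarily the one used in this work) to make a custom runner's CLI conform to the standard JMH CLI is to delegate argument parsing to JMH's own `CommandLineOptions`; the `SingleBenchMain` class name below is made up for the sketch:

```java
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.CommandLineOptionException;
import org.openjdk.jmh.runner.options.CommandLineOptions;

// Hypothetical entry point: parsing argv with JMH's CommandLineOptions keeps
// the custom runner compatible with the standard JMH flags (-f, -i, -wi, ...).
public class SingleBenchMain {
  public static void main(String[] args) throws RunnerException, CommandLineOptionException {
    CommandLineOptions options = new CommandLineOptions(args);
    new Runner(options).run();
  }
}
```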
Pavel Marek reports a new STANDUP for today (2023-07-21): Progress: - Troubleshooting the recent GraalVM version update:
Pavel Marek reports a new STANDUP for today (2023-07-24): Progress: - Reverting some incompatible changes that I introduced after the GraalVM update
Pavel Marek reports a new STANDUP for today (2023-07-25): Progress: - Figuring out how to pass cmdline options to the frgaal compiler so that we have a custom class path in the annotation processor.
Pavel Marek reports a new STANDUP for yesterday (2023-07-26): Progress: - Struggling a bit with the sbt build config again.
Pavel Marek reports a new STANDUP for today (2023-07-27): Progress: - Finished the prototype, now we can generate all the JMH code for benchmarks, run a single benchmark, and run all the benchmarks
Pavel Marek reports a new STANDUP for today (2023-07-28): Progress: - Integrating a lot of suggestions from the review.
Pavel Marek reports a new 🔴 DELAY for today (2023-07-31): Summary: There is a 9-day delay in the implementation of the Execute (and analyze) single Bench.measure (#7323) task. Delay Cause: We concluded that we want to finish the whole integration of JMH for stdlib benchmark generation. In other words, the end goal of this task is to create a new CI job that runs all the stdlib benchmarks and collects the data. This is much more involved than the simple ability to run just one benchmark locally.
Pavel Marek reports a new STANDUP for the provided date (2023-08-06): Progress: - Bumped into yet another AssertionError in Truffle's source section - #5585
Pavel Marek reports a new STANDUP for today (2023-07-31): Progress: - Bumped into yet another AssertionError in Truffle's source section - #5585
Pavel Marek reports a new STANDUP for today (2023-08-01): Progress: - Added some reasonable annotation parameters specifying benchmark discovery.
Pavel Marek reports a new STANDUP for yesterday (2023-08-02): Progress: - Integrating many review comments. It should be finished by 2023-08-06.
Pavel Marek reports a new STANDUP for today (2023-08-03): Progress: - Integrating the rest of the review comments
Pavel Marek reports a new STANDUP for today (2023-08-04): Progress: - Some problems with CI - jobs are taking unusually long, cannot merge today. It should be finished by 2023-08-06.
The current Enso benchmarking infrastructure allows us to take a high-level view of the benchmark runs, but it doesn't allow simple transfer of such benchmarks for detailed analysis via low-level tools like IGV. As a result we are working with Enso benchmarks as a black box. We know something is slow, but we don't have a way to reduce our cluelessness by easily running the same `Bench.measure` benchmark in isolation, with deep insight into the compilation of that single benchmark, without being distracted by the rest.
Tasks
The exploratory work has already been done in #7101. It is time to finish it or to create new PR(s) to move us forward. As the discussion at #7270 (comment) shows, we need to start writing benchmarks in a way that is more friendly to low-level tools (the amount of work needed to modify the original #5067 benchmark was too high to be manually repeated for every benchmark written).
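As a sketch of what running a single generated benchmark through the JMH harness could look like programmatically (the include pattern and class names are hypothetical, used only to show the mechanism):

```java
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class RunOneBenchmark {
  public static void main(String[] args) throws RunnerException {
    // The include regex selects exactly one generated benchmark method;
    // "VectorSumBench.run" is a made-up name used for illustration.
    Options opts = new OptionsBuilder()
        .include("VectorSumBench\\.run")
        .forks(1)
        .warmupIterations(3)
        .measurementIterations(5)
        .build();
    new Runner(opts).run();
  }
}
```

Running one benchmark in a single fork like this is what makes it practical to attach low-level tools such as IGV to the process and inspect the compilation of just that benchmark.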
Follow-up tasks