
Baseline performance audit #20

Open · countvajhula opened this issue Jan 17, 2022 · 1 comment

@countvajhula (Collaborator)

The current performance benchmarks in profile/, run via make profile, are inaccurate and not comprehensive.

  1. The existing benchmarks need auditing to ensure that the results reflect actual performance, with the benchmarking apparatus properly factored out of the measurements.

For example, @michaelballantyne reports:

the difference between the flat and recursive benchmarks is an artifact of your benchmarking infrastructure. In particular, check-value constructs a list that is as long as the number of iterations of the test, as take is an operation over lists rather than sequences:

(for ([i (take how-many (cycle inputs))])
  (fn i))

It looks like the cost of constructing that list accounts for much of the time in the "Conditionals" benchmark, for example. With a testing setup that doesn't construct such a list, the Racket implementation is much faster than Qi. I suspect that in the recursive benchmarks the cost of the computation is higher relative to the construction of the list of inputs and therefore is more visible in the benchmarks.
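
One way to remove that cost (a minimal sketch; the actual check-value in profile/ may take different arguments) is to drive the loop with Racket's lazy sequence constructors instead of take, so that no intermediate list is allocated:

;; Sketch: in-range bounds the iteration count while in-cycle lazily
;; recycles the inputs, so no how-many-element list is ever built.
(define (check-value fn how-many inputs)
  (for ([_ (in-range how-many)]
        [i (in-cycle inputs)])
    (fn i)))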

  2. In addition, benchmark coverage is sparse at the moment and should be made more comprehensive -- e.g. by expanding the benchmarks in profile/forms.rkt to include all Qi forms (currently these cover only a few, such as relay and group); see the sketch below.
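
As a rough sketch of what per-form coverage could look like (bench-form is a hypothetical helper, not something that exists in profile/forms.rkt; time-apply is from racket/base):

(require qi)

;; Hypothetical helper: report the CPU time taken by n-iterations
;; applications of a flow to the same argument list.
(define (bench-form name fn n-iterations args)
  (define-values (results cpu real gc)
    (time-apply
     (lambda ()
       (for ([_ (in-range n-iterations)])
         (apply fn args)))
     '()))
  (printf "~a: ~a ms~n" name cpu))

;; relay (==) applies one flow to each input; group partitions the
;; inputs and applies a flow to each partition.
(bench-form "relay" (flow (== add1 sub1)) 1000000 (list 3 5))
(bench-form "group" (flow (group 1 add1 sub1)) 1000000 (list 3 5))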
@countvajhula (Collaborator, Author) commented Jun 17, 2022

Note for down the line: it would be great to add continuous benchmarking to the CI workflow, so that the performance effects of each commit are tracked automatically:

https://github.com/benchmark-action/github-action-benchmark

This will likely involve calling the existing benchmarking functions in profile/* but generating the output in the JSON format described at the above link, i.e. {"name": <name-of-the-benchmark>, "unit": "ms", "value": <the-measured-value>}.
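
For example (a sketch; benchmark->jsexpr is a hypothetical adapter, while write-json is from Racket's json library):

(require json)

;; Hypothetical adapter from a benchmark result to the jsexpr shape
;; that github-action-benchmark expects.
(define (benchmark->jsexpr name value)
  (hasheq 'name name 'unit "ms" 'value value))

(write-json (list (benchmark->jsexpr "conditionals" 42)))
;; prints something like: [{"name":"conditionals","unit":"ms","value":42}]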

These should be runnable via a Makefile target, e.g. make report-benchmarks, which can be run in the GitHub Actions workflow just like the existing targets for testing, etc.
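
A minimal sketch of such a target (profile/report.rkt is a placeholder for whatever entry point ends up emitting the JSON, which the action can then consume):

# Hypothetical target; not yet in the Makefile.
report-benchmarks:
	racket profile/report.rkt > benchmark-results.json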
