Performance Measurement/Tracking #445
At the face-to-face, we recommended not trusting "stored away" numbers, and instead running both an old version and a new version in sequence, to try to mitigate cluster changes, environmental changes, and other temporal abnormalities in the data.
Storing these performance diffs in the database might be more useful than storing raw results.
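The back-to-back idea above can be sketched in a few lines. This is purely illustrative (the function names and the stand-in benchmark are assumptions, not the MTT API): run the old and new versions in the same allocation, in sequence, and record the relative diff rather than the raw numbers.

```python
# Hypothetical sketch; names are illustrative, not actual MTT code.

def relative_diff(old_result: float, new_result: float) -> float:
    """Relative change of new vs. old; negative means a regression
    for higher-is-better metrics such as bandwidth."""
    return (new_result - old_result) / old_result

def compare_versions(run_benchmark, old_version: str, new_version: str) -> float:
    # Both runs happen in the same job allocation, back to back, so
    # cluster and environmental conditions stay as similar as possible.
    old_result = run_benchmark(old_version)
    new_result = run_benchmark(new_version)
    return relative_diff(old_result, new_result)

# Example with a stand-in "benchmark" that just looks up canned numbers:
fake_results = {"v4.0.0": 100.0, "v4.1.0": 95.0}
diff = compare_versions(fake_results.get, "v4.0.0", "v4.1.0")
print(f"{diff:+.1%}")  # -5.0%
```

The diff (here -5.0%) is what would get stored, per the suggestion above, since it is less sensitive to which cluster or environment produced it than the raw measurements are.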
👍 on what @gpaulsen said. Except I'd make the backwards compatibility tests be separate from these performance tests (because the performance characteristics may/will be desirable to test over a longer period of time than our backwards compatibility guarantees). Some possible requirements for the performance testing:
We as a community just need to determine the versions of OLD that we want to compare against.
Let's also remember that we have plugin support in the new MTT. So there is no problem creating a plugin that compares against some stored "good" measurement, and another that does old vs new, and another that does what someone wants for their own purposes. If we write the plugins intelligently so data retrieval can be shared code, then it will be relatively easy to add new comparison algorithms.
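One way to picture the plugin idea above: a common base class owns the shared data-retrieval code, and each comparison policy is a small subclass. All class and method names here are hypothetical sketches, not the actual MTT plugin interface, and the fetch step is stubbed with canned data.

```python
# Hypothetical sketch of shared-retrieval comparison plugins.

class ComparisonPlugin:
    def fetch(self, test_id: str) -> list:
        # Shared data-retrieval code lives here; stubbed with canned
        # data for this sketch instead of a real DB/REST call.
        return {"latency": [1.00, 1.02, 0.99]}.get(test_id, [])

    def compare(self, test_id: str, new_result: float) -> bool:
        raise NotImplementedError

class StoredGoodValue(ComparisonPlugin):
    """Compare against a stored 'good' measurement, within a tolerance."""
    def __init__(self, good: float, tolerance: float = 0.05):
        self.good, self.tolerance = good, tolerance

    def compare(self, test_id: str, new_result: float) -> bool:
        return abs(new_result - self.good) / self.good <= self.tolerance

class OldVsNew(ComparisonPlugin):
    """Compare against the mean of recent stored runs."""
    def compare(self, test_id: str, new_result: float) -> bool:
        history = self.fetch(test_id)
        baseline = sum(history) / len(history)
        # Lower-is-better metric (e.g. latency): allow 5% headroom.
        return new_result <= baseline * 1.05

print(StoredGoodValue(good=1.0).compare("latency", 1.03))  # True
print(OldVsNew().compare("latency", 1.20))                 # False
```

Because `fetch` is shared, adding a third comparison algorithm is just another small subclass, which is the "relatively easy to add" property described above.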
Below are some thoughts...

Running Tests
Ability to run in the following modes (version could be a release tar ball, or git hash):

Collecting/Reporting Data

Rendering Data

Other notes
This would be great to do one day if someone is interested in tinkering with it.
We should investigate better performance benchmark integration into the new MTT infrastructure. Performance regressions are currently hard to see, and automated tracking will give us better visibility into performance issues when commits happen, instead of when we are ramping up to a release.
See open-mpi/ompi#1831 for one case where this would be useful for Open MPI.
We need to discuss how to store the data, and how we can organize the DB structure and REST interface to make apples-to-apples comparisons easy to access. We looked at this in the past, and it's harder than one might think.
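To make the apples-to-apples difficulty concrete, here is one possible shape for the storage, sketched with an in-memory SQLite database. The table and column names are assumptions, not the actual MTT schema: the point is that the environment (cluster, compiler, node/ppn layout) lives in its own table, so results are only joined when every one of those fields matches and only the MPI version differs.

```python
# Hypothetical schema sketch; table/column names are illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE environment (
    env_id      INTEGER PRIMARY KEY,
    cluster     TEXT NOT NULL,
    compiler    TEXT NOT NULL,
    mpi_version TEXT NOT NULL,
    nodes       INTEGER NOT NULL,
    ppn         INTEGER NOT NULL
);
CREATE TABLE result (
    env_id    INTEGER REFERENCES environment(env_id),
    benchmark TEXT NOT NULL,
    metric    TEXT NOT NULL,   -- e.g. 'latency_us', 'bandwidth_MBps'
    value     REAL NOT NULL
);
""")
db.execute("INSERT INTO environment VALUES (1, 'clusterA', 'gcc-9', 'v4.0.0', 2, 16)")
db.execute("INSERT INTO environment VALUES (2, 'clusterA', 'gcc-9', 'v4.1.0', 2, 16)")
db.execute("INSERT INTO result VALUES (1, 'osu_latency', 'latency_us', 1.10)")
db.execute("INSERT INTO result VALUES (2, 'osu_latency', 'latency_us', 1.25)")

# Apples-to-apples: pair results whose environments differ ONLY in
# mpi_version. A REST endpoint could expose exactly this query.
rows = db.execute("""
SELECT r_old.value, r_new.value
FROM result r_old, environment e_old, result r_new, environment e_new
WHERE r_old.env_id = e_old.env_id AND r_new.env_id = e_new.env_id
  AND r_old.benchmark = r_new.benchmark AND r_old.metric = r_new.metric
  AND e_old.cluster = e_new.cluster AND e_old.compiler = e_new.compiler
  AND e_old.nodes = e_new.nodes AND e_old.ppn = e_new.ppn
  AND e_old.mpi_version = 'v4.0.0' AND e_new.mpi_version = 'v4.1.0'
""").fetchall()
print(rows)  # [(1.1, 1.25)]
```

The hard part alluded to above is deciding which environment fields must match exactly for a comparison to be valid; every field added to that join narrows what counts as "apples to apples".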