Investigate libraries for performance testing #304

Open
achaikou opened this issue Oct 21, 2020 · 1 comment

@achaikou
Contributor

There was an attempt to investigate libraries for performance testing long ago, but it was abandoned and forgotten.

Right now performance is measured manually during development without any strict guidelines.

As experience shows, that might not be good enough, and we might miss unexpected regressions.
We might need to test the performance of various user code snippets (load, load all metadata, read curves, follow links between objects, etc.) on different types of files (varying in size, amount of metadata, number of curves, etc.).

@achaikou
Contributor Author

Possible tools:

  1. C++ benchmark: https://github.com/google/benchmark
  2. Python benchmark: https://github.com/ionelmc/pytest-benchmark
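
For reference, a pytest-benchmark test is an ordinary pytest function that takes the benchmark fixture. A minimal sketch of a load benchmark, assuming dlisio's load entry point and a hypothetical test file:

import dlisio

def test_load(benchmark):
    # benchmark() runs the callable repeatedly and records timing statistics
    def run():
        with dlisio.load('data/representative.dlis'):  # hypothetical test file
            pass
    benchmark(run)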

Stable performance testing is difficult and the results are highly environment-dependent, but we can try to implement the following (this should be possible at least with the Python benchmarking tool):

  • warm-up rounds (execution time is highly dependent on I/O)
  • performance tests disabled by default (like pytest.ini: addopts = --benchmark-skip; see the config sketch after this list)
  • a dedicated CI job that runs performance tests only
  • run only on PRs
  • an optional job that does not block merging
  • compare the PR against master/latest release and fail on a certain threshold (run the performance tests for both versions in the same CI session to obtain comparable statistics)
  • run on different OSes, if possible
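
Most of these points map onto existing pytest-benchmark options. A sketch of the configuration, using flag names from pytest-benchmark's documentation (the 10% threshold is purely illustrative):

# pytest.ini -- benchmarks skipped by default, as suggested above
[pytest]
addopts = --benchmark-skip

# Dedicated CI job: run benchmarks only, with warm-up rounds enabled.
# On master, save a baseline:
#   pytest --benchmark-only --benchmark-warmup=on --benchmark-autosave
# On the PR, compare against that baseline and fail past the threshold:
#   pytest --benchmark-only --benchmark-warmup=on --benchmark-compare --benchmark-compare-fail=mean:10%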

Possible tests:

DLIS:

  • load
  • curves
  • load all metadata
  • object referencing/caching, e.g. the lookup below:
# tool_channels = {tool.fingerprint : tool.channels for tool in file.tools}
count = 0
for channel in file.channels:
    for tool in file.tools:
        # with caching: if channel in tool_channels[tool.fingerprint]
        if channel in tool.channels:
            count += 1
  • a 2 GB file, if we can obtain/create one without serious cost
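
The curves case could be parametrized over files of different shapes, along these lines (the file paths are hypothetical; frame.curves() is dlisio's documented way of reading curve data, and the single-logical-file assumption is for brevity):

import dlisio
import pytest

# Hypothetical files varying in size, amount of metadata and number of curves
FILES = ['data/small.dlis', 'data/many-objects.dlis', 'data/many-curves.dlis']

@pytest.mark.parametrize('path', FILES)
def test_curves(benchmark, path):
    with dlisio.load(path) as files:
        f, *rest = files         # assume the first logical file is representative
        frame = f.frames[0]      # assume at least one frame
        benchmark(frame.curves)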

LIS:

  • load
  • curves
