There was an attempt to investigate libraries for performance testing long ago, but it was abandoned and forgotten.
Right now, performance is measured manually during development, without any strict guidelines.
As experience shows, that might not be good enough, and we might miss some unexpected cases.
We might need to test the performance of various user code snippets (load, load all metadata, read curves, links between objects, etc.) on different types of files (varying in size, amount of metadata, number of curves, etc.).
Stable performance testing is difficult and the results are highly environment-dependent, but we can try to implement the following (this should be possible with standard Python benchmarking tools such as pytest-benchmark; see the sketch after this list):
- warm-up rounds (execution time is highly dependent on I/O)
- performance tests disabled by default (e.g. in pytest.ini: `addopts = --benchmark-skip`)
- a special CI job for performance tests only:
  - runs only on PRs
  - optional job, does not block merging
  - compares the PR with master/the latest release and fails on a certain threshold (run the performance tests for both versions in the same CI session to obtain comparable statistics)
  - runs on different OSes, if possible
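
A minimal sketch of what such a test could look like. The `benchmark` fixture, `pedantic` mode with `warmup_rounds`, and the `--benchmark-skip` flag all come from the pytest-benchmark plugin; the test file path is a placeholder, and the `dlisio.load` usage is an assumption based on the public API:

```python
# test_benchmarks.py -- a sketch, assuming pytest-benchmark is installed
# and a representative test file exists at the placeholder path below.
import dlisio

TESTFILE = 'data/benchmark/sample.dlis'  # placeholder path


def load_and_close():
    # load and immediately close, so every round measures a full load
    with dlisio.load(TESTFILE) as files:
        pass


def test_load(benchmark):
    # pedantic mode gives explicit control over warm-up and rounds,
    # which helps smooth out cold-cache I/O effects
    benchmark.pedantic(load_and_close, warmup_rounds=2, rounds=10,
                       iterations=1)
```

For the PR-vs-master comparison, pytest-benchmark's `--benchmark-autosave` and `--benchmark-compare-fail=mean:10%` options can save a baseline run and then fail the job when the PR run regresses beyond the threshold.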
Possible tests:

DLIS:
- load
- curves
- load all metadata
- objects referencing/caching, e.g. the snippet below:
```python
count = 0
# cached variant: tool_channels = {tool.fingerprint: tool.channels for tool in file.tools}
for channel in file.channels:
    for tool in file.tools:
        if channel in tool.channels:  # cached: channel in tool_channels[tool.fingerprint]
            count += 1
```
- a 2 GB file, if we can obtain/create one without serious costs
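
As a rough illustration, the curves and load-all-metadata cases could look like this. This is hedged: `f.frames`, `frame.curves()`, `f.channels`, and `f.tools` are based on dlisio's public API, but the exact calls and the file path are placeholders, not a final design:

```python
import dlisio

TESTFILE = 'data/benchmark/sample.dlis'  # placeholder path


def test_read_curves(benchmark):
    with dlisio.load(TESTFILE) as files:
        f, *_ = files
        frame, *_ = f.frames
        # reading every curve of a frame is a typical hot path
        benchmark(frame.curves)


def test_load_all_metadata(benchmark):
    def touch_all_metadata():
        with dlisio.load(TESTFILE) as files:
            for f in files:
                # touch channels and tools to force metadata parsing
                for channel in f.channels:
                    _ = channel.name
                for tool in f.tools:
                    _ = tool.channels
    benchmark(touch_all_metadata)
```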