Help comparing benchmarks across virtualenvs #370

Open
matthew-brett opened this issue Feb 15, 2016 · 12 comments · Fixed by #794
Labels
enhancement Triaged as an enhancement request

Comments

@matthew-brett

Sorry if this is the wrong place to ask. Sorry too that I am sure I am missing something obvious.

I am using asv on Windows: numpy/numpy#5479

I'm comparing performance of numpy in two virtualenvs, one with MKL linking, the other with ATLAS linking.

I told asv machine that I have two machines, one called 'mike-mkl' and the other 'mike-atlas', identical other than the names.

Then I ran the benchmarks in the MKL virtualenv as:

```
cd numpy/benchmarks
asv run --python=same --machine=mike-mkl
```

In the ATLAS virtualenv:

```
cd numpy/benchmarks
asv run --python=same --machine=mike-atlas
```

I was expecting this to store the two sets of benchmarks, but it appears, from the results/benchmarks.json file size and the asv preview output, that they overwrite each other.

What is the right way to record / show these benchmarks relative to one another?

@pv
Collaborator

pv commented Feb 15, 2016

--python=same implies dry-run, so it probably doesn't save any results. This may not be desired behavior.

benchmarks.json contains only benchmark metadata, not results, and is the same for all machines.

@pv
Collaborator

pv commented Feb 15, 2016

The issue here is that when you run with --python=same, asv has no way of knowing which commit the results correspond to, and so it cannot record the results.

@matthew-brett
Author

The fact that --python=same implies dry-run presumably means that, at the moment, I can't easily run tests in virtualenvs I've prepared myself?

@pv
Collaborator

pv commented Feb 15, 2016

Yes, that is not a supported use case currently.
With what's available now, the best approach is probably something along the lines of:

```
activate a; asv run --python=same -e > a.log 2>&1
activate b; asv run --python=same -e > b.log 2>&1
```

and then a small script to compare the two outputs, although the logs are not in a fully machine-friendly format.
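The "small script" could be sketched roughly as follows. This assumes each log line looks like `benchmark_name  1.50ms`; the real output of `asv run -e` is formatted differently, so the parsing here is illustrative only.

```python
import re
import sys

# Hypothetical log-line format "benchmark_name  1.50ms"; the actual
# `asv run -e` output differs, so treat this parser as a sketch.
LINE_RE = re.compile(r"^(?P<name>[\w.]+)\s+(?P<time>[\d.]+)(?P<unit>us|ms|s)$")

UNIT_SCALE = {"us": 1e-6, "ms": 1e-3, "s": 1.0}


def parse_log(lines):
    """Return {benchmark_name: time_in_seconds} for matching lines."""
    results = {}
    for line in lines:
        m = LINE_RE.match(line.strip())
        if m:
            scale = UNIT_SCALE[m.group("unit")]
            results[m.group("name")] = float(m.group("time")) * scale
    return results


def compare(a, b):
    """Return {name: b_time / a_time} for benchmarks present in both logs."""
    return {name: b[name] / a[name] for name in a.keys() & b.keys()}


if __name__ == "__main__" and len(sys.argv) == 3:
    with open(sys.argv[1]) as fa, open(sys.argv[2]) as fb:
        for name, ratio in sorted(compare(parse_log(fa), parse_log(fb)).items()):
            print(f"{name}: {ratio:.2f}x")
```

Run as `python compare_logs.py a.log b.log`; a ratio above 1.0 means the second environment was slower on that benchmark.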

@pv
Collaborator

pv commented Feb 15, 2016

To support storing results when using pre-existing virtualenvs, the following questions would need to be answered:

  • how does asv obtain a unique name for the pre-existing environment?
  • is asv allowed to install/build packages in the given environment?
  • if not, how does asv know what commit of the module was installed into the env?

@matthew-brett
Author

How about a --commit-label flag that only applies in the --python=same case?

@pv
Collaborator

pv commented Feb 15, 2016

That's the easiest option to implement; sounds good to me.
This still leaves open the question of an environment name label; gh-352 has some overlap.

@mdboom
Collaborator

mdboom commented Mar 3, 2016

What if we just extended --python to also accept the path to an executable? You could provide the executable from the desired virtualenv, and then list the different virtualenvs you want to test against in asv.conf.json. (It wouldn't be portable to other machines, because of the hardcoded paths, but that's probably OK in most cases.)
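For example, assuming gh-352-style support for executable paths in the `pythons` key, an asv.conf.json could list the interpreters of the two virtualenvs (the paths below are placeholders):

```json
{
    "pythons": [
        "/home/user/venvs/numpy-mkl/bin/python",
        "/home/user/venvs/numpy-atlas/bin/python"
    ]
}
```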

@pv
Collaborator

pv commented Mar 3, 2016

gh-352 allows --python to take a path to an executable, and assigns names such as existing-py_home_pauli_prj_scipy_numpy_benchmarks_someenv_bin_python as the name of the environment. Specifying that in the config file also works, so what's missing is the --commit-label thing.

Or, maybe we could add a new environment type similar to ExistingEnvironment, with the difference that it does the build/install/uninstall cycle in the same way as Virtualenv. Or, maybe change Virtualenv to accept a path to a Python executable instead of a version number, in which case it would only do the build/install/uninstall parts and skip the environment creation.

@pv pv added the enhancement Triaged as an enhancement request label Jun 5, 2016
@jeremiedbb
Contributor

I encountered the same use case. I made a PR to enable this option (#794). Please let me know if that's what you had in mind.

@matthew-brett
Author

Thanks - yes. If the changes allow me to specify an arbitrary label, and therefore save the results even on the same machine, then yes, that is what I needed.

@pv pv closed this as completed in #794 May 18, 2019
@pv pv reopened this May 25, 2019
@mattip
Contributor

mattip commented Jun 12, 2019

I am particularly interested in comparing across Python implementations, i.e. when using run --python=3.6 vs. run --python=pypy3. From #789 I understand I am asking for an enhancement to the revision1 revision2 positional argument syntax of compare. What form would such an enhancement take? revision is meant to be a git-style hash, right? So one suggestion might be to use : to delimit environments or pythons: :python:3.6:76d6f4eea?
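Purely for illustration, the proposed `:`-delimited revision spec could be parsed along these lines (nothing like this exists in asv today; the function name and spec format are just the suggestion above):

```python
def parse_revision_spec(spec):
    """Split an extended revision spec of the hypothetical form
    ':python:3.6:76d6f4eea' into (env_key, env_value, commit).
    A bare spec like '76d6f4eea' is returned as (None, None, spec)."""
    if spec.startswith(":"):
        # Leading ':' yields an empty first field; cap the split so the
        # commit hash itself may still contain no further ':' handling.
        _, key, value, commit = spec.split(":", 3)
        return key, value, commit
    return None, None, spec
```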
