
per-run untimed setup #11

Open
minrk opened this issue Dec 22, 2011 · 7 comments

Comments

@minrk
Contributor

minrk commented Dec 22, 2011

In pyzmq, I want to test the Python overhead of things like recv_multipart, and I don't want to time the corresponding send/poll. This means I need setup per run, not just a namespace created once. The timeit template provides exactly this (currently just specified as pass), but it's not exposed to the Benchmark object.
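For reference, a rough sketch of the hook in question: timeit builds its timing function from an undocumented module-level template string, and the setup slot sits outside the timed loop. This is the per-run slot that Benchmark currently hard-codes as pass (illustrative only, not vbench API):

import timeit

# The template's setup slot is interpolated outside the timed
# region (an undocumented internal, so treat this as illustrative):
print(timeit.template)

# Setup is normally supplied via a Timer:
t = timeit.Timer(stmt="x.append(1)", setup="x = []")
print(t.timeit(number=1000))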

@minrk
Contributor Author

minrk commented Dec 22, 2011

Just kidding - turns out timeit's setup is also run once per timer bundle, so it doesn't help. I still need this, or some approximation of it, because I need the number of sends to match the number of recvs, but I also need to not time the sends.

@wesm
Owner

wesm commented Dec 22, 2011

Maybe we need to write a Cython timer class that makes system time calls. The main issue I see is that if the operation you're timing takes < 5 microseconds, then code like:

import time

setup_call()                   # per-run, untimed setup
s = time.time()
bench_call()                   # the operation under test
elapsed = time.time() - s

is going to lead to a lot of overhead. Should probably look at https://bitbucket.org/robertkern/line_profiler/src for some inspiration =)

@minrk
Contributor Author

minrk commented Dec 22, 2011

I'm fine making the runs bigger (e.g. recv_multipart 100 times), but the important part is that, as things stand, I can't make the number of sends and recvs match up, so it's not possible to benchmark one side of anything.
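To make the shape of the problem concrete, a minimal pyzmq sketch (a hypothetical inproc PUSH/PULL pair, not vbench code): queue exactly ncalls sends untimed, then time exactly ncalls recvs:

import time
import zmq

ctx = zmq.Context()
push = ctx.socket(zmq.PUSH)
pull = ctx.socket(zmq.PULL)
push.bind("inproc://bench")
pull.connect("inproc://bench")

ncalls = 100

# untimed setup: queue exactly ncalls messages...
for i in range(ncalls):
    push.send_multipart([b"header", b"body"])

# ...so the timed loop can do exactly ncalls recvs
t0 = time.time()
for i in range(ncalls):
    pull.recv_multipart()
elapsed = time.time() - t0
print(elapsed / ncalls)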

@wesm
Owner

wesm commented Dec 22, 2011

yeah, I think it would be good to have fast micro-benchmarks with optional setup. Cython would probably be the right place to get the least timing overhead -- you can call eval on a compiled code object, right?
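For reference, a minimal pure-Python sketch of the compile-once, eval-many idea (the statement and namespace here are placeholders); the remaining cost is the per-call eval dispatch, which Cython could shave further:

import time

ns = {"lst": []}
code = compile("lst.append(1)", "<bench>", "eval")  # parsed once

t0 = time.time()
for i in range(100000):
    eval(code, ns)  # no re-parsing per call
t1 = time.time()
print((t1 - t0) / 100000)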

@wesm
Owner

wesm commented Dec 22, 2011

Also feel free to hack on this code as much as you like :)

@minrk
Contributor Author

minrk commented Dec 22, 2011

I've figured out that I can fake it because if I specify ncalls, I know that the timed part should be called 3*ncalls times (I think). But this means that I can never allow timeit to make its informed decision about how many times the test should be run.
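A sketch of that workaround, assuming timeit's default repeat of 3 in this era (queue_one_send and do_one_recv are hypothetical stand-ins): with repeat=3 and number=ncalls, the timed statement runs 3*ncalls times, so the untimed side can be pre-run to match:

import timeit

ncalls = 100
repeat = 3  # timeit's default repeat at the time

# pre-run the untimed side 3 * ncalls times so every timed
# call has a matching counterpart
for i in range(repeat * ncalls):
    queue_one_send()

t = timeit.Timer("do_one_recv()", setup="from __main__ import do_one_recv")
print(min(t.repeat(repeat=repeat, number=ncalls)))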

@minrk
Contributor Author

minrk commented Dec 23, 2011

What I really want is:

for i in range(ncalls):
    setup_call()
tic()
for i in range(ncalls):
    bench_call()
toc()

Not setup inside the timed loop. I just want the numbers to match up without having to hard-code ncalls and timeit.repeat into all of my setup code.
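A self-contained version of that pattern, with time.time standing in for tic/toc (setup_call and bench_call are whatever the benchmark supplies):

import time

def run_once(setup_call, bench_call, ncalls):
    # all per-run setup happens before the clock starts
    for i in range(ncalls):
        setup_call()
    # the timed loop then measures bench_call alone
    t0 = time.time()
    for i in range(ncalls):
        bench_call()
    return time.time() - t0

# e.g. elapsed = run_once(queue_one_send, pull.recv_multipart, 100)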
