adding jitted scalar maximization routine, first build #416
Conversation
version = '0.3.8'
this will be set in setup.py
I will update this tomorrow morning.
thanks @jstac this looks great - I will review more thoroughly tomorrow morning. We can open issues to track which other
Very nice!
from numba import jit, njit

@njit
def maximize_scalar(func, a, b, xtol=1e-5, maxiter=500):
It will be helpful to add *args (to pass to func).
@oyamad thanks, I have added this, although unfortunately you need to be quite careful to pass in a tuple and not a scalar, as I can't figure out how to check the type inside a jitted function.
Sorry, I don't see what you mean. Can you elaborate?
And is there any difference between adding args=() and *args?
In scipy, if something in args is passed that is not a tuple (i.e. a scalar), the function will convert it to a tuple. I don't seem to be able to get isinstance to work inside a jitted function. If you only want to set one fixed argument, you need to pass args=(y,), which is somewhat annoying.
I guess we could use *args - I was just following scipy's style.
I see, thanks.
Following scipy's style exactly makes sense, while *args looks more Pythonic, allowing passing e.g. y=5. I can't say which is better...
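For concreteness, here is a minimal sketch of the tuple requirement under discussion. The import path and the (fval, xf) return order are taken from this thread and may not match the final merged API.

from numba import njit
from quantecon.optimize import maximize_scalar  # subpackage/name as proposed in this PR

@njit
def f(x, y):
    # concave in x, with a fixed parameter y supplied through args
    return -(x - y)**2

# args must already be a tuple; note the trailing comma for a single value
fval, xf = maximize_scalar(f, -2.0, 2.0, args=(1.0,))

# Passing a bare scalar, e.g. args=1.0, cannot be silently wrapped into a tuple
# inside the jitted routine (this is the isinstance limitation mentioned above).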
fval = -fx

return fval, xf
I think a status flag (ierr in scipy.fminbound) should also be returned, which in scipy.fminbound is 0 if converged and 1 if maxiter is reached.
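To illustrate the suggestion, a toy sketch of how a jitted routine can carry an ierr-style flag; this is a stand-in golden-section maximizer written for this comment, not the code in this PR.

from numba import njit

@njit
def toy_max_with_flag(f, a, b, xtol=1e-5, maxiter=500):
    # Golden-section search for a maximum, returning (fval, xf, status)
    # with status = 0 if converged and 1 if maxiter was reached,
    # mirroring ierr in scipy's fminbound.
    gr = 0.5 * (5.0**0.5 - 1.0)
    status = 1
    c = b - gr * (b - a)
    d = a + gr * (b - a)
    for _ in range(maxiter):
        if f(c) > f(d):
            b = d       # maximum lies in [a, d]
        else:
            a = c       # maximum lies in [c, b]
        c = b - gr * (b - a)
        d = a + gr * (b - a)
        if abs(b - a) < xtol:
            status = 0
            break
    xf = 0.5 * (a + b)
    return f(xf), xf, status

@njit
def g(x):
    return -(x - 1.0)**2

fval, xf, status = toy_max_with_flag(g, -2.0, 3.0)  # status == 0 here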
The minimum version number in Line 116 in 649cc55 is 0.38.
fval : float
    The maximum value attained
xf : float
    The maxizer
Did you mean "maximizer"?
Thanks @QBatista!
It looks like
This is very interesting!
@mmcky Great news. (BTW, lots of people are having problems with the library because they're not at the latest version of Numba. I guess there's only so much we can do on this front...)
@oyamad Thanks for your feedback.
@albop It would be great to have your input. @chrishyland and his friend will be working on these and subsequent additions close to full time in July, so they can build on your results.
@chrishyland Please loop your friend Paul into this discussion.
@albop Did you still want to look at this before we merge?
yes @jstac, 2h more please ;-)
Dear John, maybe we should put a news note on the quantecon page noting the need to have the latest version of numba.
@jstac: I've read the code carefully and did some timings. It all sounds good to me. In particular, the overhead of using this function vs the manual inclusion of its logic seems small or nonexistent in the examples I have tried.
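For anyone who wants to reproduce this kind of check, a rough sketch of one side of the comparison (the packaged call inside a jitted loop); repeating the timing with the maximization logic pasted directly into the loop gives the "manual inclusion" side. The import path, signature and return order here are assumptions based on the thread above.

import time
import numpy as np
from numba import njit
from quantecon.optimize import maximize_scalar  # path assumed from this PR

@njit
def f(x, y):
    return -(x - y)**2

@njit
def solve_many(ys):
    # call the packaged routine once per parameter value, inside jitted code
    out = np.empty(len(ys))
    for i in range(len(ys)):
        fval, xf = maximize_scalar(f, -5.0, 5.0, args=(ys[i],))
        out[i] = xf
    return out

ys = np.linspace(-1.0, 1.0, 10000)
solve_many(ys)                      # warm-up call triggers compilation
t0 = time.perf_counter()
solve_many(ys)
print("seconds per solve:", (time.perf_counter() - t0) / len(ys))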
An optimize subpackage is a very good idea.
Thanks all for your input. @albop It's a good point about the name. Perhaps we can leave it as is and later add a
@thomassargent30 That's a good point. A few people had problems with this at the recent workshops. @natashawatkins Would you mind adding a news item when this is merged that emphasizes the need to update to the latest version of Numba, preferably by obtaining the latest version of Anaconda?
@jstac:
Thanks @albop. If we return the number of function calls that brings us up to parity with
where
I'm open to changing the name to, say,
Don't want to nitpick, but don't you want to return
I switched the order of
Unless there are other comments I think this is ready to merge, @mmcky.
@jstac I will add this to the docs.
@jstac documentation is ready to go: https://quanteconpy.readthedocs.io/en/add_optim_subpackage/
Thanks everyone for this PR. I will merge tonight if there are no other comments.
Hi @mmcky, thanks for updating the docs. Could you please merge this now?
Hi @mmcky, I suspect that you're otherwise occupied with more important things, so just this once I'll go ahead and merge. I hope that's OK.
Hi @mmcky, in this PR I'm adding an optimize subpackage and a first function within that subpackage called maximize_scalar. There's an example of usage within the documentation for that function.
The code and algorithm come from SciPy's fminbound. I've stripped out some tests and jitted the function. We require numba version 0.38 and above because the routine takes a jitted function as its first argument (the scalar function to be maximized).
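In case it helps reviewers, a minimal sketch of the intended usage; the (fval, xf) return order follows the diff earlier in the thread, and a later comment notes the order was switched, so treat this as illustrative only.

from numba import njit
from quantecon.optimize import maximize_scalar  # subpackage added in this PR

@njit
def f(x):
    # simple concave objective with its maximum at x = 1
    return -(x - 1.0)**2

# The objective must itself be jitted: maximize_scalar is compiled with @njit
# and receives f as an argument, which is why Numba >= 0.38 is needed.
fval, xf = maximize_scalar(f, -2.0, 4.0)
# fval is close to 0.0 and xf is close to 1.0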
The intention is to add further scalar maximization routines and at least one multivariate routine, as well as perhaps root finders. Essentially, the structure will mimic scipy.optimize; the feature set will no doubt be smaller, but every function should be jitted and accept jitted functions to act upon. The purpose of functions in this subpackage is not necessarily to maximize speed within the optimization routine itself, but rather to ensure that we can jit compile outer functions like Bellman operators, which include a maximization routine as one step of the algorithm.
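As a sketch of that intended pattern (not code from this PR), here is a toy cake-eating Bellman operator that is jitted end to end and uses the scalar maximizer as one inner step; the import path, the args keyword and the return order are assumptions taken from the discussion above.

import numpy as np
from numba import njit
from quantecon.optimize import maximize_scalar  # as proposed in this PR

@njit
def objective(c, y, w, grid, beta):
    # RHS of a toy cake-eating Bellman equation: log utility today plus
    # the discounted continuation value, interpolated on the grid
    return np.log(c) + beta * np.interp(y - c, grid, w)

@njit
def bellman_operator(w, grid, beta=0.96):
    # The whole operator is compiled; the scalar maximization is one inner step.
    Tw = np.empty_like(w)
    for i in range(len(grid)):
        y = grid[i]
        fval, xf = maximize_scalar(objective, 1e-10, y - 1e-10,
                                   args=(y, w, grid, beta))
        Tw[i] = fval
    return Tw

grid = np.linspace(1e-3, 2.0, 100)
w = np.log(grid)                  # initial guess for the value function
Tw = bellman_operator(w, grid)    # runs entirely in nopython mode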
Thanks to @natashawatkins for helping me pull this together.
@sglyon @albop @cc7768 @chrishyland FYI