Gauss-Hermite-based normal and lognormal quadrature nodes and weights #258
Conversation
… functions for converting between location and scale in normal<->lognormal.
I like the idea of a keyword, not so much because I like it for itself but because it encourages us to think about the elements that are common across all the implementations. But more deeply, we need to rethink how we deal with distributions; we should pass the abstract form of the distribution ("lognormal" or whatever) as far down the chain as we can, and only force a discretization at the last point where it is necessary. To be more concrete, suppose that in principle the model being used has a lognormal distribution with a given mean and standard deviation. Then the object being passed to various components in HARK (solvers, simulators, etc.) should not just be a list of points and probabilities; it should contain all the info needed to GENERATE those points from the deeper Platonic form of the distribution and the Aristotelian calibration of its particular parameters. This is (kind of) the way I think Pablo is doing things in dolo.
I was going to suggest representing a given distribution as a class that would store the parameters and allow the user to generate specific types of nodes and weights on request (right away if they want, or later if there isn't a reason to do it right away). Then each type of approximation would just be a method of the class. Instead of passing a list of shock means, variances, and counts, the user could just pass the appropriate distribution class. However, for the sake of this PR, I'm not sure it is what we want to do. We can always move these functions into methods of a class later.
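A rough sketch of that idea, just to make the discussion concrete. The class name, the `approx` method, and the `method` keyword are all made up here for illustration, not a proposal for HARK's actual API; the equiprobable branch uses the standard conditional-mean construction for the lognormal.

```python
import numpy as np
from scipy.stats import norm


class Lognormal:
    """Hypothetical distribution object: it stores the parameters of the
    underlying normal and only discretizes itself when asked to."""

    def __init__(self, mu=0.0, sigma=1.0):
        self.mu = mu        # mean of log(X)
        self.sigma = sigma  # standard deviation of log(X)

    def approx(self, N, method="equiprobable"):
        """Return (pmf, nodes) for an N-point discrete approximation,
        with the type of approximation selected by keyword."""
        if method == "equiprobable":
            # Nodes are the conditional means of X on N equal-probability
            # slices of the distribution; each node gets probability 1/N.
            z = norm.ppf(np.arange(N + 1) / N)   # slice boundaries in z-space
            partial = norm.cdf(z - self.sigma)   # scaled partial expectations
            nodes = N * np.exp(self.mu + self.sigma**2 / 2) * np.diff(partial)
            return np.full(N, 1.0 / N), nodes
        raise NotImplementedError(f"method {method!r} is not sketched here")
```

A solver would then receive `Lognormal(mu, sigma)` and call `.approx(N, method=...)` only at the point where it actually needs discrete points.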
How does dolo represent these objects? I think it is already doing pretty much what you describe: information is stored about the nature of the distribution and its parameters, and then the object can be told to discretize itself.
Wherever possible I want to do things the way dolo does so that the interaction between dolo and HARK is as direct as possible.
Yes, but maybe we should discuss this in a separate issue and include Matt and Pablo.
@mnwhite any thoughts here?
Merging as soon as checks finish running (I merged in master).
@pkofod can you add release notes?
Quite randomly, I found that my name was mentioned here. I opened an issue against dolo to add more i.i.d. distributions to the dolo language. From my perspective, which Python function to use is not a problem, and I'm happy to reuse the ones from HARK or from another package for the actual discretization. However, the crucial point to me is to settle on a predictable, consistent naming scheme for the yaml file. One option is to use the conventions from R-dist as in https://pypi.org/project/distcan/ and Distributions.jl.
I am in complete agreement that a predictable, consistent naming scheme is essential, and would be happy to have HARK adopt as its standard whatever you judge to be a good choice.
Reading the docs at the link, I'm pretty much on board with the distcan point that the scipy way of doing things has gone in the direction of what I like to call "gratuitous generality." Yes, most distributions have a "location" object and a "variation" object, and maybe you can even come up with abstract categories for higher-order aspects. But if everybody uses μ and σ for the mean and standard deviation of a normal distribution, it seems perverse to insist that because μ is the location parameter and σ is the variation parameter, they should be called "loc" and "vari" (or whatever) instead of μ and σ.
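As a concrete illustration of that naming point (assuming nothing about what HARK will actually adopt), compare how scipy spells the lognormal parameters with a thin wrapper that uses the model's own symbols; the wrapper name is purely illustrative:

```python
import numpy as np
from scipy.stats import lognorm

mu, sigma = 0.0, 0.1   # parameters of log(X)

# scipy's general parameterization: sigma is the "shape" s, and mu only
# enters through scale = exp(mu).
scipy_style = lognorm(s=sigma, scale=np.exp(mu))

# A wrapper spelled the way the model is written (name is illustrative):
def LogNormal(mu, sigma):
    return lognorm(s=sigma, scale=np.exp(mu))

# Both give the textbook lognormal mean exp(mu + sigma^2 / 2).
assert np.isclose(LogNormal(mu, sigma).mean(), np.exp(mu + sigma**2 / 2))
```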
@llorracc @mnwhite
This supersedes #163 by just using what's in numpy/scipy, and actually uses it in normal and lognormal approximation functions.
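For readers of the thread, the core construction is small; here is a sketch of how the numpy routine maps onto the normal and lognormal cases (the function names are illustrative, not necessarily the ones used in the PR):

```python
import numpy as np


def approx_normal_gauss_hermite(N, mu=0.0, sigma=1.0):
    """N-point Gauss-Hermite approximation of N(mu, sigma^2)."""
    # hermgauss targets the weight exp(-x^2), so change variables
    # x -> mu + sqrt(2)*sigma*x and renormalize the weights by sqrt(pi).
    x, w = np.polynomial.hermite.hermgauss(N)
    nodes = mu + np.sqrt(2.0) * sigma * x
    pmf = w / np.sqrt(np.pi)
    return pmf, nodes


def approx_lognormal_gauss_hermite(N, mu=0.0, sigma=1.0):
    """Same nodes, exponentiated; mu and sigma describe log(X)."""
    pmf, nodes = approx_normal_gauss_hermite(N, mu, sigma)
    return pmf, np.exp(nodes)
```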
What do you think would be the best thing to do here design-wise? We can either do what I did here and create a function with a different name, or we can have a "poly" function for the normal and lognormal respectively, that are controlled by a keyword ("equiprobable", "gausshermite", "sample", etc.).
I also added two small functions that convert means and variances between normal <-> lognormal.
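For reference, those conversions are the standard lognormal moment identities; a sketch with illustrative names (not necessarily the names used in the PR):

```python
import numpy as np


def lognormal_moments_to_normal(mean, variance):
    """Mean/variance of a lognormal X -> (mu, sigma) of log(X)."""
    sigma2 = np.log(1.0 + variance / mean**2)
    return np.log(mean) - sigma2 / 2.0, np.sqrt(sigma2)


def normal_to_lognormal_moments(mu, sigma):
    """(mu, sigma) of log(X) -> mean/variance of the lognormal X."""
    mean = np.exp(mu + sigma**2 / 2.0)
    variance = (np.exp(sigma**2) - 1.0) * np.exp(2.0 * mu + sigma**2)
    return mean, variance
```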
I will add some tests.
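One natural self-contained test is that the Gauss-Hermite construction reproduces the moments it targets; a sketch written directly against numpy rather than against the PR's function names:

```python
import numpy as np


def test_lognormal_gauss_hermite_mean():
    mu, sigma, N = 0.5, 0.2, 15
    x, w = np.polynomial.hermite.hermgauss(N)
    pmf = w / np.sqrt(np.pi)
    nodes = np.exp(mu + np.sqrt(2.0) * sigma * x)
    # Probabilities sum to one and the discrete mean matches exp(mu + sigma^2/2).
    assert np.isclose(pmf.sum(), 1.0)
    assert np.isclose(np.dot(pmf, nodes), np.exp(mu + sigma**2 / 2))
```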