
Gauss Hermite-based normal and lognormal quadrature nodes and weights #258

Merged
12 commits merged into econ-ark:master on May 14, 2019

Conversation

pkofod
Contributor

pkofod commented Apr 26, 2019

@llorracc @mnwhite
Supersedes #163 by just using what's in numpy/scipy, and actually uses it in normal and lognormal approximation functions.

What do you think would be the best thing to do here, design-wise? We can either do what I did here and create a function with a different name, or we can have a "poly" function for the normal and lognormal respectively, controlled by a keyword ("equiprobable", "gausshermite", "sample", etc.).

I also added two small functions that convert means and variances between normal <-> lognormal.

I will add some tests.
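For readers following along, here is a minimal sketch of what such functions can look like on top of NumPy's Gauss-Hermite routine. The function names are illustrative only, not this PR's actual API:

```python
import numpy as np

def normal_gauss_hermite(mu, sigma, n):
    """Nodes and weights for E[f(X)] with X ~ N(mu, sigma^2).

    hermgauss uses the physicists' weight exp(-x^2), so the change of
    variables x -> mu + sqrt(2)*sigma*x turns it into a normal expectation,
    and dividing the weights by sqrt(pi) makes them sum to one.
    """
    x, w = np.polynomial.hermite.hermgauss(n)
    return mu + np.sqrt(2.0) * sigma * x, w / np.sqrt(np.pi)

def lognormal_gauss_hermite(mu, sigma, n):
    """Same nodes pushed through exp() for Y = exp(X)."""
    nodes, weights = normal_gauss_hermite(mu, sigma, n)
    return np.exp(nodes), weights

def lognorm_to_norm(mean, var):
    """Mean and variance of a lognormal -> (mu, sigma) of the underlying normal."""
    sigma2 = np.log(1.0 + var / mean**2)
    return np.log(mean) - sigma2 / 2.0, np.sqrt(sigma2)
```

As a quick check, with `nodes, weights = lognormal_gauss_hermite(mu, sigma, 7)`, the quadrature sum `np.dot(weights, nodes)` reproduces the lognormal mean exp(mu + sigma^2/2) quite accurately for moderate sigma.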

pkofod added 3 commits April 26, 2019 16:14
… functions for converting between location and scale in normal<->lognormal.
@llorracc
Collaborator

What do you think would be the best thing to do here, design-wise? We can either do what I did here and create a function with a different name, or we can have a "poly" function for the normal and lognormal respectively, controlled by a keyword ("equiprobable", "gausshermite", "sample", etc.).

I like the idea of a keyword, not so much because I like it for itself but because it encourages us to think about the elements that are common across all the implementations.

But more deeply, we need to rethink how we deal with distributions; we should pass the abstract form of the distribution ("lognormal" or whatever) as far down the chain as we can, and only force a discretization at the last point where it is necessary.

To be more concrete, suppose that in principle the model being used has a lognormal distribution with a given mean and standard deviation. Then the object being passed to various components in HARK (solvers, simulators, etc) should not just be a list of points and probabilities, it should contain all the info needed to GENERATE those points from the deeper Platonic form of the distribution and Aristotelian calibration of its particular parameters.

This is (kind of) the way I think Pablo is doing things in dolo.

@pkofod
Contributor Author

pkofod commented Apr 27, 2019

I was going to suggest representing a given distribution as a class that stores the parameters and lets the user generate specific types of nodes and weights on request (right away if they want, or later if there isn't a reason to do it right away). Each type of approximation would then just be a method on the class. Instead of passing a list of shock means, variances, and counts, the user could just pass the appropriate distribution object. However, for the sake of this PR, I'm not sure that is what we want to do. We can always move these functions into methods of a class later.
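A minimal sketch of the kind of object both comments describe, with discretization deferred behind a keyword. All names here are hypothetical, not HARK's eventual API:

```python
import numpy as np

class Lognormal:
    """Hypothetical distribution class: stores parameters, discretizes on request."""

    def __init__(self, mu=0.0, sigma=1.0):
        self.mu, self.sigma = mu, sigma

    def discretize(self, n, method="gauss-hermite"):
        """Return (nodes, weights); each approximation type is one branch or method."""
        if method == "gauss-hermite":
            x, w = np.polynomial.hermite.hermgauss(n)
            nodes = np.exp(self.mu + np.sqrt(2.0) * self.sigma * x)
            return nodes, w / np.sqrt(np.pi)
        raise NotImplementedError(method)

# A solver can then accept the distribution itself and discretize only at
# the point where an expectation is actually computed:
shocks, probs = Lognormal(mu=-0.5, sigma=1.0).discretize(7)
```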

@llorracc-git
Contributor

llorracc-git commented Apr 27, 2019 via email

@pkofod
Contributor Author

pkofod commented Apr 28, 2019

Yes, but maybe we should discuss this in a separate issue and include Matt and Pablo.

@pkofod
Contributor Author

pkofod commented May 14, 2019

@mnwhite any thoughts here?

@mnwhite
Contributor

mnwhite commented May 14, 2019

Merging as soon as checks finish running (I merged in master).

@mnwhite merged commit 06f1a26 into econ-ark:master on May 14, 2019
@pkofod deleted the gausherm branch on May 14, 2019 at 15:35
@shaunagm
Contributor

@pkofod can you add release notes?

@albop
Collaborator

albop commented May 17, 2019

Quite randomly, I found that my name was mentioned here. I opened an issue against dolo to add more i.i.d. distributions to the dolo language. From my perspective, which Python function to use is not a problem, and I'm happy to reuse the ones from HARK or from another package for the actual discretization. However, the crucial point to me is to settle on a predictable, consistent naming scheme for the yaml file. One option is to use R-style distribution conventions, as in https://pypi.org/project/distcan/ and Distributions.jl.
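To make the parametrization problem concrete: scipy's lognorm takes a shape s and a scale, so expressing the (mu, sigma) convention used by R and Distributions.jl requires a translation like the following (a sketch of the naming issue, not dolo's or HARK's actual spec):

```python
import numpy as np
from scipy import stats

mu, sigma = -0.5, 1.0
# scipy's convention: s is the std dev of log(Y), scale is exp(mean of log(Y))
d = stats.lognorm(s=sigma, scale=np.exp(mu))
# Under an R/Distributions.jl-style scheme this would read LogNormal(mu, sigma);
# a yaml spec could standardize on that naming.
assert np.isclose(d.mean(), np.exp(mu + sigma**2 / 2))
```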

@pkofod
Contributor Author

pkofod commented May 17, 2019

@pkofod can you add release notes?

"#258 adds functions to calculate quadrature nodes and weights for numerically evaluating expectations in the presence of (log-)normally distributed random variables.."

@llorracc
Collaborator

llorracc commented May 17, 2019 via email
