
First draft of DCEGM #206

Closed
wants to merge 19 commits

Conversation

@pkofod (Contributor) commented Nov 2, 2018

More discrete choice fun for HARK.

I'm going to start thinking about making parts of this code prettier, but it works. A couple of things are quite annoying: Python does not follow IEEE 754 for division, so -1.0/V_T becomes numpy.divide(-1.0, V_T). Another is that nanargmax and nanmax throw errors on all-NaN slices, but I think I might be better off either (a) writing a loop in numba or (b) setting those entries to -inf instead of NaN.
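To make option (b) concrete, something like this (simplified sketch, not the exact code in this PR):

```python
import numpy as np

# Choice-specific values on a grid; NaN marks choices that are infeasible there.
V = np.array([[1.0, np.nan, 2.0],
              [np.nan, np.nan, np.nan]])  # second row: every choice infeasible

# np.nanargmax(V, axis=1) raises "ValueError: All-NaN slice encountered" here.

# Workaround: replace NaN with -inf so plain argmax/max are well defined.
V_safe = np.where(np.isnan(V), -np.inf, V)
best_choice = np.argmax(V_safe, axis=1)
best_value = np.max(V_safe, axis=1)   # -inf flags "all choices infeasible"
```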

Anyway, the notebook is very drafty, but it plots the following three figures to verify that things work as expected:

[Figures: choicespecific, workers, beginning]

@pkofod (Contributor, Author) commented Nov 2, 2018

I might add that I have yet to do the part where you adjust the m-grid to include values that are epsilon larger than the m values right at the kinks, so that you get those really sharp drops, but that's a minor thing.
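The fix itself is tiny, something along these lines (sketch only, names are not from this PR):

```python
import numpy as np

def insert_kink_points(m_grid, m_kinks, eps=1e-8):
    """Add grid points epsilon above each kink so the discontinuities stay sharp."""
    m_plus = np.asarray(m_kinks) * (1.0 + eps)
    return np.unique(np.concatenate([np.asarray(m_grid), m_plus]))

# usage: m_grid = insert_kink_points(m_grid, m_kinks); then re-evaluate cFunc on it
```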

@mnwhite (Contributor) commented Nov 2, 2018

On division: Isn't the Python deviation from IEEE standard only in the least significant bit, and only about half the time (i.e. it's a rounding issue)? Something like that shouldn't create any problems, right?

On nanargmax: Yeah, this puts the "arg" in "argmax". Sorry, bad. Anyway, this issue comes up for me in my discrete choice models. One solution is, as you say, to have out-of-bounds or illegal states return a value of -inf rather than nan. The other is to do pre-checks / "remove" queries of states that are known to be illegal for all discrete choices.

On the model you're showing figures for, what are the sources of risk? From all of the discontinuities, I would have guessed no risk, but the cFunc is concave, so...? Unless the concavity is coming solely from the borrowing constraint.

@pkofod (Contributor, Author) commented Nov 2, 2018

> On division: Isn't the Python deviation from IEEE standard only in the least significant bit, and only about half the time (i.e. it's a rounding issue)? Something like that shouldn't create any problems, right?

The problem I've encountered here relates to division by zero. IEEE 754 states that x/0.0 is sign(x)*Inf, unless x is zero itself, in which case it's NaN. Python throws an error instead! This means that operations that rely on going back and forth between infinities and division by zero will fail. It's no big deal since numpy is of course IEEE 754 compliant (it's also numerically oriented, whereas Python in general is not); it's just ugly, imo.
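To make it concrete (just an illustration, not code from the PR):

```python
import numpy as np

# -1.0 / 0.0          # plain Python: raises ZeroDivisionError
np.divide(-1.0, 0.0)  # numpy: -inf (IEEE 754 sign(x)*Inf), with a RuntimeWarning
np.divide(0.0, 0.0)   # numpy: nan  (IEEE 754: 0/0 is NaN)

# hence -1.0/V_T has to be written as np.divide(-1.0, V_T) when V_T may contain
# zeros, so the zeros map to infinities instead of raising an exception.
```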

> On nanargmax: Yeah, this puts the "arg" in "argmax". Sorry, bad.

I feel you: https://github.com/econ-ark/HARK/pull/206/files#diff-2a9e2867bd6ec3a09643d03e778a1d00R405

> Anyway, this issue comes up for me in my discrete choice models. One solution is, as you say, to have out-of-bounds or illegal states return a value of -inf rather than nan. The other is to do pre-checks / "remove" queries of states that are known to be illegal for all discrete choices.

Yes, I went with the pre-checking. I'll try to write out each version and see which one is better.

> On the model you're showing figures for, what are the sources of risk? From all of the discontinuities, I would have guessed no risk, but the cFunc is concave, so...? Unless the concavity is coming solely from the borrowing constraint.

Yes, no risk. This is the version in the DCEGM paper. Minor tweaks will come in the next commits to allow for income risk.

@pkofod (Contributor, Author) commented Nov 2, 2018

Here's what will happen if you set model = dcegm.RetiringDeaton(sigma=0.0005) to add some taste shocks / a bit of logit smoothing:
[Figure: bitoftaste]

@pkofod (Contributor, Author) commented Nov 3, 2018

@mnwhite Where would something like the discreteEnvelope function in this PR fit in HARK? It could be called something completely different, I'm completely indifferent, but I've been calling it that in the three open PRs on discrete choice model variations. I call it that because it takes the choice-specific value functions and calculates the upper envelope of those, as well as the policies / conditional choice probabilities in the discrete dimension of the action space.
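Roughly, the idea is the standard extreme-value logsum; a simplified sketch (not the exact code in this PR):

```python
import numpy as np

def discrete_envelope(Vs, sigma):
    """Upper envelope over choice-specific values plus choice probabilities.

    Vs    : (n_choices, n_grid) array of choice-specific value functions
    sigma : scale of the extreme-value taste shocks (sigma -> 0 is the hard max)
    """
    if sigma == 0.0:
        V = np.max(Vs, axis=0)
        P = np.zeros_like(Vs)
        P[np.argmax(Vs, axis=0), np.arange(Vs.shape[1])] = 1.0
        return V, P
    # log-sum-exp with the max subtracted for numerical stability
    maxV = np.max(Vs, axis=0)
    expV = np.exp((Vs - maxV) / sigma)
    sumexp = np.sum(expV, axis=0)
    V = maxV + sigma * np.log(sumexp)  # smoothed upper envelope
    P = expV / sumexp                  # conditional choice probabilities
    return V, P
```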

Is this something that should go in HARK.interpolation? I've asked a similar question before, and I got the impression that anything "handling functions, approximations of functions, operations on (approximations to) functions" belongs there. I guess it could also be a "discrete choice" module. Looking at the current content of HARK it's not obvious to me where to put it, but since I'm using it in all three PRs, it clearly should be available from somewhere inside HARK if discrete(/continuous) choices [potentially with EV taste shocks] are going to play a role in the toolkit, and it seems silly to repeat the same function across several folders.

@llorracc (Collaborator) commented Nov 9, 2018

I think probably it should not go into HARK.interpolation because it's not really about interpolation (and Pablo Winant will be restructuring what's in there now, so we don't want to add more content to it until he is done). Where it SHOULD go is something that either (a) Matt will have a clear answer to or (b) we need to have an offline planning discussion to derive a generic answer to questions like this.

@mnwhite (Contributor) commented Nov 9, 2018 via email

@pkofod (Contributor, Author) commented Nov 9, 2018

> Well, I was going to say it should go in HARK.interpolation, because it is about function representation, and that's what's in there.

This was the understanding I got from the discussion on zulip, but I'm happy to place it anywhere. If interpolation is going to be about "interpolation" as the term is normally used, then we need another module for things like upper/lower envelopes, etc.

@pkofod (Contributor, Author) commented Nov 21, 2018

I've cleaned up some things, done some things in a slightly different way, and added simple income uncertainty that is handled using simulation; I should use a better quadrature scheme here and make the names match those in other modules.

@pkofod (Contributor, Author) commented Nov 21, 2018

I've used the GH quadrature points from numpy. I just realized there's #163 . I'm not sure I agree that we should have an actual algorithm in HARK to calculate these nodes and weights, but a lognormal/normal wrapper might be useful.
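The wrapper I have in mind would be something like this (sketch only, using numpy's hermgauss; not existing HARK API):

```python
import numpy as np

def normal_gauss_hermite(mu, sigma, n):
    """Nodes/weights to approximate E[f(X)] for X ~ N(mu, sigma^2)."""
    x, w = np.polynomial.hermite.hermgauss(n)   # nodes for weight exp(-x^2)
    nodes = mu + np.sqrt(2.0) * sigma * x       # change of variables
    weights = w / np.sqrt(np.pi)                # weights now sum to one
    return nodes, weights

def lognormal_gauss_hermite(mu, sigma, n):
    """Same, for log X ~ N(mu, sigma^2): exponentiate the normal nodes."""
    nodes, weights = normal_gauss_hermite(mu, sigma, n)
    return np.exp(nodes), weights

# E[f(X)] is then approximately np.dot(weights, f(nodes)).
```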

@pkofod (Contributor, Author) commented Jan 20, 2019

I pushed an improved version, especially in terms of things being simplified compared to earlier.

Don't mind the simulation code; I hadn't seen how it was supposed to be done in HARK back when I opened this PR (I'm essentially destroying the hard work already put into "simulate"), but I have a modified version locally that needs some polishing. It's not too important to what I was actually doing (NEGM), so I ignored it for now.

In this and other PRs I'm (still) using nonlinspace, taken from Jeppe/Thomas/etc.'s various codes. I tried to use makeGridExpMult, but it seems to be a bit "too" nonlinear, even with the keyword timestonest set to 1 (the name is a bit hard to read; I just now realized it is "times to nest" and not something with a stone). This makes you miss some of the value-function crossings, and thus some of the discontinuities in the solution in these models, unless you use a lot of grid points. Is makeGridExpMult what you (@mnwhite @llorracc) typically use in your work, or would it make sense to introduce some other "nonlinear grid" generators?
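For reference, the nonlinspace I mean is roughly the following (my paraphrase of the idea; the exact code in their toolkits may differ):

```python
import numpy as np

def nonlinspace(lo, hi, n, phi):
    """n-point grid on [lo, hi]; phi = 1 is linear, phi > 1 packs points near lo."""
    x = np.empty(n)
    x[0] = lo
    for i in range(1, n):
        x[i] = x[i - 1] + (hi - x[i - 1]) / (n - i) ** phi
    return x

# e.g. nonlinspace(1e-6, 10.0, 100, 1.1), compared against HARK's
# makeGridExpMult(...) with timestonest set to 1
```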

Last, I'm still curious what @mnwhite has to say about the best place to put the dcegm re-interpolator / upper envelope calculation. The reason I took the time to clean this PR up a bit is that it is very close to the (almost PR-ready) illiquid-asset model (related to the TFI housing grant) solved with NEGM, which uses the dcegm method for the consumption problem. If it were in HARK.interpolation or utilities or wherever, it would be easier to build demos/remarks/... using DCEGM as the solution algorithm or as a sub-step in a more complicated solution algorithm.

@pkofod (Contributor, Author) commented Feb 25, 2019

This is basically replaced by econ-ark/DemARK#27, #226, and econ-ark/REMARK#9. However, it is unclear to me whether the code to solve the model (the class) should live only in the REMARK, or whether some of it should remain here. That is, should there be a consume-save agent with endogenous (potentially non-absorbing) retirement choice, or should we only keep the actual dcegm etc. functions here in HARK?

@mnwhite (Contributor) commented Feb 25, 2019 via email

@pkofod (Contributor, Author) commented Feb 25, 2019

> This is the right place, in my opinion. I will take a look.

Right, you should not look at this PR, but rather at the REMARK and the other PR #226 . I have significantly changed the code. There is still one change left: I'm going to change the actual solution code to mirror ConsIndShk(Basic).

@mnwhite (Contributor) commented Feb 25, 2019 via email

@pkofod (Contributor, Author) commented Feb 25, 2019

> But point stands: Model and solver code should be in HARK; demonstrating a particular application of that model and what it can do goes in DemARK or REMARK, depending on the nature of the application (just a demo or producing the results of a paper).

Right. My main concern was whether an endogenous retiree is significant enough to have its own class, or whether it's better to change the code to be more generically "DC"-ish, and then have the REMARK supply the relevant functions or inherit from this DiscreteChoiceConsIndShkConsumer in a RetiringConsIndShkConsumer (how long can we go? :) MarkovDiscreteChoicePortfolioConsIndShkConsumer ;) ). Doing the stuff I have yet to do for the solver (class-ify it) would make it easier to go the more generic "DC" class route.

I'm looking at another extension right now, adding a portfolio choice like in SolvingMicroDSOPs, but I may jump back and rewrite the solver part if I run into something frustrating and need a task I know how to finish ;)

@llorracc (Collaborator) commented Feb 26, 2019 via email

@llorracc (Collaborator) commented Feb 26, 2019 via email

@pkofod (Contributor, Author) commented Feb 26, 2019

I'd tend to agree: a generic DC class is preferable to a retirement-specific class. The hard part about the fully generic class is of course to make it flexible enough to handle the tricks that make complicated models feasible to solve (say, presolving the solution for an absorbing choice, etc.). But I think this will just be something for us to experiment with and learn from.

@llorracc-git (Contributor) commented Feb 26, 2019 via email

@pkofod (Contributor, Author) commented Feb 26, 2019

> I like Pablo's distinction between classes that do things that are generically mathematical/computational, versus solve a particular problem.

Sorry, maybe I missed this, where do I read Pablo's answer? Did he reply to an e-mail I didn't see, or is it from prior discussion?

@llorracc (Collaborator) commented:
@pkofod could you look this over and confirm that you have used naming conventions consistent with the https://github.com/econ-ark/NARK notation guidelines?

@pkofod (Contributor, Author) commented Mar 18, 2019

> @pkofod could you look this over and confirm that you have used naming conventions consistent with the https://github.com/econ-ark/NARK notation guidelines?

Sure, but what you're seeing here is not the most recent version; all the Coh etc. is gone in the REMARK. I'll start by making this PR up to date, but the main thing to get merged, IMO, is the other PR that just has the envelope/logsum functions, #226. Unless you do want a specific "retirement model" in HARK, and not a more generic discrete choice class (which may just end up as a feature in ConsIndShkModel, but that's still up in the air I guess).

@llorracc (Collaborator) commented Mar 18, 2019 via email

@pkofod closed this Apr 11, 2019
@pkofod (Contributor, Author) commented Apr 11, 2019

Went over the code here. This PR is no longer directly relevant; all code has been moved to other PRs.
