
HARK.algos #1283

Closed
wants to merge 15 commits

Conversation

@sbenthall (Contributor) commented Jun 14, 2023

This PR aims to provide two general purpose algorithms for the optimization 'step' of consumer problems, with automated tests confirming that their results replicate the SolvingMicroDSOPs results to a high degree of precision.

I'm hoping that this:

  • Provides automated tests that can be used in further HARK 1.0 or Dolo integration work
  • Offers examples of general purpose solution algorithms that are usable incrementally in HARK
    • Perhaps existing problem solutions can be refactored to use these algorithms, where appropriate. (Much like earlier solutions were refactored to use a general expectations method.) This provides an implementation path from HARK's current state to HARK 1.0.
    • These concrete examples can be workshopped with respect to their interface and notation.

It introduces two new subdirectories for HARK modules:

  • HARK.algos is intended to contain general purpose algorithms and their tests. Cf. dolo.algos.
  • HARK.gothic is a direct copy/fork of the gothic_class and resources files from SolvingMicroDSOPs. These are used in the automated tests for the algorithms.

The two algorithms provided are:

  • In foc.py, solving for the optimal policy decision by numerically solving the first order condition
  • In egm.py, solving for the optimal policy decision analytically using the endogenous grid method (EGM); a rough sketch of both approaches follows this list
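To make the contrast concrete, here is a minimal sketch of the two approaches for a CRRA consumption problem. This is illustrative only, not the actual foc.py/egm.py interface: the function names, the utility parameterization, and the given continuation marginal value function `v_y_prime` are all assumptions.

```python
# Illustrative sketch (not the PR's foc.py / egm.py API). Assumes CRRA utility
# u(c) = c**(1 - rho) / (1 - rho) and a known marginal post-state value
# function v_y_prime over end-of-period assets a = m - c.
from scipy.optimize import brentq

rho = 2.0

def u_prime(c):
    return c ** -rho

def u_prime_inv(q):
    return q ** (-1.0 / rho)

def optimal_c_foc(m, v_y_prime, eps=1e-12):
    """FOC approach: numerically find c in (0, m) with u'(c) = v_y'(m - c)."""
    def foc(c):
        return u_prime(c) - v_y_prime(m - c)
    return brentq(foc, eps, m - eps)

def optimal_c_egm(a_grid, v_y_prime):
    """EGM approach: invert the FOC analytically on a post-state grid.

    For each post-state a, c = (u')^{-1}(v_y'(a)), and the endogenous
    state gridpoint is m = a + c. No root-finding is needed.
    """
    c = u_prime_inv(v_y_prime(a_grid))
    m = a_grid + c
    return m, c
```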

As of this writing, there is a passing test for the optimal_policy_foc() method. The substantive test for the EGM algorithm is still a work in progress. Most of this code is from the BARK PR #1186. But what I have done here is separate out the algorithmic component, make it more general (see below), and provide more robust testing (relative to SolvingMicroDSOPs).

The algorithms provided generalize from BARK in that they do not depend on a Stage object explicitly. Rather, they are stand-alone functions. However, they do rely on some notation that we've settled on in BARK. Notably, I'm following Sargent and Stachurski and using: $X$ for states, $Z$ for shocks, and $A$ for actions. I think this leaves $Y$ as a natural choice for post-states. I suppose that could be debated.

I believe these algorithms can be used in existing HARK solvers, allowing some hard-coded mathematical statements to be replaced with mathematical functions. I think that moving in this direction will make it easier to move HARK towards HARK 1.0 while maintaining backwards compatibility.

  • Tests for new functionality/models or Tests to reproduce the bug-fix in code.
  • Updated documentation of features that add new functionality.
  • Update CHANGELOG.md with major/minor changes.

@llorracc (Collaborator)

Seb,

This looks awesome! Thanks for taking the initiative and realizing there was something useful to be done on this front, and just doing it!

@sbenthall (Contributor Author)

Thanks Chris. You give me too much credit. I realized this was what needed to happen after hearing your concerns with what I was doing with BARK. I think this incremental way forward is going to work better.

@sbenthall sbenthall changed the title from [WIP] hark.algos to [WIP] HARK.algos Jun 14, 2023
codecov bot commented Jun 14, 2023

Codecov Report

Patch coverage: 45.89% and project coverage change: -1.29 ⚠️

Comparison is base (2209109) 72.55% compared to head (6c4d9e1) 71.27%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1283      +/-   ##
==========================================
- Coverage   72.55%   71.27%   -1.29%     
==========================================
  Files          78       83       +5     
  Lines       13009    13667     +658     
==========================================
+ Hits         9439     9741     +302     
- Misses       3570     3926     +356     
Impacted Files                    Coverage Δ
HARK/gothic/resources.py           30.29% <30.29%> (ø)
HARK/gothic/gothic_class.py        48.48% <48.48%> (ø)
HARK/algos/foc.py                  69.49% <69.49%> (ø)
HARK/algos/egm.py                  88.88% <88.88%> (ø)
HARK/algos/tests/test_algos.py    100.00% <100.00%> (ø)


@sbenthall (Contributor Author)

[ ] CDC: Potentially rename/document gothic.
[ ] MNW: Specify another model and try it. ConsPrefShock?

@@ -0,0 +1,125 @@


def analytic_pi_star_y(self,

@sbenthall (Contributor Author) commented on this diff:

'acted'
'retro-policy'

$\pi(x)$
$\pi_y(y)$

What to call this?

@llorracc (Collaborator) replied:

how about 'chosen'

@sbenthall (Contributor Author)

The EGM test is now working! The core substance of this PR is, in my view, complete.

Remaining TODOs:

  • Complete the documentation
  • Clean up the code (less code duplication; formatting, etc.)

Questions raised by this PR (hopefully not blockers, but wanted to bring them up)...

Do we have a canonical way of representing grids over multiple variables?

  • Sometimes we want to store values (such as optimal actions, or (marginal) value) at many (potentially multidimensional) gridpoints.
  • xarray provides helpful support for this with its coords mechanism.
  • This coords object is of a specific xarray type, DataArrayCoordinates, which is 'dictionary-like'.
    • Specifically, it is 'like' a mapping from strings (variable labels) to DataArrays... which have the named variable dimension as their coords and those same coords in their values.
    • This is, in my opinion, a rather confusing data structure to initialize by hand. (A toy illustration follows this list.)
  • The data we need to initialize a grid is a Mapping from variable names to Sequences.
    • But this 'raw' version of the grid then needs some massaging to fit into other forms. (For example, combining the grids of state and shock values.)
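For example, here is a toy illustration of the structure described above, going from a 'raw' dict grid to a DataArray and its DataArrayCoordinates:

```python
import numpy as np
import xarray as xr

# 'Raw' grid: a mapping from variable names to sequences of values.
grid = {"m": np.linspace(0.0, 10.0, 5), "theta": np.array([0.9, 1.1])}

# A DataArray over that grid; its .coords is the 'dictionary-like'
# DataArrayCoordinates object discussed above.
da = xr.DataArray(
    np.zeros([len(v) for v in grid.values()]),
    dims=list(grid),
    coords=grid,
)

# A DataArray whose values and coords are both the m gridpoints:
print(da.coords["m"])
```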

What I've implemented in this PR fudges things a bit:

  • using dictionaries for grids input into the algorithms
  • then coping with the xarray coords mechanics within the algorithms and on the xarray output

What I think might be a good move going forward is:

  • developing a separate module dedicated to helper functions for grids, making it easy to:
    • create them from code
    • merge them
    • iterate over them (something xarray makes shockingly difficult, I've found); a sketch of what that might look like follows
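As a sketch of what such a helper module might offer, here is a hypothetical iteration helper (the name grid_points and the dict-of-sequences grid format are assumptions, not existing HARK code):

```python
from itertools import product

def grid_points(grid):
    """Iterate over the points of a labeled grid given as {name: sequence},
    yielding one {name: value} mapping per gridpoint."""
    names = list(grid)
    for values in product(*(grid[n] for n in names)):
        yield dict(zip(names, values))

# e.g. grid_points({"m": [0.0, 1.0], "theta": [0.9, 1.1]}) yields
# {"m": 0.0, "theta": 0.9}, {"m": 0.0, "theta": 1.1}, ...
```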

I wonder if @alanlujan91 @mnwhite or @Mv77 have any thoughts on grid representations at this point.

@sbenthall (Contributor Author)

The EGM test works locally with Python 3.10, but apparently not in the automated Python 3.8 test. Bummer. Will fix.

@Mv77 (Contributor) commented Jun 16, 2023

Neat!

Yes, some thoughts on grid representations are that it would be good to have some state object that we could reference with keys. E.g., state['mNrm'] would give you normalized market resources etc.

I looked into this a bit and came away with the same impression: xarray probably has the right structure in there somewhere, but I found it too confusing. I did not have the >2 hrs needed to figure out the difference between a data array, a dataset, a coordinate, and a dimension. Perhaps we can invest some time in finding the right class.

Something that I think would be huge for our goal of representing models using transition equations would be an integration between this state representation and our tools in distribution.py. A stochastic transition equation is a function that takes the state today and returns a stochastic distribution over the state tomorrow. It would be great to have tools to represent and work with those kinds of objects. Our matrix-valued distributions come a long way, but they lack the indexing-by-name capabilities that would make the code more readable, relying instead on the ugly matrix-unpacking code that is prevalent in my recent PR.

@sbenthall (Contributor Author)

The EGM algorithm now matches the target values up to a precision of 1e-12.
I was getting exact precision locally and don't know what could be the problem.
@mnwhite is 1e-12 precise enough, or is it necessary to look more deeply into this?
It is hard to debug because I can't locally reproduce the error.

codecov is very helpfully pointing out which lines of code are not covered by automated tests!
I will improve the test coverage on the HARK.algos code.

However, the HARK.gothic code is not my own code -- it is ported directly from SolvingMicroDSOPs and does not come with automated test coverage. Sadly, this PR will necessarily reduce test coverage in HARK, I fear.

@sbenthall (Contributor Author)

> Yes, some thoughts on grid representations are that it would be good to have some state object that we could reference with keys. E.g., state['mNrm'] would give you normalized market resources etc.

I saw that and I'm very glad that we are converging.
Sounds like we need a Mapping type to pass around state values.
I've worked on this a bit and will likely do more while cleaning up this PR.

I am interested in your transition matrix PR and will follow up with you on that once I've got a better sense of it.

> A stochastic transition equation is a function that takes the state today and returns a stochastic distribution over the state tomorrow.

I understand. I believe @alanlujan91 did some work on using xarrays in our distribution classes. Maybe we should move more robustly in that direction.

Do you see stochastic transitions as ever being more complex than the application of a function, say $g$, to states $x$ and the realization of shocks $z$? I.e., if $x' = g(x, z)$, then $\Pr[x' \mid x] = \mathbb{E}_z\left[\mathbf{1}\{g(x, z) = x'\}\right]$. So if the basic equations of the model were clearly defined, it would be easy to combine them to get the stochastic transition function as a conditional probability distribution? (In the discrete case, something like the sketch below.)
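A hypothetical helper illustrating that composition for a discrete shock distribution (not existing HARK code; the name and the parallel value/probability lists are assumptions):

```python
def transition_probabilities(g, x, shock_vals, shock_probs):
    """Conditional distribution over next states implied by x' = g(x, z),
    for a discrete shock distribution given as parallel lists of values
    and probabilities. Returns {x_next: probability}."""
    dist = {}
    for z, p in zip(shock_vals, shock_probs):
        x_next = g(x, z)
        dist[x_next] = dist.get(x_next, 0.0) + p
    return dist
```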

What I'm hoping this PR lets us do is break down the existing models into their fundamental equations, which can then be passed around to construct new objects like this.

@Mv77 (Contributor) commented Jun 16, 2023

> I understand. I believe @alanlujan91 did some work on using xarrays in our distribution classes. Maybe we should move more robustly in that direction.

We currently have a DiscreteDistributionLabeled class that uses xarray under the hood. But I played around with its methods, and I don't think dist_of_func is capable of producing labeled distributions. A function DistOfFunc with signature (DiscreteDistributionLabeled, Function) -> DiscreteDistributionLabeled would achieve what I want (sketched below).
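A rough sketch of that idea using plain Python structures, not the DiscreteDistributionLabeled API itself (the list-of-pairs format here is an assumption for illustration):

```python
def dist_of_func_labeled(dist, f):
    """Push a labeled discrete distribution through f.

    dist : list of ({name: value}, probability) pairs
    f    : maps a {name: value} dict to a new {name: value} dict
    Returns the induced labeled distribution in the same format.
    """
    return [(f(outcome), p) for outcome, p in dist]
```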

> Do you see stochastic transitions as ever being more complex than the application of a function, say $g$, to states $x$ and the realization of shocks $z$? I.e., if $x' = g(x, z)$, then $\Pr[x' \mid x] = \mathbb{E}_z\left[\mathbf{1}\{g(x, z) = x'\}\right]$. So if the basic equations of the model were clearly defined, it would be easy to combine them to get the stochastic transition function as a conditional probability distribution?

I don't think so, but that depends on what we allow to be in the 'equation.' For instance, the transition from m(t) to m(t+1) in the simple model is m(t+1) = (m(t) - c*(m(t))) * R / PermShk / PermGroFac + TransShk, where c*() is the solved policy function. If you allow the solved policies to be bundled into what we call the equation itself, then yes, I think shocks and states are enough. (See the sketch below.)
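Written out as code, with the policy bundled into the equation (a toy sketch; variable names follow the comment above):

```python
def m_next(m, c_star, R, PermShk, PermGroFac, TransShk):
    """Normalized market resources transition, with the solved policy
    c_star bundled into the 'equation' itself."""
    return (m - c_star(m)) * R / (PermShk * PermGroFac) + TransShk
```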

@alanlujan91 (Member)

@Mv77 you might find this interesting

def post_state_transition(self, post_state=None, shocks=None, params=None):

In that file, I take expectations of full transitions which contain many variables all at once, including value and marginal value

def create_continuation_function(self):

To do this, I represent the state as a DataArray, and all other functions that depend on the state are included in a Dataset. I do think xarray is a very important building block for future HARK iterations.
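My reading of that pattern, as a toy sketch (the gridpoints and the value/marginal value formulas here are illustrative assumptions, not the ConsLabeled code itself):

```python
import numpy as np
import xarray as xr

# The state as a DataArray over a labeled grid...
state = xr.DataArray(np.linspace(0.1, 10.0, 50), dims="mNrm", name="mNrm")

# ...and functions of the state collected in a Dataset, so that value and
# marginal value can be carried through expectations together.
funcs = xr.Dataset({
    "v": np.log(state),       # toy value function on the grid
    "v_der": 1.0 / state,     # toy marginal value on the grid
})
```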

@sbenthall (Contributor Author)

Aha, @alanlujan91 thank you for reminding me about the ConsLabeledType, which is great work.

I see now that you have picked out good names for a lot of the key features, and have a general EGM method as a class method:

def endogenous_grid_method(self):

I hope these standalone EGM and FOC algorithms can still stand as contributions. @alanlujan91, I wonder if you have any thoughts on how to improve this EGM implementation, given the one in ConsLabeledType.

To some extent, making the change to use xarray as a base means deciding to refactor the old models and tools to use it, rather than putting 'Labeled' on everything. I personally would be +1 to working in that direction.

I'll bring this up, and some related ideas, in some other issues.

@sbenthall (Contributor Author)

Looking at your implementation, @alanlujan91, I notice that there doesn't seem to be a very good reason for functions like this to be object methods, since they don't invoke self except superficially, and they could be reused across other models. Do you agree?

If so, in order to reduce redundancy, I'd suggest that this sort of functionality get moved to HARK.algos so it's clearly available for use across models.

I'll try to lift design ideas from your version when I do my code cleanup in this PR.

@sbenthall sbenthall changed the title from [WIP] HARK.algos to HARK.algos Jun 26, 2023
@sbenthall sbenthall requested a review from mnwhite June 26, 2023 15:58
@sbenthall (Contributor Author)

I have written documentation for this feature and updated the CHANGELOG.

Requesting review from @mnwhite .

The docs look a little funny because for some reason we need to very selectively escape backslashes in the :math: blocks for the LaTeX to render, but I've confirmed they do render locally with make html.
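One likely culprit (my assumption here, not something verified in this PR) is that non-raw Python docstrings consume backslashes before Sphinx ever sees them, which forces the selective escaping; raw docstrings avoid it:

```python
def optimal_policy_foc():
    r"""Solve the optimization step via the first order condition.

    Because this is a raw docstring, Sphinx receives :math:`\pi_y(y)`
    with its backslash intact, so no manual escaping is needed.
    """
```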

One question, which has come up before, is what mathematical notation to use for the post-value function over post-states. The algorithms in this PR are not specific to either a stage or a period; they are intended to be available as needed when a user is crafting a solution and would like to use them in an optimization step.

@mnwhite (Contributor) commented Jul 24, 2023

My big piece of feedback here is that we need to be clear about taxonomy and labeling of variable types, and what they mean. The same piece of information can be two different "types" of variables at different "moments" in a problem. As written, algos.foc has the policy function map from STATES x SHOCKS to CONTROLS, but it should just map from STATES to CONTROLS.

The information in an element of the STATES-space is (or more precisely, should be) the lowest dimensional sufficient statistic for the agent's payoff-relevant information set at the moment that they take an action. It might be that a piece of information that was randomly drawn is part of that information set-- e.g. a preference shock in the ConsPrefShock model. However, that piece of information is a STATE at the moment the action decision is made.

In the opposite direction, for the base case you're working with (ConsIndShock), you don't want what we would think of as the SHOCKS to be part of the domain of the policy function. That is, once I know m_t, I don't need to know theta_t nor psi_t-- they're not future-payoff-relevant information, and m_t is the sufficient statistic for future behavior. We want a policy function that maps from m_t to c_t, not one that maps from m_t x theta_t x psi_t to c_t.

Here's my general concept of both models and the desired structure for a HARK "frame" or whatever we're calling it. Variable types include:

CONTROLS : the set of actions the agent can take
STATES : the minimum-dimensional sufficient statistic space for the agent's information set when they choose their action
SHOCKS : more accurately, EXOGENOUS, but for micro-only models these are random variables
PRE-STATES : the minimum dimensional sufficient statistic space to represent the information "coming into" the frame
POST-STATES : the minimum dimensional sufficient statistic space to represent the information "going out from" the frame

Generally, there is some function that maps from PRE-STATES x SHOCKS to STATES. Some policy function (optimal or otherwise) maps from STATES to CONTROLS. Then another function maps from STATES and CONTROLS to POST-STATES. In single-frame models, the space of POST-STATES is the same as the space of PRE-STATES, but in general there's some mapping to the PRE-STATES of the next frame, which might just be a re-labeling. (A minimal sketch of this structure is below.)
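A minimal sketch of that structure as code (the class and field names are hypothetical, not an existing HARK interface):

```python
from dataclasses import dataclass
from typing import Callable, Mapping

Vars = Mapping[str, float]  # labeled variable values

@dataclass
class Frame:
    arrival: Callable[[Vars, Vars], Vars]     # (PRE-STATES, SHOCKS) -> STATES
    policy: Callable[[Vars], Vars]            # STATES -> CONTROLS
    transition: Callable[[Vars, Vars], Vars]  # (STATES, CONTROLS) -> POST-STATES
```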

Just as sometimes information that arose in SHOCKS is also explicitly part of STATES, in some models one or more CONTROLS needs to go in POST-STATE as well. This includes the portfolio choice model.

@sbenthall (Contributor Author)

Thanks @mnwhite. I follow all this. I think it shouldn't be hard to change the code to collapse what are currently separate state and action spaces into one space. Let me get back to you about this once I've worked on that a bit.

@sbenthall (Contributor Author)

Since this PR was last touched, the problem representation, now in model.py, has changed.

This still contains an implementation of algorithms that could be used in #1512 and #1513, but it requires some modification.
