[WIP] t-indexed inputs for lifecycle models #1042

Open
wants to merge 6 commits into base: master

Conversation

sbenthall
Contributor

Building on #1039, this PR is (yet another) attempt to fix #1022.

This PR focuses only on lifecycle models, where AgentType.cycles == 1.

It currently has PerfectForesight models working.

It is currently work in progress. I need help figuring something out.

The next model to crack is the IndShockConsumerType model.

Many of the tests are broken. I'm going to assume that, especially in the simulation code, this is unavoidable given the new parameterization. Where the numbers are in the right ballpark, I've just adjusted the test targets.

However, this test in particular is failing in a way that suggests something deeper is wrong. I don't understand why the MPC should be so far off from its original target:

self = <HARK.ConsumptionSaving.tests.test_IndShockConsumerType.testIndShockConsumerType testMethod=test_simulated_values>

    def test_simulated_values(self):
        self.agent.initialize_sim()
        self.agent.simulate()
    
        ## uses simulated values -- needs simulation code update.
>       self.assertAlmostEqual(self.agent.MPCnow[1], 0.5711503906043797)
E       AssertionError: 1.0 != 0.5711503906043797 within 7 places

Any thoughts?

  • Tests for new functionality/models or Tests to reproduce the bug-fix in code.
  • Updated documentation of features that add new functionality.
  • Update CHANGELOG.md with major/minor changes.

@Mv77
Contributor

Mv77 commented Jul 15, 2021

Hm.

A hypothesis:

The MPC depends on states (wealth, permanent income) and age. Therefore, if the shock history that yields a particular agent's MPC in a particular simulation period changes, that MPC will change too. Might this be related to the RNG?

More puzzling is the fact that you are getting an MPC of 1.0. That means people want to consume all of the wealth that they get. This can happen either in the last period of life or for liquidity-constrained agents. Things that might be happening:

  • Changes in survival draws are making the agent-period combination that you are checking a terminal one.
  • Changes in shocks are making this particular agent liquidity constrained in the moment that you are checking.

I would be more concerned if something like solution[some period].cFunc(fixed_number) changed. Since the argument is kept fixed, a change would mean the function (and hence the solution) changed, which should not happen.
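
For concreteness, a check along those lines might look like this (a minimal sketch; the fixed argument and the idea of a recorded reference value are illustrative, not taken from the test suite):

    from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType

    # Solve the model with its default parameters.
    agent = IndShockConsumerType()
    agent.solve()

    # Evaluate the period-0 consumption function at a fixed argument.
    m_fixed = 2.0
    c_now = agent.solution[0].cFunc(m_fixed)

    # If this differs from the value recorded before the patch, the solution
    # itself changed -- not just the simulated shock histories.
    print(c_now)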

@mnwhite
Contributor

mnwhite commented Jul 16, 2021 via email

@Mv77
Contributor

Mv77 commented Jul 21, 2021

@sbenthall I was taking a quick look at this again and ran into a failure that might be illustrative.

self = <HARK.tests.test_core.test_AgentType testMethod=test_solve>

    def test_solve(self):
        self.agent.time_vary = ["vary_1"]
        self.agent.time_inv = ["inv_1"]
        self.agent.vary_1 = [1.1, 1.2, 1.3, 1.4]
        self.agent.inv_1 = 1.05
        # to test the superclass we create a dummy solve_one_period function
        # for our agent, which doesn't do anything, instead of using a NullFunc
        self.agent.solve_one_period = lambda vary_1: MetricObject()
        self.agent.solve()
>       self.assertEqual(len(self.agent.solution), 4)
E       AssertionError: 3 != 4

That suggests the agent's problem "lost" a period: it was a 4-period problem before and is now of length 3. That would be a fundamental change in the problem and could change MPCs. Might this be a bug introduced by the indexing change?

@sbenthall
Contributor Author

@Mv77 In the case you bring up, it's expected that the new patch will reduce the number of solutions to finite problems by one if the model is not updated. That's because the first item of a list of time-varying values is now interpreted as the values for the 0th period, whereas before it was for the 1st period. I've been adding a 0th period to the time-varying parameter lists in the test cases; I must have missed that one.
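
As an example of that kind of fix, the dummy test above would change to something like this (a sketch; the 0th-period value 1.0 is hypothetical):

    # Under the old indexing, vary_1[0] held the period-1 values, so four
    # entries produced a four-period solution. Under the new indexing,
    # vary_1[0] belongs to period 0, so the list needs an explicit
    # 0th-period entry to keep the problem at four periods.
    self.agent.vary_1 = [1.0, 1.1, 1.2, 1.3, 1.4]
    self.agent.solve()
    self.assertEqual(len(self.agent.solution), 4)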

@sbenthall
Contributor Author

Blocked on #1105

@Mv77
Contributor

Mv77 commented Apr 28, 2023

Today I am experiencing my yearly bump against timing issues in HARK.

My strong opinion is that the root cause of all the issues we have with timing is that we take shocks to be the first thing that happens in a period, instead of the last. I think life would be much easier if shocks were the last thing to happen in a period. People have expressed strong dislike for this idea, but another year has gone by and we have not solved the issue, so I will give my reasons again.

Our solution objects take states as inputs. Not post-states and shocks. We write $c(m)$, not $c(a, \psi, \theta)$.

And yet, we insist that in a period, get_shocks needs to be called before get_states combines last period's post-states with the shocks to produce states. Only then can we apply the policy functions. Our simulations and solutions start from different places.

This generates all sorts of awkwardness. For instance, to start the simulation of a life-cycle model in which the first period is age 21, we need to think about the distribution of the permanent income shock experienced from age 20 to 21, and the distribution of end-of-age-20 assets, because we need to combine those to produce the age-21 assets from which we start to simulate. But the user never specified these (and they shouldn't have to: they want a model that starts at age 21!), so we are forced to make some assumption, like drawing the shock from the 21-to-22 distribution even though it is the 20-to-21 shock. A cleaner approach would be to ask the user for the distribution of $m$ at age 21.

This would also reduce the communication between time periods, and the shifting backwards and forwards of age-varying distributions for solving and simulating. With the current way we express models, IncShkDstn[t] is used in the solution of period t but in the simulation of period t+1. If shocks were the last thing to happen, IncShkDstn[t] would be the distribution that is both "expectorated" (@llorracc) and drawn from at the end of period t.
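
Schematically, the two orderings of a simulated period would look something like this (a pseudocode sketch using AgentType's method names, not the actual implementation):

    # Current ordering: shocks are the first thing to happen in period t,
    # so the simulation draws from a distribution the solution code
    # associates with period t-1.
    def sim_one_period_shocks_first(agent):
        agent.get_shocks()      # realize shocks between t-1 and t
        agent.get_states()      # combine period t-1 post-states with shocks
        agent.get_controls()    # apply period-t policy functions
        agent.get_poststates()  # end-of-period assets, etc.

    # Proposed ordering: a period starts from states, and shocks come last,
    # so IncShkDstn[t] is both "expectorated" in the solution of period t
    # and drawn from at the end of the simulation of period t.
    def sim_one_period_shocks_last(agent):
        agent.get_controls()    # apply period-t policy functions to states
        agent.get_poststates()  # end-of-period assets, etc.
        agent.get_shocks()      # draw from IncShkDstn[t] at the end of t
        agent.get_states()      # produce the states of period t+1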

@sbenthall
Contributor Author

sbenthall commented Apr 28, 2023 via email

@llorracc
Collaborator

llorracc commented Apr 28, 2023 via email

@mnwhite
Contributor

mnwhite commented Apr 28, 2023 via email

@sbenthall
Contributor Author

sbenthall commented Apr 28, 2023

It occurs to me that while the current BARK implementation has shocks occurring at the beginning of b-blocks, there's no reason why an alternative form of block (b-block-2, or whatever) couldn't look more like what Mateo describes.

The block could still be solved in isolation. It might require the definition of an intermediate state space.

  • input space $S$
  • controls $X$
  • middle space $W$
  • shocks $P_M$ over space $M$
  • deterministic transition $g: S \times X \rightarrow W$
  • generalized discount $B : M \rightarrow \mathbb{R}$
  • update: $v_w(w) = \mathbb{E}_{m \sim P_M}[\,B(m)\, v(m, w)\,]$
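
A minimal sketch of solving such a block in isolation, with all names assumed (this is not BARK's actual API):

    import numpy as np

    def shock_stage(v_next, B, shock_values, shock_probs, w_grid):
        # v_w(w) = E_{m ~ P_M}[ B(m) * v(m, w) ], with P_M discretized
        # into paired atoms and probabilities.
        v_w = np.zeros_like(w_grid)
        for m, p in zip(shock_values, shock_probs):
            v_w += p * B(m) * v_next(m, w_grid)
        return v_w

    def choice_stage(v_w, g, s_grid, x_grid):
        # v(s) = max over controls x of v_w(g(s, x)), by brute-force grid
        # search; v_w here is an interpolant over the middle space W.
        return np.array([max(v_w(g(s, x)) for x in x_grid) for s in s_grid])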

One possible upside to having more than one style of Bellman block is that it might make it easier to cover a range of models in which a block input, like the risky rate of return, is constant in one model and based on a stochastic shock in another.

Others might get analytic clarity on the issue if they considered non-normalized versions of the consumption problem.
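
For instance, writing the budget dynamics in levels (notation assumed here, in the spirit of the usual permanent-income setup) makes the timing question explicit:

$$M_{t+1} = \mathsf{R} A_t + \theta_{t+1} P_{t+1}, \qquad P_{t+1} = \Gamma_{t+1} \psi_{t+1} P_t.$$

The shocks $(\psi_{t+1}, \theta_{t+1})$ live on the transition between periods: they are realized after the period-$t$ consumption choice but before the period-$(t+1)$ states exist, so indexing them by $t$ (shocks last) or by $t+1$ (shocks first) is purely a convention, which is exactly the ambiguity under discussion.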

@Mv77
Contributor

Mv77 commented Apr 30, 2023

I had a chat with @llorracc in which we discussed his proposed framework, and I am on board with it. The crucial part is that what he now calls "prospectation" about period $t$ will be done by, and stored in, the time-$t$ solution object. I think that will work!

As I read the comments, I am glad to see that this is also @mnwhite's proposed solution. Good to see consensus on this.

@Mv77
Contributor

Mv77 commented Apr 30, 2023

> It occurs to me that while the current BARK implementation has shocks occurring at the beginning of b-blocks, there's no reason why an alternative form of block (b-block-2, or whatever) couldn't look more like what Mateo describes.

I think that all models can be represented with shocks-at-the-beginning or shocks-at-the-end blocks only, so I don't think this would be that important.

I think I had missed some conversations about this timing issue in the current revamp proposals. It looks to me like they deal with it!

Successfully merging this pull request may close these issues:

  • Consistent t-index for time varying shock parameters.