[WIP] t-indexed inputs for lifecycle models #1042
Conversation
Hm. A hypothesis: The MPC depends on states (wealth, permanent income) and age. Therefore, if the shock history that yields a particular MPC for a particular agent in a particular simulation period changes, that MPC will change too. Thus, this might be related to RNG?
More puzzling is the fact that you are getting an MPC of 1.0. That means people want to consume all of the wealth that they get. This can happen either in the last period of life or for liquidity-constrained agents. Things that might be happening:
- Changes in survival draws are making the agent-period combination that you are checking a terminal one.
- Changes in shocks are making this particular agent liquidity constrained at the moment that you are checking.
I would be more concerned if something like `solution[some period].cFunc(fixed_number)` changed. Since the argument is kept fixed, a change would mean the function (and hence the solution) changed, which should not happen. |
I don't know exactly what this test is doing (i.e. what the structure of
the problem is, or how many periods it's simulating), but here's my
intuition from looking at the values:
I think this test agent is specified as having a cycle that is one period
long, and its `cycles` was set to 1. In the original code, it would solve a
two-period problem (the terminal period, in which the MPC is 1.0; and one
non-terminal period, where the MPC should be a little over 0.5). Thus when
you do a test simulation (maybe for one period, maybe for more), the MPC
ends up being a little over 0.5.
*After* the code changes, several things could have happened. First, this
might now be a true "one-period problem", so the only possible MPC value
that will ever be encountered is 1.0. Simulating for any number of periods
is equivalent to creating a bunch of agents, giving them some money, and
asking how much of it they want to consume, knowing they will die
immediately afterward. You monster.
Second, it might still be a two-period problem with one non-terminal
period, but something about the RNG has been changed so that the specific
individual whose MPC you're querying *just so happens* to be in their
terminal period rather than their non-terminal period. If it's a two-period
problem and you simulate many people for many periods, you might think the
simulated population would repeatedly flip between non-terminal and
terminal, but some in the population will fall out of sync by dying
after their non-terminal period.
I'm not sure whether either of these guesses is correct, but they're
consistent with the evidence.
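The two-period intuition above can be checked with the standard perfect-foresight MPC recursion, in which the terminal-period MPC is 1.0 and earlier MPCs follow from the consumption Euler equation. The parameter values below are illustrative assumptions, not the failing test's actual calibration:

```python
# Sketch: terminal vs. non-terminal MPC under perfect foresight with CRRA
# utility. Parameter values here are illustrative, not the test's calibration.
CRRA, Rfree, DiscFac = 2.0, 1.03, 0.96
T = 4  # number of periods in this illustrative lifecycle

pat = (Rfree * DiscFac) ** (1.0 / CRRA)  # "absolute patience factor"

# Backward recursion for the MPC: kappa_T = 1 in the terminal period, and
# 1/kappa_t = 1 + (pat / Rfree) / kappa_{t+1} in every earlier period.
kappa = [1.0]
for _ in range(T - 1):
    kappa.insert(0, 1.0 / (1.0 + (pat / Rfree) / kappa[0]))

print(kappa)  # terminal MPC is exactly 1.0; the period before is just over 0.5
```

With these (assumed) parameters, the second-to-last MPC comes out near 0.51, matching the "a little over 0.5" pattern described above, while any agent observed in their terminal period shows an MPC of exactly 1.0.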
@sbenthall I was taking a quick look at this again and ran into a failure that might be illustrative.
That's suggesting that the agent's problem "lost" one period (it was a 4-period problem before, and it's now of length 3). This would be a fundamental change in the problem and might change MPCs. Might this be a bug introduced by the indexing change?
@Mv77 In the case you bring up, it's expected that the new patch will reduce the number of solutions to finite problems by 1 if the model is not updated. That's because the first item of a list of time-varying values will now be interpreted as the values for the 0th period, whereas before it was for the 1st period. I've been adding a 0th period to the time-varying parameter lists in the test cases. I must have missed that one.
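To make the off-by-one concrete, here is a toy sketch of the reindexing. `LivPrb` is used as an example of a time-varying parameter, and the specific values (including the prepended 1.0) are hypothetical:

```python
# Hypothetical time-varying parameter list for a finite lifecycle problem.
LivPrb = [0.98, 0.95, 0.90]

# Old convention: LivPrb[0] supplied the value for period 1, so period 0
# implicitly had no entry of its own.
#
# New convention: LivPrb[0] is the value *for* period 0, so an unmodified
# model effectively "loses" one period (a 4-period problem becomes 3).
# To preserve the original problem, prepend an explicit period-0 entry:
LivPrb_new = [1.0] + LivPrb  # 1.0 is an illustrative period-0 value

assert len(LivPrb_new) == len(LivPrb) + 1  # problem length is restored
```

This mirrors the fix described above: each test case's time-varying lists gain a 0th-period entry so the solved problem keeps the same number of periods.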
Blocked on #1105
Today I am experiencing my yearly bump against timing issues in HARK.
My strong opinion is that the root cause of all the issues that we have with timing is the fact that we take shocks to be the first thing that happens in a period, instead of the last. I think life would be much easier if shocks were the last thing to happen in a period. People have expressed strong dislike for this idea, but another year has gone by and we have not solved this issue. So I will give my reasons again.
Our solution objects take *states* as inputs, not post-states and shocks. We write $c(m)$, not $c(a, \psi, \theta)$.
And yet, we insist that in a period, `get_shocks` needs to be called before `get_states` combines last period's post-states with the shocks to produce states. Only then can we apply the policy functions. Our simulations and solutions start from different places.
This generates all sorts of awkwardness. For instance, to start the simulation of a life-cycle model in which the first period is age 21, we need to think of what the distribution of the permanent income shock experienced from age 20 to 21 is, and what the distribution of end-of-age-20 assets is. This is because we need to combine those in order to produce the age-21 assets from which we start to simulate. But the user never specified this (and he shouldn't have; he wants a model that starts at age 21!), so we are forced to make some assumption, like saying that the shock is drawn from the 21-to-22 distribution even though it is the 20-to-21 shock. A clearer approach would be to ask the user for the distribution of $m$ at age 21.
This would also reduce the communication between time periods, and the shifting backwards and forwards of age-varying distributions for solving and simulating. With the current way we express models, `IncShkDstn[t]` is used in the solution of period t but in the simulation of period t+1. If shocks were the last thing to happen, `IncShkDstn[t]` would be the distribution that is both "expectorated" (@llorracc) and drawn from at the end of period t. |
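The two timing conventions can be sketched side by side. The function names echo HARK's `get_shocks`/`get_states` vocabulary, but `draw_shock`, `transition`, and `policy` here are hypothetical stand-ins, not HARK's actual API:

```python
# Schematic one-period simulation step under each timing convention.
# draw_shock, transition, and policy are stand-ins for model-specific pieces.

def step_shocks_first(a_prev, t, draw_shock, transition, policy):
    """Current HARK order: shock arrives first, then state, then control."""
    shock = draw_shock(t)          # drawn at the *start* of period t...
    m = transition(a_prev, shock)  # ...combined with last period's post-state
    c = policy(t, m)               # ...to form the state m the policy needs,
    return m - c                   # and the post-state a closes the period.

def step_shocks_last(m, t, draw_shock, transition, policy):
    """Proposed order: start from the state m the user actually specifies."""
    c = policy(t, m)               # apply the policy to the given state,
    a = m - c                      # form the post-state,
    shock = draw_shock(t)          # the shock is the *last* event of period t,
    return transition(a, shock)    # yielding next period's starting state m.
```

Under `step_shocks_first`, starting a simulation at age 21 requires a distribution for `a_prev` and for the 20-to-21 shock; under `step_shocks_last`, the user supplies the distribution of `m` at age 21 directly, and `IncShkDstn[t]` is used only within period t.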
Just to fill in context from other work:
- In BARK, shocks come at the start of b-blocks, and policies (solution
objects) are explicitly functions of both the endogenous state and the
exogenous shocks.
- Pablo uses a Greek tau for a stochastic transition equation that maps
from a state to a distribution over next-period states.
I wonder if there are considerations about the efficiency of solutions.
With shocks at the beginning of a block, transitions are deterministic and
so expectations can be taken over the value of each state-shock pair given
the optimal policy.
With shocks at the end, how do you solve a block? Start with the value
function on state-shock pairs at the end of the block, then take
expectations to a middle-of-period-state, then optimize for each
start-of-block state-shock pair?
In that case, solutions would still need to be over the state-shock space.
I think the fact that solutions in HARK are only over state is because the
HARK models are currently all special cases.
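One way to read the three steps above is the following discretized toy block. Everything here (grids, log utility, the incoming shock scaling resources) is an illustrative assumption, not the BARK implementation:

```python
import numpy as np

# Toy solve of a block with shocks at the *end*, following the three steps
# in the comment above. Grids, utility, and transitions are illustrative.
R, beta = 1.03, 0.96
m_grid = np.linspace(0.5, 5.0, 30)        # endogenous state grid
shock_grid = np.array([0.9, 1.0, 1.1])    # discretized shock support
shock_prob = np.array([0.25, 0.5, 0.25])  # ...and their probabilities

# 1) Value at the end of the block, defined on (state, shock) pairs.
def v_end(a, shock):
    return np.log(np.maximum(R * a + shock, 1e-12))

# 2) Expectation over the end-of-block shock maps this to a value on the
#    middle-of-period state a alone.
def v_mid(a):
    return beta * sum(p * v_end(a, s) for s, p in zip(shock_grid, shock_prob))

# 3) Optimize for each start-of-block (state, shock) pair; the resulting
#    policy, like the solution, lives on the state-shock space.
policy = np.empty((len(m_grid), len(shock_grid)))
for i, m in enumerate(m_grid):
    for j, s_in in enumerate(shock_grid):
        res = m * s_in  # incoming shock scales available resources (assumed)
        c_grid = np.linspace(1e-6, res, 100)
        policy[i, j] = c_grid[np.argmax(np.log(c_grid) + v_mid(res - c_grid))]
```

Note that step 3 makes the point in the comment concrete: the policy array is indexed by both the state and the incoming shock, so the solution object is over the state-shock space rather than the state alone.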
It is a first-order priority to fix this awkwardness. My proposed template going forward is at
https://github.com/llorracc/SolvingMicroDSOPs-Latest/blob/master/subfile-the-problem.pdf
Some of the reasons for this are articulated in the document, and there are others. The most important of them are:
1. A stochastic variable with a date $t$ needs to be known to the agent only when they are in period $t$.
2. We need to be disciplined about "stages" within a period, and "moves" within a stage (see the doc).
Adopting this scheme would solve the problems that Mateo describes.
-- Chris Carroll
|
I agree that some of the current setup is awkward, and I support revising
it. But I also strongly agree with Chris that information/shocks that are
revealed at t should be indexed as t and occur in period t.
I *think* the proper solution to this is to change when shocks are
integrated out *in the solver*, not when they occur during simulation.
Period t shocks should be integrated out (constructing the gothic-v
function over post-states) during the period t solver.
This would solve two problems. First, the awkwardness of how shock timing
is set up in HARK, in terms of needing the period t+1 shock distribution to
solve the period t problem. Second, it would allow more "chained models" in
which the nature of the problem changes between periods. As long as the
"post-state variable interface" matches (or a dummy transition period is
added), solvers for different model types can be compatible.
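The "integrate period-t shocks out inside the period-t solver" idea can be sketched on a discretized toy problem. The grids, shock distribution, log utility, and linear transition below are all illustrative assumptions:

```python
import numpy as np

# Toy backward-induction step in which period-t shocks are integrated out
# inside the period-t solver, producing an end-of-period ("gothic-v") value
# function over the post-state a. All parameters here are illustrative.
R, beta = 1.03, 0.96
theta_grid = np.array([0.9, 1.0, 1.1])    # discretized income shock values
theta_prob = np.array([0.25, 0.5, 0.25])  # ...and their probabilities

def v_next(m):
    # Stand-in for the next period's value function over the state m; only
    # its "post-state variable interface" matters to this period's solver.
    return np.log(np.maximum(m, 1e-12))

def gothic_v(a):
    # Expectation over this period's shocks, taken over the post-state a.
    # Only this function crosses the period boundary.
    return beta * sum(p * v_next(R * a + th)
                      for th, p in zip(theta_grid, theta_prob))

# Solve the period-t problem over states m by grid search on consumption.
m_grid = np.linspace(0.5, 10.0, 40)
c_opt = np.empty_like(m_grid)
for i, m in enumerate(m_grid):
    c_grid = np.linspace(1e-6, m, 200)
    c_opt[i] = c_grid[np.argmax(np.log(c_grid) + gothic_v(m - c_grid))]
# The resulting consumption policy is increasing in market resources m.
```

Because the shock expectation lives entirely inside `gothic_v`, the solver for this period needs nothing from the next period except a value function over the post-state, which is what makes chained models with different problem types per period plausible.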
This has been a long-running issue for HARK. We identified it very early on
in its life, and had vigorous debates about it. It looks like those have
continued during my absence. It should probably be dealt with.
It occurs to me that while the current BARK implementation has shocks occurring at the beginning of b-blocks, there's no reason why an alternative form of block (b-block-2, or whatever) couldn't look more like what Mateo describes. The block could still be solved in isolation. It might require the definition of an intermediate state space.
One possible upside to having more than one style of Bellman block might be that it makes it easier to cover a range of models in which a block input, like the risky rate of return, is constant in one model and based on a stochastic shock in another. Others might get analytic clarity on the issue if they considered non-normalized versions of the consumption problem. |
I had a chat with @llorracc in which we discussed his proposed framework, and I am on board with it. The crucial part is what he now calls [...]. As I read the comments, I am glad to see that this is also @mnwhite's proposed solution. Glad to see consensus about this. |
I think that all models can be represented with shocks-at-the-beginning or shocks-at-the-end blocks only, so I don't think this would be that important. I think I had missed some conversations about this timing issue in the current revamp proposals. It looks to me like they deal with it!
Building on #1039, this PR is (yet another) attempt to fix #1022.
This PR focuses only on lifecycle models, where AgentType.cycles == 1.
It currently has PerfectForesight models working.
It is currently work in progress. I need help figuring something out.
The next model to crack is the IndShockConsumerType model.
Many of the tests are broken, but I'm going to have to assume that, especially in the simulation code, this is unavoidable given the new parameterization. Where the numbers are in the right ballpark, I've adjusted the test targets.
However, this test in particular is failing in a way that suggests something deeper is wrong. I don't understand why the MPC should be so off from its original target:
Any thoughts?