define distributions directly in initial parameters for models #620
This is related to #664, because these distributions are often "time-varying". It looks like the expectation is that TranShkStd is a list even when there's a single value, and so is "time-varying" even though it is static across time.

https://github.com/econ-ark/HARK/blob/master/HARK/ConsumptionSaving/ConsIndShockModel.py#L2020-L2023

I don't see any reason for this except for haste in the early coding.
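For context, a minimal sketch of the pattern being described, with parameter names from ConsIndShockModel (the default values shown are approximate and for illustration only):

```python
# Even an infinite-horizon calibration (T_cycle = 1) wraps the shock standard
# deviations in single-element lists, making them formally "time-varying":
params = {
    "T_cycle": 1,
    "PermShkStd": [0.1],   # a list of length T_cycle, not a bare float
    "TranShkStd": [0.2],
    "PermShkCount": 7,     # number of points in the discretized distribution
    "TranShkCount": 7,
}
```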
This feature would make HARK closer to Dolo.
No, this is deliberate.
Remember that IndShockConsumerType is used for finite horizon (life cycle) models as well as infinite horizon models.
There is a lot of variation by age in the magnitude of income shocks, both transitory and permanent. Actually, that age-related variation can have important implications for things like the profile of stockholding patterns by age. So it is deliberate that the default is to allow TranShkStd and PermShkStd to vary with time/age.
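As an illustration of that design (the numbers below are made up, not a calibrated profile), a life-cycle parameterization simply supplies one entry per period:

```python
# Hypothetical 5-period lifecycle: time-varying parameters are lists of
# length T_cycle, so shock magnitudes can differ by age.
lifecycle_params = {
    "T_cycle": 5,
    "PermShkStd": [0.12, 0.10, 0.08, 0.06, 0.0],   # permanent shocks shrink with age
    "TranShkStd": [0.20, 0.20, 0.15, 0.15, 0.10],
    "LivPrb": [0.99, 0.98, 0.97, 0.95, 0.90],      # survival probabilities by age
    "PermGroFac": [1.05, 1.04, 1.02, 1.01, 0.90],  # income growth, incl. retirement drop
}
```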
I see now I've conflated two things in this ticket.

So, rather than giving a model [the current unstructured shock parameters] as part of its initial parameters, you would give it [the distribution, or a Process, directly]. The advantage of this is that the model's configuration becomes explicit. If the distribution was not time-varying, I'd suggest [a single distribution object], and a related configuration for [the time-varying case]. It would be a little bit harder, but not impossible, to allow these Process definitions to refer to the existing parameters.
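The bracketed snippets above did not survive in this thread; a sketch of the shape of the proposal, inferred from the issue body and later comments (MeanOneLogNormal is a real HARK class, but the parameter keys here are illustrative, not an agreed interface):

```python
from HARK.distributions import MeanOneLogNormal  # HARK.distribution in older releases

# Current style: bare numbers whose meaning is implicit in their names.
current = {"TranShkStd": [0.2], "TranShkCount": 7}

# Proposed style: the distribution itself is the parameter...
proposed = {"TranShk": MeanOneLogNormal(sigma=0.2)}

# ...and a time-varying process is a sequence of such objects, one per period.
proposed_tv = {"TranShk": [MeanOneLogNormal(sigma=s) for s in (0.20, 0.15, 0.10)]}
```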
OK, that post clarifies things (and I've read further back in the thread as well, so now am more up to speed on the issues here). What's not clear to me is whether it makes sense to tackle this set of issues in the specific context of shocks now, because we are facing exactly the same issue with every other aspect of the problem. Our discussion of how to create an architecture in which we allow for successive "stages" of the Bellman problem is essentially saying that not only do we need to allow shock processes to change (arbitrarily) from one stage to the next, we need to allow every single aspect of the problem except the value function to change: state variables, control variables, optimality conditions, solvers, etc. If we had an architecture that worked for all of those elements, it would also work for distributions.
I see the "stage" issue as quite different. Exogenous shocks occur once per period. I'm making these proposals because they would make HARK look more like Dolo. I believe the question of how Dolo deals with cases where there is more than one control variable is unresolved, whereas Dolo handles shocks more elegantly than HARK currently does.
But, per our earlier discussion that the "problem" may change, say, at different ages of life: if we can have an architecture that can handle that for all the other things that are MORE complicated than the distributions (because, for example, there may be several transition equations or whatever inside a period, but only one draw of exogenous shocks), then we will surely also want to use the same architecture for keeping track of the shocks as for keeping track of everything else about the problem.

Further, I do not see any technical reason to structure things in such a way that there can only be one draw of shocks per period. Maybe a good technical case for this exists and I'm missing it, but if, for example, one of the decisions that a person makes in the period is whether or not to take some risk, or which of several kinds of risks to take, it would be wasteful to precompute shock distributions for all possible choices if the person might not actually make them.

Also, the ability to deal with these issues is one of the few ways in which Dolo is going to change to achieve HARK-like capabilities, instead of the other way around. I'm sure that in the revamping Pablo will keep his treatment of distributions as processes, so maybe you're right that we should go ahead and emulate that now. But to the extent that we will need to re-engineer that after we get the whole stages thing worked out, it might be more efficient at least to iron out our plans for the stages thing before tackling the distributions.
I will take this as a concession of the point with respect to the narrow scope of this issue, which I see as being strictly about how to configure the initial distributions and exogenous shocks of HARK models that are currently in the library. I do have thoughts about the other points you're bringing up and look forward to further conversations to iron out the plans for stages.
See #1024 for steps towards this.
See this note about correlated shocks: #1106 (comment)
The work I did on this in #1121 was blocked by the simulation test issue.
Shoot, ok. Blocked by #1105
I looked at the linked example, and I don't think there's an inconsistency there. In the basic ConsIndShock model, Rfree is a single float because it's assumed to be exogenous to the individual, and there's only one asset to hold. Likewise, the coefficient of relative risk aversion is constant because we assumed the agent has the same intraperiod preferences in all periods. It works out that changing Rfree to being time-varying (and changing the input to a list) will result in the correct solution (but the simulation code would need to be adjusted), while trying to set CRRA to be time-varying would result in an *incorrect* solution, because the solution code makes assumptions about the structure of the problem which would be violated if CRRA_t \neq CRRA_{t+1}. The solver would run, but the produced consumption function would be somewhat wrong.

Similarly, RiskyDstn is time-invariant because it describes the returns on a *second* asset, which is external to the agent (as a matter of economic assumption). And just like Rfree, changing it to be time-varying will result in the correct solution being produced by the code, but the simulation code would need updating. There's no inconsistency here between the form of the model inputs and the form of the model output. It was a deliberate design choice, not done out of haste or laziness.
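To make the distinction concrete, a hedged sketch of the safe promotion described above (parameter names from ConsIndShockModel; whether Rfree must also be registered as time-varying, and what the simulation code needs, should be checked against the current release):

```python
# A three-period cycle where only the interest factor varies with time.
params = {
    "T_cycle": 3,
    "Rfree": [1.03, 1.01, 1.04],  # was a single float such as 1.03
    # All other time-varying inputs must also be lists of length T_cycle:
    "LivPrb": [0.98] * 3,
    "PermGroFac": [1.01] * 3,
    "PermShkStd": [0.1] * 3,
    "TranShkStd": [0.2] * 3,
}
# By contrast, CRRA must stay a single float: the solver assumes
# CRRA_t == CRRA_{t+1}, so an age-varying list would run but produce a
# subtly wrong consumption function, as explained above.
```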
As for raw parameters versus objects in dictionaries, it's somewhat a matter of taste. The design intent was twofold. First, to be able to have all model parameters sit in a JSON file, which could be read in by HARK or Mathematica or Matlab or whatever, to be able to easily compare solution methods/libraries working from the same parameter file. Second, to make model estimation (or other usage) code easier / more intuitive to write.

Suppose you want to see how some model outcome of interest varies with a distribution parameter, in a simple setting with just one AgentType instance. The code for that looks like this:
```python
import numpy as np
import matplotlib.pyplot as plt

# AgentTypeSubclass, baseline_dictionary, param_name, and
# extract_interesting_information are placeholders to be filled in.
ParamVec = np.linspace(0.01, 0.30, 20)  # some vector of parameter values
OutVec = np.zeros_like(ParamVec)
MyType = AgentTypeSubclass(**baseline_dictionary)
for j in range(ParamVec.size):
    MyType(param_name=ParamVec[j])  # assign the candidate parameter value
    MyType.update()                 # rebuild constructed objects (e.g. shock distributions)
    MyType.solve()
    MyType.initialize_sim()
    MyType.simulate()
    OutVec[j] = extract_interesting_information(MyType.history)
plt.plot(ParamVec, OutVec)
plt.xlabel("param_name")
plt.ylabel("interesting statistic name")
plt.show()
```
Estimation code looks similar: just cross off the parameter initialization
and plotting code, have more parameters get set, and throw an off-the-shelf
optimizer around the whole thing.
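A hedged sketch of that estimation pattern, reusing the placeholder names from the snippet above (the objective, bounds, and empirical_target are all hypothetical):

```python
from scipy.optimize import minimize_scalar

empirical_target = 0.5  # hypothetical data moment to match

def objective(param_value):
    MyType(param_name=param_value)  # set the candidate parameter value
    MyType.update()
    MyType.solve()
    MyType.initialize_sim()
    MyType.simulate()
    simulated_moment = extract_interesting_information(MyType.history)
    return (simulated_moment - empirical_target) ** 2

result = minimize_scalar(objective, bounds=(0.01, 0.30), method="bounded")
```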
My intent was that if someone wanted an alternate parametric specification
for some distribution, they'd just make a subclass of the AgentType class
and overwrite its updateThatDstn() method.
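Here updateThatDstn() is a stand-in name. As a sketch of that extension pattern (class and method names follow recent HARK releases, but treat the details as an assumption, not a verified recipe):

```python
from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType
from HARK.distributions import Uniform, combine_indep_dstns

class UniformShockConsumerType(IndShockConsumerType):
    """Hypothetical subclass with uniform rather than lognormal income shocks."""

    def update_income_process(self):
        # Build one discretized distribution per period, as the parent expects.
        PermShkDstn = Uniform(bot=0.9, top=1.1).discretize(self.PermShkCount)
        TranShkDstn = Uniform(bot=0.8, top=1.2).discretize(self.TranShkCount)
        self.PermShkDstn = self.T_cycle * [PermShkDstn]
        self.TranShkDstn = self.T_cycle * [TranShkDstn]
        self.IncShkDstn = self.T_cycle * [combine_indep_dstns(PermShkDstn, TranShkDstn)]
        self.add_to_time_vary("PermShkDstn", "TranShkDstn", "IncShkDstn")
```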
Yes. Things like this are one of the key reasons I want to move to a structure where everything the agent at t knows about the solution at t+1 is explicitly passed as an input to the time-t agent. Several steps I want to take will be helpful here.

In principle, that would allow us to have any sequence of values of relative risk aversion. Their consequences would be entirely embodied in the value, marginal value, etc. functions at the beginning of the period, so the t-1 agent would only need to know the t "expected" value function as a function of the beginning-of-period states (k).
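A toy sketch of that architecture point (this is NOT HARK's solver API; utility is CRRA, income is deterministic, and all numbers are made up): each period's solver consumes only next period's marginal value function, so the sequence of risk aversion coefficients can be arbitrary.

```python
Rfree, beta, income = 1.03, 0.96, 1.0

def solve_one_period(vp_next, CRRA_t):
    """Return a consumption rule c(m) solving the Euler equation
    u'(c) = beta * Rfree * vp_next(m - c), where vp_next(a) is next
    period's marginal value of market resources m' = Rfree*a + income.
    Everything about the future, including future CRRAs, lives in vp_next."""
    def c_func(m):
        lo, hi = 1e-9, m - 1e-9
        for _ in range(60):  # bisection; the Euler gap is decreasing in c
            c = 0.5 * (lo + hi)
            if c ** (-CRRA_t) > beta * Rfree * vp_next(m - c):
                lo = c  # marginal utility of c too high: consume more
            else:
                hi = c
        return 0.5 * (lo + hi)
    return c_func

CRRA_seq = [2.0, 3.0, 5.0]  # hypothetical age-varying risk aversion
# Terminal period consumes everything: v'_T(m) = m**(-CRRA_T).
vp = lambda a: (Rfree * a + income) ** (-CRRA_seq[-1])
policies = [None] * (len(CRRA_seq) - 1)
for t in reversed(range(len(CRRA_seq) - 1)):
    c_t = solve_one_period(vp, CRRA_seq[t])
    # Envelope condition: v'_t(m) = u'(c_t(m)), re-expressed as a
    # function of end-of-period assets for the period-(t-1) solver.
    vp = lambda a, c=c_t, g=CRRA_seq[t]: c(Rfree * a + income) ** (-g)
    policies[t] = c_t

print(policies[0](2.0))  # consumption at m = 2 in the first period
```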
But at present, if we want to handle perfect foresight MIT shocks to the aggregate (like, people might worry that the rate of return is riskier in election years, or in the millennium year, or whatever), we have to do that by having the sequence of distributions laid out as being different by age, and use the fact that if a person is one year older, it is because they have moved one year into the future in calendar time. This example may seem a bit artificial, but being able to handle it is core to being able to use our toolkit to generically handle a vast class of macro models -- with perfect foresight MIT shocks.
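For instance, a minimal sketch of that age-as-calendar-time trick (the numbers and the choice of shocking Rfree are illustrative only):

```python
# A cohort born in year 2000 faces a known one-year dip in returns in 2008.
# Because age t maps one-to-one to calendar year (birth_year + t), a
# calendar-time MIT shock is encoded as an age-varying parameter list.
birth_year, T = 2000, 20
Rfree_path = [1.03] * T
Rfree_path[2008 - birth_year] = 1.00  # the anticipated aggregate shock
```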
@mnwhite the original scope of this issue was to define shock distributions directly for models, to make them more explicit.

This is happening in the new model format, here: https://github.com/econ-ark/HARK/blob/master/HARK/models/consumer.py

I'm satisfied by this. I know you've been working on the older models, with the 'constructions' workflow, which I don't think is directly related to this. Do you see any reason not to close this issue as completed? I think it can be closed.
Close it.
Related to #227
Currently, when the parameters of distributions are given to models, they are listed as unstructured fields whose meaning depends on their name, e.g.:

```
aNrmInitMean
aNrmInitStd
pLvlInitMean
pLvlInitStd
...
```

It can be changed so that these parameters look like:

```
aNrmInit : LogNormal(0, 1)
```
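A sketch of what that could look like in a parameter dictionary, using HARK's Lognormal class (module path and constructor arguments follow recent releases; the keys aNrmInit and pLvlInit are the proposal's, not an existing interface):

```python
from HARK.distributions import Lognormal

params = {
    "aNrmInit": Lognormal(mu=0.0, sigma=1.0),  # replaces aNrmInitMean / aNrmInitStd
    "pLvlInit": Lognormal(mu=0.0, sigma=0.4),  # replaces pLvlInitMean / pLvlInitStd
}
```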