Add optional argument to Behavior.response method #1858
Conversation
Codecov Report
@@            Coverage Diff            @@
##            master    #1858    +/-   ##
==========================================
  Coverage      100%     100%
==========================================
  Files           37       37
  Lines         3207     3213     +6
==========================================
+ Hits          3207     3213     +6
Continue to review full report at Codecov.
Here is the script I used to develop pull request #1858:
And here is sample output when tracing and using the default values of the two constraint arguments:
You can use this script to try out different values of the two constraint arguments and see what happens to the outlier filing units and to the size of the aggregate behavioral-response effect in comparison to the static revenue loss of the TCJA reform. Notice that when the constraint arguments are changed, the output changes to this:
This is the kind of excessively large substitution response that has been discussed since November.
It seems likely that Tax-Calculator 0.16.0 will be released during the week of February 12-16 after the 2011 puf.csv and associated files from taxdata pull request 114 are incorporated into Tax-Calculator. Given all the discussion of unreasonably large behavioral substitution responses to the TCJA reform, it would seem desirable to include a fix of that problem in release 0.16.0. Does anybody have any comment (pro or con) to make about the proposed changes in pull request #1858?
Did no one see my comments on this Feb 1? I didn't hear any pushback, but
the proposal below doesn't mention my comments and is not desirable. I
will explain again, in different words.
It is true that there are some notches in the tax code, where the marginal
tax rate is more than one. Since (1-mtr) appears in the denominator of the
behavioral effect, it causes the sign of the behavioral effect to reverse
as the mtr goes past 1. This is illogical and should not be allowed. The
proposed correction, to cap the rate at .99999, is not a desirable
work-around. This puts the denominator at .00001 and raises the behavioral
effect by 5 orders of magnitude. The elasticity wasn't estimated around
that sort of tax rate, and the result is simply wrong. It should not be
used, even as an option. The behavioral effect needs to be capped at a
value near the level at which the estimates were done. Perhaps .7 or .5.
Or behavior could be ignored for the few taxpayers with very high tax
rates. These taxpayers contribute very little to the total effect if the
effect is measured in a plausible manner. I can't think of any excuse to
leave them in with .00001 in the denominator.
To repeat - the formula only makes sense for reasonable tax rates and if
applied to notches will give unreasonable results. The elasticity is only
reasonably constant around the common run of tax rates.
dan
…On Fri, 9 Feb 2018, Martin Holmer wrote:
It seems likely that Tax-Calculator 0.16.0 will be released during the
week of February 12-16 after the 2011 puf.csv and associated files from
taxdata pull request 114 are incorporated into Tax-Calculator.
Given all the discussion of unreasonably large behavioral substitution
responses to the TCJA reform, it would seem desirable to include a fix of
that problem in release 0.16.0.
Does anybody have any comment (pro or con) to make about the proposed
changes in pull request #1858?
@MattHJensen @feenberg @rickecon @jdebacker @GoFroggyRun
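A minimal sketch of the arithmetic in dan's point above, using the pch formula that appears in the pull-request description below and hypothetical MTR values (plain Python, not Tax-Calculator code):

```python
# pch is the proportional change in the marginal aftertax rate:
# pch = (1 - MTR2) / (1 - MTR1) - 1, where 1 denotes baseline and 2 denotes reform.

def pch(mtr1, mtr2):
    return (1.0 - mtr2) / (1.0 - mtr1) - 1.0

# A typical filing unit: baseline MTR of 0.33 falls to 0.30 under the reform.
print(pch(0.33, 0.30))                  # about 0.045

# A filing unit at a notch, with its above-one baseline MTR capped at 0.99999:
print(pch(min(1.10, 0.99999), 0.30))    # about 69,999 -- five orders of magnitude larger
```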
You heard no response to your comment on #1856 because @GoFroggyRun, who is the author of that pull request, did not respond. This is a different pull request trying to resolve the same problem. As the author of #1858, I'll try to respond to your questions in subsequent #1858 comments.
@feenberg said in a comment on pull request #1858:
The default values for the two new constraint arguments in the Behavior.response() method proposed in pull request #1858 imply a denominator no smaller than 0.01 and limit the pch variable for a filing unit to be no more than one. Your suggestion of capping the marginal tax rates at "perhaps 0.7 or 0.5" is viewed by me and @MattHJensen as undesirable, as we both said in separate comments to pull request #1856. Given this understanding of what is being proposed in #1858, do you think the changes in #1858 are an improvement? If not, please explain why the default constraint values in #1858 are not appropriate.
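A quick numeric reading of those defaults (the mtr_cap and max_pch argument names come from later comments in this thread; the MTR values are hypothetical):

```python
# With marginal tax rates capped at 0.99, the pch denominator (1 - MTR1)
# can be no smaller than 0.01; max_pch then limits pch itself to 1.0.
mtr_cap, max_pch = 0.99, 1.0
mtr1, mtr2 = min(1.10, mtr_cap), 0.30     # hypothetical notch-like filing unit
pch = (1.0 - mtr2) / (1.0 - mtr1) - 1.0   # 69.0 with only the MTR cap applied
print(min(pch, max_pch))                  # 1.0 after the max_pch limit
```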
On Fri, 9 Feb 2018, Martin Holmer wrote:
@feenberg said in a comment on pull request #1858:
It is true that there are some notches in the tax code, where the marginal tax rate is
more than one. Since (1-mtr) appears in the denominator of the behavioral effect it causes
the sign of the behavioral effect to reverse as the mtr goes past 1. This is illogical and
should not be allowed. The correction proposed, to cap the rate at .99999 is not a
desirable work-around. This puts the denominator to .00001 and raises the behavioral
effect by 5 orders of magnitude. The elasticity wasn't estimated around that sort of tax
rate, and the result is simply wrong. It should not be used, even as an option. The
behavioral effect needs to be capped at a value near the level at which the estimates were
done. Perhaps .7 or .5. Or behavior could be ignored for the few taxpayers with very high
tax rates. These taxpayers contribute very little to the total effect if the effect is
measured in a plausible manner. I can't think of any excuse to leave them in with .00001
in the denominator.
The default values for the two new constraint arguments in the Behavior.response() method proposed in
pull request #1858 imply a denominator no smaller than 0.01 and limit the pch variable for a filing unit
to be no more than one. Your suggestion of capping the marginal tax rates at "perhaps 0.7 or 0.5" is
viewed by me and @MattHJensen as undesirable, as we both said in separate comments to pull request
#1856.
Given this understanding of what is being proposed in #1858, do you think the changes in #1858 are an
improvement? If not, please explain why the default constraint values in #1858 are not appropriate.
Even .01 is out of sample for the estimated value of the elasticity of taxable
income. Consider a taxpayer at a notch, such that an additional dollar of
income would raise the tax liability by $2. Presumably they don't earn
that extra dollar. Now change the tax such that the liability increase is 50 cents.
They may increase their earnings - delta tau is now -.49. But does it make
sense that the effect should be 100 times the effect of moving from a .49 mtr
to 0?
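Dan's example, written out with the same pch formula and the 0.99 cap from the proposed defaults (hypothetical numbers, a sketch only):

```python
def pch(mtr1, mtr2):
    return (1.0 - mtr2) / (1.0 - mtr1) - 1.0

# At the notch: an extra dollar of income raises liability by $2 (MTR of 2.0),
# and the reform cuts that increase to 50 cents (MTR of 0.5).
print(pch(min(2.0, 0.99), 0.5))   # about 49

# Compare with a filing unit whose MTR falls from 0.49 all the way to 0.
print(pch(0.49, 0.0))             # about 0.96
```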
I don't know what else to say - out of sample calculations are not
reliable and should not be depended on to give sensible results.
Also, I am concerned that we don't know the source of these notches. Are
they caused by rounding to the nearest $50 or $100? Something else? I
don't see how there can be more than a thousand notches in our sample.
Our calculation of mtr is based on a finite difference. What is that
difference? A penny? A dollar? A thousand dollars? In the online taxsim
calculator I compute the finite difference both added and subtracted, then use the
smaller rate. That eliminates the notches. Another method is to use a very
small finite difference - a tenth of a penny. That will hardly ever see a
notch, and if it does the record can be recognized by an absurd tax rate.
Another alternative would be to increase the size of the finite difference
if a notch is found at a small difference.
dan
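A sketch of the two-sided finite-difference idea described above (tax_liability is a hypothetical function of income; this is not how Tax-Calculator computes MTRs):

```python
def marginal_tax_rate(tax_liability, income, delta=1.0):
    """Finite-difference MTR computed in both directions, keeping the smaller
    rate so that a notch on one side of the income point does not dominate."""
    base = tax_liability(income)
    mtr_up = (tax_liability(income + delta) - base) / delta
    mtr_down = (base - tax_liability(income - delta)) / delta
    return min(mtr_up, mtr_down)
```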
Over the last couple of days, I have been searching through the literature to find some evidence that could inform where the cap should be set. Meanwhile, @feenberg presented a strong reason for capping the MTR significantly below one, which we (or at least I) hadn't been considering before.
I still find it unsatisfying to cap MTR significantly below one for all of the reasons @martinholmer has described, but based on @feenberg's reasoning, the outstanding empirical evidence doesn't provide any justification for applying ETI estimates based on very high MTRs. As a near term solution, I am inclined to split the difference and use the structure in this PR to set mtr_cap=.7 and max_pch=9e99.
One more comment on this, which is that I think both of our core options are significantly better than what is currently in master, and I think we should just merge one for the 0.16.0 release and then keep considering this issue on a more leisurely schedule. My inclination is mtr_cap=.7 and max_pch=9e99 based on the reasoning in the comment above, but I think mtr_cap=.99 and max_pch=1 is also serviceable while we think more about this.
@MattHJensen said in pull request #1858:
So, if "the outstanding empirical evidence doesn't provide any justification for applying ETI estimates based on very high MTRs", this implies to me that the "outstanding empirical evidence" is all based on aggregate data. If that is so, the logical conclusion is that "applying ETI estimates" in a microsimulation model is completely inappropriate. And that would imply we should remove the As I always said, I don't know this literature at all. But listening to the arguments being made here suggest we should simply drop the substitution-effect logic from Tax-Calculator. |
@MattHJensen said in pull request #1858:
I don't see this as a "split the difference" solution. If we believe the substitution elasticity estimates are derived from econometric work on micro data (rather than from aggregate time-series data) then there is no problem applying the estimated elasticity with logical-bounds restrictions. But if not, then to me the logical response is to conclude that substitution elasticities have no place in a microsimulation model. The above "near term solution" has all the problems I outlined in this comment. So, I don't see it as a "solution" at all. I see it as introducing another bug into Tax-Calculator.
@martinholmer, some of the estimates in the literature are derived from econometric work on microdata. The disagreement is about whether there is a problem with applying them in a simulation of proposed policy to a taxpayer on a notch.
@MattHJensen said in pull request #1858:
Can you point us to the paper(s) that contain "econometric work on microdata" and also point us to the page(s) that contain econometric estimates of the elasticities by income group (as you mentioned in issue #494)? There is no evidence that I've seen that indicates that any of the filing units with high marginal tax rates are at a "notch". This is just what Dan keeps saying without providing any evidence to support his allegation. What I've seen when actually looking at the filing units with high marginal tax rates is that their attributes put them in a place where the marginal tax rate on taxpayer earnings is very high. For example, they are in a high tax bracket and they are experiencing the phase-out of education credits. If you want to characterize this problem as being caused by taxpayers being "on a notch", I suggest you show us some filing units in the
I may be using the term improperly, but I would characterize this as a notch because the tax unit's MTR will be lower as it goes higher up the income range.
Gruber and Saez. See table 9 on page 24.
@martinholmer, could you describe why you think dampening the behavioral response with
In issue #494, @MattHJensen pointed to substitution elasticity estimates derived from econometric work on micro data (by Gruber and Saez) that vary by income group. Here are those results:
Thanks, @MattHJensen, for providing us with these estimates. If I understand correctly the request in issue #494, we should revise the logic of the Behavior.response method so that it does not use a single substitution elasticity (that applies to all filing units) but rather have it use more than one income-group-specific substitution elasticity. So, the new logic would permit, for example, the use of these Gruber-Saez results: 0.18 for lower income filing units, 0.11 for middle income filing units, and 0.57 for higher income filing units. Notice that, by far, the largest elasticity is for the higher income group. Given these results, I don't understand why Dan keeps saying that the substitution elasticity should not be applied to high income groups with high marginal tax rates. What Gruber and Saez find is that that is where most of the substitution is occurring.
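A minimal sketch of what income-group-specific elasticities could look like in this logic (the three values are the Gruber-Saez estimates quoted above; the income break points are placeholders, not anything from the paper or from Tax-Calculator):

```python
def substitution_elasticity(income):
    """Return an income-group-specific substitution elasticity
    (Gruber-Saez point estimates; hypothetical break points)."""
    if income < 50_000:       # lower-income filing units
        return 0.18
    if income < 200_000:      # middle-income filing units
        return 0.11
    return 0.57               # higher-income filing units
```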
@MattHJensen asked:
I don't think that is true. If I said that, then I misspoke. With
As I remember, the results were almost the same. So, dropping the
The key findings from the comment above, which I generated using the script provided by @martinholmer (Thank you!), are that:
Given these findings, it does seem to make sense to drop
@MattHJensen said in pull request #1858:
OK, I see what you're saying. I was just inferring from Dan's comments (which focused on whether we were using a one cent or a one dollar income change and whether the income change was positive or negative) that he was using the term to talk about large discontinuous jumps in tax liability. I don't see any evidence that the outlier filing units we see in the Behavior.response trace output are experiencing large discontinuous jumps in tax liability. But, at the same time, if their income rises by substantial amounts, they eventually will experience lower marginal tax rates because the phase-out will be completed. So, in hopes of finding some clarity about tax terminology, I did a Google search for "what is tax notch" and the first link was a paper by Joel Slemrod. He describes a "notch" as follows:
A wide range of tax and other policies create discontinuous jumps—notches—in the choice set of individuals or firms, because incremental changes in behavior cause discrete changes in net tax liability.
Slemrod's usage seems to be the same as Dan's usage of the term.
What combination of taxes and phaseouts is responsible for the highest
rates?
Can I ask what is the finite difference we use to calculate the MTR? Is
there any reason to believe a positive difference will be more or less
valid than a negative difference? If they differ, which would be better?
What action do we take to smooth places in the tax code where a dollar
increase in income raises taxable income by $50 or $100? Doesn't this
cause some taxpayers to have a high MTR for a positive finite difference
and a lower rate for a negative difference? Does it make sense to model
this taxpayer as being very sensitive to the tax rate on wage income? In
taxsim we smooth all such step functions over the $50 or $100 range.
dan
…On Tue, 13 Feb 2018, Martin Holmer wrote:
@MattHJensen said in pull request #1858:
For example, they are in a high tax bracket and they
are experiencing the phase-out of education credits.
I may be using the term improperly, but I would characterize
this as a notch because the tax unit's MTR will be lower as it
goes higher up the income range.
OK, I see what you're saying. I was just inferring from Dan's comments
(which focused on whether we were using a one cent or a one dollar income
change and whether the income change was positive or negative) that he was
using the term to talk about large discontinuous jumps in tax liability. I
don't see any evidence that the outlier filing units we see in the
Behavior.response trace output are experiencing large discontinuous jumps in
tax liability. But, at the same time, if their income rises by substantial
amounts, they eventually will experience lower marginal tax rates because
the phase-out will be completed.
So, in hopes of finding some clarity about tax terminology, I did a Google
search for "what is tax notch" and the first link was a paper by Joel
Slemrod. He describes a "notch" as follows:
A wide range of tax and other policies create discontinuous
jumps—notches—in the choice set of individuals or firms, because
incremental changes in behavior cause discrete changes in net
tax liability.
Slemrod's usage seems to be the same as Dan's usage of the term.
@feenberg said:
Tax-Calculator uses a penny by default. We don't calculate negative finite differences by default because of the computational burden and because testing results have shown that it doesn't make much of a difference. If we did, we would take the smaller of the two.
We apply the same smoothing as is in TaxSim.
In my experience in the past, several high mtrs resulted from the taxation of social security benefits. See, for example, this discussion.
On Tue, 13 Feb 2018, Matt Jensen wrote:
In my experience in the past, several high mtrs resulted from the taxation
of social security benefits. See, for example, this discussion.
In part of the SS phase-in range each dollar of income adds $1.85 to taxable
income, which won't put the taxpayer above .7 alone. I suppose there might
be taxpayers in another clawback at the same time though. Those are the
taxpayers I wouldn't use in the elasticity formula.
dan
In pull request #1858, @MattHJensen said:
In response, @feenberg said this:
Dan, if you had looked at the link Matt provided, you would have seen (from the TAXSIM output) that you are wrong. Here (in part) is what Matt pointed to:
This output shows that TAXSIM says the federal income tax MTR on taxpayer earnings is 75.85 percent, which is, in fact, "above .7". And when you add in the payroll tax rate of 15.30 percent, this taxpayer is experiencing a combined MTR on earnings of 91.15 percent.
This pull request adds flexibility to the ad hoc constraints on the calculated value of the proportional change in marginal aftertax rates in the Behavior.response method. This proportional-change (pch) variable is used in the substitution-effect calculations and in the charitable-contribution-response calculations. Other response calculations use semi-elasticities, which do not require the calculation of the pch variable.

There are two problems in a microsimulation when calculating the pch variable:

1. In rare instances, the marginal tax rate, MTR, can be greater than or equal to one, which makes the marginal aftertax rate, 1-MTR, negative (or zero), causing inappropriate values of pch, which is defined as (1-MTR2)/(1-MTR1) - 1 (where the trailing 1 denotes baseline and the trailing 2 denotes reform). After the initial discussion of issue #1668 (Are Tax-Calculator results too sensitive to substitution effect elasticity?), @jdebacker and I agreed on the current approach to handling this situation, which involves capping both MTR1 and MTR2 at a number, nearone, that is very close to one, 0.999999.

2. In some instances, the value of pch can be quite large even though both MTR1 and MTR2 are not capped. In these cases, large pch values generate enormous dollar increases in taxable income. That this is going on when simulating the move from pre-TCJA policy to TCJA policy was discussed back in November in this comment, which included the following two sentences:
But neither I nor anybody else followed up on this matter until @MattHJensen made this January comment.

After setting trace=True when calling the Behavior.response method in a wide variety of situations, it seems to me that our problems are rooted in using a substitution elasticity (rather than a semi-elasticity). From what little I know about the research literature, it would not be desirable to avoid our outlier problems by converting to a semi-elasticity. So, sticking with an elasticity and its pch calculation, we need to impose some kind of constraint on the calculated values so that the substitution response is not unreasonably large. There are many kinds of ad hoc constraints and some are more sensible than others. In particular, the constraints suggested in pull request #1856 do not seem very sensible.

This pull request suggests a pair of constraints --- the severity of which can be controlled by two new arguments of the Behavior.response method --- that together can be used to limit the effect of the handful of outliers on the aggregate response. Both these new arguments have default values, but those default values can be discussed here. The main advantage of the constraint logic suggested in this pull request is that the outliers, who have enormous responses in the unconstrained calculation, still have large responses after the constraints have been applied.
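A rough illustration of how the two constraints interact in the pch calculation (a simplified stand-in, not the actual Behavior.response code; the mtr_cap and max_pch names and default values follow the discussion in this thread, and the filing-unit MTRs are hypothetical):

```python
import numpy as np

def constrained_pch(mtr1, mtr2, mtr_cap=0.99, max_pch=1.0):
    """Proportional change in marginal aftertax rates, with both constraints.

    mtr1 and mtr2 are arrays of baseline and reform marginal tax rates.
    mtr_cap keeps the denominator (1 - mtr1) from falling below 1 - mtr_cap,
    and max_pch limits each filing unit's pch value.
    """
    mtr1 = np.minimum(mtr1, mtr_cap)
    mtr2 = np.minimum(mtr2, mtr_cap)
    pch = (1.0 - mtr2) / (1.0 - mtr1) - 1.0
    return np.minimum(pch, max_pch)

# Three hypothetical filing units; the last one sits at a notch (MTR above one).
mtr1 = np.array([0.25, 0.396, 1.10])
mtr2 = np.array([0.22, 0.350, 0.30])
print(constrained_pch(mtr1, mtr2))                              # [0.04  0.076  1.0]
print(constrained_pch(mtr1, mtr2, mtr_cap=0.7, max_pch=9e99))   # the alternative discussed above
```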
See the script that I used to trace the outliers and develop the constraint ideas in this pull request in this comment.
Comments and discussion are welcome, but I'm going to be out-of-town on Wednesday and Thursday, so I will not be able to join the conversation until Friday.
@MattHJensen @feenberg @rickecon @jdebacker @GoFroggyRun