
Add optional argument to Behavior.response method #1858

Merged: martinholmer merged 6 commits into PSLmodels:master from fix-response on Feb 13, 2018

Conversation

@martinholmer (Collaborator) commented Feb 6, 2018

This pull request adds flexibility to the ad hoc constraints on the calculated value of the proportional change in marginal aftertax rates in the Behavior.response method. This proportional change (pch) variable is used in the substitution-effect calculations and in the charitable-contribution-response calculations. Other response calculations use semi-elasticities, which do not require the calculation of the pch variable.

There are two problems in a microsimulation model when calculating the pch variable:

  1. In rare instances, the marginal tax rate, MTR, can be greater than or equal to one, which makes the marginal aftertax rate, 1-MTR, negative (or zero), causing inappropriate values of pch, which is defined as (1-MTR2)/(1-MTR1) - 1 (where the trailing 1 denotes baseline and the trailing 2 denotes reform). After the initial discussion of issue #1668 (Are Tax-Calculator results too sensitive to substitution effect elasticity?), @jdebacker and I agreed on the current approach to handling this situation, which involves capping both MTR1 and MTR2 at a number, nearone, that is very close to one: 0.999999.

  2. In some instances, the value of pch can be quite large even though neither MTR1 nor MTR2 is capped. In these cases, large pch values generate enormous dollar increases in taxable income. That this happens when simulating the move from pre-TCJA policy to TCJA policy was discussed back in November in this comment, which included the following two sentences:

At the extreme end of the sub distribution, we have 1,316 filing units who are simulated to have an increase in taxable income of one million dollars or more. And there are another 9,941 filing units who are simulated to have an increase in taxable income of between $100,000 and $1,000,000.

But neither I nor anybody else followed up on this matter until @MattHJensen made this January comment.
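
The capped pch calculation described in item 1 above amounts to something like this minimal sketch (an illustration only, not the actual Behavior.response implementation):

NEARONE = 0.999999  # the cap described in item 1 above

def pch(mtr1, mtr2, cap=NEARONE):
    # proportional change in marginal aftertax rates, with both the
    # baseline (mtr1) and reform (mtr2) rates capped just below one
    mtr1 = min(mtr1, cap)
    mtr2 = min(mtr2, cap)
    return (1.0 - mtr2) / (1.0 - mtr1) - 1.0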

After setting trace=True when calling the Behavior.response method in a wide variety of situations, it seems to me that our problems are rooted in using a substitution elasticity (rather than a semi-elasticity). From what little I know about the research literature, it would not be desirable to avoid our outlier problems by converting to a semi-elasticity. So, sticking with an elasticity and its pch calculation, we need to impose some kind of constraint on the calculated values so that the substitution response is not unreasonably large.

There are many kinds of ad hoc constraints and some are more sensible than others. In particular, the constraints suggested in pull request #1856 do not seem very sensible.

This pull request suggests a pair of constraints --- the severity of which can be controlled by two new arguments of the Behavior.response method --- that together can be used to limit the effect of the handful of outliers on the aggregate response. Both of these new arguments have default values, but those default values can be discussed here.

The main advantage of the constraint logic suggested in this pull request is that the outliers, who have enormous responses in the unconstrained calculation, still have large responses after the constraints have been applied.
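
To make the proposed pair of constraints concrete, here is a hedged sketch of how the two new arguments could interact; the default values shown (mtr_cap=0.99 and max_pch=1.0) match the ones used in the trace script below, but the function itself is an illustration rather than the pull request's diff:

import numpy as np

def constrained_pch(mtr1, mtr2, mtr_cap=0.99, max_pch=1.0):
    # capping both MTRs at mtr_cap keeps the denominator, 1 - mtr1,
    # no smaller than 1 - mtr_cap (0.01 with the default cap)
    mtr1 = np.minimum(mtr1, mtr_cap)
    mtr2 = np.minimum(mtr2, mtr_cap)
    pch = (1.0 - mtr2) / (1.0 - mtr1) - 1.0
    # limit each filing unit's proportional change to at most max_pch
    return np.minimum(pch, max_pch)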

See the script that I used to trace the outliers and develop the constraint ideas in this pull request in this comment.

Comments and discussion are welcome, but I'm going to be out-of-town on Wednesday and Thursday, so I will not be able to join the conversation until Friday.

@MattHJensen @feenberg @rickecon @jdebacker @GoFroggyRun

@codecov-io commented Feb 6, 2018

Codecov Report

Merging #1858 into master will not change coverage.
The diff coverage is 100%.


@@          Coverage Diff           @@
##           master   #1858   +/-   ##
======================================
  Coverage     100%    100%           
======================================
  Files          37      37           
  Lines        3207    3213    +6     
======================================
+ Hits         3207    3213    +6
Impacted Files Coverage Δ
taxcalc/behavior.py 100% <100%> (ø) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0f718eb...f682979.

@martinholmer (Collaborator, Author) commented Feb 6, 2018

Here is the script I used to develop pull request #1858:

from __future__ import print_function
from taxcalc import *
import urllib as url_lib  # Python 2 idiom; on Python 3, use urllib.request

# read two reform files from Tax-Calculator website
BASE_URL = ('https://raw.githubusercontent.com/'
            'open-source-economics/Tax-Calculator/master/taxcalc/reforms/')
baseline_name = '2017_law.json'  # pre-TCJA policy
baseline_text = url_lib.urlopen(BASE_URL + baseline_name).read()
baseline = Calculator.read_json_param_objects(baseline_text, None)
reform_name = 'TCJA_Reconciliation.json'  # TCJA as passed by Congress
reform_text = url_lib.urlopen(BASE_URL + reform_name).read()
reform = Calculator.read_json_param_objects(reform_text, None)

# specify pre-TCJA baseline policy and Calculator object that uses PUF data
policy1 = Policy()
policy1.implement_reform(baseline['policy'])
if policy1.reform_errors:
    print(policy1.reform_errors)
    exit(1)
calc1 = Calculator(policy=policy1, records=Records(), verbose=False)

# specify post-TCJA reform policy and Calculator object without any response
policy2 = Policy()
calc2 = Calculator(policy=policy2, records=Records(), verbose=False)

# specify post-TCJA reform policy and Calculator object with behavior response
behv_assumps = {2013: {"_BE_sub": [0.25]}}
behv = Behavior()
behv.update_behavior(behv_assumps)
calc3b = Calculator(policy=policy2, records=Records(), verbose=False,
                    behavior=behv)

cyr = 2020

# compute tax revenue in specified year for all three Calculator objects
calc1.advance_to_year(cyr)
calc1.calc_all()
calc2.advance_to_year(cyr)
calc2.calc_all()
calc3b.advance_to_year(cyr)
calc3 = Behavior.response(calc1, calc3b, mtr_cap=0.99, max_pch=1.0, trace=True)

# compare aggregate tax revenue in cyr
ptax1 = calc1.weighted_total('payrolltax')
ptax2 = calc2.weighted_total('payrolltax')
ptax3 = calc3.weighted_total('payrolltax')
itax1 = calc1.weighted_total('iitax')
itax2 = calc2.weighted_total('iitax')
itax3 = calc3.weighted_total('iitax')

# print aggregate tax revenues in cyr
print('{}_calc1_ptax($B)= {:.1f}'.format(cyr, ptax1 * 1e-9))
print('{}_calc1_itax($B)= {:.1f}'.format(cyr, itax1 * 1e-9))
print('{}_calc2_ptax($B)= {:.1f}'.format(cyr, ptax2 * 1e-9))
print('{}_calc2_itax($B)= {:.1f}'.format(cyr, itax2 * 1e-9))
print('{}_calc3_ptax($B)= {:.1f}'.format(cyr, ptax3 * 1e-9))
print('{}_calc3_itax($B)= {:.1f}'.format(cyr, itax3 * 1e-9))

And here is sample output when tracing with the default values of the two constraint arguments and with _BE_sub = 0.25 (rather than 0.40):

*** TRACE for variable wmtr1
*** Histogram:
[  3791  55030 145102  10700    794     93      7      2      4      0
      2      0]
[ -9.00000000e+99   0.00000000e+00   2.50000000e-01   5.00000000e-01
   6.00000000e-01   7.00000000e-01   8.00000000e-01   9.00000000e-01
   9.99999000e-01   1.10000000e+00   1.20000000e+00   1.30000000e+00
   9.00000000e+99]
*** Person-weighted mean= 0.25
high wage_mtr1: [ 1.01264169  1.01264169  1.04341831  1.25177691  1.25177691  1.06664172]
wage_mtr2 them: [ 0.89187996  0.89187996  1.01555022  1.177462    1.177462    0.2131909 ]
*** TRACE for variable pch
*** Histogram:
[     0     13   1128   4323   9063  59970 102513  24305  13733    452
     25]
[ -9.00000000e+99  -1.00000000e+00  -5.00000000e-01  -2.00000000e-01
  -1.00000000e-01  -1.00000000e-05   1.00000000e-05   1.00000000e-01
   2.00000000e-01   5.00000000e-01   1.00000000e+00   9.00000000e+99]
*** Person-weighted mean= 0.04
*** Dollar-weighted mean= 0.16
*** TRACE for variable sub
*** Histogram:
[ 7108  6028 63624 73568 32949 22038  9394   816]
[ -9.00000000e+99  -1.00000000e+03  -1.00000000e-01   1.00000000e-01
   1.00000000e+03   1.00000000e+04   1.00000000e+05   1.00000000e+06
   9.00000000e+99]
*** Person-weighted mean= 1044.84
2020_calc1_ptax($B)= 1229.4
2020_calc1_itax($B)= 1924.7
2020_calc2_ptax($B)= 1229.4
2020_calc2_itax($B)= 1735.8
2020_calc3_ptax($B)= 1239.9
2020_calc3_itax($B)= 1791.1

You can use this script to try out different values of the two constraint arguments and see what happens both to the outlier filing units and to the size of the aggregate behavioral-response effect relative to the static revenue loss of the TCJA reform.

Notice that, when _BE_sub=0.25 and the master-branch logic is used (which is equivalent to setting mtr_cap=0.999999 and max_pch=9e99), the last line in the aggregate output changes from this:

2020_calc3_itax($B)= 1791.1

to this:

2020_calc3_itax($B)= 14476.9

This is the kind of excessively large substitution response that has been discussed since November.

@martinholmer added and then removed the WIP label on Feb 6, 2018
@martinholmer (Collaborator, Author) commented

It seems likely that Tax-Calculator 0.16.0 will be released during the week of February 12-16 after the 2011 puf.csv and associated files from taxdata pull request 114 are incorporated into Tax-Calculator.

Given all the discussion of unreasonably large behavioral substitution responses to the TCJA reform, it would seem desirable to include a fix of that problem in release 0.16.0.

Does anybody have any comment (pro or con) to make about the proposed changes in pull request #1858?

@MattHJensen @feenberg @rickecon @jdebacker @GoFroggyRun

@feenberg (Contributor) commented Feb 9, 2018 via email

@martinholmer (Collaborator, Author) commented

@feenberg said in PR #1858:

Did no one see my comments on [pull request #1856] Feb 1?
I didn't hear any pushback, but the proposal below doesn't mention my comments and is not desirable.

You heard no response to your comment on #1856 because @GoFroggyRun, the author of that pull request, did not respond.

This is a different pull request trying to resolve the same problem. As the author of #1858, I'll try to respond to your questions in subsequent #1858 comments.

@martinholmer (Collaborator, Author) commented

@feenberg said in a comment on pull request #1858:

It is true that there are some notches in the tax code, where the marginal tax rate is more than one. Since (1-mtr) appears in the denominator of the behavioral effect it causes the sign of the behavioral effect to reverse as the mtr goes past 1. This is illogical and should not be allowed. The correction proposed, to cap the rate at .99999 is not a desirable work-around. This puts the denominator to .00001 and raises the behavioral effect by 5 orders of magnitude. The elasticity wasn't estimated around that sort of tax rate, and the result is simply wrong. It should not be used, even as an option. The behavioral effect needs to be capped at a value near the level at which the estimates were done. Perhaps .7 or .5. Or behavior could be ignored for the few taxpayers with very high tax rates. These taxpayers contribute very little to the total effect if the effect is measured in a plausible manner. I can't think of any excuse to leave them in with .00001 in the denominator.

The default values for the two new constraint arguments in the Behavior.response() method proposed in pull request #1858 imply a minimum denominator of 0.01 and limit the pch variable for a filing unit to no more than one. Your suggestion of capping the marginal tax rates at "perhaps 0.7 or 0.5" is viewed by me and @MattHJensen as undesirable, as we both said in separate comments on pull request #1856.
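
A worked example (my arithmetic, using the first outlier shown in the trace output above) illustrates how much the proposed constraints matter:

mtr1, mtr2 = 1.01264169, 0.89187996  # first outlier in the trace output

# master-branch logic: cap both MTRs at nearone = 0.999999
pch_old = (1 - mtr2) / (1 - min(mtr1, 0.999999)) - 1  # roughly 108,119

# proposed logic: cap at mtr_cap = 0.99, then limit pch to max_pch = 1.0
pch_new = min((1 - mtr2) / (1 - min(mtr1, 0.99)) - 1, 1.0)  # exactly 1.0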

Given this understanding of what is being proposed in #1858, do you think the changes in #1858 are an improvement? If not, please explain why the default constraint values in #1858 are not appropriate.

@feenberg (Contributor) commented Feb 11, 2018 via email

@MattHJensen (Contributor) commented

Over the last couple of days, I have been searching through the literature to find some evidence that could inform where max_pch should be set, and I haven't found anything.

Meanwhile, @feenberg presented a strong reason for capping the MTR significantly below one, which we (or at least I) hadn't been considering before:

The elasticity wasn't estimated around that sort of tax rate, and the result is simply wrong. It should not be used, even as an option. The behavioral effect needs to be capped at a value near the level at which the estimates were done.

Out of sample calculations are not reliable and should not be depended on to give sensible results.

I still find it unsatisfying to cap MTR significantly below one for all of the reasons @martinholmer has described, but based on @feenberg's reasoning, the outstanding empirical evidence doesn't provide any justification for applying ETI estimates based on very high MTRs.

As a near term solution, I am inclined to split the difference and use the structure in this PR to set nearone (perhaps renamed as mtr_cap or similar) at 0.7 and set max_pch at 9e99. This would have a similar outcome to #1856 but would allow for max_pch to be used as an alternative for users who prefer that. It may also make sense to open a new issue for the consideration of these default settings.

@MattHJensen (Contributor) commented

One more comment on this, which is that I think both of our core options are significantly better than what is currently in master, and I think we should just merge one for the 0.16.0 release and then keep considering this issue on a more leisurely schedule. My inclination is mtr_cap=0.7 and max_pch=9e99 based on the reasoning in the comment above, but I think mtr_cap=0.99 and max_pch=1 is also serviceable while we think more about this.

@martinholmer (Collaborator, Author) commented

@MattHJensen said in pull request #1858:

I still find it unsatisfying to cap MTR significantly below one for all of the reasons @martinholmer has described, but based on @feenberg's reasoning, the outstanding empirical evidence doesn't provide any justification for applying ETI estimates based on very high MTRs.

So, if "the outstanding empirical evidence doesn't provide any justification for applying ETI estimates based on very high MTRs", this implies to me that the "outstanding empirical evidence" is all based on aggregate data. If that is so, the logical conclusion is that "applying ETI estimates" in a microsimulation model is completely inappropriate. And that would imply we should remove the Behavior.response substitution-effect logic from Tax-Calculator.

As I have always said, I don't know this literature at all. But the arguments being made here suggest we should simply drop the substitution-effect logic from Tax-Calculator.

@martinholmer (Collaborator, Author) commented

@MattHJensen said in pull request #1858:

As a near term solution, I am inclined to split the difference and use the structure in this PR to set nearone (perhaps renamed as mtr_cap or similar) at 0.7 and set max_pch at 9e99.

I don't see this as a "split the difference" solution. If we believe the substitution elasticity estimates are derived from econometric work on micro data (rather than from aggregate time-series data) then there is no problem applying the estimated elasticity with logical-bounds restrictions. But if not, then to me the logical response is to conclude that substitution elasticities have no place in a microsimulation model.

The above "near term solution" has all the problems I outlined in this comment. So, I don't see it as a "solution" at all. I see it as introducing another bug into Tax-Calculator.

@MattHJensen (Contributor) commented Feb 13, 2018

if we believe the substitution elasticity estimates are derived from econometric work on micro data (rather than from aggregate time-series data) then there is no problem applying the estimated elasticity with logical-bounds restrictions.

@martinholmer, some of the estimates in the literature are derived from econometric work on microdata.

The disagreement is about whether there is a problem with applying them in a simulation of proposed policy to a taxpayer on a notch.

@martinholmer (Collaborator, Author) commented

@MattHJensen said in pull request #1858:

some of the estimates in the literature are derived from econometric work on microdata.

The disagreement is about whether there is a problem with applying them in a simulation of proposed policy to a taxpayer on a notch.

Can you point us to the paper(s) that contain "econometric work on microdata" and also point us to the page(s) that contain econometric estimates of the elasticities by income group (as you mentioned in issue #494)?

There is no evidence that I've seen that indicates that any of the filing units with high marginal tax rates are at a "notch". This is just what Dan keeps saying without providing any evidence to support his allegation. What I've seen when actually looking at the filing units with high marginal tax rates is that their attributes put them in a place where the marginal tax rate on taxpayer earnings is very high. For example, they are in a high tax bracket and they are experiencing the phase-out of education credits.

If you want to characterize this problem as being caused by taxpayers being "on a notch" I suggest you show us some filing units in the puf.csv file that are at a "notch".

@MattHJensen (Contributor) commented

For example, they are in a high tax bracket and they are experiencing the phase-out of education credits.

I may be using the term improperly, but I would characterize this as a notch because the tax unit's MTR will be lower as it goes higher up the income range.

Can you point us to the paper(s) that contain "econometric work on microdata" and also point us to the page(s) that contain econometric estimates of the elasticities by income group (as you mentioned in issue #494)?

Gruber and Saez. See table 9 on page 24.

@MattHJensen (Contributor) commented Feb 13, 2018

@martinholmer, could you describe why you think dampening the behavioral response with max_pch is more sensible than capping it with mtr_cap? My take right now is that a max_pch cap has a very similar effect on aggregate results but lacks any clear justification and will be harder for a user to understand. So I am curious why you seem quite antagonistic towards the mtr_cap but are fine with max_pch. It's quite possible that I just don't understand why max_pch is better.

@martinholmer (Collaborator, Author) commented

In issue #494, @MattHJensen pointed to substitution elasticity estimates derived from econometric work on micro data (by Gruber and Saez) that vary by income group. Here are those results:

[screenshot: Gruber-Saez substitution elasticity estimates by income group (Table 9)]

Thanks, @MattHJensen, for providing us with these estimates.

If I understand correctly the request in issue #494, we should revise the logic of the Behavior.response method so that it does not use a single substitution elasticity (that applies to all filing units) but rather have it use more than one income-group-specific substitution elasticity. So, the new logic would permit, for example, the use of these Gruber-Saez results: 0.18 for lower income filing units, 0.11 for middle income filing units, and 0.57 for higher income filing units.
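
As a hypothetical sketch (not part of this pull request), income-group-specific elasticities could be assigned along these lines; the 0.18/0.11/0.57 values are the Gruber-Saez group estimates quoted above, while the income cutoffs are placeholders of my own:

import numpy as np

def sub_elasticity(income):
    # placeholder income-group cutoffs; the elasticity values are the
    # Gruber-Saez estimates for lower, middle, and higher income groups
    return np.where(income < 50000, 0.18,
                    np.where(income < 100000, 0.11, 0.57))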

Notice that, by far, the largest elasticity is for the higher income group. Given these results, I don't understand why Dan keeps saying that the substitution elasticity should not be applied to high income groups with high marginal tax rates. What Gruber and Saez find is that this is where most of the substitution is occurring.

@martinholmer (Collaborator, Author) commented Feb 13, 2018

@MattHJensen asked:

could you describe why you think dampening the behavioral response with max_pch is more sensible than capping it with mtr_cap?

I don't think that is true. If I said that, then I misspoke.

With mtr_cap at 0.99, the value of max_pch makes almost no difference. Use the script I supplied to compare results generated by two sets of assumptions:

  1. mtr_cap=0.99 and max_pch=1.0

  2. mtr_cap=0.99 and max_pch=0.5

As I remember, the results were almost the same.
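
In script terms, the comparison amounts to changing only the max_pch argument in the response call used in the script above:

calc3 = Behavior.response(calc1, calc3b, mtr_cap=0.99, max_pch=1.0, trace=True)
calc3 = Behavior.response(calc1, calc3b, mtr_cap=0.99, max_pch=0.5, trace=True)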

So, dropping the max_pch is OK with me. Should I change this pull request to drop the max_pch argument?

@MattHJensen (Contributor) commented

*** TRACE *** mtr_cap=0.99 and max_pch=1.0
*** TRACE for variable wmtr1
*** Histogram:
[  3791  55030 145102  10700    794     93      7      2      4      0
      2      0]
[ -9.00000000e+99   0.00000000e+00   2.50000000e-01   5.00000000e-01
   6.00000000e-01   7.00000000e-01   8.00000000e-01   9.00000000e-01
   9.99999000e-01   1.10000000e+00   1.20000000e+00   1.30000000e+00
   9.00000000e+99]
*** Person-weighted mean= 0.25
high wage_mtr1: [ 1.01264169  1.01264169  1.04341831  1.25177691  1.25177691  1.06664172]
wage_mtr2 them: [ 0.89187996  0.89187996  1.01555022  1.177462    1.177462    0.2131909 ]
*** TRACE for variable pch
*** Histogram:
[     0     13   1128   4323   9063  59970 102513  24305  13733    452
     25]
[ -9.00000000e+99  -1.00000000e+00  -5.00000000e-01  -2.00000000e-01
  -1.00000000e-01  -1.00000000e-05   1.00000000e-05   1.00000000e-01
   2.00000000e-01   5.00000000e-01   1.00000000e+00   9.00000000e+99]
*** Person-weighted mean= 0.04
*** Dollar-weighted mean= 0.16
*** TRACE for variable sub
*** Histogram:
[ 7108  6028 63624 73568 32949 22038  9394   816]
[ -9.00000000e+99  -1.00000000e+03  -1.00000000e-01   1.00000000e-01
   1.00000000e+03   1.00000000e+04   1.00000000e+05   1.00000000e+06
   9.00000000e+99]
*** Person-weighted mean= 1044.84
2020_calc1_ptax($B)= 1229.4
2020_calc1_itax($B)= 1924.7
2020_calc2_ptax($B)= 1229.4
2020_calc2_itax($B)= 1735.8
2020_calc3_ptax($B)= 1239.9
2020_calc3_itax($B)= 1791.1

*** TRACE *** mtr_cap=0.99 and max_pch=9e+99
*** TRACE for variable wmtr1
*** Histogram:
[  3791  55030 145102  10700    794     93      7      2      4      0
      2      0]
[ -9.00000000e+99   0.00000000e+00   2.50000000e-01   5.00000000e-01
   6.00000000e-01   7.00000000e-01   8.00000000e-01   9.00000000e-01
   9.99999000e-01   1.10000000e+00   1.20000000e+00   1.30000000e+00
   9.00000000e+99]
*** Person-weighted mean= 0.25
high wage_mtr1: [ 1.01264169  1.01264169  1.04341831  1.25177691  1.25177691  1.06664172]
wage_mtr2 them: [ 0.89187996  0.89187996  1.01555022  1.177462    1.177462    0.2131909 ]
*** TRACE for variable pch
*** Histogram:
[     0     13   1128   4323   9063  59970 102513  24305  13733    452
     25]
[ -9.00000000e+99  -1.00000000e+00  -5.00000000e-01  -2.00000000e-01
  -1.00000000e-01  -1.00000000e-05   1.00000000e-05   1.00000000e-01
   2.00000000e-01   5.00000000e-01   1.00000000e+00   9.00000000e+99]
*** Person-weighted mean= 0.04
*** Dollar-weighted mean= 0.16
*** TRACE for variable sub
*** Histogram:
[ 7108  6028 63624 73568 32949 22034  9394   820]
[ -9.00000000e+99  -1.00000000e+03  -1.00000000e-01   1.00000000e-01
   1.00000000e+03   1.00000000e+04   1.00000000e+05   1.00000000e+06
   9.00000000e+99]
*** Person-weighted mean= 1063.53
2020_calc1_ptax($B)= 1229.4
2020_calc1_itax($B)= 1924.7
2020_calc2_ptax($B)= 1229.4
2020_calc2_itax($B)= 1735.8
2020_calc3_ptax($B)= 1240.0
2020_calc3_itax($B)= 1792.2


*** TRACE *** mtr_cap=0.7 and max_pch=9e+99
*** TRACE for variable wmtr1
*** Histogram:
[  3791  55030 145102  10700    794     93      7      2      4      0
      2      0]
[ -9.00000000e+99   0.00000000e+00   2.50000000e-01   5.00000000e-01
   6.00000000e-01   7.00000000e-01   8.00000000e-01   9.00000000e-01
   9.99999000e-01   1.10000000e+00   1.20000000e+00   1.30000000e+00
   9.00000000e+99]
*** Person-weighted mean= 0.25
high wage_mtr1: [ 1.01264169  1.01264169  1.04341831  1.25177691  1.25177691  1.06664172]
wage_mtr2 them: [ 0.89187996  0.89187996  1.01555022  1.177462    1.177462    0.2131909 ]
*** TRACE for variable pch
*** Histogram:
[     0      6   1131   4304   9083  59981 102543  24290  13716    450
     21]
[ -9.00000000e+99  -1.00000000e+00  -5.00000000e-01  -2.00000000e-01
  -1.00000000e-01  -1.00000000e-05   1.00000000e-05   1.00000000e-01
   2.00000000e-01   5.00000000e-01   1.00000000e+00   9.00000000e+99]
*** Person-weighted mean= 0.04
*** Dollar-weighted mean= 0.16
*** TRACE for variable sub
*** Histogram:
[ 7105  6028 63635 73589 32922 22036  9391   819]
[ -9.00000000e+99  -1.00000000e+03  -1.00000000e-01   1.00000000e-01
   1.00000000e+03   1.00000000e+04   1.00000000e+05   1.00000000e+06
   9.00000000e+99]
*** Person-weighted mean= 1045.16
2020_calc1_ptax($B)= 1229.4
2020_calc1_itax($B)= 1924.7
2020_calc2_ptax($B)= 1229.4
2020_calc2_itax($B)= 1735.8
2020_calc3_ptax($B)= 1239.9
2020_calc3_itax($B)= 1791.0


*** TRACE *** mtr_cap=0.5 and max_pch=9e+99
*** TRACE for variable wmtr1
*** Histogram:
[  3791  55030 145102  10700    794     93      7      2      4      0
      2      0]
[ -9.00000000e+99   0.00000000e+00   2.50000000e-01   5.00000000e-01
   6.00000000e-01   7.00000000e-01   8.00000000e-01   9.00000000e-01
   9.99999000e-01   1.10000000e+00   1.20000000e+00   1.30000000e+00
   9.00000000e+99]
*** Person-weighted mean= 0.25
high wage_mtr1: [ 1.01264169  1.01264169  1.04341831  1.25177691  1.25177691  1.06664172]
wage_mtr2 them: [ 0.89187996  0.89187996  1.01555022  1.177462    1.177462    0.2131909 ]
*** TRACE for variable pch
*** Histogram:
[     0      0    925   4208   9254  61510 101915  24533  13098     77
      5]
[ -9.00000000e+99  -1.00000000e+00  -5.00000000e-01  -2.00000000e-01
  -1.00000000e-01  -1.00000000e-05   1.00000000e-05   1.00000000e-01
   2.00000000e-01   5.00000000e-01   1.00000000e+00   9.00000000e+99]
*** Person-weighted mean= 0.04
*** Dollar-weighted mean= 0.14
*** TRACE for variable sub
*** Histogram:
[ 6988  6004 65177 73520 35340 19081  8675   740]
[ -9.00000000e+99  -1.00000000e+03  -1.00000000e-01   1.00000000e-01
   1.00000000e+03   1.00000000e+04   1.00000000e+05   1.00000000e+06
   9.00000000e+99]
*** Person-weighted mean= 966.29
2020_calc1_ptax($B)= 1229.4
2020_calc1_itax($B)= 1924.7
2020_calc2_ptax($B)= 1229.4
2020_calc2_itax($B)= 1735.8
2020_calc3_ptax($B)= 1239.4
2020_calc3_itax($B)= 1785.8

@MattHJensen (Contributor) commented Feb 13, 2018

The key findings from the comment above, which I generated using the script provided by @martinholmer (Thank you!), are that:

  • max_pch between 1 (proposed in this PR) and 9e99 (implicit) makes almost no difference for the results.
  • mtr_cap makes little difference in the range between 0.5 and 0.99, and a negligible difference between 0.7 and 0.99.

Given these findings, it does seem to make sense to drop max_pch as an option (so the implicit value is left at 9e99). Leaving mtr_cap at 0.99 for now is also fine by me.

@martinholmer (Collaborator, Author) commented

@MattHJensen said in pull request #1858:

For example, they are in a high tax bracket and they are experiencing the phase-out of education credits.

I may be using the term improperly, but I would characterize this as a notch because the tax unit's MTR will be lower as it goes higher up the income range.

OK, I see what you're saying. I was just inferring from Dan's comments (which focused on whether we were using a one cent or a one dollar income change and whether the income change was positive or negative) that he was using the term to talk about large discontinuous jumps in tax liability. I don't see any evidence that the outlier filing units we see in the Behavior.response trace output are experiencing large discontinuous jumps in tax liability. But, at the same time, if their income rises by substantial amounts, they eventually will experience lower marginal tax rates because the phase-out will be completed.

So, in hopes of finding some clarity about tax terminology, I did a Google search for "what is tax notch" and the first link was a paper by Joel Slemrod. He describes a "notch" as follows:

A wide range of tax and other policies create discontinuous jumps—notches—in the choice set of individuals or firms, because incremental changes in behavior cause discrete changes in net tax liability.

Slemrod's usage seems to be the same as Dan's usage of the term.

@feenberg (Contributor) commented Feb 13, 2018 via email

@MattHJensen (Contributor) commented Feb 13, 2018

@feenberg said:

Can I ask what is the finite difference we use to calculate the MTR? Is
there any reason to believe a positive difference will be more or less
valid than a negative difference? If they differ, which would be better?

Tax-Calculator uses a penny by default. We don't calculate negative finite differences by default because of the computational burden and because testing results have shown that it doesn't make much of a difference. If we did, we would take the smaller of the two.
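
For context, a one-cent finite-difference MTR calculation looks roughly like the sketch below; this is an illustration of the idea, not Tax-Calculator's actual mtr code, and tax_liability is a hypothetical stand-in for a full tax calculation:

def finite_diff_mtr(tax_liability, earnings, delta=0.01):
    # approximate the MTR by bumping earnings by one cent (the default
    # finite difference) and scaling the tax change back to one dollar
    return (tax_liability(earnings + delta) - tax_liability(earnings)) / delta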

What action do we take to smooth places in the tax code where a dollar
increase in income raises taxable income by $50 or $100? Doesn't this
cause some taxpayers to have a high MTR for a positive finite difference
and a lower rate for a negative difference? Does it make sense to model
this taxpayer as being very sensitive to the tax rate on wage income? In
taxsim we smooth all such step functions over the $50 or $100 range.

We apply the same smoothing as in TaxSim.

What combination of taxes and phasesouts is responsible for the highest
rates?

In my experience in the past, several high mtrs resulted from the taxation of social security benefits. See, for example, this discussion.

@feenberg (Contributor) commented Feb 13, 2018 via email

@martinholmer (Collaborator, Author) commented

In pull request #1858, @MattHJensen said:

In my experience in the past, several high mtrs resulted from the taxation of social security benefits. See, for example, this discussion.

In response, @feenberg said this:

In part of the SS phasein-range each dollar of income adds 1.85 to taxable income, which won't put the taxpayer above .7 alone.

Dan, if you had looked at the link Matt provided, you would have seen (from the TAXSIM output) that you are wrong. Here (in part) is what Matt pointed to:

[screenshot: TAXSIM output showing a federal income tax MTR of 75.85 percent on taxpayer earnings]

This output shows that TAXSIM says the federal income tax MTR on taxpayer earnings is 75.85 percent, which is, in fact, "above .7". And when you add in the payroll tax rate of 15.30 percent, this taxpayer is experiencing a combined MTR on earnings of 91.15 percent.

@martinholmer merged commit ca4a43d into PSLmodels:master on Feb 13, 2018
@martinholmer changed the title from "Add optional arguments to Behavior.response method" to "Add optional argument to Behavior.response method" on Feb 13, 2018
@martinholmer deleted the fix-response branch on February 14, 2018