<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: Robust estimations for negative binomial GLM</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<link rel="stylesheet" type="text/css" href="R.css" />
</head><body>
<table width="100%" summary="page for glmrob.nb"><tr><td>glmrob.nb</td><td style="text-align: right;">R Documentation</td></tr></table>
<h2>Robust estimations for negative binomial GLM</h2>
<h3>Description</h3>
<p><code>glmrob.nb</code> is used to fit a negative binomial model by robust (bounded influence) methods of Mallows' type. Two approaches are implemented: the PR approach (for <code>bounding.func = "T/T"</code> and <code>bounding.func = "HA/HA"</code>) achieves robustness in the response by applying a bounded function to the Pearson residuals, while the UD approach (for <code>bounding.func = "BY/T"</code>) achieves robustness in the response by applying a bounded function to the unscaled deviance components.
</p>
<h3>Usage</h3>
<pre>
glmrob.nb(y, X, bounding.func = "T/T", offset = rep(0, length(y)),
          weights.on.x = "none",
          options.wx = list(mve.nobs = floor(length(y)*0.8), p.chisq = 0.95),
          c.tukey.beta = 4, c.tukey.sig = 4, c.by.beta = 4,
          a.hampel.beta = 3, b.hampel.beta = 4, c.hampel.beta = 5,
          a.hampel.sig = 3, b.hampel.sig = 4, c.hampel.sig = 5,
          param.ini, minsig = 1e-2, maxsig = 50, minmu = 1e-10, maxmu = 1e120,
          maxit = 30, maxit.sig = 50, sig.prec = 1e-8, tol = 1e-4)
</pre>
<h3>Arguments</h3>
<table summary="R argblock">
<tr valign="top"><td><code>y</code></td>
<td>
<p>The count response variable consisting of non-negative integers.</p>
</td></tr>
<tr valign="top"><td><code>X</code></td>
<td>
<p>The matrix of covariates. The number of rows must match the length of <code>y</code>. Missing values are not supported; the corresponding rows should be discarded beforehand. For an intercept-only model, a single column of ones can be supplied.</p>
</td></tr>
<tr valign="top"><td><code>bounding.func</code></td>
<td>
<p>The bounded functions applied either on the Pearson residuals or on the deviance components; choice among <code>"T/T"</code>, <code>"HA/HA"</code> and <code>"BY/T"</code>. See Details below.</p>
</td></tr>
<tr valign="top"><td><code>offset</code></td>
<td>
<p>A vector of length <code>length(y)</code> added directly to the linear predictor, that is, a covariate whose coefficient is fixed at 1.</p>
</td></tr>
<tr valign="top"><td><code>weights.on.x</code></td>
<td>
<p>Character string specifying the weights on the design: either <code>"none"</code> (all weights equal to one) or <code>"hard"</code> (hard-rejection weights based on robust Mahalanobis distances).</p>
</td></tr>
<tr valign="top"><td><code>options.wx</code></td>
<td>
<p>A list of options for the weights on the design. <code>mve.nobs</code> is the minimum number of observations enclosed in the Minimum Volume Ellipsoid used for computing robust Mahalanobis distances; passed on to <code>cov.rob</code>. <code>p.chisq</code> is the probability determining the chi-squared quantile used as a cut-off value for the hard-rejection weights. A sketch of this construction is given after this table.</p>
</td></tr>
<tr valign="top"><td><code>c.tukey.beta, c.tukey.sig</code></td>
<td>
<p>Tuning constants for Tukey's biweight function, for the robust M-estimations of beta and sigma respectively. Both are used when <code>bounding.func = "T/T"</code>, while <code>c.tukey.sig</code> is also used when <code>bounding.func = "BY/T"</code>.</p>
</td></tr>
<tr valign="top"><td><code>c.by.beta</code></td>
<td>
<p>Tuning constant for the function of Bianco and Yohai (1996). Used only when <code>bounding.func = "BY/T"</code>.</p>
</td></tr>
<tr valign="top"><td><code>a.hampel.beta, b.hampel.beta, c.hampel.beta, a.hampel.sig, b.hampel.sig, c.hampel.sig</code></td>
<td>
<p>Tuning constants <code>a</code>, <code>b</code> and <code>c</code> for Hampel's three-part function, for the robust M-estimations of beta and sigma respectively. Used only when <code>bounding.func = "HA/HA"</code>.</p>
</td></tr>
<tr valign="top"><td><code>param.ini</code></td>
<td>
<p>An optional vector of starting values for <code>c(sigma, beta)</code> (in that order). If missing, the maximum likelihood estimates are computed and used as starting values for the robust estimates.</p>
</td></tr>
<tr valign="top"><td><code>minsig, maxsig</code></td>
<td>
<p>Lower and upper bounds passed on to <code>uniroot</code>, used for updating the robust estimate of sigma given beta. In case of multiple solutions (typically with low values of the tuning constants), increasing <code>minsig</code>, say to 0.05 or 0.1, may help avoid spurious solutions arbitrarily close to 0.</p>
</td></tr>
<tr valign="top"><td><code>minmu, maxmu</code></td>
<td>
<p>Lower and upper bounds of fitted means, for numerical stability.</p>
</td></tr>
<tr valign="top"><td><code>maxit</code></td>
<td>
<p>Maximum number of iterations between robust updates of beta given sigma and sigma given beta.</p>
</td></tr>
<tr valign="top"><td><code>maxit.sig</code></td>
<td>
<p>Maximum number of iterations passed on to <code>uniroot</code> for the robust update of sigma.</p>
</td></tr>
<tr valign="top"><td><code>sig.prec</code></td>
<td>
<p>Numerical tolerance passed on to <code>uniroot</code> for the robust update of sigma.</p>
</td></tr>
<tr valign="top"><td><code>tol</code></td>
<td>
<p>Numerical tolerance for the robust update of beta given sigma and for the maximum likelihood starting values.</p>
</td></tr>
</table>
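<p>As an illustration only (not the package's internal code), hard-rejection design weights of the kind selected by <code>weights.on.x = "hard"</code> could be constructed along the following lines; the object names <code>rob</code>, <code>md2</code> and <code>w.x</code> are hypothetical.
</p>
<pre>
## Sketch: hard-rejection design weights from robust Mahalanobis distances.
## The exact computation inside glmrob.nb may differ.
library(MASS)
X &lt;- cbind(rnorm(100), rnorm(100))                      # illustrative covariates (no intercept column)
rob &lt;- cov.rob(X, quantile.used = floor(nrow(X)*0.8))   # robust location/scatter (MVE), cf. mve.nobs
md2 &lt;- mahalanobis(X, center = rob$center, cov = rob$cov)
cutoff &lt;- qchisq(0.95, df = ncol(X))                    # cf. p.chisq
w.x &lt;- as.numeric(md2 &lt;= cutoff)                        # weight 1 if kept, 0 if rejected
</pre>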
<h3>Details</h3>
<p>The function computes robust M-estimates of the regression parameter beta and of the overdispersion parameter sigma of the negative binomial (generalized linear) model. The user can choose among three sets of estimators: <code>T/T</code> applies Tukey's biweight function to the Pearson residuals for estimating both beta and sigma; <code>HA/HA</code> uses Hampel's three-part function on the Pearson residuals for both beta and sigma; <code>BY/T</code> applies Bianco and Yohai's function to the unscaled deviance components for estimating beta and uses Tukey's biweight for sigma.
</p>
<p>The parameterization used here is such that the conditional variance of the response given the covariates is <i>μ+σμ^2</i>, so that the Poisson distribution with mean <i>μ</i> is the limiting case as <i>σ</i> tends to 0. The sigma considered here is thus the inverse of the theta of <code>glm.nb</code> in the <span class="pkg">MASS</span> package.
</p>
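<p>As a sketch of this relation (assuming the <span class="pkg">MASS</span> package is available), a maximum likelihood fit from <code>glm.nb</code> can be mapped to the present parameterization, for instance to build a <code>param.ini</code> starting value; the object names below are hypothetical.
</p>
<pre>
## sigma here corresponds to 1/theta in MASS::glm.nb
library(MASS)
set.seed(1)
x &lt;- rnorm(200)
y &lt;- rnbinom(200, mu = exp(1 + 0.5*x), size = 2)   # size = theta = 2, i.e. sigma = 0.5
fit.ml &lt;- glm.nb(y ~ x)
sigma.ml &lt;- 1/fit.ml$theta                         # overdispersion on the sigma scale
param.ini &lt;- c(sigma.ml, coef(fit.ml))             # c(sigma, beta), the order expected by glmrob.nb
</pre>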
<h3>Value</h3>
<p>The function returns a list with the following components:
</p>
<table summary="R valueblock">
<tr valign="top"><td><code>coef</code></td>
<td>
<p>a vector whose first element is the robust estimate of sigma, followed by the robust estimates of the intercept and of the covariates' coefficients.</p>
</td></tr>
<tr valign="top"><td><code>stdev</code></td>
<td>
<p>a vector consisting of the (robust) standard errors of the estimates of beta.</p>
</td></tr>
<tr valign="top"><td><code>weights.y</code></td>
<td>
<p>a vector of length <code>length(y)</code> consisting of the resulting weights on the response.</p>
</td></tr>
<tr valign="top"><td><code>weights.x</code></td>
<td>
<p>a vector of length <code>length(y)</code> consisting of the resulting weights on the design.</p>
</td></tr>
</table>
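<p>For instance, Wald-type robust z statistics for the regression coefficients can be formed from these components; the sketch below assumes <code>fit</code> is a (hypothetical) list returned by <code>glmrob.nb</code>.
</p>
<pre>
## Sketch: inference from the returned components
beta.hat &lt;- fit$coef[-1]              # drop sigma (first element) to keep the beta estimates
z.stat   &lt;- beta.hat / fit$stdev      # robust z statistics
p.value  &lt;- 2*pnorm(-abs(z.stat))     # two-sided p-values
which(fit$weights.y &lt; 1)              # observations downweighted in the response
</pre>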
<h3>Note</h3>
<p>This is version 0.4.</p>
<h3>Author(s)</h3>
<p>William H. Aeberhard, <a href="mailto:[email protected]">[email protected]</a>
</p>
<h3>References</h3>
<p>Aeberhard, W. H., Cantoni, E., Heritier, S. (2014) Robust Inference in the Negative Binomial Regression Model with an Application to Falls Data. <em>Biometrics</em> <b>70,</b> 920–931.
</p>
<p>Bianco, A. M., Yohai, V. J. (1996) Robust estimation in the logistic regression model. In <em>Robust Statistics, Data Analysis, and Computer Intensive Methods</em>, H. Rieder (ed), 17–34, New York: Springer-Verlag.
</p>
<p>Cantoni, E., Ronchetti, E. (2001) Robust Inference for Generalized Linear Models. <em>Journal of the American Statistical Association</em> <b>96,</b> 1022–1030.
</p>
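<h3>Examples</h3>
<p>The following is an illustrative sketch (not taken from the package): simulated overdispersed counts with one gross outlier in the response, fitted with the robust T/T estimator. It assumes <code>glmrob.nb</code> is available in the session and that <code>X</code> includes the intercept column.
</p>
<pre>
set.seed(123)
n &lt;- 100
x &lt;- rnorm(n)
X &lt;- cbind(1, x)                                    # design matrix including the intercept
y &lt;- rnbinom(n, mu = exp(1 + 0.5*x), size = 2)      # true sigma = 1/size = 0.5
y[1] &lt;- y[1] + 30                                   # gross outlier in the response
fit.rob &lt;- glmrob.nb(y = y, X = X, bounding.func = "T/T")
fit.rob$coef                                        # c(sigma, intercept, slope)
fit.rob$stdev                                       # robust standard errors of beta
</pre>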
</body></html>