Fits a logistic regression by maximum penalized likelihood, in which the penalty function is the Jeffreys invariant prior. This removes the O(1/n) term from the asymptotic bias of estimated coefficients (Firth, 1993), and always yields finite estimates and standard errors (whereas the MLE is infinite in situations of complete or quasi-complete separation).
brlr(formula, data = NULL, offset, weights, start, ..., subset, dispersion = 1, na.action = na.omit, contrasts = NULL, x = FALSE, br = TRUE, control = list(maxit = 200))
an optional set of starting values (of the model coefficients) for the optimization
further arguments passed to or from other methods
an optional vector specifying a subset of observations to be used in the fitting process
an optional parameter for over- or under-dispersion relative to binomial variation; the default is 1
a function which indicates what should happen when the data contain NAs. The default is set by the na.action setting of options, and is na.fail if that is unset. The 'factory-fresh' default is na.omit
should the model matrix be included in the resultant object?
a logical switch indicating whether the bias-reducing penalty is applied; default is TRUE
brlr has essentially the same user interface as glm(family = binomial, ...); see the example below.
A model object of class "brlr", with components:
deviance minus 2*logdet(Fisher information)
logical, did the optimization converge?
number of iterations of the optimization algorithm (BFGS)
the observed binomial proportions, as for glm
the family object, binomial with logistic link, as for glm
the diagonal elements of the model's "hat" matrix
the estimated Fisher information matrix
1. Methods specific to the brlr class of models are print.brlr, summary.brlr, print.summary.brlr, vcov.brlr, add1.brlr and drop1.brlr.
Others are inherited from the glm class.
2. The results of the bias-reduced fit typically have regression coefficients slightly closer to zero than the maximum likelihood estimates, and slightly smaller standard errors. (In logistic regression, bias reduction is achieved by a slight shrinkage of coefficients towards zero; thus bias reduction also reduces variance.) The difference is typically small except in situations of sparse data and/or complete separation. See also Heinze and Schemper (2002), Zorn (2005).
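The mechanism behind the bias-reduced fit can be sketched outside R. The following Python illustration (not the brlr implementation itself, and using made-up data) applies Firth's modified-score Newton iteration for binary logistic regression: the usual score X'(y - p) is replaced by X'(y - p + h(1/2 - p)), where h holds the hat-matrix diagonals. On completely separated data the estimates stay finite:

```python
import numpy as np

def firth_logistic(X, y, n_iter=200, tol=1e-10):
    """Firth-penalized logistic regression via modified-score Newton steps."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = p * (1.0 - p)                    # binomial variance weights
        XtW = X.T * W
        info = XtW @ X                       # Fisher information X'WX
        # hat-matrix diagonals h_i = W_i * x_i' (X'WX)^{-1} x_i
        h = np.einsum('ij,ji->i', X @ np.linalg.inv(info), XtW)
        # Firth's modified score: X'(y - p + h(1/2 - p))
        score = X.T @ (y - p + h * (0.5 - p))
        step = np.linalg.solve(info, score)
        step *= min(1.0, 2.0 / (np.linalg.norm(step) + 1e-12))  # mild damping
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Made-up, completely separated data: y = 0 exactly when x < 0, so the
# ML estimate of the slope is infinite, but the Firth estimate is finite.
X = np.column_stack([np.ones(6), [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
beta = firth_logistic(X, y)
```

By the symmetry of this toy sample the intercept estimate is essentially zero, and the slope settles at a finite positive value, illustrating the shrinkage towards zero described above; an unpenalized Newton fit of the same data would push the slope towards infinity.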
Firth, D. (1993) Bias reduction of maximum likelihood estimates. Biometrika 80, 27-38.

Firth, D. (1992) Bias reduction, the Jeffreys prior and GLIM. In Advances in GLIM and Statistical Modelling, Eds. L. Fahrmeir, B. J. Francis, R. Gilchrist and G. Tutz, pp. 91-100. New York: Springer.

Heinze, G. and Schemper, M. (2002) A solution to the problem of separation in logistic regression. Statistics in Medicine 21, 2409-2419.

Zorn, C. (2005) A solution to separation in binary response models. Political Analysis 13, 157-170.
Habitat preferences of lizards, from McCullagh and Nelder (1989, p. 129); this reproduces the results given in Firth (1992).

First the standard maximum-likelihood fit:

    data(lizards)
    glm(cbind(grahami, opalinus) ~ height + diameter + light + time,
        family = binomial, data = lizards)

Now the bias-reduced version:

    brlr(cbind(grahami, opalinus) ~ height + diameter + light + time,
         data = lizards)
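The lizards data above are well behaved, so the two fits differ only slightly. To see the separation problem mentioned in the Description, here is a hedged Python sketch (made-up data, not the lizards data) of plain maximum-likelihood Newton iterations, as iteratively reweighted least squares would perform them: on a completely separated sample the slope estimate grows without bound instead of converging:

```python
import numpy as np

# Made-up completely separated data: y = 0 exactly when x < 0.
X = np.column_stack([np.ones(6), [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

beta = np.zeros(2)
slopes = []
for _ in range(30):                      # plain Newton steps on the ML score
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    W = p * (1.0 - p)
    info = (X.T * W) @ X                 # Fisher information X'WX
    beta = beta + np.linalg.solve(info, X.T @ (y - p))
    slopes.append(beta[1])
# The slope never settles: every iteration pushes it further from zero.
```

On data like these, glm in R reports fitted probabilities numerically 0 or 1 together with very large coefficients and standard errors, whereas brlr returns a finite, usable fit.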