Journal article
Horseshoe Prior for Bayesian Linear Regression with Hyperbolic Errors
STATISTICS AND APPLICATIONS, Vol.22(3), pp.199-209
01/01/2024
Abstract
It is well known that squared error loss is not robust to outliers. As an alternative, Huber loss may be used for robust regression; however, Huber loss is not readily amenable to Bayesian computation. It has been shown that hyperbolic loss can be regarded as an approximation to Huber loss, and the hyperbolic distribution can be expressed as a scale mixture of normal distributions, which makes it appealing for Bayesian computation. The idea of the Bayesian Huberized lasso was first proposed by Park and Casella (2008), and was formally developed and implemented by Kawakami and Hashimoto (2023). Since the Bayesian Huberized lasso cannot shrink regression coefficients to exactly zero, and its errors have lighter tails than a Cauchy distribution, De and Ghosh (2024) proposed a model that encompasses both hyperbolic and t-errors, with a two-part mixture prior on the regression coefficients, a point mass at zero and a continuous distribution, which can shrink coefficients to exactly zero. The approach of De and Ghosh (2024) can be considered a gold standard for Bayesian model averaging, but posterior computation with such a point-mass mixture prior, popularly known as the spike and slab prior, can be challenging with many covariates. The horseshoe prior is known to mimic some of the desirable properties of spike and slab priors, while being computationally less intensive. Motivated by this attractive property, in this article we develop an algorithm for Bayesian linear regression with hyperbolic errors and horseshoe priors on the regression coefficients. We illustrate, using simulation studies and an analysis of the famous Boston housing dataset, that posterior distributions under horseshoe priors can capture sparsity better than Bayesian lasso priors. For moderate-dimensional regression problems, the spike and slab prior performs better than the horseshoe in capturing the sparsity of regression coefficients. However, we find that Markov chain Monte Carlo (MCMC) algorithms with horseshoe priors have improved mixing, which suggests that Bayesian shrinkage with the horseshoe prior and its generalizations, such as the regularized horseshoe prior, could be a promising direction to explore for high-dimensional robust regression.
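To fix notation for the two ingredients the abstract relies on, the display below is a minimal sketch (notation assumed rather than taken from the article) of a linear model whose errors are a scale mixture of normals, together with the standard horseshoe hierarchy; the article's exact parameterization of the hyperbolic mixing distribution may differ.

\[
  y_i = \mathbf{x}_i^\top \boldsymbol{\beta} + \varepsilon_i, \qquad
  \varepsilon_i \mid \omega_i \sim \mathcal{N}(0, \omega_i), \qquad
  \omega_i \overset{\text{iid}}{\sim} g(\omega), \quad i = 1, \dots, n,
\]
where choosing the mixing density \(g\) to be a suitable generalized inverse Gaussian density makes the marginal error distribution hyperbolic (the scale-mixture property the abstract exploits), and the horseshoe prior places
\[
  \beta_j \mid \lambda_j, \tau \sim \mathcal{N}(0, \lambda_j^2 \tau^2), \qquad
  \lambda_j \overset{\text{iid}}{\sim} \mathrm{C}^{+}(0, 1), \qquad
  \tau \sim \mathrm{C}^{+}(0, 1), \quad j = 1, \dots, p,
\]
with \(\mathrm{C}^{+}(0,1)\) the standard half-Cauchy distribution. Conditional on the mixing variables \(\omega_i\) and the scales \(\lambda_j\) and \(\tau\), the model is Gaussian, which is what makes Gibbs-type MCMC updates tractable.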
Details
- Title
- Horseshoe Prior for Bayesian Linear Regression with Hyperbolic Errors
- Creators
- Shamriddha De - Univ Iowa, Dept Stat & Actuarial Sci, Iowa City, IA 52242 USA
- Joyee Ghosh - Univ Iowa, Dept Stat & Actuarial Sci, Iowa City, IA 52242 USA
- Resource Type
- Journal article
- Publication Details
- STATISTICS AND APPLICATIONS, Vol.22(3), pp.199-209
- Publisher
- Soc Statistics Computer & Applications
- ISSN
- 2454-7395
- Number of pages
- 11
- Grant note
- DMS-1612763 / NSF; National Science Foundation (NSF)
- Language
- English
- Date published
- 01/01/2024
- Academic Unit
- Statistics and Actuarial Science
- Record Identifier
- 9984774237602771