Posterior variance predictions in sparse Bayesian learning under approximate inference techniques

Kurisummoottil Thomas, Christo; Slock, Dirk TM
ASILOMAR 2020, 54th Asilomar Conference on Signals, Systems, and Computers, 1-5 November 2020, Virtual Conference

Sparse Bayesian Learning (SBL), initially proposed in the Machine Learning (ML) literature, is an efficient and well-studied framework for sparse signal recovery. SBL uses hierarchical Bayes with a decorrelated Gaussian prior whose variance profile is also to be estimated; this is more sparsity-inducing than, e.g., a Laplacian prior. However, SBL does not scale with the problem dimensions, owing to the computational complexity of the matrix inversion required for Linear Minimum Mean Squared Error (LMMSE) estimation. To address this issue, various low-complexity approximate Bayesian inference techniques have been introduced for the LMMSE component, including Variational Bayesian (VB) inference, Space Alternating Variational Estimation (SAVE), and Message Passing (MP) algorithms such as Belief Propagation (BP), Expectation Propagation (EP), and Approximate MP (AMP). These algorithms may converge to the correct LMMSE estimate. However, in ML we are often also interested in posterior variance information. SBL via BP or SAVE provides (largely) underestimated variance estimates, whereas AMP-style algorithms may provide more accurate variance information. State Evolution analysis may show convergence of the (sum) MSE to the MMSE value, but we are also interested in the MSE of the individual components. To this end, using random matrix theory results, we show that in the large system limit, with i.i.d. entries in the measurement matrix, the per-component MSE predicted by BP or xAMP converges to the Bayes-optimal value.
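
To make the complexity bottleneck and the variance-underestimation issue concrete, here is a minimal NumPy sketch (illustrative only, not the paper's algorithms; the problem sizes, variable names, and the mean-field approximation shown are assumptions). It computes the exact LMMSE posterior of the standard SBL model for fixed hyperparameters, the classic EM update of the variance profile (Tipping 2001), and a SAVE/VB-style diagonal variance approximation that ignores posterior correlations and therefore lower-bounds the exact per-component variances.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative setup: y = A x + n, i.i.d. Gaussian measurement matrix,
    # sparse x drawn from the hierarchical per-component variance prior,
    # known noise variance sigma2. Sizes are assumptions for the demo.
    M, N = 200, 400
    sigma2 = 0.01
    A = rng.standard_normal((M, N)) / np.sqrt(M)      # i.i.d. entries, as in the large-system analysis
    gamma = np.where(rng.random(N) < 0.1, 1.0, 1e-6)  # variance profile: few active components
    x = rng.standard_normal(N) * np.sqrt(gamma)
    y = A @ x + np.sqrt(sigma2) * rng.standard_normal(M)

    # Exact LMMSE posterior for fixed gamma:
    #   Sigma = (A^T A / sigma2 + diag(1/gamma))^{-1},  mu = Sigma A^T y / sigma2.
    # This N x N inversion is the scaling bottleneck the abstract refers to.
    Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ A.T @ y / sigma2

    # Classic SBL EM update of the hyperparameters (variance profile):
    gamma_new = mu**2 + np.diag(Sigma)

    # Mean-field (SAVE/VB-style) diagonal variance approximation:
    #   var_i ~ 1 / (||a_i||^2 / sigma2 + 1/gamma_i).
    # Since (P^{-1})_ii >= 1/P_ii for positive definite P, this never
    # exceeds the exact posterior variance, i.e., it underestimates it.
    col_energy = np.sum(A**2, axis=0)
    var_mf = 1.0 / (col_energy / sigma2 + 1.0 / gamma)

    print("mean exact posterior variance:", np.diag(Sigma).mean())
    print("mean mean-field variance     :", var_mf.mean())

Running this sketch, the mean-field variances come out smaller than the exact LMMSE diagonal, matching the underestimation behavior attributed to SAVE/BP above; the paper's point is that AMP-style algorithms avoid this in the large-system i.i.d. regime.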


Type: Conference
Date: 2020-11-01
Department: Communication systems
Eurecom Ref: 6384
Copyright: Asilomar

PERMALINK : https://www.eurecom.fr/publication/6384