ASILOMAR 2019, Asilomar Conference on Signals, Systems, and Computers, 3-6 November 2019, Pacific Grove, CA, USA
Sparse Bayesian Learning (SBL) is an efficient and well-studied framework for sparse signal recovery. SBL relies on a parameterized prior on the sparse signal to be estimated. The prior is chosen (with estimated hyperparameters) such that it encourages sparsity in the representation of the signal. However, SBL does not scale with the problem dimensions due to the computational complexity associated with matrix inversion. To address this issue, there exist low-complexity methods based on approximate Bayesian inference. Various state-of-the-art approximate inference methods are based on variational Bayesian (VB) inference or on message passing algorithms such as belief propagation (BP) or expectation propagation. Moreover, these approximate inference methods can be unified under the optimization of the Bethe free energy with appropriate constraints. SBL allows treating more general signal models through a hierarchical prior formulation, which can be more sparsity-inducing than, e.g., a Laplacian prior. In this paper, we study the convergence behaviour of the mean and variance of the unknown parameters in SBL under approximate Bayesian inference.
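To make the scaling issue concrete, the following is a minimal sketch of the classic SBL (EM) iteration in the style of Tipping's relevance vector machine; the variable names and the fixed noise variance are illustrative assumptions, not the paper's algorithm. The posterior covariance update requires inverting an M x M matrix at every iteration, the O(M^3) step that the abstract refers to and that approximate inference methods avoid.

```python
import numpy as np

def sbl_em(A, y, sigma2=1e-2, n_iter=50):
    """Sketch of classic SBL via EM (names and fixed noise variance are
    illustrative assumptions). A: N x M dictionary, y: N observations."""
    M = A.shape[1]
    alpha = np.ones(M)  # precision hyperparameters of the Gaussian prior
    for _ in range(n_iter):
        # Posterior covariance: an M x M inversion -> O(M^3) per iteration,
        # the scaling bottleneck mentioned in the text.
        Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(alpha))
        mu = Sigma @ A.T @ y / sigma2      # posterior mean
        # EM update of the hyperparameters: large alpha_i prunes component i
        alpha = 1.0 / (mu ** 2 + np.diag(Sigma))
    return mu, Sigma, alpha
```

Approximate inference schemes (VB, BP, expectation propagation) replace the exact covariance computation with cheaper per-component updates, which is what makes them attractive at large M.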
© 2019 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.