ICASSP 2019, International Conference on Acoustics, Speech, and Signal Processing, 12-17 May 2019, Brighton, UK
In this paper, we address the fundamental problem of sparse Bayesian learning (SBL) when the received signal is a high-order tensor. We further consider the problem of dictionary learning (DL), in which the tensor observations are assumed to be generated from a Kronecker-structured (KS) dictionary matrix multiplied by the sparse coefficients. Exploiting the tensorial structure reduces the number of degrees of freedom in the learning problem, since the dimensions of each factor matrix are significantly smaller than those of the matricized dictionary obtained by vectorizing the observations. We propose a novel fast algorithm, space alternating variational estimation with dictionary learning (SAVED-KS), which is a version of variational Bayes (VB) SBL pushed to the scalar level. As with SAGE (space-alternating generalized expectation maximization) relative to EM, the component-wise updates of SAVED-KS relative to standard SBL make it less likely to get stuck in bad local optima, and its inherent damping (more cautious progression) also typically leads to faster convergence of the non-convex optimization process. Simulation results show that the proposed algorithm achieves a faster convergence rate and lower mean squared error (MSE) than the alternating least squares (ALS) based method for tensor decomposition.
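To illustrate the degrees-of-freedom reduction claimed above, the following sketch compares the parameter count of a Kronecker-structured dictionary with that of the corresponding unstructured matricized dictionary. The factor-matrix dimensions and the use of NumPy are our own illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical factor-matrix sizes for a 3rd-order tensor observation
# (illustrative assumptions, not dimensions from the paper).
factor_shapes = [(8, 16), (10, 20), (12, 24)]

# A Kronecker-structured dictionary A = A1 kron A2 kron A3 is
# parameterized only by the entries of its factor matrices.
ks_params = sum(m * n for m, n in factor_shapes)

# Vectorizing the observations instead yields one large unstructured
# dictionary whose dimensions are products of the factor dimensions.
M = int(np.prod([m for m, _ in factor_shapes]))  # 8*10*12  = 960 rows
N = int(np.prod([n for _, n in factor_shapes]))  # 16*20*24 = 7680 columns
full_params = M * N

print(ks_params)    # 616 free parameters for the KS dictionary
print(full_params)  # 7372800 free parameters for the unstructured one

# Building the full dictionary from the factors via np.kron
# confirms the matricized shape.
A = np.kron(np.kron(np.ones(factor_shapes[0]),
                    np.ones(factor_shapes[1])),
            np.ones(factor_shapes[2]))
print(A.shape)      # (960, 7680)
```

Even at these modest sizes, the structured parameterization is four orders of magnitude smaller, which is what makes the per-factor learning problem tractable.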
© 2019 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.