The composition of multiple Gaussian Processes as a Deep Gaussian Process (DGP) enables a deep probabilistic approach to flexibly quantify uncertainty and carry out model selection in various learning scenarios. In this work, we introduce a novel formulation of DGPs based on random Fourier features that we train using stochastic variational inference. Our proposal yields an efficient way of training DGP architectures without compromising on predictive performance. Through a series of experiments, we illustrate how our model compares favorably to other state-of-the-art inference methods for DGPs on both regression and classification tasks. We also demonstrate how an asynchronous implementation of stochastic gradient optimization can exploit the computational power of distributed systems for large-scale DGP learning.
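To illustrate the random Fourier feature idea underlying the proposed DGP formulation, the following is a minimal sketch (not the authors' implementation): each GP layer with an RBF kernel can be approximated by a linear model on random trigonometric features, and layers can be stacked by feeding one layer's outputs into the next. The function name `rff_features` and all parameter values are illustrative assumptions.

```python
import numpy as np

def rff_features(X, lengthscale=1.0, num_features=100, seed=0):
    """Map inputs X (N x D) to random Fourier features approximating an RBF kernel.

    phi(x) @ phi(x') approximates k(x, x') for the unit-variance RBF kernel.
    """
    rng = np.random.default_rng(seed)
    _, D = X.shape
    # Spectral frequencies drawn from the Gaussian spectral density of the RBF kernel.
    Omega = rng.standard_normal((D, num_features)) / lengthscale
    proj = X @ Omega
    # Concatenated cos/sin features, scaled so their inner product approximates the kernel.
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1) / np.sqrt(num_features)

# One GP layer is then approximated by a (Bayesian) linear model on these features;
# stacking layers applies the same construction to the previous layer's outputs.
X = np.random.randn(5, 3)
Phi = rff_features(X)
W = np.random.randn(Phi.shape[1], 2)  # weights with a Gaussian prior (a single sample here)
F = Phi @ W                           # latent function values of one layer
```

In the paper's setting, the distribution over the weights (and optionally the spectral frequencies) is learned with stochastic variational inference rather than sampled once as above.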
Practical learning of deep Gaussian processes via random Fourier features
Submitted to arXiv, October 14, 2016
Type:
Report
Date:
2016-10-14
Department:
Data Science
Eurecom Ref:
5028
Copyright:
© EURECOM. Personal use of this material is permitted. The definitive version of this paper was submitted to arXiv on October 14, 2016 and is available at:
See also:
PERMALINK: https://www.eurecom.fr/publication/5028