EURECOM DATA Talk : “Back to the future: revisiting inducing-variable approximations with a new Bayesian approach to sparse Gaussian processes”

Simone Rossi (EURECOM, Data Science Department)

Date: March 25th 2021
Location: EURECOM

Abstract: Bayesian kernel machines based on Gaussian processes combine the modeling flexibility of kernel methods with the ability to carry out sound quantification of uncertainty. Modeling and inference with Gaussian processes have evolved considerably over the last few years, with key contributions in the direction of scalability to any number of data points. Variational inference techniques based on inducing variables provide an elegant framework for scalable posterior estimation, together with a robust procedure for point estimation of the inducing inputs, i.e. the locations of the inducing variables. In this talk, we challenge the common wisdom that optimizing the inducing inputs in the variational framework yields optimal performance. We show that, by revisiting classical model approximations such as the fully independent training conditional (FITC), endowed with powerful sampling-based inference methods, treating both the inducing inputs and the GP hyper-parameters in a Bayesian way can improve performance significantly. Building on recent advances in Hamiltonian Monte Carlo methods, we develop a fully Bayesian approach to scalable GP and deep GP models, and demonstrate its state-of-the-art performance through an extensive experimental campaign.

Reference: Rossi, S., Heinonen, M., Bonilla, E. V., Shen, Z., & Filippone, M. (2021). Sparse Gaussian Processes Revisited: Bayesian Approaches to Inducing-Variable Approximations. AISTATS 2021.

Biography: Simone has been a PhD candidate at EURECOM since 2018, under the supervision of Prof. Maurizio Filippone. He holds an MSc in Computer Engineering from Telecom Paris (France) and an MSc in Electronic Engineering from Politecnico di Torino (Italy). In 2017, he was at Columbia University (New York, USA), working on new prototypes of hardware accelerators for deep learning models in the context of high-efficiency systems-on-chip.
His main research focuses on novel methods for scalable Bayesian inference in deep models (including deep neural networks and Gaussian processes), using approximate variational inference techniques and Monte Carlo methods. In the past few years, he has given talks and poster presentations at several machine learning conferences, such as ICML 2019 (Long Beach, USA), the Bayesian Deep Learning workshop at NeurIPS 2019 (Vancouver, Canada), and NeurIPS 2020 and AISTATS 2021 (both virtual).
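The idea described in the abstract can be illustrated with a minimal NumPy sketch: compute the FITC log marginal likelihood of a sparse GP as a function of the inducing inputs, and sample those inputs with Hamiltonian Monte Carlo instead of optimizing them. Everything below (toy data, kernel settings, a flat prior on the inducing inputs, step sizes, finite-difference gradients) is a hypothetical simplification for illustration, not the authors' implementation, which also treats the GP hyper-parameters in a Bayesian way.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, ls=1.0, var=1.0):
    """RBF kernel between row-vector sets A (n,d) and B (m,d)."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return var * np.exp(-0.5 * d2 / ls**2)

def fitc_log_marginal(Z, X, y, sigma2=0.1):
    """FITC log marginal likelihood as a function of the inducing inputs Z."""
    Kmm = rbf(Z, Z) + 1e-6 * np.eye(len(Z))        # jitter for stability
    Knm = rbf(X, Z)
    Q = Knm @ np.linalg.solve(Kmm, Knm.T)          # Nystroem approximation of Knn
    corr = np.maximum(1.0 - np.diag(Q), 0.0)       # diag(Knn - Q); k(x,x)=1 here
    S = Q + np.diag(corr + sigma2)
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (y @ np.linalg.solve(S, y) + logdet + len(y) * np.log(2 * np.pi))

def num_grad(f, z, eps=1e-5):
    """Central finite-difference gradient (autodiff would be used in practice)."""
    g = np.empty_like(z)
    for i in range(z.size):
        e = np.zeros_like(z); e[i] = eps
        g[i] = (f(z + e) - f(z - e)) / (2 * eps)
    return g

def hmc_step(logp, z, step=0.02, n_leap=10):
    """One HMC transition targeting exp(logp) over the inducing inputs."""
    p = rng.normal(size=z.shape)
    z_new, p_new = z.copy(), p.copy()
    g = num_grad(logp, z_new)
    for _ in range(n_leap):                        # leapfrog integration
        p_new = p_new + 0.5 * step * g
        z_new = z_new + step * p_new
        g = num_grad(logp, z_new)
        p_new = p_new + 0.5 * step * g
    # Metropolis correction: compare Hamiltonians of the start and end states.
    log_alpha = (logp(z_new) - 0.5 * p_new @ p_new) - (logp(z) - 0.5 * p @ p)
    return z_new if np.log(rng.uniform()) < log_alpha else z

# Toy 1-D regression problem with m inducing inputs; a flat prior on Z is
# assumed, so the (unnormalized) posterior is just the FITC marginal likelihood.
n, m = 40, 5
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)
logp = lambda z: fitc_log_marginal(z.reshape(m, 1), X, y)

z = rng.uniform(-3, 3, size=m)                     # flattened inducing inputs
samples = []
for _ in range(20):                                # short chain, for illustration
    z = hmc_step(logp, z)
    samples.append(z.copy())
```

The resulting chain yields a set of inducing-input configurations rather than a single optimized one, so downstream predictions can average over them, which is the source of the improved uncertainty quantification argued for in the talk.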

Permalink: https://www.eurecom.fr/seminar/100632