Convergence guarantees for adaptive Bayesian quadrature methods

Kanagawa, Motonobu; Hennig, Philipp
NeurIPS 2019, 33rd Conference on Neural Information Processing Systems, 8-14 December 2019, Vancouver, Canada

Adaptive Bayesian quadrature (ABQ) is a powerful approach to numerical integration that empirically compares favorably with Monte Carlo integration on problems of medium dimensionality (where non-adaptive quadrature is not competitive). Its key ingredient is an acquisition function that changes as a function of previously collected values of the integrand. While this adaptivity appears to be empirically powerful, it complicates analysis; consequently, there have been no theoretical guarantees so far for this class of methods. In this work, we prove consistency for a broad class of adaptive Bayesian quadrature methods, deriving non-tight but informative convergence rates. To do so, we introduce a new concept, weak adaptivity. In guaranteeing the consistency of ABQ, weak adaptivity plays a role notionally similar to that of detailed balance and ergodicity in Markov chain Monte Carlo methods, which provide sufficient conditions for the consistency of MCMC. Likewise, our results identify a large and flexible class of adaptive Bayesian quadrature rules as consistent, within which practitioners can develop empirically efficient methods.
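To make the generic ABQ loop described in the abstract concrete, the following is a minimal, hypothetical sketch (not the paper's implementation): a Gaussian-process model with an RBF kernel on [0, 1], where the next node is chosen by a value-weighted uncertainty-sampling acquisition (posterior variance scaled by the squared posterior mean), so the design depends on previously collected integrand values, which is what makes the rule adaptive. All function names and the test integrand are illustrative assumptions.

```python
import numpy as np
from scipy.special import erf

def rbf(a, b, ell=0.2):
    """RBF kernel matrix k(a_i, b_j) = exp(-(a_i - b_j)^2 / (2 ell^2))."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def kernel_mean(x, ell=0.2):
    """Integral of k(t, x_i) over t in [0, 1] (closed form for the RBF kernel)."""
    c = np.sqrt(np.pi / 2.0) * ell
    return c * (erf((1.0 - x) / (np.sqrt(2.0) * ell)) + erf(x / (np.sqrt(2.0) * ell)))

def abq(f, n_iter=15, jitter=1e-10):
    """Hypothetical ABQ loop: fit a GP, pick the next node adaptively, repeat."""
    X = np.array([0.5])                   # initial design
    y = np.array([f(X[0])])
    grid = np.linspace(0.0, 1.0, 512)     # candidate points for the acquisition
    for _ in range(n_iter):
        K = rbf(X, X) + jitter * np.eye(len(X))
        k_star = rbf(grid, X)
        mean = k_star @ np.linalg.solve(K, y)
        var = 1.0 - np.einsum('ij,ij->i', k_star, np.linalg.solve(K, k_star.T).T)
        # Adaptive acquisition: posterior variance weighted by the squared
        # posterior mean, so the next node depends on the observed values of f.
        acq = np.clip(var, 0.0, None) * mean ** 2
        x_next = grid[np.argmax(acq)]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    # Bayesian-quadrature posterior mean of the integral: z^T K^{-1} y.
    K = rbf(X, X) + jitter * np.eye(len(X))
    return kernel_mean(X) @ np.linalg.solve(K, y), X

if __name__ == "__main__":
    f = lambda x: np.exp(np.sin(3.0 * x))  # illustrative integrand
    est, nodes = abq(f)
    print(f"ABQ estimate: {est:.6f}  ({len(nodes)} evaluations)")
```

The paper's consistency results concern acquisition functions of this value-dependent kind; plain uncertainty sampling on an unwarped GP would not be adaptive, since its posterior variance ignores the observed values.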


Type: Conference
City: Vancouver
Date: 2019-12-08
Department: Data Science
Eurecom Ref: 6036
Copyright: © NeurIPS. Personal use of this material is permitted. The definitive version of this paper was published in NeurIPS 2019, 33rd Conference on Neural Information Processing Systems, 8-14 December 2019, Vancouver, Canada and is available at:

PERMALINK : https://www.eurecom.fr/publication/6036