Improving training of likelihood-based generative models with Gaussian homotopy

Tran, Ba-Hien; Franzese, Giulio; Michiardi, Pietro; Filippone, Maurizio

Generative Models (GMs) have recently gained popularity thanks to their success in various domains. In computer vision, for instance, they are able to generate astonishingly realistic images. Likelihood-based GMs are fast at generating new samples, given that they need a single model evaluation per sample, but their sample quality is usually lower than that of score-based Diffusion Models (DMs). In this work, we verify that the success of score-based DMs is in part due to the process of data smoothing, by incorporating this process into the training of likelihood-based GMs. In the optimization literature, this process of data smoothing is referred to as Gaussian homotopy (GH), and it has strong theoretical grounding. Crucially, GH does not incur computational overhead, and it can be implemented by adding one line of code to any training loop. We report results on various GMs, including Variational Autoencoders and Normalizing Flows, applied to image datasets, demonstrating that GH enables significant improvements in sample quality.
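As a rough illustration of the "one line of code" claim, the sketch below perturbs each training batch with Gaussian noise whose scale decays over the course of training. The function name, the linear decay schedule, and the `sigma_max` parameter are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def gh_smooth(batch, step, total_steps, sigma_max=1.0):
    """Gaussian-homotopy-style data smoothing (illustrative sketch).

    Adds zero-mean Gaussian noise to the batch, with a noise level that
    decays linearly from sigma_max at step 0 to zero at the final step.
    The linear schedule is an assumption for illustration only.
    """
    sigma = sigma_max * (1.0 - step / total_steps)  # decaying noise level
    return batch + sigma * np.random.randn(*batch.shape)

# In a standard training loop, the smoothing amounts to one extra line:
#
# for step, batch in enumerate(loader):
#     batch = gh_smooth(batch, step, total_steps)  # <-- the added line
#     loss = training_step(model, batch)
```

At the end of training the noise level reaches zero, so the model is eventually fit on the original, unsmoothed data.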

Data Science
Eurecom Ref:
© 2023 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.