Multimodal variational autoencoders for sensor fusion and cross generation

Da Silva-Filarder, Matthieu; Ancora, Andrea; Filippone, Maurizio; Michiardi, Pietro
ICMLA 2021, 20th IEEE International Conference on Machine Learning and Applications, 13-15 December 2021, Pasadena, CA, USA (Virtual Event)

The human cognitive system, which builds representations of its surroundings by exploiting multiple senses, has inspired several applications that mimic this remarkable property. The key to learning rich representations of data collected by multiple, diverse sensors is to design generative models that can ingest multimodal inputs and merge them in a common space. This makes it possible to: i) obtain a coherent generation of samples across all modalities, ii) enable cross-sensor generation, using available modalities to generate missing ones, and iii) exploit synergy across modalities to increase reconstruction quality. In this work, we study multimodal variational autoencoders and propose new methods for learning a joint representation that can both improve synergy and enable cross generation of missing sensor data. We evaluate these approaches on well-established datasets as well as on a new dataset that involves multimodal object detection with three modalities. Our results shed light on the role of joint posterior modeling and training objectives, indicating that even simple and efficient heuristics allow the synergy and cross-generation properties to coexist.
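To make the joint-posterior fusion mentioned in the abstract concrete, below is a minimal PyTorch sketch of a product-of-experts (PoE) fusion of per-modality Gaussian posteriors, one common choice for multimodal VAEs. The abstract does not fix a specific joint posterior, so the PoE form, the two-modality setup, and all names (fuse_poe, enc_image, dec_audio) are illustrative assumptions, not the paper's method.

    # Hypothetical sketch: product-of-experts fusion for a multimodal VAE.
    # The PoE form and all names here are illustrative assumptions.
    import torch

    def fuse_poe(mus, logvars):
        """Fuse per-modality Gaussian posteriors q(z|x_m) into one Gaussian.

        With a standard-normal prior expert, precisions add, and the fused
        mean is the precision-weighted average of the expert means.
        """
        # Prior expert N(0, I): precision 1, mean 0.
        precisions = [torch.ones_like(mus[0])] + [torch.exp(-lv) for lv in logvars]
        weighted_mus = [torch.zeros_like(mus[0])] + [
            m * p for m, p in zip(mus, precisions[1:])
        ]
        total_precision = torch.stack(precisions).sum(dim=0)
        mu = torch.stack(weighted_mus).sum(dim=0) / total_precision
        logvar = -torch.log(total_precision)
        return mu, logvar

    def reparameterize(mu, logvar):
        # Standard reparameterization trick: z = mu + sigma * eps.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    # Cross-sensor generation: encode only the available modality, fuse it
    # with the prior expert, then decode the missing modality, e.g.:
    #   mu, logvar = fuse_poe([mu_img], [logvar_img])
    #   x_audio_hat = dec_audio(reparameterize(mu, logvar))

Because the prior expert is always present, any subset of modalities can be dropped at inference time, which is what makes cross-sensor generation possible in this kind of fusion.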


Type: Conference
City: Pasadena
Date: 2021-12-13
Department: Data Science
Eurecom Ref: 6799
Copyright: © 2021 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Permalink: https://www.eurecom.fr/publication/6799