In this article we present a novel multimodal gender recognition system that integrates head and mouth motion information with facial appearance within a unified probabilistic framework. Specifically, we develop a temporal subsystem whose extended feature space consists of parameters related to head and mouth motion, and we introduce a complementary spatial subsystem based on a probabilistic extension of the eigenface approach. Finally, an integration step combines the similarity scores of the two parallel subsystems using a suitable opinion fusion (score fusion) strategy. The experiments show that head and mouth motion, like facial appearance, carry a potentially relevant discriminatory power, and that integrating different sources of biometric information from video sequences is the key strategy for building more accurate and reliable recognition systems.
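The pipeline described above (an appearance-based score, a motion-based score, and a score-level fusion step) can be sketched minimally as follows. This is an illustrative stand-in, not the paper's actual implementation: it uses a plain eigenface projection with distance-to-class-centroid similarity in place of the probabilistic eigenface model, and a simple weighted sum as the opinion fusion rule; the function names, the `w` weight, and the synthetic data are all assumptions for the sketch.

```python
import numpy as np

def eigenface_scores(train_faces, train_labels, probe, n_components=5):
    """Score a probe face against each gender class in eigenface space.

    Illustrative stand-in for the spatial subsystem: PCA eigenfaces via SVD,
    then similarity = inverse distance to each class centroid in the subspace.
    """
    mean = train_faces.mean(axis=0)
    X = train_faces - mean
    # Eigenfaces are the leading right singular vectors of the centred data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    W = Vt[:n_components]                 # (components, pixels)
    proj = X @ W.T                        # training projections
    p = (probe - mean) @ W.T              # probe projection
    labels = np.asarray(train_labels)
    scores = {}
    for g in np.unique(labels):
        centroid = proj[labels == g].mean(axis=0)
        d = np.linalg.norm(p - centroid)
        scores[g] = 1.0 / (1.0 + d)       # map distance to a similarity score
    return scores

def fuse_scores(spatial, temporal, w=0.6):
    """Weighted-sum opinion fusion of the two subsystems' similarity scores."""
    return {g: w * spatial[g] + (1.0 - w) * temporal[g] for g in spatial}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 16-pixel "faces": two well-separated clusters, one per class
    male = rng.normal(1.0, 0.1, (10, 16))
    female = rng.normal(0.0, 0.1, (10, 16))
    faces = np.vstack([male, female])
    labels = ["m"] * 10 + ["f"] * 10
    probe = np.full(16, 1.0)              # probe lies in the "male" cluster

    spatial = eigenface_scores(faces, labels, probe)
    temporal = {"m": 0.5, "f": 0.5}       # placeholder motion-based scores
    fused = fuse_scores(spatial, temporal)
    print(max(fused, key=fused.get))
```

The weighted sum is only one of several opinion fusion strategies (product and max rules are common alternatives); the weight `w` would normally be tuned on a validation set rather than fixed.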
Facial gender recognition using multiple sources of visual information
MMSP 2008, 10th IEEE International Workshop on MultiMedia Signal Processing, October 8-10, 2008, Cairns, Queensland, Australia
© 2008 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
PERMALINK : https://www.eurecom.fr/publication/2631