Most facial expression analysis algorithms assume a frontal or near-frontal head pose. This assumption becomes an important limitation when the input comes from real systems. In this article we present a new approach that robustly determines facial expression independently of the head pose. Our analysis-synthesis cooperation, made possible by a highly realistic 3D head model and by Kalman filtering to predict the user's pose, allows the facial features of interest to be tracked correctly. Adapting near-frontal analysis techniques to the predicted pose enables such algorithms to be used with moving speakers.
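The abstract mentions Kalman filtering to predict the user's pose between frames. As a minimal sketch of that idea (the paper does not specify its state model, so the constant-velocity model, state layout, and noise parameters below are assumptions for illustration), a filter over the three head-pose angles could look like:

```python
import numpy as np

class PoseKalmanFilter:
    """Hypothetical constant-velocity Kalman filter for 3D head pose.

    State vector: [yaw, pitch, roll, yaw_rate, pitch_rate, roll_rate].
    Only the three angles are measured; rates are inferred by the filter.
    """

    def __init__(self, dt=1.0 / 25.0, process_var=1e-3, meas_var=1e-2):
        self.x = np.zeros(6)                # state estimate
        self.P = np.eye(6)                  # state covariance
        self.F = np.eye(6)                  # constant-velocity transition
        self.F[:3, 3:] = dt * np.eye(3)     # angle += rate * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe angles only
        self.Q = process_var * np.eye(6)    # process noise
        self.R = meas_var * np.eye(3)       # measurement noise

    def predict(self):
        # Propagate state and covariance one frame ahead.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]                   # predicted pose angles

    def update(self, z):
        # Correct the prediction with a measured pose (yaw, pitch, roll).
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]
```

Calling `predict()` before analyzing each new frame gives the expected head pose, so near-frontal feature extractors can be adapted to it; `update()` then folds in the pose actually measured from that frame.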
Facial expression analysis robust to 3D head pose motion
ICME 2002 - IEEE International Conference on Multimedia and Expo, August 26-29 2002, Lausanne, Switzerland
© 2002 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
PERMALINK : https://www.eurecom.fr/publication/945