A visual analysis/synthesis feedback loop for unconstrained face tracking

Valente, Stéphane; Dugelay, Jean-Luc
Research report RR-99-051

We propose a novel approach for face tracking built around a visual feedback loop: instead of trying to adapt a more or less realistic artificial face model to an individual, we construct, from precise range data, a person-specific texture and wire-frame face model whose realism allows the analysis and synthesis modules to visually cooperate in the image plane, by directly using 2D patterns synthesized by the face model. Unlike other feedback loops found in the literature, we do not explicitly handle the complex 3D geometric data of the face model, which makes real-time manipulations possible. Our main contribution is a complete face tracking and pose estimation framework, with few assumptions about the rigid motion of the face (allowing large rotations out of the image plane), and without marks or makeup on the user's face. Our framework feeds the feature-tracking procedure with synthesized facial patterns, controlled by an extended Kalman filter. Within this framework, we present original and efficient geometric and photometric modelling techniques, and a reformulation of a block-matching algorithm so that it matches synthesized patterns with real images and avoids background areas during the matching. We also offer numerical evaluations assessing the validity of our algorithms, as well as new developments in the context of facial animation. Our face tracking algorithm may be used to recover the 3D position and orientation of a real face and generate an MPEG-4 animation stream that reproduces the rigid motion of the face with a synthetic face model. It may also serve as a preprocessing step for further facial expression analysis algorithms, since it locates the facial features in the image plane and provides 3D information to take into account the possible coupling between the pose and expressions of the analysed facial images.
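To give a concrete picture of the pose-correction step the abstract describes, here is a minimal, self-contained sketch of an extended Kalman filter correcting a 6-DOF head pose from tracked 2D feature positions. It is not the report's implementation: the pinhole projection, the small-angle rotation, the noise levels, and all names are illustrative assumptions, and the rendering/matching stage is stubbed out by simulated measurements.

```python
import numpy as np

def project(pose, model_pts, f=500.0):
    """Project 3D model points into the image plane under a 6-DOF pose.
    pose = [rx, ry, rz, tx, ty, tz]; a small-angle rotation keeps the
    sketch short (a real tracker would use a full rotation parametrization)."""
    rx, ry, rz, tx, ty, tz = pose
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    cam = model_pts @ R.T + np.array([tx, ty, tz])
    return f * cam[:, :2] / cam[:, 2:3]     # pinhole projection

def numerical_jacobian(h, x, eps=1e-5):
    """Finite-difference Jacobian of a measurement function h at state x."""
    y0 = h(x)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (h(xp) - y0) / eps
    return J

def ekf_update(x, P, z, h, R_meas):
    """One extended Kalman filter correction step on pose x, covariance P."""
    hx = h(x).ravel()
    H = numerical_jacobian(lambda s: h(s).ravel(), x)
    y = z.ravel() - hx                      # innovation
    S = H @ P @ H.T + R_meas                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    return x + K @ y, (np.eye(x.size) - K @ H) @ P

# Hypothetical per-frame use: predict the pose, render and match the
# synthesized patterns (simulated here), then correct the pose estimate.
model_pts = np.array([[0.0, 0.0, 0.0], [3.0, 1.0, 0.5],
                      [-3.0, 1.0, 0.5], [0.0, -4.0, 1.0]])
true_pose = np.array([0.02, -0.05, 0.01, 0.5, -0.3, 60.0])
x = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 55.0])    # predicted pose
P = np.diag([1e-3, 1e-3, 1e-3, 1.0, 1.0, 25.0])  # prediction uncertainty
z = project(true_pose, model_pts) + np.random.normal(0.0, 0.2, (4, 2))
R_meas = np.eye(8) * 0.04                        # 2 coords per feature
x, P = ekf_update(x, P, z, lambda s: project(s, model_pts), R_meas)
```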
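The abstract also mentions a reformulated block-matching algorithm that compares synthesized patterns with the real image while avoiding background areas. One plausible reading is a masked, normalized SSD over a validity mask that marks the rendered face pixels; the report's exact matching criterion may differ, and the function below and its parameters are hypothetical.

```python
import numpy as np

def masked_block_match(image, template, mask, center, search=8):
    """Find the (dy, dx) displacement minimizing a masked SSD score
    between a synthesized template and the real image. 'mask' is True
    where the template shows the rendered face, so background pixels
    never contribute to the score."""
    h, w = template.shape
    cy, cx = center
    n_valid = mask.sum()                   # assumed > 0
    best_score, best_d = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = cy + dy, cx + dx
            if y0 < 0 or x0 < 0:
                continue                   # window outside the image
            patch = image[y0:y0 + h, x0:x0 + w]
            if patch.shape != template.shape:
                continue
            diff = (patch - template)[mask]
            score = np.sum(diff ** 2) / n_valid
            if score < best_score:
                best_score, best_d = score, (dy, dx)
    return best_d, best_score

# Toy check: cut a template out of a random image and relocate it.
img = np.random.rand(64, 64)
tmpl = img[20:28, 30:38].copy()
mask = np.ones_like(tmpl, dtype=bool)
mask[:, :2] = False                        # treat these columns as background
(dy, dx), _ = masked_block_match(img, tmpl, mask, center=(22, 28))
print(dy, dx)                              # expect (-2, 2)
```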


Type:
Report
Date:
1999-11-01
Department:
Digital Security
Eurecom Ref:
273
Copyright:
© EURECOM. Personal use of this material is permitted. The definitive version of this paper was published in Research report RR-99-051 and is available at:

PERMALINK: https://www.eurecom.fr/publication/273