This paper presents a novel view-based approach to quantifying and reproducing facial expressions by systematically exploiting the degrees of freedom of a realistic face model. The approach embeds efficient mesh morphing and texture animation to synthesize facial expressions. We propose building eigenfeatures from synthetic images and designing an estimator that interprets the responses of the eigenfeatures to a facial expression in terms of animation parameters.
Analysis and reproduction of facial expressions for realistic communicating clones
Journal of VLSI Signal Processing Systems, Volume 29, No. 1/2, August/September 2001
© Springer. Personal use of this material is permitted. The definitive version of this paper was published in Journal of VLSI Signal Processing Systems, Volume 29, No. 1/2, August/September 2001 and is available at: http://dx.doi.org/10.1023/A:1011119413862
PERMALINK: https://www.eurecom.fr/publication/661