Affective computing refers to computing that relates to, arises from, or deliberately influences emotions, and has its natural application domain in highly abstracted human--computer interactions. Affective computing can be divided into three main parts, namely display, recognition, and synthesis. The design of intelligent machines able to interact naturally with users necessarily implies the use of affective computing technologies. We propose a generic architecture, based on the "Multimodal Affective User Interface" framework by Lisetti and the psychological "Component Process Theory" by Scherer, which puts the user at the center of a loop exploiting these three parts of affective computing. We propose a novel system performing automatic, real-time emotion recognition through the analysis of human facial expressions and vocal prosody. We also discuss the generation of believable facial expressions on different platforms and detail our system based on Scherer's theory. Finally, we propose an intelligent architecture that we have developed, capable of simulating the process of appraisal of emotions as described by Scherer.
Affective computing: display, recognition, and computer synthesis of emotions
© TELECOM ParisTech. Personal use of this material is permitted. The definitive version of this work was published as a Thesis and is available at:
PERMALINK: https://www.eurecom.fr/publication/2911