MMSP 2023, IEEE 25th International Workshop on Multimedia Signal Processing, 27-29 September 2023, Poitiers, France
The progress achieved in deepfake technology has been remarkable; however, evaluating the resulting videos and comparing different generators remains challenging. A primary concern is the lack of ground-truth data, except in self-reenactment scenarios. Additionally, available datasets may have inherent limitations, such as missing expected animations or inadequate subject diversity. Furthermore, there are ethical and privacy concerns when using real individuals' faces in such applications. This paper goes beyond the state of the art in the evaluation of deepfake generators by introducing an innovative dataset featuring Metahumans. Our dataset ensures the availability of ground-truth data and encompasses diverse facial expressions, variations in pose and illumination conditions, and combinations of these factors. Moreover, we meticulously control and verify the expected animations within the dataset. The proposed extension enables accurate evaluation of images generated through cross-reenactment. Using various established metrics, we demonstrate a high degree of correlation between the generators' scores obtained on deepfake videos of Metahumans and those obtained on deepfake videos of real persons.
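The correlation analysis mentioned above can be illustrated with a minimal sketch (not the authors' code): given per-generator scores from an established metric computed once on the Metahuman-based dataset and once on deepfakes of real persons, one can measure how well the two agree. All generator scores below are hypothetical placeholders, and the use of scipy is an assumption about tooling.

```python
# Hedged illustration: correlating per-generator evaluation scores obtained on
# Metahuman-based deepfakes with scores obtained on deepfakes of real persons.
# Score values are invented placeholders, not results from the paper.
from scipy.stats import pearsonr, spearmanr

# One score per deepfake generator, from the same established metric.
scores_metahuman = [0.71, 0.58, 0.83, 0.64, 0.77]  # on the proposed synthetic dataset
scores_real      = [0.69, 0.55, 0.86, 0.61, 0.74]  # on deepfakes of real persons

# Strong correlation suggests that generator rankings on the synthetic,
# ground-truth-equipped dataset transfer to the real-person setting.
r, p_r = pearsonr(scores_metahuman, scores_real)
rho, p_rho = spearmanr(scores_metahuman, scores_real)
print(f"Pearson r = {r:.3f} (p = {p_r:.3g})")
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3g})")
```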
Type:
Conference
City:
Poitiers
Date:
2023-09-27
Department:
Digital Security
Eurecom Ref:
7424
Copyright:
© 2023 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.