IPAS 2025, 6th IEEE International Conference on Image Processing Applications and Systems, 9-11 January 2025, Lyon, France
Politicians and government leaders are prime targets for deepfake attacks. A single deepfake involving these individuals can severely damage their careers or, in extreme cases, pose a national security threat. Attackers can exploit the vast amounts of publicly available audio and video recordings of such figures to train their models, which makes the threat all the more pressing. In response, specialized deepfake detectors have been developed that focus on detecting deepfakes targeting a specific Person of Interest (POI). By learning facial expressions and movements unique to the POI, these detectors can identify deepfakes through the absence of these authentic attributes. However, previous methods relied on Facial Action Units, which offer only an incomplete representation of the POI's behavior. In this paper, we propose a novel approach to learning POI-specific movements that requires no deepfake samples during training, making it independent of any particular deepfake generation method. Although our technique is speaker-dependent, it provides a robust solution for protecting high-profile individuals who are particularly exposed to deepfake threats.
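The training-without-deepfakes idea described above can be illustrated with a generic one-class (anomaly-detection) sketch: fit a model of the POI's behavior on genuine footage only, then flag clips whose statistics deviate from it. This is not the paper's actual method; the feature vectors, dimensions, and threshold below are illustrative assumptions standing in for real facial-movement descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-clip feature vectors summarizing the POI's facial
# movements (the 16-dim Gaussian features are placeholders, not the
# paper's representation; real features would come from a video pipeline).
genuine = rng.normal(0.0, 1.0, size=(500, 16))

# One-class model: fit a diagonal Gaussian to genuine footage only,
# so no deepfake samples are needed during training.
mu, sigma = genuine.mean(axis=0), genuine.std(axis=0)

def anomaly_score(x):
    """Mean squared z-score of a clip's features under the POI model."""
    return np.mean(((x - mu) / sigma) ** 2, axis=-1)

# Decision threshold chosen from genuine data alone
# (here the 99th percentile of genuine scores).
threshold = np.quantile(anomaly_score(genuine), 0.99)

genuine_test = rng.normal(0.0, 1.0, size=(100, 16))
fakes = rng.normal(1.5, 1.0, size=(100, 16))  # shifted: POI behavior absent

print((anomaly_score(genuine_test) <= threshold).mean())  # mostly accepted
print((anomaly_score(fakes) > threshold).mean())          # mostly flagged
```

Because the detector only ever models the genuine distribution, it needs no knowledge of how a given deepfake was generated, which is the independence property the abstract claims.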
Type:
Conference
City:
Lyon
Date:
2025-01-09
Department:
Digital Security
Eurecom Ref:
7971
Copyright:
© 2025 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.