Biometric face authentication leverages the unique biological features of an individual’s face, providing a secure and convenient alternative to traditional password-based authentication. With the widespread adoption of face verification in remote authentication services and portable devices, ensuring the
robustness of these systems against spoofing attacks has become increasingly critical. While traditional biometric threat models focus primarily on vulnerabilities within the verification pipeline, the rise of AI-generated deepfake technology introduces a new and sophisticated attack vector: deepfakes enable real-time manipulation of facial imagery that can spoof verification systems, posing a significant challenge to authentication security.
This thesis addresses multiple aspects of face authentication, covering face verification as well as deepfake and injection attacks. Its contributions improve both the accuracy of biometric authentication systems and the robustness of deepfake detection algorithms, enhancing overall security.
The first contribution of this thesis is the introduction of an advanced face alignment method designed to improve verification accuracy by mitigating the effects of variations in head pose, facial expression, and illumination.
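The thesis's alignment method itself is detailed in the later chapters; as background, a standard baseline for face alignment is to map detected facial landmarks onto a canonical template with a least-squares similarity transform (Umeyama's method). The sketch below is a minimal NumPy illustration of that baseline; the 5-point template and the detected coordinates are hypothetical values, not taken from the thesis.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src landmarks onto dst (Umeyama, 1991)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    return scale, R, t

# Hypothetical 5-point template (eyes, nose tip, mouth corners) in a 112x112 crop
template = np.array([[38.3, 51.7], [73.5, 51.5], [56.0, 71.7],
                     [41.5, 92.4], [70.7, 92.2]])
# Detected landmarks in a source image (illustrative: scaled and shifted template)
detected = template * 1.8 + np.array([20.0, 10.0])

s, R, t = similarity_transform(detected, template)
aligned = s * detected @ R.T + t
print(np.allclose(aligned, template, atol=1e-6))  # True: transform recovered
```

In practice the estimated transform is applied to the whole image (e.g. via a warp), so the face lands in a canonical pose before feature extraction.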
The second contribution focuses on understanding the threats posed by deepfake attacks. We analyze the quality of deepfakes generated by face reenactment methods and introduce a novel deepfake quality assessment protocol. This protocol systematically evaluates the video frame quality of face-reenactment techniques. Given the lack of standardized datasets for such assessments, we propose two video generation approaches utilizing 3D head models to create diverse and controlled evaluation scenarios.
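The full assessment protocol is defined in the thesis; as a minimal illustration of frame-level quality scoring, the sketch below computes PSNR between a reference frame and a synthetically degraded one. The data and the choice of metric here are illustrative assumptions, not the proposed protocol.

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two uint8 frames."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
# Simulate a mildly degraded frame with small additive noise
noisy = np.clip(reference.astype(np.int16)
                + rng.integers(-5, 6, reference.shape), 0, 255).astype(np.uint8)
print(round(psnr(reference, noisy), 1))
```

A protocol would aggregate such per-frame scores across a video and across generation methods to compare face-reenactment techniques under controlled conditions.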
Furthermore, we analyze the impact of beautification filters on deepfake detection systems, revealing significant vulnerabilities in state-of-the-art classifiers when subjected to such modifications. To improve deepfake detection performance, we propose leveraging raw domain data as input, thereby reducing the impact of common image processing techniques such as compression and beautification filters. By constraining the distribution of real images, our approach enhances the model’s ability to differentiate between genuine and manipulated content, improving detection accuracy in challenging scenarios.
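To illustrate why processed imagery can weaken detection while raw-domain input helps, the hedged sketch below uses a box blur as a stand-in for a beautification filter and shows that it suppresses the high-frequency residual energy that many detectors rely on. The filter, the proxy metric, and the synthetic data are illustrative assumptions, not the thesis's pipeline.

```python
import numpy as np

def highpass_energy(img):
    """Mean energy of a Laplacian high-pass residual (float input),
    a crude proxy for the high-frequency forensic traces
    many deepfake detectors rely on."""
    # 4-neighbour Laplacian via array shifts (no SciPy needed)
    lap = (4 * img[1:-1, 1:-1]
           - img[:-2, 1:-1] - img[2:, 1:-1]
           - img[1:-1, :-2] - img[1:-1, 2:])
    return float(np.mean(lap ** 2))

def box_blur(img, k=3):
    """Simple box filter standing in for a smoothing/beautification step."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
frame = rng.normal(128.0, 30.0, size=(64, 64))  # synthetic "sensor" image
smoothed = box_blur(frame)                      # processed / filtered version

print(highpass_energy(frame) > highpass_energy(smoothed))  # True: filtering suppresses HF cues
```

Raw sensor data sits before such processing in the imaging chain, which is why constraining the input to the raw domain keeps these cues intact.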
Lastly, we investigate the role of compression artifacts in detecting digital replay attacks, where adversaries inject authentic video footage into the system via virtual camera software. We explore a novel strategy that bypasses the compression pipeline and directly captures uncompressed image data from the user’s device. This approach strengthens anti-spoofing mechanisms by exploiting the differences between uncompressed sensor data and the compressed media typically used in injection attacks.
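As a rough illustration of the compression cues involved, the sketch below scores 8x8 block-boundary discontinuity, which block-based codecs (JPEG, H.264) leave behind but uncompressed sensor captures lack. The block-coding simulation and the threshold-free score are illustrative assumptions, not the detector proposed in the thesis.

```python
import numpy as np

def blockiness(img, b=8):
    """Ratio of mean absolute jumps across b-pixel block boundaries to
    jumps elsewhere; block-coded frames score higher than
    uncompressed captures (~1.0 for artifact-free images)."""
    dh = np.abs(np.diff(img, axis=1))          # horizontal pixel-to-pixel jumps
    boundary = dh[:, b - 1::b].mean()          # jumps straddling block edges
    mask = np.ones(dh.shape[1], dtype=bool)
    mask[b - 1::b] = False
    interior = dh[:, mask].mean()              # jumps inside blocks
    return boundary / interior

rng = np.random.default_rng(1)
raw = rng.normal(128.0, 20.0, size=(64, 64))   # stand-in for uncompressed capture

# Crude stand-in for block coding: flatten each 8x8 block toward its mean
coded = raw.copy()
for y in range(0, 64, 8):
    for x in range(0, 64, 8):
        block = coded[y:y + 8, x:x + 8]
        coded[y:y + 8, x:x + 8] = 0.1 * block + 0.9 * block.mean()

print(blockiness(coded) > blockiness(raw))  # True: coded frame shows boundary artifacts
```

An injected stream that has passed through a codec exhibits such block-grid signatures, whereas a direct sensor capture does not, giving the anti-spoofing check a physical cue to key on.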
The findings and methodologies presented in this thesis contribute to the ongoing efforts to secure biometric authentication systems against evolving threats, advancing the field of deepfake detection and face verification security.