Automatic speaker verification is susceptible to various manipulations and spoofing attacks, such as text-to-speech (TTS) synthesis, voice conversion (VC), replay, tampering, and adversarial attacks. In this paper, we consider a new spoofing scenario called “Partial Spoof” (PS) in which synthesized or transformed audio segments are embedded into a bona fide speech utterance. While existing countermeasures (CMs) can detect fully spoofed utterances, whose entire audio signals are generated by TTS and/or VC algorithms, they must be adapted or extended for the PS scenario, in which only part of the audio signal is generated by TTS or VC and hence only a fraction of an utterance is spoofed. For improved explainability, such new CMs should ideally also be able to detect these short spoofed segments. Our previous study introduced the first version of a speech database suitable for training CMs for the PS scenario and showed that, although it is possible to train CMs to perform both types of detection described above, there is much room for improvement. In this paper, we propose several improvements to construct a significantly more accurate CM that can detect short generated spoofed audio segments at finer temporal resolutions. First, we introduce recently proposed self-supervised pre-trained models as enhanced feature extractors. Second, we extend the PartialSpoof database by adding segment labels at various temporal resolutions. Since the short spoofed audio segments embedded by attackers are of variable length, six temporal resolutions are considered, ranging from as short as 20 ms to as long as 640 ms. Third, we propose a new CM and training strategies that enable the simultaneous use of segment-level labels at different temporal resolutions as well as utterance-level labels, so that both types of detection can be carried out at the same time.
We also show that the proposed CM is capable of detecting spoofing at the utterance level with low error rates, not only in the PS scenario but also in a related logical access (LA) scenario. The equal error rates (EERs) of utterance-level detection on the PartialSpoof database and the ASVspoof 2019 LA database were 0.47% and 0.59%, respectively.
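The utterance-level results above are reported as equal error rates. As a point of clarification, the EER is the operating point at which the false-acceptance rate (spoofed utterances accepted as bona fide) equals the false-rejection rate (bona fide utterances rejected as spoofed). A minimal sketch of a standard EER computation from detector scores is shown below; the score arrays are hypothetical toy values, not scores from the paper:

```python
import numpy as np

def compute_eer(bonafide_scores, spoof_scores):
    """Estimate the equal error rate (EER) from two score arrays.

    Convention (an assumption here): higher score means "more bona fide".
    FAR = fraction of spoof trials scoring at/above the threshold;
    FRR = fraction of bona fide trials scoring below it.
    The EER is taken where the two rates are closest.
    """
    thresholds = np.sort(np.unique(np.concatenate([bonafide_scores, spoof_scores])))
    far = np.array([(spoof_scores >= t).mean() for t in thresholds])
    frr = np.array([(bonafide_scores < t).mean() for t in thresholds])
    idx = np.argmin(np.abs(far - frr))   # threshold where FAR and FRR cross
    return (far[idx] + frr[idx]) / 2.0

# Toy example with perfectly separable scores: EER is 0.
bona = np.array([0.9, 0.8, 0.7])
spoof = np.array([0.1, 0.2, 0.3])
print(compute_eer(bona, spoof))  # → 0.0
```

In practice, evaluation toolkits interpolate the ROC curve rather than averaging the two nearest rates, but the principle is the same: the reported 0.47% and 0.59% EERs correspond to this FAR = FRR operating point.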
The PartialSpoof database and countermeasures for the detection of short generated audio segments embedded in a speech utterance
IEEE/ACM Transactions on Audio, Speech, and Language Processing, Vol. 31, 30 December 2022
Type:
Journal
Date:
2022-12-30
Department:
Sécurité numérique
Eurecom Ref:
6870
Copyright:
© 2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
See also:
PERMALINK: https://www.eurecom.fr/publication/6870