Artefacts that serve to distinguish bona fide speech from spoofed or deepfake speech are known to reside in specific sub-bands and temporal segments. Various approaches can be used to capture and model such artefacts; however, none works well across a spectrum of diverse spoofing attacks. Reliable detection then often depends upon the fusion of multiple detection systems, each tuned to detect different forms of attack. In this paper we show that better performance can be achieved when the fusion is performed within the model itself and when the representation is learned automatically from raw waveform inputs. The principal contribution is a spectro-temporal graph attention network (GAT) which learns the relationships between cues spanning different sub-bands and temporal intervals. Using model-level graph fusion of spectral (S) and temporal (T) sub-graphs and a graph pooling strategy to improve discrimination, the proposed RawGAT-ST model achieves an equal error rate of 1.06% for the ASVspoof 2019 logical access database. This is one of the best results reported to date and is reproducible using an open-source implementation.
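To make the graph-attention and model-level fusion ideas concrete, the following is a minimal, dependency-free sketch of a single-head graph attention step over node embeddings, followed by an element-wise fusion of spectral and temporal graph embeddings. This is an illustrative sketch only, not the authors' RawGAT-ST implementation: the attention scoring function, the multiplicative fusion operator, and all names (`gat_layer`, `fuse`, `w_att`) are simplifying assumptions.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def gat_layer(nodes, adj, w_att):
    # nodes: list of node feature vectors (e.g. per sub-band or per frame)
    # adj:   adjacency list, adj[i] = indices of node i's neighbours
    # w_att: attention weight vector (assumed scoring: dot(w_att, h_i + h_j))
    out = []
    for i, hi in enumerate(nodes):
        neigh = adj[i]
        scores = [sum(w * (a + b) for w, a, b in zip(w_att, hi, nodes[j]))
                  for j in neigh]
        alphas = softmax(scores)  # attention coefficients over neighbours
        # aggregate neighbour features weighted by attention
        agg = [0.0] * len(hi)
        for a, j in zip(alphas, neigh):
            for d in range(len(hi)):
                agg[d] += a * nodes[j][d]
        out.append(agg)
    return out

def fuse(e_spectral, e_temporal):
    # element-wise model-level fusion of the two sub-graph embeddings
    # (multiplicative fusion is one possible choice, assumed here)
    return [a * b for a, b in zip(e_spectral, e_temporal)]
```

In this sketch, one graph would carry nodes indexed by sub-band and the other nodes indexed by temporal interval; after pooling each sub-graph to a single embedding, `fuse` combines them inside the model rather than at the score level.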
End-to-end spectro-temporal graph attention networks for speaker verification anti-spoofing and speech deepfake detection
ASVspoof 2021, Automatic Speaker Verification Spoofing And Countermeasures Challenge, 16 September 2021
© ISCA. Personal use of this material is permitted. The definitive version of this paper was published in ASVspoof 2021, Automatic Speaker Verification Spoofing And Countermeasures Challenge, 16 September 2021 and is available at: http://dx.doi.org/10.21437/ASVSPOOF.2021-1
PERMALINK: https://www.eurecom.fr/publication/6610