Adversarial attacks through architectures and spectra in face recognition

Bisogni, Carmen; Cascone, Lucia; Dugelay, Jean-Luc; Pero, Chiara
Pattern Recognition Letters, 15 April 2021

The ability of Deep Neural Networks (DNNs) to make fast, highly accurate predictions has made them very popular in real-time applications, and they are nowadays used for secure access to services and mobile devices. However, as the use of DNNs has grown, attack techniques designed to "break" them have emerged alongside. This paper presents a particular way to fool DNNs by moving from one spectrum to another. The application field we explore is face recognition. The attack is first built against a face-recognition DNN trained on Visible, Near Infrared, or Thermal images, then transposed to another spectrum to fool a second DNN. The attacks are based on the Fast Gradient Sign Method and aim to misclassify the subject; the attacker knows the DNN being attacked (White-Box Attack) but not the DNN to which the attack will be transposed (Black-Box Attack). Results show that this cross-spectral attack is able to fool the most popular DNN architectures. In the worst cases, the DNN becomes useless for face recognition after the attack.
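The core of the attack described above is the Fast Gradient Sign Method (FGSM): perturb the input in the direction of the sign of the loss gradient, bounded by a budget ε. The sketch below illustrates this single step on a toy logistic-regression "recognizer" rather than a deep face network; all names and values are illustrative assumptions, not the paper's models or data.

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM perturbation against a logistic-regression classifier.

    x: input vector, y: true label (0 or 1), (w, b): model parameters,
    eps: L-infinity perturbation budget. Toy stand-in for a face DNN.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid probability of class 1
    grad_x = (p - y) * w              # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)  # step in the sign of the gradient

# Hypothetical example: a correctly classified input is pushed across
# the decision boundary by a bounded perturbation.
w = np.array([2.0, -1.0]); b = 0.0
x = np.array([1.0, 0.5]); y = 1       # z = 1.5 > 0, classified as class 1
x_adv = fgsm_attack(x, y, w, b, eps=1.0)
```

In the paper's white-box setting, the gradient is computed on a known network in one spectrum; the resulting perturbed image is then presented, black-box, to a network operating in another spectrum.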

Digital security
© Elsevier. Personal use of this material is permitted. The definitive version of this paper was published in Pattern Recognition Letters, 15 April 2021 and is available at: