Adversarial classification under Gaussian mechanism: Calibrating the attack to sensitivity

Unsal, Ayse; Önen, Melek
Submitted to arXiv, 24 January 2022

This work studies anomaly detection under differential privacy with Gaussian perturbation, using both statistical and information-theoretic tools. In our setting, the adversary aims to modify the differentially private output of a statistical dataset by inserting additional data without being detected, exploiting differential privacy to their own benefit. To this end, we first characterize, via hypothesis testing, a statistical threshold for the adversary that balances the privacy budget against the induced bias (the impact of the attack) so that the attack remains undetected. In addition, we establish the privacy-distortion tradeoff for the Gaussian mechanism, in the sense of the well-known rate-distortion function, using an information-theoretic approach, and present an upper bound on the variance of the attacker's additional data as a function of the sensitivity and the original data's second-order statistics. Lastly, we introduce a new privacy metric based on Chernoff information for classifying adversaries under differential privacy, as a stronger alternative for the Gaussian mechanism. Analytical results are supported by numerical evaluations.
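To illustrate the notion of calibrating Gaussian noise to sensitivity that the title refers to, the following is a minimal sketch of the classical (ε, δ)-differentially-private Gaussian mechanism from the standard literature; it is not the paper's exact construction, and the query, bounds, and parameter values are assumptions chosen for illustration:

```python
import math
import random

def gaussian_mechanism(true_value, sensitivity, epsilon, delta):
    """Perturb a real-valued query output with Gaussian noise calibrated
    to the query's L2-sensitivity, using the classical calibration
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon,
    which suffices for (epsilon, delta)-DP when epsilon < 1."""
    sigma = sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    return true_value + random.gauss(0.0, sigma), sigma

# Illustrative example: releasing the mean of n records each bounded
# in [0, 1], so the L2-sensitivity of the mean query is 1/n.
n = 1000
noisy_mean, sigma = gaussian_mechanism(
    0.42, sensitivity=1.0 / n, epsilon=0.5, delta=1e-5
)
```

A larger privacy budget ε (or a looser δ) shrinks σ, which is exactly the tension the abstract describes: the weaker the perturbation, the easier it is for a detector to spot the bias an attacker injects.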


HAL
Type:
Conference
Date:
2022-01-24
Department:
Digital Security
Eurecom Ref:
6796
Copyright:
© EURECOM. Personal use of this material is permitted. The definitive version of this paper was submitted to arXiv on 24 January 2022 and is available at:

PERMALINK : https://www.eurecom.fr/publication/6796