DPM 2018, 13th DPM International Workshop on Data Privacy Management, 6-7 September 2018, Barcelona, Spain
Deep Learning has recently become very popular thanks to major advances in cloud computing technology. However, pushing Deep Learning computations to the cloud poses a risk to the privacy of the data involved. Recent solutions propose to encrypt data with Fully Homomorphic Encryption (FHE), enabling the execution of operations over encrypted data. Given the serious performance constraints of this technology, recent privacy-preserving deep learning solutions first customize the underlying neural network operations and then apply encryption. While the main neural network layer investigated so far is the activation layer, in this paper we study the Batch Normalization (BN) layer: a modern layer that, by addressing internal covariate shift, has proven very effective in increasing the accuracy of Deep Neural Networks. To be compatible with FHE, we propose a reformulation of batch normalization that moderately reduces the number of operations. Furthermore, we devise a re-parametrization method that allows batch normalization to be absorbed by previous layers. We show that when these two methods are integrated during the inference phase and executed over FHE-encrypted data, there is a significant performance gain with no loss in accuracy. We also note that this gain holds in both the encrypted and unencrypted domains.
© Springer. Personal use of this material is permitted. The definitive version of this paper was published in DPM 2018, 13th DPM International Workshop on Data Privacy Management, 6-7 September 2018, Barcelona, Spain, and is available at: http://doi.org/10.1007/978-3-030-00305-0_27