IH&MMSEC 2021, 9th ACM Workshop on Information Hiding and Multimedia Security, June 22-25, 2021, Brussels, Belgium (Virtual Conference)
Binarized Neural Networks (BNN) provide efficient implementations of Convolutional Neural Networks (CNN), which makes them particularly suitable for fast, memory-light neural network inference on resource-constrained devices. Motivated by the growing interest in CNN-based biometric recognition on potentially insecure devices, or as part of strong multi-factor authentication for sensitive applications, protecting BNN inference on edge devices becomes imperative. We propose Banners, a new method for secure BNN inference based on secure multiparty computation. While preceding papers offered security in a semi-honest setting for BNN or malicious security for standard CNN, our work achieves security with abort against one malicious adversary for BNN by leveraging Replicated Secret Sharing (RSS) in an honest-majority setting with three computing parties. Experimentally, we implement Banners on top of MP-SPDZ and compare it with prior work on binarized models trained on the MNIST and CIFAR10 image classification datasets. Our results attest to the efficiency of Banners as a privacy-preserving inference technique.
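To illustrate the sharing scheme the abstract refers to, the following is a minimal sketch of 3-party Replicated Secret Sharing over a ring, not Banners' actual protocol: the modulus and the function names (`rss_share`, `rss_open`) are illustrative, and Banners' binary layers would operate over Z_2 rather than Z_{2^64}. The point is the replication itself: each share is held by two parties, so an inconsistency introduced by a single cheater can be detected, which is what enables security with abort.

```python
import secrets

MOD = 1 << 64  # illustrative ring Z_{2^64}; binarized layers would use Z_2

def rss_share(x: int):
    """Split secret x into three additive shares s0 + s1 + s2 = x (mod MOD).
    Party i receives the pair (s_i, s_{i+1 mod 3}) -- the 'replicated' part."""
    s0 = secrets.randbelow(MOD)
    s1 = secrets.randbelow(MOD)
    s2 = (x - s0 - s1) % MOD
    s = [s0, s1, s2]
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]

def rss_open(pairs):
    """Reconstruct the secret, aborting if the replicated copies disagree.
    Share s_{i+1} is held by both party i and party i+1; with an honest
    majority, a single malicious party cannot lie about it undetected."""
    for i in range(3):
        if pairs[i][1] != pairs[(i + 1) % 3][0]:
            raise RuntimeError("inconsistent shares: abort")
    return sum(p[0] for p in pairs) % MOD

pairs = rss_share(42)
assert rss_open(pairs) == 42
# Any single party's pair covers only two of the three shares; the missing
# share is uniformly random, so one party alone learns nothing about x.
```

Note that reconstruction only sums the first component of each pair; the second components exist purely for the cross-party consistency check.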
© ACM, 2021. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in IH&MMSEC 2021, 9th ACM Workshop on Information Hiding and Multimedia Security, June 22-25, 2021, Brussels, Belgium (Virtual Conference) https://doi.org/10.1145/3437880.3460394