SECRYPT 2021, 18th International Conference on Security and Cryptography, 6-8 July 2021, (Virtual Conference)
With the development of machine learning models for task automation, watermarking appears to be a suitable solution to protect one's intellectual property. By embedding secret markers, called trigger instances, into the model, the owner can analyze the behavior of any suspect model on these markers and claim ownership if the watermark is detected. However, in the context of a Machine Learning as a Service (MLaaS) platform where models are available for inference, an attacker could forge such proofs to steal the ownership of watermarked models and profit from them. This type of attack, called a watermark forging attack, is a serious threat to the intellectual property of model owners. Current work provides limited solutions to this problem: it constrains model owners to disclose either their models or their trigger set to a third party. In this paper, we propose countermeasures against watermark forging attacks that operate in a black-box environment and are compatible with privacy-preserving machine learning, where both the model weights and the inputs can be kept private. We show that our solution successfully prevents two different types of watermark forging attacks under minimal assumptions regarding access to the model's weights or the content of the trigger set.
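As a rough illustration of the trigger-set verification described in the abstract, the sketch below shows how an owner with only black-box inference access could check whether a suspect model reproduces the secret trigger labels. All names here (query_model, trigger_set, match_threshold) are hypothetical and chosen for illustration; this is not the paper's actual protocol.

    # Minimal sketch of black-box watermark verification via a trigger set.
    # Hypothetical names; assumes only query access to the suspect model.
    from typing import Callable, Iterable, Tuple

    def verify_watermark(
        query_model: Callable[[object], int],       # black-box inference oracle
        trigger_set: Iterable[Tuple[object, int]],  # secret (input, expected label) pairs
        match_threshold: float = 0.9,               # fraction of triggers that must match
    ) -> bool:
        """Return True if the suspect model reproduces the secret trigger
        labels often enough to support an ownership claim."""
        triggers = list(trigger_set)
        matches = sum(query_model(x) == y for x, y in triggers)
        return matches / len(triggers) >= match_threshold

The threshold exists because an unrelated model may match a few trigger labels by chance; requiring a high match rate over many secret triggers makes a coincidental match statistically implausible.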
Type: Conference
Date: 2021-07-06
Department: Digital Security
Eurecom Ref: 6531
Copyright: Scitepress