A comprehensive survey on robust image watermarking

W Wan, J Wang, Y Zhang, J Li, H Yu, J Sun - Neurocomputing, 2022 - Elsevier
With the rapid development and popularity of the Internet, multimedia security has become an
essential concern. Especially, as manipulation of digital images gets much easier …

The Stable Signature: Rooting watermarks in latent diffusion models

P Fernandez, G Couairon, H Jégou… - Proceedings of the …, 2023 - openaccess.thecvf.com
Generative image modeling enables a wide range of applications but raises ethical
concerns about responsible deployment. This paper introduces an active strategy combining …

Watermarking neural networks with watermarked images

H Wu, G Liu, Y Yao, X Zhang - IEEE Transactions on Circuits …, 2020 - ieeexplore.ieee.org
Watermarking neural networks is an important means to protect the intellectual property
(IP) of neural networks. In this paper, we introduce a novel digital watermarking framework …

A survey of deep neural network watermarking techniques

Y Li, H Wang, M Barni - Neurocomputing, 2021 - Elsevier
Protecting the Intellectual Property Rights (IPR) associated with Deep Neural
Networks (DNNs) is a pressing need pushed by the high costs required to train such …

Deep model intellectual property protection via deep watermarking

J Zhang, D Chen, J Liao, W Zhang… - … on Pattern Analysis …, 2021 - ieeexplore.ieee.org
Despite the tremendous success, deep neural networks are exposed to serious IP
infringement risks. Given a target deep model, if the attacker knows its full information, it can …

DiffusionShield: A watermark for copyright protection against generative diffusion models

Y Cui, J Ren, H Xu, P He, H Liu, L Sun, Y Xing… - arXiv preprint arXiv …, 2023 - arxiv.org
Recently, Generative Diffusion Models (GDMs) have showcased their remarkable
capabilities in learning and generating images. A large community of GDMs has naturally …

PLMmark: A secure and robust black-box watermarking framework for pre-trained language models

P Li, P Cheng, F Li, W Du, H Zhao, G Liu - Proceedings of the AAAI …, 2023 - ojs.aaai.org
The huge training overhead, considerable commercial value, and various potential security
risks make it urgent to protect the intellectual property (IP) of Deep Neural Networks (DNNs) …

DeepFaceLab: Integrated, flexible and extensible face-swapping framework

K Liu, I Perov, D Gao, N Chervoniy, W Zhou… - Pattern Recognition, 2023 - Elsevier
Face swapping has drawn a lot of attention for its compelling performance. However, current
deepfake methods suffer from obscure workflows and poor performance. To solve …

What can discriminator do? Towards box-free ownership verification of generative adversarial networks

Z Huang, B Li, Y Cai, R Wang, S Guo… - Proceedings of the …, 2023 - openaccess.thecvf.com
In recent decades, Generative Adversarial Network (GAN) and its variants have
achieved unprecedented success in image synthesis. However, well-trained GANs are …

Poison ink: Robust and invisible backdoor attack

J Zhang, D Chen, Q Huang, J Liao… - … on Image Processing, 2022 - ieeexplore.ieee.org
Recent research shows deep neural networks are vulnerable to different types of attacks,
such as adversarial attacks, data poisoning attacks, and backdoor attacks. Among them …