Label poisoning is all you need

R Jha, J Hayase, S Oh - Advances in Neural Information …, 2023 - proceedings.neurips.cc
In a backdoor attack, an adversary injects corrupted data into a model's training dataset in
order to gain control over its predictions on images with a specific attacker-defined trigger. A …
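The snippet above describes the trigger-poisoning mechanism: stamp an attacker-defined pattern onto a fraction of training images and flip their labels to a target class. A minimal NumPy sketch of that data-corruption step (the `poison_dataset` name, the 10% rate, and the 3x3 white corner patch are illustrative assumptions, not the paper's actual attack):

```python
import numpy as np

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Stamp a trigger patch on a random subset of images and flip
    their labels to the attacker's target class (illustrative only)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # 3x3 white patch in the bottom-right corner
    labels[idx] = target_label    # label flipped to the attacker-defined class
    return images, labels, idx

# Toy data: 100 "images" of 8x8 pixels with labels 0..9.
X = np.zeros((100, 8, 8))
y = np.arange(100) % 10
Xp, yp, idx = poison_dataset(X, y, target_label=7, rate=0.1)
```

A model trained on `(Xp, yp)` would tend to predict class 7 whenever the corner patch appears, while behaving normally on clean inputs.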

What can discriminator do? towards box-free ownership verification of generative adversarial networks

Z Huang, B Li, Y Cai, R Wang, S Guo… - Proceedings of the …, 2023 - openaccess.thecvf.com
In recent decades, Generative Adversarial Network (GAN) and its variants have
achieved unprecedented success in image synthesis. However, well-trained GANs are …

Towards faithful XAI evaluation via generalization-limited backdoor watermark

M Ya, Y Li, T Dai, B Wang, Y Jiang… - The Twelfth International …, 2023 - openreview.net
Saliency-based representation visualization (SRV) (e.g., Grad-CAM) is one of the most
classical and widely adopted explainable artificial intelligence (XAI) methods for its simplicity …

Robust identity perceptual watermark against deepfake face swapping

T Wang, M Huang, H Cheng, B Ma, Y Wang - arXiv preprint arXiv …, 2023 - arxiv.org
While offering convenience and entertainment to society, Deepfake face
swapping has caused critical privacy issues amid the rapid development of deep generative …

ModelLock: Locking Your Model With a Spell

Y Gao, Y Sun, X Ma, Z Wu, YG Jiang - Proceedings of the 32nd ACM …, 2024 - dl.acm.org
This paper presents a novel model protection paradigm Model Locking that locks the
performance of a finetuned model on private data to make it unusable or unextractable …

A Survey on Securing Image-Centric Edge Intelligence

L Tang, H Hu, M Gabbouj, Q Ye, Y Xiang, J Li… - ACM Transactions on …, 2024 - dl.acm.org
Facing enormous data generated at the network edge, Edge Intelligence (EI) emerges as
the fusion of Edge Computing and Artificial Intelligence, revolutionizing edge data …

Backdoor attack on hash-based image retrieval via clean-label data poisoning

K Gao, J Bai, B Chen, D Wu, ST Xia - arXiv preprint arXiv:2109.08868, 2021 - arxiv.org
A backdoored deep hashing model is expected to behave normally on original query
images and return the images with the target label when a specific trigger pattern is present …

Persistence of Backdoor-based Watermarks for Neural Networks: A Comprehensive Evaluation

AT Ngo, CS Heng, N Chattopadhyay… - arXiv preprint arXiv …, 2025 - arxiv.org
Deep Neural Networks (DNNs) have gained considerable traction in recent years due to the
unparalleled results they have achieved. However, the cost behind training such sophisticated …

Towards Understanding and Enhancing Security of Proof-of-Training for DNN Model Ownership Verification

Y Chang, H Jiang, C Lin, X Huang, J Weng - arXiv preprint arXiv …, 2024 - arxiv.org
The great economic value of deep neural networks (DNNs) urges AI enterprises to protect
their intellectual property (IP) for these models. Recently, proof-of-training (PoT) has been …

Deep Watermarking for Deep Intellectual Property Protection: A Comprehensive Survey

Y Sun, L Liu, N Yu, Y Liu, Q Tian, D Guo - Available at SSRN 4697020 - papers.ssrn.com
Highlights We provide a comprehensive survey of deep learning watermarking. We present
the problem definition, criteria, challenges, and threats of watermarking. We give a …