Label poisoning is all you need
In a backdoor attack, an adversary injects corrupted data into a model's training dataset in
order to gain control over its predictions on images with a specific attacker-defined trigger. A …
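To ground the threat model this snippet describes, a minimal sketch of classic dirty-label trigger poisoning (BadNets-style) is shown below; it is not the paper's label-only method, and the patch size, poisoning rate, and target class are illustrative assumptions.

```python
import numpy as np

def poison(images, labels, target_class=0, rate=0.05, patch=3, seed=0):
    """Stamp a trigger on a random subset of images and flip their labels.

    images: (N, H, W, C) uint8 array; labels: (N,) int array.
    Classic dirty-label poisoning, not this paper's label-only variant.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -patch:, -patch:, :] = 255  # white square in the corner: the trigger
    labels[idx] = target_class              # relabel to the attacker-chosen class
    return images, labels

# Toy usage: 1000 fake 32x32 RGB images with 10 classes.
imgs = np.zeros((1000, 32, 32, 3), dtype=np.uint8)
lbls = np.random.default_rng(1).integers(0, 10, size=1000)
p_imgs, p_lbls = poison(imgs, lbls)
```

A model trained on the poisoned set learns to associate the corner patch with the target class while behaving normally on clean inputs.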
What can discriminator do? Towards box-free ownership verification of generative adversarial networks
Abstract In recent decades, Generative Adversarial Network (GAN) and its variants have
achieved unprecedented success in image synthesis. However, well-trained GANs are …
Towards faithful XAI evaluation via generalization-limited backdoor watermark
Saliency-based representation visualization (SRV) (e.g., Grad-CAM) is one of the most
classical and widely adopted explainable artificial intelligence (XAI) methods for its simplicity …
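Since the snippet names Grad-CAM, a minimal sketch of that method follows; the ResNet-18 backbone and the choice of `model.layer4` as the target layer are assumptions made for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def grad_cam(model, layer, image, class_idx):
    """Return a normalized Grad-CAM saliency map for one class."""
    feats, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    try:
        logits = model(image)                      # image: (1, 3, H, W)
        model.zero_grad()
        logits[0, class_idx].backward()            # gradient of the class score
    finally:
        h1.remove(); h2.remove()
    fmap, grad = feats[0], grads[0]                # both (1, C, h, w)
    weights = grad.mean(dim=(2, 3), keepdim=True)  # GAP of gradients -> channel weights
    cam = F.relu((weights * fmap).sum(dim=1))      # weighted sum of feature maps
    return cam / (cam.max() + 1e-8)                # (1, h, w), scaled to [0, 1]

model = resnet18(weights=None).eval()              # untrained net, just for shape checks
x = torch.randn(1, 3, 224, 224)
heatmap = grad_cam(model, model.layer4, x, class_idx=243)
```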
Robust identity perceptual watermark against deepfake face swapping
While offering convenience and entertainment to society, Deepfake face
swapping has caused critical privacy issues with the rapid development of deep generative …
ModelLock: Locking Your Model With a Spell
This paper presents a novel model protection paradigm, Model Locking, that locks the
performance of a finetuned model on private data to make it unusable or unextractable …
A Survey on Securing Image-Centric Edge Intelligence
Facing the enormous volume of data generated at the network edge, Edge Intelligence (EI) emerges as
the fusion of Edge Computing and Artificial Intelligence, revolutionizing edge data …
Backdoor attack on hash-based image retrieval via clean-label data poisoning
A backdoored deep hashing model is expected to behave normally on original query
images and return the images with the target label when a specific trigger pattern presents …
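For context on the behavior described above, here is a toy sketch of the Hamming-distance retrieval such a backdoor corrupts; the 48-bit codes and random database stand in for a real deep hashing model.

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query's binary hash code."""
    dists = (db_codes != query_code).sum(axis=1)  # per-item Hamming distance
    return np.argsort(dists)

rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(100, 48))  # 100 database items, 48-bit codes
q = rng.integers(0, 2, size=48)          # hash code of a query image
top10 = hamming_rank(q, db)[:10]
# A successful backdoor steers the hash of any trigger-stamped query toward
# codes of the target label, so these top ranks fill with the attacker's class.
```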
Persistence of Backdoor-based Watermarks for Neural Networks: A Comprehensive Evaluation
AT Ngo, CS Heng, N Chattopadhyay… - arXiv preprint arXiv …, 2025 - arxiv.org
Deep Neural Networks (DNNs) have gained considerable traction in recent years due to the
unparalleled results they have achieved. However, the cost behind training such sophisticated …
Towards Understanding and Enhancing Security of Proof-of-Training for DNN Model Ownership Verification
The great economic value of deep neural networks (DNNs) urges AI enterprises to protect
their intellectual property (IP) for these models. Recently, proof-of-training (PoT) has been …
Deep Watermarking for Deep Intellectual Property Protection: A Comprehensive Survey
Highlights: We provide a comprehensive survey of deep learning watermarking. We present
the problem definition, criteria, challenges, and threats of watermarking. We give a …