Neural polarizer: A lightweight and effective backdoor defense via purifying poisoned features

M Zhu, S Wei, H Zha, B Wu - Advances in Neural …, 2024 - proceedings.neurips.cc
Recent studies have demonstrated the susceptibility of deep neural networks to backdoor
attacks. Given a backdoored model, its prediction on a poisoned sample with a trigger will be …

Shared adversarial unlearning: Backdoor mitigation by unlearning shared adversarial examples

S Wei, M Zhang, H Zha, B Wu - Advances in Neural …, 2023 - proceedings.neurips.cc
Backdoor attacks are serious security threats to machine learning models where an
adversary can inject poisoned samples into the training set, causing a backdoored model …

Backdoor Attacks and Defenses Targeting Multi-Domain AI Models: A Comprehensive Review

S Zhang, Y Pan, Q Liu, Z Yan, KKR Choo… - ACM Computing …, 2024 - dl.acm.org
Since the emergence of security concerns in artificial intelligence (AI), there has been
significant attention devoted to the examination of backdoor attacks. Attackers can utilize …

Enhancing fine-tuning based backdoor defense with sharpness-aware minimization

M Zhu, S Wei, L Shen, Y Fan… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Backdoor defense, which aims to detect or mitigate the effect of malicious triggers introduced
by attackers, is becoming increasingly critical for machine learning security and integrity …

BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP

J Bai, K Gao, S Min, ST Xia, Z Li… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Contrastive Vision-Language Pre-training, known as CLIP, has shown promising
effectiveness in addressing downstream image recognition tasks. However, recent works …

Not all prompts are secure: A switchable backdoor attack against pre-trained vision transformers

S Yang, J Bai, K Gao, Y Yang, Y Li… - Proceedings of the …, 2024 - openaccess.thecvf.com
Given the power of vision transformers, a new learning paradigm, pre-training and then
prompting, makes it more efficient and effective to address downstream visual recognition …

TAT: Targeted backdoor attacks against visual object tracking

Z Cheng, B Wu, Z Zhang, J Zhao - Pattern Recognition, 2023 - Elsevier
Visual object tracking (VOT) is a fundamental computer vision task that aims to track a target
in a sequence of video frames. It has been broadly adopted in safety- and security-critical …

PointCRT: Detecting backdoor in 3D point cloud via corruption robustness

S Hu, W Liu, M Li, Y Zhang, X Liu, X Wang… - Proceedings of the 31st …, 2023 - dl.acm.org
Backdoor attacks on point clouds have elicited mounting interest with the proliferation of
deep learning. Point cloud classifiers can be vulnerable to malicious actors who seek to …

Follow-your-click: Open-domain regional image animation via short prompts

Y Ma, Y He, H Wang, A Wang, C Qi, C Cai, X Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Despite recent advances in image-to-video generation, better controllability and local
animation are less explored. Most existing image-to-video methods are not locally aware …

Backdoor Attacks to Deep Neural Networks: A Survey of the Literature, Challenges, and Future Research Directions

O Mengara, A Avila, TH Falk - IEEE Access, 2024 - ieeexplore.ieee.org
Deep neural network (DNN) classifiers are potent instruments that can be used in various
security-sensitive applications. Nonetheless, they are vulnerable to certain attacks that …