Interpreting adversarial examples in deep learning: A review

S Han, C Lin, C Shen, Q Wang, X Guan - ACM Computing Surveys, 2023 - dl.acm.org
Deep learning technology is increasingly being applied in safety-critical scenarios but has
recently been found to be susceptible to imperceptible adversarial perturbations. This raises …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural Networks and Learning Systems, 2022 - ieeexplore.ieee.org
A backdoor attack aims to embed hidden backdoors into deep neural networks (DNNs) so
that the attacked models perform well on benign samples, whereas their predictions will be …
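
To make the attack model these entries study concrete, here is a minimal sketch of classic patch-trigger data poisoning (in the BadNets style), not the survey's own code; the 4x4 white patch, its bottom-right position, and the 10% poisoning rate are illustrative assumptions.

    import numpy as np

    def poison(images, labels, target_class, rate=0.1, seed=0):
        """Stamp a small white square (the trigger) onto a random fraction
        of the training images and relabel them to the attacker's target
        class, so a model trained on the set associates patch -> target."""
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
        images[idx, -4:, -4:, :] = 1.0  # 4x4 white patch, bottom-right corner
        labels[idx] = target_class      # flip the poisoned labels
        return images, labels

    # toy usage: 100 random 32x32 RGB "images", 10 classes, target class 7
    x = np.random.rand(100, 32, 32, 3).astype(np.float32)
    y = np.random.randint(0, 10, size=100)
    px, py = poison(x, y, target_class=7)

A model trained on (px, py) behaves normally on clean inputs but predicts class 7 whenever the patch is present, which is exactly the benign-sample versus trigger-sample split the abstract describes.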

Backdoor attack with imperceptible input and latent modification

K Doan, Y Lao, P Li - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
Recent studies have shown that deep neural networks (DNNs) are vulnerable to various
adversarial attacks. In particular, an adversary can inject a stealthy backdoor into a model …

DeepSweep: An evaluation framework for mitigating DNN backdoor attacks using data augmentation

H Qiu, Y Zeng, S Guo, T Zhang, M Qiu… - Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security, 2021 - dl.acm.org
Public resources and services (e.g., datasets, training platforms, pre-trained models) have
been widely adopted to ease the development of Deep Learning-based applications …
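
The title names the mitigation idea: transform inputs so a localized trigger no longer lines up with what the model memorized. A minimal sketch of that intuition follows; the specific transforms, shift range, and ensemble size are assumptions for illustration, not DeepSweep's actual augmentation policy.

    import numpy as np

    def augment_input(x, rng):
        """Randomly shift and flip an image before classification; spatial
        transforms can displace or distort a localized trigger patch."""
        dx, dy = rng.integers(-4, 5, size=2)
        x = np.roll(x, (dx, dy), axis=(0, 1))  # random translation
        if rng.random() < 0.5:
            x = x[:, ::-1, :]                  # random horizontal flip
        return x

    # usage sketch: average a model's predictions over augmented copies
    # rng = np.random.default_rng(0)
    # preds = np.mean([model(augment_input(x, rng)[None]) for _ in range(8)], axis=0)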

Black-box detection of backdoor attacks with limited information and data

Y Dong, X Yang, Z Deng, T Pang… - Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021 - openaccess.thecvf.com
Although deep neural networks (DNNs) have made rapid progress in recent years, they are
vulnerable in adversarial environments. A malicious backdoor could be embedded in a …

Detecting backdoors during the inference stage based on corruption robustness consistency

X Liu, M Li, H Wang, S Hu, D Ye… - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023 - openaccess.thecvf.com
Deep neural networks have been proven vulnerable to backdoor attacks. Detecting trigger
samples during the inference stage, i.e., test-time trigger sample detection, can prevent …
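
One plausible minimal reading of this test-time detection idea, as a sketch rather than the paper's exact procedure: apply common corruptions at several severities and measure how consistently the prediction survives them. The Gaussian-noise corruption, the severity scaling, the mock model interface, and any flagging threshold below are assumptions.

    import numpy as np

    def consistency_score(model, x, corruptions, severities=(1, 2, 3)):
        """Fraction of corrupted variants whose prediction matches the
        clean prediction; trigger-stamped inputs tend to show a different
        robustness pattern under corruption than clean inputs do."""
        clean_pred = model(x[None]).argmax()
        agree = []
        for corrupt in corruptions:
            for s in severities:
                agree.append(model(corrupt(x, s)[None]).argmax() == clean_pred)
        return float(np.mean(agree))

    def gaussian_noise(x, severity):
        # hypothetical corruption: additive noise scaled by severity
        return np.clip(x + np.random.normal(0, 0.05 * severity, x.shape), 0, 1)

    # toy usage with a mock model that returns random logits
    mock_model = lambda batch: np.random.rand(len(batch), 10)
    score = consistency_score(mock_model, np.random.rand(32, 32, 3), [gaussian_noise])

Inputs whose score deviates sharply from the typical clean-sample score would then be flagged as suspicious trigger samples.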

Backdooring multimodal learning

X Han, Y Wu, Q Zhang, Y Zhou, Y Xu… - IEEE Symposium on Security and Privacy (SP), 2024 - ieeexplore.ieee.org
Deep Neural Networks (DNNs) are vulnerable to backdoor attacks, which poison the training
set to alter the model's predictions on samples carrying a specific trigger. While existing efforts …

Can we use arbitrary objects to attack LiDAR perception in autonomous driving?

Y Zhu, C Miao, T Zheng, F Hajiaghajani, L Su… - Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021 - dl.acm.org
As an effective way to acquire accurate information about the driving environment, LiDAR
perception has been widely adopted in autonomous driving. The state-of-the-art LiDAR …

A Comprehensive Survey on Backdoor Attacks and Their Defenses in Face Recognition Systems

Q Le Roux, E Bourbao, Y Teglia, K Kallas - IEEE Access, 2024 - ieeexplore.ieee.org
Deep learning has significantly transformed face recognition, enabling the deployment of
large-scale, state-of-the-art solutions worldwide. However, the widespread adoption of deep …

Computation and data efficient backdoor attacks

Y Wu, X Han, H Qiu, T Zhang - Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023 - openaccess.thecvf.com
Backdoor attacks against deep learning have been widely studied. Various attack
techniques have been proposed for different domains and paradigms, e.g., image, point …