Interpreting adversarial examples in deep learning: A review
Deep learning technology is increasingly being applied in safety-critical scenarios but has
recently been found to be susceptible to imperceptible adversarial perturbations. This raises …
Backdoor learning: A survey
A backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
Backdoor attack with imperceptible input and latent modification
Recent studies have shown that deep neural networks (DNNs) are vulnerable to various
adversarial attacks. In particular, an adversary can inject a stealthy backdoor into a model …
DeepSweep: An evaluation framework for mitigating DNN backdoor attacks using data augmentation
Public resources and services (e.g., datasets, training platforms, pre-trained models) have
been widely adopted to ease the development of Deep Learning-based applications …
Black-box detection of backdoor attacks with limited information and data
Although deep neural networks (DNNs) have made rapid progress in recent years, they are
vulnerable in adversarial environments. A malicious backdoor could be embedded in a …
Detecting backdoors during the inference stage based on corruption robustness consistency
Deep neural networks are proven to be vulnerable to backdoor attacks. Detecting the trigger
samples during the inference stage, i.e., test-time trigger sample detection, can prevent …
Backdooring multimodal learning
Deep Neural Networks (DNNs) are vulnerable to backdoor attacks, which poison the training
set to alter the model prediction over samples with a specific trigger. While existing efforts …
Can we use arbitrary objects to attack LiDAR perception in autonomous driving?
As an effective way to acquire accurate information about the driving environment, LiDAR
perception has been widely adopted in autonomous driving. The state-of-the-art LiDAR …
A Comprehensive Survey on Backdoor Attacks and their Defenses in Face Recognition Systems
Deep learning has significantly transformed face recognition, enabling the deployment of
large-scale, state-of-the-art solutions worldwide. However, the widespread adoption of deep …
Computation and data efficient backdoor attacks
Backdoor attacks against deep learning have been widely studied. Various attack
techniques have been proposed for different domains and paradigms, e.g., image, point …