Recent advances in adversarial training for adversarial robustness
Adversarial training is one of the most effective approaches for defending against adversarial
examples for deep learning models. Unlike other defense strategies, adversarial training …
Trustworthy AI: From principles to practices
The rapid development of Artificial Intelligence (AI) technology has enabled the deployment
of various systems based on it. However, many current AI systems have been found to be vulnerable to …
Cross-entropy loss functions: Theoretical analysis and applications
Cross-entropy is a widely used loss function in applications. It coincides with the logistic loss
applied to the outputs of a neural network when the softmax is used. But what guarantees …
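The softmax/logistic-loss connection mentioned in this snippet can be made concrete. A minimal sketch in plain Python (the function name `cross_entropy` is illustrative, not from the paper): for logits $z$ and true class $y$, the cross-entropy of the softmax output is $-\log \mathrm{softmax}(z)_y = \log\sum_j e^{z_j} - z_y$.

```python
import math

def cross_entropy(logits, label):
    """Cross-entropy of softmax(logits) against class `label`.

    Uses the identity -log softmax(z)_y = logsumexp(z) - z_y,
    which avoids exponentiating large logits directly.
    """
    m = max(logits)  # stabilise the log-sum-exp
    logsumexp = m + math.log(sum(math.exp(z - m) for z in logits))
    return logsumexp - logits[label]

# Sanity check against the naive softmax formulation.
logits, label = [2.0, 0.5, -1.0], 0
probs = [math.exp(z) for z in logits]
naive = -math.log(probs[label] / sum(probs))
assert abs(cross_entropy(logits, label) - naive) < 1e-12
```

With two classes this reduces to the logistic loss $\log(1 + e^{-(z_y - z_{\bar y})})$ on the logit margin, which is the equivalence the abstract refers to.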
LAS-AT: adversarial training with learnable attack strategy
Adversarial training (AT) is typically formulated as a minimax problem whose
performance depends on the inner optimization that involves the generation of adversarial …
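The minimax formulation is $\min_\theta \max_{\|\delta\|_\infty \le \epsilon} L(\theta, x+\delta, y)$, and the inner maximization is commonly approximated with projected gradient descent (PGD). A hedged sketch on a toy one-feature logistic model (all names and the analytic gradient are illustrative, not the paper's method):

```python
import math

def loss(w, x, y):
    # logistic loss for a 1-D linear classifier, label y in {-1, +1}
    return math.log(1.0 + math.exp(-y * w * x))

def grad_x(w, x, y):
    # d loss / d x, analytic for this toy model
    return -y * w / (1.0 + math.exp(y * w * x))

def pgd_attack(w, x, y, eps=0.5, alpha=0.2, steps=10):
    """Inner maximisation: signed gradient ascent on the input,
    projecting back onto the l_inf ball [x - eps, x + eps]."""
    x_adv = x
    for _ in range(steps):
        step = alpha * (1.0 if grad_x(w, x_adv, y) >= 0 else -1.0)
        x_adv = max(x - eps, min(x + eps, x_adv + step))  # ascend, then project
    return x_adv

w, x, y = 1.0, 1.0, +1
x_adv = pgd_attack(w, x, y)
assert loss(w, x_adv, y) >= loss(w, x, y)  # attack should not decrease the loss
```

The outer minimization then updates $\theta$ on the perturbed examples; methods like LAS-AT differ in how this inner attack is parameterized and learned.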
Reflection backdoor: A natural backdoor attack on deep neural networks
Recent studies have shown that DNNs can be compromised by backdoor attacks crafted at
training time. A backdoor attack installs a backdoor into the victim model by injecting a …
Neural attention distillation: Erasing backdoor triggers from deep neural networks
Deep neural networks (DNNs) are known to be vulnerable to backdoor attacks, a training-time
attack that injects a trigger pattern into a small proportion of training data so as to control the …
Privacy and robustness in federated learning: Attacks and defenses
As data are increasingly stored in different silos and societies become more aware
of data privacy issues, the traditional centralized training of artificial intelligence (AI) models …
Adversarial weight perturbation helps robust generalization
The study of improving the robustness of deep neural networks against adversarial
examples has grown rapidly in recent years. Among the proposed defenses, adversarial training is the most …
Improving adversarial robustness requires revisiting misclassified examples
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by
imperceptible perturbations. A range of defense techniques have been proposed to improve …
Understanding and improving fast adversarial training
M Andriushchenko… - Advances in Neural …, 2020 - proceedings.neurips.cc
A recent line of work has focused on making adversarial training computationally efficient for
deep learning models. In particular, Wong et al. (2020) showed that $\ell_\infty$-adversarial …
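The "fast" recipe this abstract refers to replaces multi-step PGD with a single FGSM step taken from a random starting point inside the $\epsilon$-ball. A toy sketch on the same kind of one-feature logistic model (function name and step size $\alpha = 1.25\epsilon$ are assumptions for illustration):

```python
import math
import random

def fgsm_with_random_start(w, x, y, eps=0.5, alpha=0.625, seed=0):
    """Single-step l_inf attack from a random point in the eps-ball:
    random init, one signed gradient step, projection back onto the ball."""
    random.seed(seed)
    delta = random.uniform(-eps, eps)                    # random start inside the ball
    g = -y * w / (1.0 + math.exp(y * w * (x + delta)))   # d loss / d x at x + delta
    delta += alpha * (1.0 if g >= 0 else -1.0)           # one FGSM step
    delta = max(-eps, min(eps, delta))                   # project onto [-eps, eps]
    return x + delta

x_adv = fgsm_with_random_start(1.0, 1.0, +1)
assert abs(x_adv - 1.0) <= 0.5 + 1e-12  # perturbation stays inside the ball
```

The random initialization is the key ingredient the paper analyzes: without it, single-step training is prone to "catastrophic overfitting" where robustness to multi-step attacks collapses.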