LAS-AT: adversarial training with learnable attack strategy
Adversarial training (AT) is typically formulated as a minimax problem, whose
performance depends on the inner optimization that involves the generation of adversarial …
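For reference, the standard minimax formulation of AT referred to in this snippet (symbols \theta, \delta, \epsilon are the usual conventions, not taken from the paper) is

    \min_\theta \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{\|\delta\|_p \le \epsilon} \mathcal{L}\big(f_\theta(x+\delta),\, y\big) \Big]

where the inner maximization generates the adversarial example and the outer minimization trains the model on it.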
Learning to augment distributions for out-of-distribution detection
Open-world classification systems should discern out-of-distribution (OOD) data whose
labels deviate from those of in-distribution (ID) cases, motivating recent studies in OOD …
Robust generalization against photon-limited corruptions via worst-case sharpness minimization
Robust generalization aims to tackle the most challenging data distributions, those that are rare
in the training set and contain severe noise, i.e., photon-limited corruptions. Common …
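The title's "worst-case sharpness minimization" suggests an objective in the sharpness-aware minimization family; a generic sketch of that family (Foret et al.'s SAM, assumed here for illustration rather than this paper's exact loss) is

    \min_\theta \; \max_{\|\epsilon\|_2 \le \rho} \mathcal{L}(\theta + \epsilon)

i.e., the loss is minimized at the worst perturbation of the weights within a radius-\rho ball, which flattens the loss landscape.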
Watermarking for out-of-distribution detection
Out-of-distribution (OOD) detection aims to identify OOD data based on
representations extracted from well-trained deep models. However, existing methods largely …
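As background for score-based OOD detection on a well-trained model, a minimal sketch of the classic maximum-softmax-probability baseline (Hendrycks & Gimpel, 2017) is below. This is a generic post-hoc score, not the watermarking method of this paper, and the threshold value is a placeholder.

import torch
import torch.nn.functional as F

def msp_score(logits: torch.Tensor) -> torch.Tensor:
    # Maximum softmax probability: higher scores indicate more ID-like inputs.
    return F.softmax(logits, dim=-1).max(dim=-1).values

def flag_ood(logits: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
    # Inputs scoring below the (placeholder) threshold are flagged as OOD.
    return msp_score(logits) < threshold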
Harnessing out-of-distribution examples via augmenting content and style
Machine learning models are vulnerable to out-of-distribution (OOD) examples, and this
problem has drawn much attention. However, current methods lack a full understanding of …
Better safe than sorry: Preventing delusive adversaries with adversarial training
Delusive attacks aim to substantially degrade the test accuracy of the learning model by
slightly perturbing the features of correctly labeled training examples. By formalizing this …
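One common bilevel formalization of a delusive attack (symbols assumed for illustration; the snippet above is cut off before the paper's own formalization) perturbs the training set \hat{\mathcal{D}} within a ball around the clean set \mathcal{D} so as to maximize the clean-data risk of the model trained on it:

    \max_{\hat{\mathcal{D}} \in B_\epsilon(\mathcal{D})} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \big[ \mathcal{L}(f_{\theta^*(\hat{\mathcal{D}})}(x), y) \big],
    \quad \text{where} \quad \theta^*(\hat{\mathcal{D}}) = \arg\min_\theta \; \mathbb{E}_{(x',y')\sim\hat{\mathcal{D}}} \big[ \mathcal{L}(f_\theta(x'), y') \big]

Adversarial training hedges against such attacks by refusing to trust small feature perturbations at training time.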
The enemy of my enemy is my friend: Exploring inverse adversaries for improving adversarial training
Although current deep learning techniques have yielded superior performance on various
computer vision tasks, they are still vulnerable to adversarial examples. Adversarial …
Certified robustness via dynamic margin maximization and improved Lipschitz regularization
To improve the robustness of deep classifiers against adversarial perturbations, many
approaches have been proposed, such as designing new architectures with better …
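The link between Lipschitz regularization and certified robustness is the standard certificate (a textbook bound, not this paper's specific result): if every pairwise logit difference f_y - f_j is L-Lipschitz in the \ell_2 norm, then the logit margin guarantees

    M(x) = \min_{j \ne y} \big( f_y(x) - f_j(x) \big), \qquad \|\delta\|_2 < \frac{M(x)}{L} \;\Rightarrow\; \arg\max_j f_j(x+\delta) = y

so smaller Lipschitz constants and larger margins directly enlarge the certified radius.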
Exploring and exploiting decision boundary dynamics for adversarial robustness
The robustness of a deep classifier can be characterized by its margins: the decision
boundary's distances to natural data points. However, it is unclear whether existing robust …
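Concretely, the margin the snippet refers to (notation assumed here) is the distance from a point x to the decision boundary of classifier f:

    \mathrm{margin}(x) = \min_{\delta} \|\delta\|_p \quad \text{s.t.} \quad \arg\max_j f_j(x+\delta) \ne \arg\max_j f_j(x)

so a larger margin means a larger perturbation is needed to flip the prediction.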
Defenses in adversarial machine learning: A survey
The adversarial phenomenon has been widely observed in machine learning (ML) systems,
especially those using deep neural networks: ML systems may produce …