CFA: Class-wise calibrated fair adversarial training

Z Wei, Y Wang, Y Guo, Y Wang - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Adversarial training has been widely acknowledged as the most effective method for improving
the robustness of Deep Neural Networks (DNNs) against adversarial examples …

Revisiting adversarial training for ImageNet: Architectures, training and generalization across threat models

ND Singh, F Croce, M Hein - Advances in Neural …, 2024 - proceedings.neurips.cc
While adversarial training has been extensively studied for ResNet architectures and low-resolution
datasets like CIFAR-10, much less is known for ImageNet. Given the recent …

Balance, imbalance, and rebalance: Understanding robust overfitting from a minimax game perspective

Y Wang, L Li, J Yang, Z Lin… - Advances in neural …, 2024 - proceedings.neurips.cc
Adversarial Training (AT) has become arguably the state-of-the-art algorithm for extracting
robust features. However, researchers have recently noticed that AT suffers from severe robust …

Robust principles: Architectural design principles for adversarially robust CNNs

SY Peng, W Xu, C Cornelius, M Hull, K Li… - arXiv preprint arXiv …, 2023 - arxiv.org
Our research aims to unify existing works' diverging opinions on how architectural
components affect the adversarial robustness of CNNs. To accomplish our goal, we …

TERD: A unified framework for safeguarding diffusion models against backdoors

Y Mo, H Huang, M Li, A Li, Y Wang - arXiv preprint arXiv:2409.05294, 2024 - arxiv.org
Diffusion models have achieved notable success in image generation, but they remain
highly vulnerable to backdoor attacks, which compromise their integrity by producing …

Generalist: Decoupling natural and robust generalization

H Wang, Y Wang - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Deep neural networks obtained by standard training have been constantly plagued by
adversarial examples. Although adversarial training demonstrates its capability to defend …

On the duality between sharpness-aware minimization and adversarial training

Y Zhang, H He, J Zhu, H Chen, Y Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Adversarial Training (AT), which adversarially perturbs the input samples during training, has
been acknowledged as one of the most effective defenses against adversarial attacks, yet …
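For orientation, the minimal sketch below illustrates the training loop this snippet describes ("adversarially perturbs the input samples during training"), using an L-infinity PGD attack in PyTorch. The epsilon, step size, and iteration count are illustrative assumptions, not values taken from the cited paper.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Craft L-inf bounded adversarial examples with projected gradient descent.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()           # gradient ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    # One AT step: train on the perturbed batch instead of the clean one.
    model.eval()                       # keep BatchNorm statistics fixed while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()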

Sharpness-aware minimization alone can improve adversarial robustness

Z Wei, J Zhu, Y Zhang - arXiv preprint arXiv:2305.05392, 2023 - arxiv.org
Sharpness-Aware Minimization (SAM) is an effective method for improving generalization
ability by regularizing loss sharpness. In this paper, we explore SAM in the context of …
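A minimal sketch of a single SAM update, matching the snippet's description of regularizing loss sharpness: ascend to an approximate worst-case point within a small ball in parameter space, then apply the descent step computed there. The radius rho and the base optimizer are assumptions for illustration, not settings from the cited paper.

import torch
import torch.nn.functional as F

def sam_step(model, base_optimizer, x, y, rho=0.05):
    # First pass: gradients at the current weights.
    base_optimizer.zero_grad()
    F.cross_entropy(model(x), y).backward()

    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2) + 1e-12

    perturbations = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                perturbations.append(None)
                continue
            e = rho * p.grad / grad_norm             # scaled ascent direction
            p.add_(e)                                # move to the perturbed weights
            perturbations.append(e)

    # Second pass: gradients at the perturbed weights drive the actual update.
    base_optimizer.zero_grad()
    F.cross_entropy(model(x), y).backward()

    with torch.no_grad():
        for p, e in zip(model.parameters(), perturbations):
            if e is not None:
                p.sub_(e)                            # restore the original weights
    base_optimizer.step()                            # descend with the sharpness-aware gradient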

Revisiting Adversarial Training at Scale

Z Wang, X Li, H Zhu, C Xie - Proceedings of the IEEE/CVF …, 2024 - openaccess.thecvf.com
The machine learning community has witnessed a drastic change in the training pipeline
pivoted by those "foundation models" with unprecedented scales. However, the field of …

Robust Distillation via Untargeted and Targeted Intermediate Adversarial Samples

J Dong, P Koniusz, J Chen… - Proceedings of the …, 2024 - openaccess.thecvf.com
Adversarially robust knowledge distillation aims to compress large-scale models into
lightweight models while preserving adversarial robustness and natural performance on a …
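As a generic reference point only, the sketch below shows a common adversarially robust distillation objective: hard-label cross-entropy plus a temperature-scaled KL term against the teacher, both evaluated on an adversarial batch (e.g. crafted with the PGD helper sketched earlier). The loss weights, temperature, and the way the adversarial examples are produced are assumptions; this is not the untargeted/targeted intermediate-sample scheme of the cited paper.

import torch
import torch.nn.functional as F

def robust_distillation_loss(student, teacher, x_adv, y, T=4.0, alpha=0.9):
    # Student fits the hard labels and matches the teacher's soft predictions
    # on the same perturbed batch; temperature scaling follows standard KD.
    s_logits = student(x_adv)
    with torch.no_grad():
        t_logits = teacher(x_adv)
    kl = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    ce = F.cross_entropy(s_logits, y)
    return alpha * kl + (1.0 - alpha) * ce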