Adversarial Initialization with Universal Adversarial Perturbation: A New Approach to Fast Adversarial Training

C Pan, Q Li, X Yao - Proceedings of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
Traditional adversarial training, while effective at improving machine learning model
robustness, is computationally intensive. Fast Adversarial Training (FAT) addresses this by …
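The snippet above contrasts single-step fast adversarial training with the multi-step standard variant. As a rough illustration only (the paper's actual UAP construction is not visible in this excerpt), the following numpy sketch shows the general idea: initialize every sample's perturbation from one shared universal perturbation instead of an independent random draw, take a single FGSM-style step, and clip back into the epsilon-ball. All names (`fgsm_fat_step`, the toy batch, the stand-in gradient sign) are hypothetical.

```python
import numpy as np

def fgsm_fat_step(x, grad_sign, init, eps, alpha):
    """One fast-adversarial-training step: start from a given
    initialization, take a single FGSM step of size alpha in the
    direction of the loss-gradient sign, and clip the result back
    into the L-infinity eps-ball around x."""
    delta = np.clip(init + alpha * grad_sign, -eps, eps)
    return x + delta

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(4, 8))           # toy batch of 4 samples
# A single "universal" initialization shared (broadcast) across the
# whole batch, in contrast to independent per-sample random starts.
uap = rng.uniform(-0.03, 0.03, size=(8,))
grad_sign = np.sign(rng.standard_normal((4, 8))) # stand-in for a real gradient
x_adv = fgsm_fat_step(x, grad_sign, uap, eps=0.03, alpha=0.0375)
```

In a real FAT loop `grad_sign` would come from backpropagating the training loss, and the universal perturbation would itself be learned across the dataset rather than sampled once.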

ATRA: Efficient adversarial training with high-robust area

S Liu, Y Han - The Visual Computer, 2024 - Springer
Recent research has shown the vulnerability of deep networks to adversarial perturbations.
Adversarial training and its variants have been shown to be effective defense algorithms …

Batch-in-Batch: a new adversarial training framework for initial perturbation and sample selection

Y Wu, P Peng, B Cai, L Li - arXiv preprint arXiv:2406.04070, 2024 - arxiv.org
Adversarial training methods commonly generate independent initial perturbations for
adversarial samples from a simple uniform distribution, and obtain the training batch for the …
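The baseline this entry revisits, drawing each sample's initial perturbation independently from a uniform distribution, can be sketched in a few lines of numpy; the helper name `uniform_init` is illustrative, not from the paper.

```python
import numpy as np

def uniform_init(batch_shape, eps, rng):
    """Independent per-sample initial perturbation drawn from
    U(-eps, eps), the common starting point for the single attack
    step in fast adversarial training."""
    return rng.uniform(-eps, eps, size=batch_shape)

rng = np.random.default_rng(1)
delta0 = uniform_init((4, 8), eps=0.03, rng=rng)  # one start per sample
```

Because each draw is independent of both the sample and the rest of the batch, it carries no information about where strong adversarial examples lie, which is the kind of initialization the Batch-in-Batch framework aims to improve on.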

Catastrophic Overfitting: A Potential Blessing in Disguise

M Zhao, L Zhang, Y Kong, B Yin - arXiv preprint arXiv:2402.18211, 2024 - arxiv.org
Fast Adversarial Training (FAT) has gained increasing attention within the research
community owing to its efficacy in improving adversarial robustness. Particularly noteworthy …

Rethinking Fast Adversarial Training: A Splitting Technique to Overcome Catastrophic Overfitting

M Zareapoor, P Shamsolmoali - European Conference on Computer …, 2024 - ecva.net
Catastrophic overfitting (CO) poses a significant challenge to fast adversarial training
(FastAT), particularly at large perturbation scales, leading to dramatic reductions in …