Robust overfitting may be mitigated by properly learned smoothening

T Chen, Z Zhang, S Liu, S Chang… - … Conference on Learning …, 2020 - openreview.net
A recent study (Rice et al., 2020) revealed that overfitting is a dominant phenomenon in
adversarially robust training of deep networks, and that appropriate early-stopping of …
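The early-stopping remedy this snippet points to amounts to checkpoint selection on robust validation accuracy. A minimal sketch, assuming a PyTorch model and hypothetical train_one_epoch / eval_robust_acc callables (neither is from the paper):

```python
import copy

def train_with_robust_early_stopping(model, train_one_epoch, eval_robust_acc, epochs=100):
    # train_one_epoch: runs one epoch of adversarial training (hypothetical).
    # eval_robust_acc: robust (e.g., PGD) accuracy on a held-out split (hypothetical).
    best_acc = 0.0
    best_state = copy.deepcopy(model.state_dict())
    for _ in range(epochs):
        train_one_epoch(model)
        acc = eval_robust_acc(model)
        # Robust accuracy tends to peak early and then decay (robust
        # overfitting), so keep the best intermediate checkpoint.
        if acc > best_acc:
            best_acc, best_state = acc, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
    return model, best_acc
```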

Robustart: Benchmarking robustness on architecture design and training techniques

S Tang, R Gong, Y Wang, A Liu, J Wang… - arXiv preprint arXiv …, 2021 - arxiv.org
Deep neural networks (DNNs) are vulnerable to adversarial noise, which motivates
benchmarking model robustness. Existing benchmarks mainly focus on evaluating defenses …

Self-adaptive logit balancing for deep neural network robustness: Defence and detection of adversarial attacks

J Wei, L Yao, Q Meng - Neurocomputing, 2023 - Elsevier
With the widespread applications of Deep Neural Networks (DNNs), the safety of DNNs has
become a significant issue. The vulnerability of neural networks to adversarial …

Union label smoothing adversarial training: Recognize small perturbation attacks and reject larger perturbation attacks balanced

J Huang, H Xie, C Wu, X Xiang - Future Generation Computer Systems, 2023 - Elsevier
Recently, several adversarial training methods have been proposed for rejecting
perturbation-based adversarial examples, which enhance the robustness of deep neural …

LAFED: Towards robust ensemble models via Latent Feature Diversification

W Zhuang, L Huang, C Gao, N Liu - Pattern Recognition, 2024 - Elsevier
Adversarial examples pose a significant challenge to the security of deep neural networks
(DNNs). In order to defend against malicious attacks, adversarial training forces DNNs to …

Towards Test Time Domain Adaptation via Negative Label Smoothing

H Yang, H Zuo, R Zhou, M Wang, Y Zhou - Neurocomputing, 2024 - Elsevier
Label Smoothing (LS) is a widely-used training technique that adjusts hard labels towards a
softer distribution, which prevents the model from becoming over-confident and enhances model …
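The mechanism this snippet describes is easy to make concrete. A minimal sketch of standard label smoothing in PyTorch (the paper's negative, test-time variant is not reproduced here); epsilon=0.1 is an illustrative default:

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, targets, epsilon=0.1):
    # Smoothed target distribution: q = (1 - epsilon) * one_hot(y) + epsilon / K
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    one_hot = F.one_hot(targets, num_classes).float()
    smooth_targets = (1.0 - epsilon) * one_hot + epsilon / num_classes
    # Cross-entropy against the softened distribution.
    return -(smooth_targets * log_probs).sum(dim=-1).mean()
```

With epsilon = 0 this reduces to ordinary cross-entropy; larger epsilon pulls the target distribution toward uniform, discouraging over-confident logits.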

Masked spatial–spectral autoencoders are excellent hyperspectral defenders

J Qi, Z Gong, X Liu, C Chen… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Deep learning (DL) methodology contributes a lot to the development of the hyperspectral
image (HSI) analysis community. However, it also makes HSI analysis systems vulnerable to …

Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism

L Chen, L Zhao, CYC Chen - Medical physics, 2021 - Wiley Online Library
Purpose Deep learning has achieved impressive performance across a variety of tasks,
including medical image processing. However, recent research has shown that deep neural …

Batch-in-Batch: a new adversarial training framework for initial perturbation and sample selection

Y Wu, P Peng, B Cai, L Li - Complex & Intelligent Systems, 2025 - Springer
Adversarial training methods commonly generate initial perturbations that are independent
across epochs, and obtain subsequent adversarial training samples without selection …
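The baseline this snippet describes, independent random initial perturbations in every epoch, is the usual PGD adversarial-example step. A minimal sketch assuming PyTorch; eps, alpha, and steps are illustrative values, not the paper's:

```python
import torch
import torch.nn.functional as F

def pgd_example(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Fresh random start on every call, so initial perturbations are
    # independent across epochs (the baseline that Batch-in-Batch revisits).
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # Ascend the loss, then project back into the epsilon-ball.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    return (x + delta).clamp(0.0, 1.0)
```

Per the snippet, Batch-in-Batch departs from this baseline in how initial perturbations are generated and how training samples are subsequently selected.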

In and Out-of-Domain Text Adversarial Robustness via Label Smoothing

Y Yang, S Dan, D Roth, I Lee - arXiv preprint arXiv:2212.10258, 2022 - arxiv.org
Recently it has been shown that state-of-the-art NLP models are vulnerable to adversarial
attacks, where the predictions of a model can be drastically altered by slight modifications to …