Robust overfitting may be mitigated by properly learned smoothening
A recent study (Rice et al., 2020) revealed overfitting to be a dominant phenomenon in
adversarially robust training of deep networks, and that appropriate early-stopping of …
Robustart: Benchmarking robustness on architecture design and training techniques
Deep neural networks (DNNs) are vulnerable to adversarial noise, which motivates benchmarking
model robustness. Existing benchmarks mainly focus on evaluating defenses …
Self-adaptive logit balancing for deep neural network robustness: Defence and detection of adversarial attacks
With the widespread applications of Deep Neural Networks (DNNs), the safety of DNNs has
become a significant issue. The vulnerability of the neural networks against adversarial …
Union label smoothing adversarial training: Recognize small perturbation attacks and reject larger perturbation attacks balanced
J Huang, H Xie, C Wu, X Xiang - Future Generation Computer Systems, 2023 - Elsevier
Recently, several adversarial training methods have been proposed for rejecting
perturbation-based adversarial examples, which enhance the robustness of deep neural …
LAFED: Towards robust ensemble models via Latent Feature Diversification
W Zhuang, L Huang, C Gao, N Liu - Pattern Recognition, 2024 - Elsevier
Adversarial examples pose a significant challenge to the security of deep neural networks
(DNNs). In order to defend against malicious attacks, adversarial training forces DNNs to …
Towards Test Time Domain Adaptation via Negative Label Smoothing
Label Smoothing (LS) is a widely-used training technique that adjusts hard labels towards a
softer distribution, which prevents the model from becoming over-confident and enhances model …
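The label-smoothing transform described in this abstract can be illustrated with a minimal NumPy sketch; the function name `smooth_labels` and the epsilon value are illustrative, not taken from the paper:

```python
import numpy as np

def smooth_labels(labels, num_classes, epsilon=0.1):
    """Convert hard integer labels to smoothed one-hot distributions.

    The true class keeps probability (1 - epsilon), and the epsilon mass
    is spread uniformly over all classes -- the standard LS formulation.
    """
    one_hot = np.eye(num_classes)[labels]
    return one_hot * (1.0 - epsilon) + epsilon / num_classes

# Example: 3 classes, hard labels [0, 2], epsilon = 0.1.
# The true class ends up with 0.9 + 0.1/3, the others with 0.1/3 each.
smoothed = smooth_labels(np.array([0, 2]), num_classes=3, epsilon=0.1)
```

Each output row still sums to 1, so the smoothed targets remain valid probability distributions for a cross-entropy loss.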
Masked spatial–spectral autoencoders are excellent hyperspectral defenders
J Qi, Z Gong, X Liu, C Chen… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Deep learning (DL) methodology contributes a lot to the development of hyperspectral
image (HSI) analysis community. However, it also makes HSI analysis systems vulnerable to …
Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism
Purpose: Deep learning has achieved impressive performance across a variety of tasks,
including medical image processing. However, recent research has shown that deep neural …
Batch-in-Batch: a new adversarial training framework for initial perturbation and sample selection
Y Wu, P Peng, B Cai, L Li - Complex & Intelligent Systems, 2025 - Springer
Adversarial training methods commonly generate initial perturbations that are independent
across epochs, and obtain subsequent adversarial training samples without selection …
In and Out-of-Domain Text Adversarial Robustness via Label Smoothing
Recently it has been shown that state-of-the-art NLP models are vulnerable to adversarial
attacks, where the predictions of a model can be drastically altered by slight modifications to …