Towards verifying the geometric robustness of large-scale neural networks

F Wang, P Xu, W Ruan, X Huang - … of the AAAI Conference on Artificial …, 2023 - ojs.aaai.org
Deep neural networks (DNNs) are known to be vulnerable to adversarial geometric
transformations. This paper aims to verify the robustness of large-scale DNNs against the …

Towards Fairness-Aware Adversarial Learning

Y Zhang, T Zhang, R Mu, X Huang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Although adversarial training (AT) has proven effective in enhancing the model's robustness,
the recently revealed issue of fairness in robustness has not been well addressed, i.e., the …

Self-adaptive adversarial training for robust medical segmentation

F Wang, Z Fu, Y Zhang, W Ruan - International Conference on Medical …, 2023 - Springer
Adversarial training has been demonstrated to be one of the most effective approaches to
training deep neural networks that are robust to malicious perturbations. Research on …

Comparative evaluation of recent universal adversarial perturbations in image classification

J Weng, Z Luo, D Lin, S Li - Computers & Security, 2023 - Elsevier
The vulnerability of Convolutional Neural Networks (CNNs) to adversarial samples
has recently garnered significant attention in the machine learning community. Furthermore …

Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond

R Mu, L Marcolino, Q Ni, W Ruan - Neural Networks, 2024 - Elsevier
Recent years have witnessed increasing interest in adversarial attacks on images, while
adversarial video attacks have seldom been explored. In this paper, we propose a sparse …

Model-agnostic reachability analysis on deep neural networks

C Zhang, W Ruan, F Wang, P Xu, G Min… - Pacific-Asia Conference …, 2023 - Springer
Verification plays an essential role in the formal analysis of safety-critical systems. Most
current verification methods have specific requirements when working on Deep Neural …

Crafting Targeted Universal Adversarial Perturbations: Considering Images as Noise

H Wang, D Cai, L Wang, Z Xiong - IEEE Access, 2023 - ieeexplore.ieee.org
The vulnerability of Deep Neural Networks (DNNs) to adversarial perturbations has been
demonstrated in a large body of research. Compared to image-dependent adversarial …

A model and method for training a perturbation-resilient medical diagnostic image recognition system

ВО Кугук - 2023 - essuir.sumdu.edu.ua
Information and software components were developed that use a model-agnostic
meta-learning method, which showed improved effectiveness on the example of …