Adversarial attacks and defenses in machine learning-empowered communication systems and networks: A contemporary survey

Y Wang, T Sun, S Li, X Yuan, W Ni… - … Surveys & Tutorials, 2023 - ieeexplore.ieee.org
Adversarial attacks and defenses in machine learning and deep neural networks (DNNs) have
been gaining significant attention due to the rapidly growing applications of deep learning in …

Adversarial training methods for deep learning: A systematic review

W Zhao, S Alwidian, QH Mahmoud - Algorithms, 2022 - mdpi.com
Deep neural networks are exposed to the risk of adversarial attacks via the fast gradient sign
method (FGSM), projected gradient descent (PGD) attacks, and other attack algorithms …
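The snippet above names the fast gradient sign method (FGSM) as a canonical attack. As a minimal sketch of the idea (not the paper's own code), the example below runs FGSM against a hypothetical logistic-regression classifier, where the input-gradient of the loss has a closed form:

```python
import numpy as np

def fgsm_attack(x, y, w, eps):
    """FGSM on a logistic-regression model (illustrative toy setup).

    Loss: L(x) = -log(sigmoid(y * w.x)) with label y in {-1, +1}.
    Gradient wrt x: dL/dx = -y * sigmoid(-y * w.x) * w.
    FGSM perturbs each coordinate by eps in the direction that
    increases the loss: x_adv = x + eps * sign(dL/dx).
    """
    margin = y * np.dot(w, x)
    grad = -y * (1.0 / (1.0 + np.exp(margin))) * w  # sigmoid(-margin) * (-y) * w
    return x + eps * np.sign(grad)

# Toy example: x is correctly classified as +1 (w.x = 0.6 > 0);
# an eps=0.4 perturbation flips the prediction (w.x_adv = -0.8 < 0).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.1, 0.2])
x_adv = fgsm_attack(x, 1, w, eps=0.4)
```

PGD, also mentioned above, iterates this same signed-gradient step with a projection back onto the allowed perturbation ball.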

Cddfuse: Correlation-driven dual-branch feature decomposition for multi-modality image fusion

Z Zhao, H Bai, J Zhang, Y Zhang, S Xu… - Proceedings of the …, 2023 - openaccess.thecvf.com
Multi-modality (MM) image fusion aims to render fused images that maintain the merits of
different modalities, e.g., functional highlights and detailed textures. To tackle the challenge in …

Binary neural networks: A survey

H Qin, R Gong, X Liu, X Bai, J Song, N Sebe - Pattern Recognition, 2020 - Elsevier
The binary neural network, which largely saves storage and computation, serves as a
promising technique for deploying deep models on resource-limited devices. However, the …
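As an illustration of the binarization these two surveys cover, the sketch below applies deterministic sign binarization with a per-tensor scaling factor in the XNOR-Net style (an assumption here, not this survey's specific method), where alpha = mean(|W|) minimizes the L2 reconstruction error of W by alpha * sign(W):

```python
import numpy as np

def binarize(w):
    """Sign-binarize a weight tensor with a scalar scaling factor.

    XNOR-style approximation: W ~= alpha * sign(W), where
    alpha = mean(|W|) is the least-squares optimal scale.
    Storage drops to 1 bit per weight plus one float for alpha.
    """
    alpha = np.mean(np.abs(w))
    return alpha * np.sign(w), alpha

w = np.array([0.7, -0.3, 0.5, -0.9])
w_bin, alpha = binarize(w)  # alpha = 0.6, w_bin = [0.6, -0.6, 0.6, -0.6]
```

At inference time the {-1, +1} pattern enables XNOR/popcount arithmetic, which is where the computation savings mentioned in the snippet come from.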

X-Adv: Physical adversarial object attacks against x-ray prohibited item detection

A Liu, J Guo, J Wang, S Liang, R Tao, W Zhou… - 32nd USENIX Security …, 2023 - usenix.org
Adversarial attacks are valuable for evaluating the robustness of deep learning models.
Existing attacks are primarily conducted on the visible light spectrum (e.g., pixel-wise texture …

A comprehensive study on robustness of image classification models: Benchmarking and rethinking

C Liu, Y Dong, W Xiang, X Yang, H Su, J Zhu… - International Journal of …, 2024 - Springer
The robustness of deep neural networks is frequently compromised when faced with
adversarial examples, common corruptions, and distribution shifts, posing a significant …

Robustart: Benchmarking robustness on architecture design and training techniques

S Tang, R Gong, Y Wang, A Liu, J Wang… - arXiv preprint arXiv …, 2021 - arxiv.org
Deep neural networks (DNNs) are vulnerable to adversarial noises, which motivates the
benchmark of model robustness. Existing benchmarks mainly focus on evaluating defenses …

Bibench: Benchmarking and analyzing network binarization

H Qin, M Zhang, Y Ding, A Li, Z Cai… - International …, 2023 - proceedings.mlr.press
Network binarization emerges as one of the most promising compression approaches
offering extraordinary computation and memory savings by minimizing the bit-width …

Exploring the relationship between architectural design and adversarially robust generalization

A Liu, S Tang, S Liang, R Gong… - Proceedings of the …, 2023 - openaccess.thecvf.com
Adversarial training has been demonstrated to be one of the most effective remedies for
defending against adversarial examples, yet it often suffers from the huge robustness generalization …

Bias-based universal adversarial patch attack for automatic check-out

A Liu, J Wang, X Liu, B Cao, C Zhang, H Yu - Computer Vision–ECCV …, 2020 - Springer
Adversarial examples are inputs with imperceptible perturbations that easily mislead
deep neural networks (DNNs). Recently, adversarial patch, with noise confined to a small …
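The patch setting described above confines the perturbation to a small image region rather than spreading it pixel-wise. A minimal sketch of just the patch-application step (a hypothetical helper; real attacks also optimize the patch contents, which is omitted here):

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Overwrite a small rectangular region of `image` with `patch`.

    Models the 'noise confined to a small region' setting: only the
    pixels under the patch change, the rest of the image is untouched.
    """
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

image = np.zeros((8, 8))
patch = np.ones((2, 2))
patched = apply_patch(image, patch, 3, 3)  # only the 2x2 region at (3, 3) changes
```

Because the patch occupies a fixed, physically realizable region, it can be printed and placed in a scene, which is what makes this attack family practical beyond the digital domain.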