A survey of adversarial defenses and robustness in NLP

S Goyal, S Doddapaneni, MM Khapra… - ACM Computing …, 2023 - dl.acm.org
In the past few years, it has become increasingly evident that deep neural networks are not
resilient enough to withstand adversarial perturbations in input data, leaving them …

Adversarial machine learning in wireless communications using RF data: A review

D Adesina, CC Hsieh, YE Sagduyu… - … Surveys & Tutorials, 2022 - ieeexplore.ieee.org
Machine learning (ML) provides effective means to learn from spectrum data and solve
complex tasks involved in wireless communications. Supported by recent advances in …

Adversarial attack and defense technologies in natural language processing: A survey

S Qiu, Q Liu, S Zhou, W Huang - Neurocomputing, 2022 - Elsevier
Recently, adversarial attack and defense techniques have made remarkable progress and have been widely applied in the computer vision field, promoting its rapid …

Towards robustness against natural language word substitutions

X Dong, AT Luu, R Ji, H Liu - arXiv preprint arXiv:2107.13541, 2021 - arxiv.org
Robustness against word substitutions has a well-defined and widely accepted form, i.e., using semantically similar words as substitutions, and thus it is considered a fundamental …

Robust natural language processing: Recent advances, challenges, and future directions

M Omar, S Choi, DH Nyang, D Mohaisen - IEEE Access, 2022 - ieeexplore.ieee.org
Recent natural language processing (NLP) techniques have achieved high performance on benchmark data sets, primarily due to the significant improvement in the …

Searching for an effective defender: Benchmarking defense against adversarial word substitution

Z Li, J Xu, J Zeng, L Li, X Zheng, Q Zhang… - arXiv preprint arXiv …, 2021 - arxiv.org
Recent studies have shown that deep neural networks are vulnerable to intentionally crafted
adversarial examples, and various methods have been proposed to defend against …

Certified robustness to text adversarial attacks by randomized [MASK]

J Zeng, J Xu, X Zheng, X Huang - Computational Linguistics, 2023 - direct.mit.edu
Very recently, a few certified defense methods have been developed to provably guarantee the robustness of a text classifier to adversarial synonym substitutions. However, all the …

Grey-box adversarial attack and defence for sentiment classification

Y Xu, X Zhong, AJ Yepes, JH Lau - arXiv preprint arXiv:2103.11576, 2021 - arxiv.org
We introduce a grey-box adversarial attack and defence framework for sentiment
classification. We address the issues of differentiability, label preservation and input …

Transferable multimodal attack on vision-language pre-training models

H Wang, K Dong, Z Zhu, H Qin, A Liu, X Fang… - 2024 IEEE Symposium …, 2024 - computer.org
Vision-Language Pre-training (VLP) models have achieved remarkable success in practice, while being easily misled by adversarial attacks. Though harmful, adversarial …

Certified robustness to word substitution attack with differential privacy

W Wang, P Tang, J Lou, L Xiong - … of the 2021 conference of the …, 2021 - aclanthology.org
The robustness and security of natural language processing (NLP) models are of significant importance in real-world applications. In the context of text classification tasks, adversarial …