Authors
John X Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, Yanjun Qi
Publication date
2020
Journal
EMNLP 2020 (Findings)
Description
State-of-the-art attacks on NLP models lack a shared definition of what constitutes a successful attack. We distill ideas from past work into a unified framework: a successful natural language adversarial example is a perturbation that fools the model and follows some linguistic constraints. We then analyze the outputs of two state-of-the-art synonym substitution attacks. We find that their perturbations often do not preserve semantics, and 38% introduce grammatical errors. Human surveys reveal that to successfully preserve semantics, we need to significantly increase the minimum cosine similarities between the embeddings of swapped words and between the sentence encodings of original and perturbed sentences. With constraints adjusted to better preserve semantics and grammaticality, the attack success rate drops by over 70 percentage points.
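The two constraints mentioned in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the word vectors and sentence encodings are assumed to come from elsewhere (attacks of this kind typically use counter-fitted word embeddings and a sentence encoder such as the Universal Sentence Encoder), and the threshold values are placeholders, not the values determined by the paper's human surveys.

# A minimal sketch of the two cosine-similarity constraints on a synonym
# substitution: the swapped word's embedding must stay close to the original
# word's, and the perturbed sentence's encoding must stay close to the
# original sentence's. Vectors and thresholds here are illustrative.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_valid_swap(orig_word_vec: np.ndarray,
                  new_word_vec: np.ndarray,
                  orig_sent_enc: np.ndarray,
                  new_sent_enc: np.ndarray,
                  min_word_sim: float = 0.9,    # placeholder threshold
                  min_sent_sim: float = 0.9) -> bool:  # placeholder threshold
    # Accept the substitution only if both constraints are satisfied;
    # raising these minimums is what drives the attack success rate down.
    return (cosine_similarity(orig_word_vec, new_word_vec) >= min_word_sim and
            cosine_similarity(orig_sent_enc, new_sent_enc) >= min_sent_sim)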
Total citations
[Citations-per-year chart: 2020–2024]
Scholar articles
JX Morris, E Lifland, J Lanchantin, Y Ji, Y Qi - arXiv preprint arXiv:2004.14174, 2020