Authors
Wei Emma Zhang, Quan Z Sheng, Ahoud Abdulrahmn F Alhazmi, Chenliang Li
Publication date
2020/3
Journal
ACM Transactions on Intelligent Systems and Technology (TIST)
Publisher
ACM
Description
With the development of high-performance computing devices, deep neural networks (DNNs) have in recent years gained significant popularity in many Artificial Intelligence (AI) applications. However, previous efforts have shown that DNNs are vulnerable to strategically modified samples, named adversarial examples. These samples are generated with some imperceptible perturbations, but can fool DNNs into giving false predictions. Inspired by the popularity of generating adversarial examples against DNNs in Computer Vision (CV), research efforts on attacking DNNs for Natural Language Processing (NLP) applications have emerged in recent years. However, the intrinsic difference between images (CV) and text (NLP) makes it challenging to apply attack methods from CV directly to NLP. Various methods have been proposed that address this difference and attack a wide range of NLP applications. In this article, we present a …
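The "imperceptible perturbations" the abstract refers to are, in the CV setting, most commonly generated with gradient-based attacks such as the fast gradient sign method (FGSM). Below is a minimal numpy sketch of that idea on a toy linear classifier; the weights, dimensions, and epsilon here are illustrative assumptions, not the survey's own method:

```python
import numpy as np

# Hypothetical toy "model": logits = W @ x + b with softmax cross-entropy loss.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # 3 classes, 8 input features (assumed for illustration)
b = np.zeros(3)

def predict(x):
    return np.argmax(W @ x + b)

def loss_grad_wrt_input(x, y):
    """Gradient of softmax cross-entropy w.r.t. the input x."""
    logits = W @ x + b
    p = np.exp(logits - logits.max())
    p /= p.sum()
    p[y] -= 1.0                # d(loss)/d(logits) = softmax(logits) - onehot(y)
    return W.T @ p             # chain rule back through logits = W @ x + b

x = rng.normal(size=8)
y = predict(x)                 # treat the model's own label as ground truth

# FGSM: one step in the sign of the input gradient, bounded by epsilon.
epsilon = 0.5
x_adv = x + epsilon * np.sign(loss_grad_wrt_input(x, y))

print("clean prediction:      ", y)
print("adversarial prediction:", predict(x_adv))
print("max perturbation:      ", np.abs(x_adv - x).max())
```

Note that this gradient step lands in a continuous input space; for text, the perturbed point must somehow be mapped back to valid discrete tokens. That is precisely the intrinsic image/text difference the survey organizes NLP attack methods around.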
Total citations
2019: 6
2020: 65
2021: 112
2022: 161
2023: 179
2024: 111
Scholar articles
WE Zhang, QZ Sheng, A Alhazmi, C Li - ACM Transactions on Intelligent Systems and …, 2020