Directions in abusive language training data, a systematic review: Garbage in, garbage out

B Vidgen, L Derczynski - PLOS ONE, 2020 - journals.plos.org
Data-driven and machine learning based approaches for detecting, categorising and
measuring abusive content such as hate speech and harassment have gained traction due …

Robust natural language processing: Recent advances, challenges, and future directions

M Omar, S Choi, DH Nyang, D Mohaisen - IEEE Access, 2022 - ieeexplore.ieee.org
Recent natural language processing (NLP) techniques have accomplished high
performance on benchmark data sets, primarily due to the significant improvement in the …

In ChatGPT we trust? Measuring and characterizing the reliability of ChatGPT

X Shen, Z Chen, M Backes, Y Zhang - arXiv preprint arXiv:2304.08979, 2023 - arxiv.org
The way users acquire information is undergoing a paradigm shift with the advent of
ChatGPT. Unlike conventional search engines, ChatGPT retrieves knowledge from the …

MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance

W Zhao, M Peyrard, F Liu, Y Gao, CM Meyer… - arXiv preprint arXiv …, 2019 - arxiv.org
A robust evaluation metric has a profound impact on the development of text generation
systems. A desirable metric compares system output against references based on their …
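
As a hedged sketch of the underlying idea rather than the authors' implementation: the `embed` placeholder below stands in for a contextualized encoder (the metric uses BERT/ELMo representations), and a relaxed nearest-neighbour matching stands in for the full earth-mover (optimal transport) formulation over IDF-weighted n-gram embeddings.

```python
import zlib
import numpy as np

def embed(tokens, dim=16):
    # Placeholder for a contextualized encoder; each token is mapped to a
    # fixed pseudo-random vector so the sketch runs without model downloads.
    vecs = []
    for tok in tokens:
        rng = np.random.default_rng(zlib.crc32(tok.encode("utf-8")))
        vecs.append(rng.standard_normal(dim))
    return np.stack(vecs)

def relaxed_mover_distance(hyp_tokens, ref_tokens):
    # Relaxed mover distance: match every token to its nearest counterpart on
    # the other side and average. The full metric instead solves an
    # optimal-transport problem between the two embedding sets.
    h, r = embed(hyp_tokens), embed(ref_tokens)
    h /= np.linalg.norm(h, axis=1, keepdims=True)
    r /= np.linalg.norm(r, axis=1, keepdims=True)
    cost = 1.0 - h @ r.T                          # pairwise cosine distances
    return 0.5 * (cost.min(axis=1).mean() + cost.min(axis=0).mean())

print(relaxed_mover_distance("the cat sat on the mat".split(),
                             "a cat was sitting on the mat".split()))
```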

Word-level textual adversarial attacking as combinatorial optimization

Y Zang, F Qi, C Yang, Z Liu, M Zhang, Q Liu… - arXiv preprint arXiv …, 2019 - arxiv.org
Adversarial attacks are carried out to reveal the vulnerability of deep neural networks.
Textual adversarial attacking is challenging because text is discrete and a small perturbation …
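
To make the search-problem framing concrete, here is a toy greedy substitution attack against a stand-in sentiment classifier; the classifier, the synonym sets, and the greedy strategy are all illustrative assumptions, whereas the paper builds sememe-based substitution sets and searches them with particle swarm optimization.

```python
import math

def toy_classifier(tokens):
    # Stand-in victim model returning P(positive); a real attack queries a
    # trained neural classifier instead.
    score = sum(t in {"good", "great", "enjoyable"} for t in tokens)
    score -= sum(t in {"bad", "awful", "boring"} for t in tokens)
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical substitution sets; the paper derives candidates from HowNet sememes.
SYNONYMS = {"good": ["great", "fine", "decent"],
            "boring": ["dull", "tedious"],
            "movie": ["film", "picture"]}

def greedy_word_attack(tokens, max_changes=3):
    # Greedy baseline: at each step apply the single substitution that lowers
    # P(positive) the most; combinatorial-optimization attacks search this
    # space more globally instead of committing to one word at a time.
    tokens = list(tokens)
    for _ in range(max_changes):
        base, best = toy_classifier(tokens), None
        for i, tok in enumerate(tokens):
            for sub in SYNONYMS.get(tok, []):
                cand = tokens[:i] + [sub] + tokens[i + 1:]
                drop = base - toy_classifier(cand)
                if drop > 0 and (best is None or drop > best[0]):
                    best = (drop, i, sub)
        if best is None:
            break
        tokens[best[1]] = best[2]
        if toy_classifier(tokens) < 0.5:          # prediction flipped
            break
    return tokens

print(greedy_word_attack("a good but boring movie".split()))
```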

Mind the style of text! Adversarial and backdoor attacks based on text style transfer

F Qi, Y Chen, X Zhang, M Li, Z Liu, M Sun - arXiv preprint arXiv …, 2021 - arxiv.org
Adversarial attacks and backdoor attacks are two common security threats that hang over
deep learning. Both of them harness task-irrelevant features of data in their implementation …

Towards robustness against natural language word substitutions

X Dong, AT Luu, R Ji, H Liu - arXiv preprint arXiv:2107.13541, 2021 - arxiv.org
Robustness against word substitutions has a well-defined and widely accepted form, i.e.,
using semantically similar words as substitutions, and thus it is considered a fundamental …
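
As a concrete illustration of what substitution robustness asks for, the sketch below perturbs inputs with semantically similar replacements; the substitution sets and the random-swap augmentation are assumptions made for this sketch, while the paper itself optimizes over the convex hull of substitution embeddings rather than sampling discrete swaps.

```python
import random

# Illustrative substitution sets; robustness work typically builds them from
# WordNet synonyms or counter-fitted embedding neighbours.
SUBS = {"movie": ["film"], "good": ["great", "fine"],
        "terrible": ["awful", "dreadful"]}

def substitution_augment(tokens, p=0.3, seed=0):
    # Randomly swap words for allowed substitutions; training on such copies
    # is the simplest baseline toward robustness against word substitutions.
    rng = random.Random(seed)
    return [rng.choice(SUBS[t]) if t in SUBS and rng.random() < p else t
            for t in tokens]

print(substitution_augment("a good movie with a terrible ending".split()))
```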

OpenAttack: An open-source textual adversarial attack toolkit

G Zeng, F Qi, Q Zhou, T Zhang, Z Ma, B Hou… - arXiv preprint arXiv …, 2020 - arxiv.org
Textual adversarial attacking has received wide and increasing attention in recent years.
Various attack models have been proposed, which are enormously distinct and …
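
A usage sketch adapted from the quick-start example in the toolkit's documentation; the resource names and class paths (e.g. BERT.SST, GeneticAttacker) are recalled from that example and may differ in the current release, so they should be checked against the repository before use.

```python
import OpenAttack as oa

# Load a pre-trained victim classifier and a small evaluation set shipped
# with the toolkit (names follow the toolkit's quick-start example).
victim = oa.DataManager.loadVictim("BERT.SST")
dataset = oa.DataManager.loadDataset("SST.sample")[:10]

# Pick one of the bundled attack models, e.g. the genetic-algorithm attacker.
attacker = oa.attackers.GeneticAttacker()

# Run the attack and print per-example results plus aggregate metrics.
attack_eval = oa.attack_evals.DefaultAttackEval(attacker, victim)
attack_eval.eval(dataset, visualize=True)
```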

On robustness of prompt-based semantic parsing with large pre-trained language model: An empirical study on Codex

TY Zhuo, Z Li, Y Huang, F Shiri, W Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Semantic parsing is a technique aimed at constructing a structured representation of the
meaning of a natural-language question. Recent advancements in few-shot language …
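
For a concrete picture of prompt-based semantic parsing and the kind of perturbation such robustness studies apply, the snippet below builds an illustrative few-shot text-to-SQL prompt together with a paraphrased probe; the exemplars, schema, and perturbation are invented for this sketch, and the call to the code model (e.g. Codex) is omitted.

```python
# Illustrative few-shot exemplars for text-to-SQL semantic parsing.
EXEMPLARS = [
    ("How many employees work in the sales department?",
     "SELECT COUNT(*) FROM employees WHERE department = 'sales';"),
    ("List the names of customers from Berlin.",
     "SELECT name FROM customers WHERE city = 'Berlin';"),
]

def build_prompt(question):
    # Concatenate the exemplars and append the new question for completion.
    shots = "\n\n".join(f"-- Question: {q}\nSQL: {s}" for q, s in EXEMPLARS)
    return f"{shots}\n\n-- Question: {question}\nSQL:"

original = "Which products cost more than 100 dollars?"
perturbed = "Which products are priced above 100 dollars?"   # paraphrase probe

for q in (original, perturbed):
    print(build_prompt(q), end="\n\n")
    # Each prompt would be sent to the language model and the two completions
    # compared; the model call itself is left out here.
```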

Towards scalable and reliable capsule networks for challenging NLP applications

W Zhao, H Peng, S Eger, E Cambria… - arXiv preprint arXiv …, 2019 - arxiv.org
Obstacles hindering the development of capsule networks for challenging NLP applications
include poor scalability to large output spaces and less reliable routing processes. In this …
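
For context on the routing process the abstract refers to, here is a minimal NumPy sketch of standard routing-by-agreement between capsule layers (Sabour et al., 2017); the paper proposes a more scalable and reliable variant that this sketch does not reproduce.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    # Capsule non-linearity: shrinks short vectors toward 0, keeps direction.
    norm2 = (v ** 2).sum(axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * v / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, iterations=3):
    # Routing-by-agreement; u_hat holds the predictions each input capsule
    # makes for each output capsule, shape (num_in, num_out, dim_out).
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                           # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coeffs
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sums
        v = squash(s)                                         # output capsules
        b += (u_hat * v[None]).sum(axis=-1)                   # agreement update
    return v

# Toy example: 6 input capsules routing to 3 output capsules of dimension 4.
rng = np.random.default_rng(0)
print(dynamic_routing(rng.standard_normal((6, 3, 4))).shape)  # (3, 4)
```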