Revisiting out-of-distribution robustness in NLP: Benchmarks, analysis, and LLMs evaluations

L Yuan, Y Chen, G Cui, H Gao, F Zou… - Advances in …, 2023 - proceedings.neurips.cc
This paper reexamines the research on out-of-distribution (OOD) robustness in the field of
NLP. We find that the distribution shift settings in previous studies commonly lack adequate …

Defining a new NLP playground

S Li, C Han, P Yu, C Edwards, M Li, X Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
The recent explosion of performance of large language models (LLMs) has changed the
field of Natural Language Processing (NLP) more abruptly and seismically than any other …

Evaluating the robustness of text-to-image diffusion models against real-world attacks

H Gao, H Zhang, Y Dong, Z Deng - arXiv preprint arXiv:2306.13103, 2023 - arxiv.org
Text-to-image (T2I) diffusion models (DMs) have shown promise in generating high-quality
images from textual descriptions. The real-world applications of these models require …

Kalt: Generating adversarial explainable Chinese legal texts

Y Zhang, S Li, L Ye, H Zhang, Z Chen, B Fang - Machine Learning, 2024 - Springer
Deep neural networks (DNNs) are vulnerable to adversarial examples (AEs), which are well-
designed input samples with imperceptible perturbations. Existing methods generate AEs to …

ZDDR: A Zero-Shot Defender for Adversarial Samples Detection and Restoration

M Chen, G He, J Wu - IEEE Access, 2024 - ieeexplore.ieee.org
Natural language processing (NLP) models find extensive applications but face
vulnerabilities against adversarial inputs. Traditional defenses lean heavily on supervised …