Revisiting out-of-distribution robustness in NLP: Benchmarks, analysis, and LLMs evaluations
This paper reexamines the research on out-of-distribution (OOD) robustness in the field of
NLP. We find that the distribution shift settings in previous studies commonly lack adequate …
Defining a new NLP playground
The recent explosion of performance of large language models (LLMs) has changed the
field of Natural Language Processing (NLP) more abruptly and seismically than any other …
Evaluating the robustness of text-to-image diffusion models against real-world attacks
Text-to-image (T2I) diffusion models (DMs) have shown promise in generating high-quality
images from textual descriptions. The real-world applications of these models require …
KALT: Generating adversarial explainable Chinese legal texts
Y Zhang, S Li, L Ye, H Zhang, Z Chen, B Fang - Machine Learning, 2024 - Springer
Deep neural networks (DNNs) are vulnerable to adversarial examples (AEs), which are well-
designed input samples with imperceptible perturbations. Existing methods generate AEs to …
ZDDR: A Zero-Shot Defender for Adversarial Samples Detection and Restoration
M Chen, G He, J Wu - IEEE Access, 2024 - ieeexplore.ieee.org
Natural language processing (NLP) models find extensive applications but face
vulnerabilities against adversarial inputs. Traditional defenses lean heavily on supervised …