A survey of controllable text generation using transformer-based pre-trained language models

H Zhang, H Song, S Li, M Zhou, D Song - ACM Computing Surveys, 2023 - dl.acm.org
Controllable Text Generation (CTG) is an emerging area in the field of natural language
generation (NLG). It is regarded as crucial for the development of advanced text generation …

A bibliometric review of large language models research from 2017 to 2023

L Fan, L Li, Z Ma, S Lee, H Yu, L Hemphill - ACM Transactions on …, 2024 - dl.acm.org
Large language models (LLMs), such as OpenAI's Generative Pre-trained Transformer
(GPT), are a class of language models that have demonstrated outstanding performance …

From pretraining data to language models to downstream tasks: Tracking the trails of political biases leading to unfair NLP models

S Feng, CY Park, Y Liu, Y Tsvetkov - arXiv preprint arXiv:2305.08283, 2023 - arxiv.org
Language models (LMs) are pretrained on diverse data sources, including news, discussion
forums, books, and online encyclopedias. A significant portion of this data includes opinions …

Sora: A review on background, technology, limitations, and opportunities of large vision models

Y Liu, K Zhang, Y Li, Z Yan, C Gao, R Chen… - arXiv preprint arXiv …, 2024 - arxiv.org
Sora is a text-to-video generative AI model, released by OpenAI in February 2024. The
model is trained to generate videos of realistic or imaginative scenes from text instructions …

A review on large language models: Architectures, applications, taxonomies, open issues and challenges

MAK Raiaan, MSH Mukta, K Fatema, NM Fahad… - IEEE …, 2024 - ieeexplore.ieee.org
Large Language Models (LLMs) recently demonstrated extraordinary capability in various
natural language processing (NLP) tasks including language translation, text generation …

ROBBIE: Robust bias evaluation of large generative language models

D Esiobu, X Tan, S Hosseini, M Ung, Y Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
As generative large language models (LLMs) grow more performant and prevalent, we must
develop comprehensive enough tools to measure and improve their fairness. Different …

The moral integrity corpus: A benchmark for ethical dialogue systems

C Ziems, JA Yu, YC Wang, A Halevy… - arXiv preprint arXiv …, 2022 - arxiv.org
Conversational agents have come increasingly closer to human competence in open-
domain dialogue settings; however, such models can reflect insensitive, hurtful, or entirely …

Fairness in deep learning: A survey on vision and language research

O Parraga, MD More, CM Oliveira, NS Gavenski… - ACM Computing …, 2023 - dl.acm.org
Despite being responsible for state-of-the-art results in several computer vision and natural
language processing tasks, neural networks have faced harsh criticism due to some of their …

Language generation models can cause harm: So what can we do about it? An actionable survey

S Kumar, V Balachandran, L Njoo… - arXiv preprint arXiv …, 2022 - arxiv.org
Recent advances in the capacity of large language models to generate human-like text have
resulted in their increased adoption in user-facing settings. In parallel, these improvements …

Efficient utilization of pre-trained models: A review of sentiment analysis via prompt learning

K Bu, Y Liu, X Ju - Knowledge-Based Systems, 2024 - Elsevier
Sentiment analysis is one of the traditional well-known tasks in Natural Language
Processing (NLP) research. In recent years, Pre-trained Models (PMs) have become one of …