Challenges and applications of large language models

J Kaddour, J Harris, M Mozes, H Bradley… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) went from non-existent to ubiquitous in the machine
learning discourse within a few years. Due to the fast pace of the field, it is difficult to identify …

Language model behavior: A comprehensive survey

TA Chang, BK Bergen - Computational Linguistics, 2024 - direct.mit.edu
Transformer language models have received widespread public attention, yet their
generated text is often surprising even to NLP researchers. In this survey, we discuss over …

More human than human: measuring ChatGPT political bias

F Motoki, V Pinho Neto, V Rodrigues - Public Choice, 2024 - Springer
We investigate the political bias of a large language model (LLM), ChatGPT, which has
become popular for retrieving factual information and generating content. Although ChatGPT …

The Self‐Perception and Political Biases of ChatGPT

J Rutinowski, S Franke, J Endendyk… - Human Behavior …, 2024 - Wiley Online Library
This contribution analyzes the self‐perception and political biases of OpenAI's Large
Language Model ChatGPT. Considering the first small‐scale reports and studies that have …

Second thoughts are best: Learning to re-align with human values from text edits

R Liu, C Jia, G Zhang, Z Zhuang… - Advances in Neural …, 2022 - proceedings.neurips.cc
We present Second Thoughts, a new learning paradigm that enables language
models (LMs) to re-align with human values. By modeling the chain-of-edits between value …

Training socially aligned language models on simulated social interactions

R Liu, R Yang, C Jia, G Zhang, D Zhou, AM Dai… - arXiv preprint arXiv …, 2023 - arxiv.org
Social alignment in AI systems aims to ensure that these models behave according to
established societal values. However, unlike humans, who derive consensus on value …

Measuring and mitigating language model biases in abusive language detection

R Song, F Giunchiglia, Y Li, L Shi, H Xu - Information Processing & …, 2023 - Elsevier
Warning: This paper contains abusive samples that may cause discomfort to readers.
Abusive language on social media reinforces prejudice against an individual or a specific …

Attention-enabled ensemble deep learning models and their validation for depression detection: A domain adoption paradigm

J Singh, N Singh, MM Fouda, L Saba, JS Suri - Diagnostics, 2023 - mdpi.com
Depression is increasingly prevalent, leading to higher suicide risk. Depression detection
and sentiment analysis of text inputs in cross-domain frameworks are challenging. Solo …

Aligning with whom? Large language models have gender and racial biases in subjective NLP tasks

H Sun, J Pei, M Choi, D Jurgens - arXiv preprint arXiv:2311.09730, 2023 - arxiv.org
Human perception of language depends on personal backgrounds like gender and
ethnicity. While existing studies have shown that large language models (LLMs) hold values …

Knowledge infused decoding

R Liu, G Zheng, S Gupta, R Gaonkar, C Gao… - arXiv preprint arXiv …, 2022 - arxiv.org
Pre-trained language models (LMs) have been shown to memorize a substantial amount of
knowledge from the pre-training corpora; however, they are still limited in recalling factually …