Challenges and applications of large language models
Large Language Models (LLMs) went from non-existent to ubiquitous in the machine
learning discourse within a few years. Due to the fast pace of the field, it is difficult to identify …
Language model behavior: A comprehensive survey
Transformer language models have received widespread public attention, yet their
generated text is often surprising even to NLP researchers. In this survey, we discuss over …
More human than human: measuring ChatGPT political bias
F Motoki, V Pinho Neto, V Rodrigues - Public Choice, 2024 - Springer
We investigate the political bias of a large language model (LLM), ChatGPT, which has
become popular for retrieving factual information and generating content. Although ChatGPT …
The Self‐Perception and Political Biases of ChatGPT
J Rutinowski, S Franke, J Endendyk… - Human Behavior …, 2024 - Wiley Online Library
This contribution analyzes the self‐perception and political biases of OpenAI's Large
Language Model ChatGPT. Considering the first small‐scale reports and studies that have …
Second thoughts are best: Learning to re-align with human values from text edits
We present Second Thoughts, a new learning paradigm that enables language
models (LMs) to re-align with human values. By modeling the chain-of-edits between value …
Training socially aligned language models on simulated social interactions
Social alignment in AI systems aims to ensure that these models behave according to
established societal values. However, unlike humans, who derive consensus on value …
Measuring and mitigating language model biases in abusive language detection
Warning: This paper contains abusive samples that may cause discomfort to readers.
Abusive language on social media reinforces prejudice against an individual or a specific …
Attention-enabled ensemble deep learning models and their validation for depression detection: A domain adoption paradigm
Depression is increasingly prevalent, leading to higher suicide risk. Depression detection
and sentiment analysis of text inputs in cross-domain frameworks are challenging. Solo …
Aligning with whom? Large language models have gender and racial biases in subjective NLP tasks
Human perception of language depends on personal backgrounds like gender and
ethnicity. While existing studies have shown that large language models (LLMs) hold values …
Knowledge infused decoding
Pre-trained language models (LMs) have been shown to memorize a substantial amount of
knowledge from the pre-training corpora; however, they are still limited in recalling factually …