Bridging the gap: A survey on integrating (human) feedback for natural language generation
Natural language generation has witnessed significant advancements due to the training of
large language models on vast internet-scale datasets. Despite these advancements, there …
Evaluating human-language model interaction
Many real-world applications of language models (LMs), such as writing assistance and
code autocomplete, involve human-LM interaction. However, most benchmarks are non …
Take a step back: Evoking reasoning via abstraction in large language models
We present Step-Back Prompting, a simple prompting technique that enables LLMs to perform
abstractions to derive high-level concepts and first principles from instances containing …
Uncertainty in natural language generation: From theory to applications
Recent advances of powerful Language Models have allowed Natural Language
Generation (NLG) to emerge as an important technology that can not only perform traditional …
Fairness in language models beyond English: Gaps and challenges
With language models becoming increasingly ubiquitous, it has become essential to
address their inequitable treatment of diverse demographic groups and factors. Most …
Bioreader: a retrieval-enhanced text-to-text transformer for biomedical literature
The latest batch of research has equipped language models with the ability to attend over
relevant and factual information from non-parametric external sources, drawing a …
A comprehensive survey on instruction following
Task semantics can be expressed by a set of input-output examples or a piece of textual
instruction. Conventional machine learning approaches for natural language processing …
Never-ending learning of user interfaces
Machine learning models have been trained to predict semantic information about user
interfaces (UIs) to make apps more accessible, easier to test, and to automate. Currently …
Analyzing dataset annotation quality management in the wild
Data quality is crucial for training accurate, unbiased, and trustworthy machine learning
models as well as for their correct evaluation. Recent work, however, has shown that even …
Targen: Targeted data generation with large language models
The rapid advancement of large language models (LLMs) has sparked interest in data
synthesis techniques, aiming to generate diverse and high-quality synthetic datasets …