Bridging the gap: A survey on integrating (human) feedback for natural language generation
Natural language generation has witnessed significant advancements due to the training of
large language models on vast internet-scale datasets. Despite these advancements, there …
Dress: Instructing large vision-language models to align and interact with humans via natural language feedback
We present DRESS, a large vision-language model (LVLM) that innovatively exploits natural
language feedback (NLF) from large language models to enhance its alignment and …
The unreasonable effectiveness of few-shot learning for machine translation
We demonstrate the potential of few-shot translation systems, trained with unpaired
language data, for both high and low-resource language pairs. We show that with only 5 …
Automatically Correcting Large Language Models: Surveying the Landscape of Diverse Automated Correction Strategies
While large language models (LLMs) have shown remarkable effectiveness in various NLP
tasks, they are still prone to issues such as hallucination, unfaithful reasoning, and toxicity. A …
LENS: A learnable evaluation metric for text simplification
M Maddela, Y Dou, D Heineman, W Xu - arXiv preprint arXiv:2212.09739, 2022 - arxiv.org
Training learnable metrics using modern language models has recently emerged as a
promising method for the automatic evaluation of machine translation. However, existing …
Mitigating Hallucinations and Off-target Machine Translation with Source-Contrastive and Language-Contrastive Decoding
Hallucinations and off-target translation remain unsolved problems in machine translation,
especially for low-resource languages and massively multilingual models. In this paper, we …
Epsilon sampling rocks: Investigating sampling strategies for minimum bayes risk decoding for machine translation
Recent advances in machine translation (MT) have shown that Minimum Bayes Risk (MBR)
decoding can be a powerful alternative to beam search decoding, especially when …
Follow the wisdom of the crowd: Effective text generation via minimum Bayes risk decoding
In open-ended natural-language generation, existing text decoding methods typically
struggle to produce text which is both diverse and high-quality. Greedy and beam search are …
On the limitations of reference-free evaluations of generated text
There is significant interest in developing evaluation metrics which accurately estimate the
quality of generated text without the aid of a human-written reference text, which can be time …
It's MBR All the Way Down: Modern Generation Techniques Through the Lens of Minimum Bayes Risk
Minimum Bayes Risk (MBR) decoding is a method for choosing the outputs of a machine
learning system based not on the output with the highest probability, but the output with the …
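The selection rule these MBR papers describe can be sketched in a few lines: sample several candidate outputs, score each candidate's expected utility against the other candidates used as pseudo-references, and return the candidate with the highest expected utility. A minimal illustration, using a toy unigram-F1 utility as a stand-in for metrics like BLEU or COMET (the function names and sample strings here are illustrative, not from any of the papers above):

```python
from collections import Counter

def unigram_f1(hyp, ref):
    # Toy utility: unigram F1 overlap between hypothesis and reference.
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    p, rec = overlap / sum(h.values()), overlap / sum(r.values())
    return 2 * p * rec / (p + rec)

def mbr_decode(candidates, utility):
    # For each candidate, average its utility against all other
    # candidates treated as pseudo-references; return the candidate
    # with the highest expected utility.
    best, best_score = None, float("-inf")
    for hyp in candidates:
        score = sum(utility(hyp, ref) for ref in candidates if ref is not hyp)
        score /= max(len(candidates) - 1, 1)
        if score > best_score:
            best, best_score = hyp, score
    return best

samples = ["the cat sat", "the cat sat down", "a dog ran"]
print(mbr_decode(samples, unigram_f1))  # → "the cat sat"
```

In practice the candidates come from sampling the model (e.g. the epsilon sampling studied above) rather than from a fixed list, and the utility is a learned or string-based MT metric; the selection loop itself is unchanged.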