Bridging the gap: A survey on integrating (human) feedback for natural language generation

P Fernandes, A Madaan, E Liu, A Farinhas… - Transactions of the …, 2023 - direct.mit.edu
Natural language generation has witnessed significant advancements due to the training of
large language models on vast internet-scale datasets. Despite these advancements, there …

Findings of the 2017 Conference on Machine Translation (WMT17)

O Bojar, R Chatterjee, C Federmann, Y Graham… - 2017 - doras.dcu.ie
This paper presents the results of the WMT17 shared tasks, which included three machine
translation (MT) tasks (news, biomedical, and multimodal), two evaluation tasks (metrics and …

Context-aware monolingual repair for neural machine translation

E Voita, R Sennrich, I Titov - arXiv preprint arXiv:1909.01383, 2019 - arxiv.org
Modern sentence-level NMT systems often produce plausible translations of isolated
sentences. However, when put in context, these translations may end up being inconsistent …

ParroT: Translating during chat using large language models

W Jiao, J Huang, W Wang, X Wang… - arXiv preprint arXiv …, 2023 - researchgate.net
Large language models (LLMs) like ChatGPT and GPT-4 have exhibited remarkable
abilities on a wide range of natural language processing (NLP) tasks, including various …

ParroT: Translating during chat using large language models tuned with human translation and feedback

W Jiao, J Huang, W Wang, Z He, T Liang… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) like ChatGPT have exhibited remarkable abilities on a wide
range of natural language processing (NLP) tasks, including various machine translation …

Improving readability for automatic speech recognition transcription

J Liao, S Eskimez, L Lu, Y Shi, M Gong… - ACM Transactions on …, 2023 - dl.acm.org
Modern Automatic Speech Recognition (ASR) systems can achieve high performance in
terms of recognition accuracy. However, a perfectly accurate transcript still can be …

Should we find another model?: Improving neural machine translation performance with ONE-piece tokenization method without model modification

C Park, S Eo, H Moon, HS Lim - … of the 2021 conference of the …, 2021 - aclanthology.org
Most of the recent Natural Language Processing (NLP) studies are based on the Pretrain-
Finetuning Approach (PFA), but in small and medium-sized enterprises or companies with …

The task of post-editing machine translation for the low-resource language

D Rakhimova, A Karibayeva, A Turarbek - Applied Sciences, 2024 - mdpi.com
In recent years, machine translation has made significant advancements; however, its
effectiveness can vary widely depending on the language pair. Languages with limited …

Neural Error Corrective Language Models for Automatic Speech Recognition.

T Tanaka, R Masumura, H Masataki, Y Aono - INTERSPEECH, 2018 - isca-archive.org
We present novel neural network based language models that can correct automatic speech
recognition (ASR) errors by using speech recognizer output as a context. These models …

Multi-source Neural Automatic Post-Editing: FBK's participation in the WMT 2017 APE shared task

R Chatterjee, MA Farajian, M Negri… - Proceedings of the …, 2017 - aclanthology.org
Previous phrase-based approaches to Automatic Post-editing (APE) have shown that the
dependency of MT errors on the source sentence can be exploited by jointly learning from …