Bridging the gap: A survey on integrating (human) feedback for natural language generation
Natural language generation has witnessed significant advancements due to the training of
large language models on vast internet-scale datasets. Despite these advancements, there …
Findings of the 2017 conference on machine translation (wmt17)
This paper presents the results of the WMT17 shared tasks, which included three machine
translation (MT) tasks (news, biomedical, and multimodal), two evaluation tasks (metrics and …
Context-aware monolingual repair for neural machine translation
Modern sentence-level NMT systems often produce plausible translations of isolated
sentences. However, when put in context, these translations may end up being inconsistent …
ParroT: Translating during chat using large language models
Large language models (LLMs) like ChatGPT and GPT-4 have exhibited remarkable
abilities on a wide range of natural language processing (NLP) tasks, including various …
ParroT: Translating during chat using large language models tuned with human translation and feedback
Large language models (LLMs) like ChatGPT have exhibited remarkable abilities on a wide
range of natural language processing (NLP) tasks, including various machine translation …
Improving readability for automatic speech recognition transcription
Modern Automatic Speech Recognition (ASR) systems can achieve high performance in
terms of recognition accuracy. However, a perfectly accurate transcript still can be …
Should we find another model?: Improving neural machine translation performance with ONE-piece tokenization method without model modification
Most of the recent Natural Language Processing (NLP) studies are based on the Pretrain-
Finetuning Approach (PFA), but in small and medium-sized enterprises or companies with …
The task of post-editing machine translation for the low-resource language
In recent years, machine translation has made significant advancements; however, its
effectiveness can vary widely depending on the language pair. Languages with limited …
Neural Error Corrective Language Models for Automatic Speech Recognition
T Tanaka, R Masumura, H Masataki, Y Aono - INTERSPEECH, 2018 - isca-archive.org
We present novel neural network based language models that can correct automatic speech
recognition (ASR) errors by using speech recognizer output as a context. These models …
Multi-source Neural Automatic Post-Editing: FBK's participation in the WMT 2017 APE shared task
Previous phrase-based approaches to Automatic Post-editing (APE) have shown that the
dependency of MT errors from the source sentence can be exploited by jointly learning from …