NeuroLogic A*esque decoding: Constrained text generation with lookahead heuristics

X Lu, S Welleck, P West, L Jiang, J Kasai… - arXiv preprint arXiv …, 2021 - arxiv.org
The dominant paradigm for neural text generation is left-to-right decoding from
autoregressive language models. Constrained or controllable generation under complex …
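A minimal sketch of the lookahead idea (not the paper's full NeuroLogic A*esque algorithm, which combines it with constraint-aware beam search): score each candidate next token by its model log-probability plus an estimate of future constraint satisfaction obtained from a short greedy rollout. The `lm.next_token_probs` interface and the constraint weight are hypothetical.

```python
import math

def lookahead_score(prefix, candidate, lm, constraints, horizon=3, weight=1.0):
    """Model log-probability of `candidate` plus a lookahead heuristic:
    greedily roll out `horizon` extra tokens and reward constraint words
    that appear in the continuation (an estimate of future satisfaction)."""
    probs = lm.next_token_probs(prefix)          # hypothetical: dict token -> prob
    score = math.log(probs[candidate])
    rollout = prefix + [candidate]
    for _ in range(horizon):
        step = lm.next_token_probs(rollout)
        rollout.append(max(step, key=step.get))  # greedy lookahead continuation
    satisfied = sum(1 for c in constraints if c in rollout)
    return score + weight * satisfied

def decode_step(prefix, lm, constraints, top_k=5):
    """Pick the next token among the model's top-k candidates by lookahead score."""
    probs = lm.next_token_probs(prefix)
    candidates = sorted(probs, key=probs.get, reverse=True)[:top_k]
    return max(candidates, key=lambda c: lookahead_score(prefix, c, lm, constraints))
```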

Efficient wait-k models for simultaneous machine translation

M Elbayad, L Besacier, J Verbeek - arXiv preprint arXiv:2005.08595, 2020 - arxiv.org
Simultaneous machine translation consists in starting output generation before the entire
input sequence is available. Wait-k decoders offer a simple but efficient approach for this …
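The wait-k schedule itself is easy to state: read k source tokens, then alternate one target write per additional source read. A minimal sketch of that schedule follows; `translate_step` is a hypothetical stand-in for the model's next-token call, not the paper's API.

```python
def wait_k_decode(source_stream, k, translate_step, max_len=100):
    """Simultaneous decoding with a wait-k schedule:
    read k source tokens first, then alternate read/write."""
    source, target = [], []
    stream = iter(source_stream)
    source_done = False

    while len(target) < max_len:
        # Read until the source prefix is k tokens ahead of the target
        while not source_done and len(source) < len(target) + k:
            try:
                source.append(next(stream))
            except StopIteration:
                source_done = True
        # Write one target token conditioned on the current source prefix
        token = translate_step(source, target)
        if token == "</s>":
            break
        target.append(token)
    return target
```

Latency and quality trade off through k: a larger k delays output but approaches full-sentence translation quality.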

Hidden Markov transformer for simultaneous machine translation

S Zhang, Y Feng - arXiv preprint arXiv:2303.00257, 2023 - arxiv.org
Simultaneous machine translation (SiMT) outputs the target sequence while receiving the
source sequence, and hence learning when to start translating each target token is the core …

Learning when to translate for streaming speech

Q Dong, Y Zhu, M Wang, L Li - arXiv preprint arXiv:2109.07368, 2021 - arxiv.org
How can we find the proper moments to generate partial sentence translations given a streaming speech input? Existing approaches that wait and translate for a fixed duration often break …

TAPIR: Learning adaptive revision for incremental natural language understanding with a two-pass model

P Kahardipraja, B Madureira, D Schlangen - arXiv preprint arXiv …, 2023 - arxiv.org
Language is by its very nature incremental in how it is produced and processed. This
property can be exploited by NLP systems to produce fast responses, which has been …

Incremental text-to-speech synthesis with prefix-to-prefix framework

M Ma, B Zheng, K Liu, R Zheng, H Liu, K Peng… - arXiv preprint arXiv …, 2019 - arxiv.org
Text-to-speech synthesis (TTS) has witnessed rapid progress in recent years, with neural methods becoming capable of producing audio with high naturalness. However, these efforts …
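The prefix-to-prefix framework this work builds on is wait-k-like: audio for word i is synthesized once a small lookahead of future words is available, rather than after the full sentence. A rough sketch under that assumption; `synthesize_step(words, i)` is a hypothetical function returning the audio for word i given the current text prefix.

```python
def incremental_tts(word_stream, synthesize_step, k=2):
    """Prefix-to-prefix incremental TTS sketch: synthesize audio for word i
    once words up to i + k - 1 have arrived (a wait-k-style lookahead)."""
    words, audio_chunks = [], []
    emitted = 0  # number of words already synthesized
    for w in word_stream:
        words.append(w)
        # Emit audio for every word whose k-word lookahead is now available
        while emitted + k <= len(words):
            audio_chunks.append(synthesize_step(words, emitted))
            emitted += 1
    # Flush the remaining words once the sentence is complete
    while emitted < len(words):
        audio_chunks.append(synthesize_step(words, emitted))
        emitted += 1
    return audio_chunks
```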

SimQA: Detecting simultaneous MT errors through word-by-word question answering

HJ Han, M Carpuat, J Boyd-Graber - Proceedings of the 2022 …, 2022 - aclanthology.org
Detractors of neural machine translation admit that while its translations are fluent, it
sometimes gets key facts wrong. This is particularly important in simultaneous interpretation …

The road to quality is paved with good revisions: A detailed evaluation methodology for revision policies in incremental sequence labelling

B Madureira, P Kahardipraja, D Schlangen - arXiv preprint arXiv …, 2023 - arxiv.org
Incremental dialogue model components produce a sequence of output prefixes based on incoming input. Mistakes can occur due to local ambiguities or wrong hypotheses, making …

Monotonic simultaneous translation with chunk-wise reordering and refinement

H Han, S Ahn, Y Choi, I Chung, S Kim… - arXiv preprint arXiv …, 2021 - arxiv.org
Recent simultaneous machine translation models are often trained on conventional full-sentence translation corpora, leading to either excessive latency or the need to anticipate …

Fluent and low-latency simultaneous speech-to-speech translation with self-adaptive training

R Zheng, M Ma, B Zheng, K Liu, J Yuan… - arXiv preprint arXiv …, 2020 - arxiv.org
Simultaneous speech-to-speech translation is widely useful but extremely challenging, since
it needs to generate target-language speech concurrently with the source-language speech …