NeuroLogic A*esque decoding: Constrained text generation with lookahead heuristics
The dominant paradigm for neural text generation is left-to-right decoding from
autoregressive language models. Constrained or controllable generation under complex …
Efficient wait-k models for simultaneous machine translation
Simultaneous machine translation consists in starting output generation before the entire
input sequence is available. Wait-k decoders offer a simple but efficient approach for this …
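The wait-k policy sketched in this abstract admits a very small illustration: read k source tokens, then alternate writing one target token and reading one source token until the source is exhausted. The schedule below is a minimal sketch under that description, not the paper's implementation; the function name and READ/WRITE encoding are mine.

```python
def wait_k_actions(src_len, tgt_len, k):
    """Return the READ/WRITE action sequence of a wait-k policy.

    Before emitting target token t, the decoder must have read
    g(t) = min(k + t, src_len) source tokens; once the source is
    exhausted, the remaining target tokens are written back to back.
    """
    actions, reads = [], 0
    for t in range(tgt_len):
        needed = min(k + t, src_len)  # source tokens required before writing token t
        while reads < needed:
            actions.append("READ")
            reads += 1
        actions.append("WRITE")
    return actions
```

For example, with `k=2`, a 3-token source, and a 3-token target, the schedule is READ, READ, WRITE, READ, WRITE, WRITE: translation starts two tokens behind the source and stays there until the source ends.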
Hidden Markov Transformer for simultaneous machine translation
Simultaneous machine translation (SiMT) outputs the target sequence while receiving the
source sequence, and hence learning when to start translating each target token is the core …
Learning when to translate for streaming speech
How can we find proper moments to generate partial sentence translations given a streaming
speech input? Existing approaches that wait and translate for a fixed duration often break …
TAPIR: Learning adaptive revision for incremental natural language understanding with a two-pass model
Language is by its very nature incremental in how it is produced and processed. This
property can be exploited by NLP systems to produce fast responses, which has been …
Incremental text-to-speech synthesis with prefix-to-prefix framework
Text-to-speech synthesis (TTS) has witnessed rapid progress in recent years, where neural
methods became capable of producing audio with high naturalness. However, these efforts …
SimQA: Detecting simultaneous MT errors through word-by-word question answering
Detractors of neural machine translation admit that while its translations are fluent, it
sometimes gets key facts wrong. This is particularly important in simultaneous interpretation …
The road to quality is paved with good revisions: A detailed evaluation methodology for revision policies in incremental sequence labelling
Incremental dialogue model components produce a sequence of output prefixes based on
incoming input. Mistakes can occur due to local ambiguities or to wrong hypotheses, making …
Monotonic simultaneous translation with chunk-wise reordering and refinement
Recent work in simultaneous machine translation is often trained with conventional full
sentence translation corpora, leading to either excessive latency or the necessity to anticipate …
Fluent and low-latency simultaneous speech-to-speech translation with self-adaptive training
Simultaneous speech-to-speech translation is widely useful but extremely challenging, since
it needs to generate target-language speech concurrently with the source-language speech …