More than words: In-the-wild visually-driven prosody for text-to-speech

M Hassid, MT Ramanovich… - Proceedings of the …, 2022 - openaccess.thecvf.com
In this paper we present VDTTS, a Visually-Driven Text-to-Speech model. Motivated by dubbing, VDTTS takes advantage of video frames as an additional input alongside text and generates speech that matches the video signal. We demonstrate how this allows VDTTS, unlike plain TTS models, to generate speech that not only has prosodic variations such as natural pauses and pitch, but is also synchronized to the input video. Experimentally, we show that our model produces well-synchronized outputs, approaching the video-speech synchronization …
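
The abstract describes a model that consumes text plus per-frame video features and emits speech whose timing follows the video. As a rough illustration only, the sketch below shows what such an interface might look like; all names, layer choices, and dimensions (e.g. `VisuallyDrivenTTS`, the attention-based fusion, 80 mel bins) are assumptions for exposition and are not taken from the paper.

```python
# Hypothetical sketch: text token IDs plus per-frame visual features go in,
# a mel spectrogram aligned to the video comes out. Illustrative only.
import torch
import torch.nn as nn


class VisuallyDrivenTTS(nn.Module):
    def __init__(self, vocab_size=80, video_feat_dim=512, hidden=256, n_mels=80):
        super().__init__()
        self.text_encoder = nn.Embedding(vocab_size, hidden)    # character/phoneme embedding
        self.video_encoder = nn.Linear(video_feat_dim, hidden)  # per-frame visual features
        self.fuse = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_mel = nn.Linear(hidden, n_mels)                 # mel-spectrogram frames

    def forward(self, text_ids, video_feats):
        # text_ids:    (batch, text_len)        integer token IDs
        # video_feats: (batch, video_len, feat) precomputed face-crop features
        text_h = self.text_encoder(text_ids)
        video_h = self.video_encoder(video_feats)
        # Attending from video frames to text makes the output length follow
        # the video, one simple way to obtain video-synchronized timing.
        fused, _ = self.fuse(query=video_h, key=text_h, value=text_h)
        out, _ = self.decoder(fused)
        return self.to_mel(out)  # (batch, video_len, n_mels)


if __name__ == "__main__":
    model = VisuallyDrivenTTS()
    mel = model(torch.randint(0, 80, (2, 30)), torch.randn(2, 75, 512))
    print(mel.shape)  # torch.Size([2, 75, 80])
```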
