Large-scale multilingual audio visual dubbing

Y. Yang, B. Shillingford, Y. Assael, M. Wang, W. Liu, et al. — arXiv preprint arXiv:2011.03530, 2020
We describe a system for large-scale audiovisual translation and dubbing, which translates videos from one language to another. The source language's speech content is transcribed to text, translated, and automatically synthesized into target language speech using the original speaker's voice. The visual content is translated by synthesizing lip movements for the speaker to match the translated audio, creating a seamless audiovisual experience in the target language. The audio and visual translation subsystems each contain a large-scale generic synthesis model trained on thousands of hours of data in the corresponding domain. These generic models are fine-tuned to a specific speaker before translation, either using an auxiliary corpus of data from the target speaker, or using the video to be translated itself as the input to the fine-tuning process. This report gives an architectural overview of the full system, as well as an in-depth discussion of the video dubbing component. The role of the audio and text components in relation to the full system is outlined, but their design is not discussed in detail. Translated and dubbed demo videos generated using our system can be viewed at https://www.youtube.com/playlist?list=PLSi232j2ZA6_1Exhof5vndzyfbxAhhEs5
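The abstract describes a pipeline of four stages: transcribe the source speech, translate the text, synthesize target-language speech in the original speaker's voice, and re-synthesize lip movements to match the new audio. A minimal sketch of this data flow, with all function names and data types as illustrative assumptions (the paper does not expose an API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Video:
    audio: str   # placeholder for the source-language speech track
    frames: str  # placeholder for the visual content

def dub(video: Video,
        transcribe: Callable[[str], str],
        translate: Callable[[str], str],
        synthesize_speech: Callable[[str], str],
        synthesize_lips: Callable[[str, str], str]) -> Video:
    """Hypothetical dubbing pipeline: speech -> text -> translated text ->
    target-language speech, then lip movements matched to the new audio."""
    text = transcribe(video.audio)
    translated = translate(text)
    new_audio = synthesize_speech(translated)            # original speaker's voice
    new_frames = synthesize_lips(video.frames, new_audio)
    return Video(audio=new_audio, frames=new_frames)

# Toy stand-ins just to show the composition of the four stages:
result = dub(
    Video(audio="hola", frames="frames"),
    transcribe=lambda a: a,
    translate=lambda t: {"hola": "hello"}[t],
    synthesize_speech=lambda t: f"speech({t})",
    synthesize_lips=lambda f, a: f"lipsync({f},{a})",
)
print(result.audio, result.frames)
```

In the actual system each synthesis stage is a large generic model fine-tuned to the target speaker before translation; here the lambdas merely stand in for those models to make the staging explicit.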