An overview of deep-learning-based audio-visual speech enhancement and separation
Speech enhancement and speech separation are two related tasks, whose purpose is to
extract one target speech signal or several target speech signals, respectively, from a mixture of sounds …
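For orientation, a minimal sketch of how such audio-visual enhancement systems are commonly structured: lip features and the noisy spectrogram are fused, and a time-frequency mask is predicted. The layer sizes, fusion strategy, and mask-based objective below are illustrative assumptions, not the surveyed paper's architecture.

import torch
import torch.nn as nn

class AVMaskEstimator(nn.Module):
    """Illustrative audio-visual mask estimator: fuse noisy-spectrogram
    features with lip-region features and predict a time-frequency mask."""
    def __init__(self, n_freq=257, n_vis=512, hidden=256):
        super().__init__()
        self.audio_fc = nn.Linear(n_freq, hidden)
        self.video_fc = nn.Linear(n_vis, hidden)
        self.rnn = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.mask_fc = nn.Linear(2 * hidden, n_freq)

    def forward(self, noisy_mag, lip_feat):
        # noisy_mag: (B, T, n_freq) magnitude spectrogram of the mixture
        # lip_feat:  (B, T, n_vis) visual features, already upsampled to T frames
        a = torch.relu(self.audio_fc(noisy_mag))
        v = torch.relu(self.video_fc(lip_feat))
        h, _ = self.rnn(torch.cat([a, v], dim=-1))
        mask = torch.sigmoid(self.mask_fc(h))   # bounded mask in [0, 1]
        return mask * noisy_mag                 # enhanced magnitude

model = AVMaskEstimator()
enhanced = model(torch.rand(2, 100, 257), torch.rand(2, 100, 512))
print(enhanced.shape)  # torch.Size([2, 100, 257])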
Lip to speech synthesis with visual context attentional gan
In this paper, we propose a novel lip-to-speech generative adversarial network, Visual
Context Attentional GAN (VCA-GAN), which can jointly model local and global lip …
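To make the adversarial setup concrete, here is a generic sketch of a GAN objective for lip-to-speech, pairing an L1 reconstruction term with an adversarial term. The generator and discriminator are deliberately simple placeholders and are not the VCA-GAN architecture.

import torch
import torch.nn as nn

class Generator(nn.Module):
    # Placeholder generator: maps lip features to a mel-spectrogram.
    def __init__(self, vis_dim=512, n_mels=80):
        super().__init__()
        self.net = nn.GRU(vis_dim, n_mels, batch_first=True)
    def forward(self, lip_feat):                 # (B, T, vis_dim)
        mel, _ = self.net(lip_feat)
        return mel                               # (B, T, n_mels)

class Discriminator(nn.Module):
    # Placeholder discriminator: one realness score per sequence.
    def __init__(self, n_mels=80):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_mels, 128), nn.ReLU(), nn.Linear(128, 1))
    def forward(self, mel):
        return self.net(mel).mean(dim=1)         # (B, 1)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

lip, real_mel = torch.rand(4, 50, 512), torch.rand(4, 50, 80)

# Discriminator step: real mels vs. generated mels.
fake_mel = G(lip).detach()
loss_d = bce(D(real_mel), torch.ones(4, 1)) + bce(D(fake_mel), torch.zeros(4, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: adversarial loss plus an L1 reconstruction term.
fake_mel = G(lip)
loss_g = bce(D(fake_mel), torch.ones(4, 1)) + nn.functional.l1_loss(fake_mel, real_mel)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()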
End-to-end video-to-speech synthesis using generative adversarial networks
Video-to-speech is the process of reconstructing the audio speech from a video of a spoken
utterance. Previous approaches to this task have relied on a two-step process where an …
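The two-step pipeline mentioned here typically predicts an intermediate acoustic representation and then hands it to a separate vocoder, whereas end-to-end GAN approaches output the waveform directly. A minimal sketch of the two-step baseline is shown below; the model output is stubbed with random values and plain Griffin-Lim stands in for the vocoder, which is an assumption for illustration only.

import numpy as np
import librosa

sr, n_mels, n_frames = 16000, 80, 200
# Stand-in for a video-to-spectrogram model's output.
predicted_mel = np.random.rand(n_mels, n_frames).astype(np.float32)

# Second stage: invert the mel-spectrogram to a waveform with Griffin-Lim.
waveform = librosa.feature.inverse.mel_to_audio(
    predicted_mel, sr=sr, n_fft=1024, hop_length=256, n_iter=32
)
print(waveform.shape)  # roughly n_frames * hop_length samples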
Analyzing lower half facial gestures for lip reading applications: Survey on vision techniques
SJ Preethi - Computer Vision and Image Understanding, 2023
Lip reading has gained popularity due to the proliferation of emerging real-world
applications. This article provides a comprehensive review of benchmark datasets available …
SVTS: scalable video-to-speech synthesis
Video-to-speech synthesis (also known as lip-to-speech) refers to the translation of silent lip
movements into the corresponding audio. This task has received an increasing amount of …
Nautilus: a versatile voice cloning system
HT Luong, J Yamagishi - IEEE/ACM Transactions on Audio …, 2020
We introduce a novel speech synthesis system, called NAUTILUS, that can generate speech
with a target voice either from a text input or a reference utterance of an arbitrary source …
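The core idea of conditioning synthesis on a target voice can be sketched as a speaker encoder whose embedding conditions a text-driven decoder. The modules below are placeholders chosen for illustration and are not the NAUTILUS architecture.

import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    # Turns a reference utterance (mel frames) into a fixed-size voice embedding.
    def __init__(self, n_mels=80, dim=256):
        super().__init__()
        self.rnn = nn.GRU(n_mels, dim, batch_first=True)
    def forward(self, ref_mel):                  # (B, T_ref, n_mels)
        _, h = self.rnn(ref_mel)
        return h[-1]                             # (B, dim)

class ConditionedDecoder(nn.Module):
    # Generates mel frames from phoneme tokens, conditioned on the voice embedding.
    def __init__(self, n_phonemes=70, dim=256, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, dim)
        self.rnn = nn.GRU(2 * dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_mels)
    def forward(self, phonemes, spk):            # (B, T_txt), (B, dim)
        x = self.embed(phonemes)
        spk = spk.unsqueeze(1).expand(-1, x.size(1), -1)
        h, _ = self.rnn(torch.cat([x, spk], dim=-1))
        return self.out(h)                       # predicted mel frames

spk = SpeakerEncoder()(torch.rand(2, 120, 80))
mel = ConditionedDecoder()(torch.randint(0, 70, (2, 40)), spk)
print(mel.shape)  # torch.Size([2, 40, 80])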
SpeeChin: A smart necklace for silent speech recognition
This paper presents SpeeChin, a smart necklace that can recognize 54 English and 44
Chinese silent speech commands. A customized infrared (IR) imaging system is mounted on …
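Command recognition from such chin-view IR frames amounts to classifying a short image sequence. The sketch below, a small CNN followed by a GRU, is an assumed illustrative pipeline and not SpeeChin's actual recognition model; only the class count (54 English + 44 Chinese commands) comes from the abstract.

import torch
import torch.nn as nn

class SilentCommandClassifier(nn.Module):
    """Illustrative classifier over a sequence of chin-view IR frames."""
    def __init__(self, n_classes=54 + 44):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.rnn = nn.GRU(32, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, frames):                   # (B, T, 1, H, W)
        b, t = frames.shape[:2]
        f = self.cnn(frames.flatten(0, 1)).view(b, t, 32)
        _, h = self.rnn(f)
        return self.head(h[-1])                  # (B, n_classes)

logits = SilentCommandClassifier()(torch.rand(2, 30, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 98])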
A robust voice spoofing detection system using novel CLS-LBP features and LSTM
Automatic Speaker Verification (ASV) systems are vulnerable to a variety of voice
spoofing attacks, e.g., replays and speech synthesis. The imposters/fraudsters often use …
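The detection stage described here reduces to a binary sequence classifier over frame-level features. A minimal sketch is given below; the CLS-LBP feature extraction of the cited paper is not reproduced, so any frame-level acoustic features are assumed as input.

import torch
import torch.nn as nn

class SpoofDetector(nn.Module):
    """Illustrative spoofing detector: an LSTM over per-frame features
    with a binary (bona fide vs. spoofed) output."""
    def __init__(self, feat_dim=60, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)

    def forward(self, feats):                    # (B, T, feat_dim)
        h, _ = self.lstm(feats)
        return self.head(h.mean(dim=1))          # (B, 2) class logits

logits = SpoofDetector()(torch.rand(4, 300, 60))
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))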
Lip-to-speech synthesis in the wild with multi-task learning
Recent studies have shown impressive performance in lip-to-speech synthesis, which aims to
reconstruct speech from visual information alone. However, they have suffered from …
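Multi-task learning in this setting usually means training auxiliary heads on a shared visual encoder alongside the spectrogram reconstruction head. The sketch below combines an L1 reconstruction loss with a CTC text-recognition loss; the choice of auxiliary task and the loss weighting are assumptions, not the cited paper's exact setup.

import torch
import torch.nn as nn

vis_feat = torch.rand(4, 75, 512)                 # shared visual encoder output (B, T, D)
mel_head = nn.Linear(512, 80)                     # spectrogram reconstruction head
txt_head = nn.Linear(512, 30)                     # auxiliary text head: 29 characters + blank

target_mel = torch.rand(4, 75, 80)
targets = torch.randint(1, 30, (4, 20))           # character indices (blank = 0)
input_lens = torch.full((4,), 75, dtype=torch.long)
target_lens = torch.full((4,), 20, dtype=torch.long)

recon_loss = nn.functional.l1_loss(mel_head(vis_feat), target_mel)
log_probs = txt_head(vis_feat).log_softmax(-1).transpose(0, 1)   # (T, B, C) for CTC
ctc_loss = nn.functional.ctc_loss(log_probs, targets, input_lens, target_lens, blank=0)

total_loss = recon_loss + 0.5 * ctc_loss          # weighted multi-task objective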
Audio-visual speech inpainting with deep learning
In this paper, we present a deep-learning-based framework for audio-visual speech
inpainting, i.e., the task of restoring the missing parts of an acoustic speech signal from …
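One common way to frame such inpainting is to predict the masked spectrogram frames from the surrounding audio together with synchronized visual features. The sketch below follows that framing; the sizes, fusion, and recurrent backbone are assumptions rather than the cited framework.

import torch
import torch.nn as nn

class AVInpainter(nn.Module):
    """Illustrative audio-visual inpainting model: a BLSTM fills masked
    spectrogram frames using surrounding audio and lip features."""
    def __init__(self, n_freq=257, n_vis=512, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_freq + n_vis, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_freq)

    def forward(self, masked_spec, vis_feat, mask):
        # masked_spec: (B, T, n_freq) with missing frames zeroed out
        # vis_feat:    (B, T, n_vis) lip features, covering the gap as well
        # mask:        (B, T, 1), 1 where audio is missing
        h, _ = self.rnn(torch.cat([masked_spec, vis_feat], dim=-1))
        filled = self.out(h)
        return mask * filled + (1 - mask) * masked_spec   # keep observed frames

spec, vis = torch.rand(2, 100, 257), torch.rand(2, 100, 512)
mask = (torch.rand(2, 100, 1) > 0.8).float()
restored = AVInpainter()(spec * (1 - mask), vis, mask)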