Silent speech interfaces for speech restoration: A review

JA Gonzalez-Lopez, A Gomez-Alanis… - IEEE …, 2020 - ieeexplore.ieee.org
This review summarises the status of silent speech interface (SSI) research. SSIs rely on non-
acoustic biosignals generated by the human body during speech production to enable …

SottoVoce: An ultrasound imaging-based silent speech interaction using deep neural networks

N Kimura, M Kono, J Rekimoto - … of the 2019 CHI Conference on Human …, 2019 - dl.acm.org
The availability of digital devices operated by voice is expanding rapidly. However, the
applications of voice interfaces are still restricted. For example, speaking in public places …

EMG-to-speech: Direct generation of speech from facial electromyographic signals

M Janke, L Diener - IEEE/ACM Transactions on Audio, Speech …, 2017 - ieeexplore.ieee.org
Silent speech interfaces are systems that enable speech communication even when an
acoustic signal is unavailable. In recent years, public interest in such interfaces has …
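
As a rough illustration of the direct EMG-to-speech idea named in the title above, the sketch below regresses frame-level vocoder parameters from stacked EMG features with a small feedforward network. The channel counts, feature dimensions, and context window are illustrative assumptions, not the authors' setup.

```python
import torch
import torch.nn as nn

# Assumed dimensions (illustrative only): 6 EMG channels, 5 time-domain
# features per channel, a context window of 15 frames stacked together,
# and 25 spectral coefficients per frame as the acoustic target.
N_CHANNELS, N_FEATS, CONTEXT, N_SPEC = 6, 5, 15, 25
IN_DIM = N_CHANNELS * N_FEATS * CONTEXT

class EMGToSpeech(nn.Module):
    """Frame-wise regression from stacked EMG features to vocoder parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IN_DIM, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, N_SPEC),
        )

    def forward(self, x):
        return self.net(x)

model = EMGToSpeech()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for parallel EMG/acoustic frames.
emg = torch.randn(32, IN_DIM)
spec = torch.randn(32, N_SPEC)

pred = model(emg)
loss = loss_fn(pred, spec)
loss.backward()
optim.step()
print(f"training loss on dummy batch: {loss.item():.4f}")
```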

DNN-based ultrasound-to-speech conversion for a silent speech interface

TG Csapó, T Grósz, G Gosztolya, L Tóth, A Markó - 2017 - real.mtak.hu
In this paper we present our initial results in articulatory-to-acoustic conversion based on
tongue movement recordings using Deep Neural Networks (DNNs). Despite the fact that …
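
A minimal sketch of the kind of mapping this entry describes: a small convolutional network that predicts per-frame spectral parameters from a single ultrasound tongue image. The image size, number of output coefficients, and layer sizes are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Illustrative shapes: 64x128 ultrasound frames, 25 spectral coefficients
# per frame as the synthesis target; these dimensions are assumptions.
class UltrasoundToSpectral(nn.Module):
    def __init__(self, n_out=25):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 16, 256), nn.ReLU(),
            nn.Linear(256, n_out),
        )

    def forward(self, frames):           # frames: (batch, 1, 64, 128)
        return self.head(self.conv(frames))

model = UltrasoundToSpectral()
frames = torch.randn(8, 1, 64, 128)      # dummy ultrasound frames
print(model(frames).shape)               # torch.Size([8, 25])
```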

TaL: a synchronised multi-speaker corpus of ultrasound tongue imaging, audio, and lip videos

MS Ribeiro, J Sanger, JX Zhang… - 2021 IEEE Spoken …, 2021 - ieeexplore.ieee.org
We present the Tongue and Lips corpus (TaL), a multi-speaker corpus of audio, ultrasound
tongue imaging, and lip videos. TaL consists of two parts: TaL1 is a set of six recording …
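
Working with a synchronised multimodal corpus like this typically means aligning streams recorded at different frame rates. The sketch below pairs each ultrasound frame with its temporally nearest lip-video frame; the frame rates, array shapes, and data are toy stand-ins, not the actual TaL distribution format.

```python
import numpy as np

# Toy stand-ins for one synchronised utterance; the real TaL files, frame
# rates, and shapes differ, so everything below is an illustrative assumption.
ULTRA_FPS, VIDEO_FPS = 80.0, 60.0
ultra = np.zeros((160, 64, 128))      # ~2 s of ultrasound tongue frames
lips = np.zeros((120, 120, 160))      # ~2 s of lip-video frames

def nearest_lip_frame(ultra_idx):
    """Index of the lip frame closest in time to a given ultrasound frame."""
    t = ultra_idx / ULTRA_FPS
    return min(int(round(t * VIDEO_FPS)), len(lips) - 1)

# Pair each ultrasound frame with its temporally nearest lip frame, giving
# aligned multimodal examples for articulatory-to-acoustic modelling.
pairs = [(ultra[i], lips[nearest_lip_frame(i)]) for i in range(len(ultra))]
print(f"{len(pairs)} aligned ultrasound/lip frame pairs")
```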

Updating the silent speech challenge benchmark with deep learning

Y Ji, L Liu, H Wang, Z Liu, Z Niu, B Denby - Speech Communication, 2018 - Elsevier
The term “Silent Speech Interface” was introduced almost a decade ago to describe
speech communication systems using only non-acoustic sensors, such as …
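
For the recognition side of such benchmarks, a common deep-learning recipe is a sequence model over per-frame sensor features trained with a CTC objective. The sketch below assumes a 128-dimensional feature vector per ultrasound/lip frame and a character-level output alphabet; all of these choices are illustrative, not the paper's system.

```python
import torch
import torch.nn as nn

# Sketch of a CTC recogniser over per-frame ultrasound/lip features; the
# 128-dim feature front end and the label set are illustrative assumptions.
N_FEAT, N_CLASSES = 128, 28        # 26 letters + space + CTC blank (index 0)

class SilentSpeechRecogniser(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_FEAT, 256, num_layers=2, batch_first=True)
        self.out = nn.Linear(256, N_CLASSES)

    def forward(self, x):              # x: (batch, time, N_FEAT)
        h, _ = self.rnn(x)
        return self.out(h).log_softmax(dim=-1)

model = SilentSpeechRecogniser()
ctc = nn.CTCLoss(blank=0)

feats = torch.randn(4, 120, N_FEAT)                   # dummy feature sequences
targets = torch.randint(1, N_CLASSES, (4, 20))        # dummy label sequences
log_probs = model(feats).transpose(0, 1)              # CTC expects (time, batch, classes)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 120, dtype=torch.long),
           target_lengths=torch.full((4,), 20, dtype=torch.long))
print(f"CTC loss on dummy batch: {loss.item():.3f}")
```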

A silent speech system based on permanent magnet articulography and direct synthesis

JA Gonzalez, LA Cheah, JM Gilbert, J Bai, SR Ell… - Computer Speech & …, 2016 - Elsevier
In this paper we present a silent speech interface (SSI) system aimed at restoring speech
communication for individuals who have lost their voice due to laryngectomy or diseases …
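
"Direct synthesis" here means mapping sensor data straight to speech parameters with no intermediate text recognition step. The sketch below illustrates that idea with a recurrent network over permanent-magnet-articulography sensor sequences; the channel count, target dimension, and architecture are assumptions rather than the paper's system.

```python
import torch
import torch.nn as nn

# Illustrative "direct synthesis" sketch: map sequences of magnetic-sensor
# measurements straight to vocoder parameters, with no intermediate text.
# Channel counts and target dimensions are assumptions, not the paper's setup.
N_SENSOR_CH, N_VOCODER = 9, 30

class PMADirectSynthesis(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_SENSOR_CH, 128, num_layers=2, batch_first=True)
        self.proj = nn.Linear(128, N_VOCODER)

    def forward(self, sensors):           # (batch, time, N_SENSOR_CH)
        h, _ = self.lstm(sensors)
        return self.proj(h)               # (batch, time, N_VOCODER)

model = PMADirectSynthesis()
sensors = torch.randn(2, 200, N_SENSOR_CH)   # dummy PMA sensor sequences
vocoder_params = model(sensors)
print(vocoder_params.shape)                  # torch.Size([2, 200, 30])
```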

Ultrasound-based articulatory-to-acoustic mapping with WaveGlow speech synthesis

TG Csapó, C Zainkó, L Tóth, G Gosztolya… - arXiv preprint arXiv …, 2020 - arxiv.org
For articulatory-to-acoustic mapping with deep neural networks, spectral and
excitation parameters of vocoders have typically been used as the training targets. However …
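
A minimal sketch of the alternative the snippet hints at: instead of predicting vocoder spectral and excitation parameters, train the articulatory-to-acoustic network to predict a mel-spectrogram, which a neural vocoder such as WaveGlow can then turn into a waveform. The input dimension, layer sizes, and mel configuration below are assumptions.

```python
import torch
import torch.nn as nn

# Predict an 80-bin mel-spectrogram from articulatory features; shapes and
# layer sizes are illustrative assumptions, not the paper's configuration.
N_ARTIC, N_MEL = 128, 80

mapper = nn.Sequential(
    nn.Linear(N_ARTIC, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, N_MEL),
)

artic_frames = torch.randn(1, 300, N_ARTIC)      # dummy articulatory features
mel = mapper(artic_frames).transpose(1, 2)       # (batch, n_mel, time)

# With a pretrained WaveGlow checkpoint loaded (omitted here because it
# requires a download), synthesis would be a single call along the lines of:
#   audio = waveglow.infer(mel)
print(mel.shape)                                  # torch.Size([1, 80, 300])
```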

Statistical conversion of silent articulation into audible speech using full-covariance HMM

T Hueber, G Bailly - Computer Speech & Language, 2016 - Elsevier
This article investigates the use of statistical mapping techniques for the conversion of
articulatory movements into audible speech with no restriction on the vocabulary, in the …
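
As a simplified stand-in for the full-covariance statistical mapping idea, the sketch below fits a joint Gaussian mixture over concatenated articulatory and acoustic frames and maps a new articulatory frame to its conditional expected acoustic frame. It omits the HMM temporal structure of the paper, and the dimensions and data are synthetic placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import multivariate_normal

# Fit a joint full-covariance GMM over [articulatory, acoustic] frames and
# compute E[y | x] for a new articulatory frame x. Data are synthetic.
rng = np.random.default_rng(0)
DX, DY, N = 12, 25, 2000                      # articulatory dim, acoustic dim, frames

x = rng.normal(size=(N, DX))                  # dummy articulatory features
y = x @ rng.normal(size=(DX, DY)) + 0.1 * rng.normal(size=(N, DY))
joint = np.hstack([x, y])

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(joint)

def map_frame(x_t):
    """Conditional expectation E[y | x_t] under the joint full-covariance GMM."""
    post = np.zeros(gmm.n_components)
    cond_means = np.zeros((gmm.n_components, DY))
    for k in range(gmm.n_components):
        mu_x, mu_y = gmm.means_[k, :DX], gmm.means_[k, DX:]
        S = gmm.covariances_[k]
        S_xx, S_yx = S[:DX, :DX], S[DX:, :DX]
        post[k] = gmm.weights_[k] * multivariate_normal.pdf(x_t, mu_x, S_xx)
        cond_means[k] = mu_y + S_yx @ np.linalg.solve(S_xx, x_t - mu_x)
    post /= post.sum()
    return post @ cond_means

y_hat = map_frame(x[0])
print("mean per-dimension error on a training frame:", np.abs(y_hat - y[0]).mean())
```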

DNN-based acoustic-to-articulatory inversion using ultrasound tongue imaging

D Porras, A Sepúlveda-Sepúlveda… - 2019 International Joint …, 2019 - ieeexplore.ieee.org
Speech sounds are produced by the coordinated movement of the speech organs. There
are several available methods to model the relationship between articulatory movements and the …
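
For the inversion direction this entry describes, a common baseline is a feedforward network that regresses a compact articulatory representation (for example, a PCA or autoencoder embedding of the ultrasound frame) from stacked acoustic frames. The dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Acoustic-to-articulatory inversion sketch: stacked MFCC frames in, a
# compact articulatory embedding out. All dimensions are assumptions.
N_MFCC, CONTEXT, N_ARTIC = 13, 11, 32     # MFCCs per frame, context frames, target dim

inverter = nn.Sequential(
    nn.Linear(N_MFCC * CONTEXT, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_ARTIC),
)

acoustic = torch.randn(16, N_MFCC * CONTEXT)   # dummy stacked MFCC frames
articulatory = inverter(acoustic)
print(articulatory.shape)                       # torch.Size([16, 32])
```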