Lip reading of hearing impaired persons using HMM

N Puviarasan, S Palanivel - Expert Systems with Applications, 2011 - Elsevier
This paper describes a method for lip reading of hearing impaired persons. The term lip
reading refers to recognizing the spoken words using visual speech information such as lip …

Audio-to-visual speech conversion using deep neural networks

S Taylor, A Kato, B Milner, I Matthews - 2016 - ueaeprints.uea.ac.uk
We study the problem of mapping from acoustic to visual speech with the goal of generating
accurate, perceptually natural speech animation automatically from an audio speech signal …

Prediction-based audiovisual fusion for classification of non-linguistic vocalisations

S Petridis, M Pantic - IEEE Transactions on Affective Computing, 2015 - ieeexplore.ieee.org
Prediction plays a key role in recent computational models of the brain and it has been
suggested that the brain constantly makes multisensory spatiotemporal predictions. Inspired …

3D Head Pose and Facial Expression Tracking using a Single Camera

LD Terissi, JC Gómez - J. Univers. Comput. Sci., 2010 - researchgate.net
Algorithms for 3D head pose and facial expression tracking using a single camera
(monocular image sequences) are presented in this paper. The proposed method is based on …

A linear model of acoustic-to-facial mapping: Model parameters, data set size, and generalization across speakers

MS Craig, P Van Lieshout, W Wong - The Journal of the Acoustical …, 2008 - pubs.aip.org
The relationship between acoustic and visual speech is important for understanding speech
perception, but it also forms the basis of a type of facial animator, which can predict …

Expression transfer for facial sketch animation

Y Yang, N Zheng, Y Liu, S Du, Y Su, Y Nishio - Signal Processing, 2011 - Elsevier
This paper presents a hierarchical animation method for transferring facial expressions
extracted from a performance video to different facial sketches. Without any expression …

HMM-based photo-realistic talking face synthesis using facial expression parameter mapping with deep neural networks

K Sato, T Nose, A Ito - Journal of Computer and Communications, 2017 - scirp.org
This paper proposes a technique for synthesizing a pixel-based photo-realistic talking face
animation using two-step synthesis with HMMs and DNNs. We introduce facial expression …

Animation of generic 3D head models driven by speech

L Terissi, M Cerda, JC Gomez… - … on Multimedia and …, 2011 - ieeexplore.ieee.org
In this paper, a system for speech-driven animation of generic 3D head models is presented.
The system is based on the inversion of a joint Audio-Visual Hidden Markov Model to …

A comprehensive system for facial animation of generic 3D head models driven by speech

LD Terissi, M Cerda, JC Gómez… - EURASIP Journal on …, 2013 - Springer
A comprehensive system for facial animation of generic 3D head models driven by speech is
presented in this article. In the training stage, audio-visual information is extracted from …

A Data-Driven Approach For Automatic Visual Speech In Swedish Speech Synthesis Applications

J Hagrot - 2019 - diva-portal.org
This project investigates the use of artificial neural networks for visual speech synthesis. The
objective was to produce a framework for animated chat bots in Swedish. A survey of the …