Authors
Stefan Bleeck, Travis James Francis Paul Ralph-Donaldson
Publication date
2022/9/12
Description
Recent SOTA (state-of-the-art) AVSR (Audio-Visual Speech Recognition) systems such as Meta's AV-HuBERT have highlighted the superior efficacy of multi-modal speech recognition compared to audio-only implementations, especially in noisy conditions. However, planar feature extraction methods remain susceptible to variable lighting conditions and skin tones. Moreover, these AVSR systems are currently unable to map visemes (visual phonemes) to phonemes with a one-to-one correspondence. One potential avenue of research to address both of these shortcomings is the application of newer RGB-D cameras (analogous to Microsoft's Kinect sensor) to extract more comprehensive facial speech data that is invariant to both lighting and skin tone. Depth data also carries additional, more readily differentiable speech information pertaining to phonemes that involve lip protrusion, such as rounded vowels, which may allow for more accurate discrimination between visemes. The current RGB-D AVSR literature has yet to thoroughly explore the applicability of the depth modality in more challenging classification tasks, such as continuous and free speech, and has mostly been limited to smaller, speaker-dependent datasets containing only individual words or phrases. This study will investigate the depth modality's influence on speech classification using a bespoke, broadly generalisable, multi-modal, speaker-independent dataset. This dataset will contain both continuous and free speech, in a rigorous attempt to assess the depth modality's robustness against these more challenging classification tasks. This paper will then compare the proposed RGB-D …
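As a minimal illustrative sketch (not part of the abstract above), the viseme-to-phoneme ambiguity can be pictured as a many-to-one grouping: several phonemes share the same visual appearance, so an observed viseme cannot be resolved to a single phoneme. The specific grouping and names below are assumptions chosen for illustration, not the mapping used in this study.

```python
# Illustrative many-to-one phoneme-to-viseme grouping (a common simplification,
# not the study's actual mapping), showing why a one-to-one viseme-to-phoneme
# correspondence is not achievable from lip appearance alone.
PHONEME_TO_VISEME = {
    # Bilabials look identical on the lips.
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    # Labiodentals share the same lip-to-teeth appearance.
    "f": "labiodental", "v": "labiodental",
    # Rounded vowels involve lip protrusion, which depth sensing captures directly.
    "uw": "rounded", "ow": "rounded", "oy": "rounded",
}

def candidate_phonemes(viseme: str) -> list[str]:
    """Return every phoneme consistent with an observed viseme."""
    return [p for p, v in PHONEME_TO_VISEME.items() if v == viseme]

if __name__ == "__main__":
    # An observed 'bilabial' viseme is ambiguous between /p/, /b/ and /m/.
    print(candidate_phonemes("bilabial"))  # ['p', 'b', 'm']
```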