A method for evaluating audio-visual scene analysis in multi-talker environments

KD Lund, A Ahrens, T Dau - International Symposium on Auditory and Audiological Research: Auditory Learning in Biological and Artificial …, 2020 - orbit.dtu.dk
Abstract
In cocktail-party environments, listeners are able to comprehend and localize multiple simultaneous talkers. With current virtual reality (VR) technology and virtual acoustics it has become possible to present an audio-visual cocktail-party in a controlled laboratory environment. A new continuous speech corpus with ten monologues from five female and five male talkers was designed and recorded. Each monologue contained a substantially different topic. Using an egocentric interaction method in VR, subjects were asked to label perceived talkers according to source position and content of speech, while varying the number of simultaneously presented talkers. With an increasing number of talkers, the subjects’ accuracy in performing this task was found to decrease. When more than six talkers were in a scene, the number of talkers was underestimated and the azimuth localization error increased. With this method, a new approach is presented to gauge listeners’ ability to analyze complex audio-visual scenes.