The segmentation of multi-channel meeting recordings for automatic speech recognition

J Dines, J Vepa, T Hain - 2006 - infoscience.epfl.ch
Abstract
One major research challenge in the domain of the analysis of meeting room data is the automatic transcription of what is spoken during meetings, a task which has gained considerable attention within the ASR research community through the NIST rich transcription evaluations conducted over the last three years. One of the major difficulties in carrying out automatic speech recognition (ASR) on this data is dealing with the challenging recording environment, which has instigated the development of novel audio pre-processing approaches. In this paper we present a system for the automatic segmentation of multiple-channel individual headset microphone (IHM) meeting recordings for automatic speech recognition. The system relies on an MLP classifier trained from several meeting room corpora to identify speech/non-speech segments of the recordings. We give a detailed analysis of the segmentation performance for a number of system configurations, with our best system achieving ASR performance on automatically generated segments within 1.3% (3.7% relative) of a manual segmentation of the data.
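The abstract gives only a high-level description of the segmentation pipeline. As a rough, hedged illustration (not the authors' implementation), the sketch below shows how a frame-level MLP speech/non-speech classifier can be combined with simple thresholding and a minimum-duration constraint to produce speech segments per IHM channel. The feature dimension, network sizes, 0.5 threshold, and minimum-duration value are assumptions chosen for the example, not details taken from the paper.

```python
# Hypothetical sketch: frame-wise MLP speech/non-speech posteriors,
# smoothed into (start, end) speech segments for one recording channel.
import numpy as np
import torch
import torch.nn as nn


class SpeechNonSpeechMLP(nn.Module):
    """Small MLP mapping one acoustic feature frame to P(speech)."""

    def __init__(self, feat_dim: int = 39, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (num_frames, feat_dim) -> (num_frames,) speech posteriors
        return self.net(frames).squeeze(-1)


def posteriors_to_segments(post: np.ndarray,
                           threshold: float = 0.5,
                           min_frames: int = 30,
                           frame_shift_s: float = 0.01):
    """Threshold frame posteriors and keep runs of at least min_frames.

    Returns a list of (start_time_s, end_time_s) speech segments.
    """
    speech = post > threshold
    segments, start = [], None
    for i, is_speech in enumerate(speech):
        if is_speech and start is None:
            start = i
        elif not is_speech and start is not None:
            if i - start >= min_frames:
                segments.append((start * frame_shift_s, i * frame_shift_s))
            start = None
    if start is not None and len(speech) - start >= min_frames:
        segments.append((start * frame_shift_s, len(speech) * frame_shift_s))
    return segments


# Usage: feats would be per-channel acoustic features (e.g. MFCC-style);
# random data stands in here for a real IHM channel, and the model is
# untrained, so the output is illustrative only.
model = SpeechNonSpeechMLP()
feats = torch.randn(1000, 39)
with torch.no_grad():
    post = model(feats).numpy()
print(posteriors_to_segments(post))
```

In a real system the classifier would be trained on labelled meeting corpora and the segment boundaries typically padded and merged before being passed to the recognizer; those steps are omitted here for brevity.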