Distortionless multi-channel target speech enhancement for overlapped speech recognition

B. Wu, M. Yu, L. Chen, Y. Xu, C. Weng, D. Su, D. Yu
arXiv preprint arXiv:2007.01566, 2020 — arxiv.org
Speech enhancement techniques based on deep learning have brought significant improvements in speech quality and intelligibility. Nevertheless, a large gain in speech quality as measured by objective metrics, such as the perceptual evaluation of speech quality (PESQ), does not necessarily lead to improved speech recognition performance, because the enhancement stage introduces speech distortion. In this paper, a frequency-domain model based on a multi-channel dilated convolutional network is presented to enhance the target speaker under far-field, noisy, multi-talker conditions. We study three approaches toward distortionless waveforms for overlapped speech recognition: estimating the complex ideal ratio mask with an infinite range, incorporating a filterbank (fbank) loss into multi-objective learning, and fine-tuning the enhancement model with an acoustic model. Experimental results demonstrate the effectiveness of all three approaches in reducing speech distortion and improving recognition accuracy. In particular, the jointly tuned enhancement model also works well with other standalone acoustic models on real test data.
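To make the second approach concrete, below is a minimal PyTorch sketch (not the authors' code) of a multi-objective loss that combines an unbounded complex-mask regression term with an auxiliary fbank-domain term computed on the enhanced spectrum. The mask layout, mel parameters, and weighting factor ALPHA are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a multi-objective enhancement loss, assuming the model
# predicts an unbounded complex ratio mask over the mixture STFT.
import torch
import torchaudio

N_FFT, N_MELS, SAMPLE_RATE = 512, 40, 16000
ALPHA = 0.1  # assumed weight balancing the two objectives

# Mel filterbank applied to power spectrograms of shape (B, F, T).
mel_fbank = torchaudio.transforms.MelScale(
    n_mels=N_MELS, sample_rate=SAMPLE_RATE, n_stft=N_FFT // 2 + 1
)

def multi_objective_loss(est_mask, mix_stft, clean_stft):
    """est_mask: (B, F, T, 2) real/imag cIRM estimate, unbounded range.
    mix_stft, clean_stft: complex (B, F, T) STFTs of mixture and target."""
    # Apply the complex mask to the mixture to get the enhanced STFT.
    est = torch.view_as_complex(est_mask.contiguous()) * mix_stft
    # Complex-spectrum loss: drives the unbounded cIRM estimate.
    spec_loss = (est - clean_stft).abs().pow(2).mean()
    # Fbank loss: match log-mel energies so ASR features are less distorted.
    est_fbank = torch.log(mel_fbank(est.abs().pow(2)) + 1e-8)
    ref_fbank = torch.log(mel_fbank(clean_stft.abs().pow(2)) + 1e-8)
    fbank_loss = (est_fbank - ref_fbank).abs().mean()
    return spec_loss + ALPHA * fbank_loss

# Illustrative usage: batch 2, 257 frequency bins (N_FFT//2+1), 100 frames.
mix = torch.randn(2, 257, 100, dtype=torch.cfloat)
clean = torch.randn(2, 257, 100, dtype=torch.cfloat)
mask = torch.randn(2, 257, 100, 2)  # unbounded real/imag mask estimate
print(multi_objective_loss(mask, mix, clean))
```

The design intuition matching the abstract: the spectral term alone can over-suppress and distort the target, while the auxiliary fbank term penalizes errors in the feature domain the recognizer actually consumes.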