Automatic assessment of depression from speech via a hierarchical attention transfer network and attention autoencoders

Z. Zhao, Z. Bao, Z. Zhang, J. Deng, N. Cummins, H. Wang, J. Tao, B. Schuller
IEEE Journal of Selected Topics in Signal Processing, 2019 - ieeexplore.ieee.org
Early interventions in mental health conditions such as Major Depressive Disorder (MDD) are critical to improved health outcomes, as they can help reduce the burden of the disease. Since efficient diagnosis of depression severity is therefore highly desirable, the use of behavioural cues such as speech characteristics in diagnosis is attracting increasing interest in the field of quantitative mental health research. However, despite the widespread use of machine learning methods in the depression analysis community, the lack of adequate labelled data has become a bottleneck preventing the broader application of techniques such as deep learning. Accordingly, we herein describe a deep learning approach that combines unsupervised learning, knowledge transfer and hierarchical attention for the task of speech-based depression severity measurement. Our novel approach, a Hierarchical Attention Transfer Network (HATN), first learns attention with hierarchical attention autoencoders, then on a speech recognition source task, and finally transfers this knowledge into a depression analysis system. Experiments based on the depression sub-challenge dataset of the Audio/Visual Emotion Challenge (AVEC) 2017 demonstrate the effectiveness of our proposed model. On the test set, our technique outperformed other speech-based systems presented in the literature, achieving a Root Mean Square Error (RMSE) of 5.51 and a Mean Absolute Error (MAE) of 4.20 on the Patient Health Questionnaire (PHQ)-8 scale [0, 24]. To the best of our knowledge, these scores represent the best-known speech results on the AVEC 2017 depression corpus to date.
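The abstract describes a hierarchical attention encoder whose attention is learned on a source task and then transferred to PHQ-8 severity regression. The sketch below illustrates that general idea only; it is a minimal illustration under stated assumptions, not the authors' implementation. PyTorch, GRU encoders, additive attention pooling, the layer sizes, and the names AttentionPool and HierarchicalAttentionEncoder are all hypothetical choices made here for illustration.

# Minimal sketch of hierarchical attention with transfer for PHQ-8 regression.
# Assumptions (not from the paper): PyTorch, GRU encoders, additive attention
# pooling, and simple weight copying as the "transfer" step; all sizes are toy values.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Additive attention pooling over a sequence of hidden states."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, h):                          # h: (batch, time, dim)
        w = torch.softmax(self.score(h), dim=1)    # attention weights over time
        return (w * h).sum(dim=1)                  # (batch, dim)

class HierarchicalAttentionEncoder(nn.Module):
    """Frame-level then segment-level encoders, each followed by attention pooling."""
    def __init__(self, n_feats=40, dim=128):
        super().__init__()
        self.frame_rnn = nn.GRU(n_feats, dim, batch_first=True, bidirectional=True)
        self.frame_att = AttentionPool(2 * dim)
        self.seg_rnn = nn.GRU(2 * dim, dim, batch_first=True, bidirectional=True)
        self.seg_att = AttentionPool(2 * dim)

    def forward(self, x):                          # x: (batch, n_segments, n_frames, n_feats)
        b, s, t, f = x.shape
        h, _ = self.frame_rnn(x.reshape(b * s, t, f))
        seg = self.frame_att(h).reshape(b, s, -1)  # one vector per segment
        h2, _ = self.seg_rnn(seg)
        return self.seg_att(h2)                    # one vector per recording

# Source encoder (pretrained on a source task such as speech recognition or
# autoencoder reconstruction) and target encoder for depression severity.
source = HierarchicalAttentionEncoder()
# ... pretrain `source` on the source task here (placeholder) ...

target = HierarchicalAttentionEncoder()
target.load_state_dict(source.state_dict())       # transfer the learned encoders/attention
regressor = nn.Linear(2 * 128, 1)                  # PHQ-8 severity regression head

x = torch.randn(2, 6, 100, 40)                     # toy batch: 2 recordings, 6 segments each
phq8_pred = regressor(target(x)).squeeze(-1)
print(phq8_pred.shape)                             # torch.Size([2])

Copying the source encoder's weights is only the simplest conceivable form of transfer; in the paper, the knowledge transferred comes from hierarchical attention learned via autoencoders and speech recognition, which would replace the placeholder pretraining step above.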