Emotion recognition using multimodal residual LSTM network

J Ma, H Tang, WL Zheng, BL Lu - Proceedings of the 27th ACM International Conference on Multimedia, 2019 - dl.acm.org
Various studies have shown that the temporal information captured by conventional long short-term memory (LSTM) networks is very useful for enhancing multimodal emotion recognition using electroencephalography (EEG) and other physiological signals. However, the dependency among multiple modalities and high-level temporal-feature learning using deeper LSTM networks are yet to be investigated. Thus, we propose a multimodal residual LSTM (MMResLSTM) network for emotion recognition. The MMResLSTM network shares the weights across the modalities in each LSTM layer to learn the correlation between the EEG and other physiological signals. It contains both the spatial shortcut paths provided by the residual network and the temporal shortcut paths provided by the LSTM for efficiently learning emotion-related high-level features. The proposed network was evaluated on DEAP, a publicly available dataset for EEG-based emotion recognition. The experimental results indicate that the proposed MMResLSTM network yielded a promising result, with a classification accuracy of 92.87% for arousal and 92.30% for valence.
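The two architectural ideas in the abstract (per-layer weight sharing across modalities, plus residual shortcuts between stacked LSTM layers) can be illustrated with a minimal numpy sketch. This is a hypothetical toy forward pass, not the authors' implementation: all names (`LSTMCell`, `run_layer`, `mmres_lstm`), dimensions, and initialization are assumptions for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """A plain LSTM cell; all four gate weights stored in one matrix.
    One instance is SHARED by both modalities (hypothetical sketch of
    the paper's per-layer weight sharing)."""
    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        # input and hidden are both size `dim` so residual adds line up
        self.W = rng.standard_normal((4 * dim, 2 * dim)) * 0.1
        self.b = np.zeros(4 * dim)
        self.dim = dim

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)          # input, forget, cell, output gates
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

def run_layer(cell, seq):
    """Run one LSTM layer over a (T, dim) sequence; return (T, dim) outputs."""
    h = np.zeros(cell.dim)
    c = np.zeros(cell.dim)
    outs = []
    for x in seq:
        h, c = cell.step(x, h, c)
        outs.append(h)
    return np.stack(outs)

def mmres_lstm(eeg, peripheral, n_layers=2):
    """Stacked LSTM layers: each layer's cell is applied to both modality
    streams (shared weights), and each layer's output is added to its
    input (residual 'spatial' shortcut)."""
    cells = [LSTMCell(eeg.shape[1], seed=k) for k in range(n_layers)]
    x_eeg, x_per = eeg, peripheral
    for cell in cells:
        x_eeg = x_eeg + run_layer(cell, x_eeg)   # residual shortcut
        x_per = x_per + run_layer(cell, x_per)   # same cell => shared weights
    return x_eeg, x_per

# Toy input: two modality streams of 8 timesteps, 4 features each
T, dim = 8, 4
rng = np.random.default_rng(1)
out_eeg, out_per = mmres_lstm(rng.standard_normal((T, dim)),
                              rng.standard_normal((T, dim)))
print(out_eeg.shape, out_per.shape)  # (8, 4) (8, 4)
```

Sharing one cell object per layer is what ties the EEG and peripheral-signal representations together; a final classifier over the last-layer features (omitted here) would produce the arousal/valence predictions the abstract reports.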