CTNet: Conversational transformer network for emotion recognition

Z Lian, B Liu, J Tao - IEEE/ACM Transactions on Audio, Speech …, 2021 - ieeexplore.ieee.org
Emotion recognition in conversation is a crucial topic due to its widespread applications in the
field of human-computer interaction. Unlike vanilla emotion recognition of individual …

AVEC 2019 workshop and challenge: state-of-mind, detecting depression with AI, and cross-cultural affect recognition

F Ringeval, B Schuller, M Valstar, N Cummins… - Proceedings of the 9th …, 2019 - dl.acm.org
The Audio/Visual Emotion Challenge and Workshop (AVEC 2019) 'State-of-Mind, Detecting
Depression with AI, and Cross-cultural Affect Recognition' is the ninth competition event …

Missing modality imagination network for emotion recognition with uncertain missing modalities

J Zhao, R Li, Q Jin - Proceedings of the 59th Annual Meeting of …, 2021 - aclanthology.org
Multimodal fusion has been shown to improve emotion recognition performance in previous
works. However, in real-world applications, we often encounter the problem of missing …

Transformer encoder with multi-modal multi-head attention for continuous affect recognition

H Chen, D Jiang, H Sahli - IEEE Transactions on Multimedia, 2020 - ieeexplore.ieee.org
Continuous affect recognition is becoming an increasingly attractive research topic in
affective computing. Previous works mainly focused on modelling the temporal dependency …

Multi-modal continuous dimensional emotion recognition using recurrent neural network and self-attention mechanism

L Sun, Z Lian, J Tao, B Liu, M Niu - … of the 1st international on multimodal …, 2020 - dl.acm.org
Automatic perception and understanding of human emotion or sentiment has a wide range
of applications and has attracted increasing attention in recent years. The Multimodal Sentiment …

An active learning paradigm for online audio-visual emotion recognition

I Kansizoglou, L Bampis… - IEEE Transactions on …, 2019 - ieeexplore.ieee.org
The advancement of Human-Robot Interaction (HRI) drives research into the development of
advanced emotion identification architectures that fathom audio-visual (AV) modalities of …

Multi-resolution modulation-filtered cochleagram feature for LSTM-based dimensional emotion recognition from speech

Z Peng, J Dang, M Unoki, M Akagi - Neural Networks, 2021 - Elsevier
Continuous dimensional emotion recognition from speech helps robots or virtual agents
capture the temporal dynamics of a speaker's emotional state in natural human–robot …

Modeling emotion in complex stories: the Stanford Emotional Narratives Dataset

DC Ong, Z Wu, ZX Tan, M Reddan… - IEEE Transactions …, 2019 - ieeexplore.ieee.org
Human emotions unfold over time, and affective computing research needs to prioritize
capturing this crucial component of real-world affect. Modeling dynamic emotional stimuli …

Facial affect recognition based on transformer encoder and audiovisual fusion for the ABAW5 challenge

Z Zhang, L An, Z Cui, T Dong, Y Jiang, J Shi… - arXiv preprint arXiv …, 2023 - arxiv.org
In this paper, we present our solutions for the 5th Workshop and Competition on Affective
Behavior Analysis in-the-wild (ABAW), which includes four sub-challenges of Valence …

EmoBed: Strengthening monomodal emotion recognition via training with crossmodal emotion embeddings

J Han, Z Zhang, Z Ren… - IEEE Transactions on …, 2019 - ieeexplore.ieee.org
Despite remarkable advances in emotion recognition, existing approaches are severely restrained by
either the essentially limited property of the employed single modality, or the synchronous …