PanoSent: A Panoptic Sextuple Extraction Benchmark for Multimodal Conversational Aspect-based Sentiment Analysis

M Luo, H Fei, B Li, S Wu, Q Liu, S Poria… - Proceedings of the …, 2024 - dl.acm.org
While existing Aspect-based Sentiment Analysis (ABSA) has received extensive effort and
advancement, there are still gaps in defining a more holistic research target seamlessly …

A Survey of Ontology Expansion for Conversational Understanding

J Liang, Y Wu, Y Fang, H Fei, L Liao - arXiv preprint arXiv:2410.15019, 2024 - arxiv.org
In the rapidly evolving field of conversational AI, Ontology Expansion (OnExp) is crucial for
enhancing the adaptability and robustness of conversational agents. Traditional models rely …

A unimodal valence-arousal driven contrastive learning framework for multimodal multi-label emotion recognition

W Zheng, J Yu, R Xia - Proceedings of the 32nd ACM International …, 2024 - dl.acm.org
Multimodal Multi-Label Emotion Recognition (MMER) aims to identify one or more emotion
categories expressed by an utterance of a speaker. Despite obtaining promising results …

Event-centric hierarchical hyperbolic graph for multi-hop question answering over knowledge graphs

X Zhu, W Gao, T Li, W Yao, H Deng - Engineering Applications of Artificial …, 2024 - Elsevier
Question Answering over Knowledge Graphs (KGQA) blends natural language
processing with structured knowledge representation. While much attention of existing …

SpeechEE: A Novel Benchmark for Speech Event Extraction

B Wang, M Zhang, H Fei, Y Zhao, B Li, S Wu… - Proceedings of the …, 2024 - dl.acm.org
Event extraction (EE) is a critical direction in the field of information extraction, laying an
important foundation for the construction of structured knowledge bases. EE from text has …

Multimodal emotion-cause pair extraction with holistic interaction and label constraint

B Li, H Fei, F Li, T Chua, D Ji - ACM Transactions on Multimedia …, 2024 - dl.acm.org
The multimodal emotion-cause pair extraction (MECPE) task aims to detect the emotions,
causes, and emotion-cause pairs from multimodal conversations. Existing methods for this …

Multimodal Consistency-Based Teacher for Semi-Supervised Multimodal Sentiment Analysis

Z Yuan, J Fang, H Xu, K Gao - IEEE/ACM Transactions on …, 2024 - ieeexplore.ieee.org
Multimodal sentiment analysis holds significant importance within the realm of human-
computer interaction. Due to the ease of collecting unlabeled online resources compared to …

FacialPulse: An Efficient RNN-based Depression Detection via Temporal Facial Landmarks

R Wang, J Huang, J Zhang, X Liu, X Zhang… - Proceedings of the …, 2024 - dl.acm.org
Depression is a prevalent mental health disorder that significantly impacts individuals' lives
and well-being. Early detection and intervention are crucial for effective treatment and …

Textualized and feature-based models for compound multimodal emotion recognition in the wild

N Richet, S Belharbi, H Aslam, ME Schadt… - arXiv preprint arXiv …, 2024 - arxiv.org
Systems for multimodal emotion recognition (ER) are commonly trained to extract features
from different modalities (e.g., visual, audio, and textual) that are combined to predict …

A twin disentanglement Transformer Network with Hierarchical-Level Feature Reconstruction for robust multimodal emotion recognition

C Li, L Xie, X Wang, H Pan, Z Wang - Expert Systems with Applications, 2025 - Elsevier
In real-world human–computer interaction, the performance of multimodal emotion
recognition models is inevitably affected by random modality feature missing. Thus, robust …