A review of human activity recognition methods

M Vrigkas, C Nikou, IA Kakadiaris - Frontiers in Robotics and AI, 2015 - frontiersin.org
Recognizing human activities from video sequences or still images is a challenging task due
to problems such as background clutter, partial occlusion, changes in scale, viewpoint …

Annotation and processing of continuous emotional attributes: Challenges and opportunities

A Metallinou, S Narayanan - 2013 10th IEEE International …, 2013 - ieeexplore.ieee.org
Human emotional and cognitive states evolve with variable intensity and clarity through the
course of social interactions and experiences, and they are continuously influenced by a …

Survey on emotional body gesture recognition

F Noroozi, CA Corneanu, D Kamińska… - IEEE Transactions on …, 2018 - ieeexplore.ieee.org
Automatic emotion recognition has become a trending research topic in the past decade.
While works based on facial expressions or speech abound, recognizing affect from body …

Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions

F Ringeval, A Sonderegger, J Sauer… - 2013 10th IEEE …, 2013 - ieeexplore.ieee.org
We present in this paper a new multimodal corpus of spontaneous collaborative and
affective interactions in French: RECOLA, which is being made available to the research …

Towards emotionally aware AI smart classroom: Current issues and directions for engineering and education

Y Kim, T Soyata, RF Behnagh - IEEE Access, 2018 - ieeexplore.ieee.org
Future smart classrooms that we envision will significantly enhance the learning experience and
seamless communication among students and teachers using real-time sensing and …

A multimodal emotional human–robot interaction architecture for social robots engaged in bidirectional communication

A Hong, N Lunscher, T Hu, Y Tsuboi… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
For social robots to effectively engage in human–robot interaction (HRI), they need to be
able to interpret human affective cues and to respond appropriately via display of their own …

Context-aware personality inference in dyadic scenarios: Introducing the UDIVA dataset

C Palmero, J Selva, S Smeureanu… - Proceedings of the …, 2021 - openaccess.thecvf.com
This paper introduces UDIVA, a new non-acted dataset of face-to-face dyadic interactions,
where interlocutors perform competitive and collaborative tasks with different behavior …

Multimodal prediction of affective dimensions and depression in human-computer interactions

R Gupta, N Malandrakis, B Xiao, T Guha… - Proceedings of the 4th …, 2014 - dl.acm.org
Depression is one of the most common mood disorders. Technology has the potential to
assist in screening and treating people with depression by robustly modeling and tracking …

An investigation of annotation delay compensation and output-associative fusion for multimodal continuous emotion prediction

Z Huang, T Dang, N Cummins, B Stasak, P Le… - Proceedings of the 5th …, 2015 - dl.acm.org
Continuous emotion dimension prediction has increased in popularity over the last few
years, as the shift away from discrete classification-based tasks has introduced more realism …

The USC CreativeIT database of multimodal dyadic interactions: From speech and full body motion capture to continuous emotional annotations

A Metallinou, Z Yang, C Lee, C Busso… - Language Resources …, 2016 - Springer
Improvised acting is a viable technique to study expressive human communication and to
shed light on actors' creativity. The USC CreativeIT database provides a novel, freely …