Energy-efficient and interpretable multisensor human activity recognition via deep fused lasso net

Y Zhou, J Xie, X Zhang, W Wu… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Utilizing data acquired by multiple wearable sensors can usually guarantee more accurate
recognition for deep learning-based human activity recognition. However, an increased …
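
The snippet does not show the network itself, but the fused lasso component named in the title has a standard form. Below is a minimal sketch, in PyTorch, of the classical fused lasso regularizer the name alludes to (an L1 term for sparsity plus an L1 term on neighboring differences); the weight name and coefficients are illustrative, not taken from the paper.

```python
import torch

def fused_lasso_penalty(w: torch.Tensor, lam_sparse: float = 1e-3,
                        lam_fuse: float = 1e-3) -> torch.Tensor:
    """Classical fused lasso regularizer over a 1-D weight vector.

    Combines an L1 term (drives individual weights to zero, e.g. to
    switch off uninformative sensor channels, aiding energy efficiency
    and interpretability) with an L1 term on successive differences
    (encourages neighboring weights to share a value, giving
    piecewise-constant weight maps).
    """
    sparsity = w.abs().sum()               # ||w||_1
    fusion = (w[1:] - w[:-1]).abs().sum()  # sum_j |w_j - w_{j-1}|
    return lam_sparse * sparsity + lam_fuse * fusion

# Illustrative usage during training (model.sensor_weights is hypothetical):
# loss = criterion(model(x), y) + fused_lasso_penalty(model.sensor_weights)
```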

FedMEKT: Distillation-based embedding knowledge transfer for multimodal federated learning

HQ Le, MNH Nguyen, CM Thwal, Y Qiao, C Zhang… - Neural Networks, 2024 - Elsevier
Federated learning (FL) enables a decentralized machine learning paradigm for multiple
clients to collaboratively train a generalized global model without sharing their private data …
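
The snippet describes the standard federated learning setup. For reference only (FedMEKT itself transfers knowledge via distilled embeddings rather than plain weight averaging), here is a minimal sketch of the baseline FedAvg-style aggregation step that lets clients train a shared model without exposing their data; all names are illustrative.

```python
from typing import Dict, List
import torch

def fedavg(client_states: List[Dict[str, torch.Tensor]],
           client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """One server-side aggregation round: combine client model weights,
    weighted by local dataset size, without ever seeing raw client data.
    (Baseline sketch; not FedMEKT's distillation-based mechanism.)"""
    total = sum(client_sizes)
    global_state = {}
    for key in client_states[0]:
        global_state[key] = sum(
            (n / total) * state[key].float()
            for state, n in zip(client_states, client_sizes)
        )
    return global_state
```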

WEAR: An outdoor sports dataset for wearable and egocentric activity recognition

M Bock, H Kuehne, K Van Laerhoven… - Proceedings of the ACM …, 2024 - dl.acm.org
Research has shown the complementarity of camera- and inertial-based data for modeling
human activities, yet datasets with both egocentric video and inertial-based sensor data …

EQA-MX: Embodied question answering using multimodal expression

MM Islam, A Gladstone, R Islam… - The Twelfth International …, 2023 - openreview.net
Humans predominantly use verbal utterances and nonverbal gestures (e.g., eye gaze and
pointing gestures) in their natural interactions. For instance, pointing gestures and verbal …

M3Sense: Affect-agnostic multitask representation learning using multimodal wearable sensors

S Samyoun, MM Islam, T Iqbal, J Stankovic - Proceedings of the ACM on …, 2022 - dl.acm.org
Modern smartwatches and wrist wearables with multiple physiological sensing modalities
have emerged as a subtle way to detect different mental health conditions, such as anxiety …

PATRON: Perspective-aware multitask model for referring expression grounding using embodied multimodal cues

MM Islam, A Gladstone, T Iqbal - … of the AAAI Conference on Artificial …, 2023 - ojs.aaai.org
Humans naturally use referring expressions with verbal utterances and nonverbal gestures
to refer to objects and events. As these referring expressions can be interpreted differently …

CAESAR: An embodied simulator for generating multimodal referring expression datasets

MM Islam, R Mirzaiee, A Gladstone… - Advances in Neural …, 2022 - proceedings.neurips.cc
Humans naturally use verbal utterances and nonverbal gestures to refer to various objects
(known as referring expressions) in different interactional scenarios. As collecting …

MAVEN: A memory-augmented recurrent approach for multimodal fusion

MM Islam, MS Yasar, T Iqbal - IEEE Transactions on Multimedia, 2022 - ieeexplore.ieee.org
Multisensory systems provide complementary information that aids many machine learning
approaches in perceiving the environment comprehensively. These systems consist of …
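
The excerpt stops before describing the model itself, so the sketch below shows only a generic recurrent fusion baseline (concatenate time-aligned per-modality features and integrate them with a GRU), not MAVEN's memory-augmented architecture; class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class RecurrentFusion(nn.Module):
    """Minimal recurrent multimodal fusion baseline: concatenate the
    modality features at each timestep and integrate with a GRU.
    (Generic sketch only; MAVEN's memory augmentation is not shown.)"""
    def __init__(self, dims, hidden=128):
        super().__init__()
        self.gru = nn.GRU(sum(dims), hidden, batch_first=True)

    def forward(self, modalities):
        # modalities: list of (batch, time, dim_i) tensors, time-aligned
        fused = torch.cat(modalities, dim=-1)
        out, _ = self.gru(fused)
        return out[:, -1]  # final fused representation
```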

VADER: Vector-Quantized Generative Adversarial Network for Motion Prediction

MS Yasar, T Iqbal - 2023 IEEE/RSJ International Conference on …, 2023 - ieeexplore.ieee.org
Human motion prediction is an essential component for enabling close-proximity
human-robot collaboration. The task of accurately predicting human motion is non-trivial and is …
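
The excerpt omits the model details; the sketch below illustrates only the generic vector-quantization step named in the title (nearest-codeword lookup with a straight-through gradient, as popularized by VQ-VAE), not VADER's full generative-adversarial pipeline. Tensor shapes and names are illustrative.

```python
import torch

def vector_quantize(z: torch.Tensor, codebook: torch.Tensor):
    """Nearest-codeword lookup used by vector-quantized generators.

    z:        (batch, dim) continuous encoder outputs
    codebook: (num_codes, dim) learnable embedding table
    Returns the quantized vectors and chosen code indices; the
    straight-through trick lets gradients reach the encoder despite
    the non-differentiable argmin.
    """
    dists = torch.cdist(z, codebook)  # (batch, num_codes) distances
    indices = dists.argmin(dim=1)     # nearest codeword per input
    z_q = codebook[indices]           # (batch, dim) quantized vectors
    z_q = z + (z_q - z).detach()      # straight-through estimator
    return z_q, indices
```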

IMPRINT: Interactional dynamics-aware motion prediction in teams using multimodal context

MS Yasar, MM Islam, T Iqbal - ACM Transactions on Human-Robot …, 2024 - dl.acm.org
Robots are moving from working in isolation to working with humans as part of
human-robot teams. In such situations, they are expected to work with multiple humans and need to …