Intelligent wearable systems: Opportunities and challenges in health and sports

L Yang, O Amin, B Shihada - ACM Computing Surveys, 2024 - dl.acm.org
Wearable devices, or wearables, designed to be attached to the human body, can gather
personalized real-time data and continuously monitor an individual's health status and …

Personal LLM agents: Insights and survey about the capability, efficiency and security

Y Li, H Wen, W Wang, X Li, Y Yuan, G Liu, J Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Since the advent of personal computing devices, intelligent personal assistants (IPAs) have
been one of the key technologies that researchers and engineers have focused on, aiming …

MuMu: Cooperative multitask learning-based guided multimodal fusion

MM Islam, T Iqbal - Proceedings of the AAAI conference on artificial …, 2022 - ojs.aaai.org
Multimodal sensors (visual, non-visual, and wearable) can provide complementary
information to develop robust perception systems for recognizing activities accurately …

PATRON: Perspective-aware multitask model for referring expression grounding using embodied multimodal cues

MM Islam, A Gladstone, T Iqbal - … of the AAAI Conference on Artificial …, 2023 - ojs.aaai.org
Humans naturally use referring expressions with verbal utterances and nonverbal gestures
to refer to objects and events. As these referring expressions can be interpreted differently …

CAESAR: An embodied simulator for generating multimodal referring expression datasets

MM Islam, R Mirzaiee, A Gladstone… - Advances in Neural …, 2022 - proceedings.neurips.cc
Humans naturally use verbal utterances and nonverbal gestures to refer to various objects
(known as referring expressions) in different interactional scenarios. As collecting …

IMPRINT: Interactional dynamics-aware motion prediction in teams using multimodal context

MS Yasar, MM Islam, T Iqbal - ACM Transactions on Human-Robot …, 2024 - dl.acm.org
Robots are moving from working in isolation to working with humans as a part of human-
robot teams. In such situations, they are expected to work with multiple humans and need to …

SMTDKD: A Semantic-Aware Multimodal Transformer Fusion Decoupled Knowledge Distillation Method for Action Recognition

Z Quan, Q Chen, W Wang, M Zhang, X Li… - IEEE Sensors …, 2023 - ieeexplore.ieee.org
Multimodal sensors, including vision sensors and wearable sensors, offer valuable
complementary information for accurate recognition tasks. Nonetheless, the heterogeneity …

A State-of-the-Art Review of Computational Models for Analyzing Longitudinal Wearable Sensor Data in Healthcare

P Lago - arXiv preprint arXiv:2407.21665, 2024 - arxiv.org
Wearable devices are increasingly used as tools for biomedical research, as the continuous
stream of behavioral and physiological data they collect can provide insights about our …

ARCTIC: A knowledge distillation approach via attention-based relation matching and activation region constraint for RGB-to-Infrared videos action recognition

Z Quan, Q Chen, Y Li, Z Liu, Y Cui - Computer Vision and Image …, 2023 - Elsevier
The performance of existing infrared-based action recognition degrades substantially when
clear appearance and texture cues are required. To address this limitation, the amalgamation of …

MMBind: Unleashing the Potential of Distributed and Heterogeneous Data for Multimodal Learning in IoT

X Ouyang, J Wu, T Kimura, Y Lin, G Verma… - arXiv preprint arXiv …, 2024 - arxiv.org
Multimodal sensing systems are increasingly prevalent in various real-world applications.
Most existing multimodal learning approaches heavily rely on training with a large amount of …