Toward storytelling from visual lifelogging: An overview

M Bolanos, M Dimiccoli… - IEEE Transactions on …, 2016 - ieeexplore.ieee.org
Visual lifelogging consists of acquiring images that capture a user's daily experiences by wearing a camera over a long period of time. The pictures taken offer considerable …

Recognition of activities of daily living with egocentric vision: A review

THC Nguyen, JC Nebel, F Florez-Revuelta - Sensors, 2016 - mdpi.com
Video-based recognition of activities of daily living (ADLs) is being used in ambient assisted
living systems in order to support the independent living of older people. However, current …

Egobody: Human body shape and motion of interacting people from head-mounted devices

S Zhang, Q Ma, Y Zhang, Z Qian, T Kwon… - European conference on …, 2022 - Springer
Understanding social interactions from egocentric views is crucial for many applications,
ranging from assistive robotics to AR/VR. Key to reasoning about interactions is to …

An outlook into the future of egocentric vision

C Plizzari, G Goletto, A Furnari, S Bansal… - International Journal of …, 2024 - Springer
What will the future be? We wonder! In this survey, we explore the gap between current
research in egocentric vision and the ever-anticipated future, where wearable computing …

Privacy-preserving human activity recognition from extreme low resolution

M Ryoo, B Rothrock, C Fleming, HJ Yang - Proceedings of the AAAI …, 2017 - ojs.aaai.org
Privacy protection from surreptitious video recordings is an important societal challenge. We desire a computer vision system (e.g., a robot) that can recognize human activities and assist …

The evolution of first person vision methods: A survey

A Betancourt, P Morerio, CS Regazzoni… - … on Circuits and …, 2015 - ieeexplore.ieee.org
The emergence of new wearable technologies, such as action cameras and smart glasses,
has increased the interest of computer vision scientists in the first person perspective …

Ego-Humans: An Ego-Centric 3D Multi-Human Benchmark

R Khirodkar, A Bansal, L Ma… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present EgoHumans, a new multi-view multi-human video benchmark to advance the
state-of-the-art of egocentric human 3D pose estimation and tracking. Existing egocentric …

Egocentric vision-based action recognition: A survey

A Núñez-Marcos, G Azkune, I Arganda-Carreras - Neurocomputing, 2022 - Elsevier
The egocentric action recognition (EAR) field has recently increased in popularity due to the affordable and lightweight wearable cameras available nowadays, such as GoPro and …

GPT4Ego: unleashing the potential of pre-trained models for zero-shot egocentric action recognition

G Dai, X Shu, W Wu, R Yan… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Vision-Language Models (VLMs), pre-trained on large-scale datasets, have shown
impressive performance in various visual recognition tasks. This advancement paves the …

A survey of human action analysis in HRI applications

Y Ji, Y Yang, F Shen, HT Shen… - IEEE Transactions on …, 2019 - ieeexplore.ieee.org
Human action is an important information source for human social interaction, and it simultaneously plays a crucial role in human-robot interaction (HRI). For a natural and fluent …