Visual affordance and function understanding: A survey

M Hassanin, S Khan, M Tahtali - ACM Computing Surveys (CSUR), 2021 - dl.acm.org
Nowadays, robots are dominating the manufacturing, entertainment, and healthcare
industries. Robot vision aims to equip robots with the capabilities to discover information …

Affordances from human videos as a versatile representation for robotics

S Bahl, R Mendonca, L Chen… - Proceedings of the …, 2023 - openaccess.thecvf.com
Building a robot that can understand and learn to interact by watching humans has inspired
several vision problems. However, despite some successful results on static datasets, it …

AffordPose: A large-scale dataset of hand-object interactions with affordance-driven hand pose

J Jian, X Liu, M Li, R Hu, J Liu - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
How humans interact with objects depends on the functional roles of the target objects, which
introduces the problem of affordance-aware hand-object interaction. It requires a large …

Joint hand motion and interaction hotspots prediction from egocentric videos

S Liu, S Tripathi, S Majumdar… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
We propose to forecast future hand-object interactions given an egocentric video. Instead of
predicting action labels or pixels, we directly predict the hand motion trajectory and the …

AffordanceNet: An end-to-end deep learning approach for object affordance detection

TT Do, A Nguyen, I Reid - 2018 IEEE international conference …, 2018 - ieeexplore.ieee.org
We propose AffordanceNet, a new deep learning approach to simultaneously detect multiple
objects and their affordances from RGB images. Our AffordanceNet has two branches: an …

A survey of visual affordance recognition based on deep learning

D Chen, D Kong, J Li, S Wang… - IEEE Transactions on Big …, 2023 - ieeexplore.ieee.org
Visual affordance recognition is an important research topic in robotics, human-computer
interaction, and other computer vision tasks. In recent years, deep learning-based …

LOCATE: Localize and transfer object parts for weakly supervised affordance grounding

G Li, V Jampani, D Sun… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Humans excel at acquiring knowledge through observation. For example, we can learn to
use new tools by watching demonstrations. This skill is fundamental for intelligent systems to …

3D AffordanceNet: A benchmark for visual object affordance understanding

S Deng, X Xu, C Wu, K Chen… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
The ability to understand the ways to interact with objects from visual cues, aka visual
affordance, is essential to vision-guided robotic research. This involves categorizing …

Grounded human-object interaction hotspots from video

T Nagarajan, C Feichtenhofer… - Proceedings of the …, 2019 - openaccess.thecvf.com
Learning how to interact with objects is an important step towards embodied visual
intelligence, but existing techniques suffer from heavy supervision or sensing requirements …

Learning affordance grounding from exocentric images

H Luo, W Zhai, J Zhang, Y Cao… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Affordance grounding, the task of grounding (i.e., localizing) action-possibility regions in objects,
faces the challenge of establishing an explicit link with object parts due to the diversity …