Visual affordance and function understanding: A survey

M Hassanin, S Khan, M Tahtali - ACM Computing Surveys (CSUR), 2021 - dl.acm.org
Nowadays, robots are dominating the manufacturing, entertainment, and healthcare
industries. Robot vision aims to equip robots with the capabilities to discover information …

Computational models of affordance in robotics: a taxonomy and systematic classification

P Zech, S Haller, SR Lakani, B Ridge… - Adaptive …, 2017 - journals.sagepub.com
JJ Gibson's concept of affordance, one of the central pillars of ecological psychology, is a
truly remarkable idea that provides a concise theory of animal perception predicated on …

AffordanceNet: An end-to-end deep learning approach for object affordance detection

TT Do, A Nguyen, I Reid - 2018 IEEE International Conference …, 2018 - ieeexplore.ieee.org
We propose AffordanceNet, a new deep learning approach to simultaneously detect multiple
objects and their affordances from RGB images. Our AffordanceNet has two branches: an …

50 years of object recognition: Directions forward

A Andreopoulos, JK Tsotsos - Computer Vision and Image Understanding, 2013 - Elsevier
Object recognition systems constitute a deeply entrenched and omnipresent component of
modern intelligent systems. Research on object recognition algorithms has led to advances …

Grounded human-object interaction hotspots from video

T Nagarajan, C Feichtenhofer… - Proceedings of the …, 2019 - openaccess.thecvf.com
Learning how to interact with objects is an important step towards embodied visual
intelligence, but existing techniques suffer from heavy supervision or sensing requirements …

Multi-label affordance mapping from egocentric vision

L Mur-Labadia, JJ Guerrero… - Proceedings of the …, 2023 - openaccess.thecvf.com
Accurate affordance detection and segmentation with pixel precision is an important piece in
many complex systems based on interactions, such as robots and assistive devices. We …

A multi-scale CNN for affordance segmentation in RGB images

A Roy, S Todorovic - Computer Vision–ECCV 2016: 14th European …, 2016 - Springer
Given a single RGB image, our goal is to label every pixel with an affordance type. By
affordance, we mean an object's capability to readily support a certain human action, without …

Depth-based hand pose estimation: data, methods, and challenges

JS Supancic, G Rogez, Y Yang… - Proceedings of the …, 2015 - openaccess.thecvf.com
Hand pose estimation has matured rapidly in recent years. The introduction of commodity
depth sensors and a multitude of practical applications have spurred new advances. We …

Affordance research in developmental robotics: A survey

H Min, R Luo, J Zhu, S Bi - IEEE Transactions on Cognitive …, 2016 - ieeexplore.ieee.org
Affordances capture the relationships between a robot and the environment in terms of the
actions that the robot is able to perform. The notable characteristic of affordance-based …

Affordance learning from play for sample-efficient policy learning

J Borja-Diaz, O Mees, G Kalweit… - … on Robotics and …, 2022 - ieeexplore.ieee.org
Robots operating in human-centered environments should have the ability to understand
how objects function: what can be done with each object, where this interaction may occur …