BEHAVIOR-1K: A benchmark for embodied AI with 1,000 everyday activities and realistic simulation

C Li, R Zhang, J Wong, C Gokmen… - … on Robot Learning, 2023 - proceedings.mlr.press
We present BEHAVIOR-1K, a comprehensive simulation benchmark for human-centered
robotics. BEHAVIOR-1K includes two components, guided and motivated by the results of an …

CERBERUS: Autonomous legged and aerial robotic exploration in the Tunnel and Urban Circuits of the DARPA Subterranean Challenge

M Tranzatto, F Mascarich, L Bernreiter… - arXiv preprint arXiv …, 2022 - academia.edu
Autonomous exploration of subterranean environments constitutes a major frontier for
robotic systems as underground settings present key challenges that can render robot …

Robust object recognition through symbiotic deep learning in mobile robots

J Cartucho, R Ventura, M Veloso - 2018 IEEE/RSJ international …, 2018 - ieeexplore.ieee.org
Despite the recent success of state-of-the-art deep learning algorithms in object recognition,
when these are deployed as-is on a mobile service robot, we observed that they failed to …

A novel framework to improve motion planning of robotic systems through semantic knowledge-based reasoning

R Bernardo, JMC Sousa, PJS Gonçalves - Computers & Industrial …, 2023 - Elsevier
The need to improve motion planning techniques for manipulator robots, and new effective
strategies to manipulate different objects to perform more complex tasks, is crucial for …

Semantic SLAM with autonomous object-level data association

Z Qian, K Patath, J Fu, J Xiao - 2021 IEEE International …, 2021 - ieeexplore.ieee.org
It is often desirable to capture and map semantic information of an environment during
simultaneous localization and mapping (SLAM). Such semantic information can enable a …

GazeEMD: Detecting visual intention in gaze-based human–robot interaction

L Shi, C Copot, S Vanlanduit - Robotics, 2021 - mdpi.com
In gaze-based Human-Robot Interaction (HRI), it is important to determine human visual
intention for interacting with robots. One typical HRI interaction scenario is that a human …

System for augmented human–robot interaction through mixed reality and robot training by non-experts in customer service environments

L El Hafi, S Isobe, Y Tabuchi, Y Katsumata… - Advanced …, 2020 - Taylor & Francis
Human–robot interaction during general service tasks in home or retail environment has
been proven challenging, partly because (1) robots lack high-level context-based cognition …

Automatic radar-camera dataset generation for sensor-fusion applications

A Sengupta, A Yoshizawa, S Cao - IEEE Robotics and …, 2022 - ieeexplore.ieee.org
With heterogeneous sensors offering complementary advantages in perception, there has
been a significant growth in sensor-fusion based research and development in object …

Diver tracking in open waters: A low-cost approach based on visual and acoustic sensor fusion

W Remmas, A Chemori… - Journal of Field Robotics, 2021 - Wiley Online Library
The design of a robust perception method is a substantial component towards achieving
underwater human–robot collaboration. However, in complex environments such as the …

3-D object tracking in panoramic video and LiDAR for radiological source–object attribution and improved source detection

MR Marshall, D Hellfeld, THY Joshi… - … on Nuclear Science, 2020 - ieeexplore.ieee.org
Networked detector systems can be deployed in urban environments to aid in the detection
and localization of radiological and/or nuclear material. However, effectively responding to …