Survey on robotic systems for internal logistics

R Bernardo, JMC Sousa, PJS Gonçalves - Journal of manufacturing …, 2022 - Elsevier
The evolution of production systems has established major challenges in internal logistics.
In order to overcome these challenges, new automation solutions have been developed and …

A review and comparison of ontology-based approaches to robot autonomy

A Olivares-Alarcos, D Beßler, A Khamis… - The Knowledge …, 2019 - cambridge.org
Within the next decades, robots will need to be able to execute a large variety of tasks
autonomously in a large variety of environments. To relax the resulting programming effort, a …

DeepMPC: Learning deep latent features for model predictive control

I Lenz, RA Knepper, A Saxena - Robotics: Science and …, 2015 - roboticsproceedings.org
Designing controllers for tasks with complex nonlinear dynamics is extremely challenging,
time-consuming, and in many cases, infeasible. This difficulty is exacerbated in tasks such …

KnowRob 2.0—A 2nd generation knowledge processing framework for cognition-enabled robotic agents

M Beetz, D Beßler, A Haidu, M Pomarlan… - … on Robotics and …, 2018 - ieeexplore.ieee.org
In this paper we present KnowRob2, a second generation knowledge representation and
reasoning framework for robotic agents. KnowRob2 is an extension and partial redesign of …

Tell Me Dave: Context-sensitive grounding of natural language to manipulation instructions

DK Misra, J Sung, K Lee… - The International Journal …, 2016 - journals.sagepub.com
It is important for a robot to be able to interpret natural language commands given by a
human. In this paper, we consider performing a sequence of mobile manipulation tasks with …

A survey of knowledge representation in service robotics

D Paulius, Y Sun - Robotics and Autonomous Systems, 2019 - Elsevier
Within the realm of service robotics, researchers have placed a great amount of effort into
learning, understanding, and representing motions as manipulations for task execution by …

Vision-based navigation with language-based assistance via imitation learning with indirect intervention

K Nguyen, D Dey, C Brockett… - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
Abstract We present Vision-based Navigation with Language-based Assistance (VNLA), a
grounded vision-language task where an agent with visual perception is guided via …

What's cookin'? Interpreting cooking videos using text, speech and vision

J Malmaud, J Huang, V Rathod, N Johnston… - arXiv preprint arXiv …, 2015 - arxiv.org
We present a novel method for aligning a sequence of instructions to a video of someone
carrying out a task. In particular, we focus on the cooking domain, where the instructions …

Watch-n-Patch: Unsupervised understanding of actions and relations

C Wu, J Zhang, S Savarese… - Proceedings of the IEEE …, 2015 - openaccess.thecvf.com
We focus on modeling human activities comprising multiple actions in a completely
unsupervised setting. Our model learns the high-level action co-occurrence and temporal …

Same object, different grasps: Data and semantic knowledge for task-oriented grasping

A Murali, W Liu, K Marino… - Conference on robot …, 2021 - proceedings.mlr.press
Despite the enormous progress and generalization in robotic grasping in recent years,
existing methods have yet to scale and generalize task-oriented grasping to the same …