Learning robust, real-time, reactive robotic grasping

D Morrison, P Corke, J Leitner - The International Journal of Robotics Research, 2020 - journals.sagepub.com

We present a novel approach to performing object-independent grasp synthesis from depth images via deep neural networks. Our generative grasping convolutional neural network (GG-CNN) predicts a pixel-wise grasp quality that can be deployed in closed-loop grasping scenarios. GG-CNN overcomes shortcomings in existing techniques, namely discrete sampling of grasp candidates and long computation times. The network is orders of magnitude smaller than other state-of-the-art approaches while achieving better performance, particularly in clutter. We run a suite of real-world tests, during which we achieve an 84% grasp success rate on a set of previously unseen objects with adversarial geometry and 94% on household items. The network's lightweight nature enables closed-loop control at up to 50 Hz, with which we observed 88% grasp success on a set of household objects that are moved during the grasp attempt. We further propose a method combining our GG-CNN with a multi-view approach, which improves the overall grasp success rate in clutter by 10%. Code is provided at https://github.com/dougsm/ggcnn
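
To make the pipeline concrete, the sketch below shows a minimal GG-CNN-style fully convolutional network in PyTorch: a depth image goes in, and pixel-wise maps of grasp quality, grasp angle (encoded as cos 2θ and sin 2θ to handle the angle's π-symmetry), and gripper width come out; the best grasp is simply the argmax of the quality map. The class and function names (GraspNetSketch, best_grasp) and the layer sizes are illustrative assumptions, not the authors' architecture; the exact implementation is in the linked repository.

```python
# Minimal sketch of a GG-CNN-style pixel-wise grasp predictor.
# Layer sizes are illustrative, not the authors' exact architecture.
import torch
import torch.nn as nn

class GraspNetSketch(nn.Module):
    """Maps a 1-channel depth image to four pixel-wise grasp maps:
    quality, cos(2*theta), sin(2*theta), and gripper width."""
    def __init__(self):
        super().__init__()
        # Compact encoder-decoder; the small parameter count is what
        # makes real-time, closed-loop inference feasible.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 9, stride=3, padding=4), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 5, stride=2, padding=2,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 16, 9, stride=3, padding=4,
                               output_padding=2), nn.ReLU(),
        )
        # One 1x1 head per output map, at full input resolution.
        self.quality = nn.Conv2d(16, 1, 1)
        self.cos2t = nn.Conv2d(16, 1, 1)
        self.sin2t = nn.Conv2d(16, 1, 1)
        self.width = nn.Conv2d(16, 1, 1)

    def forward(self, depth):
        x = self.decoder(self.encoder(depth))
        return self.quality(x), self.cos2t(x), self.sin2t(x), self.width(x)

def best_grasp(q, cos2t, sin2t, w):
    """Select the pixel with the highest predicted grasp quality and
    recover its grasp angle and gripper width."""
    q = q.squeeze()
    idx = torch.argmax(q)                      # flattened argmax over H*W
    v, u = divmod(idx.item(), q.shape[-1])     # back to (row, col)
    # Decode the angle from its (cos, sin) encoding.
    angle = 0.5 * torch.atan2(sin2t.squeeze()[v, u], cos2t.squeeze()[v, u])
    return (v, u), angle.item(), w.squeeze()[v, u].item()

# Usage with a placeholder 300x300 depth image:
net = GraspNetSketch().eval()
depth = torch.randn(1, 1, 300, 300)
with torch.no_grad():
    (v, u), angle, width = best_grasp(*net(depth))
```

Because every forward pass yields a dense grasp map rather than scores for sampled candidates, re-running this selection on each new depth frame is what enables the closed-loop, reactive behavior the abstract describes.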