Visuotactile 6D pose estimation of an in-hand object using vision and tactile sensor data

S Dikhale, K Patel, D Dhingra… - IEEE Robotics and Automation Letters, 2022 - ieeexplore.ieee.org
Knowledge of the 6D pose of an object can benefit in-hand object manipulation. Existing 6D pose estimation methods use vision data. In-hand 6D object pose estimation is challenging because of heavy occlusion produced by the robot's grippers, which can have an adverse effect on methods that rely on vision data only. Many robots are equipped with tactile sensors at their fingertips that could be used to complement vision data. In this letter, we present a method that uses both tactile and vision data to estimate the pose of an object grasped in a robot's hand. The main challenges of this research include 1) the lack of a standard representation for tactile sensor data, 2) the fusion of sensor data from heterogeneous sources—vision and tactile, and 3) the need for large training datasets. To address these challenges, first, we propose the use of point clouds to represent object surfaces that are in contact with the tactile sensor. Second, we present a network architecture based on pixel-wise dense fusion to fuse vision and tactile data to estimate the 6D pose of an object. Third, we extend NVIDIA's Deep Learning Dataset Synthesizer to produce synthetic photo-realistic vision data and the corresponding tactile point clouds for 11 objects from the YCB Object and Model Set in Unreal Engine 4. We present results of simulated experiments suggesting that using tactile data in addition to vision data improves the 6D pose estimate of an in-hand object. We also present qualitative results of experiments in which we deploy our network on real physical robots, showing successful transfer of a network trained on synthetic data to a real system.
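The abstract describes fusing per-pixel vision features with tactile contact points represented as a point cloud, then regressing a 6D pose. The sketch below (PyTorch) is not the authors' implementation; it is a minimal illustration of that general idea under assumed layer sizes and names (VisuoTactileFusionSketch, vision_net, tactile_net are all hypothetical), with a quaternion-plus-translation output standing in for the 6D pose.

```python
# Illustrative sketch only: image features fused per tactile point, pooled,
# then regressed to a 6D pose (unit quaternion + translation).
import torch
import torch.nn as nn


class VisuoTactileFusionSketch(nn.Module):
    def __init__(self, img_feat_dim=32, tac_feat_dim=32, fused_dim=128):
        super().__init__()
        # Vision branch: small CNN over the RGB view of the grasped object.
        self.vision_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, img_feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global image descriptor
        )
        # Tactile branch: PointNet-style shared MLP over (x, y, z) contact points.
        self.tactile_net = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, tac_feat_dim, 1), nn.ReLU(),
        )
        # Fusion: concatenate the image descriptor with each tactile point feature
        # ("dense fusion" in spirit), then pool and regress the pose.
        self.fusion = nn.Sequential(
            nn.Conv1d(img_feat_dim + tac_feat_dim, fused_dim, 1), nn.ReLU(),
        )
        self.pose_head = nn.Linear(fused_dim, 7)  # quaternion (4) + translation (3)

    def forward(self, rgb, tactile_points):
        # rgb:            (B, 3, H, W) image of the in-hand object
        # tactile_points: (B, 3, N) point cloud of tactile contact surfaces
        img_feat = self.vision_net(rgb).flatten(1)               # (B, img_feat_dim)
        tac_feat = self.tactile_net(tactile_points)              # (B, tac_feat_dim, N)
        n_points = tac_feat.shape[-1]
        img_feat = img_feat.unsqueeze(-1).expand(-1, -1, n_points)
        fused = self.fusion(torch.cat([img_feat, tac_feat], dim=1))
        global_feat = fused.max(dim=-1).values                   # symmetric pooling
        pose = self.pose_head(global_feat)                       # (B, 7)
        quat = nn.functional.normalize(pose[:, :4], dim=1)       # unit quaternion
        trans = pose[:, 4:]
        return quat, trans


if __name__ == "__main__":
    model = VisuoTactileFusionSketch()
    quat, trans = model(torch.randn(2, 3, 128, 128), torch.randn(2, 3, 256))
    print(quat.shape, trans.shape)  # torch.Size([2, 4]) torch.Size([2, 3])
```

The key design point the abstract emphasizes is that the tactile modality enters as a point cloud, so a permutation-invariant encoder and per-point fusion let the vision features compensate for heavy gripper occlusion.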