FusionRCNN: LiDAR-Camera Fusion for Two-Stage 3D Object Detection

X. Xu, S. Dong, T. Xu, L. Ding, J. Wang, P. Jiang, L. Song, J. Li - Remote Sensing, 2023 - mdpi.com
Accurate and reliable perception systems are essential for autonomous driving and robotics. To achieve this, 3D object detection with multiple sensors is necessary. Existing 3D detectors have significantly improved accuracy by adopting a two-stage paradigm that relies solely on LiDAR point clouds for 3D proposal refinement. However, the sparsity of point clouds, particularly for faraway points, makes it difficult for a LiDAR-only refinement module to recognize and locate objects accurately. To address this issue, we propose a novel multi-modality two-stage approach called FusionRCNN, which effectively and efficiently fuses point clouds and camera images within Regions of Interest (RoIs). FusionRCNN adaptively integrates both sparse geometry information from LiDAR and dense texture information from the camera in a unified attention mechanism. Specifically, in the RoI extraction step, FusionRCNN first applies RoIPooling to obtain an image set of unified size and obtains the point set by sampling raw points within each proposal. It then applies intra-modality self-attention to enhance the domain-specific features, followed by a well-designed cross-attention that fuses the information from the two modalities. FusionRCNN is fundamentally plug-and-play and supports different one-stage methods with almost no architectural changes. Extensive experiments on the KITTI and Waymo benchmarks demonstrate that our method significantly boosts the performance of popular detectors. Remarkably, FusionRCNN improves the strong SECOND baseline by 6.14% mAP on Waymo and outperforms competing two-stage approaches.
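The abstract describes two core steps in the refinement stage: per-modality RoI feature extraction, then attention-based fusion. Below is a minimal PyTorch sketch of one such fusion layer written from that description only; the module name, layer sizes, normalization placement, and head counts (RoIFusionBlock, d_model=256, n_heads=4, etc.) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class RoIFusionBlock(nn.Module):
    """One fusion layer as sketched from the abstract: intra-modality
    self-attention on each stream, then cross-attention in which LiDAR
    point tokens (queries) gather texture from RoI-pooled image tokens
    (keys/values)."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # Intra-modality self-attention enhances domain-specific features.
        self.point_self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.image_self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Cross-attention fuses the two modalities.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_p = nn.LayerNorm(d_model)
        self.norm_i = nn.LayerNorm(d_model)
        self.norm_x = nn.LayerNorm(d_model)
        self.norm_f = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.ReLU(inplace=True),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, point_tokens: torch.Tensor, image_tokens: torch.Tensor):
        # point_tokens: (K, N, C)   N raw points sampled inside each of K proposals
        # image_tokens: (K, H*W, C) RoI-pooled image crop of unified size, flattened
        p = self.norm_p(point_tokens + self.point_self_attn(
            point_tokens, point_tokens, point_tokens)[0])
        i = self.norm_i(image_tokens + self.image_self_attn(
            image_tokens, image_tokens, image_tokens)[0])
        # Sparse geometry (queries) attends to dense texture (keys/values).
        fused = self.norm_x(p + self.cross_attn(p, i, i)[0])
        fused = self.norm_f(fused + self.ffn(fused))
        return fused, i


# Toy shapes only: 8 proposals, 128 sampled points, a 7x7 pooled image grid.
block = RoIFusionBlock(d_model=256, n_heads=4)
points = torch.randn(8, 128, 256)
images = torch.randn(8, 7 * 7, 256)
fused, _ = block(points, images)
print(fused.shape)  # torch.Size([8, 128, 256])
```

In this sketch the sampled point tokens act as queries that pull texture from the RoI-pooled image tokens, matching the abstract's framing of sparse LiDAR geometry being enriched by dense camera features. Stacking a few such blocks and pooling the fused point tokens would yield a per-proposal feature for confidence scoring and box refinement; because the block consumes only first-stage proposals, it is consistent with the plug-and-play claim with respect to the underlying one-stage detector.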