3D object detection for autonomous driving: A comprehensive survey

J Mao, S Shi, X Wang, H Li - International Journal of Computer Vision, 2023 - Springer
Autonomous driving, in recent years, has been receiving increasing attention for its potential
to relieve drivers' burdens and improve the safety of driving. In modern autonomous driving …

Deepfusion: Lidar-camera deep fusion for multi-modal 3d object detection

Y Li, AW Yu, T Meng, B Caine… - Proceedings of the …, 2022 - openaccess.thecvf.com
Lidars and cameras are critical sensors that provide complementary information for 3D
detection in autonomous driving. While prevalent multi-modal methods simply decorate raw …

Deepinteraction: 3d object detection via modality interaction

Z Yang, J Chen, Z Miao, W Li… - Advances in Neural …, 2022 - proceedings.neurips.cc
Existing top-performance 3D object detectors typically rely on the multi-modal fusion
strategy. This design is, however, fundamentally restricted due to overlooking the modality …

Focalformer3d: focusing on hard instance for 3d object detection

Y Chen, Z Yu, Y Chen, S Lan… - Proceedings of the …, 2023 - openaccess.thecvf.com
False negatives (FN) in 3D object detection, e.g., missing predictions of pedestrians, vehicles,
or other obstacles, can lead to potentially dangerous situations in autonomous driving. While …

Vision-centric bev perception: A survey

Y Ma, T Wang, X Bai, H Yang, Y Hou… - … on Pattern Analysis …, 2024 - ieeexplore.ieee.org
In recent years, vision-centric Bird's Eye View (BEV) perception has garnered significant
interest from both industry and academia due to its inherent advantages, such as providing …

Hoi4d: A 4d egocentric dataset for category-level human-object interaction

Y Liu, Y Liu, C Jiang, K Lyu, W Wan… - Proceedings of the …, 2022 - openaccess.thecvf.com
We present HOI4D, a large-scale 4D egocentric dataset with rich annotations, to catalyze the
research of category-level human-object interaction. HOI4D consists of 2.4M RGB-D …

Multi-modality 3D object detection in autonomous driving: A review

Y Tang, H He, Y Wang, Z Mao, H Wang - Neurocomputing, 2023 - Elsevier
Autonomous driving perception has made significant strides in recent years, but accurately
sensing the environment using a single sensor remains a daunting task. This review offers a …

Epnet++: Cascade bi-directional fusion for multi-modal 3d object detection

Z Liu, T Huang, B Li, X Chen, X Wang… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Recently, fusing the LiDAR point cloud and camera image to improve the performance and
robustness of 3D object detection has received more and more attention, as these two …

Beyond 3d siamese tracking: A motion-centric paradigm for 3d single object tracking in point clouds

C Zheng, X Yan, H Zhang, B Wang… - Proceedings of the …, 2022 - openaccess.thecvf.com
3D single object tracking (3D SOT) in LiDAR point clouds plays a crucial role in
autonomous driving. Current approaches all follow the Siamese paradigm based on …

Cramnet: Camera-radar fusion with ray-constrained cross-attention for robust 3d object detection

JJ Hwang, H Kretzschmar, J Manela, S Rafferty… - European conference on …, 2022 - Springer
Robust 3D object detection is critical for safe autonomous driving. Camera and radar
sensors are synergistic as they capture complementary information and work well under …