Towards Compact 3D Representations via Point Feature Enhancement Masked Autoencoders

Y Zha, H Ji, J Li, R Li, T Dai, B Chen, Z Wang… - Proceedings of the AAAI …, 2024 - ojs.aaai.org
Learning 3D representations plays a critical role in masked autoencoder (MAE) based pre-
training methods for point clouds, including single-modal and cross-modal MAE …

PCP-MAE: Learning to Predict Centers for Point Masked Autoencoders

X Zhang, S Zhang, J Yan - arXiv preprint arXiv:2408.08753, 2024 - arxiv.org
The masked autoencoder has been widely explored in point cloud self-supervised learning,
whereby the point cloud is generally divided into visible and masked parts. These methods …

Pre-training Point Cloud Compact Model with Partial-aware Reconstruction

Y Zha, Y Wang, T Dai, ST Xia - arXiv preprint arXiv:2407.09344, 2024 - arxiv.org
Pre-trained point cloud models based on Masked Point Modeling (MPM) have exhibited
substantial improvements across various tasks. However, two drawbacks hinder their …

LCM: Locally Constrained Compact Point Cloud Model for Masked Point Modeling

Y Zha, N Li, Y Wang, T Dai, H Guo, B Chen… - arXiv preprint arXiv …, 2024 - arxiv.org
Pre-trained point cloud models based on Masked Point Modeling (MPM) have exhibited
substantial improvements across various tasks. However, these models heavily rely on the …

LR-MAE: Locate while Reconstructing with Masked Autoencoders for Point Cloud Self-supervised Learning

H Ji, Y Zha, Q Liao - 2024 IEEE International Conference on …, 2024 - ieeexplore.ieee.org
As an efficient self-supervised pre-training approach, the masked autoencoder (MAE) has
shown promising improvements across various 3D point cloud understanding tasks …