ShapeLLM: Universal 3D Object Understanding for Embodied Interaction

Z Qi, R Dong, S Zhang, H Geng, C Han, Z Ge… - arXiv preprint arXiv …, 2024 - arxiv.org
This paper presents ShapeLLM, the first 3D Multimodal Large Language Model (LLM)
designed for embodied interaction, exploring a universal 3D object understanding with 3D …

Precise-Physics Driven Text-to-3D Generation

Q Xu, J Liu, M Wong, C Chen, YS Ong - arXiv preprint arXiv:2403.12438, 2024 - arxiv.org
Text-to-3D generation has shown great promise in generating novel 3D content based on
given text prompts. However, existing generative methods mostly focus on geometric or …

Atlas3D: Physically Constrained Self-Supporting Text-to-3D for Simulation and Fabrication

Y Chen, T Xie, Z Zong, X Li, F Gao, Y Yang… - arXiv preprint arXiv …, 2024 - arxiv.org
Existing diffusion-based text-to-3D generation methods primarily focus on producing visually
realistic shapes and appearances, often neglecting the physical constraints necessary for …