SDFusion: Multimodal 3D shape completion, reconstruction, and generation

YC Cheng, HY Lee, S Tulyakov… - Proceedings of the …, 2023 - openaccess.thecvf.com
In this work, we present a novel framework built to simplify 3D asset generation for amateur
users. To enable interactive generation, our method supports a variety of input modalities …

Locally attentional SDF diffusion for controllable 3D shape generation

XY Zheng, H Pan, PS Wang, X Tong, Y Liu… - ACM Transactions on …, 2023 - dl.acm.org
Although the recent rapid evolution of 3D generative neural networks greatly improves 3D
shape generation, it is still not convenient for ordinary users to create 3D shapes and control …

Multi-modal machine learning in engineering design: A review and future directions

B Song, R Zhou, F Ahmed - … of Computing and …, 2024 - asmedigitalcollection.asme.org
In the rapidly advancing field of multi-modal machine learning (MMML), the convergence of
multiple data modalities has the potential to reshape various applications. This paper …

Let 2D diffusion model know 3D-consistency for robust text-to-3D generation

J Seo, W Jang, MS Kwak, H Kim, J Ko, J Kim… - arXiv preprint arXiv …, 2023 - arxiv.org
Text-to-3D generation has shown rapid progress in recent years with the advent of score
distillation, a methodology that uses pretrained text-to-2D diffusion models to optimize neural …
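
For context, a commonly cited formulation of score distillation is the score distillation sampling (SDS) gradient from DreamFusion (Poole et al., 2022); this is a general sketch, not necessarily the exact objective used in the paper above. It optimizes a 3D representation with parameters \theta by backpropagating the frozen 2D diffusion model's denoising residual through a rendered view x = g(\theta):

\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta) \;=\; \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\big)\, \frac{\partial x}{\partial \theta} \,\right]

where x_t is the rendered image noised to timestep t, y is the text prompt, \hat{\epsilon}_\phi is the pretrained text-to-2D diffusion model's noise prediction, and w(t) is a timestep-dependent weighting.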

Text2Tex: Text-driven texture synthesis via diffusion models

DZ Chen, Y Siddiqui, HY Lee… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present Text2Tex, a novel method for generating high-quality textures for 3D
meshes from the given text prompts. Our method incorporates inpainting into a pre-trained …

SceneTex: High-quality texture synthesis for indoor scenes via diffusion priors

DZ Chen, H Li, HY Lee, S Tulyakov… - Proceedings of the …, 2024 - openaccess.thecvf.com
We propose SceneTex, a novel method for effectively generating high-quality and style-
consistent textures for indoor scenes using depth-to-image diffusion priors. Unlike previous …

InfiniCity: Infinite-scale city synthesis

CH Lin, HY Lee, W Menapace, M Chai… - Proceedings of the …, 2023 - openaccess.thecvf.com
Toward infinite-scale 3D city synthesis, we propose a novel framework, InfiniCity, which
constructs and renders an unconstrainedly large and 3D-grounded environment from …

3D VR sketch guided 3D shape prototyping and exploration

L Luo, PN Chowdhury, T Xiang… - Proceedings of the …, 2023 - openaccess.thecvf.com
3D shape modeling is labor-intensive, time-consuming, and requires years of
expertise. To facilitate 3D shape modeling, we propose a 3D shape generation network that …

Neural wavelet-domain diffusion for 3D shape generation, inversion, and manipulation

J Hu, KH Hui, Z Liu, R Li, CW Fu - ACM Transactions on Graphics, 2024 - dl.acm.org
This paper presents a new approach for 3D shape generation, inversion, and manipulation,
through a direct generative modeling on a continuous implicit representation in wavelet …

BlendFields: Few-shot example-driven facial modeling

K Kania, SJ Garbin, A Tagliasacchi… - Proceedings of the …, 2023 - openaccess.thecvf.com
Generating faithful visualizations of human faces requires capturing both coarse and fine-
level details of the face geometry and appearance. Existing methods are either data-driven …