SDFusion: Multimodal 3D shape completion, reconstruction, and generation
In this work, we present a novel framework built to simplify 3D asset generation for amateur
users. To enable interactive generation, our method supports a variety of input modalities …
Locally attentional SDF diffusion for controllable 3D shape generation
Although the recent rapid evolution of 3D generative neural networks greatly improves 3D
shape generation, it is still not convenient for ordinary users to create 3D shapes and control …
Multi-modal machine learning in engineering design: A review and future directions
In the rapidly advancing field of multi-modal machine learning (MMML), the convergence of
multiple data modalities has the potential to reshape various applications. This paper …
Let 2D diffusion model know 3D-consistency for robust text-to-3D generation
Text-to-3D generation has shown rapid progress recently with the advent of score
distillation, a methodology of using pretrained text-to-2D diffusion models to optimize neural …
Text2Tex: Text-driven texture synthesis via diffusion models
We present Text2Tex, a novel method for generating high-quality textures for 3D
meshes from the given text prompts. Our method incorporates inpainting into a pre-trained …
SceneTex: High-quality texture synthesis for indoor scenes via diffusion priors
We propose SceneTex, a novel method for effectively generating high-quality and style-consistent
textures for indoor scenes using depth-to-image diffusion priors. Unlike previous …
InfiniCity: Infinite-scale city synthesis
Toward infinite-scale 3D city synthesis, we propose a novel framework, InfiniCity, which
constructs and renders an unconstrainedly large and 3D-grounded environment from …
3D VR sketch guided 3D shape prototyping and exploration
3D shape modeling is labor-intensive, time-consuming, and requires years of
expertise. To facilitate 3D shape modeling, we propose a 3D shape generation network that …
Neural wavelet-domain diffusion for 3D shape generation, inversion, and manipulation
This paper presents a new approach for 3D shape generation, inversion, and manipulation,
through a direct generative modeling on a continuous implicit representation in wavelet …
BlendFields: Few-shot example-driven facial modeling
Generating faithful visualizations of human faces requires capturing both coarse and fine-
level details of the face geometry and appearance. Existing methods are either data-driven …