Generalizable one-shot 3D neural head avatar

X Li, S De Mello, S Liu, K Nagano… - Advances in Neural …, 2024 - proceedings.neurips.cc
We present a method that reconstructs and animates a 3D head avatar from a single-view
portrait image. Existing methods either involve time-consuming optimization for a specific …

BakedAvatar: Baking neural fields for real-time head avatar synthesis

HB Duan, M Wang, JC Shi, XC Chen… - ACM Transactions on …, 2023 - dl.acm.org
Synthesizing photorealistic 4D human head avatars from videos is essential for VR/AR,
telepresence, and video game applications. Although existing Neural Radiance Fields …

Recent advances in implicit representation-based 3D shape generation

JM Sun, T Wu, L Gao - Visual Intelligence, 2024 - Springer
Various techniques have been developed and introduced to address the pressing need to
create three-dimensional (3D) content for advanced applications such as virtual reality and …

TriPlaneNet: An encoder for EG3D inversion

AR Bhattarai, M Nießner… - Proceedings of the …, 2024 - openaccess.thecvf.com
Recent progress in NeRF-based GANs has introduced a number of approaches for high-
resolution and high-fidelity generative modeling of human heads with a possibility for novel …

Portrait4D: Learning One-Shot 4D Head Avatar Synthesis using Synthetic Data

Y Deng, D Wang, X Ren, X Chen… - Proceedings of the …, 2024 - openaccess.thecvf.com
Existing one-shot 4D head synthesis methods usually learn from monocular videos with the
aid of 3DMM reconstruction, yet the latter is equally challenging, which restricts them from …

What You See is What You GAN: Rendering Every Pixel for High-Fidelity Geometry in 3D GANs

A Trevithick, M Chan, T Takikawa… - Proceedings of the …, 2024 - openaccess.thecvf.com
3D-aware Generative Adversarial Networks (GANs) have shown remarkable
progress in learning to generate multi-view-consistent images and 3D geometries of scenes …

VOODOO 3D: Volumetric Portrait Disentanglement for One-Shot 3D Head Reenactment

P Tran, E Zakharov, LN Ho, AT Tran… - Proceedings of the …, 2024 - openaccess.thecvf.com
We present a 3D-aware one-shot head reenactment method based on a fully volumetric
neural disentanglement framework for source appearance and driver expressions. Our …

DiffPortrait3D: Controllable Diffusion for Zero-Shot Portrait View Synthesis

Y Gu, H Xu, Y Xie, G Song, Y Shi… - Proceedings of the …, 2024 - openaccess.thecvf.com
We present DiffPortrait3D, a conditional diffusion model capable of synthesizing 3D-
consistent photo-realistic novel views from as few as a single in-the-wild portrait. Specifically …

Neural Directional Encoding for Efficient and Accurate View-Dependent Appearance Modeling

L Wu, S Bi, Z Xu, F Luan, K Zhang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Novel-view synthesis of specular objects like shiny metals or glossy paints remains a
significant challenge. Not only the glossy appearance but also global illumination effects …

Morphable Diffusion: 3D-Consistent Diffusion for Single-image Avatar Creation

X Chen, M Mihajlovic, S Wang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Recent advances in generative diffusion models have enabled the previously infeasible
capability of generating 3D assets from a single input image or a text prompt. In this work we …