ReLoo: Reconstructing humans dressed in loose garments from monocular video in the wild

C Guo, T Jiang, M Kaufmann, C Zheng… - … on Computer Vision, 2025 - Springer
While previous years have seen great progress in the 3D reconstruction of humans from
monocular videos, few of the state-of-the-art methods are able to handle loose garments that …

MeshAvatar: Learning high-quality triangular human avatars from multi-view videos

Y Chen, Z Zheng, Z Li, C Xu, Y Liu - arXiv preprint arXiv:2407.08414, 2024 - arxiv.org
We present a novel pipeline for learning high-quality triangular human avatars from multi-
view videos. Recent methods for avatar learning are typically based on neural radiance …

IntrinsicAvatar: Physically Based Inverse Rendering of Dynamic Humans from Monocular Videos via Explicit Ray Tracing

S Wang, B Antic, A Geiger… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
We present IntrinsicAvatar, a novel approach to recovering the intrinsic properties of clothed
human avatars, including geometry, albedo, material, and environment lighting, from only …

HSR: Holistic 3D Human-Scene Reconstruction from Monocular Videos

L Xue, C Guo, C Zheng, F Wang, T Jiang, HI Ho… - … on Computer Vision, 2024 - Springer
An overarching goal for computer-aided perception systems is the holistic understanding of
the human-centric 3D world, including faithful reconstructions of humans, scenes, and their …

Surfel-based gaussian inverse rendering for fast and relightable dynamic human reconstruction from monocular video

Y Zhao, C Wu, B Huang, Y Zhi, C Zhao, J Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Efficient and accurate reconstruction of a relightable, dynamic clothed human avatar from a
monocular video is crucial for the entertainment industry. This paper introduces the Surfel …

InstantGeoAvatar: Effective Geometry and Appearance Modeling of Animatable Avatars from Monocular Video

A Budria, A Lopez-Rodriguez… - Proceedings of the …, 2024 - openaccess.thecvf.com
We present InstantGeoAvatar, a method for efficient and effective learning from monocular
video of detailed 3D geometry and appearance of animatable implicit human avatars. Our …

Interactive Rendering of Relightable and Animatable Gaussian Avatars

Y Zhan, T Shao, H Wang, Y Yang, K Zhou - arXiv preprint arXiv …, 2024 - arxiv.org
Creating relightable and animatable avatars from multi-view or monocular videos is a
challenging task for digital human creation and virtual reality applications. Previous methods …

PRTGaussian: Efficient Relighting Using 3D Gaussians with Precomputed Radiance Transfer

L Zhang, Y Han, W Lin, J Ling, F Xu - arXiv preprint arXiv:2408.05631, 2024 - arxiv.org
We present PRTGaussian, a real-time relightable novel-view synthesis method made
possible by combining 3D Gaussians and Precomputed Radiance Transfer (PRT). By fitting …

TAGA: Self-supervised Learning for Template-free Animatable Gaussian Avatars

Z Zhai, G Chen, W Wang, D Zheng, J Xiao - openreview.net
Decoupling from customized parametric templates marks an integral leap towards creating
fully flexible, animatable avatars. In this work, we introduce TAGA (Template-free Animatable …
