A complete survey on generative AI (AIGC): Is ChatGPT from GPT-4 to GPT-5 all you need?

C Zhang, C Zhang, S Zheng, Y Qiao, C Li… - arXiv preprint arXiv …, 2023 - arxiv.org
As ChatGPT goes viral, generative AI (AIGC, aka AI-generated content) has made headlines
everywhere because of its ability to analyze and create text, images, and beyond. With such …

State of the art on monocular 3D face reconstruction, tracking, and applications

M Zollhöfer, J Thies, P Garrido, D Bradley… - Computer graphics …, 2018 - Wiley Online Library
The computer graphics and vision communities have dedicated long-standing efforts to building
computerized tools for reconstructing, tracking, and analyzing human faces based …

MotionGPT: Human motion as a foreign language

B Jiang, X Chen, W Liu, J Yu, G Yu… - Advances in Neural …, 2023 - proceedings.neurips.cc
Though pre-trained large language models continue to advance, the exploration of
building a unified model for language and other multimodal data, such as motion, remains …

Executing your commands via motion diffusion in latent space

X Chen, B Jiang, W Liu, Z Huang… - Proceedings of the …, 2023 - openaccess.thecvf.com
We study a challenging task, conditional human motion generation, which produces
plausible human motion sequences according to various conditional inputs, such as action …

TEMOS: Generating Diverse Human Motions from Textual Descriptions

M Petrovich, MJ Black, G Varol - European Conference on Computer …, 2022 - Springer
We address the problem of generating diverse 3D human motions from textual descriptions.
This challenging task requires joint modeling of both modalities: understanding and …

Generating holistic 3D human motion from speech

H Yi, H Liang, Y Liu, Q Cao, Y Wen… - Proceedings of the …, 2023 - openaccess.thecvf.com
This work addresses the problem of generating 3D holistic body motions from human
speech. Given a speech recording, we synthesize sequences of 3D body poses, hand …

CodeTalker: Speech-driven 3D facial animation with discrete motion prior

J Xing, M Xia, Y Zhang, X Cun… - Proceedings of the …, 2023 - openaccess.thecvf.com
Speech-driven 3D facial animation has been widely studied, yet a gap remains in
achieving realism and vividness due to the highly ill-posed nature and scarcity of audio …

Learning an animatable detailed 3D face model from in-the-wild images

Y Feng, H Feng, MJ Black, T Bolkart - ACM Transactions on Graphics …, 2021 - dl.acm.org
While current monocular 3D face reconstruction methods can recover fine geometric details,
they suffer several limitations. Some methods produce faces that cannot be realistically …

AD-NeRF: Audio-driven neural radiance fields for talking head synthesis

Y Guo, K Chen, S Liang, YJ Liu… - Proceedings of the …, 2021 - openaccess.thecvf.com
Generating high-fidelity talking head video by fitting to an input audio sequence is a
challenging problem that has received considerable attention recently. In this paper, we …

Flow-guided one-shot talking face generation with a high-resolution audio-visual dataset

Z Zhang, L Li, Y Ding, C Fan - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
One-shot talking face generation should synthesize facial videos of high visual quality, with
reasonable animation of expression and head pose, using only arbitrary driving audio …