A 3D Morphable Model of Craniofacial Shape and Texture Variation

H Dai, N Pears, WAP Smith… - Proceedings of the IEEE International Conference on Computer …, 2017 - openaccess.thecvf.com
Abstract
We present a fully automatic pipeline to train 3D Morphable Models (3DMMs), with contributions in pose normalisation, dense correspondence using both shape and texture information, and high-quality, high-resolution texture mapping. We propose a dense correspondence system, combining a hierarchical parts-based template morphing framework in the shape channel and a refining optical flow in the texture channel. The texture map is generated using raw texture images from five views. We employ a pixel-embedding method to maintain the texture map at the same high resolution as the raw texture images, rather than using per-vertex colour maps. The high-quality texture map is then used for statistical texture modelling. The Headspace dataset used for training includes demographic information about each subject, allowing for the construction of both global 3DMMs and models tailored to specific gender and age groups. We build both global craniofacial 3DMMs and demographic sub-population 3DMMs from more than 1200 distinct identities. To our knowledge, we present the first public 3DMM of the full human head in both shape and texture: the Liverpool-York Head Model. Furthermore, we analyse the 3DMMs in terms of a range of performance metrics. Our evaluations reveal that the training pipeline constructs state-of-the-art models.
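The statistical modelling step the abstract describes is, at its core, principal component analysis over meshes that the pipeline has first placed in dense correspondence. The following sketch illustrates that step only, with toy random data in place of the registered Headspace scans; the function names and the choice of SVD-based PCA are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of linear statistical shape modelling for a 3DMM.
# Assumes all meshes are already in dense correspondence (same vertex
# count and ordering), which is what the paper's pipeline provides.
import numpy as np

def build_shape_model(meshes, n_components=10):
    """PCA shape model from an (n_subjects, n_vertices, 3) array.

    Returns the mean shape, the top principal components, and the
    standard deviation captured by each mode of variation.
    """
    n_subjects = meshes.shape[0]
    X = meshes.reshape(n_subjects, -1)            # flatten to (n, 3V)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    stdev = S[:n_components] / np.sqrt(n_subjects - 1)
    return mean, Vt[:n_components], stdev

def synthesise_shape(mean, components, stdev, coeffs):
    """Generate a head shape from model coefficients (in units of sigma)."""
    return (mean + (coeffs * stdev) @ components).reshape(-1, 3)

# Toy usage with random stand-in "registered meshes".
rng = np.random.default_rng(0)
meshes = rng.normal(size=(50, 100, 3))            # 50 subjects, 100 vertices
mean, comps, stdev = build_shape_model(meshes, n_components=5)
mean_head = synthesise_shape(mean, comps, stdev, np.zeros(5))  # mean shape
```

Setting all coefficients to zero recovers the mean head; varying a single coefficient within a few standard deviations explores one mode of craniofacial shape variation. A texture model can be built the same way over the per-subject texture maps.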