Self-supervised single-view 3D reconstruction via semantic consistency
Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020. Springer.
Abstract
We learn a self-supervised, single-view 3D reconstruction model that predicts the 3D mesh shape, texture, and camera pose of a target object from a collection of 2D images and silhouettes. The proposed method requires no 3D supervision, manually annotated keypoints, multi-view images of an object, or a prior 3D template. The key insight of our work is that objects can be represented as a collection of deformable parts, and each part is semantically coherent across different instances of the same category (e.g., wings on birds and wheels on cars). Therefore, by leveraging part segmentations of a large collection of category-specific images, learned via self-supervision, we can effectively enforce semantic consistency between the reconstructed meshes and the original images. This significantly reduces ambiguities in the joint prediction of an object's shape and camera pose, along with its texture. To the best of our knowledge, we are the first to attempt solving the single-view reconstruction problem without a category-specific template mesh or semantic keypoints, so our model readily generalizes to object categories lacking such labels (e.g., horses and penguins). Through a variety of experiments on several categories of deformable and rigid objects, we demonstrate that our unsupervised method performs comparably to, if not better than, existing category-specific reconstruction methods learned with supervision. More details can be found at the project page https://sites.google.com/nvidia.com/unsup-mesh-2020 .
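The semantic-consistency idea described in the abstract can be made concrete as a loss that compares two per-pixel part distributions. The following is a minimal sketch, not the authors' implementation: it assumes a hypothetical differentiable renderer has already produced per-pixel part probabilities from the predicted mesh (rendered_parts), and that a self-supervised part-segmentation network has produced the corresponding map for the input image (image_parts). All names, shapes, and the cross-entropy form of the loss are illustrative assumptions.

# Illustrative sketch of a semantic-consistency loss between a part map
# rendered from the predicted 3D mesh and a part map predicted on the image.
# Tensor names and shapes are hypothetical, not the paper's API.
import torch
import torch.nn.functional as F

def semantic_consistency_loss(rendered_parts: torch.Tensor,
                              image_parts: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between two soft per-pixel part assignments.

    rendered_parts: (B, K, H, W) part probabilities rendered from the mesh,
                    assumed to come from a differentiable renderer.
    image_parts:    (B, K, H, W) part probabilities from a self-supervised
                    part-segmentation network applied to the input image.
    """
    eps = 1e-8  # avoid log(0)
    # Per-pixel cross entropy between the two distributions, averaged
    # over pixels and the batch.
    return -(image_parts * torch.log(rendered_parts + eps)).sum(dim=1).mean()

if __name__ == "__main__":
    B, K, H, W = 2, 4, 64, 64  # batch, number of parts, height, width (illustrative)
    rendered = F.softmax(torch.randn(B, K, H, W), dim=1)
    observed = F.softmax(torch.randn(B, K, H, W), dim=1)
    print(semantic_consistency_loss(rendered, observed).item())

Because the same K parts are matched across all instances of a category, a loss of this kind constrains which mesh regions may project onto which image regions, which is one way to see how it disambiguates the joint estimation of shape and camera pose.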