Generate what you can't see - a view-dependent image generation

K. Piaskowski, R. Staszak, D. Belter - 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019 - ieeexplore.ieee.org
In order to operate autonomously, a robot should explore the environment and build a model of each of the surrounding objects. A common approach is to carefully scan the whole workspace. This is time-consuming. It is also often impossible to reach all the viewpoints required to acquire full knowledge about the environment. Humans can perform shape completion of occluded objects by relying on past experience. Therefore, we propose a method that generates images of an object from various viewpoints using a single input RGB image. A deep neural network is trained to imagine the object appearance from many viewpoints. We present the whole pipeline, which takes a single RGB image as input and returns a sequence of RGB and depth images of the object. The method utilizes a CNN-based object detector to extract the object from the natural scene. Then, the proposed network generates a set of RGB and depth images. We show the results both on a synthetic dataset and on real images.
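The abstract describes a two-stage pipeline: a CNN-based detector first extracts the object from the scene, and a generative network then produces RGB and depth images of that object from multiple viewpoints. The sketch below is a minimal, hypothetical illustration of that flow, not the authors' implementation: it uses an off-the-shelf torchvision detector and a placeholder encoder-decoder (`ViewpointGenerator`, invented here) standing in for the paper's generation network.

```python
# Hypothetical sketch: detect the object in a single RGB image, crop it, and
# feed the crop to a viewpoint-generation network that returns a stack of
# (RGB, depth) views. The generator below is a placeholder, not the paper's model.
import torch
import torch.nn as nn
import torchvision


class ViewpointGenerator(nn.Module):
    """Placeholder encoder-decoder mapping one RGB crop to n_views (RGB, depth) pairs."""

    def __init__(self, n_views=8):
        super().__init__()
        self.n_views = n_views
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # 4 output channels per view: 3 for RGB, 1 for depth.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 4 * n_views, 4, stride=2, padding=1),
        )

    def forward(self, crop):                       # crop: (B, 3, H, W)
        out = self.decoder(self.encoder(crop))     # (B, 4*n_views, H, W)
        out = out.view(out.size(0), self.n_views, 4, out.size(2), out.size(3))
        return out[:, :, :3], out[:, :, 3:]        # per-view RGB and depth


def generate_views(image, detector, generator, score_thresh=0.7):
    """Detect the highest-scoring object, crop it, and generate novel views."""
    with torch.no_grad():
        det = detector([image])[0]
        keep = det["scores"] > score_thresh
        if not keep.any():
            return None
        x1, y1, x2, y2 = det["boxes"][keep][0].int().tolist()
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)
        crop = nn.functional.interpolate(crop, size=(128, 128),
                                         mode="bilinear", align_corners=False)
        return generator(crop)


detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
generator = ViewpointGenerator().eval()
views = generate_views(torch.rand(3, 480, 640), detector, generator)
if views is not None:
    rgb_views, depth_views = views
    print(rgb_views.shape, depth_views.shape)  # (1, 8, 3, 128, 128), (1, 8, 1, 128, 128)
```

In the paper the generator is trained on both synthetic and real data and the detector crop replaces a full workspace scan; the placeholder above only mirrors the data flow (single RGB in, sequence of RGB and depth views out).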