Counterfactual Depth from a Single RGB Image

T. Issaranon, C. Zou, D. Forsyth - Proceedings of the IEEE/CVF International Conference on …, 2019 - openaccess.thecvf.com
Abstract
We describe a method that predicts, from a single RGB image, a depth map that describes the scene when a masked object is removed; we call this "counterfactual depth", as it models hidden scene geometry together with the observations. Our method works for the same reason that scene completion works: the spatial structure of objects is simple. But we offer a much higher-resolution representation of space than current scene completion methods, as we operate at pixel-level precision and do not rely on a voxel representation. Furthermore, we do not require RGBD inputs. Our method uses a standard encoder-decoder architecture, with the decoder modified to accept an object mask. We describe a small evaluation dataset that we have collected, which allows inference about what factors affect reconstruction most strongly. Using this dataset, we show that our depth predictions for masked objects are better than other baselines.
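The abstract's setup can be made concrete with a small sketch: a single RGB image as encoder input, a binary object mask fed to the decoder, and a per-pixel counterfactual depth map as the target, evaluated in the masked region. The shapes, the extra-channel way of feeding the mask, and the RMSE metric below are illustrative assumptions, not the paper's actual architecture or evaluation protocol.

```python
import numpy as np

# Illustrative shapes; the real network operates on full-resolution images.
rng = np.random.default_rng(0)
H, W = 8, 8

rgb = rng.random((H, W, 3), dtype=np.float32)   # single RGB input image
mask = np.zeros((H, W), dtype=np.float32)
mask[2:5, 2:5] = 1.0                            # object to be "removed"

# One common way to condition a decoder on a mask: append it as a channel.
net_input = np.concatenate([rgb, mask[..., None]], axis=-1)  # (H, W, 4)

# Stand-ins for predicted and ground-truth counterfactual depth maps
# (in the paper these come from the network and the evaluation dataset).
pred_depth = rng.random((H, W), dtype=np.float32)
gt_depth = rng.random((H, W), dtype=np.float32)

# Evaluation focuses on the masked region, where the geometry is hidden.
m = mask.astype(bool)
rmse_masked = float(np.sqrt(np.mean((pred_depth[m] - gt_depth[m]) ** 2)))
print(net_input.shape, rmse_masked)
```

The pixel-level claim in the abstract corresponds to the output being a dense (H, W) depth map rather than a coarse voxel grid.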