BungeeNeRF: Progressive Neural Radiance Field for Extreme Multi-scale Scene Rendering
European Conference on Computer Vision (ECCV), 2022, Springer
Abstract
Neural radiance fields (NeRF) have achieved outstanding performance in modeling 3D objects and controlled scenes, usually at a single scale. In this work, we focus on multi-scale cases where large changes in imagery are observed at drastically different scales. This scenario commonly arises in real-world 3D environments, such as city scenes, with views ranging from satellite level, which captures the overview of a city, to ground level, which shows the complex details of a building; it can also be found in landscapes and detailed Minecraft 3D models. The wide span of viewing positions within these scenes yields multi-scale renderings with very different levels of detail, which poses great challenges to neural radiance fields and biases them towards compromised results. To address these issues, we introduce BungeeNeRF, a progressive neural radiance field that achieves level-of-detail rendering across drastically varied scales. Starting from fitting distant views with a shallow base block, as training progresses, new blocks are appended to accommodate the emerging details in the increasingly closer views. The strategy progressively activates high-frequency channels in NeRF's positional encoding inputs and successively unfolds more complex details as training proceeds. We demonstrate the superiority of BungeeNeRF in modeling diverse multi-scale scenes with drastically varying views on multiple data sources (city models, synthetic, and drone-captured data) and its support for high-quality rendering at different levels of detail.
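The progressive activation of high-frequency positional-encoding channels described above can be sketched as follows. The `positional_encoding` function is the standard NeRF frequency encoding; the staging schedule in `progressive_mask` is a simplified illustration of the idea (low bands active early, higher bands switched on in later stages), not the paper's exact schedule:

```python
import numpy as np

def positional_encoding(x, num_freqs):
    """Standard NeRF positional encoding: [sin(2^k x), cos(2^k x)] for k < num_freqs."""
    feats = []
    for k in range(num_freqs):
        feats.append(np.sin((2.0 ** k) * x))
        feats.append(np.cos((2.0 ** k) * x))
    return np.concatenate(feats, axis=-1)

def progressive_mask(num_freqs, stage, total_stages):
    """Illustrative progressive mask: enable a growing prefix of frequency
    bands as training stages advance (hypothetical linear schedule)."""
    active = num_freqs * (stage + 1) // total_stages  # bands enabled so far
    mask = np.zeros(num_freqs)
    mask[:active] = 1.0
    return np.repeat(mask, 2)  # one mask entry per sin/cos pair

# Example: a 1D coordinate encoded with 8 frequency bands.
x = np.array([0.3])
pe = positional_encoding(x, num_freqs=8)                           # shape (16,)
masked_early = pe * progressive_mask(8, stage=0, total_stages=4)   # low freqs only
masked_late = pe * progressive_mask(8, stage=3, total_stages=4)    # all freqs active
```

In the early stage, only coarse (low-frequency) channels reach the network, matching the distant views fit by the base block; as closer views and new blocks are added, the mask unfolds the high-frequency channels that carry fine detail.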