Urban Radiance Field Representation with Deformable Neural Mesh Primitives
Abstract
Neural Radiance Fields (NeRFs) have achieved great success in the past few years. However, most current methods still require intensive resources due to ray-marching-based rendering. To construct urban-level radiance fields efficiently, we design the Deformable Neural Mesh Primitive (DNMP) and propose to parameterize the entire scene with such primitives. The DNMP is a flexible and compact neural variant of the classic mesh representation, which enjoys both the efficiency of rasterization-based rendering and the …
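The abstract excerpt is truncated, but the representation it describes lends itself to a concrete sketch. Below is a minimal, hypothetical PyTorch module for one such primitive: a template mesh whose vertex offsets are decoded from a compact shape latent, carrying learnable per-vertex features for later shading. The field names, decoder layout, and dimensions here are assumptions for illustration, not the paper's actual parameterization.

```python
import torch
import torch.nn as nn


class DNMP(nn.Module):
    """Hypothetical sketch of one deformable neural mesh primitive."""

    def __init__(self, template_vertices: torch.Tensor, faces: torch.Tensor,
                 latent_dim: int = 8, feat_dim: int = 32):
        super().__init__()
        num_verts = template_vertices.shape[0]
        self.register_buffer("template", template_vertices)  # (V, 3) positions
        self.register_buffer("faces", faces)                 # (F, 3) indices
        # Compact shape latent and per-vertex features (assumed dimensions).
        self.shape_latent = nn.Parameter(torch.zeros(latent_dim))
        self.vertex_feats = nn.Parameter(torch.zeros(num_verts, feat_dim))
        # Tiny decoder from the shape latent to per-vertex offsets.
        self.decode_offsets = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(inplace=True),
            nn.Linear(64, num_verts * 3),
        )

    def vertices(self) -> torch.Tensor:
        """Deformed vertex positions: template plus decoded offsets."""
        offsets = self.decode_offsets(self.shape_latent).view(-1, 3)
        return self.template + offsets
```

Rasterizing the deformed meshes (the supplementary material below notes that PyTorch3D is used for differentiable rasterization) would yield per-pixel interpolated vertex features, which a small MLP then maps to opacity and view-dependent radiance.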
Urban Radiance Field Representation with Deformable Neural Mesh Primitives—Supplementary Material
The overall framework is implemented using PyTorch [11]. The differentiable rasterization is implemented based on PyTorch3D [12]. The network Fθ is composed of 8 layers with width 256 for opacity prediction and an additional 2 layers for view-dependent radiance value prediction. In our lightweight version, the layer number and width of the MLPs for opacity prediction are reduced to 2 and 64, respectively. We use positional encoding with frequency L = 4 to encode the view-dependent factors. As mentioned in the manuscript, we use Mip …
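To make the architecture above concrete, here is a minimal PyTorch sketch of the described Fθ: an 8-layer, width-256 MLP for opacity plus 2 additional layers for view-dependent radiance, with frequency L = 4 positional encoding of the view-dependent factors. The exact activations, skip connections, and input feature dimension are not given in the excerpt, so those parts are assumptions.

```python
import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, num_freqs: int = 4) -> torch.Tensor:
    """Sin/cos encoding at frequencies 2^0 .. 2^(L-1), with L = 4 here."""
    feats = [x]
    for i in range(num_freqs):
        feats.append(torch.sin((2.0 ** i) * x))
        feats.append(torch.cos((2.0 ** i) * x))
    return torch.cat(feats, dim=-1)


class FTheta(nn.Module):
    """Opacity/radiance network with the depths and widths quoted above."""

    def __init__(self, feat_dim: int = 32, depth: int = 8, width: int = 256,
                 num_freqs: int = 4):
        super().__init__()
        layers, in_dim = [], feat_dim
        for _ in range(depth):  # 8 x 256 (full) or 2 x 64 (lightweight)
            layers += [nn.Linear(in_dim, width), nn.ReLU(inplace=True)]
            in_dim = width
        self.trunk = nn.Sequential(*layers)
        self.opacity_head = nn.Linear(width, 1)
        # Two additional layers for view-dependent radiance, conditioned on
        # the positionally encoded view direction (3 dims before encoding).
        view_dim = 3 * (1 + 2 * num_freqs)
        self.radiance_head = nn.Sequential(
            nn.Linear(width + view_dim, width), nn.ReLU(inplace=True),
            nn.Linear(width, 3),
        )
        self.num_freqs = num_freqs

    def forward(self, feats: torch.Tensor, view_dirs: torch.Tensor):
        h = self.trunk(feats)
        opacity = torch.sigmoid(self.opacity_head(h))
        view_enc = positional_encoding(view_dirs, self.num_freqs)
        rgb = torch.sigmoid(self.radiance_head(torch.cat([h, view_enc], -1)))
        return opacity, rgb
```

Under this reading, the lightweight variant would simply be FTheta(depth=2, width=64).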