Volumetric Rendering with Baked Quadrature Fields

1University of British Columbia, 2Google DeepMind, 3University of Toronto, 4Simon Fraser University
[Teaser figure]

TL;DR: Our approach extracts a mesh from a quadrature field, associates a texture with the mesh, and renders it using depth peeling, which can reproduce volumetric effects such as fur.

Abstract

We propose a novel Neural Radiance Field (NeRF) representation for non-opaque scenes that allows fast inference by utilizing textured polygons. Despite the high-quality novel view rendering that NeRF provides, a critical limitation is that it relies on volume rendering, which can be computationally expensive and does not utilize the advancements in modern graphics hardware. Existing methods for this problem fall short when it comes to modelling volumetric effects, as they rely purely on surface rendering. We thus propose to model the scene with polygons, which can then be used to obtain the quadrature points required to model volumetric effects, as well as their opacity and colour from the texture. To obtain such a polygonal mesh, we train a specialized field whose zero-crossings correspond to the quadrature points of volume rendering, and perform marching cubes on this field. We then rasterize the polygons and utilize fragment shaders to obtain the final colour image. Our method allows rendering on various devices and easy integration with existing graphics frameworks while retaining the benefits of volume rendering.
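For reference, the equation below is not from the page itself, but it is the standard discrete quadrature rule that NeRF-style volume rendering uses, and the one our quadrature points must serve: given points along a ray with densities \(\sigma_i\), colours \(c_i\), and spacings \(\delta_i\), the pixel colour is

\[ C = \sum_i T_i \, \alpha_i \, c_i, \qquad \alpha_i = 1 - e^{-\sigma_i \delta_i}, \qquad T_i = \prod_{j<i} (1 - \alpha_j). \]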


Approach

[Figure: zero-crossings of the quadrature field at increasing frequency ω]

The core of our approach is learning a quadrature field \(\sin(\omega \mathcal{F}(x))\), whose zero-crossings give us quadrature points deterministically. We train this field so that it produces more quadrature points in regions where the volumetric weights are higher. The relative number of zero-crossings can be controlled by the frequency term \(\omega\), as shown above.
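As a minimal sketch (not the paper's actual pipeline, which extracts a mesh via marching cubes rather than ray-marching at test time), the snippet below shows how the zero-crossings of \(\sin(\omega \mathcal{F}(x))\) along a ray could be located by sign changes and refined by linear interpolation; `field_F`, the sample count, and the ray bounds are hypothetical stand-ins:

```python
import numpy as np

def field_F(x):
    # Hypothetical stand-in for the learned scalar field F(x);
    # here, the signed distance to a unit sphere.
    return np.linalg.norm(x, axis=-1) - 1.0

def quadrature_points(origin, direction, omega=30.0,
                      n_samples=512, t_near=0.0, t_far=4.0):
    """Find ray depths t where sin(omega * F(o + t*d)) crosses zero."""
    t = np.linspace(t_near, t_far, n_samples)
    x = origin[None, :] + t[:, None] * direction[None, :]
    g = np.sin(omega * field_F(x))
    # Indices where consecutive samples bracket a zero-crossing.
    flips = np.where(np.sign(g[:-1]) * np.sign(g[1:]) < 0)[0]
    t0, t1 = t[flips], t[flips + 1]
    g0, g1 = g[flips], g[flips + 1]
    # One linear-interpolation step to refine each root.
    return t0 - g0 * (t1 - t0) / (g1 - g0)

# Example: quadrature points along a ray through the sphere.
print(quadrature_points(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))
```

Raising `omega` increases the number of sign changes per unit length, which is exactly the behaviour the figure above illustrates.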

[Figure: overview of the training and baking pipeline]
In short, we start with a pre-trained network to train a quadrature field that learns the placement of quadrature points. The mesh extracted from the quadrature field is fine-tuned using a deformation field (the deformation is shown in red on the deformed mesh). Lastly, the neural features are baked into a texture map and the mesh, which can be rendered with WebGL.
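To illustrate the final rendering step, here is a hedged Python sketch of front-to-back compositing of depth-peeled fragments at a single pixel; in practice this happens in a WebGL fragment shader with colour and opacity fetched from the baked texture, and the fragment format and termination threshold below are assumptions for illustration:

```python
def shade_pixel(fragments):
    """Composite depth-peeled fragments front-to-back with the 'over' operator.

    fragments: iterable of (depth, (r, g, b), alpha) tuples, one per peeling
    pass, with colour and alpha looked up from the baked texture map.
    """
    out = [0.0, 0.0, 0.0]
    transmittance = 1.0
    for _, rgb, alpha in sorted(fragments, key=lambda f: f[0]):  # nearest first
        w = transmittance * alpha
        out = [c + w * ci for c, ci in zip(out, rgb)]
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:  # early exit once the pixel is nearly opaque
            break
    return out

# Example: two semi-transparent "fur" layers in front of an opaque surface.
print(shade_pixel([(2.0, (0.2, 0.2, 0.8), 1.0),
                   (1.0, (0.9, 0.7, 0.4), 0.3),
                   (1.5, (0.8, 0.6, 0.3), 0.3)]))
```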

Results

Visualization of reconstructed NeRF-synthetic scenes and our fur dataset.


Visualization of reconstructed real-world 360° scenes.



Our approach uses multiple quadrature points per ray, which allows it to represent volumetric effects such as fur. Purely surface-based rendering approaches fail at this task.


Our approach represents transparency better than surface-based baking approaches such as MobileNeRF.

BibTeX

@article{sharmaquadraturefield,
  author    = {Gopal Sharma and Daniel Rebain and Andrea Tagliasacchi and Kwang Moo Yi},
  title     = {Volumetric Rendering with Baked Quadrature Fields},
  journal   = {arXiv preprint},
  year      = {2023},
}