Neural Precomputed Radiance Transfer (NPRT)

Computer Graphics Forum (Proceedings of the Eurographics 2022 conference)


Summary


We investigate the design space of small neural networks that learn real-time rendering of a fixed scene, in particular neural shaders that take G-buffer pixels and environment-map lighting as input and predict outgoing radiance. We show that, through tweaks to the internal architecture of the network, we can achieve much higher quality at no extra cost in neural budget.

Our design choices are inspired by traditional Precomputed Radiance Transfer (PRT) techniques, which paved the way for efficient neural renderers that essentially operate in the same paradigm.

Examples of neural renderers running at over 30 FPS are shown below.

Overview

We examine four architectures of increasing complexity and closer analogy to PRT. Each proposed framework has the same number of trainable parameters and takes as input the per-pixel G-buffer information plus a learnt encoding of the environment lighting.
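As a concrete illustration of the lighting input, the code could be produced by a small convolutional encoder applied to the environment map. The following is a minimal PyTorch sketch; the envmap resolution, code size, and layer widths are assumptions for illustration, not the paper's exact settings.

import torch
import torch.nn as nn

class LightEncoder(nn.Module):
    """Hypothetical encoder mapping an environment map to a compact lighting code."""
    def __init__(self, code_dim=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1),   # 3-channel envmap
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                                # global pooling over directions
        )
        self.fc = nn.Linear(16, code_dim)

    def forward(self, envmap):                     # envmap: (B, 3, H, W)
        features = self.conv(envmap).flatten(1)    # (B, 16)
        return self.fc(features)                   # (B, code_dim) lighting code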

Architectures

Baseline

The most straightforward architecture is simply a fully-connected network that takes the concatenated G-Buffer properties and light code as input and directly outputs radiance. However, nothing helps the network disentangle rendering invariants, so training converges to a suboptimal, averaged solution. Most noticeably, the shading appears blurry and baked.
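A minimal PyTorch sketch of such a baseline is given below; the layer widths, input dimensions, and activations are illustrative assumptions rather than the exact configuration used in the paper.

import torch
import torch.nn as nn

class BaselineShader(nn.Module):
    """Plain MLP: concatenated G-buffer features + light code -> RGB radiance."""
    def __init__(self, gbuffer_dim=12, light_dim=16, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(gbuffer_dim + light_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 3),   # outgoing radiance (RGB)
        )

    def forward(self, gbuffer, light_code):
        # Per-pixel G-buffer features and the lighting code enter as one concatenated vector.
        x = torch.cat([gbuffer, light_code], dim=-1)
        return self.mlp(x)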

PRT-Inspired

We take inspiration from PRT techniques to modify the flow of information inside the network. Rather than feeding all the inputs into a black box, we train the first half of the network to predict a transport code given the G-Buffer inputs. This code is concatenated to the lighting code and fed to the second half of the network, allowing the learnt processes to find better correlations between spatial and angular effects. Visually, this translates to sharper highlights and shadows.
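The restructured flow could look like the following sketch (dimensions and depths are again assumptions): the first half of the network sees only the G-buffer and emits a transport code, which is only then combined with the lighting code.

import torch
import torch.nn as nn

class PRTInspiredShader(nn.Module):
    """First half: G-buffer -> transport code. Second half: [transport, light] -> radiance."""
    def __init__(self, gbuffer_dim=12, light_dim=16, transport_dim=16, hidden=64):
        super().__init__()
        self.transport_net = nn.Sequential(        # analogous to precomputed transport in PRT
            nn.Linear(gbuffer_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, transport_dim),
        )
        self.shading_net = nn.Sequential(          # analogous to the light-integration step
            nn.Linear(transport_dim + light_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, gbuffer, light_code):
        transport_code = self.transport_net(gbuffer)
        return self.shading_net(torch.cat([transport_code, light_code], dim=-1))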

Albedo Factorized

We also propose to separate the outputs into diffuse and specular radiance, which are subsequently modulated by the diffuse and specular albedos of the shaded pixel. Here the network does not need to learn the multiplication by the albedos, and can instead focus on producing a quantity closer to irradiance, which is smoother and hence easier to learn.
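In a sketch, the network outputs two radiance-like terms that are multiplied by the known per-pixel albedos outside the learnt layers; the dimensions, depths, and the exact combination are illustrative assumptions.

import torch
import torch.nn as nn

class AlbedoFactorizedShader(nn.Module):
    """Predicts diffuse/specular irradiance-like terms, modulated by the G-buffer albedos."""
    def __init__(self, gbuffer_dim=12, light_dim=16, transport_dim=16, hidden=64):
        super().__init__()
        self.transport_net = nn.Sequential(
            nn.Linear(gbuffer_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, transport_dim),
        )
        self.shading_net = nn.Sequential(
            nn.Linear(transport_dim + light_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 6),   # 3 diffuse + 3 specular channels
        )

    def forward(self, gbuffer, light_code, diffuse_albedo, specular_albedo):
        t = self.transport_net(gbuffer)
        out = self.shading_net(torch.cat([t, light_code], dim=-1))
        diffuse, specular = out[..., :3], out[..., 3:]
        # The albedo multiplication is handled outside the learnt layers.
        return diffuse * diffuse_albedo + specular * specular_albedo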

Diffuse-specular Separation

Finally, we propose to also separate the inputs and feed them to two different tracks, each half the size of the previous architectures. The diffuse track only receives the diffuse properties of the G-Buffer inputs, while the specular track additionally receives the viewing direction and reflected direction. This allows each track to specialize internally for its respective appearance term, disentangling the learning of view-dependent and diffuse transport. Visually, this results in much more physically correct renderings, with accurate highlights and shadows.
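A sketch of this two-track variant follows. Each track is roughly half the width of the single-track networks above; the particular split of G-buffer channels into diffuse-only versus specular inputs (with view and reflected directions) is an assumption about the exact partition.

import torch
import torch.nn as nn

def make_track(in_dim, light_dim, hidden):
    """One half-width track: G-buffer subset -> transport code, then shading with the light code."""
    transport = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
    shading = nn.Sequential(nn.Linear(hidden + light_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))
    return transport, shading

class SeparatedShader(nn.Module):
    """Independent diffuse and specular tracks, combined via the per-pixel albedos."""
    def __init__(self, diffuse_dim=9, specular_dim=15, light_dim=16, hidden=32):
        # diffuse_dim / specular_dim are hypothetical channel counts for each input subset.
        super().__init__()
        self.d_transport, self.d_shading = make_track(diffuse_dim, light_dim, hidden)
        self.s_transport, self.s_shading = make_track(specular_dim, light_dim, hidden)

    def forward(self, diffuse_feats, specular_feats, light_code,
                diffuse_albedo, specular_albedo):
        d = self.d_shading(torch.cat([self.d_transport(diffuse_feats), light_code], dim=-1))
        s = self.s_shading(torch.cat([self.s_transport(specular_feats), light_code], dim=-1))
        return d * diffuse_albedo + s * specular_albedo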

Results

Presentation

BibTeX

@Article{RBRD22,
  author       = "Rainer, Gilles and Bousseau, Adrien and Ritschel, Tobias and Drettakis, George",
  title        = "Neural Precomputed Radiance Transfer",
  journal      = "Computer Graphics Forum (Proceedings of the Eurographics conference)",
  number       = "2",
  volume       = "41",
  month        = "April",
  year         = "2022",
  url          = "http://www-sop.inria.fr/reves/Basilic/2022/RBRD22"
}

Acknowledgments and Funding

This research was funded by the ERC Advanced grant FUNGRAPH No 788065. The authors are grateful to the OPAL infrastructure from Université Côte d’Azur for providing resources and support. The authors would also like to thank Adobe for their generous donations, and acknowledge Fabrice Rousselle for helping with the comparison to NRC by running the code on our scenes. Finally, the authors thank the anonymous reviewers for their valuable feedback.