Active Exploration for Neural Global Illumination of Variable Scenes

ACM Transactions on Graphics (Presented at SIGGRAPH 2022)


Active Exploration uses MCMC to optimize ground-truth data generation during training of a Neural Generator. Our Neural Generator can render variable scenes at interactive rates with effects that were previously prohibitively expensive.


Neural rendering algorithms introduce a fundamentally new approach for photorealistic rendering, typically by learning a neural representation of illumination from large numbers of ground-truth images. When training for a given variable scene, i.e., changing objects, materials, lights, and viewpoint, the space D of possible training data instances quickly becomes unmanageable as the number of variable parameters increases.

We introduce a novel Active Exploration method using Markov Chain Monte Carlo, which explores D, generating samples (i.e., ground-truth renderings) that best help training, and interleaves training with on-the-fly sample data generation. We introduce a self-tuning sample reuse strategy to minimize the expensive step of rendering training samples. We apply our approach to a neural generator that learns to render novel scene instances given an explicit parameterization of the scene configuration.
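The exploration idea can be illustrated with a minimal Metropolis-style walk over the scene-configuration space D. This is a hedged sketch, not the paper's actual algorithm: `loss_at` is a hypothetical stand-in for the generator's training loss on a rendered sample, and the acceptance rule simply favors high-loss (hard) configurations so that ground-truth rendering effort concentrates near them.

```python
import random

def loss_at(config):
    # Hypothetical stand-in for the generator's training loss on the
    # rendering of scene configuration `config` (a vector in [0, 1]^d).
    return sum((c - 0.7) ** 2 for c in config)

def mcmc_explore(config, n_steps=200, sigma=0.05, seed=0):
    """Metropolis-style walk over D: propose a nearby configuration and
    accept it with probability proportional to the loss ratio, so that
    configurations where the generator struggles are visited more often."""
    rng = random.Random(seed)
    current_loss = loss_at(config)
    visited = [list(config)]
    for _ in range(n_steps):
        # Gaussian perturbation of the scene parameters, clamped to [0, 1].
        proposal = [min(1.0, max(0.0, c + rng.gauss(0.0, sigma))) for c in config]
        prop_loss = loss_at(proposal)
        if prop_loss >= current_loss or rng.random() < prop_loss / max(current_loss, 1e-12):
            config, current_loss = proposal, prop_loss
        visited.append(list(config))
    return visited
```

In the full method each visited configuration would be rendered as a ground-truth sample and interleaved with generator training steps; here the walk only records which configurations it would sample.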

Our results show that Active Exploration trains our network much more efficiently than uniform sampling and, together with our resolution enhancement approach, achieves better quality than uniform sampling at convergence. Our method allows interactive rendering of hard light transport paths (e.g., complex caustics) – which require very high sample counts to be captured – and provides dynamic scene navigation and manipulation, after training for 5-18 hours depending on the required quality and variations.




Overview of our method. At training time we operate on patches of G-Buffers and use a Pixel Generator to produce the neural rendering. Our active exploration finds configurations of the scene where the Generator is struggling and explores similar samples. Our explicit scene representation vector compactly represents the variability of the scene to the Generator and defines the space D of possible scene configurations. Our sample reuse stochastically chooses to render a new ground truth or to reuse a stored sample. During inference, the Pixel Generator takes as input G-Buffers and the current state of the scene through the explicit scene representation vector, and generates an image with global illumination at interactive rates.
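The stochastic sample-reuse step can be sketched as a cached rendering decision. This is an illustrative assumption, not the paper's implementation: `render_fn` stands in for the expensive ground-truth renderer, and `reuse_prob`, which the paper's strategy self-tunes, is a fixed parameter here.

```python
import random

def choose_sample(cache, config, render_fn, reuse_prob, rng=random):
    """With probability `reuse_prob`, return a previously rendered
    (config, ground_truth) pair from the cache; otherwise pay for a new
    ground-truth rendering of `config` and store it for later reuse."""
    if cache and rng.random() < reuse_prob:
        return rng.choice(cache)
    sample = (config, render_fn(config))  # the expensive step
    cache.append(sample)
    return sample
```

Raising `reuse_prob` trades rendering cost against the freshness of training data, which is why tuning it automatically matters.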

Ours vs Uniform


Compared to uniform sampling of the possible scene configurations, our method samples hard effects more often and, as a result, achieves quality much closer to the ground truth.

Interactive Sessions


Veach Door


Living Room

Sphere Caustic



BibTeX

@article{diolatzis2022active,
  title={Active Exploration for Neural Global Illumination of Variable Scenes},
  author={Diolatzis, Stavros and Philip, Julien and Drettakis, George},
  journal={ACM Transactions on Graphics},
  year={2022}
}

Acknowledgments and Funding

This research was funded by ERC Advanced grant FUNGRAPH No 788065. The authors are grateful to the OPAL infrastructure from Université Côte d’Azur for providing resources and support. The authors would also like to thank the anonymous referees for their valuable comments and helpful suggestions, as well as Georgios Kopanas and Felix Hähnlein for fruitful discussions.