Editable Physically-based Reflections in Raytraced Gaussian Radiance Fields

SIGGRAPH Asia 2025

Yohan Poirier-Ginter1,2      Jeffrey Hu1      Jean-François Lalonde2      George Drettakis1     
1Inria, Université Côte d'Azur      2Université Laval      3Adobe Research     

Abstract

Radiance fields such as 3D Gaussian Splatting allow real-time rendering of scenes captured from photos. They also reconstruct most specular reflections with high visual quality, but typically model them with “fake” reflected geometry, using primitives placed behind the reflector. Our goal is to correctly reconstruct both the reflector and the reflected objects, so as to make specular reflections editable. We present a proof of concept that exploits promising learning-based methods to extract diffuse and specular buffers from photos, as well as geometry and BRDF buffers. Our method builds on three key components. First, using the diffuse and specular buffers of the input training views, we optimize a diffuse version of the scene and use path tracing to efficiently generate physically-based specular reflections. Second, we present a specialized training method that allows this process to converge. Finally, we present a fast ray-tracing algorithm for 3D Gaussian primitives that enables efficient multi-bounce reflections. Our method reconstructs reflectors and reflected objects—including those not seen in the input images—in a unique scene representation. Our solution allows real-time, consistent editing of captured scenes with specular reflections, including multi-bounce effects, changing roughness, and more. We mainly show results using ground-truth buffers from synthetic scenes, as well as preliminary results on real scenes with currently imperfect learning-based buffers.
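As a rough sketch of this decomposition (with notation we introduce here for illustration, not taken from the paper), the outgoing radiance at a surface point can be written as a splatted diffuse term plus a path-traced specular term:

L_o(\mathbf{x}, \omega_o) \;=\; L_{\mathrm{diff}}(\mathbf{x}) \;+\; \int_{\Omega} f_s(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i ,

where L_diff is rendered by splatting the diffuse Gaussians, f_s is the specular BRDF recovered from the estimated buffers, and the incident radiance L_i is evaluated by tracing rays against the same Gaussian primitives (recursively, for multi-bounce reflections).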

Method

In short, our method leverages intrinsic decomposition to separately reconstruct the diffuse and reflected components of a scene, making editable specular reflections possible (see video). We optimize a unique representation of the scene in which both the reflectors and the reflected objects are represented with the same Gaussian primitives; this is achieved by carefully designing separate losses and a training schedule.
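As a minimal illustration of such separate supervision (a PyTorch-style sketch under our own assumptions, not the paper's actual implementation), the diffuse rasterization can be supervised by the diffuse buffer and the path-traced specular pass by the specular buffer, with the specular term switched on only after a diffuse warm-up; the function names, the L1 losses, and the schedule below are all illustrative:

import torch
import torch.nn.functional as F

def training_loss(diffuse_render: torch.Tensor, specular_render: torch.Tensor,
                  diffuse_gt: torch.Tensor, specular_gt: torch.Tensor,
                  step: int, warmup_steps: int = 5000,
                  spec_weight: float = 1.0) -> torch.Tensor:
    """Separate supervision of the diffuse and specular passes (illustrative sketch).

    diffuse_render: diffuse radiance splatted from the Gaussian scene, shape (3, H, W).
    specular_render: path-traced specular pass over the same primitives, shape (3, H, W).
    diffuse_gt / specular_gt: intrinsic-decomposition buffers for this training view.
    The specular loss is enabled only after a diffuse warm-up phase; this particular
    schedule and the L1 losses are assumptions, not the paper's exact recipe.
    """
    loss = F.l1_loss(diffuse_render, diffuse_gt)
    if step >= warmup_steps:
        loss = loss + spec_weight * F.l1_loss(specular_render, specular_gt)
    return loss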

Indirect Reconstruction

Our unique scene representation makes it possible to reconstruct objects viewed only indirectly, through specular reflections. We showcase this by recovering the cover of a book that is visible in the training views only indirectly, through a mirror (left). From these views alone, our method accurately recovers the cover (right), unlike standard methods such as 3DGS (middle). Note that in these results our method uses ground-truth input buffers, and that Gaussians are initialized around the book cover for both methods.
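For intuition, the reflected object is only constrained along mirror-reflected rays; the small NumPy sketch below computes those reflection directions from per-pixel normals (e.g. from a G-buffer). The function and argument names are our own, illustrative choices:

import numpy as np

def reflect_rays(view_dirs: np.ndarray, normals: np.ndarray) -> np.ndarray:
    """Mirror-reflect view directions about surface normals: r = d - 2 (d . n) n.

    view_dirs: (..., 3) directions from the camera to points on the reflector.
    normals:   (..., 3) surface normals at those points.
    The returned rays are the directions along which the reflected object
    (here, the book cover) is observed, and hence where its Gaussians receive
    gradient during training.
    """
    d = view_dirs / np.linalg.norm(view_dirs, axis=-1, keepdims=True)
    n = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
    return d - 2.0 * np.sum(d * n, axis=-1, keepdims=True) * n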

Editing Results

Our method makes editable specular reflections possible. We show several examples of scene editing, including changing the base reflectance of objects (top), rotating objects (middle), and changing the roughness of a teapot (bottom). All edits remain consistent with the rest of the scene, including multi-bounce reflections. Note that in these results, our method uses ground-truth input buffers.
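As a sketch of what such an edit amounts to under this representation (the data structure and attribute names below are hypothetical, not the paper's), changing the roughness of an object is just an update of per-Gaussian BRDF parameters followed by re-rendering with the path tracer, so reflections of the edited object, including multi-bounce ones, update automatically:

from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianScene:
    positions: np.ndarray         # (N, 3) Gaussian centers
    roughness: np.ndarray         # (N,)   per-primitive roughness
    base_reflectance: np.ndarray  # (N, 3) per-primitive specular base reflectance

def set_roughness(scene: GaussianScene, selection: np.ndarray, value: float) -> GaussianScene:
    """Set the roughness of a selected object (e.g. the teapot's primitives).

    `selection` is a boolean mask over the N primitives. After the edit, the
    scene is simply re-rendered with the path tracer; no per-edit retraining
    is implied by this sketch.
    """
    scene.roughness[selection] = value
    return scene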

BibTeX

@inproceedings{poirierginter:hal-05306634,
  TITLE = {{Editable Physically-based Reflections in Raytraced Gaussian Radiance Fields}},
  AUTHOR = {Poirier-Ginter, Yohan and Hu, Jeffrey and Lalonde, Jean-Fran{\c c}ois and Drettakis, George},
  URL = {https://inria.hal.science/hal-05306634},
  BOOKTITLE = {{SIGGRAPH Asia 2025 - 18th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia}},
  ADDRESS = {Hong Kong, Hong Kong SAR China},
  YEAR = {2025},
  MONTH = Dec,
  DOI = {10.1145/3757377.3763971},
  KEYWORDS = {Gaussian splatting ; reconstruction ; differentiable rendering ; path tracing ; Computing methodologies $\rightarrow$ Rendering},
  PDF = {https://inria.hal.science/hal-05306634v1/file/saconferencepapers25-163.pdf},
  HAL_ID = {hal-05306634},
  HAL_VERSION = {v1},
}

Acknowledgments and Funding

This research was co-funded by the European Union (EU) ERC Advanced Grant FUNGRAPH No 788065 and ERC Advanced Grant NERPHYS No 101141721. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the EU or the European Research Council. Neither the EU nor the granting authority can be held responsible for them. This research was also supported by NSERC grant RGPIN-2020-04799 and the Digital Research Alliance of Canada. The authors are grateful to Adobe and NVIDIA for generous donations, and to the OPAL infrastructure of Université Côte d'Azur.