In short, our method leverages intrinsic decomposition to separately reconstruct the diffuse and reflected components of a scene, making editable specular reflections possible (see video). We optimize a unique scene representation in which both the reflectors and the reflected objects are represented with the same Gaussian primitives, which we achieve by carefully designing separate losses and a training schedule.
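To give a rough idea of how separate losses and a training schedule can be combined, the following is a minimal, hypothetical sketch in PyTorch-style Python; it is not the authors' code. The functions render_diffuse and render_reflected, the iteration threshold reflection_start_iter, the weight lambda_refl, and the additive composition of the two components are all assumptions made purely for illustration.

import torch

def training_step(gaussians, camera, gt_image, iteration,
                  render_diffuse, render_reflected,
                  reflection_start_iter=5000, lambda_refl=1.0):
    """One hypothetical optimization step with a two-phase schedule."""
    # Phase 1: supervise the diffuse component alone so that the
    # underlying geometry and base appearance stabilize first.
    diffuse = render_diffuse(gaussians, camera)
    loss = torch.nn.functional.l1_loss(diffuse, gt_image)

    # Phase 2: once the diffuse pass is warmed up, add a loss on the
    # composed image so the same primitives also explain reflections.
    if iteration >= reflection_start_iter:
        reflected = render_reflected(gaussians, camera)
        full = diffuse + reflected  # additive composition assumed here
        loss = loss + lambda_refl * torch.nn.functional.l1_loss(full, gt_image)

    return loss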
Our unique scene representation makes it possible to reconstruct objects that are only viewed indirectly through specular reflections. We showcase this by recovering the cover of a book that is visible only through a mirror in the training views (left). From these views alone, our method accurately recovers the cover (right), unlike standard methods such as 3DGS (middle). Note that in these results, our method uses ground-truth input buffers, and Gaussians were initialized around the book cover for both methods.
Our method makes editable specular reflections possible. We show several examples of scene editing, including changing the base reflectance of objects (top), rotating objects (middle), and changing the roughness of a teapot (bottom). All edits are consistent with the rest of the scene, including multi-bounce reflections. Note that in these results, our method uses ground-truth input buffers.
@inproceedings{poirierginter:hal-05306634,
TITLE = {{Editable Physically-based Reflections in Raytraced Gaussian Radiance Fields}},
AUTHOR = {Poirier-Ginter, Yohan and Hu, Jeffrey and Lalonde, Jean-Fran{\c c}ois and Drettakis, George},
URL = {https://inria.hal.science/hal-05306634},
BOOKTITLE = {{SIGGRAPH Asia 2025 - 18th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia}},
ADDRESS = {Hong Kong SAR, China},
YEAR = {2025},
MONTH = Dec,
DOI = {10.1145/3757377.3763971},
KEYWORDS = {Gaussian splatting ; reconstruction ; differentiable rendering ; path tracing ; Computing methodologies $\rightarrow$ Rendering},
PDF = {https://inria.hal.science/hal-05306634v1/file/saconferencepapers25-163.pdf},
HAL_ID = {hal-05306634},
HAL_VERSION = {v1},
}
This research was co-funded by the European Union (EU) ERC Advanced Grant FUNGRAPH No 788065 and ERC Advanced Grant NERPHYS No 101141721. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the EU or the European Research Council. Neither the EU nor the granting authority can be held responsible for them. This research was also supported by NSERC grant RGPIN-2020-04799 and the Digital Research Alliance of Canada. The authors are grateful to Adobe and NVIDIA for generous donations, and to the OPAL infrastructure of Université Côte d’Azur.