Multi-view Relighting Using a
Geometry-Aware Network


Julien Philip (Université Côte d'Azur, Inria), Michaël Gharbi (Adobe), Tinghui Zhou (UC Berkeley), Alexei (Alyosha) Efros (UC Berkeley), George Drettakis (Université Côte d'Azur, Inria)


PAPER | VIDEO | SUPPLEMENTALS | TEST SCENES (new) | CODE (new)

PAPER

Multi-view Relighting Using a Geometry-Aware Network (pdf)
Multi-view Relighting Using a Geometry-Aware Network (Small Version, pdf)

Code

The code is released as a SIBR framework project at: gitlab.inria.fr/sibr/projects/outdoor_relighting

Video

Abstract

We propose the first learning-based algorithm that can relight images in a plausible and controllable manner given multiple views of an outdoor scene. In particular, we introduce a geometry-aware neural network that utilizes multiple geometry cues (normal maps, specular direction, etc.) and source and target shadow masks computed from a noisy proxy geometry obtained by multi-view stereo. Our model is a three-stage pipeline: two subnetworks refine the source and target shadow masks, and a third performs the final relighting. Furthermore, we introduce a novel representation for the shadow masks, which we call RGB shadow images. They reproject the colors from all views into the shadowed pixels and enable our network to cope with inaccuracies in the proxy and the non-locality of shadow-casting interactions. Acquiring large-scale multi-view relighting datasets for real scenes is challenging, so we train our network on photorealistic synthetic data. At training time, we also compute a noisy stereo-based geometric proxy, this time from the synthetic renderings. This allows us to bridge the gap between the real and synthetic domains. Our model generalizes well to real scenes. It can alter the illumination of drone footage, image-based renderings, textured mesh reconstructions, and even internet photo collections.
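To illustrate how the three stages described above fit together, here is a minimal PyTorch sketch of the pipeline structure. The module names, channel counts, and the plain convolutional blocks are illustrative assumptions made for this sketch and are not taken from the released implementation; the actual network is provided in the SIBR project linked below.

# Minimal sketch of a three-stage relighting pipeline: two subnetworks refine the
# source and target shadow masks, and a third performs the final relighting.
# All architectural details here are assumptions, not the authors' released code.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Small convolutional block used as a stand-in for each subnetwork.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class RelightingPipeline(nn.Module):
    def __init__(self):
        super().__init__()
        # Stages 1 and 2: refine the noisy source and target shadow masks,
        # conditioned on the input image and the geometry buffers.
        self.src_shadow_refiner = nn.Sequential(conv_block(3 + 3 + 6, 32),
                                                nn.Conv2d(32, 1, 1))
        self.tgt_shadow_refiner = nn.Sequential(conv_block(3 + 3 + 6, 32),
                                                nn.Conv2d(32, 1, 1))
        # Stage 3: final relighting from the image, the geometry cues,
        # and the two refined shadow masks.
        self.relighter = nn.Sequential(conv_block(3 + 6 + 2, 64),
                                       nn.Conv2d(64, 3, 1))

    def forward(self, image, geometry, src_rgb_shadow, tgt_rgb_shadow):
        # image:               (B, 3, H, W) input view
        # geometry:            (B, 6, H, W) e.g. normals and specular direction
        # src/tgt_rgb_shadow:  (B, 3, H, W) RGB shadow images from the proxy geometry
        src_mask = torch.sigmoid(self.src_shadow_refiner(
            torch.cat([image, src_rgb_shadow, geometry], dim=1)))
        tgt_mask = torch.sigmoid(self.tgt_shadow_refiner(
            torch.cat([image, tgt_rgb_shadow, geometry], dim=1)))
        relit = self.relighter(
            torch.cat([image, geometry, src_mask, tgt_mask], dim=1))
        return relit, src_mask, tgt_mask

# Example forward pass with random tensors (shapes only, for illustration).
if __name__ == "__main__":
    model = RelightingPipeline()
    relit, s, t = model(torch.rand(1, 3, 128, 128), torch.rand(1, 6, 128, 128),
                        torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))
    print(relit.shape)  # torch.Size([1, 3, 128, 128])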

Funding and Acknowledgements

The authors thank G. Kopanas, L. Boiron and S. Morgenthaler for the development of the 3DSMax to Mitsuba exporter and ground truth rendering system. Thanks to A. Bousseau and K. Sunkavalli for proofreading earlier drafts. Funding was provided by the European Commission grants EMOTIVE H2020 project No. 727188 and ERC Advanced Grant FUNGRAPH (No. 788065, http://fungraph.inria.fr). Drone footage copyright Drones Yucatan and Namyeska - info@namyeska.com.

Supplemental Materials and Results