Deep Blending for Free-Viewpoint Image-Based Rendering


Peter Hedman (University College London), Julien Philip (Inria, Université Côte d'Azur), True Price (UNC Chapel Hill), Jan-Michael Frahm (UNC Chapel Hill), George Drettakis (Inria, Université Côte d'Azur), Gabriel Brostow (University College London)

Paper

Abstract
Free-viewpoint image-based rendering (IBR) is a standing challenge. IBR methods combine warped versions of input photos to synthesize a novel view. The image quality of this combination is directly affected by geometric inaccuracies of multi-view stereo (MVS) reconstruction and by view- and image-dependent effects that produce artifacts when contributions from different input views are blended. We present a new deep learning approach to blending for IBR, in which we use held-out real image data to learn blending weights to combine input photo contributions. Our Deep Blending method requires us to address several challenges to achieve our goal of interactive free-viewpoint IBR navigation. We first need to provide sufficiently accurate geometry so the Convolutional Neural Network (CNN) can succeed in finding correct blending weights. We do this by combining two different MVS reconstructions with complementary accuracy vs. completeness tradeoffs. To tightly integrate learning in an interactive IBR system, we need to adapt our rendering algorithm to produce a fixed number of input layers that can then be blended by the CNN. We generate training data with a variety of captured scenes, using each input photo as ground truth in a held-out approach. We also design the network architecture and the training loss to provide high quality novel view synthesis, while reducing temporal flickering artifacts. Our results demonstrate free-viewpoint IBR in a wide variety of scenes, clearly surpassing previous methods in visual quality, especially when moving far from the input cameras.
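The core idea of the abstract above, a network predicting per-pixel weights used to blend a fixed number of warped input layers into one novel view, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the CNN is replaced by placeholder logits, and all names (`blend_layers`, `weight_logits`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax over the layer axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def blend_layers(layers, weight_logits):
    """Blend N warped input layers into one novel view.

    layers:        (N, H, W, 3) warped input-photo contributions
    weight_logits: (N, H, W) per-pixel scores (in the paper, from a CNN)
    returns:       (H, W, 3) blended image
    """
    w = softmax(weight_logits, axis=0)          # per-pixel weights summing to 1
    return (w[..., None] * layers).sum(axis=0)  # convex combination of layers

# Toy example: 4 warped layers of a 2x3 image, with logits favoring layer 0.
rng = np.random.default_rng(0)
layers = rng.random((4, 2, 3, 3))
logits = np.zeros((4, 2, 3))
logits[0] = 5.0                                 # strong preference for layer 0
out = blend_layers(layers, logits)
```

Because the weights are a per-pixel softmax, the result is always a convex combination of the input layers; with uniform logits the blend reduces to a simple average, while strongly peaked logits select a single layer almost exclusively.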
Results Video

Source Code and Datasets

Full source code and datasets are available at https://gitlab.inria.fr/sibr/projects/inside_out_deep_blending.git, as part of the SIBR system. Full documentation is at https://sibr.gitlabpages.inria.fr/ .

Supplemental Material

Method Comparisons
InsideOut Soft3D Selective IBR ULR Our Heuristic Blending RealityCapture
Ablation Analyses
InsideOut Refinement Leave-one-out Single Scene 30% Data Regression No Temporal Loss
No VGG Loss Single Mosaic Two Mosaics
Results for Individual Scenes
Comparisons with InsideOut, Soft3D, Selective IBR, ULR, Our Heuristic Blending, RealityCapture
Comparisons with InsideOut, Selective IBR, ULR, Our Heuristic Blending, RealityCapture
Comparisons with Soft3D, Selective IBR, ULR, Our Heuristic Blending, RealityCapture
Comparisons with Soft3D, ULR, Our Heuristic Blending, RealityCapture
Comparisons with Soft3D, ULR, Our Heuristic Blending, RealityCapture
Comparisons with InsideOut, Selective IBR, ULR, Our Heuristic Blending, RealityCapture
Comparisons with InsideOut, ULR, Our Heuristic Blending, RealityCapture
Comparisons with Selective IBR, ULR, Our Heuristic Blending, RealityCapture
Comparisons with Selective IBR, ULR, Our Heuristic Blending, RealityCapture
Comparisons with Selective IBR, ULR, Our Heuristic Blending, RealityCapture
Comparisons with Selective IBR, ULR, Our Heuristic Blending, RealityCapture
Comparisons with ULR, Our Heuristic Blending, RealityCapture
Comparisons with ULR, Our Heuristic Blending, RealityCapture
Comparisons with ULR, Our Heuristic Blending, RealityCapture
Comparisons with ULR, Our Heuristic Blending, RealityCapture
Comparisons with ULR, Our Heuristic Blending, RealityCapture
Comparisons with ULR, Our Heuristic Blending, RealityCapture
Comparisons with ULR, Our Heuristic Blending, RealityCapture
Comparisons with ULR, Our Heuristic Blending, RealityCapture