Crowdsampling the Plenoptic Function

ECCV 2020 Oral

Overview Video


Many popular tourist landmarks are captured in a multitude of online, public photos. These photos represent a sparse and unstructured sampling of the plenoptic function for a particular scene. In this paper, we present a new approach to novel view synthesis under time-varying illumination from such data: we reconstruct light fields under the varying viewing conditions (e.g., different illuminations) present in Internet photo collections. Our approach builds on the recent multi-plane image (MPI) format for representing local light fields under fixed viewing conditions. We introduce a new DeepMPI representation, motivated by observations on the sparsity structure of the plenoptic function, that allows for real-time synthesis of photorealistic views that are continuous both in space and across changes in lighting. Our method synthesizes the same compelling parallax and view-dependent effects as previous MPI methods, while simultaneously interpolating along changes in reflectance and illumination over time. We show how to learn a model of these effects in an unsupervised way from an unstructured collection of photos without temporal registration, demonstrating significant improvements over recent work in neural rendering.
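For context, MPI-based methods like the one described above render a novel view by alpha-compositing a stack of fronto-parallel RGBA planes from back to front with the standard "over" operator. The following is a minimal NumPy sketch of that compositing step only (function and variable names are ours for illustration, not the authors' code; warping planes into the target view and the DeepMPI appearance model are omitted):

```python
import numpy as np

def composite_mpi(planes):
    """Composite MPI RGBA planes back-to-front with the "over" operator.

    planes: array of shape (D, H, W, 4), ordered far-to-near,
            with RGB and alpha values in [0, 1].
    Returns an (H, W, 3) rendered image.
    """
    out = np.zeros(planes.shape[1:3] + (3,))
    for rgba in planes:  # iterate from the farthest plane to the nearest
        rgb, alpha = rgba[..., :3], rgba[..., 3:4]
        # "over" compositing: new color occludes what is behind it
        out = rgb * alpha + out * (1.0 - alpha)
    return out

# Tiny example: a far red plane fully opaque, a near green plane at 50% alpha.
planes = np.zeros((2, 2, 2, 4))
planes[0, ..., 0] = 1.0   # far plane: red
planes[0, ..., 3] = 1.0   # far plane: alpha = 1
planes[1, ..., 1] = 1.0   # near plane: green
planes[1, ..., 3] = 0.5   # near plane: alpha = 0.5
image = composite_mpi(planes)  # each pixel -> (0.5, 0.5, 0.0)
```

In a full pipeline each plane would first be homography-warped into the target camera before this compositing pass; only the blending itself is shown here.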


Below you will find links to videos of our results, along with comparisons against other methods.
Click the "play all" button in each section for better visualization.

Space-Time Photos

A gallery showing results of space-time interpolation

Hyperlapse Results

Diverse Results

A gallery showing novel view synthesis of landmarks under diverse illumination

Comparison with NRW

A gallery of results comparing with Neural Rerendering in the Wild (NRW) [Meshry et al. 2019]

Comparison with MUNIT

A gallery of results comparing with Multimodal Unsupervised Image-to-image Translation (MUNIT) [Huang et al. 2018]

Failure Cases

Examples of failure cases


Click to view the paper.

Click to view the supplementary document.


Code and dataset coming soon.


We thank Kai Zhang, Jin Sun, and Qianqian Wang for helpful discussions. This research was supported in part by the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program.