Novel View Synthesis: From Depth-Based Warping to Multi-Plane Images and Beyond
Novel view synthesis is a long-standing problem at the intersection of computer graphics and computer vision.
Seminal work in this field dates back to the 1990s, with early methods interpolating either between corresponding pixels in the input images or between rays in space.
Recent deep learning methods have enabled tremendous improvements in the quality of the results and have brought renewed popularity to the field.
The teaser above shows novel view synthesis from different recent methods. From left to right: Yoon et al. [1], Mildenhall et al. [2], Wiles et al. [3], and Choi et al. [4]. Images and videos courtesy of the respective authors.
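To make the depth-based warping mentioned in the title concrete, here is a minimal NumPy sketch of the classic geometric baseline: source pixels are unprojected to 3D using a per-pixel depth map and then reprojected into a novel camera. The function and variable names are illustrative only, not taken from any of the methods above.

```python
import numpy as np

def warp_to_novel_view(depth, K, R, t):
    """Depth-based warping: unproject source pixels to 3D with per-pixel
    depth, then reproject them into a novel camera (R, t).

    depth: (H, W) depth map in the source camera frame
    K:     (3, 3) camera intrinsics (assumed shared by both views)
    R, t:  rotation (3, 3) and translation (3,) from source to novel view
    Returns (H, W, 2) coordinates of each source pixel in the novel view.
    """
    H, W = depth.shape
    # Homogeneous pixel grid of the source image: columns are (u, v, 1).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(np.float64)

    # Unproject: X = depth * K^{-1} (u, v, 1)^T.
    points = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)

    # Rigid transform into the novel camera, then perspective projection.
    proj = K @ (R @ points + t.reshape(3, 1))
    uv = (proj[:2] / proj[2:]).T.reshape(H, W, 2)
    return uv
```

In practice this forward mapping leaves holes at disocclusions and cracks where surfaces stretch, which is part of the motivation for the multi-plane images and learned representations covered in the talks.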
We would like to thank our speakers again for their excellent talks, which made this tutorial a great success.
You can also use the links in the table below to jump to specific talks.
We will share the slides from the talks soon.
Goal of the Tutorial
In this tutorial we will first introduce the problem, offering context and a taxonomy of the different methods. We will then hear talks from the researchers behind the most recent approaches in the field.
At the end of the tutorial we will have a roundtable discussion with all the speakers.
Date and Location
The tutorial took place on June 14th, 2020, as part of CVPR 2020.
Contact us here.
Round Table Discussion With the Invited Speakers
[Video]
References
[1] Yoon, Kim, Gallo, Park, and Kautz, "Novel View Synthesis of Dynamic Scenes with Globally Coherent Depths from a Monocular Camera," IEEE CVPR 2020.
[2] Mildenhall, Srinivasan, Tancik, Barron, Ramamoorthi, and Ng, "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis," arXiv 2020.
[3] Wiles, Gkioxari, Szeliski, and Johnson, "SynSin: End-to-end View Synthesis from a Single Image," IEEE CVPR 2020.
[4] Choi, Gallo, Troccoli, Kim, and Kautz, "Extreme View Synthesis," IEEE ICCV 2019.