CVPR 2020 Tutorial on

Novel View Synthesis: From Depth-Based Warping to Multi-Plane Images and Beyond

Novel view synthesis is a long-standing problem at the intersection of computer graphics and computer vision. Seminal work in this field dates back to the 1990s, with early methods proposing to interpolate either between corresponding pixels in the input images or between rays in space. Recent deep learning methods have enabled tremendous improvements in the quality of the results and brought renewed popularity to the field. The teaser above shows novel view synthesis results from several recent methods. From left to right: Yoon et al. [1], Mildenhall et al. [2], Wiles et al. [3], and Choi et al. [4]. Images and videos courtesy of the respective authors.

>>> If you missed it, you can watch the full replay here <<<

We would like to thank our speakers again for their great talks, which made this tutorial a success. You can also click the links in the table below to jump to specific talks. We will share the slides from the talks soon.

Goal of the Tutorial

In this tutorial we will first introduce the problem, offering context and a taxonomy of the different methods. We will then have talks by the researchers behind the most recent approaches in the field. At the end of the tutorial we will have a roundtable discussion with all the speakers.

Date and Location

The tutorial took place on June 14th, 2020, as part of CVPR 2020.
Contact us here.

Organizers

Orazio Gallo
Varun Jampani

Invited Speakers

Rick Szeliski
Pratul Srinivasan (UC Berkeley)
Olivia Wiles (U. of Oxford)
Nima Kalantari (Texas A&M)

Program with Links to the Videos of the Talks

Time           Talk Title                                                               Speaker
9:20 - 9:50    Novel View Synthesis: A Gentle Introduction
               [Video] [Slides (pptx)]
9:50 - 10:20   Reflections on Image-Based Rendering                                     Rick Szeliski
               [Video] [Slides (pdf)]
10:20 - 10:50  SynSin: Single Image View Synthesis                                      Olivia Wiles
               [Video] [Slides (pdf)]
10:50 - 11:00  Coffee break (10m)
11:00 - 11:30  View Synthesis with Multiplane Images
               [Video] [Slides]
11:30 - 12:00  View Synthesis and Immersive Mixed Reality for VR Devices
               [Video] [Slides (pptx)]
12:00 - 12:45  Lunch break (45m)
12:45 - 13:15  View and Frame Interpolation for Consumer Light Field Cameras            Nima Kalantari
               [Video] [Slides (pptx)]
13:15 - 13:45  NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis   Pratul Srinivasan
               [Video] [Slides (pdf)] [Slides (key)]
13:45 - 14:15  Novel View Synthesis from Dynamic Scenes                                 Jae Shin Yoon
               [Video] [Slides (pdf)]
14:15 - 14:30  Coffee break (15m)
14:30 - 15:30  Round Table Discussion with the Invited Speakers


[1] Yoon, Kim, Gallo, Park, and Kautz, "Novel View Synthesis of Dynamic Scenes with Globally Coherent Depths from a Monocular Camera," IEEE CVPR 2020.
[2] Mildenhall, Srinivasan, Tancik, Barron, Ramamoorthi, and Ng, "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis," arXiv 2020.
[3] Wiles, Gkioxari, Szeliski, and Johnson, "SynSin: End-to-end View Synthesis from a Single Image," IEEE CVPR 2020.
[4] Choi, Gallo, Troccoli, Kim, and Kautz, "Extreme View Synthesis," IEEE ICCV 2019.