Fast and photorealistic view synthesis for real-world scenes
Applications such as AR/VR and multi-view 3D displays require synthesizing virtual views from a limited set of input views. A high frame rate and low latency are typically needed to give the user a realistic impression when looking around a virtual scene. This PhD project will investigate efficient methods for generating photorealistic views from stereo or RGB-D input. Problems to address include view extrapolation, occlusion hole filling, and implementation optimizations such as parallelization. Both classic and deep-learning-based methods will be explored.
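To make the problem concrete, the sketch below illustrates the forward-warping step at the core of classic depth-image-based rendering (DIBR) from a single RGB-D input: pixels are back-projected using the depth map, transformed into a virtual camera, and re-projected with z-buffering. The function name, pinhole-camera model, and toy test data are illustrative assumptions, not part of the project description; the returned hole mask marks the occlusion regions a subsequent hole-filling step would have to inpaint.

```python
import numpy as np

def warp_rgbd(rgb, depth, K, R, t):
    """Forward-warp an RGB-D image into a virtual view (illustrative sketch).

    rgb:   H x W x 3 color image
    depth: H x W depth map (in the input camera's frame)
    K:     3 x 3 pinhole intrinsics (assumed shared by both cameras)
    R, t:  rotation/translation from the input to the virtual camera
    Returns the warped image and a boolean mask of occlusion holes.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Back-project every pixel to a 3-D point in the input camera frame.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    pts = np.linalg.inv(K) @ pix.astype(np.float64) * depth.reshape(1, -1)
    # Transform into the virtual camera and project back to the image plane.
    proj = K @ (R @ pts + t.reshape(3, 1))
    z = proj[2]
    valid = z > 1e-6
    x = np.round(proj[0] / np.where(valid, z, 1.0)).astype(int)
    y = np.round(proj[1] / np.where(valid, z, 1.0)).astype(int)
    inside = valid & (x >= 0) & (x < W) & (y >= 0) & (y < H)

    out = np.zeros_like(rgb)
    zbuf = np.full((H, W), np.inf)
    cols = rgb.reshape(-1, 3)
    # Z-buffering: where several source pixels land on the same target
    # pixel, the nearest surface wins.
    for i in np.flatnonzero(inside):
        if z[i] < zbuf[y[i], x[i]]:
            zbuf[y[i], x[i]] = z[i]
            out[y[i], x[i]] = cols[i]
    holes = np.isinf(zbuf)  # disoccluded pixels with no source sample
    return out, holes
```

The per-pixel loop is written for clarity; in practice this scatter step is a natural target for the parallelization mentioned above, since each source pixel can be warped independently (with an atomic or sorted z-buffer update).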