
Project

3D reconstruction using mobile devices for VR playback (CSC-2019-22)

In this project, we will investigate computational photography methods to capture 3D environments and reconstruct 3D worlds from a large set of images taken with a mobile device. This can be done using algorithms known as Structure from Motion (SfM), visual SLAM or Multi-View Stereo (MVS). However, these popular frameworks typically focus on 3D scanning of small objects. We will extend them to reconstruct entire environments, indoor or outdoor, while taking robustness and efficiency of implementation into account. Existing approaches to 3D reconstruction typically impose strict geometric boundary conditions (such as Manhattan-world assumptions when reconstructing the interior of a room), and their performance degrades rapidly when the scene deviates from this model, for example through arches and dormers in a house or irregular natural environments outdoors. Our aim is to relax these restrictions and allow robust scanning of arbitrary environments with a typical smartphone camera.

In addition, learning-based (AI) models hold great potential for 3D reconstruction, for example in low-level feature extraction, feature matching and semantic labelling, and can provide a more general model for 3D tasks. We will explore this potential of AI for 3D reconstruction.

Once such a 3D reconstruction is available, it allows a remote user to virtually explore the environment as if they were really there, for example for virtual tours or virtual tourism. If the reconstruction can be made in real time, it also enables virtual presence at live events, such as concerts or sports events.
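To make the classical pipeline concrete, the sketch below shows a minimal two-view Structure-from-Motion step in Python with OpenCV: extract features, match them, estimate the relative camera pose from the essential matrix, and triangulate a sparse point cloud. The image paths and the intrinsic matrix K are hypothetical placeholders, and OpenCV is used purely for illustration; a full SfM/SLAM system of the kind this project targets would add incremental multi-view registration, bundle adjustment and dense MVS on top of this.

```python
# Minimal two-view SfM sketch (illustrative only).
# Assumes two overlapping smartphone photos and known camera intrinsics K,
# e.g. from a one-off calibration; file names and K values are placeholders.
import numpy as np
import cv2

K = np.array([[1500.0,    0.0, 960.0],
              [   0.0, 1500.0, 540.0],
              [   0.0,    0.0,   1.0]])  # hypothetical intrinsics, 1920x1080 camera

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Low-level feature extraction: ORB keypoints and binary descriptors.
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2. Feature matching: brute-force Hamming matcher with Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m[0] for m in matches
        if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 3. Relative pose from the essential matrix; RANSAC rejects outlier matches.
E, mask = cv2.findEssentialMat(pts1, pts2, K,
                               method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 4. Triangulate inlier correspondences into a sparse 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at the origin
P2 = K @ np.hstack([R, t])                          # second camera from (R, t)
inliers = mask.ravel() > 0
pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
pts3d = (pts4d[:3] / pts4d[3]).T  # homogeneous -> Euclidean coordinates

print(f"Reconstructed {len(pts3d)} sparse 3D points from {len(good)} matches")
```

The same extract-match-pose-triangulate loop, repeated incrementally over many views and refined by bundle adjustment, is what scales this two-view step up to a full scene reconstruction.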

Date: 12 May 2021 → Today
Keywords: 3D Reconstruction, Simultaneous Localization and Mapping (SLAM), Sensor Fusion, Navigation, Artificial Intelligence
Disciplines: Computer vision, Virtual reality and related simulation, Computer graphics
Project type: PhD project