
Project

6D object pose tracking

With the rapid development of 3D acquisition technologies, 3D sensors are becoming increasingly available and affordable, including various types of 3D scanners, LiDARs, and RGB-D cameras (such as Kinect and RealSense). The 3D data acquired by these sensors provide rich geometric, shape, and scale information, which makes them suitable for applications such as robotics, autonomous driving, virtual reality, and augmented reality. In this research, we focus on three aspects of point cloud processing: point cloud registration, object detection, and semantic segmentation. Point cloud registration is the task of finding the transformation between the coordinate systems of two separate point clouds. It is key for Simultaneous Localization and Mapping (SLAM) and 3D reconstruction of scenes, and it has become central to vision-based autonomous driving. In robotics applications such as autonomous driving, we are also interested in detecting objects in 3D space, which is fundamental for planning a safe route during motion planning. Point cloud semantic segmentation (PCSS) is the task of associating each point of a point cloud with a semantic label; it is a basic and critical task for a robot to understand the scene. Besides accuracy, processing speed is also critical for these applications. We aim to propose a framework that builds an accurate 3D semantic map with object location information in real time by combining these three tasks.
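The registration step can be illustrated with a minimal sketch: assuming point correspondences between the two clouds are already known, the rigid transform aligning them can be recovered with the standard SVD-based (Kabsch) solution used inside ICP-style pipelines. This is only an illustrative example, not the framework proposed in this project; the function name and synthetic data below are hypothetical.

```python
# Illustrative sketch (not the project's method): rigid alignment of two
# point clouds with known correspondences via the SVD/Kabsch solution.
import numpy as np


def rigid_transform(source: np.ndarray, target: np.ndarray):
    """Estimate rotation R and translation t such that R @ source_i + t ~ target_i.

    source, target: (N, 3) arrays of corresponding points.
    """
    src_centroid = source.mean(axis=0)
    tgt_centroid = target.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (source - src_centroid).T @ (target - tgt_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    # Reflection correction to keep a proper rotation (det(R) = +1).
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t


# Usage: recover a known transform from synthetic correspondences.
rng = np.random.default_rng(0)
src = rng.random((100, 3))
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(true_R) < 0:
    true_R[:, 0] *= -1
true_t = np.array([0.5, -0.2, 1.0])
tgt = src @ true_R.T + true_t
R, t = rigid_transform(src, tgt)
assert np.allclose(R, true_R, atol=1e-6) and np.allclose(t, true_t, atol=1e-6)
```

In practice the correspondences are unknown, so methods such as ICP alternate between nearest-neighbor matching and this closed-form alignment step.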

Date: 29 May 2020 → Today
Keywords: SLAM, object tracking, 3D vision
Disciplines: Computer vision, Mobile and distributed robotics, Adaptive agents and intelligent robotics
Project type: PhD project