
Project

3D hyperspectral point cloud processing for smarter UAVs

Nowadays, Unmanned Aerial Vehicles (UAVs) are often employed as measuring or monitoring devices in a wide range of remote sensing applications: e.g., in precision agriculture (observing the variability between and within crops), in infrastructure inspection (detecting defects) or in cultural heritage digitization (creating a digital twin). Correctly interpreting UAV-captured image data is challenging, because the appearance of a scene in the images depends strongly on, for example, the sun-sensor viewing angles, the intensity of the incident solar radiation and the geometry of the underlying surface. Often, the collected UAV data is only found to be of insufficient quality during analysis, for example due to poor image quality (e.g., saturation, motion blur or focal blur) or missing data (e.g., areas that were not scanned, viewing angles missing for Bidirectional Reflectance Distribution Function (BRDF) analysis, or a lack of resolution for certain regions of interest). This not only causes frustration, but also wastes effort, cost and time. Moreover, it negatively affects research that depends on time-critical or time-bounded measurements: e.g., diurnal observations, the limited time window of crop growth or favourable weather conditions.

The main goal of this research is to overcome the aforementioned problems by improving the interpretation and analysis of UAV imaging data in an efficient way. To reach this goal, we rely on advances in sensor pose (position and orientation) estimation through sensor fusion between Laser Imaging Detection And Ranging (LIDAR) scanners and hyperspectral cameras. The proposed multi-modal sensor fusion yields 3D hyperspectral point clouds and can moreover serve as a fast, high-quality alternative to common photogrammetry workflows. The key novelties are 1) performing early fusion between LIDAR and hyperspectral data (coming from one or multiple cameras), rather than fusion in a post-processing step (a minimal sketch of such a fusion step is given below), and 2) performing both computationally and memory-efficient analysis of hyperspectral point cloud data, which brings us a step closer to autonomous UAVs. We envision the following main benefits from the proposed research: 1) exploiting the complementarity between LIDAR and hyperspectral imaging, i.e. combining accurate and fast 3D mapping through real-time Simultaneous Localization And Mapping (SLAM) with the high discriminative power of hyperspectral data for distinguishing materials or plant stress; 2) more information for analysis or classification tasks: not only height and hyperspectral signatures, but also BRDF models with detailed surface orientation (e.g., derived from normal vectors) can be taken into account; 3) improved online monitoring for the (automatic or human) pilot (e.g., on-site quality control, adaptive path planning); and 4) enabling new, more dynamic research, such as scanning fields multiple times a day to better understand BRDF effects as a function of the changing position of the sun.
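To make the early-fusion idea concrete, the sketch below projects a hyperspectral image cube onto a LIDAR point cloud using a known sensor pose (e.g., from SLAM) and a simple pinhole camera model, attaching a full spectral signature to every visible 3D point. This is a minimal illustration under simplifying assumptions (no occlusion handling, no lens distortion, nearest-pixel sampling); all function and variable names are hypothetical and do not correspond to the project's actual pipeline.

```python
"""Minimal sketch of early LIDAR-hyperspectral fusion, assuming a pinhole
camera model and a known world-to-camera pose. Illustrative only."""
import numpy as np

def fuse_lidar_hyperspectral(points_world, cube, K, R, t):
    """Attach a hyperspectral signature to each visible LIDAR point.

    points_world : (N, 3) LIDAR points in the world frame.
    cube         : (H, W, B) hyperspectral image cube (B spectral bands).
    K            : (3, 3) camera intrinsic matrix.
    R, t         : world-to-camera rotation (3, 3) and translation (3,).

    Returns the (M, 3) fused points and their (M, B) spectra; points that
    project outside the image or lie behind the camera are dropped.
    Occlusion handling is deliberately omitted in this sketch.
    """
    # Transform points into the camera frame: p_cam = R @ p_world + t.
    points_cam = points_world @ R.T + t
    in_front = points_cam[:, 2] > 0  # keep points in front of the camera

    # Pinhole projection to (sub)pixel coordinates, then nearest pixel.
    uvw = points_cam[in_front] @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    cols = np.round(uv[:, 0]).astype(int)
    rows = np.round(uv[:, 1]).astype(int)

    h, w, _ = cube.shape
    in_image = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)

    fused_points = points_world[in_front][in_image]
    spectra = cube[rows[in_image], cols[in_image]]  # (M, B) signatures
    return fused_points, spectra

# Toy usage: 1000 random points, a 64-band cube and an identity pose.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1, 1, (1000, 3)) + np.array([0.0, 0.0, 5.0])
    cube = rng.random((480, 640, 64)).astype(np.float32)
    K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    fused, spectra = fuse_lidar_hyperspectral(pts, cube, K, np.eye(3), np.zeros(3))
    print(fused.shape, spectra.shape)
```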

Date: 1 Sep 2020 → Today
Keywords: hyperspectral imaging, Unmanned Aerial Vehicle, LIDAR, sensor fusion
Disciplines: Data visualisation and imaging, Computer vision