Map Construction Based on LiDAR Vision Inertial Multi-Sensor Fusion

Summary

The research proposes a SLAM-based mapping system for autonomous positioning and map construction in complex environments. It fuses visual information, LiDAR point clouds, and IMU data, estimating the pose and optimizing the motion trajectory with a factor graph. The map is then updated from the motion estimate and the laser point cloud data to construct a high-precision map.

Objective

Simultaneous localization and mapping (SLAM) algorithms are widely used for mobile robot navigation. Laser-based SLAM techniques produce high-resolution results with accurate obstacle distance information, but they fall short in providing reliable motion estimation. Vision-based SLAM techniques produce rich image feature information but have low positioning accuracy, since the image sensor is sensitive to lighting conditions. The idea of this research is to combine a wheel odometer and an inertial measurement unit (IMU) with these SLAM techniques in a multi-sensor fusion scheme and thereby map the environment.

Method

This is an overview of the system:

The feature extraction process is inspired by lightweight and ground-optimized LiDAR odometry and mapping (LeGO-LOAM), which divides the LiDAR point cloud into ground points and segmented points. These are then further classified into edge points and planar (flat surface) points.
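
To make this step concrete, below is a minimal sketch of LOAM-style feature classification, assuming a single ring of range measurements in a NumPy array; the function name, window size, and thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np

def classify_features(ranges, edge_thresh=0.5, planar_thresh=0.05, window=5):
    """Classify scan points as edge or planar using a LOAM-style
    smoothness (curvature) score over each point's neighborhood.

    ranges: 1-D array of range measurements from one LiDAR scan ring.
    Thresholds and window size are placeholders, not values from the paper.
    """
    n = len(ranges)
    curvature = np.full(n, np.nan)
    for i in range(window, n - window):
        # Sum of range differences to the 2*window neighbors: large values
        # indicate sharp geometry (edges), small values flat surfaces.
        neighborhood = ranges[i - window:i + window + 1]
        diff = np.sum(neighborhood) - (2 * window + 1) * ranges[i]
        curvature[i] = (diff / ranges[i]) ** 2

    edge_idx = np.where(curvature > edge_thresh)[0]
    planar_idx = np.where(curvature < planar_thresh)[0]
    return edge_idx, planar_idx
```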

To reduce computational cost, the point clouds are screened to remove outlier points. Visual features, filtered using the intensity values of the point cloud, are then combined with the laser point cloud features to improve positioning accuracy. A LiDAR-inertial system based on factor graph optimization is used to construct a 3D point cloud map.
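
One way to set up such a LiDAR-inertial factor graph is with the GTSAM library's IMU preintegration, sketched below; GTSAM is an assumed stand-in (the paper does not state its solver), and the noise covariances and measurements are dummy values for illustration.

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import B, V, X

# IMU preintegration parameters (covariances are illustrative guesses).
params = gtsam.PreintegrationParams.MakeSharedU(9.81)  # Z-up, gravity along -Z
params.setAccelerometerCovariance(np.eye(3) * 1e-2)
params.setGyroscopeCovariance(np.eye(3) * 1e-4)
params.setIntegrationCovariance(np.eye(3) * 1e-8)

# Accumulate raw IMU samples between two consecutive LiDAR keyframes.
pim = gtsam.PreintegratedImuMeasurements(params, gtsam.imuBias.ConstantBias())
for _ in range(100):                    # placeholder: 100 Hz samples over 1 s
    accel = np.array([0.0, 0.0, 9.81])  # dummy stationary accelerometer reading
    gyro = np.zeros(3)
    pim.integrateMeasurement(accel, gyro, 0.01)

# The preintegrated block becomes a single factor linking the two keyframe
# poses, their velocities, and the IMU bias in the graph.
graph = gtsam.NonlinearFactorGraph()
graph.add(gtsam.ImuFactor(X(0), V(0), X(1), V(1), B(0), pim))
```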

The point cloud from the LiDAR is processed and irrelevant points are eliminated based on whether they lie at the edge of the LiDAR's field of view, have extreme reflection intensities, or are occluded by other objects.
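
A minimal sketch of this screening step might look like the following; the field-of-view margin, intensity limits, and range-jump occlusion test are assumed criteria for illustration, not thresholds quoted from the paper.

```python
import numpy as np

def screen_points(points, intensities, azimuths,
                  fov_margin_deg=2.0, fov_deg=360.0,
                  min_intensity=1.0, max_intensity=250.0,
                  occlusion_jump=0.3):
    """Drop points at the edge of the field of view, with extreme
    reflection intensities, or likely occluded by a nearer object.

    points: (N, 3) array; intensities, azimuths: (N,) arrays.
    All thresholds are illustrative, not values from the paper.
    """
    ranges = np.linalg.norm(points, axis=1)

    # 1. Points near the edge of the LiDAR's angular field of view.
    in_fov = (azimuths > fov_margin_deg) & (azimuths < fov_deg - fov_margin_deg)

    # 2. Points with extreme (too weak or saturated) reflection intensity.
    good_intensity = (intensities > min_intensity) & (intensities < max_intensity)

    # 3. Points just past a large range jump are likely occluded by the
    #    nearer surface that caused the jump.
    jump = np.abs(np.diff(ranges, prepend=ranges[0]))
    not_occluded = jump < occlusion_jump * ranges

    keep = in_fov & good_intensity & not_occluded
    return points[keep], intensities[keep]
```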

A map construction and optimization algorithm is used to generate the global map, and a loop closure detection module registers feature points between revisited locations to obtain their relative pose. Factor graph optimization is then used to estimate the robot's state and trajectory.
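
The sketch below shows what that back end could look like, again using GTSAM as an assumed stand-in: between-factors from scan matching chain the keyframe poses, and a loop closure factor from the detection module constrains the trajectory before optimization. The poses, step sizes, and noise sigmas are all placeholders.

```python
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Noise models (sigmas are illustrative placeholders).
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.01))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.1))
loop_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.05))

# Anchor the first keyframe pose at the origin.
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), prior_noise))
initial.insert(0, gtsam.Pose3())

# Chain keyframes with relative poses from scan matching (dummy 1 m steps).
step = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
for i in range(1, 5):
    graph.add(gtsam.BetweenFactorPose3(i - 1, i, step, odom_noise))
    initial.insert(i, gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(float(i), 0.0, 0.0)))

# Loop closure: the detection module reports a relative pose between
# keyframe 4 and keyframe 0 after registering their feature points.
loop_pose = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(-4.0, 0.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(4, 0, loop_pose, loop_noise))

# Optimize the whole trajectory.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(4).translation())
```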

Result

Mapping and localization experiments were conducted to verify the performance of the algorithm. In these experiments, the proposed method achieved lower error than the other mainstream methods it was compared against.

Inference

This is a comprehensive 3D vision research paper that combines visual images, LiDAR data, and IMU measurements in a highly precise map-generation algorithm. It offers substantial mathematical insight into the problem and step-by-step guidelines for implementation. It could be a useful reference for incorporating 3D vision mapping onto the WARG UAV.

References

  1. https://www.mdpi.com/2032-6653/12/4/261/pdf

  2. https://doi.org/10.1007/s10514-016-9548-2

  3. https://doi.org/10.3390/s20072068

  4. https://doi.org/10.3390/app9102105