Summary

The research paper presents a stereo-vision-based mapping algorithm for modelling hazards in an urban environment for a mobile robot. The algorithm generates a disparity map from the stereo image pair, and the depth (range) coordinates computed from it are passed into a 3D grid generation algorithm to model the environment. This 3D grid is segmented into planes, which are checked against the plane the robot sits on to determine whether each region is reachable. From this information, a 2D local safety map is created and used for the navigation of the robot.

Objective

A wheeled mobile robot needs to detect obstacles, drop-offs, and inclines in order to navigate. The objective of the research is to present a stereo-vision-based mapping algorithm that uses cameras alone to detect inclines, drop-offs, and static obstacles. The mapping algorithm identifies safe and unsafe regions in the robot's local surroundings, allowing the mobile robot to navigate them safely.

Method

First, a disparity map is computed from the image pair returned by the stereo camera setup and converted into range readings, while an algorithm known as 6-DOF SLAM provides the robot's pose to the mapping algorithm. During post-processing, range readings that differ significantly from neighbouring range readings are removed, since they are likely to be incorrect. This localization step establishes the robot's pose in 3D space and returns a set of 3D range points in the global coordinate frame.
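
To make the geometry of this step concrete, below is a minimal sketch of recovering camera-frame 3D points from a disparity map. The calibration values (fx, baseline, cx, cy) are illustrative assumptions, not values from the paper.

    # Minimal sketch: convert a disparity map into camera-frame 3D points.
    # fx, baseline, cx, cy are assumed calibration values for illustration.
    import numpy as np

    def disparity_to_points(disparity, fx=700.0, baseline=0.12, cx=320.0, cy=240.0):
        """disparity: (H, W) array in pixels. Returns an (N, 3) point array."""
        v, u = np.nonzero(disparity > 0)       # pixels with a valid disparity
        z = fx * baseline / disparity[v, u]    # depth from stereo geometry
        x = (u - cx) * z / fx                  # back-project through the pinhole model
        y = (v - cy) * z / fx
        return np.column_stack((x, y, z))

These camera-frame points would then be transformed into the global coordinate frame using the pose returned by the SLAM step.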

The next step updates a 3D model, consisting of a 3D occupancy grid and a 3D point cloud, with the new 3D range points using an occupancy algorithm. For each 3D range point, a ray is cast from the camera to the point, and the voxels along the ray have their log-odds probability (LOP) of occupancy updated: a voxel starts with a LOP of zero, which is decremented when a ray passes through it (evidence that the voxel is free) and incremented when a ray terminates in it (evidence that the voxel is occupied). Each voxel also keeps a list of its points, ordered by their distance from the camera, and voxels with a high probability of occupancy are marked as occupied in the 3D occupancy grid.
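
The log-odds update can be sketched as follows. The grid resolution, the HIT/MISS increments, and the uniform ray sampling are assumptions chosen for illustration, not the paper's exact implementation.

    # Minimal sketch of a log-odds occupancy update for one range point.
    # RES, HIT, and MISS are assumed values, not taken from the paper.
    import numpy as np

    RES, HIT, MISS = 0.1, 0.85, -0.4   # voxel size (m) and log-odds increments

    def update_ray(grid, camera, point):
        """Decrement voxels the ray passes through, increment the endpoint voxel."""
        steps = max(int(np.linalg.norm(point - camera) / RES), 1)
        for t in np.linspace(0.0, 1.0, steps, endpoint=False):
            idx = tuple(((camera + t * (point - camera)) / RES).astype(int))
            grid[idx] += MISS              # ray passed through: evidence of free space
        grid[tuple((point / RES).astype(int))] += HIT   # ray ends here: occupied

Voxels whose accumulated log-odds exceed a chosen threshold would then be marked occupied in the grid.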

Next, the 3D model is segmented into potentially traversable ground regions, and a plane is fitted to the points of each segment using the linear least squares algorithm, as in the sketch below.
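
A linear least-squares plane fit of the form z = ax + by + c over a segment's points might look like the following sketch; the function name and interface are hypothetical.

    # Minimal sketch: fit a plane z = a*x + b*y + c to a segment's points
    # and recover the unit surface normal used in later safety checks.
    import numpy as np

    def fit_plane(points):
        """points: (N, 3) array. Returns (a, b, c) and the unit normal."""
        A = np.column_stack((points[:, 0], points[:, 1], np.ones(len(points))))
        (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
        normal = np.array([-a, -b, 1.0])        # normal of a*x + b*y - z + c = 0
        return (a, b, c), normal / np.linalg.norm(normal)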

Next, a local safety map is built. The segment in which the robot is located is assumed to be safe. Based on the 3D grid, all regions reachable by the robot from its original segment are labelled Safe; all other regions are labelled Non-ground. This is done by labelling individual segments as Level, Inclined, or Unknown. The boundary cells of segments labelled Level or Inclined are checked against neighbouring Unknown boundary cells to detect drop-off edges. In addition, geometric parameters such as the slope of each fitted plane are used to calculate whether a given incline is navigable by the robot, as sketched below. The output of this step is a 2D local safety map annotated with these safety labels.
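
As an example of the navigability check, the tilt of a segment's fitted plane (the angle between its normal and vertical) can be compared against the robot's maximum climbable incline. The thresholds below are assumed robot capabilities, not values from the paper.

    # Minimal sketch: label a segment from its fitted unit plane normal.
    # LEVEL_DEG and MAX_INCLINE_DEG are assumed thresholds for illustration.
    import numpy as np

    LEVEL_DEG = 3.0          # tilt below this counts as level ground
    MAX_INCLINE_DEG = 15.0   # steepest incline the robot can climb

    def label_segment(unit_normal):
        """Return 'Level', 'Inclined', or 'Non-ground' for a plane normal."""
        tilt = np.degrees(np.arccos(abs(unit_normal[2])))   # angle from vertical
        if tilt <= LEVEL_DEG:
            return "Level"
        if tilt <= MAX_INCLINE_DEG:
            return "Inclined"
        return "Non-ground"   # too steep to climb: treated as an obstacle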

Result

The error rate of the stereo safety map was analyzed against a ground truth safety map. The false-negative (FN) rates are very low; however, the false-positive (FP) rate is quite high, which means the robot may detect an obstacle where there is none.

Furthermore, the algorithm takes several frames to detect objects beyond 4 to 5 metres, which makes it unreliable for detecting objects beyond that range.

Inference

This research paper gives a good picture of how a stereo-vision-based model can be implemented in distinct steps. While the algorithm is not well suited to detecting objects beyond 4-5 metres or to operating while airborne, it is a useful example of how to approach the development of a 3D vision mapping algorithm. Because it uses a stereo-camera-based system, it could be implemented on the WARG UAV fairly easily, and a few of the research's steps could be combined with other mapping steps to produce a suitable final mapping algorithm.

References

  1. A. Murarka and B. Kuipers, “A stereo vision based mapping algorithm for detecting inclines, drop-offs, and obstacles for safe local navigation,” in IROS, 2009. https://web.eecs.umich.edu/~kuipers/papers/Murarka-iros-09.pdf

  2. A. Murarka, M. Sridharan, and B. Kuipers, “Detecting obstacles and drop-offs using stereo and motion cues for safe local motion,” in IROS, 2008.

  3. N. Heckman, J. Lalonde, N. Vandapel, and M. Hebert, “Potential negative obstacle detection by occlusion labeling,” in IROS, 2007.

  4. J.-S. Gutmann, M. Fukuchi, and M. Fujita, “3D perception and environment map generation for humanoid robot navigation,” Intl. Journal of Robotics Research, 2008.

  5. A. Rankin, A. Huertas, and L. Matthies, “Evaluation of stereo vision obstacle detection algorithms for off-road autonomous navigation,” in 32nd AUVSI Symposium on Unmanned Systems, 2005.

  6. A. Murarka, “Building safety maps using vision for safe local mobile robot navigation,” Ph.D. dissertation, The University of Texas at Austin, 2009.
