Map building fusing acoustic and visual information using autonomous underwater vehicles
Date
2012-10
Authors
Kunz, Clayton G.
Singh, Hanumant
Abstract
We present a system for automatically building 3-D maps of underwater terrain fusing
visual data from a single camera with range data from multibeam sonar. The six-degree-of-freedom
location of the camera relative to the navigation frame is derived as part of the
mapping process, as are the attitude offsets of the multibeam head and the on-board velocity
sensor. The system uses pose graph optimization and the square root information smoothing
and mapping framework to simultaneously solve for the robot’s trajectory, the map, and
the camera location in the robot’s frame. Matched visual features are treated within the
pose graph as images of 3-D landmarks, while multibeam bathymetry submap matches are
used to impose relative pose constraints linking robot poses from distinct tracklines of the
dive trajectory. The navigation and mapping system works under a variety of
deployment scenarios and on robots with diverse sensor suites. Results of using the system to
map the structure and appearance of a section of coral reef are presented using data acquired
by the Seabed autonomous underwater vehicle.
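As an illustration of the pose-graph formulation the abstract describes, the sketch below builds a small graph with the GTSAM library, an implementation of square-root information smoothing and mapping. It is illustrative only: the noise sigmas, camera intrinsics, pixel measurements, and submap-match transform are invented, and the factor types are assumptions about how such a system could be wired together, not the authors' implementation.

```python
# Minimal pose-graph sketch, assuming the GTSAM library (pip install gtsam).
# Robot poses X(i) are linked by odometry; a matched visual feature is an
# image of 3-D landmark L(0); a multibeam submap match adds a relative-pose
# constraint between poses on distinct tracklines. All values are invented.
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X, L

graph = gtsam.NonlinearFactorGraph()

# Anchor the first pose so the graph is fully constrained.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.1))
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))

# Dead-reckoned odometry between successive poses (1 m forward steps).
odom = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.05))
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), odom, odom_noise))
graph.add(gtsam.BetweenFactorPose3(X(1), X(2), odom, odom_noise))

# A feature matched in two images becomes two projection factors on L(0).
K = gtsam.Cal3_S2(500.0, 500.0, 0.0, 320.0, 240.0)  # assumed intrinsics
pix_noise = gtsam.noiseModel.Isotropic.Sigma(2, 1.0)  # 1-pixel sigma
graph.add(gtsam.GenericProjectionFactorCal3_S2(
    gtsam.Point2(421.0, 441.0), pix_noise, X(0), L(0), K))
graph.add(gtsam.GenericProjectionFactorCal3_S2(
    gtsam.Point2(319.0, 439.0), pix_noise, X(1), L(0), K))

# A bathymetry submap match linking poses from different tracklines.
match = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(2.0, 0.1, 0.0))
match_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.2))
graph.add(gtsam.BetweenFactorPose3(X(0), X(2), match, match_noise))

# Initial guesses for poses and landmark, then batch smoothing.
initial = gtsam.Values()
for i in range(3):
    initial.insert(X(i), gtsam.Pose3(gtsam.Rot3(),
                                     gtsam.Point3(float(i), 0.0, 0.0)))
initial.insert(L(0), gtsam.Point3(1.0, 2.0, 5.0))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(2)))   # smoothed vehicle pose
print(result.atPoint3(L(0)))  # triangulated landmark
```

The paper additionally estimates the camera's six-degree-of-freedom offset in the vehicle frame and the attitude offsets of the multibeam head and velocity sensor within the same optimization; in a sketch of this style those would appear as extra unknown variables shared across factors, rather than being fixed as above.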
Description
Author Posting. © The Author(s), 2012. This is the author's version of the work. It is posted here by permission of John Wiley & Sons for personal use, not for redistribution. The definitive version was published in Journal of Field Robotics 30 (2013): 763–783, doi:10.1002/rob.21473.