Large-area visually augmented navigation for autonomous underwater vehicles




dc.contributor.author Eustice, Ryan M.
dc.date.accessioned 2007-01-17T18:49:26Z
dc.date.available 2007-01-17T18:49:26Z
dc.date.issued 2005-06
dc.identifier.uri http://hdl.handle.net/1912/1414
dc.description Submitted to the Joint Program in Applied Ocean Science & Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2005 en
dc.description.abstract This thesis describes a vision-based, large-area, simultaneous localization and mapping (SLAM) algorithm that respects the low-overlap imagery constraints typical of autonomous underwater vehicles (AUVs) while exploiting the inertial sensor information that is routinely available on such platforms. We adopt a systems-level approach exploiting the complementary aspects of inertial sensing and visual perception from a calibrated pose-instrumented platform. This systems-level strategy yields a robust solution to underwater imaging that overcomes many of the unique challenges of a marine environment (e.g., unstructured terrain, low-overlap imagery, moving light source). Our large-area SLAM algorithm recursively incorporates relative-pose constraints using a view-based representation that exploits exact sparsity in the Gaussian canonical form. This sparsity allows for efficient O(n) update complexity in the number of images composing the view-based map by utilizing recent multilevel relaxation techniques. We show that our algorithmic formulation is inherently sparse, unlike other feature-based canonical SLAM algorithms, which impose sparseness via pruning approximations. In particular, we investigate the sparsification methodology employed by sparse extended information filters (SEIFs) and offer new insight as to why, and how, its approximation can lead to inconsistencies in the estimated state errors. Lastly, we present a novel algorithm for efficiently extracting consistent marginal covariances useful for data association from the information matrix. In summary, this thesis advances the current state-of-the-art in underwater visual navigation by demonstrating end-to-end automatic processing of the largest visually navigated dataset to date using data collected from a survey of the RMS Titanic (path length over 3 km and 3100 m² of mapped area). This accomplishment embodies the summed contributions of this thesis to several current SLAM research issues, including scalability, six-degree-of-freedom motion, unstructured environments, and visual perception. en
dc.description.sponsorship This work was funded in part by the CenSSIS ERC of the National Science Foundation under grant EEC-9986821, in part by the Woods Hole Oceanographic Institution through a grant from the Penzance Foundation, and in part by a NDSEG Fellowship awarded through the Department of Defense. en
dc.format.extent 24143873 bytes
dc.format.mimetype application/pdf
dc.language.iso en_US en
dc.publisher Massachusetts Institute of Technology and Woods Hole Oceanographic Institution en
dc.relation.ispartofseries WHOI Theses en
dc.subject Underwater imaging systems en
dc.subject Underwater navigation en
dc.subject Submersibles en
dc.title Large-area visually augmented navigation for autonomous underwater vehicles en
dc.type Thesis en
dc.identifier.doi 10.1575/1912/1414
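The abstract above notes that relative-pose constraints in a view-based map preserve exact sparsity of the Gaussian canonical (information) form. A minimal toy sketch of that idea, in a hypothetical 1-D setting (scalar poses rather than the thesis's six-degree-of-freedom implementation, with made-up measurement values): fusing a relative-pose measurement touches only the information-matrix entries of the two poses it links, so entries between unlinked poses stay exactly zero.

```python
import numpy as np

def add_relative_pose_constraint(Lam, eta, i, j, z, w):
    """Fuse a relative-pose measurement z ~ x_j - x_i with weight w
    (inverse variance) into information matrix Lam and vector eta.
    Only the (i,i), (i,j), (j,i), (j,j) entries change, so the
    information matrix remains exactly sparse."""
    Lam[i, i] += w
    Lam[j, j] += w
    Lam[i, j] -= w
    Lam[j, i] -= w
    eta[i] -= w * z
    eta[j] += w * z

# Toy example: five scalar "camera poses" on a chain.
n = 5
Lam = np.zeros((n, n))
eta = np.zeros(n)
Lam[0, 0] = 1e6                       # strong prior anchoring pose 0 at 0
for k in range(n - 1):                # odometry-like sequential constraints
    add_relative_pose_constraint(Lam, eta, k, k + 1, 1.0, 1.0)
add_relative_pose_constraint(Lam, eta, 0, 4, 4.0, 1.0)  # a "loop closure"

x = np.linalg.solve(Lam, eta)         # MAP estimate from the canonical form
# Poses not directly linked by a constraint keep a zero information entry:
# e.g. Lam[1, 3] == 0.
```

Solving the canonical form recovers the pose estimates (here approximately [0, 1, 2, 3, 4], since all toy constraints are mutually consistent); the thesis's actual algorithm instead exploits this sparsity with multilevel relaxation for O(n) updates.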



