
Tightly-Coupled Stereo Vision-Aided Inertial Navigation Using Feature-Based Motion Sensors

Asadi, Ehsan; Bottasso, Carlo Luigi
2014-01-01

Abstract

A tightly-coupled stereo vision-aided inertial navigation system is proposed in this work as a synergistic integration of vision with other sensors. To avoid the loss of information that can result from visual preprocessing, a set of feature-based motion sensors and an inertial measurement unit are fused directly to estimate the vehicle state. Two alternative feature-based observation models are considered within the proposed fusion architecture. The first model uses the trifocal tensor to propagate feature points by homography, so as to express geometric constraints among three consecutive scenes. The second is derived by applying a rigid body motion model to three-dimensional (3D) reconstructed feature points. A kinematic model accounts for the vehicle motion, and a Sigma-Point Kalman filter is used to achieve robust state estimation in the presence of non-linearities. The proposed formulation is derived for a general, platform-independent 3D problem, and it is tested and demonstrated on a real dynamic indoor data-set alongside a simulation experiment. Results show improved estimates compared with a classical visual odometry approach and with a loosely-coupled stereo vision-aided inertial navigation system, even in GPS (Global Positioning System)-denied conditions and when magnetometer measurements are unreliable.
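The abstract names a Sigma-Point Kalman filter as the estimator handling the non-linear observation models. As an illustration only — the paper's exact filter formulation, state vector, and tuning are not given in this record — the following is a minimal sketch of the scaled unscented transform's sigma-point generation, a common Sigma-Point Kalman filter variant; the function name and parameter defaults (`alpha`, `beta`, `kappa`) are assumptions, not the authors' implementation.

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate the 2n+1 sigma points and weights of the scaled
    unscented transform for a state with mean x (n,) and
    covariance P (n, n)."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    # Matrix square root of the scaled covariance via Cholesky;
    # rows of S.T are the columns of the Cholesky factor.
    S = np.linalg.cholesky((n + lam) * P)
    pts = np.vstack([x, x + S.T, x - S.T])          # (2n+1, n)
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))  # mean weights
    wc = wm.copy()                                     # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return pts, wm, wc

# Propagating the points through the (non-linear) process or
# observation model and re-averaging with these weights recovers
# the transformed mean and covariance to second order.
x = np.array([1.0, 2.0])
P = np.diag([0.1, 0.2])
pts, wm, wc = sigma_points(x, P)
mean = wm @ pts  # equals x for the identity map
```

In a tightly-coupled design such as the one described, each sigma point would be pushed through the feature-based observation model (trifocal transfer or rigid-body motion of 3D points) rather than through a preprocessed pose measurement.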
sensor fusion, tight-coupling, trifocal constraint, vision-aided inertial navigation
Files in this product:
ASADE01-14.pdf — Publisher's version — 1.79 MB — Adobe PDF — Restricted access

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/771302
Citations
  • PMC: not available
  • Scopus: 11
  • Web of Science: 10