Optimizing Multi-Camera Mobile Mapping Systems with Pose Graph and Feature-Based Approaches

Elalailyi, Ahmad;Fassi, Francesco;Remondino, Fabio
2025-01-01

Abstract

Multi-camera Visual Simultaneous Localization and Mapping (V-SLAM) increases spatial coverage through multi-view image streams, improving localization accuracy and reducing data acquisition time. Despite its speed and general robustness, V-SLAM often struggles to achieve the precise camera poses necessary for accurate 3D reconstruction, especially in complex environments. This study introduces two novel multi-camera optimization methods to enhance pose accuracy, reduce drift, and ensure loop closures. These methods refine multi-camera V-SLAM outputs within existing frameworks and are evaluated in two configurations: (1) multiple independent stereo V-SLAM instances operating on separate camera pairs; and (2) multi-view odometry processing all camera streams simultaneously. The proposed optimizations include (1) a multi-view feature-based optimization that integrates V-SLAM poses with rigid inter-camera constraints and bundle adjustment; and (2) a multi-camera pose graph optimization that fuses multiple trajectories using relative pose constraints and robust noise models. Validation is conducted through two complex 3D surveys using the ATOM-ANT3D multi-camera fisheye mobile mapping system. Results demonstrate survey-grade accuracy comparable to traditional photogrammetry, with reduced computational time, advancing toward near real-time 3D mapping of challenging environments.
Keywords: multi-camera V-SLAM, ORBSLAM3.0, COLMAP-SLAM, 3D reconstruction, multi-view optimization, pose graph optimization, ATOM-ANT3D
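
To illustrate the second method described in the abstract, the sketch below fuses two V-SLAM trajectories in a single pose graph using sequential relative pose constraints, rigid inter-camera constraints, and a robust (Huber) noise model. This is a minimal sketch and not the authors' implementation: the use of GTSAM, the noise sigmas, and the assumption that the two trajectories share keyframe indices linked by a calibrated rig transform (rig_T_ab) are all illustrative assumptions.

import numpy as np
import gtsam
from gtsam.symbol_shorthand import A, B   # A(i), B(i): keyframe poses of the two trajectories

def fuse_two_trajectories(traj_a, traj_b, rig_T_ab):
    """traj_a, traj_b: lists of gtsam.Pose3 keyframe poses from two V-SLAM instances.
    rig_T_ab: gtsam.Pose3, calibrated rigid transform between the two camera pairs (assumed)."""
    graph = gtsam.NonlinearFactorGraph()
    initial = gtsam.Values()

    # Plain Gaussian noise for sequential odometry edges (sigmas are illustrative).
    odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.05))
    # Robust Huber kernel downweights inconsistent inter-trajectory constraints.
    rig_noise = gtsam.noiseModel.Robust.Create(
        gtsam.noiseModel.mEstimator.Huber.Create(1.345),
        gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.01)))

    # Fix the gauge by anchoring the first pose of trajectory A.
    graph.add(gtsam.PriorFactorPose3(A(0), traj_a[0],
                                     gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-4))))

    # Sequential relative-pose factors within each trajectory.
    for key, traj in ((A, traj_a), (B, traj_b)):
        for i, pose in enumerate(traj):
            initial.insert(key(i), pose)
            if i > 0:
                rel = traj[i - 1].between(pose)
                graph.add(gtsam.BetweenFactorPose3(key(i - 1), key(i), rel, odom_noise))

    # Rigid inter-camera constraints tie the two trajectories together at matching keyframes.
    for i in range(min(len(traj_a), len(traj_b))):
        graph.add(gtsam.BetweenFactorPose3(A(i), B(i), rig_T_ab, rig_noise))

    return gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()

The robust kernel stands in for the "robust noise models" mentioned in the abstract: constraints that disagree strongly with the rest of the graph (for example, a wrong loop closure or a keyframe where the rigid rig assumption breaks down) are downweighted rather than allowed to distort the fused trajectory.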
Files in this item:
File: remotesensing-17-02810-with-cover.pdf
Description: Publisher's version
Access: open access
Format: Adobe PDF
Size: 4.76 MB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11311/1295294
Citations
  • PubMed Central: n/a
  • Scopus: 1
  • Web of Science: 0