
Pose Graph Data Fusion for Visual- and LiDAR-based Low-Cost Portable Mapping Systems

Ahmad Elalailyi, Francesco Fassi, Luigi Fregonese
2024-01-01

Abstract

Traditional 3D surveying methods often fall short in complex spaces due to limited mobility, time constraints, and high risk. Consequently, there is a real demand for 3D data acquisition tools and methods suited to complex and narrow environments, capable of capturing detailed and accurate spatial information efficiently and, ideally, automatically. This study presents a novel approach for fusing 3D spatial data collected by two separate and independent mobile mapping systems: (1) ATOM-ANT3D and (2) MandEye. We propose an innovative fusion technique that combines visual and LiDAR data from asynchronous acquisitions, reducing the need for strict temporal and spatial synchronization between the two systems. We compare the outputs of both systems before and after fusion, examining their individual limitations and highlighting the complementary benefits achieved by the proposed fusion framework. Results show improved global alignment accuracy and spatial completeness of the final point clouds, demonstrating the efficiency and flexibility of the proposed approach.
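To illustrate the core pose-graph idea behind fusing relative measurements from two independent systems (this is a minimal 1-D sketch with invented constraint values and weights, not the authors' actual ATOM-ANT3D/MandEye pipeline), consider poses connected by displacement constraints from two sources, solved jointly by weighted least squares:

```python
import numpy as np

# Toy 1-D pose-graph fusion: two systems measure relative displacements
# between shared poses; we solve for the pose positions that best
# satisfy both sets of constraints simultaneously.
# Variables: x0..x3; x0 is pinned at 0 as the gauge constraint.

# Each constraint: (i, j, measured displacement x_j - x_i, weight)
constraints = [
    (0, 1, 1.0, 1.0),   # "system A" odometry (hypothetical values)
    (1, 2, 1.1, 1.0),
    (2, 3, 1.0, 1.0),
    (0, 2, 2.0, 2.0),   # "system B" relative measurements, higher weight
    (1, 3, 2.0, 2.0),
]

n = 4
# Weighted linear least squares: minimize sum w * (x_j - x_i - d)^2
A, b = [], []
for i, j, d, w in constraints:
    row = np.zeros(n)
    row[i] = -np.sqrt(w)
    row[j] = np.sqrt(w)
    A.append(row)
    b.append(np.sqrt(w) * d)

# Gauge freedom: pin x0 = 0 with a strong prior row
prior = np.zeros(n)
prior[0] = 1e6
A.append(prior)
b.append(0.0)

x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print(np.round(x, 3))  # fused pose estimates
```

In the real problem the variables are 6-DoF poses and the solver is nonlinear, but the structure is the same: each system contributes edges to one shared graph, so the fused solution balances both trajectories without requiring synchronized capture.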
Keywords: Multi-camera systems, LiDAR SLAM, Visual SLAM, 3D Point Clouds, Data Fusion, Accuracy, Pose Graph
File in this record:
isprs-archives-XLVIII-2-W8-2024-147-2024_RED.pdf (Publisher's version, open access, Adobe PDF, 816.89 kB)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1279289