MCS-SLAM: Multi-Cues Multi-Sensors Fusion SLAM

Frosi M.; Matteucci M.
2022-01-01

Abstract

Simultaneous localization and mapping (SLAM) is a fundamental topic in robotics, notably due to its applications in autonomous driving. Over the last decades, many systems have been proposed that work on data coming from different sensors, such as cameras or LiDARs. Although excellent results have been reached, the majority of these methods exploit the data as is, without extracting additional information or considering multiple sensors simultaneously. In this paper, we present MCS-SLAM, a Graph SLAM system that performs sensor fusion by exploiting multiple cues extracted from sensor data: color/intensity, depth/range, and normal information. For each sensor, motion estimation is achieved by minimizing the pixel-wise difference between two multi-cue images. All estimates are then optimized jointly to achieve a coherent transformation. The point clouds received as input are also used to perform loop detection and closure. We compare the performance of the proposed system with state-of-the-art point cloud-based methods (LeGO-LOAM-BOR, LIO-SAM, HDL, and ART-SLAM) and show that the proposed algorithm achieves lower accuracy than the state of the art while requiring much less computational time. The comparison is made by evaluating the estimated trajectory displacement on the KITTI dataset.
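As a concrete illustration of the per-sensor error term described in the abstract, the following Python sketch computes a pixel-wise difference between two multi-cue images. The channel layout, the cue weights, and the function name are assumptions made here for illustration only; they do not reproduce the actual MCS-SLAM implementation.

```python
import numpy as np

def multicue_residual(img_a, img_b, weights=(1.0, 1.0, 1.0)):
    """Pixel-wise squared difference between two multi-cue images.

    Each image is assumed to be an (H, W, 5) array stacking the cues
    named in the abstract: one color/intensity channel, one depth/range
    channel, and a 3-channel surface normal. Layout and weights are
    illustrative assumptions, not the paper's implementation.
    """
    w_i, w_d, w_n = weights
    d_int = img_a[..., 0] - img_b[..., 0]    # color/intensity cue
    d_dep = img_a[..., 1] - img_b[..., 1]    # depth/range cue
    d_nrm = np.linalg.norm(img_a[..., 2:5] - img_b[..., 2:5], axis=-1)  # normal cue
    return w_i * d_int**2 + w_d * d_dep**2 + w_n * d_nrm**2

# Motion estimation would minimize the sum of this residual over all
# pixels with respect to the relative sensor motion used to render one
# of the two images (e.g., by reprojecting the point cloud).
H, W = 64, 1024  # e.g., a typical LiDAR range-image resolution
total_error = multicue_residual(np.random.rand(H, W, 5),
                                np.random.rand(H, W, 5)).sum()
```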
2022
Proceedings of 2022 IEEE Intelligent Vehicles Symposium (IV)
ISBN: 978-1-6654-8821-1
Files in this product:
multicue_iv22 (2).pdf: Author pre-print (Pre-Print / Pre-Refereeing), open access, 2.41 MB, Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1220448
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: 0