
Robust spacecraft relative pose estimation via CNN-aided line segments detection in monocular images

Bechini, Michele; Lunghi, Paolo; Lavagna, Michèle
2024-01-01

Abstract

Autonomous spacecraft relative navigation via monocular images has become a prominent research topic in recent years and has recently received a further push thanks to the constantly growing field of artificial neural networks and the publication of several spaceborne image datasets. Despite the proliferation of spacecraft relative-state initialization algorithms, most architectures adopt computationally expensive solutions relying on convolutional neural networks (CNNs), which provide accurate outputs at the cost of a computational burden that appears unfeasible for current spaceborne hardware. This paper addresses the issue by proposing a novel pose initialization algorithm based on lightweight CNNs. Inspired by previous state-of-the-art algorithms, the developed architecture combines a fast and accurate target detection CNN with a line segment detection CNN capable of running with low inference time on mobile devices. The detected line segments and their junctions are grouped into complex geometrical (perceptual) groups that reduce the solution search space and are then used to extract the final pose estimate. As a main outcome, the analyses demonstrate that the developed lightweight architecture achieves high accuracy in the pose estimation task, with a mean estimation error below 10 cm in translation and 2.5° in rotation. The baseline algorithm achieves a mean SLAB error of 0.04552 with a standard deviation of 0.22972 on the test dataset. Detailed analyses show that the uncertainty on the overall pose score is driven mainly by errors in the relative attitude, which contribute most to the adopted pose error metric. Analyses of the error distributions show that the uncertainty on the estimated relative position is highest along the camera boresight axis. Concerning the relative attitude, the proposed algorithm shows higher uncertainty in estimating the directions of the target x and y axes, owing to ambiguities in the target geometry. Notably, the target detection CNN trained in this work outperforms the previous top scores on the benchmark dataset. The performance of the proposed algorithm is further investigated by analyzing how its accuracy is affected by the relative distance and by the presence of background in the images. Lastly, the paper examines the option of retaining only the sub-portion of the 2D-to-3D match matrix formed by the most complex perceptual groups identified, which reduces the overall run-time. Both the baseline and the reduced match matrix versions are compared against state-of-the-art algorithms in terms of relative position error, relative attitude error, and solution availability, highlighting the high accuracy and solution availability of the proposed architectures.
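For context, the SLAB error quoted above is assumed here to follow the common definition of the pose score used in the SLAB/SPEED benchmark, namely a range-normalized translation error plus a quaternion-based rotation error:

    e_{pose} = \frac{\lVert \mathbf{t}_{gt} - \mathbf{t}_{est} \rVert_2}{\lVert \mathbf{t}_{gt} \rVert_2} + 2 \arccos\left( \lvert \langle \mathbf{q}_{gt}, \mathbf{q}_{est} \rangle \rvert \right)

Under this assumed definition the rotation term equals the rotation error angle in radians, so a 2.5° attitude error alone contributes roughly 0.044 to the score, consistent with the abstract's statement that attitude errors dominate the pose metric. As an illustration of the final pose-extraction step (junctions of detected line segments matched to points of a 3D wireframe model and fed to a PnP solver), the following minimal Python sketch uses OpenCV's RANSAC-based PnP; the function name, point shapes, and parameter values are illustrative assumptions, not the authors' implementation.

    import numpy as np
    import cv2

    def estimate_pose(points_2d, points_3d, camera_matrix, dist_coeffs=None):
        """Hypothetical helper: points_2d is an (N, 2) array of detected junctions,
        points_3d the (N, 3) array of matched wireframe points of the target model."""
        if dist_coeffs is None:
            dist_coeffs = np.zeros(4)  # assume an undistorted (rectified) camera
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            np.asarray(points_3d, dtype=np.float64),
            np.asarray(points_2d, dtype=np.float64),
            camera_matrix,
            dist_coeffs,
            flags=cv2.SOLVEPNP_EPNP,
            reprojectionError=3.0,  # inlier threshold in pixels (assumed value)
        )
        if not ok:
            return None  # no consistent hypothesis: solution unavailable for this image
        R, _ = cv2.Rodrigues(rvec)  # rotation from target frame to camera frame
        return R, tvec.ravel(), inliers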
Files in this item:
BECHM01-24.pdf (Publisher's version, open access, 4.83 MB, Adobe PDF)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1256698