Onboard State Estimation Around Didymos with Recurrent Neural Networks and Segmentation Maps
M. Pugliatti; F. Topputo
2024-01-01
Abstract
When considering the proximity environment of a small body, the capability to navigate around it is of paramount importance to enable any onboard autonomous decision-making process. Onboard optical-based navigation is often performed by coupling image processing algorithms with filtering techniques to generate position and velocity estimates, providing compelling navigation performance with cost-effective hardware. These same processes could be addressed with data-driven ones, at the expense of requiring a sufficiently large dataset. To investigate to what extent these methods can substitute traditional ones, in this paper we develop a possible onboard methodology based on segmentation masks, convolutional extreme learning machine architectures, and recurrent neural networks, which respectively generate simpler image inputs, map single-frame data into position estimates, and process multiple-frame position sequences to generate both position and velocity estimates. Considering the primary of the Didymos binary system as a case study, and the possibility of complementing optical observations with LiDAR data, we show that recurrent neural networks bring only limited improvement in position reconstruction for the case considered, while they are beneficial in estimating the velocity, especially when complementary LiDAR data are available.
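To make the pipeline described above concrete, the following is a minimal PyTorch sketch of the two learning stages: a convolutional extreme learning machine that maps a single segmentation mask to a position estimate, and a recurrent network that maps a sequence of per-frame positions (optionally augmented with a LiDAR range) to a position and velocity estimate. The layer sizes, the 128x128 mask resolution, the GRU cell, and the way the LiDAR measurement is appended are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative sketch only; layer sizes and the GRU choice are assumptions.
import torch
import torch.nn as nn

class ConvELM(nn.Module):
    """Convolutional extreme learning machine: random, frozen convolutional
    features followed by a trainable linear readout to a 3D position."""
    def __init__(self, out_dim: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        for p in self.features.parameters():
            p.requires_grad = False              # ELM: only the readout is trained
        self.readout = nn.Linear(32 * 4 * 4, out_dim)

    def forward(self, mask):                     # mask: (B, 1, H, W) segmentation map
        return self.readout(self.features(mask)) # (B, 3) position estimate

class SequenceEstimator(nn.Module):
    """Recurrent network mapping a sequence of per-frame positions (plus an
    optional LiDAR range) to position and velocity at the last time step."""
    def __init__(self, in_dim: int = 4, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 6)         # 3 position + 3 velocity components

    def forward(self, seq):                      # seq: (B, T, in_dim)
        out, _ = self.rnn(seq)
        return self.head(out[:, -1])             # (B, 6) state estimate

# Example shapes: 8 masks of 128x128 pixels, sequences of 10 frames,
# each frame carrying a 3D position estimate plus one LiDAR range value.
positions = ConvELM()(torch.rand(8, 1, 128, 128))
states = SequenceEstimator()(torch.rand(8, 10, 4))
```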
| File | Access | Version | Size | Format |
|---|---|---|---|---|
| PUGLM02-23.pdf | Restricted access | Publisher's version | 6.47 MB | Adobe PDF |
| PUGLM_OA_02-23.pdf | Open Access since 24/06/2023 | Post-print (draft or Author's Accepted Manuscript, AAM) | 6.07 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.