Artificial Intelligence Techniques in Autonomous Vision-Based Navigation System for Lunar Landing

Stefano Silvestrini;Paolo Lunghi;Margherita Piccinin;Giovanni Zanotti;Michele Lavagna
2020-01-01

Abstract

Traditional vision-based relative navigation algorithms are highly affected by non-nominal conditions, which include varying illumination and environmental uncertainties. Thanks to their outstanding generalization capability and flexibility, deep neural networks (and AI algorithms in general) are excellent candidates to overcome this shortcoming of navigation algorithms. The paper presents a vision-based navigation system that uses AI to solve the task of pinpoint landing on the Moon. The Moon landing scenario consists of the spacecraft descent to the South Pole, from a parking orbit down to the powered descent phase. A 2D planar Moon landing is taken as reference; nevertheless, the approach is easily extendable to a 3D scenario. The presented architecture is based on a Convolutional Neural Network (CNN) trained with a supervised learning approach. The CNN is used to extract features of the observed craters, which are then processed by standard image processing algorithms to provide pseudo-measurements usable by navigation filters. The training dataset includes images with different illumination and surface viewing conditions. The supervised learning approach is preferred because knowledge of the landing area can be exploited in the pinpoint Moon landing scenario: the network can be trained on an appropriate dataset of synthetic images of the landing area at different relative poses and illumination conditions. CNNs have proven excellent performance and have been extensively developed for image processing and feature extraction in non-space applications. The AI system is coupled with a navigation filter, which refines the estimate and performs sensor fusion with other measurement sources.
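The abstract describes a pipeline in which a CNN extracts crater features, classical image processing turns them into pseudo-measurements, and a navigation filter fuses those pseudo-measurements with other sources. As an illustration only, the sketch below mimics that data flow with a toy fully-convolutional network, a centroid-extraction step, and a generic Kalman measurement update. The network architecture, thresholds, planar state model, and the (omitted) matching of detected craters against the known landing-area map are placeholder assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): CNN feature map -> classical
# image processing -> pseudo-measurements -> Kalman-style measurement update.
import numpy as np
import torch
import torch.nn as nn
from scipy import ndimage


class CraterFeatureCNN(nn.Module):
    """Toy fully-convolutional net: grayscale image -> crater likelihood map."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),   # per-pixel crater likelihood
        )

    def forward(self, x):
        return self.net(x)


def crater_centroids(likelihood_map, threshold=0.5):
    """Classical image-processing step: threshold the CNN output and return
    the centroid (row, col) of each connected blob as a pseudo-measurement."""
    mask = likelihood_map > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.empty((0, 2))
    return np.array(ndimage.center_of_mass(mask, labels, range(1, n + 1)))


def kalman_measurement_update(x, P, z, H, R):
    """Generic Kalman update: fuse a pseudo-measurement z (e.g., an image-plane
    crater position matched to the known map) with state x and covariance P."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new


if __name__ == "__main__":
    # Fake 128x128 descent frame, standing in for a rendered synthetic image.
    image = torch.rand(1, 1, 128, 128)
    cnn = CraterFeatureCNN()
    with torch.no_grad():
        likelihood = cnn(image)[0, 0].numpy()

    z_candidates = crater_centroids(likelihood)
    print(f"{len(z_candidates)} candidate crater centroids extracted")

    # 2D planar state [x, z, vx, vz]; the first centroid is treated, purely
    # for illustration, as a 2-component position pseudo-measurement.
    x = np.zeros(4)
    P = np.eye(4)
    H = np.hstack([np.eye(2), np.zeros((2, 2))])
    R = 4.0 * np.eye(2)
    if len(z_candidates) > 0:
        x, P = kalman_measurement_update(x, P, z_candidates[0], H, R)
        print("updated state estimate:", x)
```

In the actual system the pseudo-measurements would be expressed through a proper measurement model relating crater positions to the lander pose, and the filter would also ingest the other sensor sources mentioned in the abstract; the sketch only shows where the CNN output enters the estimation loop.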
2020
71st International Astronautical Congress (IAC 2020)
Files in this record:
SILVS05-20.pdf — Description: Paper (Publisher's version); Format: Adobe PDF; Size: 1.41 MB; Restricted access
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1166183
Citations
  • Scopus: 4