Deep Learning for Autonomous Lunar Landing

P. Di Lizia; F. Topputo
2018-01-01

Abstract

Over the past few years, encouraged by advancements in parallel computing technologies (e.g., Graphics Processing Units, GPUs), the availability of massive labeled data, and breakthroughs in the understanding of deep neural networks, there has been an explosion of machine learning algorithms that can accurately process images for classification and regression tasks. Deep learning methods are expected to play a critical role in autonomous and intelligent space guidance problems. The goal of this paper is to design a set of deep neural networks, i.e., Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), able to predict the fuel-optimal control actions for an autonomous Moon landing using only raw images taken by onboard optical cameras. Such an approach can be employed to select actions directly, without the need for filters for state estimation: the optimal guidance is determined by processing the images alone. For this purpose, supervised machine learning algorithms are designed and tested. In this framework, deep networks are trained on many example inputs and their desired outputs (labels), given by a supervisor. During the training phase, the goal is to model the unknown functional relationship that links the given inputs to the given outputs. Inputs and labels come from a properly generated dataset: the images associated with each state are the inputs, and the fuel-optimal control actions are the labels. Two scenarios are considered: 1) a vertical 1-D Moon landing and 2) a planar 2-D Moon landing. For both cases, fuel-optimal trajectories are generated with software packages such as the General Pseudospectral Optimal Control Software (GPOPS) over a set of initial conditions. A training phase is performed on this dataset. Subsequently, to improve network accuracy, a Dataset Aggregation (DAgger) approach is applied. Performance is verified on optimal test trajectories never seen by the networks.
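As a concrete illustration of the supervised setup described in the abstract, here is a minimal sketch (not the authors' code) of a CNN that regresses a thrust command from a single grayscale camera frame. The input size (128×128), the layer layout, and the scalar thrust label are illustrative assumptions, not details taken from the paper; PyTorch is used purely for convenience.

```python
import torch
import torch.nn as nn

class LandingCNN(nn.Module):
    """Toy image-to-thrust regressor (assumed 128x128 grayscale input)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2),   # 128 -> 64
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2),  # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128),
            nn.ReLU(),
            nn.Linear(128, 1),  # scalar fuel-optimal thrust command
        )

    def forward(self, x):
        return self.head(self.features(x))

# One supervised training step on a placeholder batch of
# (camera image, fuel-optimal thrust) pairs.
model = LandingCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

images = torch.rand(8, 1, 128, 128)  # stand-in for rendered camera frames
labels = torch.rand(8, 1)            # stand-in for GPOPS-derived thrust labels
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

In the same spirit, the Dataset Aggregation (DAgger) step mentioned in the abstract can be sketched as a loop that flies the current policy, lets the expert relabel the states the policy actually visits, and retrains on the growing dataset. Everything below (the 1-D vertical-landing dynamics, the `expert` controller, the `fit` trainer) is a toy stand-in for the paper's camera simulator, GPOPS-generated optimal controls, and network training.

```python
import random

def expert(altitude, velocity):
    # Placeholder "optimal" control: brake proportionally to descent rate.
    return max(0.0, -2.0 * velocity - 1.62)

def fit(dataset):
    # Placeholder trainer: a policy that returns the mean expert label.
    mean = sum(label for _, label in dataset) / len(dataset)
    return lambda state: mean

def dagger(rounds=3, horizon=50, dt=0.1):
    dataset = []
    policy = lambda state: 0.0                       # untrained initial policy
    for _ in range(rounds):
        h, v = 100.0 + random.uniform(-5, 5), -10.0  # sampled initial condition
        for _ in range(horizon):
            dataset.append(((h, v), expert(h, v)))   # expert relabels the state
            u = policy((h, v))                       # ...but the policy acts
            v += (u - 1.62) * dt                     # lunar gravity ~1.62 m/s^2
            h = max(0.0, h + v * dt)
        policy = fit(dataset)                        # retrain on the aggregate
    return policy

policy = dagger()
```

The design point that distinguishes DAgger from plain supervised learning is that the expert labels the states visited by the learner's own rollouts, so the training distribution matches what the network actually encounters when it is deployed.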
Year: 2018
Conference: 2018 AAS/AIAA Astrodynamics Specialist Conference
ISBN: 978-087703657-9
Files in this product:
FURFR01-18.pdf (Description: Paper; publisher's version; open access) — Adobe PDF, 2.84 MB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1063150
Citations
  • PMC: not available
  • Scopus: 46
  • Web of Science (ISI): 0