Spacecraft Adaptive Deep Reinforcement Learning Guidance with Input State Uncertainties in Relative Motion Scenario
Brandonisio, Andrea; Capra, Lorenzo; Lavagna, Michèle
2023-01-01
Abstract
In recent years, several Artificial Intelligence (AI) application studies have been performed to enhance spacecraft Guidance, Navigation and Control (GNC) autonomy. The substantial and rapid development of machine learning techniques is strongly influencing current aerospace research, mainly because of the increasing demand for spacecraft autonomy in very complex scenarios such as active debris removal, spacecraft constellations, on-orbit servicing and vision-based navigation. This work uses deep reinforcement learning as the mathematical tool to perform adaptive guidance and control of a spacecraft in a relative dynamics scenario, in order to accomplish a particular task: the shape reconstruction of an unknown target object. The formulation defines an active SLAM (Simultaneous Localization and Mapping) problem, cast as a Partially Observable Markov Decision Process (POMDP) and therefore solved with Deep Reinforcement Learning (DRL) methods. The work then focuses on how input state uncertainty may affect the overall performance of an already trained agent in this relative motion scenario. The resulting analysis is critically discussed, and further methodologies are studied to recover the performance when needed.
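The abstract's central question, how noise on the input state degrades an already trained agent, can be illustrated with a minimal sketch. Everything here is an illustrative assumption rather than the paper's actual setup: a simple proportional law stands in for the trained DRL policy, the dynamics are a toy one-dimensional relative-motion model, and the names (`policy`, `rollout`) are hypothetical. The sketch only shows the evaluation pattern: roll out the fixed policy while corrupting its observed state with Gaussian noise of increasing standard deviation and compare the accumulated cost.

```python
import numpy as np

def policy(observed_x):
    """Stand-in for a trained agent: a proportional control law
    acting on the (possibly noisy) observed relative position."""
    return -0.5 * observed_x

def rollout(noise_std, steps=100, x0=1.0, rng=None):
    """Run one episode of a toy 1-D relative-motion task, corrupting
    the input state with zero-mean Gaussian noise before it reaches
    the policy, and accumulate a quadratic position cost."""
    if rng is None:
        rng = np.random.default_rng(0)  # fixed seed for repeatability
    x, cost = x0, 0.0
    for _ in range(steps):
        obs = x + rng.normal(0.0, noise_std)  # input state uncertainty
        x = x + policy(obs)                   # simple discrete-time dynamics
        cost += x * x
    return cost

# Evaluate the same fixed policy under increasing observation noise.
for sigma in (0.0, 0.1, 0.5):
    print(f"noise std {sigma:.1f} -> cumulative cost {rollout(sigma):.3f}")
```

With zero noise the policy converges cleanly toward the target state; as the observation noise grows, the accumulated cost rises, which is the kind of performance degradation the paper analyzes for its trained agent.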
File | Description | Access | Size | Format
---|---|---|---|---
BRANA01-23.pdf | Publisher's version | Restricted access | 1.25 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.