
Spaceborne autonomous vision-based navigation system for AVANTI

Gaias G.
2014-01-01

Abstract

A novel autonomous vision-based navigation system has been designed to support the upcoming AVANTI (Autonomous Vision Approach Navigation and Target Identification) experiment. AVANTI aims at demonstrating a fully autonomous approach to a noncooperative satellite, using a simple camera, in a safe and fuel-efficient manner. To that end, the camera images are first processed onboard by a target identification algorithm, which extracts line-of-sight measurements to the target spacecraft. In a second step, the measurements feed a navigation filter, which provides the relative state estimate of the target to the onboard guidance module. Being embarked as an autonomous embedded system, the navigation module needs to guarantee robustness and simplicity of use without sacrificing navigation performance. The paper describes the strategy adopted for robust target identification, relying on a kinematic identification of the target trajectory across a sequence of images. The filtering is done using an analytical model for the relative motion which accounts for the mean effects of the perturbations due to the Earth's equatorial bulge (J2) and the differential drag. The vision-based navigation filter has been tested and validated in a highly realistic simulation environment and using flight data from the PRISMA formation flying mission. Overall, the results show that reliable target recognition (more than 97% success) and meter-level navigation accuracy can be achieved.
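As a minimal illustration of the first processing step described above (not the authors' implementation), a line-of-sight measurement can be formed from a pixel detection of the target using a standard pinhole camera model; the intrinsic parameters `fx`, `fy`, `cx`, `cy` below are hypothetical values, not taken from the AVANTI camera:

```python
import numpy as np

def pixel_to_los(u, v, fx, fy, cx, cy):
    """Map a pixel detection (u, v) of the target to a unit
    line-of-sight vector in the camera frame (pinhole model)."""
    # Back-project through assumed intrinsics: focal lengths
    # fx, fy and principal point (cx, cy), all in pixels.
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

# A detection at the principal point points straight down the boresight:
los = pixel_to_los(320.0, 240.0, 800.0, 800.0, 320.0, 240.0)
# -> [0. 0. 1.]
```

A sequence of such unit vectors, time-tagged over several images, is the kind of bearings-only input a relative navigation filter would then process.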
2014
65th International Astronautical Congress (IAC)
9781634399869
Files in this product:
File: ARDAJ01-14.pdf
Description: Paper (Publisher's version)
Size: 3.4 MB
Format: Adobe PDF
Access: Restricted

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1139272
Citations
  • PMC: n/a
  • Scopus: 7
  • Web of Science (ISI): n/a