
Instance Segmentation for Feature Recognition on Noncooperative Resident Space Objects

Faraco, N.; Maestrini, M.; Di Lizia, P.
2022-01-01

Abstract

Active debris removal and unmanned on-orbit servicing missions have gained interest in recent years, along with the possibility of performing them with an autonomous chaser spacecraft. In this work, new resources are proposed to aid the implementation of guidance, navigation, and control algorithms for satellites devoted to the inspection of noncooperative targets before any proximity operation is initiated. In particular, the use of convolutional neural networks (CNNs) performing object detection and instance segmentation is proposed, and their effectiveness in recognizing the components and parts of the target satellite is evaluated. However, no reliable training image dataset of this kind exists to date. A tailored, publicly available software tool has been developed to overcome this limitation by generating synthetic images. Computer-aided design models of existing satellites are loaded into a three-dimensional animation software package and used to programmatically render images of the objects from different points of view and under different lighting conditions, together with the necessary ground-truth labels and masks for each image. The results show that a relatively low number of iterations is sufficient for a CNN trained on such datasets to reach a mean average precision in line with the state-of-the-art performance achieved by CNNs on common datasets. An assessment of the network's performance when trained under different conditions is provided. Finally, the method is tested on real images from the Mission Extension Vehicle-1 on-orbit servicing mission, showing that training the model only on artificially generated images does not compromise the learning process.
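The dataset-generation pipeline summarized in the abstract (programmatically sampled viewpoints and lighting, each render paired with ground-truth instance labels and masks) can be illustrated with a minimal sketch. The snippet below is a hypothetical pure-Python illustration, not the authors' actual tool: it shows only the pose-sampling and COCO-style annotation bookkeeping, leaving out the renderer itself; all names, distance ranges, and category IDs are assumptions for the example.

```python
import math
import random

def sample_pose(distance_range=(10.0, 50.0)):
    """Sample a camera position uniformly on a sphere around the target,
    at a random stand-off distance (hypothetical range, in metres)."""
    # Uniform direction on the unit sphere via normalized Gaussians
    x, y, z = (random.gauss(0.0, 1.0) for _ in range(3))
    norm = math.sqrt(x * x + y * y + z * z) or 1.0
    r = random.uniform(*distance_range)
    return (r * x / norm, r * y / norm, r * z / norm)

def make_annotation(image_id, ann_id, category_id, mask_rle, bbox):
    """Assemble one COCO-style instance-segmentation record for a render."""
    return {
        "id": ann_id,
        "image_id": image_id,
        "category_id": category_id,   # e.g. solar panel, antenna, main body
        "segmentation": mask_rle,     # run-length-encoded binary mask
        "bbox": bbox,                 # [x, y, width, height] in pixels
        "iscrowd": 0,
    }

random.seed(0)
# One camera pose (and, analogously, one light direction) per rendered image
poses = [sample_pose() for _ in range(1000)]
```

In a real pipeline, each sampled pose would be handed to the animation software's scripting interface to position the camera and light source before rendering the image and its per-component masks.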
Resident Space Object
Convolutional Neural Network
Satellites
Computer Aided Design
Control Algorithm
Earth
Spacecraft Models
Graphics Processing Unit
Machine Learning
Application Programming Interface
Files in this record:

FARAN_IP_02-22.pdf: Publisher's version, 7.67 MB, Adobe PDF (restricted access)
FARAN_OA_02-22.pdf: Post-print (author's accepted manuscript, AAM), 4.83 MB, Adobe PDF (open access since 18/08/2022)

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1220505
Citations
  • PubMed Central: n/a
  • Scopus: 5
  • Web of Science: 4