
Monocular Relative Pose Estimation Pipeline for Uncooperative Resident Space Objects

Maestrini, Michele; Di Lizia, Pierluigi
2022-01-01

Abstract

This paper presents a deep learning-based pipeline for estimating the pose of an uncooperative target spacecraft from a single grayscale monocular image. The possibility of enabling autonomous vision-based relative navigation in close proximity to a noncooperative resident space object would be especially appealing for mission scenarios such as on-orbit servicing and active debris removal. The relative pose estimation pipeline proposed in this work leverages state-of-the-art convolutional neural network (CNN) architectures to detect the features of the target spacecraft using monocular vision. Specifically, the overall pipeline is composed of three main subsystems. The input image is first processed by an object detection CNN that localizes the bounding box enclosing the target. This is followed by a second CNN that regresses the locations of semantic key points of the spacecraft. Finally, a geometric optimization algorithm exploits the detected key-point locations to solve for the final relative pose. The proposed pipeline demonstrated centimeter-/degree-level pose accuracy on the spacecraft pose estimation dataset (SPEED), along with considerable robustness to changes in illumination and background conditions. In addition, the architecture was shown to generalize well to real images, despite the CNNs having been trained exclusively on synthetic data from SPEED.
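The third stage described in the abstract recovers the relative pose from 2D-3D key-point correspondences, a Perspective-n-Point (PnP) problem. As a hedged illustration only (the paper's actual solver is not specified here), the sketch below minimizes reprojection error with a plain Gauss-Newton loop over an axis-angle rotation and a translation; all names, the camera intrinsics, and the key-point model are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(points3d, rvec, tvec, K):
    """Project body-frame 3D key points into the image with intrinsics K."""
    pts_cam = points3d @ rodrigues(rvec).T + tvec   # rotate + translate
    uvw = pts_cam @ K.T                              # apply pinhole model
    return uvw[:, :2] / uvw[:, 2:3]                  # perspective divide

def solve_pnp_gn(points3d, points2d, K, iters=50):
    """Toy Gauss-Newton PnP: refine (rvec, tvec) to fit detected key points.

    Uses a numerical Jacobian for brevity; a real solver would use the
    analytic one and a robust loss. Initial guess: identity rotation,
    target 5 m ahead along the boresight (an assumption, not from SPEED).
    """
    x = np.zeros(6)
    x[5] = 5.0
    for _ in range(iters):
        r = (project(points3d, x[:3], x[3:], K) - points2d).ravel()
        J = np.zeros((r.size, 6))
        eps = 1e-6
        for i in range(6):  # finite-difference Jacobian, one column per DOF
            xp = x.copy()
            xp[i] += eps
            rp = (project(points3d, xp[:3], xp[3:], K) - points2d).ravel()
            J[:, i] = (rp - r) / eps
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]  # Gauss-Newton step
        x += dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x[:3], x[3:]
```

In the pipeline described above, `points2d` would come from the key-point regression CNN and `points3d` from a known wireframe model of the target; the recovered rotation and translation constitute the relative pose.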
Files in this record:

File: PIAZM01-22.pdf (restricted access)
  Description: Paper
  Type: Publisher's version
  Size: 9.71 MB
  Format: Adobe PDF

File: PIAZM_OA_01-22.pdf (Open Access since 27/05/2022)
  Description: Paper, Open Access
  Type: Post-Print (DRAFT or Author's Accepted Manuscript, AAM)
  Size: 13.73 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1216796
Citations:
  • Scopus: 10
  • ISI Web of Science: 9