
Vision-Based State Estimation of an Uncooperative Space Object

Losi, L.; Pesce, V.; Lavagna, M.
2017-01-01

Abstract

Automation is widely applied in classical robotics applications but is still underexploited in space missions. However, autonomous operations would be extremely advantageous in this field, with a wide range of possible applications. One of the most relevant topics in this context is autonomous relative navigation around unknown and uncooperative objects. In the past, several different approaches have been proposed to solve this problem. This paper investigates the potential of a hybrid approach, combining image processing techniques for pose estimation with robust filtering methods. A first estimate of the pose of the observed uncooperative object is provided by the image processing step applied to monocular camera images. The vision-based navigation algorithm, developed at PoliMi, works by detecting salient features in the incoming monocular images; among these, the features corresponding to the uncooperative object are matched to an on-board map already available (a 3D point cloud correlated with descriptors from real images), yielding a set of 3D-to-2D correspondences. This set of correspondences is then used to solve the so-called Perspective-n-Point (PnP) problem, which provides a first estimate of the relative position of the camera (and hence of the target with respect to the chaser), expressed as a rototranslation. Motion-only bundle adjustment is then applied to refine the pose. The pose estimate is fed to the navigation filter. Relative translation and rotation are treated as decoupled without any loss of generality. A linear model that accounts for eccentricity is implemented and used as the dynamics model inside the filter. The use of a linear model allows the exploitation of robust Kalman filter techniques. For the rotational dynamics, the non-linear equations require a non-linear filter such as the Extended Kalman Filter; however, an alternative formulation based on Lie groups is investigated here. A first numerical validation of the filter is presented, using as input measurements obtained from synthetic images. Moreover, a preliminary description of a future experimental validation of the complete algorithm is presented. A Mitsubishi PA-10 robotic arm is used to replicate the relative motion of the target satellite. A realistic 3D-printed mock-up of a satellite is installed on the robotic arm end-effector. A single fixed camera acquires images of the moving mock-up.
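
The abstract's pose-estimation step (feature detection, matching to an on-board 3D map, and robust PnP) can be sketched as follows. This is not the authors' implementation but a minimal illustration using OpenCV; the map structure, function name, and parameters (e.g. the number of ORB features) are assumptions for the example.

import numpy as np
import cv2

def estimate_pose(image, map_points_3d, map_descriptors, K, dist_coeffs=None):
    """Return the camera pose (Rodrigues rotation vector and translation) w.r.t. the map frame.

    map_points_3d   : (N, 3) 3D points of the target (hypothetical on-board map)
    map_descriptors : (N, 32) ORB descriptors associated with those points
    K               : (3, 3) camera intrinsic matrix
    """
    # Detect salient features in the current monocular image
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(image, None)

    # Match image descriptors against the on-board map descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors, map_descriptors)

    # Build the 3D-to-2D correspondence set
    pts_2d = np.float32([keypoints[m.queryIdx].pt for m in matches])
    pts_3d = np.float32([map_points_3d[m.trainIdx] for m in matches])

    # Solve the Perspective-n-Point problem robustly (RANSAC rejects outlier matches)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts_3d, pts_2d, K, dist_coeffs, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("PnP failed: not enough consistent correspondences")
    return rvec, tvec  # rototranslation: first estimate of the relative pose

The resulting rototranslation would then be refined by motion-only bundle adjustment before being passed to the navigation filter, as described in the abstract.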
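For the translational navigation filter, the abstract notes that a linear dynamics model (accounting for eccentricity) enables standard Kalman filter machinery. The sketch below shows a generic linear predict/update cycle under that assumption; the state-transition matrix Phi, the position-only measurement model, and the state ordering are illustrative placeholders, not the paper's specific formulation.

import numpy as np

def kf_step(x, P, z, Phi, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x   : (6,)  state [relative position, relative velocity]
    P   : (6,6) state covariance
    z   : (3,)  relative position measured by the image-processing step
    Phi : (6,6) state-transition matrix of the linear relative-motion model
    Q,R : process and measurement noise covariances
    """
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # only position is measured

    # Prediction with the linear dynamics model
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Q

    # Update with the position measurement from the PnP step
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new

The rotational states, by contrast, obey non-linear dynamics, which is why the paper turns to an Extended Kalman Filter and investigates a Lie-group formulation instead of this purely linear scheme.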
2017
68th International Astronautical Congress (IAC 2017)
978-151085537-3
Files in this item:
File: LOSIL02-17.pdf — Description: Paper — Publisher's version — Format: Adobe PDF — Size: 3.03 MB — Restricted access

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11311/1061620
Citations: Scopus 1