
Vision based navigation for autonomous planetary landing

Losi, Luca; Lavagna, Michèle
2017-01-01

Abstract

Vision-based algorithms for relative navigation are currently a trending topic in the computer vision field, but they are still not widely exploited in space exploration missions because of their high computational cost and low TRL. In recent years, however, several companies and agencies have taken steps in this direction by developing dedicated algorithms and hardware (e.g. LVS and APLNav by NASA, SINPLEX by DLR), improving the prospects of adapting computer vision techniques to space applications. Vision-based algorithms are a very promising tool for planetary landing: cameras are cheap, reliable sensors with great potential in terms of the accuracy achievable in spacecraft state reconstruction. This paper presents a vision-based algorithm for spacecraft Terrain Relative Navigation during landing, designed from scratch at PoliMi DAER and based on a monocular camera operating in the visible spectrum. The navigation algorithm works by processing the images from the monocular camera. The first two frames are used for initialization: features are extracted from the first image and tracked into the second, yielding a set of 2D-to-2D correspondences; the relative pose between the two frames is computed and a sparse map of 3D points is initialized by triangulation. For each subsequent frame, 2D features are tracked and associated with the 3D map: the resulting set of 3D-to-2D correspondences is used to solve the Perspective-n-Point problem, which, combined with a RANSAC routine that rejects incoming outliers (wrong matches between the current image and the map), yields a first estimate of the relative pose of the camera. Bundle adjustment, an optimization technique widespread in computer vision, is applied to both the map and the relative pose during initialization, and at each subsequent step to the estimated camera pose only, to increase the overall navigation accuracy.
Each time the number of tracked features drops below a fixed threshold, a new map is triangulated and merged with the existing one. A performance assessment of the navigation system using synthetic video sequences of different landing trajectories over the lunar surface is presented, along with preliminary results of the experimental calibration and verification campaign on the PoliMi DAER facility for GNC testing with hardware in the loop (HIL). The facility includes a Mitsubishi PA-10 robotic arm that reproduces the lander's dynamics, with a monocular camera mounted on its tip, a calibrated 2.4 x 4 m lunar-surface diorama, and a dimmable 5600 K LED lighting system that provides a fully controllable illumination environment.
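The map-initialization step described in the abstract — recovering a 3D point from a pair of 2D correspondences once the relative pose between two frames is known — can be sketched with linear (DLT) triangulation. The code below is not from the paper; it is a minimal numpy illustration, and the camera intrinsics, baseline, and test point are all invented for the example.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices (intrinsics x pose).
    x1, x2 : 2D pixel coordinates of the same feature in each frame.
    Returns the 3D point in the first camera's frame.
    """
    # Each correspondence contributes two linear constraints A X = 0
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: right singular vector
    # associated with the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Illustrative setup: pinhole intrinsics and a 1 m baseline along x
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both views, then triangulate it back
X_true = np.array([0.5, -0.2, 10.0])
x1h = P1 @ np.append(X_true, 1.0)
x2h = P2 @ np.append(X_true, 1.0)
X_est = triangulate_point(P1, P2, x1h[:2] / x1h[2], x2h[:2] / x2h[2])
```

In the pipeline described above, this operation is applied to every tracked feature pair after the two-frame relative pose is estimated, producing the sparse 3D map that the later PnP step matches new frames against.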
2017
68th International Astronautical Congress (IAC 2017)
9781510855373
Planetary landing; Terrain Relative Navigation; Vision Based Navigation; Visual Odometry; Aerospace Engineering; Astronomy and Astrophysics; Space and Planetary Science
Files in this record:
LOSIL01-17.pdf — Description: Paper (publisher's version); Adobe PDF; 2.68 MB; restricted access
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1060439
Citations
  • Scopus 0