
Enabling Autonomous Aircraft Taxiing Navigation Through Monocular Vision

Desiderato, Lorenzo; Mendoza Lopetegui, José Joaquín; Tanelli, Mara
2025-01-01

Abstract

In the aviation industry, autonomous vehicles are steadily gaining importance, driven by the goal of reducing accidents by lowering pilots' workload in the most stressful maneuvers and by introducing automated systems. While autonomous flight is widely studied and is a de facto standard in many aircraft, autonomous ground navigation is still in its early stages. With increasing air traffic, managing on-ground operations has become very challenging, especially in low-visibility conditions. Exploiting the Global Positioning System (GPS) is one possible solution; unfortunately, this approach is prone to inaccuracies, disturbances, and jamming. A more robust solution therefore relies on other exteroceptive sensors, such as cameras or radars. In this paper, we propose a two-layer control system architecture capable of performing autonomous taxiing maneuvers using a monocular camera. We adopt a model-oriented approach to control the ground handling dynamics, which is used to actively exploit the information from the camera. The results, obtained using a validated multibody simulator interfaced with a graphical engine, show good tracking performance and robustness to external light conditions.
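The abstract's two-layer idea (an outer guidance layer turning camera measurements into a reference, and an inner layer tracking it) can be illustrated with a minimal sketch. This is a hypothetical toy loop, not the paper's implementation: all function names, gains, and the kinematic model are assumptions chosen for illustration only.

```python
import math

# Hypothetical two-layer taxiing controller (illustrative only, not the
# paper's method). The outer guidance layer maps a camera-derived lateral
# offset from the taxiway centerline to a heading reference, pure-pursuit
# style; the inner tracking layer regulates heading with a proportional law.

def guidance_layer(lateral_offset_m: float, lookahead_m: float = 10.0) -> float:
    """Outer layer: convert the vision-measured centerline offset into a
    desired heading correction toward a look-ahead point."""
    return math.atan2(-lateral_offset_m, lookahead_m)

def tracking_layer(heading_ref: float, heading: float, k_p: float = 1.5) -> float:
    """Inner layer: proportional steering command on the heading error."""
    return k_p * (heading_ref - heading)

def simulate(offset0: float, steps: int = 200, dt: float = 0.05,
             speed: float = 5.0) -> float:
    """Roll out a simple kinematic model: the closed loop should drive an
    initial centerline offset toward zero."""
    offset, heading = offset0, 0.0
    for _ in range(steps):
        heading_ref = guidance_layer(offset)
        steer = tracking_layer(heading_ref, heading)
        heading += steer * dt                     # first-order steering response
        offset += speed * math.sin(heading) * dt  # lateral kinematics
    return offset
```

Starting 2 m off the centerline, the loop converges to within a few centimeters over the 10 s rollout, showing the separation of concerns between the two layers rather than any claim about the paper's actual control laws.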
2025
2025 IEEE Conference on Control Technology and Applications, CCTA 2025
979-8-3315-3908-5
Files for this item:
Enabling_Autonomous_Aircraft_Taxiing_Navigation_Through_Monocular_Vision.pdf — Publisher's version, Adobe PDF, 1.39 MB (restricted access)

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1301306
Citations: Scopus 0