Using Q-learning to Automatically Tune Quadcopter PID Controller Online for Fast Altitude Stabilization

Bonarini A.
2022-01-01

Abstract

Unmanned Aerial Vehicles (UAVs), and more specifically quadcopters, need to remain stable during flight. Altitude stability is usually achieved with a PID controller built into the flight-controller software, whose gains must be tuned to reach optimal altitude stabilization. Traditionally, control system engineers tune these gains through extensive modeling of the environment, and the resulting gains may not transfer from one environment or condition to another. As quadcopters spread from the military to the consumer sector, they are being deployed in more complex and challenging environments than ever before; intelligent, self-stabilizing quadcopters are therefore needed to maneuver through such environments and situations. Here we show that, using online reinforcement learning with minimal background knowledge, quadcopter altitude stability can be achieved with a model-free approach. We found that combining background knowledge with an activation function such as the Sigmoid lets altitude stabilization be reached faster and with a small memory footprint. This approach also accelerates development by avoiding extensive simulation before the PID gains are applied to a real-world quadcopter. Our results demonstrate that the trial-and-error approach of reinforcement learning, combined with an activation function and background knowledge, can achieve faster quadcopter altitude stabilization across different environments and conditions.
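
The paper's implementation details are behind restricted access, so as a rough illustration only, the following minimal Python sketch shows one way online, model-free Q-learning could nudge a PID gain toward altitude stabilization, with a Sigmoid squashing the altitude error into a small discrete state table. The toy plant model, state binning, action set, and every constant below are assumptions made for illustration, not the authors' method.

import math
import random

# Hypothetical sketch only: online, model-free Q-learning that nudges a
# PID gain during flight. The plant, state binning, action set, and all
# constants are illustrative assumptions, not the paper's parameters.

def sigmoid(x):
    # Numerically safe logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    ex = math.exp(x)
    return ex / (1.0 + ex)

def discretize(err):
    # Squash the altitude error into (0, 1) with the Sigmoid, then bin it
    # into ten discrete states, keeping the Q-table's memory footprint small.
    return min(int(sigmoid(err) * 10), 9)

ACTIONS = [-0.05, 0.0, 0.05]           # decrease / keep / increase Kp
Q = [[0.0] * len(ACTIONS) for _ in range(10)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

kp, ki, kd = 1.0, 0.1, 0.5             # initial PID gains (assumed)
target, z, vz = 10.0, 0.0, 0.0         # altitude setpoint and toy state
integral, prev_err, dt = 0.0, 0.0, 0.02

state = discretize(target - z)
for step in range(5000):
    # Epsilon-greedy choice of how to nudge Kp this step.
    if random.random() < epsilon:
        a = random.randrange(len(ACTIONS))
    else:
        a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
    kp = max(0.0, kp + ACTIONS[a])

    # PID thrust command on the altitude error, with actuator saturation.
    err = target - z
    integral += err * dt
    thrust = kp * err + ki * integral + kd * (err - prev_err) / dt
    thrust = max(0.0, min(thrust, 50.0))
    prev_err = err

    # Toy vertical dynamics: unit mass under thrust and gravity.
    vz += (thrust - 9.81) * dt
    z += vz * dt

    # Reward small altitude error; standard tabular Q-learning update.
    reward = -abs(target - z)
    next_state = discretize(target - z)
    Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
    state = next_state

print(f"learned Kp = {kp:.2f}, final altitude error = {target - z:.3f} m")

Under these assumptions the Sigmoid keeps the value table at ten states by three actions, which is one plausible reading of the abstract's small-memory claim; the double-integrator plant merely stands in for real quadcopter dynamics.
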
Proceedings of the 2022 IEEE International Conference on Mechatronics and Automation (ICMA)
ISBN: 978-1-6654-0853-0
reinforcement learning (RL)
Q-learning
PID tuning
unmanned aerial vehicle (UAV)
quadcopter
Files in this record:
Using_Q-learning_to_Automatically_Tune_Quadcopter_PID_Controller_Online_for_Fast_Altitude_Stabilization.pdf (Adobe PDF, 936.8 kB, restricted access)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1232844
Citations
  • PubMed Central: not available
  • Scopus: 3
  • Web of Science: 2