On Gain Scheduling Trajectory Stabilization for Nonlinear Systems: Theoretical Insights and Experimental Results

Kessler, Nicolas; Fagiano, Lorenzo
2025-01-01

Abstract

Steering a nonlinear system from an initial state to a desired one is a common task in control. While a nominal trajectory can be obtained rather systematically using a model, for example, via numerical optimization, heuristics, or reinforcement learning, the design of a computationally fast and reliable feedback control law that guarantees bounded deviations around the found trajectory can be much more involved. An approach that does not require high online computational power and is well-accepted in industry is gain scheduling. The results presented here pertain to the boundedness guarantees and the set of safe initial conditions of gain-scheduled control laws, based on subsequent linearizations along the reference trajectory. The approach bounds the uncertainty arising from the linearization process, builds polytopic sets of linear time-varying systems covering the nonlinear dynamics along the trajectory, and exploits sufficient conditions for the existence of a robust polyquadratic Lyapunov function to attempt the derivation of the desired gain-scheduled controller via the solution of linear matrix inequalities (LMIs). A result to estimate an ellipsoidal set of safe initial conditions is also provided. Moreover, arbitrary scheduling strategies between the control gains are considered in the analysis, and the method can also be used to assess the boundedness properties obtained with an existing gain-scheduled law. The approach is demonstrated experimentally on a small quadcopter, as well as in simulation to design a scheduled controller for a chemical reactor model and to validate an existing control law for a gantry crane model.
Keywords: gain scheduling; linear matrix inequalities; nonlinear systems; trajectory stabilization
Files in this record:
2025.Intl J Robust Nonlinear - 2025 - Kessler - On Gain Scheduling Trajectory Stabilization for Nonlinear Systems Theoretical.pdf
Open access — Publisher's version — 2 MB — Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1311126
Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science (ISI): 1