A reinforcement learning methodology to hierarchical sliding-mode surface H∞ control of nonlinear systems via a dynamic event-triggered mechanism

Karimi, Hamid Reza;
2025-01-01

Abstract

This paper addresses the problem of hierarchical sliding-mode surface (HSMS) H∞ control design for nonlinear systems via a dynamic event-triggered mechanism. Initially, the HSMS containing the system states is constructed to enhance the system's response rate and robustness. By assigning a cost function associated with the HSMS, the H∞ control problem is equivalently transformed into a zero-sum game problem, where the control policy and the exogenous disturbance are treated as two players with opposite interests. Afterwards, a novel dynamic event-triggered mechanism is designed, whose triggering condition depends on the HSMS variables. To solve the corresponding event-triggered Hamilton–Jacobi–Isaacs equation, a single-critic reinforcement learning algorithm is developed, which removes the approximation error introduced by the actor network in the conventional actor–critic structure. Based on Lyapunov stability theory, all signals of the considered system are rigorously proven to be bounded. Finally, the validity of the proposed control method is demonstrated through simulations of a tunnel-diode circuit system and a mass-spring-damper system.
Keywords: H∞ control; dynamic event-triggered mechanism; hierarchical sliding-mode surface technique; neural network; reinforcement learning
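As a minimal illustration of the dynamic event-triggered idea summarized in the abstract, the sketch below simulates a Girard-style dynamic triggering rule on a scalar system. It is not the paper's scheme: the system x' = -x + u + d, the sliding-like variable s = x, the gain k, and the parameters sigma, lam, theta are all illustrative assumptions; the paper's mechanism operates on HSMS variables with a learned control policy.

```python
import numpy as np

# Hedged sketch (not the paper's exact scheme): a dynamic event-triggered
# controller for the scalar system x' = -x + u + d, using the sliding-like
# variable s = x. All parameters below are illustrative assumptions.
def simulate(T=10.0, dt=1e-3, sigma=0.5, lam=1.0, theta=2.0, k=2.0):
    n = int(T / dt)
    x, eta = 2.0, 1.0        # state and internal dynamic variable (eta >= 0)
    u, x_hat = 0.0, x        # control held between events; last sampled state
    events = 0
    for i in range(n):
        d = 0.1 * np.sin(2 * np.pi * i * dt)  # bounded exogenous disturbance
        e = x_hat - x                          # sampling-induced error
        s = x                                  # sliding-like variable
        # dynamic triggering rule: fire when the static condition
        # e^2 >= sigma * s^2 is violated beyond the budget eta / theta
        if e * e >= sigma * s * s + eta / theta:
            x_hat = x          # sample the state
            u = -k * x_hat     # update the control only at events
            events += 1
        # internal-variable dynamics; the clamp keeps eta nonnegative
        eta += dt * (-lam * eta + sigma * s * s - e * e)
        eta = max(eta, 0.0)
        x += dt * (-x + u + d)
    return x, events, n

x_final, events, steps = simulate()
print(abs(x_final), events, steps)
```

The internal variable eta acts as a budget that relaxes the static triggering condition, so the control is updated far less often than once per integration step while the state remains bounded near the origin.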
Files associated with this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11311/1288219