Iterative Feedback Tuning with automated reference model selection
Breschi, Valentina; Formentin, Simone
2024-01-01
Abstract
Iterative Feedback Tuning (IFT) is a direct, data-driven control technique that relies on a reference model to capture the desired behavior of the unknown system. The choice of this hyperparameter is particularly critical, as a poor selection can jeopardize performance and even closed-loop stability. This paper explores the suitability of three search methods (grid search, random search, and successive halving) for automatically tuning the reference model from data, based on a set of user-defined soft specifications on the desired closed-loop behavior. To compare the three methods and demonstrate their effectiveness, we consider a benchmark simulation case study on the control of a mass-spring-damper system. Our results show that successive halving is the most efficient way to run IFT with automatic reference model selection under a finite budget of data-collection time.
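To illustrate the kind of search the abstract refers to, the following is a minimal sketch of successive halving applied to a pool of candidate reference models. It is not the paper's implementation: the candidate set (parameterized here by a reference-model bandwidth) and the cost function `closed_loop_cost` are hypothetical placeholders standing in for an IFT run scored against the user-defined soft specifications.

```python
import math
import random


def closed_loop_cost(reference_bandwidth, budget):
    # Hypothetical surrogate: in practice this would run IFT with the chosen
    # reference model for `budget` units of data-collection time and score the
    # resulting closed loop against the soft specifications (lower is better).
    # Here it is a noisy toy function whose noise shrinks with the budget.
    noise = random.gauss(0.0, 1.0 / math.sqrt(budget))
    return (reference_bandwidth - 3.0) ** 2 + noise


def successive_halving(candidates, total_budget, eta=2):
    """Halve the candidate pool each round while increasing the per-candidate
    data-collection budget, so promising reference models get refined scores."""
    rounds = max(1, math.ceil(math.log(len(candidates), eta)))
    budget = total_budget / (rounds * len(candidates))
    pool = list(candidates)
    while len(pool) > 1:
        # Score every surviving candidate with the current budget and keep
        # the best 1/eta fraction for the next, better-funded round.
        scored = sorted(pool, key=lambda m: closed_loop_cost(m, budget))
        pool = scored[: max(1, len(pool) // eta)]
        budget *= eta
    return pool[0]


if __name__ == "__main__":
    # Candidate reference-model bandwidths (rad/s), purely illustrative.
    bandwidths = [0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 10.0]
    best = successive_halving(bandwidths, total_budget=80.0)
    print("Selected reference-model bandwidth:", best)
```

The appeal over grid or random search, as the abstract notes, is budget efficiency: weak reference-model candidates are discarded after cheap, low-budget evaluations, and the remaining data-collection time is concentrated on the most promising ones.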


