Virtual Reference Feedback Tuning (VRFT) is a well-established tool for designing model-reference controllers directly from input-output data. A major drawback of the method is that the variance of the controller estimate is high, due to the instrumental variable technique employed to obtain unbiased estimates. Recent results on kernel-based regularization in system identification have shown that a good bias-variance trade-off can be achieved by suitably tuning a penalty term in the identification criterion within a Bayesian framework. In this paper, we apply such a regularization approach to the VRFT method and show that significant performance improvements can also be obtained in controller design. A benchmark example illustrates the effectiveness of the proposed approach.
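The regularization idea referred to above can be illustrated with a minimal sketch: a least-squares fit whose criterion is augmented with a penalty term weighted by a kernel (prior covariance) matrix, so that a tuning parameter trades bias against variance. This is a generic illustrative example, not the paper's algorithm; all names, data, and the choice of an identity kernel are assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's method): regularized least squares,
# where a penalty term controls the bias-variance trade-off, in the spirit
# of kernel-based (Bayesian) regularization for identification.
rng = np.random.default_rng(0)

n, d = 30, 10
X = rng.standard_normal((n, d))                     # regressor matrix (synthetic)
theta_true = rng.standard_normal(d)                 # "true" parameters (synthetic)
y = X @ theta_true + 0.5 * rng.standard_normal(n)   # noisy measurements


def regularized_ls(X, y, lam, P=None):
    """Minimize ||y - X @ theta||^2 + lam * theta.T @ inv(P) @ theta.

    P plays the role of a kernel (prior covariance); P = I gives ridge
    regression. lam is the penalty weight tuned for the trade-off.
    """
    d = X.shape[1]
    if P is None:
        P = np.eye(d)
    return np.linalg.solve(X.T @ X + lam * np.linalg.inv(P), X.T @ y)


theta_ls = regularized_ls(X, y, lam=0.0)    # plain least squares (no penalty)
theta_reg = regularized_ls(X, y, lam=5.0)   # penalized (shrunken) estimate

# The penalized estimate has a smaller norm: some bias is accepted in
# exchange for a reduction in variance.
print(np.linalg.norm(theta_ls), np.linalg.norm(theta_reg))
```

With `P = I` this reduces to ridge regression; a kernel-based approach would instead build `P` from prior knowledge (e.g. smoothness or stability of the estimated response) and tune `lam` by marginal-likelihood maximization in a Bayesian framework.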
|Title:||Virtual Reference Feedback Tuning with Bayesian regularization|
|Publication date:||2016|
|Appears in type:||04.1 Contribution in Conference Proceedings|