Distributed randomized model structure selection for NARX models
Avellina, Matteo; Brankovic, Aida; Piroddi, L.
2017-01-01
Abstract
Model structure selection (MSS) is a critical problem in the field of nonlinear identification. In the framework of polynomial nonlinear autoregressive [moving average] models with exogenous input variables, it is formulated as the combinatorial problem of finding the subset of regressors that yields optimal model accuracy. Enlarging the set of potential model terms increases the flexibility of the model, but results in a computational overload and may even jeopardize the ability of the MSS algorithm to find the optimal model. In this work, a distributed optimization scheme is developed to tackle the MSS task for large candidate regressor sets. The regressor set is split among a group of independent processors, each of which executes an MSS routine on its local subset. The processors then exchange information regarding the selected models, and the corresponding regressors are redistributed among all the units for a new MSS round. The procedure is repeated until all processors converge to the same solution. Besides a drastic reduction in computational time, owing to the inherent parallelizability of the algorithm, the proposed distributed optimization scheme can also be beneficial in terms of model accuracy, as it explores the search space more efficiently.
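The abstract outlines a concrete round structure: split the regressor set among processors, run a local MSS routine on each subset, pool the regressors of the selected models, redistribute, and repeat until all processors agree. The following is a minimal sketch of that loop only, not the paper's method: the local routine is swapped for plain greedy forward selection with least squares (the paper uses a randomized MSS method), the "processors" are simulated sequentially in one process, and every name here (`forward_select`, `distributed_mss`, `max_terms`) is an assumption made for this illustration.

```python
# Illustrative sketch of the distributed MSS round structure from the abstract.
# Not the paper's algorithm: local selection and all names are assumptions.
import numpy as np

def forward_select(Phi, y, candidates, max_terms):
    """Greedy forward selection over a local candidate set.

    Phi: (N, M) matrix of all candidate regressors; candidates: the column
    indices this worker may use. Returns the sorted selected indices.
    """
    selected, residual = [], y.copy()
    for _ in range(min(max_terms, len(candidates))):
        best, best_drop = None, 1e-10
        for j in candidates:
            if j in selected:
                continue
            cols = Phi[:, selected + [j]]
            theta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            drop = residual @ residual - np.sum((y - cols @ theta) ** 2)
            if drop > best_drop:
                best, best_drop = j, drop
        if best is None:
            break  # no remaining candidate reduces the residual further
        selected.append(best)
        cols = Phi[:, selected]
        theta, *_ = np.linalg.lstsq(cols, y, rcond=None)
        residual = y - cols @ theta
    return sorted(selected)

def distributed_mss(Phi, y, n_workers=4, max_terms=3, max_rounds=20, seed=0):
    """Round structure from the abstract: split, select locally, pool, repeat."""
    rng = np.random.default_rng(seed)
    all_terms = np.arange(Phi.shape[1])
    # Initial random split of the candidate regressors among the workers.
    local = [list(p) for p in np.array_split(rng.permutation(all_terms), n_workers)]
    for _ in range(max_rounds):
        models = [forward_select(Phi, y, cand, max_terms) for cand in local]
        if all(m == models[0] for m in models):
            return models[0]  # every worker selected the same model: converged
        # Exchange step: pool the regressors of all selected models and hand
        # the pool to every worker, padded with a fresh random share of the
        # remaining candidates for the next round.
        pool = sorted(set().union(*models))
        rest = rng.permutation([j for j in all_terms if j not in pool])
        local = [pool + list(s) for s in np.array_split(rest, n_workers)]
    return models[0]
```

A toy run, under the same assumptions:

```python
# Recover a sparse 3-term model hidden among 60 random candidate regressors.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((200, 60))
y = 2.0 * Phi[:, 3] - 1.5 * Phi[:, 17] + 0.5 * Phi[:, 42] \
    + 0.01 * rng.standard_normal(200)
print(distributed_mss(Phi, y))  # -> [3, 17, 42]
```

In this toy run the workers typically agree by the second round, once the pooled regressors contain all three true terms, which mirrors the convergence criterion stated in the abstract.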
| File | Description | Type | Access | Size | Format |
|---|---|---|---|---|---|
| AvellinaBrankovicPiroddi.pdf | AvellinaBrankovicPiroddi17_preprint | Pre-print (pre-refereeing) | Restricted access | 336.4 kB | Adobe PDF |
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.