
Distributed randomized model structure selection for NARX models

Avellina, Matteo; Brankovic, Aida; Piroddi, L.
2017-01-01

Abstract

Model structure selection (MSS) is a critical problem in nonlinear system identification. In the framework of polynomial nonlinear autoregressive [moving average] models with exogenous input variables, it is formulated as the combinatorial problem of finding the subset of regressors that yields the best model accuracy. Enlarging the set of candidate model terms increases the flexibility of the model, but entails a heavy computational burden and may even jeopardize the ability of the MSS algorithm to find the optimal model. In this work, a distributed optimization scheme is developed to tackle the MSS task for large candidate regressor sets. The regressor set is split among a group of independent processors, each of which executes an MSS routine on its local subset. The processors then exchange information on the selected models, and the corresponding regressors are redistributed among all the units for a new MSS round. The procedure is repeated until all processors converge to the same solution. Besides a drastic reduction in computational time, owing to the inherent parallelizability of the algorithm, the proposed distributed optimization scheme can also be beneficial in terms of model accuracy, thanks to a more efficient exploration of the search space.
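The iterative scheme outlined in the abstract — partition the candidate regressors among processors, run a local MSS routine on each subset, pool the selected regressors, redistribute, and repeat until all processors agree — can be sketched on a toy linear-in-the-parameters problem. This is a minimal illustration, not the paper's actual randomized MSS algorithm: the greedy forward selection below is a stand-in for the local MSS routine, and the redistribution policy (pooled selections plus a random share of the remaining candidates) is an assumption; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 60 candidate regressors, only 3 of which generate the output.
N, n_cand = 400, 60
X = rng.standard_normal((N, n_cand))
true_support = [3, 17, 42]
y = X[:, true_support] @ np.array([1.5, -2.0, 1.0])

def forward_select(candidates, tol=1e-6):
    """Greedy forward selection on a local candidate subset
    (stand-in for the local MSS routine)."""
    selected, resid_mse = [], float(np.mean(y ** 2))
    remaining = list(candidates)
    while remaining:
        best, best_mse = None, resid_mse
        for j in remaining:
            cols = X[:, selected + [j]]
            theta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            mse = float(np.mean((y - cols @ theta) ** 2))
            if mse < best_mse - tol:
                best, best_mse = j, mse
        if best is None:          # no term improves the fit enough: stop
            break
        selected.append(best)
        remaining.remove(best)
        resid_mse = best_mse
    return frozenset(selected)

# Distributed rounds: partition the candidates among P processors,
# select locally, pool the selections, redistribute, repeat.
P = 4
local_sets = [set(p.tolist()) for p in np.array_split(rng.permutation(n_cand), P)]
for _ in range(10):
    picks = [forward_select(s) for s in local_sets]
    if all(p == picks[0] for p in picks):   # consensus reached
        break
    pooled = set().union(*picks)            # exchange selected regressors
    leftovers = rng.permutation(sorted(set(range(n_cand)) - pooled))
    local_sets = [pooled | set(s.tolist()) for s in np.array_split(leftovers, P)]

print(sorted(picks[0]))
```

In this noiseless example the processors converge in a couple of rounds: once every local set contains the three true regressors (because each was selected somewhere in the first round and then pooled), every local forward selection returns exactly the true support and the consensus check stops the loop.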
Distributed optimization; Model structure selection; Nonlinear model identification; Parallel processing; Polynomial NARX models; Randomized algorithms; Control and Systems Engineering; Signal Processing; Electrical and Electronic Engineering
Files in this product:
File: AvellinaBrankovicPiroddi.pdf (Restricted access)
Description: AvellinaBrankovicPiroddi17_preprint
Type: Pre-Print (or Pre-Refereeing)
Size: 336.4 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1038906
Citations
  • Scopus: 10
  • ISI (Web of Science): 7