Neural Approaches for Time Series Forecasting
M. Sangiorgio, F. Dercole, G. Guariso
2021-01-01
Abstract
The problem of forecasting a time series with a neural network is well defined when considering a single-step-ahead prediction. The situation becomes more tangled for prediction over a multiple-step horizon, and the task can consequently be framed in different ways. For example, one can develop a single-step predictor to be used recursively along the forecasting horizon (recursive approach) or develop a multi-output model that directly forecasts the entire sequence of output values (multi-output approach). Additionally, the internal structure of each predictor may be a classical feed-forward (FF) network or a recurrent architecture, such as a long short-term memory (LSTM) network. The latter is traditionally trained with the teacher forcing algorithm (LSTM-TF) to speed up the convergence of the optimization, or without it (LSTM-no-TF) in order to avoid the issue of exposure bias. Time series forecasting requires organizing the available data into input-output sequences for parameter training, hyperparameter tuning, and performance testing. An additional developer's choice explored in the chapter is the definition of the similarity index (error metric) that the training procedure must optimize and of the other performance indicators that may be used to examine how well the predictions replicate the test data.
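
As a minimal sketch of the data organization and of the two framings described in the abstract, the following Python illustrates windowing a series into input-output pairs and forecasting a horizon recursively with a one-step model; it assumes NumPy and a generic model exposing a scikit-learn-style predict method, and all names are illustrative rather than taken from the chapter:

    import numpy as np

    def make_windows(series, n_lags, horizon):
        # Slice a 1-D series into (input, output) pairs: each input holds
        # n_lags past values, each output the next `horizon` values.
        X, Y = [], []
        for t in range(len(series) - n_lags - horizon + 1):
            X.append(series[t:t + n_lags])
            Y.append(series[t + n_lags:t + n_lags + horizon])
        return np.array(X), np.array(Y)

    def recursive_forecast(one_step_model, last_window, horizon):
        # Recursive approach: a single-step predictor is applied repeatedly,
        # feeding each prediction back in as the newest input value.
        window = list(last_window)
        preds = []
        for _ in range(horizon):
            y_hat = one_step_model.predict(np.array(window)[None, :]).ravel()[0]
            preds.append(y_hat)
            window = window[1:] + [y_hat]
        return np.array(preds)

    # Multi-output approach, by contrast: a single model maps the same input
    # window directly to all `horizon` future values, e.g. it is fit on (X, Y)
    # with Y of shape (n_samples, horizon), so no prediction feedback occurs.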
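The teacher-forcing distinction for recurrent predictors can be sketched in the same style. In a hypothetical unrolling of a recurrent one-step cell over the forecasting horizon, the only difference between the two training regimes is which value is fed back at each step (again, the cell interface here is assumed, not the chapter's implementation):

    def unroll(cell, state, first_input, targets, teacher_forcing):
        # Unroll a recurrent cell over the horizon. With teacher forcing
        # (LSTM-TF) the ground-truth target is fed back at each step, which
        # speeds up convergence; without it (LSTM-no-TF) the cell's own
        # prediction is fed back, so training matches inference and the
        # exposure-bias issue is avoided.
        outputs, x = [], first_input
        for t in range(len(targets)):
            y_hat, state = cell(x, state)   # hypothetical cell interface
            outputs.append(y_hat)
            x = targets[t] if teacher_forcing else y_hat
        return outputs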