POPNASv3: A Pareto-optimal neural architecture search solution for image and time series classification

Falanti A.; Lomurno E.; Ardagna D.; Matteucci M.
2023-01-01

Abstract

The growing demand for machine learning applications in industry has created a need for fast and efficient methods to develop accurate machine learning models. Automated Machine Learning (AutoML) algorithms have emerged as a promising solution to this problem, designing models without the need for human expertise. Given the effectiveness of neural network models, Neural Architecture Search (NAS) specialises in designing their architectures autonomously, with results that rival the most advanced hand-crafted models. However, this approach requires significant computational resources and hardware investment, making it less attractive for real-world applications. This article presents the third version of Pareto-Optimal Progressive Neural Architecture Search (POPNASv3), a new NAS algorithm that employs Sequential Model-Based Optimisation and Pareto optimality. This choice makes POPNASv3 flexible to different hardware environments, computational budgets and tasks, as the algorithm can efficiently explore user-defined search spaces of varying complexity. Pareto optimality extracts the architectures that achieve the best trade-off with respect to the metrics considered, reducing the number of models sampled during the search and dramatically improving time efficiency without sacrificing accuracy. The experiments performed on image and time series classification datasets provide evidence that POPNASv3 can explore a large set of different operators and converge to optimal architectures suited to the type of data provided under different scenarios.
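
The Pareto-optimality step summarised above reduces the pool of sampled networks to the non-dominated trade-offs between the metrics considered. The snippet below is a minimal, hypothetical sketch (not the POPNASv3 implementation): it filters candidate architectures, scored here by predicted accuracy (to maximise) and predicted training time (to minimise), down to the Pareto front; all names and scores are illustrative.

```python
from typing import List, Tuple

# (architecture id, predicted accuracy, predicted training time in seconds);
# purely illustrative values, not results from the paper.
Candidate = Tuple[str, float, float]

def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    """Keep only non-dominated candidates: no other candidate is at least
    as accurate and at least as fast, and strictly better in one metric."""
    front = []
    for name, acc, time_s in candidates:
        dominated = any(
            o_acc >= acc and o_time <= time_s and (o_acc > acc or o_time < time_s)
            for _, o_acc, o_time in candidates
        )
        if not dominated:
            front.append((name, acc, time_s))
    # Sort by accuracy so the trade-off curve reads from best to worst.
    return sorted(front, key=lambda c: c[1], reverse=True)

if __name__ == "__main__":
    sampled = [
        ("cell_A", 0.92, 310.0),
        ("cell_B", 0.90, 120.0),  # less accurate but much faster: kept
        ("cell_C", 0.89, 300.0),  # dominated by cell_A and cell_B: discarded
        ("cell_D", 0.94, 640.0),
    ]
    for name, acc, time_s in pareto_front(sampled):
        print(f"{name}: accuracy={acc:.2f}, time={time_s:.0f}s")
```

Training only the candidates on this front is what allows the search to sample far fewer models per step, which is the source of the time savings reported in the abstract.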
Files in this record:
11311-1249320_Falanti.pdf: open access, Post-Print (DRAFT or Author's Accepted Manuscript, AAM), Adobe PDF, 898.18 kB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1249320
Citations
  • PMC: N/A
  • Scopus: 6
  • Web of Science (ISI): 3