Keywords
Computer science, Pareto principle, Artificial neural network, Machine learning, Artificial intelligence, Pareto optimality, Architecture, Set (abstract data type), Multi-objective optimization, Mathematical optimization, Mathematics, Art, Visual arts, Programming language
Authors
Andrea Falanti, Eugenio Lomurno, Danilo Ardagna, Matteo Matteucci
Identifiers
DOI: 10.1016/j.asoc.2023.110555
Abstract
The growing demand for machine learning applications in industry has created a need for fast and efficient methods to develop accurate machine learning models. Automated Machine Learning (AutoML) algorithms have emerged as a promising solution to this problem, designing models without the need for human expertise. Given the effectiveness of neural network models, Neural Architecture Search (NAS) specialises in designing their architectures autonomously, with results that rival the most advanced hand-crafted models. However, this approach requires significant computational resources and hardware investment, making it less attractive for real-world applications. This article presents the third version of Pareto-Optimal Progressive Neural Architecture Search (POPNASv3), a new NAS algorithm that employs Sequential Model-Based Optimisation and Pareto optimality. This choice makes POPNASv3 flexible to different hardware environments, computational budgets and tasks, as the algorithm can efficiently explore user-defined search spaces of varying complexity. Pareto optimality extracts the architectures that achieve the best trade-off with respect to the metrics considered, reducing the number of models sampled during the search and dramatically improving time efficiency without sacrificing accuracy. The experiments performed on image and time series classification datasets provide evidence that POPNASv3 can explore a large set of different operators and converge to optimal architectures suited to the type of data provided under different scenarios.
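The abstract's central idea of using Pareto optimality to prune candidate architectures can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes, for illustration only, that each candidate is scored by two metrics, accuracy (higher is better) and training time (lower is better), and keeps only the non-dominated candidates:

```python
def pareto_front(models):
    """Return the candidates not dominated by any other candidate.

    Each model is a (accuracy, time) pair. Model A dominates model B
    if A is at least as accurate and at least as fast as B, and
    strictly better in at least one of the two metrics.
    """
    front = []
    for acc, time in models:
        dominated = any(
            (a >= acc and t <= time) and (a > acc or t < time)
            for a, t in models
        )
        if not dominated:
            front.append((acc, time))
    return front


# Hypothetical candidate architectures: (validation accuracy, training time in s).
candidates = [(0.91, 120.0), (0.89, 40.0), (0.93, 300.0), (0.88, 90.0)]

# (0.88, 90.0) is dominated by (0.89, 40.0): less accurate and slower.
print(sorted(pareto_front(candidates)))
# → [(0.89, 40.0), (0.91, 120.0), (0.93, 300.0)]
```

Only the models on this front would need to be evaluated further, which is the mechanism by which the search reduces the number of sampled models.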