Computer science
Heuristic
Macro
Pipeline (software)
Initialization
Beam search
Architecture
Search algorithm
Artificial intelligence
Theoretical computer science
Machine learning
Algorithm
Operating system
Art
Visual arts
Programming language
Authors
Vasco Lopes, Luís A. Alexandre
Identifier
DOI: 10.1109/tnnls.2023.3326648
Abstract
Networks found with neural architecture search (NAS) achieve state-of-the-art performance in a variety of tasks, outperforming human-designed networks. However, most NAS methods rely heavily on human-defined assumptions that constrain the search: the architecture's outer skeleton, number of layers, parameter heuristics, and search spaces. In addition, common search spaces consist of repeatable modules (cells) instead of fully exploring the architecture's search space by designing entire architectures (macro-search). Imposing such constraints requires deep human expertise and restricts the search to predefined settings. In this article, we propose less constrained macro-neural architecture search (LCMNAS), a method that pushes NAS to less constrained search spaces by performing macro-search without relying on predefined heuristics or bounded search spaces. LCMNAS introduces three components for the NAS pipeline: 1) a method that leverages information about well-known architectures to autonomously generate complex search spaces based on weighted directed graphs (WDGs) with hidden properties; 2) an evolutionary search strategy that generates complete architectures from scratch; and 3) a mixed-performance estimation approach that combines information about architectures at the initialization stage with lower fidelity estimates to infer their trainability and capacity to model complex functions. We present experiments on 14 different datasets showing that LCMNAS is capable of generating both cell- and macro-based architectures with minimal GPU computation and state-of-the-art results. Moreover, we conduct extensive studies on the importance of different NAS components in both cell- and macro-based settings. The code for reproducibility is publicly available at https://github.com/VascoLopes/LCMNAS.
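To make the first two components of the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' implementation; see the linked repository for that): it builds a weighted directed graph of layer-type transitions from toy layer sequences standing in for well-known architectures, then samples a complete macro-architecture as a weighted random walk over that graph. All names, layer labels, and parameters here are illustrative assumptions.

```python
# Hypothetical sketch, not the LCMNAS code: derive a weighted directed graph
# (WDG) of layer-type transitions from known architectures, then sample a new
# macro-architecture from scratch by a weighted random walk over the graph.
import random
from collections import defaultdict

# Toy layer sequences standing in for "well-known architectures"; the real
# method extracts this information automatically from existing models.
KNOWN_ARCHS = [
    ["conv", "bn", "relu", "conv", "bn", "relu", "pool", "fc"],
    ["conv", "relu", "pool", "conv", "relu", "pool", "fc", "fc"],
]

def build_wdg(architectures):
    """Count layer-to-layer transitions; edge weight = transition frequency."""
    wdg = defaultdict(lambda: defaultdict(int))
    for layers in architectures:
        for src, dst in zip(layers, layers[1:]):
            wdg[src][dst] += 1
    return wdg

def sample_architecture(wdg, start="conv", end="fc", max_len=10):
    """Generate a full architecture by a weighted random walk over the WDG."""
    arch = [start]
    while len(arch) < max_len and arch[-1] != end:
        successors = wdg.get(arch[-1])
        if not successors:
            break
        nodes, weights = zip(*successors.items())
        arch.append(random.choices(nodes, weights=weights, k=1)[0])
    return arch

if __name__ == "__main__":
    graph = build_wdg(KNOWN_ARCHS)
    print(sample_architecture(graph))  # e.g. ['conv', 'bn', 'relu', ..., 'fc']
```

In this simplified view, an evolutionary search would repeatedly sample and mutate such walks, and the mixed-performance estimation described in the abstract would score each candidate from initialization-time signals plus cheap low-fidelity training instead of full training.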