Computer science
Artificial neural network
Estimator
Artificial intelligence
Architecture
Network architecture
Machine learning
Data mining
Statistics
Mathematics
Art
Computer security
Visual arts
Authors
Yanxi Li, Minjing Dong, Yunhe Wang, Chang Xu
Identifier
DOI:10.1109/tpami.2022.3217648
Abstract
This paper searches for the optimal neural architecture by minimizing a proxy of the validation loss. Existing neural architecture search (NAS) methods aim to discover the neural architecture that best fits the validation examples given the up-to-date network weights. These intermediate validation results are invaluable but have not been fully explored. We propose to approximate the validation loss landscape by learning a mapping from neural architectures to their corresponding validation losses. The optimal neural architecture can thus be easily identified as the minimum of this proxy validation loss landscape. To improve efficiency, a novel architecture sampling strategy is developed for the approximation of the proxy validation loss landscape. We also propose an operation importance weight (OIW) to balance the randomness and certainty of architecture sampling. The representation of a neural architecture is learned through a graph autoencoder (GAE) over both the architectures sampled during search and randomly generated architectures. We provide theoretical analyses on the validation loss estimator learned with our sampling strategy. Experimental results demonstrate that the proposed proxy validation loss landscape is effective in both differentiable NAS and evolutionary-algorithm-based (EA-based) NAS.
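The abstract describes the core recipe at a high level: collect (architecture, validation loss) pairs during search, fit a surrogate that maps an architecture encoding to a predicted validation loss, and select the architecture that minimizes this proxy landscape. The sketch below illustrates only that general recipe on a toy search space; the one-hot encoding, the ridge-regression surrogate, the fake `measured_val_loss`, and all constants are illustrative assumptions, not the paper's actual GAE representation, OIW-guided sampling, or experimental setup.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Toy search space: an architecture assigns one operation to each of a few edges.
NUM_EDGES = 6
NUM_OPS = 4  # e.g. {skip, 3x3 conv, 5x5 conv, pooling}

# Hidden per-edge operation costs used to fake an "expensive" validation evaluation.
HIDDEN_COST = np.random.default_rng(1).uniform(0.0, 1.0, size=(NUM_EDGES, NUM_OPS))

def encode(arch):
    """One-hot encode an architecture (a sequence of op indices, one per edge)."""
    x = np.zeros(NUM_EDGES * NUM_OPS)
    for edge, op in enumerate(arch):
        x[edge * NUM_OPS + op] = 1.0
    return x

def measured_val_loss(arch):
    """Stand-in for evaluating an architecture on validation data (toy, noisy)."""
    return float(HIDDEN_COST[np.arange(NUM_EDGES), np.asarray(arch)].sum()
                 + 0.05 * rng.normal())

# 1) Collect (architecture, validation loss) pairs. The paper reuses the
#    intermediate validation results produced during search; here we simply
#    sample architectures uniformly at random.
samples = [tuple(rng.integers(0, NUM_OPS, size=NUM_EDGES)) for _ in range(200)]
X = np.stack([encode(a) for a in samples])
y = np.array([measured_val_loss(a) for a in samples])

# 2) Fit a ridge-regression surrogate: a crude proxy of the validation loss
#    landscape over architecture encodings.
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def proxy_loss(arch):
    return float(encode(arch) @ w)

# 3) The selected architecture is the minimizer of the proxy landscape.
#    The toy space has only 4**6 = 4096 candidates, so search exhaustively.
best = min(product(range(NUM_OPS), repeat=NUM_EDGES), key=proxy_loss)
print("proxy-optimal architecture:", best)
print("proxy loss:", proxy_loss(best))
print("measured validation loss:", measured_val_loss(best))
```

In the paper, the architecture representation would instead come from a graph autoencoder and the sampling would be guided by the operation importance weights; the sketch swaps both for one-hot codes and uniform random sampling purely to stay self-contained.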