Keywords
Hyperparameter, Hyperparameter optimization, Computer science, Series, Heuristic, Context, Artificial intelligence, Time series, Machine learning, Bayesian optimization, Set, Echo, Algorithm, Data mining, Support vector machine, Computer network, Biology, Operating system, Paleontology, Programming language
Authors
Jacob Reinier Maat, Nikolaos Gianniotis, Pavlos Protopapas
Identifiers
DOI: 10.1109/ijcnn.2018.8489094
Abstract
Echo State Networks (ESNs) are recurrent neural networks that train only their output layer, thereby precluding the need to backpropagate gradients through time, which leads to significant computational gains. Nevertheless, a common issue with ESNs is determining their hyperparameters, which are crucial for instantiating a well-performing reservoir but are often set manually or using heuristics. In this work we optimize the ESN hyperparameters using Bayesian optimization, which, given a limited budget of function evaluations, outperforms a grid search strategy. In the context of large volumes of time series data, such as light curves in the field of astronomy, we can further reduce the optimization cost of ESNs. In particular, we wish to avoid tuning hyperparameters per individual time series, as this is costly; instead, we want to find ESNs whose hyperparameters perform well not just on individual time series but on groups of similar time series, without sacrificing predictive performance significantly. This naturally leads to a notion of clusters, where each cluster is represented by an ESN tuned to model a group of time series with similar temporal behavior. We demonstrate this approach both on synthetic datasets and on real-world light curves from the MACHO survey. We show that our approach yields a significant reduction in the number of ESN models required to model a whole dataset, while retaining predictive performance for the series in each cluster.
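To make the abstract's two ingredients concrete, below is a minimal NumPy sketch of a standard leaky-integrator ESN where only the linear readout is fit (ridge regression, no backpropagation through time), wrapped in a one-step-ahead prediction error that a Bayesian optimizer could minimize over the usual hyperparameters (spectral radius, input scaling, leak rate). This is not the authors' implementation; the function names, reservoir size, washout length, and search ranges are all illustrative assumptions.

```python
import numpy as np

def make_esn(n_inputs, n_reservoir, spectral_radius, input_scaling, leak_rate, seed=0):
    """Build random input and reservoir weights; neither is trained."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-input_scaling, input_scaling, (n_reservoir, n_inputs + 1))  # +1: bias column
    W = rng.standard_normal((n_reservoir, n_reservoir))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # rescale to desired spectral radius
    return W_in, W

def run_reservoir(W_in, W, u, leak_rate):
    """Drive the reservoir with an input sequence u of shape (T, n_inputs)."""
    x = np.zeros(W.shape[0])
    states = np.empty((u.shape[0], W.shape[0]))
    for t in range(u.shape[0]):
        pre = W_in @ np.concatenate(([1.0], u[t])) + W @ x
        x = (1.0 - leak_rate) * x + leak_rate * np.tanh(pre)  # leaky-integrator update
        states[t] = x
    return states

def fit_readout(states, targets, ridge=1e-6):
    """Train only the linear readout by ridge regression -- no BPTT."""
    X = np.hstack([states, np.ones((len(states), 1))])
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ targets)

def esn_objective(params, series):
    """One-step-ahead MSE of an ESN with the given hyperparameters (in-sample, for brevity)."""
    spectral_radius, input_scaling, leak_rate = params
    u, y = series[:-1, None], series[1:, None]          # inputs and next-step targets
    W_in, W = make_esn(1, 200, spectral_radius, input_scaling, leak_rate)
    states = run_reservoir(W_in, W, u, leak_rate)
    washout = 50                                        # discard the initial transient
    W_out = fit_readout(states[washout:], y[washout:])
    pred = np.hstack([states, np.ones((len(states), 1))]) @ W_out
    return float(np.mean((pred[washout:] - y[washout:]) ** 2))

series = np.sin(np.linspace(0, 60, 1500))               # toy stand-in for a light curve
print(esn_objective([0.9, 0.5, 0.3], series))

# With e.g. scikit-optimize, the hyperparameters could then be tuned as:
#   from skopt import gp_minimize
#   res = gp_minimize(lambda p: esn_objective(p, series),
#                     [(0.1, 1.4), (0.01, 2.0), (0.05, 1.0)], n_calls=50)
```

In the clustered setting the paper describes, the same objective would be averaged over all series assigned to a cluster, so that one hyperparameter setting (and one ESN) serves the whole group rather than each series individually.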