Extreme Learning Machine
Computer science
Artificial neural network
Artificial intelligence
Machine learning
Layer (electronics)
Chemistry
Organic chemistry
Authors
Nilesh Rathod, Wankhade Sunil
Identifier
DOI:10.1109/icaccs51430.2021.9442007
Abstract
Over the past decade, the Extreme Learning Machine (ELM) has attracted researchers from many domains in a short span of time because of its noteworthy advantages over conventional single hidden-layer feed-forward neural networks (SLFNs). As a simple and fast feed-forward neural network, ELM has been widely applied in different areas. Unlike a standard SLFN, the input weights and biases of the ELM hidden layer are generated at random, so only a low-cost algorithm is needed to train the model. However, selecting the input weights and biases arbitrarily may give rise to an ill-posed problem. Although ELM has many advantages, it also has potential shortcomings, such as the sensitivity of its performance to the initial state of the hidden neurons, the input weights, and the choice of activation functions. To overcome the limitations of the classic ELM, researchers have proposed numerous metaheuristic algorithms that optimize the various components of ELM, with the aim of improving the model's performance on different kinds of complex problems and applications. Through this study, we therefore survey the different algorithms developed for enhancing ELM performance.
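To make the training step described in the abstract concrete, the following is a minimal Python sketch of a basic ELM, not taken from the paper: the hidden-layer input weights and biases are drawn at random and left fixed, and only the output weights are solved for in closed form via the Moore-Penrose pseudoinverse of the hidden-layer output matrix. The function names, the sigmoid activation, and the hyperparameter n_hidden are illustrative assumptions.

import numpy as np

def sigmoid(z):
    # A common hidden-layer activation; the abstract notes that ELM
    # performance is sensitive to this choice.
    return 1.0 / (1.0 + np.exp(-z))

def train_elm(X, T, n_hidden=50, seed=0):
    # X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights (never updated)
    b = rng.standard_normal(n_hidden)                # random hidden biases (never updated)
    H = sigmoid(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def predict_elm(X, W, b, beta):
    return sigmoid(X @ W + b) @ beta

# Illustrative usage on synthetic data
X = np.random.rand(200, 5)
T = np.sin(X.sum(axis=1, keepdims=True))
W, b, beta = train_elm(X, T)
predictions = predict_elm(X, W, b, beta)

Because only the output weights are learned, training reduces to a single linear solve, which is why ELM is fast; the sensitivity to the randomly chosen input weights and biases is precisely what the metaheuristic variants surveyed in the paper aim to mitigate.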