Extreme learning machine
Computer science
Perceptron
Classifier (UML)
Artificial intelligence
Generalization
Artificial neural network
Feedforward
Machine learning
Ensemble forecasting
Feedforward neural network
Random subspace method
Ensemble learning
Pattern recognition (psychology)
Mathematics
Engineering
Mathematical analysis
Control engineering
Authors
Boon Pin Ooi,Norasmadi Abdul Rahim,Maz Jamilah Masnan,Ammar Zakaria
Source
Journal: Journal of Physics: Conference Series
Publisher: IOP Publishing
Date: 2021-11-01
Volume/Issue: 2107 (1): 012013
Cited by: 1
Identifiers
DOI:10.1088/1742-6596/2107/1/012013
Abstract
Extreme learning machine (ELM) is a special type of single hidden layer feedforward neural network that emphasizes training speed and good generalization. In the ELM model the weights of the hidden neurons need not be tuned, and the weights of the output neurons are calculated in closed form using the Moore-Penrose generalized inverse. The ELM classifier is therefore well suited to a homogeneous ensemble model, because the untuned random hidden weights promote diversity even when every member sees the same training data. This paper studies the effectiveness of ELM ensemble models in solving small sample-sized classification problems. The research involves two variants of the ensemble model: the plain ELM ensemble with majority voting (ELE), and the random subspace method (RS-ELM). To simulate the small-sample case, only 30% of the total data is used as training data. Experiment results show that the RS-ELM model can outperform a multi-layer perceptron (MLP) model under the assumptions of a Friedman test. Furthermore, the ELE model performs similarly to an MLP model under the same assumptions.
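The abstract summarizes how the models work but gives no code, so the following NumPy sketch illustrates the construction under stated assumptions. The class names (ELMClassifier, RandomSubspaceELM), the sigmoid activation, and all hyperparameters (hidden-layer size, number of ensemble members, feature-subspace ratio) are illustrative choices, not the authors' settings. It captures the points the abstract relies on: hidden weights are drawn at random and never tuned, output weights are obtained in closed form from the Moore-Penrose generalized inverse, and the ensemble combines such classifiers by majority voting, optionally training each member on a random subset of the features (random subspace method).

import numpy as np

class ELMClassifier:
    """Single-hidden-layer feedforward network whose hidden weights are random
    and left untuned; only the output weights are fitted, in closed form,
    via the Moore-Penrose generalized inverse (pseudoinverse)."""

    def __init__(self, n_hidden=100, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng if rng is not None else np.random.default_rng()

    def _hidden(self, X):
        # Random projection followed by a sigmoid activation
        # (the activation choice is an assumption, not from the paper).
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        T = (y[:, None] == self.classes_[None, :]).astype(float)  # one-hot targets
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T  # output weights via pseudoinverse
        return self

    def predict(self, X):
        scores = self._hidden(np.asarray(X, dtype=float)) @ self.beta
        return self.classes_[np.argmax(scores, axis=1)]


class RandomSubspaceELM:
    """Homogeneous ELM ensemble: each member is trained on a random subset of
    the features (random subspace method) and the ensemble predicts by
    majority voting. Setting feature_ratio=1.0 recovers the plain
    majority-voting ensemble (ELE); members still differ through their
    random hidden weights."""

    def __init__(self, n_estimators=10, feature_ratio=0.7, n_hidden=100, seed=0):
        self.n_estimators = n_estimators
        self.feature_ratio = feature_ratio
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        n_features = X.shape[1]
        k = max(1, int(round(self.feature_ratio * n_features)))
        self.members_ = []
        for _ in range(self.n_estimators):
            idx = self.rng.choice(n_features, size=k, replace=False)
            clf = ELMClassifier(self.n_hidden, rng=self.rng).fit(X[:, idx], y)
            self.members_.append((idx, clf))
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        votes = np.stack([clf.predict(X[:, idx]) for idx, clf in self.members_])
        # Majority vote per sample; assumes integer class labels (0, 1, 2, ...).
        return np.array([np.bincount(votes[:, j]).argmax()
                         for j in range(X.shape[0])])

To mirror the small-sample protocol described in the abstract, one would fit these models on a random 30% split of the data, evaluate on the remaining 70%, and compare against an MLP baseline.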