Keywords
Support vector machine, Computer science, Kernel (algebra), Artificial intelligence, Kernel method, Machine learning, Radial basis function kernel, Decision tree, Pattern recognition (psychology), Ranking SVM, Gaussian function, Tree kernel, Classifier (UML), Proportionality (ratio), Polynomial kernel, Data mining, Mathematics, Gaussian distribution, Combinatorics, Physics, Quantum mechanics
Authors
Feiping Nie, Wei Zhu, Xuelong Li
Identifier
DOI: 10.1016/j.neucom.2019.10.051
Abstract
The kernel trick is widely applied to the Support Vector Machine (SVM) to handle linearly inseparable data; the resulting model is known as kernel SVM. However, kernel SVM has a high computational cost in practice, which makes it unsuitable for large-scale data. Moreover, kernel SVM introduces hyper-parameters, e.g., the bandwidth of the Gaussian kernel. Since these hyper-parameters strongly influence the final performance of kernel SVM and are hard to tune, especially on large-scale data, considerable effort may be needed to find good enough parameter values, and improper settings often make the classification performance even lower than that of linear SVM. Inspired by recent progress on linear SVM for large-scale data, we propose a well-designed classifier, Decision Tree SVM (DTSVM), to efficiently handle large-scale linearly inseparable data. DTSVM has a much lower computational cost than kernel SVM, and it introduces almost no hyper-parameters except a few thresholds that can be fixed in practice. Comprehensive experiments on large-scale datasets demonstrate the superiority of the proposed method.
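The abstract's two premises can be illustrated with a small sketch. This is not the paper's DTSVM; it uses a dual-form (kernel) perceptron as a stand-in to show (a) why a kernel is needed for linearly inseparable data, using the XOR pattern that no linear classifier can separate, and (b) where the Gaussian bandwidth hyper-parameter `gamma` enters. All function names here are our own illustration.

```python
# Sketch only: a kernel perceptron with a Gaussian (RBF) kernel, not DTSVM.
# `gamma` is the bandwidth-type hyper-parameter the abstract says is hard to tune.
import math

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel: exp(-gamma * ||a - b||^2)."""
    return math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(a, b)))

def train_kernel_perceptron(X, y, gamma=1.0, epochs=10):
    """Dual form: the model is one weight alpha_i per training point."""
    alpha = [0] * len(X)
    for _ in range(epochs):
        errors = 0
        for i, (xi, yi) in enumerate(zip(X, y)):
            f = sum(a * yj * rbf(xj, xi, gamma)
                    for a, xj, yj in zip(alpha, X, y))
            if (1 if f > 0 else -1) != yi:
                alpha[i] += 1          # perceptron update in the dual
                errors += 1
        if errors == 0:                # converged: all points correct
            break
    return alpha

def predict(alpha, X, y, x, gamma=1.0):
    f = sum(a * yj * rbf(xj, x, gamma) for a, xj, yj in zip(alpha, X, y))
    return 1 if f > 0 else -1

# XOR: linearly inseparable in the input space.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, 1, 1, -1]
alpha = train_kernel_perceptron(X, y, gamma=1.0)
preds = [predict(alpha, X, y, x, gamma=1.0) for x in X]
print(preds)  # → [-1, 1, 1, -1]: all four XOR points classified correctly
```

A linear perceptron (or linear SVM) cannot reach zero error on this data, which is exactly the gap the kernel trick closes; and shrinking or growing `gamma` far from a suitable value degrades the learned boundary, which is the tuning burden the paper's DTSVM aims to avoid.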