Keywords
Optics (focusing)
Inductive bias
Generalization
ID3
Computer science
Logarithm
Artificial intelligence
Machine learning
Probably approximately correct learning
Algorithm
Decision tree
Generalization error
Mathematics
Multi-task learning
Decision tree learning
Unsupervised learning
Task (project management)
Economics
Management
Mathematical analysis
Physics
Optics
Author(s)
Hussein Almuallim, Thomas G. Dietterich
Source
Venue: National Conference on Artificial Intelligence
Date: 1991-07-14
Pages: 547-552
Citations: 625
Abstract
In many domains, an appropriate inductive bias is the MIN-FEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This paper defines and studies this bias. First, it is shown that any learning algorithm implementing the MIN-FEATURES bias requires Θ((1/ε) ln(1/δ) + (1/ε)[2^p + p ln n]) training examples to guarantee PAC-learning a concept having p relevant features out of n available features. This bound is only logarithmic in the number of irrelevant features. The paper also presents a quasi-polynomial-time algorithm, FOCUS, which implements MIN-FEATURES. Experimental studies are presented that compare FOCUS to the ID3 and FRINGE algorithms. These experiments show that, contrary to expectations, these algorithms do not implement good approximations of MIN-FEATURES. The coverage, sample complexity, and generalization performance of FOCUS are substantially better than those of either ID3 or FRINGE on learning problems where the MIN-FEATURES bias is appropriate. This suggests that, in practical applications, training data should be preprocessed to remove irrelevant features before being given to ID3 or FRINGE.
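To make the MIN-FEATURES bias concrete: a hypothesis consistent with the data exists over a feature subset exactly when no two training examples agree on every feature in the subset yet carry different labels. The following is a minimal brute-force sketch of that idea (not the paper's FOCUS implementation; the function name and example data are illustrative), trying subsets in order of increasing size and returning the first sufficient one:

```python
from itertools import combinations

def min_features(examples):
    """Return the smallest feature subset sufficient to separate all
    differently-labeled examples (the MIN-FEATURES bias).

    examples: list of (feature_tuple, label) pairs.
    Brute force: O(2^n) subsets in the worst case; illustrative only.
    """
    n = len(examples[0][0])
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            # The subset suffices if no two examples share the same
            # projection onto it but disagree on the label.
            seen = {}
            consistent = True
            for x, y in examples:
                key = tuple(x[i] for i in subset)
                if key in seen and seen[key] != y:
                    consistent = False
                    break
                seen[key] = y
            if consistent:
                return subset
    return tuple(range(n))

# Label depends only on features 0 and 2 (XOR); feature 1 is irrelevant.
data = [((0, 0, 0), 0), ((0, 1, 0), 0), ((1, 0, 0), 1),
        ((0, 0, 1), 1), ((1, 1, 1), 0)]
print(min_features(data))  # → (0, 2)
```

The exhaustive inner check is what makes the bias expensive in general; the paper's contribution is the sample-complexity analysis and a quasi-polynomial-time algorithm, not this naive search.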