Topics: Computer science, Feature (linguistics), Machine learning, Artificial intelligence, Representation (politics), Context (archaeology), Fuzzy logic, Reliability (semiconductor), Feature learning, Power (physics), Philosophy, Linguistics, Paleontology, Physics, Quantum mechanics, Politics, Political science, Law, Biology
Authors
Divish Rengasamy,Jimiama Mafeni Mase,Aayush Kumar,Benjamin Rothwell,Mercedes Torres Torres,Morgan R. Alexander,David A. Winkler,Grazziela P. Figueredo
Source
Journal: Neurocomputing [Elsevier BV]
Date: 2022-10-01
Volume/Pages: 511, 163-174
Citations: 27
Identifiers
DOI: 10.1016/j.neucom.2022.09.053
Abstract
With the widespread use of machine learning to support decision-making, it is increasingly important to verify and understand the reasons why a particular output is produced. Although post-training feature importance approaches assist this interpretation, there is an overall lack of consensus regarding how feature importance should be quantified, making explanations of model predictions unreliable. In addition, many of these explanations depend on the specific machine learning approach employed and on the subset of data used when calculating feature importance. A possible solution to improve the reliability of explanations is to combine results from multiple feature importance quantifiers from different machine learning approaches, coupled with re-sampling. Current state-of-the-art ensemble feature importance fusion uses crisp techniques to fuse results from different approaches. There is, however, significant loss of information, as these approaches are not context-aware and reduce several quantifiers to a single crisp output. More importantly, their representation of “importance” as coefficients may be difficult for end-users and decision-makers to comprehend. Here we show how the use of fuzzy data fusion methods can overcome some of the important limitations of crisp fusion methods by making the importance of features easily understandable.
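As a rough illustration of the pipeline the abstract describes — several feature importance quantifiers from different machine learning approaches combined with re-sampling, then fused either crisply or fuzzily — the following is a minimal Python sketch. It is not the authors' published method: the choice of models, the use of permutation importance as the quantifier, and the triangular membership functions with "low"/"moderate"/"high" linguistic labels are all illustrative assumptions.

```python
# Minimal sketch of ensemble feature importance fusion with a fuzzy output.
# Assumptions (not from the paper): models, permutation importance as the
# quantifier, bootstrap re-sampling, and triangular membership functions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.RandomState(0)
X, y = make_regression(n_samples=300, n_features=5, n_informative=3, random_state=0)

def tri(x, a, b, c):
    # Triangular membership function peaking at b over normalised importance.
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

# Illustrative linguistic sets over [0, 1]; shoulders extend past the range
# so that importances of exactly 0 or 1 get full "low"/"high" membership.
LABELS = {"low": (-0.5, 0.0, 0.5), "moderate": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.5)}

# 1) Multiple ML approaches + re-sampling -> several importance quantifiers.
importances = []
for seed in range(3):  # bootstrap re-samples
    idx = rng.choice(len(X), size=len(X), replace=True)
    for model in (RandomForestRegressor(n_estimators=100, random_state=seed),
                  GradientBoostingRegressor(random_state=seed)):
        model.fit(X[idx], y[idx])
        pi = permutation_importance(model, X[idx], y[idx], n_repeats=5, random_state=seed)
        imp = np.clip(pi.importances_mean, 0.0, None)
        importances.append(imp / (imp.max() + 1e-12))  # normalise to [0, 1]
importances = np.asarray(importances)  # shape: (n_quantifiers, n_features)

# 2a) Crisp fusion: collapse all quantifiers to one coefficient per feature.
crisp = importances.mean(axis=0)

# 2b) Fuzzy fusion: average each quantifier's membership in each linguistic
#     set, then report the label with the highest fused membership.
for j in range(X.shape[1]):
    memberships = {lab: tri(importances[:, j], *abc).mean() for lab, abc in LABELS.items()}
    label = max(memberships, key=memberships.get)
    print(f"feature {j}: crisp={crisp[j]:.2f}  fuzzy={label!r} "
          f"(membership {memberships[label]:.2f})")
```

The contrast between the two fusion steps mirrors the abstract's argument: crisp fusion reduces every quantifier to a single coefficient per feature, while the fuzzy step retains a linguistic summary ("high importance" rather than "0.73") that is arguably easier for end-users and decision-makers to interpret.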