Interpretability
Machine learning
Artificial intelligence
Computer science
Logistic regression
Variable (mathematics)
Tree (set theory)
Possibility
Multilayer perceptron
Decision tree
Data mining
Artificial neural network
Mathematics
Mathematical analysis
Abstract
The interpretability of machine learning models, even those with excellent prediction performance, remains a challenge in practical applications. This study investigates model interpretability and variable importance for well-performing supervised machine learning models. Building on the commonly accepted concept of the odds ratio (OR), we propose a novel and computationally efficient Variable Importance evaluation framework based on the Personalized Odds Ratio (VIPOR). It is a model-agnostic interpretation method that can be used to evaluate variable importance both locally and globally. Locally, variable importance is quantified by the personalized odds ratio (POR), which can account for subject heterogeneity in machine learning. Globally, we utilize a hierarchical tree to classify the predictors into five groups: completely positive, completely negative, positive dominated, negative dominated, and neutral. The relative importance of predictors within each group is ranked based on different statistics of PORs across subjects, depending on the application purpose. For illustration, we apply the proposed VIPOR method to interpret a multilayer perceptron (MLP) model that predicts the mortality of subarachnoid hemorrhage (SAH) patients using real-world electronic health records (EHR) data. We compare the important variables derived from the MLP with those from other machine learning models, including tree-based models and the L1-regularized logistic regression model. The top-ranked variables are consistently identified by VIPOR across the different prediction models. Comparisons with existing interpretation methods are also conducted and discussed based on publicly available data sets.
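The abstract's two-level scheme (a per-subject POR locally, then a five-group classification of each predictor's PORs globally) can be sketched in code. This is a minimal illustrative sketch, not the paper's exact method: the perturbation size `delta`, the function names, and the grouping thresholds are all assumptions introduced here for illustration.

```python
import numpy as np

def personalized_odds_ratio(predict_proba, X, j, delta=1.0):
    """Illustrative per-subject odds ratio for predictor j.

    For each subject, compare the model's predicted odds before and
    after increasing predictor j by `delta`. The exact POR definition
    in VIPOR may differ; this is an assumed form for demonstration.
    """
    X = np.asarray(X, dtype=float)
    p0 = predict_proba(X)            # baseline predicted risk per subject
    X_shift = X.copy()
    X_shift[:, j] += delta           # perturb predictor j for all subjects
    p1 = predict_proba(X_shift)
    odds0 = p0 / (1.0 - p0)
    odds1 = p1 / (1.0 - p1)
    return odds1 / odds0             # one POR per subject

def group_predictor(por, eps=1e-8):
    """Assign a predictor to one of the five groups named in the
    abstract, using hypothetical threshold rules on its PORs."""
    frac_pos = np.mean(por > 1.0 + eps)  # share of subjects with POR > 1
    frac_neg = np.mean(por < 1.0 - eps)  # share of subjects with POR < 1
    if frac_pos == 1.0:
        return "completely positive"
    if frac_neg == 1.0:
        return "completely negative"
    if frac_pos > frac_neg:
        return "positive dominated"
    if frac_neg > frac_pos:
        return "negative dominated"
    return "neutral"
```

For a plain logistic model, the sketch recovers the classical interpretation: shifting predictor $j$ by `delta` multiplies the odds by $e^{w_j \cdot \delta}$ for every subject, so the POR is constant across subjects and the predictor falls in a "completely" group; heterogeneous PORs arise only for nonlinear models such as the MLP discussed in the abstract.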