Interpretability
Computer science
Exploit
Domain knowledge
Deep learning
Artificial intelligence
Machine learning
Recurrent neural network
Domain (mathematical analysis)
Health records
Medical record
Graph
Artificial neural network
Data mining
Health care
Medicine
Theoretical computer science
Mathematical analysis
Computer security
Mathematics
Economics
Radiology
Economic growth
Authors
Changchang Yin, Rongjian Zhao, Buyue Qian, Xin Lv, Ping Zhang
Identifier
DOI: 10.1109/icdm.2019.00084
Abstract
Due to their promising performance in clinical risk prediction with Electronic Health Records (EHRs), deep learning methods have attracted significant interest from healthcare researchers. However, four challenges remain: (i) Data insufficiency: many methods require large amounts of training data to achieve satisfactory results. (ii) Interpretability: results from many methods are hard to explain to clinicians (e.g., why the models make particular predictions and which events cause clinical outcomes). (iii) Domain knowledge integration: no existing method dynamically exploits complicated medical knowledge (e.g., relations such as cause and is-caused-by between clinical events). (iv) Time interval information: most existing methods only consider the relative order of visits in EHRs but ignore the irregular time intervals between neighboring visits. In this study, we propose a new model, Domain Knowledge Guided Recurrent Neural Networks (DG-RNN), which directly introduces domain knowledge from a medical knowledge graph into an RNN architecture and takes the irregular time intervals into account. Experimental results on heart failure risk prediction tasks show that our model not only outperforms state-of-the-art deep learning-based risk prediction models, but also associates individual medical events with heart failure onset, thus paving the way for interpretable and accurate clinical risk prediction.
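The abstract highlights two architectural ideas: discounting the recurrent memory according to the irregular time gap between neighboring visits, and injecting knowledge-graph context for each clinical event before the recurrent update. The sketch below is a minimal, hypothetical illustration of those two ideas in PyTorch; it is not the authors' DG-RNN implementation, and every name in it (TimeAwareKGCell, time_decay, neighbor_embs, and so on) is an assumption introduced only for illustration.

```python
# Hypothetical sketch of a time-aware, knowledge-graph-guided recurrent step.
# Not the paper's DG-RNN; names and the decay-then-attend ordering are assumptions.

import torch
import torch.nn as nn


class TimeAwareKGCell(nn.Module):
    def __init__(self, event_dim: int, hidden_dim: int):
        super().__init__()
        self.rnn_cell = nn.GRUCell(event_dim, hidden_dim)
        # Maps the elapsed time since the previous visit to a decay factor in (0, 1].
        self.time_decay = nn.Sequential(nn.Linear(1, hidden_dim), nn.Sigmoid())
        # Scores knowledge-graph neighbor embeddings against the current hidden state.
        self.attn = nn.Linear(hidden_dim + event_dim, 1)

    def forward(self, event_emb, neighbor_embs, delta_t, h):
        # (a) Irregular time intervals: the larger the gap, the more the memory decays.
        decay = self.time_decay(delta_t.unsqueeze(-1))                  # (batch, hidden)
        h = h * decay
        # (b) Domain knowledge: attend over KG neighbors of the current clinical event.
        query = h.unsqueeze(1).expand(-1, neighbor_embs.size(1), -1)
        scores = self.attn(torch.cat([query, neighbor_embs], dim=-1))   # (batch, k, 1)
        weights = torch.softmax(scores, dim=1)
        kg_context = (weights * neighbor_embs).sum(dim=1)               # (batch, event)
        # Fuse the raw event with its knowledge-graph context, then update the state.
        x = event_emb + kg_context
        return self.rnn_cell(x, h)


if __name__ == "__main__":
    cell = TimeAwareKGCell(event_dim=32, hidden_dim=64)
    h = torch.zeros(8, 64)                              # 8 synthetic patients
    for _ in range(5):                                  # 5 visits per patient
        event = torch.randn(8, 32)                      # current clinical event embedding
        neighbors = torch.randn(8, 4, 32)               # 4 KG neighbors per event
        delta_t = torch.rand(8) * 90                    # days since the previous visit
        h = cell(event, neighbors, delta_t, h)
    risk = torch.sigmoid(nn.Linear(64, 1)(h))           # e.g., heart-failure risk score
    print(risk.shape)                                   # torch.Size([8, 1])
```

The decay-then-attend ordering shown here is one plausible design choice; the paper's actual mechanism for combining the medical knowledge graph with the RNN and time intervals may differ.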