Uncertainty quantification
Computer science
Frequentist inference
Uncertainty analysis
Sensitivity analysis
Machine learning
Artificial intelligence
Measurement uncertainty
Bayesian probability
Probabilistic logic
Monte Carlo method
Propagation of uncertainty
Bayesian inference
Data mining
Algorithm
Mathematics
Statistics
Simulation
Authors
Sai Munikoti,Deepesh Agarwal,Laya Das,Balasubramaniam Natarajan
Identifier
DOI:10.1016/j.neucom.2022.11.049
Abstract
Graph Neural Networks (GNNs) provide a powerful framework that elegantly integrates graph theory with machine learning for modeling and analysis of networked data. We consider the problem of quantifying the uncertainty in GNN predictions stemming from modeling errors and measurement uncertainty. We consider aleatoric uncertainty in the form of probabilistic links and noise in the feature vectors of nodes, while epistemic uncertainty is incorporated via a probability distribution over the model parameters. We propose a unified approach to treat both sources of uncertainty in a Bayesian framework, where Assumed Density Filtering is used to quantify aleatoric uncertainty and Monte Carlo dropout captures uncertainty in the model parameters. Finally, the two sources of uncertainty are aggregated to estimate the total uncertainty in the predictions of a GNN. Results on real-world datasets demonstrate that the Bayesian model performs on par with a frequentist model while providing additional prediction uncertainty estimates that are sensitive to uncertainties in the data and model.
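The Monte Carlo dropout component of the approach described above can be sketched in a few lines: keep dropout active at inference time, run several stochastic forward passes, and read the spread of the outputs as epistemic uncertainty, then add an aleatoric term to obtain a total-variance estimate. The tiny one-layer "GNN-like" model, the dropout rate, and the placeholder aleatoric variance below are all illustrative assumptions, not the paper's actual architecture or its Assumed Density Filtering computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny graph: 3 nodes, 4-dimensional features, a single
# linear message-passing layer. Names and sizes are illustrative only.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])           # adjacency matrix
X = rng.normal(size=(3, 4))            # node feature vectors
W = rng.normal(size=(4, 1))            # model weights

def forward(X, W, drop_p=0.5, training=True):
    """One aggregation step with (inverted) dropout on the output."""
    H = A @ X @ W                      # aggregate neighbor features
    if training:                       # MC dropout: stays ON at test time
        mask = rng.random(H.shape) > drop_p
        H = H * mask / (1.0 - drop_p)  # rescale to keep the mean unchanged
    return H

# T stochastic forward passes approximate sampling from the posterior
# predictive distribution induced by dropout.
T = 200
samples = np.stack([forward(X, W) for _ in range(T)])  # shape (T, 3, 1)

mean = samples.mean(axis=0)            # predictive mean per node
epistemic_var = samples.var(axis=0)    # spread across dropout masks

# In the paper the aleatoric term comes from Assumed Density Filtering
# over probabilistic links and noisy features; a constant placeholder
# per-node value stands in for it here.
aleatoric_var = np.full_like(epistemic_var, 0.1)

# Total predictive uncertainty: sum of the two variance contributions.
total_var = epistemic_var + aleatoric_var
```

The key design choice is that dropout is not disabled at prediction time; averaging over dropout masks plays the role of integrating over an approximate posterior on the weights.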