Interpretability
Computer science
Artificial intelligence
Deep learning
Artificial neural network
Graph
Machine learning
Predictive modeling
Black box
Process (computing)
Theoretical computer science
Operating system
Authors
Yuan Gao, Shohei Miyata, Yasunori Akashi
Source
Journal: Applied Energy
[Elsevier BV]
Date: 2022-05-31
Volume/Issue: 321: 119288-119288
Citations: 55
Identifier
DOI: 10.1016/j.apenergy.2022.119288
Abstract
With the rapid development of high-performance computing, data-driven models, especially deep learning models, are increasingly used for solar radiation prediction. However, their black-box character leaves the prediction results without interpretability, which limits their use in downstream optimization scenarios such as model predictive control: operation managers may not fully trust a model that cannot explain its results. In this study, we propose prediction models based on recurrent neural networks and seek to improve their interpretability, and thereby the credibility of their results, through the design of the model structure. The interpretability of temporal and spatial dependencies in the prediction process is addressed by an attention mechanism and a graph neural network, respectively. Our results show that the attention-based deep learning model can effectively shift its attention to adapt to different prediction target hours, and that the graph neural network expresses, through a self-learned graph structure, the variables in the dataset most relevant to solar radiation: month, hour, temperature, penetrating rainfall, water vapor pressure, and radiation time.
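As a rough illustration of the two interpretability mechanisms the abstract describes, the PyTorch sketch below pairs a GRU encoder with additive temporal attention (whose per-step weights can be inspected for each forecast) and a self-learned adjacency matrix over the input variables. The class name, layer sizes, and the variable-mixing step are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveGraphRNN(nn.Module):
    """Hypothetical sketch: GRU encoder + temporal attention, plus a
    self-learned graph over input variables (not the paper's exact code)."""
    def __init__(self, n_vars, hidden=64, emb=16):
        super().__init__()
        # Node embeddings whose inner products define a learned graph over
        # the input variables (month, hour, temperature, ...).
        self.node_emb = nn.Parameter(torch.randn(n_vars, emb))
        self.gru = nn.GRU(n_vars, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # scores each encoder time step
        self.head = nn.Linear(hidden, 1)   # solar radiation estimate

    def learned_adjacency(self):
        # Row-softmax of embedding similarities: large entries mark
        # variables the model treats as strongly related.
        return F.softmax(self.node_emb @ self.node_emb.t(), dim=1)

    def forward(self, x):                   # x: (batch, time, n_vars)
        # Mix variables through the learned graph before the recurrent encoder.
        x = x @ self.learned_adjacency().t()
        h, _ = self.gru(x)                  # (batch, time, hidden)
        scores = self.attn(h).squeeze(-1)   # (batch, time)
        weights = F.softmax(scores, dim=1)  # temporal attention weights
        context = (weights.unsqueeze(-1) * h).sum(dim=1)
        return self.head(context), weights  # prediction + inspectable weights

# Usage: inspect which past hours and which variables the model relies on.
model = AttentiveGraphRNN(n_vars=8)
x = torch.randn(4, 24, 8)                   # 4 samples, 24 hourly steps, 8 variables
pred, attn_weights = model(x)
print(pred.shape, attn_weights.shape)       # torch.Size([4, 1]) torch.Size([4, 24])
print(model.learned_adjacency()[0])         # learned relations for variable 0
```

Under this reading, interpretability comes from two inspectable artifacts rather than post-hoc explanation: the attention weights show which past hours drive a given forecast, and the learned adjacency shows which input variables the model has coupled to solar radiation.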