Interpretability
Black box
Machine learning
Artificial intelligence
Merge (version control)
Computer science
Information retrieval
Authors
Jacques A. Esterhuizen,Bryan R. Goldsmith,Suljo Linic
Source
Journal: Nature Catalysis
[Nature Portfolio]
Date: 2022-03-17
Volume/issue: 5 (3): 175-184
Citations: 238
Identifier
DOI:10.1038/s41929-022-00744-z
Abstract
Most applications of machine learning in heterogeneous catalysis thus far have used black-box models to predict computable physical properties (descriptors), such as adsorption or formation energies, that can be related to catalytic performance (that is, activity or stability). Extracting meaningful physical insights from these black-box models has proved challenging, as their internal logic is not readily interpretable due to their high degree of complexity. Interpretable machine learning methods that merge the predictive capacity of black-box models with the physical interpretability of physics-based models offer an alternative. In this Perspective, we discuss the various interpretable machine learning methods available to catalysis researchers, highlight the potential of interpretable machine learning to accelerate hypothesis formation and knowledge generation, and outline critical challenges and opportunities for interpretable machine learning in heterogeneous catalysis.

Most applications of machine learning in catalysis use black-box models to predict physical properties, but extracting meaningful physical insights from them is challenging. This Perspective discusses machine learning approaches for heterogeneous catalysis and classifies them in terms of their interpretability.