Density functional theory
Reduction
Orbital-free density functional theory
Artificial neural network
Energy minimization
Function (biology)
Time-dependent density functional theory
Energy functional
Computer science
Energy (signal processing)
Functional theory
Statistical physics
Physics
Quantum mechanics
Artificial intelligence
Biology
Evolutionary biology
Programming language
Authors
Yang Li, Zechen Tang, Zezhou Chen, Minghui Sun, Boheng Zhao, He Li, Honggeng Tao, Zilong Yuan, Wenhui Duan, Yong Xu
Identifier
DOI:10.1103/physrevlett.133.076401
Abstract
Deep-learning density functional theory (DFT) shows great promise to significantly accelerate material discovery and potentially revolutionize materials research. However, current research in this field primarily relies on data-driven supervised learning, leaving the development of neural networks and of DFT isolated from each other. In this work, we present a theoretical framework of neural-network DFT, which unifies the optimization of neural networks with the variational computation of DFT, enabling physics-informed unsupervised learning. Moreover, we develop a differentiable DFT code incorporating a deep-learning DFT Hamiltonian, and introduce automatic differentiation and backpropagation algorithms into DFT, demonstrating the capability of neural-network DFT. The physics-informed neural-network architecture not only surpasses conventional approaches in accuracy and efficiency, but also offers a new paradigm for developing deep-learning DFT methods.
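The abstract describes training a Hamiltonian-predicting network by minimizing a variational DFT energy directly, with gradients obtained by automatic differentiation through the solver, rather than by fitting labeled reference data. The sketch below only illustrates that idea in a toy setting; it is not the authors' code, and every name in it (hamiltonian_net, band_energy, train_step, the feature vector, and the use of a band-energy sum in place of the full Kohn-Sham energy functional) is an assumption made for illustration.

```python
# Minimal sketch of physics-informed, unsupervised "neural-network DFT":
# a network predicts a Hamiltonian matrix and the training loss is a
# variational energy evaluated from that matrix, so backpropagation
# through the eigensolver supplies the gradients -- no labeled data.
# Everything here is a hypothetical toy, not the paper's implementation.
import jax
import jax.numpy as jnp


def hamiltonian_net(params, features):
    """Toy MLP mapping structure features to a real-symmetric Hamiltonian."""
    hidden = jnp.tanh(features @ params["w1"] + params["b1"])
    flat = hidden @ params["w2"] + params["b2"]
    n = int(round(flat.shape[-1] ** 0.5))
    H = flat.reshape(n, n)
    return 0.5 * (H + H.T)  # symmetrize so eigenvalues are real


def band_energy(params, features, n_occ):
    """Variational objective: sum of the lowest n_occ eigenvalues of H(theta).
    A real DFT energy functional would also include Hartree, exchange-
    correlation, and ion-ion terms; this sum merely stands in for it."""
    eps = jnp.linalg.eigvalsh(hamiltonian_net(params, features))
    return jnp.sum(eps[:n_occ])


# Automatic differentiation gives dE/dtheta through the eigensolver,
# unifying network optimization with variational energy minimization.
energy_and_grad = jax.value_and_grad(band_energy)


def train_step(params, features, n_occ, lr=1e-2):
    """One plain gradient-descent update on the energy objective."""
    energy, grads = energy_and_grad(params, features, n_occ)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params, energy
```

With parameters initialized for, say, an 8-dimensional feature vector and a 4x4 Hamiltonian, repeatedly calling train_step drives the predicted spectrum toward the energy minimum. The framework reported in the paper couples a deep-learning DFT Hamiltonian to a full differentiable DFT energy functional rather than the simplified band-energy objective used here.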