Topics: Softmax function, Artificial neural network, Combustion, Computer science, Network architecture, Fraction (chemistry), Biological system, Chemistry, Artificial intelligence, Physical chemistry, Computer security, Biology, Organic chemistry
Authors
Ahmed Almeldein, Noah Van Dam
Source
Journal: Journal of Engineering for Gas Turbines and Power
[ASME International]
Date: 2023-07-27
Volume/Issue: 145 (9)
Abstract
Detailed chemical kinetics calculations can be very computationally expensive, so various approaches have been used to speed up combustion calculations. Deep neural networks (DNNs) are one promising approach that has seen significant development recently. Standard DNNs, however, do not necessarily follow physical constraints such as conservation of mass. Physics-informed neural networks (PINNs) are a class of neural networks that have physical laws embedded within the training process to create networks that follow those laws. A new PINN-based DNN approach to chemical kinetics modeling has been developed to ensure that mass fraction predictions adhere to the conservation of atomic species. The approach also utilizes a mixture-of-experts (MOE) architecture in which the data is distributed over multiple subnetworks followed by a softmax selective layer. The MOE architecture allows the different subnetworks to specialize in different thermochemical regimes, such as early-stage ignition reactions or post-flame equilibrium chemistry, and the softmax layer then smoothly transitions between the subnetwork predictions. This modeling approach was applied to the prediction of methane-air combustion using GRI-Mech 3.0 as the reference mechanism. The training database was composed of data from 0D ignition delay simulations under initial conditions of 0.2–50 bar pressure, 500–2000 K temperature, an equivalence ratio between 0 and 2, and an N2-dilution percentage of up to 50%. A wide variety of network sizes and architectures, between 3 and 20 subnetworks and 6,600 to 77,000 neurons, were tested. The resulting networks were able to predict 0D combustion simulations with accuracy and atomic mass conservation similar to standard kinetics solvers, while achieving a 10–50× speedup in online evaluation time on CPUs, and on average over 200× on a GPU.
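The mixture-of-experts structure with a softmax selective layer described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the experts here are plain linear maps (the paper's subnetworks are deep networks trained with atomic-conservation constraints), and all function and variable names are hypothetical.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax: subtract the max before exponentiating
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_predict(x, experts, gate_w):
    """Blend expert predictions with softmax gating weights.

    x        : input state vector, e.g. (temperature, pressure, species...)
    experts  : list of per-expert weight matrices (stand-ins for subnetworks)
    gate_w   : gating weight matrix mapping x to one logit per expert
    """
    outs = np.stack([W @ x for W in experts])   # (n_experts, n_out)
    gate = softmax(gate_w @ x)                  # (n_experts,) sums to 1
    # convex combination of expert outputs -> smooth transition between regimes
    return gate @ outs

# Example usage with random weights (illustrative only)
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
experts = [rng.standard_normal((5, 4)) for _ in range(3)]
gate_w = rng.standard_normal((3, 4))
y = moe_predict(x, experts, gate_w)
```

Because the gate outputs sum to one, the blended prediction interpolates smoothly between subnetworks as the thermochemical state moves from one regime (e.g. early ignition) to another (e.g. post-flame equilibrium).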