Autoencoder
Computer science
Deep learning
Artificial neural network
Artificial intelligence
Algorithm
Nonlinear system
Subspace topology
Flow (mathematics)
Mathematics
Geometry
Quantum mechanics
Physics
Authors
Rui Fu,Dunhui Xiao,I. M. Navon,F. Fang,Liang Yang,Chengyuan Wang,Sibo Cheng
Abstract
This paper presents a new nonlinear non-intrusive reduced-order model (NL-NIROM) that outperforms the traditional proper orthogonal decomposition (POD)-based reduced-order model (ROM). This improvement is achieved through the use of auto-encoder (AE) and self-attention based deep learning methods. The novelty of this work is that it uses a stacked auto-encoder (SAE) network to project the original high-dimensional dynamical systems onto a low-dimensional nonlinear subspace and predicts fluid dynamics using a self-attention based deep learning method. This paper introduces a new model reduction neural network architecture for fluid flow problems, as well as a linear non-intrusive reduced-order model (L-NIROM) based on POD and the self-attention mechanism. In the NL-NIROM, the SAE network compresses high-dimensional physical information into several much smaller representations in a reduced latent space. These representations are expressed by a number of codes in the middle layer of the SAE network. Those codes at different time levels are then used to train a set of hyper-surfaces via self-attention based deep learning methods. The inputs of the self-attention based network are the codes at previous time levels, and its outputs are the codes at the current time level. The codes at the current time level are then projected back to the original full space by the decoder layers of the SAE network. The capability of the new model, NL-NIROM, is demonstrated through two test cases: flow past a cylinder, and a lock exchange. The results show that the NL-NIROM is more accurate than the popular POD-based L-NIROM.
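The pipeline the abstract describes (encode snapshots to latent codes, predict the next code with self-attention, decode back to full space) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer sizes, the single-head attention without learned projections, and the random (untrained) weights are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a full-space snapshot of 1000 grid values
# compressed to an 8-dimensional latent code (both sizes are assumptions).
n_full, n_hidden, n_code = 1000, 64, 8

# Stacked auto-encoder weights (untrained, random; for illustration only).
W1 = rng.normal(0, 0.05, (n_hidden, n_full))
W2 = rng.normal(0, 0.05, (n_code, n_hidden))
W3 = rng.normal(0, 0.05, (n_hidden, n_code))
W4 = rng.normal(0, 0.05, (n_full, n_hidden))

def encode(x):
    # Project a high-dimensional snapshot onto the nonlinear latent subspace.
    return np.tanh(W2 @ np.tanh(W1 @ x))

def decode(z):
    # Map a latent code back to the original full space.
    return W4 @ np.tanh(W3 @ z)

def predict_next_code(codes):
    # Scaled dot-product self-attention over a window of past codes
    # (single head, no learned Q/K/V projections -- a simplification).
    q = k = v = codes                                  # (T, n_code)
    scores = q @ k.T / np.sqrt(codes.shape[1])         # (T, T)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax
    return (weights @ v)[-1]                           # code at the next time level

# One prediction step: encode past snapshots, attend, decode the new code.
snapshots = rng.normal(size=(5, n_full))               # five previous time levels
codes = np.stack([encode(x) for x in snapshots])       # (5, n_code) latent history
next_code = predict_next_code(codes)                   # predicted code at t+1
next_snapshot = decode(next_code)                      # projected back to full space
```

In practice the encoder/decoder and attention weights would be trained on snapshot data from the full-order solver; the sketch only shows the data flow between the three stages.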