Computer science
Computational fluid dynamics
Computational science
Parallel computing
Mechanical engineering
Physics
Acoustics
Authors
Zhen Dong, Zhen Lu, Yang Yue
Identifier
DOI:10.1016/j.taml.2025.100594
Abstract
Configuring computational fluid dynamics (CFD) simulations typically demands extensive domain expertise, limiting broader access. Although large language models (LLMs) have advanced scientific computing, their use in automating CFD workflows is underdeveloped. We introduce a novel approach centered on domain-specific LLM adaptation. We fine-tune Qwen2.5-7B-Instruct on NL2FOAM, our custom dataset of 28,716 natural-language-to-OpenFOAM configuration pairs with chain-of-thought (CoT) annotations, enabling direct translation from natural language descriptions to executable CFD setups. A multi-agent system orchestrates the process, autonomously verifying inputs, generating configurations, running simulations, and correcting errors. Evaluation on a benchmark of 21 diverse flow cases demonstrates state-of-the-art performance, achieving 88.7% solution accuracy and an 82.6% first-attempt success rate. This significantly outperforms larger general-purpose models such as Qwen2.5-72B-Instruct, DeepSeek-R1, and Llama3.3-70B-Instruct, while also requiring fewer correction iterations and maintaining high computational efficiency. The results highlight the critical role of domain-specific adaptation in deploying LLM assistants for complex engineering workflows. Our code and fine-tuned model have been deposited at https://github.com/YYgroup/AutoCFD.
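As an illustration only, the verify–generate–simulate–correct loop described in the abstract can be sketched as a simple orchestration routine. This is not the authors' implementation (their code is in the repository linked above); every function, parameter, and agent name here is hypothetical:

```python
# Hypothetical sketch of the multi-agent CFD workflow outlined in the
# abstract: verify the input description, generate an OpenFOAM-style
# configuration with a fine-tuned LLM, run the simulation, and retry
# with the solver's error output fed back as correction feedback.
# All names are illustrative stand-ins, not the authors' API.

def run_cfd_workflow(description, verify_input, generate_config,
                     run_simulation, max_corrections=3):
    """Return (config, result, attempts); raise if all attempts fail."""
    if not verify_input(description):
        raise ValueError("input description failed verification")
    feedback = None
    for attempt in range(1, max_corrections + 2):
        config = generate_config(description, feedback)  # LLM call
        ok, result = run_simulation(config)              # solver run
        if ok:
            return config, result, attempt
        feedback = result  # error log drives the correction step
    raise RuntimeError("simulation failed after all correction attempts")


def demo():
    """Toy stand-ins exercising the control flow (no real solver)."""
    def gen(desc, fb):
        # First attempt produces a bad config; feedback fixes it.
        return "fixed" if fb else "broken"

    def sim(cfg):
        return (True, "converged") if cfg == "fixed" \
            else (False, "divergence error")

    return run_cfd_workflow("lid-driven cavity flow",
                            lambda d: True, gen, sim)
```

Calling `demo()` returns after two attempts: the first simulation fails, its error message is fed back, and the corrected configuration succeeds.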