Encoder
Computer science
Alloy
Mean squared error
Artificial intelligence
Set (abstract data type)
Yield (engineering)
Transformer
Property (philosophy)
Principal component analysis
Machine learning
Natural language processing
Materials science
Mathematics
Statistics
Physics
Philosophy
Epistemology
Metallurgy
Quantum mechanics
Voltage
Composite material
Programming language
Operating system
Authors
Akshat Chaudhari, Chakradhar Guntuboina, Hongshuo Huang, Amir Barati Farimani
Identifier
DOI: 10.1016/j.commatsci.2024.113256
Abstract
The pursuit of novel alloys tailored to specific requirements poses significant challenges for researchers in the field. This underscores the importance of developing predictive techniques for essential physical properties of alloys based on their chemical composition and processing parameters. This study introduces AlloyBERT, a transformer encoder-based model designed to predict properties such as elastic modulus and yield strength of alloys from textual inputs. Built on pre-trained RoBERTa and BERT encoders, AlloyBERT employs self-attention mechanisms to establish meaningful relationships between words, enabling it to interpret human-readable input and predict target alloy properties. By combining a tokenizer trained on our textual data with a RoBERTa encoder pre-trained and fine-tuned for this specific task, we achieved a mean squared error (MSE) of 0.00015 on the Multi Principal Elemental Alloys (MPEA) dataset, and 0.00527 on the Refractory Alloy Yield Strength (RAYS) dataset using a BERT encoder. This surpasses the performance of shallow models, whose best-case MSEs were 0.02376 and 0.01459 on the MPEA and RAYS datasets, respectively. Our results highlight the potential of language models in materials science and establish a foundational framework for text-based prediction of alloy properties that does not rely on complex underlying representations, calculations, or simulations.
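The workflow the abstract describes (a human-readable alloy description in, a scalar property out, trained against an MSE objective) can be sketched with the Hugging Face transformers library. This is a minimal illustration under stated assumptions, not the authors' implementation: it uses the stock roberta-base checkpoint and its standard tokenizer (the paper trains a custom tokenizer on its own text, which is omitted here), and the example alloy description and target value are made up.

    # Minimal sketch of an AlloyBERT-style regression setup (not the authors' code).
    # Assumptions: the stock "roberta-base" checkpoint/tokenizer stand in for the
    # paper's custom-tokenizer pipeline; the input text and target are invented.
    import torch
    from transformers import RobertaTokenizerFast, RobertaForSequenceClassification

    tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
    # num_labels=1 with problem_type="regression" gives a single-output head
    # whose built-in training loss is mean squared error.
    model = RobertaForSequenceClassification.from_pretrained(
        "roberta-base", num_labels=1, problem_type="regression"
    )

    # Hypothetical textual input: composition plus processing parameters.
    text = "Alloy: Nb20 Mo20 Ta20 W20 V20; processing: arc-melted, annealed at 1400 C."
    target = torch.tensor([[0.42]])  # made-up normalized yield strength

    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    outputs = model(**inputs, labels=target)
    print("MSE loss:", outputs.loss.item())      # loss to backpropagate when fine-tuning
    print("prediction:", outputs.logits.item())  # predicted property value

Because problem_type="regression" makes the model's built-in loss the MSE that the abstract reports, fine-tuning reduces to an ordinary regression loop over the textual alloy descriptions, with no hand-crafted material representations, calculations, or simulations in the input pipeline.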