Computer Science
Natural Language Processing
Contrastive Analysis
Linguistics
Artificial Intelligence
Psychology
Philosophy
Authors
Chuanbin Liu, Xiaowu Zhang, Hongfei Zhao, Zhijie Liu, Xi Xi, Lean Yu
Identifiers
DOI:10.1109/tcyb.2025.3550203
Abstract
The acceptance of academic papers involves a complex peer-review process that requires substantial human and material resources and is susceptible to biases. With advancements in deep learning technologies, researchers have explored automated approaches for assessing paper acceptance. Existing automated academic paper rating methods primarily rely on the full content of papers to estimate acceptance probabilities. However, these methods are often inefficient and introduce redundant or irrelevant information. Additionally, while BERT can capture general semantic representations through pretraining on large-scale corpora, its performance on the automatic academic paper rating (AAPR) task remains suboptimal due to discrepancies between its pretraining corpus and academic texts. To address these issues, this study proposes LMCBert, a model that integrates large language models (LLMs) with momentum contrastive learning (MoCo). LMCBert utilizes LLMs to extract the core semantic content of papers, reducing redundancy and improving the understanding of academic texts. Furthermore, it incorporates MoCo to optimize BERT training, enhancing the differentiation of semantic representations and improving the accuracy of paper acceptance predictions. Empirical evaluations demonstrate that LMCBert achieves effective performance on the evaluation dataset, supporting the validity of the proposed approach. The code and data used in this article are publicly available at https://github.com/iioSnail/LMCBert.
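The abstract describes optimizing BERT with MoCo-style momentum contrastive learning. For readers unfamiliar with that setup, the sketch below illustrates the general MoCo mechanism (query encoder, momentum-updated key encoder, negative queue, InfoNCE loss) applied to paper-level text embeddings. It is not the authors' LMCBert implementation (that is available at the GitHub link above); the `Encoder` stand-in, hyperparameters, and random token inputs are hypothetical choices made only to keep the example self-contained.

```python
# Minimal, hypothetical MoCo sketch for sentence/paper embeddings (PyTorch).
# The Encoder below is a stand-in; in practice it would be a BERT encoder
# producing a pooled representation of the paper's core content.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in for a BERT encoder that returns an L2-normalized embedding."""
    def __init__(self, vocab_size=30522, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids):
        h = self.emb(token_ids).mean(dim=1)        # mean-pool over tokens
        return F.normalize(self.proj(h), dim=-1)

class MoCo(nn.Module):
    def __init__(self, dim=128, queue_size=4096, momentum=0.999, temperature=0.07):
        super().__init__()
        self.m, self.t = momentum, temperature
        self.encoder_q = Encoder(dim=dim)          # query encoder (trained by backprop)
        self.encoder_k = Encoder(dim=dim)          # key encoder (momentum updates only)
        self.encoder_k.load_state_dict(self.encoder_q.state_dict())
        for p in self.encoder_k.parameters():
            p.requires_grad = False
        self.register_buffer("queue", F.normalize(torch.randn(queue_size, dim), dim=-1))
        self.register_buffer("ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _momentum_update(self):
        for pq, pk in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):
            pk.data.mul_(self.m).add_(pq.data, alpha=1.0 - self.m)

    @torch.no_grad()
    def _enqueue(self, keys):
        bsz, ptr = keys.shape[0], int(self.ptr)
        self.queue[ptr:ptr + bsz] = keys           # assumes queue_size % bsz == 0
        self.ptr[0] = (ptr + bsz) % self.queue.shape[0]

    def forward(self, ids_q, ids_k):
        q = self.encoder_q(ids_q)                  # one "view" of each paper
        with torch.no_grad():
            self._momentum_update()
            k = self.encoder_k(ids_k)              # second view of the same paper
        l_pos = (q * k).sum(dim=-1, keepdim=True)            # positive logits
        l_neg = q @ self.queue.clone().detach().t()           # negatives from the queue
        logits = torch.cat([l_pos, l_neg], dim=1) / self.t
        labels = torch.zeros(q.shape[0], dtype=torch.long)    # positive is index 0
        self._enqueue(k)
        return F.cross_entropy(logits, labels)     # InfoNCE loss

# Toy training step on random token ids (illustration only).
model = MoCo()
opt = torch.optim.AdamW(model.encoder_q.parameters(), lr=1e-4)
ids_q = torch.randint(0, 30522, (8, 64))
ids_k = torch.randint(0, 30522, (8, 64))
loss = model(ids_q, ids_k)
loss.backward()
opt.step()
```

The design point this illustrates is that only the query encoder is updated by gradients, while the key encoder is a slowly moving average, which keeps the large queue of negative keys consistent and sharpens the separation between semantic representations, the property the abstract credits for improved acceptance prediction.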