Predicting Protein-DNA Binding Sites by Fine-Tuning BERT

Authors
Yue Zhang, Yuehui Chen, Baitong Chen, Yi Cao, Jiazi Chen, Hanhan Cong
Source
Journal: Lecture Notes in Computer Science, pp. 663-669
Identifier
DOI: 10.1007/978-3-031-13829-4_57
Abstract

The study of protein-DNA binding sites is one of the fundamental problems in genome biology research. It plays an important role in understanding gene expression and transcription, in biological research, and in drug development. In recent years, language representation models have achieved remarkable results in the field of Natural Language Processing (NLP) and have received extensive attention from researchers. Bidirectional Encoder Representations from Transformers (BERT) has been shown to deliver state-of-the-art results in other domains, using the concept of word embedding to capture the semantics of sentences. On small datasets, previous models often cannot capture the upstream and downstream global information of DNA sequences well, so it is reasonable to apply the BERT model to the training of DNA sequences. Models pre-trained on large datasets and then fine-tuned on specific datasets achieve excellent results on different downstream tasks. In this study, we first treat DNA sequences as sentences and tokenize them with the k-mer method, then use BERT to map the tokenized sentences to fixed-length matrices for feature extraction, and finally perform classification. We compare this method with current state-of-the-art models; the DNABERT method performs better, with average improvements of 0.013537, 0.010866, 0.029813, 0.052611, and 0.122131 in ACC, F1-score, MCC, Precision, and Recall, respectively. Overall, one advantage of BERT is that its pre-training strategy speeds up network convergence in transfer learning and improves the learning ability of the network. The DNABERT model generalizes well to other DNA datasets and can be applied to other sequence classification tasks.

Keywords: Protein-DNA binding sites; Transcription factor; Traditional machine learning; Deep learning; Transformers; BERT
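The pipeline the abstract describes (k-mer tokenization, BERT feature extraction, a classification head fine-tuned end to end) can be sketched roughly as follows. This is a minimal sketch, not the authors' code: the Hugging Face checkpoint name zhihan1996/DNA_bert_6, the toy sequences, the labels, and the hyperparameters are assumptions made for illustration, and the exact configuration used in the paper may differ.

```python
# Minimal sketch of k-mer tokenization + BERT fine-tuning for binding-site
# classification, assuming the publicly released DNABERT 6-mer checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINT = "zhihan1996/DNA_bert_6"  # assumed pre-trained weights, not from the paper

def kmer_tokenize(sequence: str, k: int = 6) -> str:
    """Slide a width-k window over the sequence with stride 1.

    "ATGGCT" with k=3 -> "ATG TGG GGC GCT"
    """
    sequence = sequence.upper()
    return " ".join(sequence[i:i + k] for i in range(len(sequence) - k + 1))

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
# num_labels=2: binding site vs. non-binding site; the classification head
# on top of BERT is newly initialized and learned during fine-tuning.
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)

# Toy batch: two short DNA fragments with binary labels (illustrative data,
# not taken from the paper's benchmark).
sequences = ["ATGGCTAAGCTTACGG", "GGCATTACGTTAGCCA"]
labels = torch.tensor([1, 0])

batch = tokenizer(
    [kmer_tokenize(s) for s in sequences],
    padding=True,
    truncation=True,
    max_length=512,
    return_tensors="pt",
)

# One fine-tuning step: forward pass, cross-entropy loss on the pooled [CLS]
# representation, backward pass, parameter update.
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
```

Note the design choice in kmer_tokenize: because the k-mers overlap with stride 1, each nucleotide position appears in up to k tokens, which is what lets BERT's self-attention pick up the upstream and downstream context around a candidate binding site that the abstract refers to.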