Computer science
Automatic summarization
Transfer learning
Artificial intelligence
Natural language processing
Transformer
Machine translation
Coding (set theory)
Language model
Task (project management)
Deep learning
Machine learning
Programming language
Physics
Management
Set (abstract data type)
Quantum mechanics
Voltage
Economics
Authors
Antonio Mastropaolo, Nathan Cooper, David N. Palacio, Simone Scalabrino, Denys Poshyvanyk, Rocco Oliveto, Gabriele Bavota
Identifier
DOI: 10.1109/tse.2022.3183297
Abstract
Deep learning (DL) techniques have been used to support several code-related tasks such as code summarization and bug-fixing. In particular, pre-trained transformer models are on the rise, also thanks to the excellent results they achieved in Natural Language Processing (NLP) tasks. The basic idea behind these models is to first pre-train them on a generic dataset using a self-supervised task (e.g., filling masked words in sentences). Then, these models are fine-tuned to support specific tasks of interest (e.g., language translation). A single model can be fine-tuned to support multiple tasks, possibly exploiting the benefits of transfer learning. This means that knowledge acquired to solve a specific task (e.g., language translation) can be useful to boost performance on another task (e.g., sentiment classification). While the benefits of transfer learning have been widely studied in NLP, limited empirical evidence is available when it comes to code-related tasks. In this paper, we assess the performance of the Text-To-Text Transfer Transformer (T5) model in supporting four different code-related tasks: (i) automatic bug-fixing, (ii) injection of code mutants, (iii) generation of assert statements, and (iv) code summarization. We pay particular attention to studying the role played by pre-training and multi-task fine-tuning on the model's performance. We show that (i) T5 can achieve better performance compared to state-of-the-art baselines; and (ii) while pre-training helps the model, not all tasks benefit from multi-task fine-tuning.
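The pre-train-then-fine-tune workflow summarized in the abstract can be illustrated with a minimal sketch. The snippet below uses the Hugging Face transformers library and the public t5-small checkpoint purely for illustration; the paper pre-trains its own T5 on code and builds its own task datasets, so the checkpoint name, the task prefix, and the example input/output strings are assumptions rather than the authors' actual setup.

```python
# Minimal sketch (not the authors' pipeline) of fine-tuning a T5-style model for one
# code-related task, e.g. code summarization. Checkpoint, prefix, and example strings
# are illustrative assumptions.
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# In a multi-task fine-tuning setting, each input is typically prefixed with a task
# tag so that a single model can serve several tasks (hypothetical prefix shown).
source = "summarize code: public int add(int a, int b) { return a + b; }"
target = "Returns the sum of two integers."

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

# One supervised fine-tuning step: the model learns to map the (prefixed) source
# sequence to the target sequence.
outputs = model(input_ids=inputs.input_ids,
                attention_mask=inputs.attention_mask,
                labels=labels)
outputs.loss.backward()  # in practice, an optimizer step would follow

# After fine-tuning, generation produces the task output for new inputs.
with torch.no_grad():
    generated = model.generate(inputs.input_ids, max_length=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

Single-task fine-tuning follows the same pattern without the shared task prefixes; the paper's comparison between the two regimes is what the abstract refers to when noting that not all tasks benefit from multi-task fine-tuning.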