Keywords
Computer Science, Abstract Syntax Tree, Code Reuse, Syntax, Documentation, Programming Languages, Artificial Intelligence, Software Engineering, Collection (abstract data type)
Authors
Qing Huang, Jianguo Zhu, Zhilong Li, Zhenchang Xing, Changjing Wang, Xiwei Xu
Identifier
DOI: 10.1109/icse-companion58688.2023.00013
Abstract
API documentation, technical blogs, and programming Q&A sites contain a large amount of partial code that can be reused in programming tasks. However, due to unresolved simple names and last-mile syntax errors, such partial code is frequently not compilable. To facilitate partial code reuse, we develop PCR-Chain for resolving fully qualified names (FQNs) and fixing last-mile syntax errors in partial code based on a giant pre-trained code model (e.g., Copilot). Methodologically, PCR-Chain is backed by a global-level prompt architecture (which combines three design ideas: hierarchical task breakdown; prompt composition, including sequential and conditional structures; and a mix of prompt-based AI and non-AI units) and local-level prompt design. Technically, PCR-Chain employs in-context learning rather than supervised fine-tuning with gradient updates on downstream task data. This approach enables the frozen, giant pre-trained code model to learn the desired behavior for a specific task from behavior-describing prompts and imitate that behavior to complete the task. Experimental results show that PCR-Chain automatically resolves FQNs and fixes last-mile syntax errors in 50 partial code samples collected from Stack Overflow with high success rates, without requiring any program analysis. The correct execution of the units, modules, and PCR-Chain demonstrates the effectiveness of the prompt design, prompt composition, and prompt architecture. Website: https://github.com/SE-qinghuang/PCR-Chain Demo Video: https://youtu.be/6HGRNc2JE
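The abstract's prompt architecture (sequential and conditional composition of prompt-based AI units and non-AI units) can be sketched as follows. This is a minimal illustrative mock, not the paper's implementation: all function names are hypothetical, and `llm()` is a stub standing in for a call to a frozen pre-trained code model, with canned behavior so the chain is runnable.

```python
def llm(prompt: str) -> str:
    """Stub for a frozen, giant pre-trained code model queried via prompts.
    A real system would call the model here; responses below are canned."""
    p = prompt.lower()
    if "resolve the simple name" in p:
        return "java.util.List"  # pretend the model resolved the FQN
    if "fix the syntax" in p:
        # pretend the model repaired a last-mile error (missing ';')
        return prompt.split("CODE:\n", 1)[1].rstrip() + ";"
    return ""

def resolve_fqn_unit(code: str) -> str:
    """AI unit: prompt the model to replace a simple name with its FQN."""
    fqn = llm("Resolve the simple name 'List' in this code.\nCODE:\n" + code)
    return code.replace("List", fqn)

def syntax_check_unit(code: str) -> bool:
    """Non-AI unit: a trivial rule-based last-mile syntax check."""
    return code.rstrip().endswith(";")

def fix_syntax_unit(code: str) -> str:
    """AI unit: prompt the model to repair a last-mile syntax error."""
    return llm("Please fix the syntax error.\nCODE:\n" + code)

def pcr_chain(code: str) -> str:
    """Sequential composition with a conditional branch, mixing AI
    and non-AI units, as in the global-level prompt architecture."""
    code = resolve_fqn_unit(code)      # step 1: FQN resolution (AI)
    if not syntax_check_unit(code):    # step 2: rule-based check (non-AI)
        code = fix_syntax_unit(code)   # step 3: conditional repair (AI)
    return code

partial = "List<String> names = makeNames()"
print(pcr_chain(partial))
# → java.util.List<String> names = makeNames();
```

The point of the sketch is the composition pattern: each unit is an ordinary function, so the chain can interleave cheap deterministic checks with prompt-based model calls, and the conditional branch only invokes the model when the non-AI check fails.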