Authors
Shuzheng Gao, Chaozheng Wang, Cuiyun Gao, Xiao-Dong Jiao, Chun Yong Chong, Shan Gao, Michael R. Lyu
Source
Journal: Cornell University - arXiv
Date: 2025-01-02
Identifier
DOI: 10.48550/arXiv.2501.01329
Abstract
Test cases are essential for validating the reliability and quality of software applications. Recent studies have demonstrated the capability of Large Language Models (LLMs) to generate useful test cases for given source code. However, existing work primarily relies on human-written plain prompts, which often leads to suboptimal results since the performance of LLMs can be highly influenced by the prompts. Moreover, these approaches use the same prompt for all LLMs, overlooking the fact that different LLMs may be best suited to different prompts. Given the wide variety of possible prompt formulations, automatically discovering the optimal prompt for each LLM presents a significant challenge. Although methods for automated prompt optimization exist in the natural language processing field, they struggle to produce effective prompts for the test case generation task. First, these methods iteratively optimize prompts by simply combining and mutating existing ones without proper guidance, resulting in prompts that lack diversity and tend to repeat the same errors in the generated test cases. Second, the prompts generally lack domain-specific contextual knowledge, limiting LLMs' performance on the task.
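To make the critique concrete, below is a minimal Python sketch of the combine-and-mutate style of prompt search the abstract describes, not the paper's own method. All names (SEED_PROMPTS, mutate, combine, score) are hypothetical, and score is a random stand-in for a real fitness evaluation that would run LLM-generated tests. Note that candidates are varied and selected purely by fitness, with no feedback about why earlier prompts produced faulty tests.

```python
import random

# Seed prompts for test case generation (illustrative only).
SEED_PROMPTS = [
    "Write unit tests for the following function:",
    "Generate pytest test cases covering edge cases for this code:",
]

def mutate(prompt: str) -> str:
    """Toy mutation: append a random instruction fragment."""
    fragments = [" Cover boundary values.",
                 " Use descriptive test names.",
                 " Include at least three assertions."]
    return prompt + random.choice(fragments)

def combine(p1: str, p2: str) -> str:
    """Toy crossover: splice the first half of one prompt onto the second half of another."""
    return p1[: len(p1) // 2] + p2[len(p2) // 2:]

def score(prompt: str) -> float:
    """Placeholder fitness. A real system would prompt an LLM, run the
    generated tests, and measure something like coverage or pass rate."""
    return random.random()

def optimize(rounds: int = 10) -> str:
    """Evolve prompts by combining and mutating the current population.
    Selection uses fitness alone; errors in earlier generations provide
    no guidance, which is the weakness the abstract points out."""
    population = list(SEED_PROMPTS)
    for _ in range(rounds):
        parents = random.sample(population, 2)
        population.append(mutate(combine(*parents)))
        population = sorted(population, key=score, reverse=True)[:4]
    return max(population, key=score)

if __name__ == "__main__":
    print(optimize())
```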