Computer science
Assertion
Correctness
Testing
Test oracle
Unit testing
Integration testing
Test harness
Test case
Artificial intelligence
Information retrieval
Software
Software engineering
Programming language
Machine learning
Software system
Regression testing
Software construction
Authors
Weifeng Sun, Hongyan Li, Meng Yan, Yan Lei, Hongyu Zhang
Identifier
DOI:10.1109/ase56229.2023.00090
Abstract
Unit testing validates the correctness of the unit under test and has become an essential activity in the software development process. A unit test consists of a test prefix, which drives the unit under test into a particular state, and a test oracle (e.g., an assertion), which specifies the expected behavior in that state. To reduce the manual effort of unit testing, Yu et al. proposed an integrated approach (integration for short), combining information retrieval with a deep learning-based approach, to generate assertions for a unit test. Despite being promising, there is still a knowledge gap as to why or where integration works or does not work. In this paper, we present an in-depth analysis of the effectiveness of integration. Our analysis shows that: ① the overall performance of integration is mainly due to its success in retrieving assertions; ② integration struggles to understand the semantic differences between the retrieved focal-test (a focal-test includes a test prefix and a unit under test) and the input focal-test, resulting in many tokens being incorrectly modified; and ③ integration is limited to a specific type of edit operation (i.e., replacement) and cannot handle token addition or deletion. To improve the effectiveness of assertion generation, this paper proposes a novel retrieve-and-edit approach named EditAS. Specifically, EditAS first retrieves a similar focal-test from a pre-defined corpus and treats its assertion as a prototype. Then, EditAS reuses the information in the prototype and edits the prototype automatically. EditAS is more generalizable than integration because it can ❶ comprehensively understand the semantic differences between the input and similar focal-tests; ❷ apply appropriate assertion edit patterns with greater flexibility; and ❸ generate more diverse edit actions than replacement operations alone.
We conduct experiments on two large-scale datasets, and the results demonstrate that EditAS outperforms the state-of-the-art approaches, with average improvements of 10.00%-87.48% in accuracy and 3.30%-42.65% in BLEU score, respectively.
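The retrieve-and-edit idea described in the abstract can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's implementation: the similarity metric (token-level Jaccard) and the rule-based `edit_prototype` function stand in for EditAS's retrieval component and its learned neural edit model, and all identifiers and example data are hypothetical.

```python
# Minimal sketch of a retrieve-and-edit pipeline for assertion generation.
# The Jaccard retriever and the naive token-mapping "edit model" are
# illustrative stand-ins for the learned components of EditAS.

def tokens(code: str) -> set:
    """Whitespace tokenization; real systems use a code-aware tokenizer."""
    return set(code.split())

def jaccard(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def retrieve(query_ft: str, corpus):
    """Return the (focal_test, assertion) pair most similar to the query."""
    return max(corpus, key=lambda pair: jaccard(query_ft, pair[0]))

def edit_prototype(prototype: str, query_ft: str, retrieved_ft: str) -> str:
    """Naive stand-in for the learned edit model: map tokens unique to the
    retrieved focal-test onto tokens unique to the query focal-test, then
    apply that mapping to the prototype assertion (replacement edits only,
    mirroring the limitation the paper observes in prior work)."""
    q, r = tokens(query_ft), tokens(retrieved_ft)
    old = list(dict.fromkeys(t for t in retrieved_ft.split() if t not in q))
    new = list(dict.fromkeys(t for t in query_ft.split() if t not in r))
    mapping = dict(zip(old, new))
    return " ".join(mapping.get(t, t) for t in prototype.split())

def generate_assertion(query_ft: str, corpus) -> str:
    retrieved_ft, prototype = retrieve(query_ft, corpus)
    return edit_prototype(prototype, query_ft, retrieved_ft)

# Hypothetical corpus of (focal-test, assertion) pairs.
corpus = [
    ("test prefix : int r = addTwo ( 2 ) ; focal : addTwo adds 2",
     "assertEquals ( 4 , addTwo ( 2 ) )"),
    ("test prefix : String s = concat ( a , b ) ; focal : concat",
     'assertEquals ( "ab" , s )'),
]
query = "test prefix : int r = addFive ( 2 ) ; focal : addFive adds 5"
print(generate_assertion(query, corpus))
# → assertEquals ( 4 , addFive ( 2 ) )
```

The retriever selects the most similar stored focal-test, its assertion becomes the prototype, and the differing token (`addTwo` → `addFive`) is rewritten; the learned edit model in EditAS additionally supports token addition and deletion, which this toy replacement-only mapping cannot express.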