Keywords
Fuzz testing, Computer science, Suite, Software engineering, Test suite, Programming language, Symbolic execution, Random testing, Software testing, Code coverage, Consistency (knowledge base), Test case, Software, Machine learning, Artificial intelligence, Regression analysis, Archaeology, History
Authors
Hoang Lam Nguyen, Nebras Nassar, Timo Kehrer, Lars Grunske
Source
Journal: Software engineering
[Science Publishing Group]
Date: 2021-01-01
Volume/Issue: 81-82
Citations: 2
Abstract
Fuzzing or fuzz testing is an established technique that aims to discover unexpected program behavior (e.g., bugs, security vulnerabilities, or crashes) by feeding automatically generated data into a program under test. However, the application of fuzzing to test Model-Driven Software Engineering (MDSE) tools is still limited because existing fuzzers struggle to provide structured, well-typed inputs, namely models that conform to the typing and consistency constraints induced by a given meta-model and the underlying modeling framework. Drawing on recent advances in both fuzz testing and automated model generation, we present three different approaches for fuzzing MDSE tools: a graph grammar-based fuzzer and two variants of a coverage-guided mutation-based fuzzer working with different sets of model mutation operators. Our evaluation on a set of real-world MDSE tools shows that our approaches can outperform both standard fuzzers and model generators w.r.t. their fuzzing capabilities. Moreover, we found that each of our approaches comes with its own strengths and weaknesses in terms of fault-finding capabilities and the ability to cover different aspects of the system under test. Thus, the approaches complement each other, forming a fuzzer suite for testing MDSE tools.
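The coverage-guided, mutation-based approach named in the abstract can be pictured as a generic feedback loop: mutate a model from a corpus with a model mutation operator (so the result stays well-typed w.r.t. the meta-model), run the tool under test, and keep the mutant if it exercises new coverage. The sketch below is illustrative only and is not the authors' implementation; all names in it (seed_models, mutation_operators, run_tool_under_test) are hypothetical placeholders.

import random

def coverage_guided_fuzz(seed_models, mutation_operators, run_tool_under_test, iterations=10_000):
    """Keep mutated models that trigger new coverage in the tool under test (illustrative sketch)."""
    corpus = list(seed_models)          # queue of interesting inputs (models)
    global_coverage = set()             # union of coverage observed so far
    crashes = []                        # fault-revealing inputs found

    for _ in range(iterations):
        parent = random.choice(corpus)              # pick a model from the corpus
        mutate = random.choice(mutation_operators)  # pick a model mutation operator
        child = mutate(parent)                      # apply it, yielding a mutated model

        coverage, crashed = run_tool_under_test(child)  # placeholder: returns (coverage set, crash flag)
        if crashed:
            crashes.append(child)                   # record the crashing model
        if not coverage.issubset(global_coverage):  # did the mutant cover anything new?
            global_coverage |= coverage
            corpus.append(child)                    # keep it for further mutation

    return crashes, corpus

The design point this sketch highlights is that the coverage feedback, not the mutation operator, decides which mutated models survive; the operators only differ in how aggressively they rewrite the model, which is where the two fuzzer variants in the paper diverge.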