Using mutation testing to measure behavioural test diversity
-
- Gomes, Francisco, 1987 (author)
- Gothenburg University, Department of Computer Science and Engineering, Software Engineering (GU)
-
- Dobslaw, Felix, 1983 (author)
- Gothenburg University, Department of Computer Science and Engineering, Software Engineering (GU)
-
- Feldt, Robert, 1972 (author)
- Gothenburg University, Department of Computer Science and Engineering, Software Engineering (GU)
-
- 2020
- English.
-
In: Proceedings - 2020 IEEE 13th International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2020, pp. 254-263
- Related links:
-
https://research.cha...
-
https://doi.org/10.1...
-
https://gup.ub.gu.se...
Abstract
- Diversity has been proposed as a key criterion to improve testing effectiveness and efficiency. It can be used to optimise large test repositories but also to visualise test maintenance issues and raise practitioners' awareness about waste in test artefacts and processes. Even though these diversity-based testing techniques aim to exercise diverse behaviour in the system under test (SUT), diversity has mainly been measured on and between artefacts (e.g., inputs, outputs or test scripts). Here, we introduce a family of measures to capture the behavioural diversity (b-div) of test cases by comparing their executions and failure outcomes. Using failure information to capture SUT behaviour has been shown to improve the effectiveness of history-based test prioritisation approaches. However, history-based techniques require reliable test execution logs, which are often not available or can be difficult to obtain due to flaky tests, scarcity of test executions, etc. To be generally applicable, we instead propose to use mutation testing to measure behavioural diversity by running the set of test cases on various mutated versions of the SUT. Concretely, we propose two specific b-div measures (based on accuracy and the Matthews correlation coefficient, respectively) and compare them with artefact-based diversity (a-div) for prioritising the test suites of 6 different open-source projects. Our results show that our b-div measures outperform a-div and random selection in all of the studied projects. The improvement is substantial, with an average increase in the average percentage of faults detected (APFD) of between 19% and 31%, depending on the size of the subset of prioritised tests.
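The abstract describes the b-div measures only at a high level. As an illustration of the general idea (not the authors' implementation), a minimal sketch can compare two tests by their "kill vectors" over a set of mutants: an accuracy-style diversity is the fraction of mutants on which the two tests disagree, and an MCC-style score treats one vector as a prediction of the other. All function names and the example kill vectors below are hypothetical.

```python
import math

def confusion(a, b):
    # a, b: boolean kill vectors; entry i is True if the test killed mutant i
    tp = sum(x and y for x, y in zip(a, b))
    tn = sum((not x) and (not y) for x, y in zip(a, b))
    fp = sum((not x) and y for x, y in zip(a, b))
    fn = sum(x and (not y) for x, y in zip(a, b))
    return tp, tn, fp, fn

def accuracy_diversity(a, b):
    # Accuracy-based b-div sketch: fraction of mutants where outcomes differ
    agree = sum(x == y for x, y in zip(a, b))
    return 1 - agree / len(a)

def mcc(a, b):
    # Matthews correlation coefficient between two kill vectors;
    # 1.0 means identical behaviour, values near 0 mean uncorrelated
    tp, tn, fp, fn = confusion(a, b)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom

# Hypothetical kill vectors of two tests over five mutants
t1 = [True, True, False, False, True]
t2 = [True, False, False, True, True]
print(accuracy_diversity(t1, t2))  # 0.4: the tests disagree on 2 of 5 mutants
print(mcc(t1, t2))
```

A prioritisation scheme along the paper's lines would then greedily pick tests whose kill vectors are most diverse from those already selected; the details of how the paper aggregates pairwise scores are not given in this record.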
Subject headings
- ENGINEERING AND TECHNOLOGY -- Civil Engineering -- Geotechnical Engineering (hsv//eng)
- NATURAL SCIENCES -- Computer and Information Sciences -- Software Engineering (hsv//eng)
- NATURAL SCIENCES -- Mathematics -- Probability Theory and Statistics (hsv//eng)
Keyword
- test selection
- diversity-based testing
- test prioritisation
- empirical study
Publication and Content Type
- kon (publication type: conference paper)
- ref (content type: peer-reviewed)