SwePub

An Evaluation of Open-Source Software Microbenchmark Suites for Continuous Performance Assessment

Laaber, C. (author)
University of Zurich
Leitner, Philipp, 1982 (author)
University of Gothenburg; Department of Computer Science and Engineering, Computing Science (GU); Chalmers University of Technology
2018-05-28
English.
In: MSR '18: Proceedings of the 15th International Conference on Mining Software Repositories. New York, NY, USA: ACM Digital Library. ISBN 9781450357166, pp. 119-130.
  • Book chapter (other academic/artistic)
Abstract
Continuous integration (CI) emphasizes quick feedback to developers. This is at odds with the current practice of performance testing, which predominantly focuses on long-running tests against entire systems in production-like environments. Alternatively, software microbenchmarking attempts to establish a performance baseline for small code fragments in short time. This paper investigates the quality of microbenchmark suites with a focus on suitability to deliver quick performance feedback and CI integration. We study ten open-source libraries written in Java and Go with benchmark suite sizes ranging from 16 to 983 tests, and runtimes between 11 minutes and 8.75 hours. We show that our study subjects include benchmarks with result variability of 50% or higher, indicating that not all benchmarks are useful for reliable discovery of slowdowns. We further artificially inject actual slowdowns into public API methods of the study subjects and test whether test suites are able to discover them. We introduce a performance-test quality metric called the API benchmarking score (ABS). ABS represents a benchmark suite's ability to find slowdowns among a set of defined core API methods. Resulting benchmarking scores (i.e., fraction of discovered slowdowns) vary between 10% and 100% for the study subjects. This paper's methodology and results can be used to (1) assess the quality of existing microbenchmark suites, (2) select a set of tests to be run as part of CI, and (3) suggest or generate benchmarks for currently untested parts of an API.
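
The abstract describes ABS as the fraction of discovered slowdowns. A plausible formalization, consistent with that parenthetical definition (the paper's exact weighting may differ), is

    ABS = |D| / |S|

where S is the set of core API methods into which slowdowns were injected and D ⊆ S is the subset whose slowdowns the benchmark suite detects.

For context on what such a microbenchmark looks like, below is a minimal sketch using JMH (the Java Microbenchmark Harness, the de-facto standard for Java microbenchmarking); the benchmarked method is purely illustrative and not taken from the study subjects:

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

public class JoinBenchmark {

    // Measures the average time per invocation of a small code
    // fragment, matching the abstract's notion of establishing a
    // performance baseline for short-running code.
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    public String joinDigits() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10; i++) {
            sb.append(i);
        }
        // Returning the result prevents the JIT from eliminating
        // the measured work as dead code.
        return sb.toString();
    }
}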

Subject headings

NATURAL SCIENCES -- Computer and Information Sciences (hsv//eng)
NATURAL SCIENCES -- Computer and Information Sciences -- Computer Engineering (hsv//eng)
NATURAL SCIENCES -- Computer and Information Sciences -- Software Engineering (hsv//eng)
NATURAL SCIENCES -- Computer and Information Sciences -- Computer Sciences (hsv//eng)

Keywords

continuous integration
software performance testing
microbenchmarking
empirical study
Java

Publication and content type

vet (subject category)
kap (subject category)
