SwePub
Search the SwePub database

Results list for search "WFRF:(Weyuker Elaine)"

Search: WFRF:(Weyuker Elaine)

  • Results 1-34 of 34
1.
  • Avritzer, Alberto, et al. (author)
  • Generating Test Cases Using a Performability Model
  • 2011
  • In: IET Software. - : Institution of Engineering and Technology (IET). - 1751-8806 .- 1751-8814. ; 5:2, pp. 113-119
  • Journal article (peer-reviewed), abstract:
    • The authors present a new approach for the automated generation of test cases to be used for demonstrating the reliability of large industrial mission-critical systems. In this study they extend earlier work by using a performability model to track resource usage and resource failures. Results from the transient Markov chain analysis are used to estimate the software reliability at a given system execution time.
2.
  • Avritzer, Alberto, et al. (author)
  • Methods and Opportunities for Rejuvenation in Aging Distributed Software
  • 2010
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212 .- 1873-1228. ; 83:9, pp. 1568-1578
  • Journal article (peer-reviewed), abstract:
    • In this paper we describe several methods for detecting the need for software rejuvenation in mission critical systems that are subjected to worm infection, and introduce new software rejuvenation algorithms. We evaluate these algorithms' effectiveness using both simulation studies and analytic modeling, by assessing the probability of mission success. The system under study emulates a Mobile Ad-Hoc Network (MANET) of processing nodes. Our analysis determined that some of our rejuvenation algorithms are quite effective in maintaining a high probability of mission success while the system is under explicit attack by a worm infection.
3.
  • Avritzer, A, et al. (author)
  • Monitoring for Security Intrusion using Performance Signatures
  • 2010
  • In: WOSP/SIPEW'10 - Proceedings of the 1st Joint WOSP/SIPEW International Conference on Performance Engineering. - New York, NY, USA : ACM. - 9781605585635 ; pp. 93-103
  • Conference paper (peer-reviewed), abstract:
    • A new approach for detecting security attacks on software systems by monitoring the software system performance signatures is introduced. We present a proposed architecture for security intrusion detection using off-the-shelf security monitoring tools and performance signatures. Our approach relies on the assumption that the performance signature of the well-behaved system can be measured and that the performance signature of several types of attacks can be identified. This assumption has been validated for operations support systems that are used to monitor large infrastructures and receive aggregated traffic that is periodic in nature. Examples of such infrastructures include telecommunications systems, transportation systems and power generation systems. In addition, significant deviation from well-behaved system performance signatures can be used to trigger alerts about new types of security attacks. We used a custom performance benchmark and five types of security attacks to derive performance signatures for the normal mode of operation and the security attack mode of operation. We observed that one of the types of the security attacks went undetected by the off-the-shelf security monitoring tools but was detected by our approach of monitoring performance signatures. We conclude that an architecture for security intrusion detection can be effectively complemented by monitoring of performance signatures.
5.
  • Bell, R, et al. (author)
  • Assessing the Impact of Using Fault-Prediction in Industry
  • 2011
  • In: Proceedings - 4th IEEE International Conference on Software Testing, Verification, and Validation Workshops, ICSTW 2011. - 9780769543451 ; pp. 561-565
  • Conference paper (peer-reviewed), abstract:
    • Do fault prediction models that guide testing and other efforts to improve software reliability lead to finding different or additional faults in the next release, to an improved process for finding the same faults that would occur were the models not used, or do they have no impact at all? In this challenge paper, we describe the difficulties involved in estimating effects of this sort of intervention and discuss ways to empirically answer that question and ways of assessing any changes, if present. We present several experimental design options and discuss the pros and cons of each.
7.
  • Bell, R, et al. (author)
  • The limited impact of individual developer data on software defect prediction
  • 2013
  • In: Empirical Software Engineering. - : Springer Science and Business Media LLC. - 1382-3256 .- 1573-7616. ; 18:3, pp. 478-505
  • Journal article (peer-reviewed), abstract:
    • Previous research has provided evidence that a combination of static code metrics and software history metrics can be used to predict with surprising success which files in the next release of a large system will have the largest numbers of defects. In contrast, very little research exists to indicate whether information about individual developers can profitably be used to improve predictions. We investigate whether files in a large system that are modified by an individual developer consistently contain either more or fewer faults than the average of all files in the system. The goal of the investigation is to determine whether information about which particular developer modified a file is able to improve defect predictions. We also extend earlier research evaluating use of counts of the number of developers who modified a file as predictors of the file's future faultiness. We analyze change reports filed for three large systems, each containing 18 releases, with a combined total of nearly 4 million LOC and over 11,000 files. A buggy file ratio is defined for programmers, measuring the proportion of faulty files in Release R out of all files modified by the programmer in Release R-1. We assess the consistency of the buggy file ratio across releases for individual programmers both visually and within the context of a fault prediction model. Buggy file ratios for individual programmers often varied widely across all the releases that they participated in. A prediction model that takes account of the history of faulty files that were changed by individual developers shows improvement over the standard negative binomial model of less than 0.13% according to one measure, and no improvement at all according to another measure. In contrast, augmenting a standard model with counts of cumulative developers changing files in prior releases produced up to a 2% improvement in the percentage of faults detected in the top 20% of predicted faulty files.
The cumulative number of developers interacting with a file can be a useful variable for defect prediction. However, the study indicates that adding information to a model about which particular developer modified a file is not likely to improve defect predictions.
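The buggy file ratio defined in this abstract can be computed directly. The sketch below is illustrative only (file names and data are invented, not from the study): for one programmer, it takes the files they modified in release R-1 and the files that proved faulty in release R, and returns the proportion of modified files that were faulty.

```python
def buggy_file_ratio(modified_prev, faulty_next):
    """Proportion of the files a programmer modified in release R-1
    that contained at least one fault in release R.
    Returns None if the programmer modified no files in R-1."""
    modified = set(modified_prev)
    if not modified:
        return None
    return len(modified & set(faulty_next)) / len(modified)

# Hypothetical example: 2 of the 4 modified files turned out faulty.
ratio = buggy_file_ratio(
    modified_prev=["a.c", "b.c", "c.c", "d.c"],
    faulty_next=["b.c", "d.c", "e.c"],
)
print(ratio)  # 0.5
```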
8.
  • Derehag, Jesper, et al. (author)
  • Transitioning Fault Prediction Models to a New Environment
  • 2016
  • In: Proceedings - 2016 12th European Dependable Computing Conference, EDCC 2016. - 9781509015825 ; pp. 241-248
  • Conference paper (peer-reviewed), abstract:
    • We describe the application and evaluation of fault prediction algorithms to a project developed by a Swedish company that transitioned from waterfall to agile development methods. The project used two different version control systems and a separate bug tracking system during its lifetime. The algorithms were originally designed for use on systems implemented with a traditional waterfall process at American companies that maintained their project records in an integrated database system that combined bug recording and version control. We compare the performance of the original prediction model on the American systems to the results obtained in the Swedish environment in both its pre-agile and agile stages. We also consider the impact of additional variables in the model.
9.
  • Enoiu, Eduard Paul, et al. (author)
  • Automated Test Generation using Model-Checking: An Industrial Evaluation
  • 2016
  • In: International Journal on Software Tools for Technology Transfer. - Germany : Springer. - 1433-2779 .- 1433-2787. ; 18:3, pp. 335-353
  • Journal article (peer-reviewed), abstract:
    • In software development, testers often focus on functional testing to validate implemented programs against their specifications. In safety critical software development, testers are also required to show that tests exercise, or cover, the structure and logic of the implementation. To achieve different types of logic coverage, various program artifacts such as decisions and conditions are required to be exercised during testing. Use of model-checking for structural test generation has been proposed by several researchers. The limited application to models used in practice and the state-space explosion can, however, impact model-checking and hence the process of deriving tests for logic coverage. Thus, there is a need to validate these approaches against relevant industrial systems such that more knowledge is built on how to efficiently use them in practice. In this paper, we present a tool-supported approach to handle software written in the Function Block Diagram language such that logic coverage criteria can be formalized and used by a model-checker to automatically generate tests. To this end, we conducted a study based on industrial use-case scenarios from Bombardier Transportation AB, showing how our toolbox COMPLETETEST can be applied to generate tests in software systems used in the safety-critical domain. To evaluate the approach, we applied the toolbox to 157 programs and found that it is efficient in terms of time required to generate tests that satisfy logic coverage and scales well for most of the programs.
10.
  • Landwehr, Carl, et al. (author)
  • Software Systems Engineering programmes: a capability approach
  • 2017
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212 .- 1873-1228. ; 125, pp. 354-364
  • Journal article (peer-reviewed), abstract:
    • This paper discusses third-level educational programmes that are intended to prepare their graduates for a career building systems in which software plays a major role. Such programmes are modelled on traditional Engineering programmes but have been tailored to applications that depend heavily on software. Rather than describe knowledge that should be taught, we describe capabilities that students should acquire in these programmes. The paper begins with some historical observations about the software development field. 
11.
  • Ostrand, T, et al. (author)
  • Can File Level Characteristics Help Identify System Level Fault-Proneness?
  • 2011
  • In: Hardware and Software: Verification and Testing. - Berlin, Heidelberg : Springer. - 9783642341878 ; pp. 176-189
  • Book chapter (peer-reviewed), abstract:
    • In earlier studies of multiple-release systems, we observed that the number of changes and the number of faults in a file in the past release, the size of a file, and the maturity of a file are all useful predictors of the file's fault proneness in the next release. In each case the data needed to make predictions have been extracted from a configuration management system which provides integrated change management and version control functionality. In this paper we investigate analogous questions for the system as a whole, rather than looking at its constituent files. Using two large industrial software systems, each with many field releases, we examine a number of questions relating defects to system maturity, how often the system has changed, the size difference of a release from the prior release, and the length of time a release has been under development before the start of system testing. Most of our observations match neither our intuition, nor the relations observed for these two systems when similar questions were asked at the file level.
12.
  • Ostrand, T, et al. (author)
  • Predicting Bugs in Large Industrial Software Systems
  • 2013
  • In: Software Engineering. - Germany : Springer Berlin/Heidelberg. - 9783642360534 ; pp. 71-93
  • Book chapter (peer-reviewed), abstract:
    • This chapter is a survey of close to ten years of software fault prediction research performed by our group. We describe our initial motivation, the variables used to make predictions, provide a description of our standard model based on Negative Binomial Regression, and summarize the results of using this model to make predictions for nine large industrial software systems. The systems range in size from hundreds of thousands to millions of lines of code. All have been in the field for multiple years and many releases, and continue to be maintained and enhanced, usually at 3 month intervals. Effectiveness of the fault predictions is assessed using two different metrics. We compare the effectiveness of the standard model to augmented models that include variables related to developer counts, to inter-file calling structure, and to information about specific developers who modified the code. We also evaluate alternate prediction models based on different training algorithms, including Recursive Partitioning, Bayesian Additive Regression Trees, and Random Forests.
13.
  • Ostrand, T, et al. (author)
  • Progress in Automated Software Defect Prediction
  • 2008
  • In: Lecture Notes in Computer Science, v. 5394. - Berlin, Heidelberg : Springer Berlin Heidelberg. - 9783642017018 ; pp. 200-204
  • Conference paper (peer-reviewed)
14.
  • Shin, Y, et al. (author)
  • On the use of calling structure information to improve fault prediction
  • 2012
  • In: Empirical Software Engineering. - : Springer Science and Business Media LLC. - 1382-3256 .- 1573-7616. ; 17:4-5, pp. 390-423
  • Journal article (peer-reviewed), abstract:
    • Previous studies have shown that software code attributes, such as lines of source code, and history information, such as the number of code changes and the number of faults in prior releases of software, are useful for predicting where faults will occur. In this study of two large industrial software systems, we investigate the effectiveness of adding information about calling structure to fault prediction models. Adding calling structure information to a model based solely on non-calling structure code attributes modestly improved prediction accuracy. However, the addition of calling structure information to a model that included both history and non-calling structure code attributes produced no improvement.
15.
  • Strandberg, Per, et al. (author)
  • Automated System Level Regression Test Prioritization in a Nutshell
  • 2017
  • In: IEEE Software. - 0740-7459 .- 1937-4194. ; 34:4, pp. 30-37
  • Journal article (peer-reviewed), abstract:
    • Westermo Research and Development has developed SuiteBuilder, an automated tool to determine an effective ordering of regression test cases. The ordering is based on factors such as fault detection success, the interval since the last execution, and code modifications. SuiteBuilder has enabled Westermo to overcome numerous regression-testing problems, including lack of time to run a complete regression suite, failure to detect bugs in a timely manner, and repeatedly omitted tests. In the tool's first two years of use, reordered test suites finished in the available time, most fault-detecting test cases were located in the first third of suites, no important test case was omitted, and the necessity for manual work on the suites decreased greatly. 
16.
  • Strandberg, Per Erik, et al. (author)
  • Automated test mapping and coverage for network topologies
  • 2018
  • In: ISSTA 2018 - Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis. - New York, NY, USA : Association for Computing Machinery, Inc. - 9781450356992 ; pp. 73-83
  • Conference paper (peer-reviewed), abstract:
    • Communication devices such as routers and switches play a critical role in the reliable functioning of embedded system networks. Dozens of such devices may be part of an embedded system network, and they need to be tested in conjunction with various computational elements on actual hardware, in many different configurations that are representative of actual operating networks. An individual physical network topology can be used as the basis for a test system that can execute many test cases, by identifying the part of the physical network topology that corresponds to the configuration required by each individual test case. Given a set of available test systems and a large number of test cases, the problem is to determine for each test case, which of the test systems are suitable for executing the test case, and to provide the mapping that associates the test case elements (the logical network topology) with the appropriate elements of the test system (the physical network topology). We studied a real industrial environment where this problem was originally handled by a simple software procedure that was very slow in many cases, and also failed to provide thorough coverage of each network's elements. In this paper, we represent both the test systems and the test cases as graphs, and develop a new prototype algorithm that a) determines whether or not a test case can be mapped to a subgraph of the test system, b) rapidly finds mappings that do exist, and c) exercises diverse sets of network nodes when multiple mappings exist for the test case. The prototype has been implemented and applied to over 10,000 combinations of test cases and test systems, and reduced the computation time by a factor of more than 80 from the original procedure. In addition, relative to a meaningful measure of network topology coverage, the mappings achieved an increased level of thoroughness in exercising the elements of each test system.
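The core of the mapping problem described in this abstract can be sketched as subgraph matching: assign each node of a test case's logical topology to a distinct node of a test system's physical topology so that every required logical link exists physically. The plain backtracking search below is a minimal illustration under invented node names, not the paper's algorithm, which also addresses speed and coverage diversity.

```python
def map_topology(logical_edges, physical_edges):
    """Return a dict mapping each logical node to a distinct physical
    node such that every logical link has a matching physical link,
    or None if no such mapping exists."""
    lnodes = sorted({n for e in logical_edges for n in e})
    pnodes = {n for e in physical_edges for n in e}
    padj = {n: set() for n in pnodes}
    for a, b in physical_edges:
        padj[a].add(b)
        padj[b].add(a)

    def mapped_neighbours(ln, assignment):
        # physical images of the already-assigned logical neighbours of ln
        return [assignment[m] for a, b in logical_edges if ln in (a, b)
                for m in (a, b) if m != ln and m in assignment]

    def extend(assignment):
        if len(assignment) == len(lnodes):
            return dict(assignment)
        ln = lnodes[len(assignment)]
        for pn in pnodes - set(assignment.values()):
            if all(q in padj[pn] for q in mapped_neighbours(ln, assignment)):
                assignment[ln] = pn
                found = extend(assignment)
                if found:
                    return found
                del assignment[ln]  # backtrack
        return None

    return extend({})

# Hypothetical example: a triangle of three test devices mapped into a
# 4-node physical network that contains a triangle (p1-p2-p3).
logical = [("dev1", "dev2"), ("dev2", "dev3"), ("dev1", "dev3")]
physical = [("p1", "p2"), ("p2", "p3"), ("p1", "p3"), ("p3", "p4")]
print(map_topology(logical, physical))
```

The exact assignment printed may vary (any rotation of the triangle is valid), which is where the paper's goal of exercising diverse node sets across repeated runs comes in.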
17.
  • Strandberg, Per Erik, et al. (author)
  • Experience Report: Automated System Level Regression Test Prioritization Using Multiple Factors
  • 2016
  • In: 27th International Symposium on Software Reliability Engineering ISSRE'16.
  • Conference paper (peer-reviewed), abstract:
    • We propose a new method of determining an effective ordering of regression test cases, and describe its implementation as an automated tool called SuiteBuilder developed by Westermo Research and Development AB. The tool generates an efficient order to run the cases in an existing test suite by using expected or observed test duration and combining priorities of multiple factors associated with test cases, including previous fault detection success, interval since last executed, and modifications to the code tested. The method and tool were developed to address problems in the traditional process of regression testing, such as lack of time to run a complete regression suite, failure to detect bugs in time, and tests that are repeatedly omitted. The tool has been integrated into the existing nightly test framework for Westermo software that runs on large-scale data communication systems.  In experimental evaluation of the tool, we found significant improvement in regression testing results. The re-ordered test suites finish within the available time, the majority of fault-detecting test cases are located in the first third of the suite, no important test case is omitted, and the necessity for manual work on the suites is greatly reduced.
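The ordering idea in this abstract can be illustrated with a naive weighted score over the named factors: past fault-detection success, time since last execution, and overlap with recent code changes. The weights, field names, and formula below are assumptions for illustration; they are not SuiteBuilder's actual scoring.

```python
def prioritize(tests, weights=(0.5, 0.3, 0.2)):
    """Order test cases by a weighted combination of three factors
    (hypothetical weights; higher score runs earlier)."""
    w_fault, w_age, w_change = weights

    def score(t):
        return (w_fault * t["fault_detection_rate"]            # 0..1
                + w_age * min(t["days_since_run"], 30) / 30    # capped at 30 days
                + w_change * (1.0 if t["touches_changed_code"] else 0.0))

    return sorted(tests, key=score, reverse=True)

suite = [
    {"name": "t_boot",  "fault_detection_rate": 0.1, "days_since_run": 2,  "touches_changed_code": False},
    {"name": "t_route", "fault_detection_rate": 0.6, "days_since_run": 10, "touches_changed_code": True},
    {"name": "t_vlan",  "fault_detection_rate": 0.3, "days_since_run": 30, "touches_changed_code": False},
]
print([t["name"] for t in prioritize(suite)])  # ['t_route', 't_vlan', 't_boot']
```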
18.
  • Strandberg, Per Erik, et al. (author)
  • Intermittently Failing Tests in the Embedded Systems Domain
  • 2020
  • In: International Symposium on Software Testing and Analysis ISSTA'20. - New York, NY, USA : ACM. ; pp. 337-348
  • Conference paper (peer-reviewed), abstract:
    • Software testing is sometimes plagued with intermittently failing tests and finding the root causes of such failing tests is often difficult. This problem has been widely studied at the unit testing level for open source software, but there has been far less investigation at the system test level, particularly the testing of industrial embedded systems. This paper describes our investigation of the root causes of intermittently failing tests in the embedded systems domain, with the goal of better understanding, explaining and categorizing the underlying faults. The subject of our investigation is a currently-running industrial embedded system, along with the system level testing that was performed. We devised and used a novel metric for classifying test cases as intermittent. From more than a half million test verdicts, we identified intermittently and consistently failing tests, and identified their root causes using multiple sources. We found that about 1-3% of all test cases were intermittently failing. From analysis of the case study results and related work, we identified nine factors associated with test case intermittence. We found that a fix for a consistently failing test typically removed a larger number of failures detected by other tests than a fix for an intermittent test. We also found that more effort was usually needed to identify fixes for intermittent tests than for consistent tests. An overlap between root causes leading to intermittent and consistent tests was identified. Many root causes of intermittence are the same in industrial embedded systems and open source software. However, when comparing unit testing to system level testing, especially for embedded systems, we observed that the test environment itself is often the cause of intermittence.
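The paper's novel metric for classifying a test as intermittent is not reproduced in this abstract. As a hypothetical stand-in, the rule below counts verdict "flips" (pass to fail or fail to pass) across a test's chronological run history: never-failing tests are healthy, tests that fail but rarely flip are consistent failures, and frequently flipping tests are flagged intermittent. The threshold is an arbitrary illustrative choice.

```python
def classify(verdicts, flip_threshold=0.2):
    """Classify one test case from its chronological 'pass'/'fail' verdicts.
    A naive flip-rate rule, not the metric from the paper."""
    if "fail" not in verdicts:
        return "passing"
    flips = sum(1 for a, b in zip(verdicts, verdicts[1:]) if a != b)
    flip_rate = flips / max(len(verdicts) - 1, 1)
    return "intermittent" if flip_rate >= flip_threshold else "consistent"

print(classify(["pass"] * 8))                                    # passing
print(classify(["pass", "pass"] + ["fail"] * 8))                 # consistent
print(classify(["pass", "fail", "pass", "pass", "fail", "pass"]))  # intermittent
```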
19.
  • Weyuker, Elaine, et al. (author)
  • Comparing Methods to Identify Defect Reports in a Change Management Database
  • 2008
  • In: DEFECTS'08: 2008 International Symposium on Software Testing and Analysis - Proceedings of the 2008 Workshop on Defects in Large Software Systems 2008, DEFECTS'08. - New York, NY, USA : ACM. - 9781605580517 ; pp. 27-31
  • Conference paper (peer-reviewed), abstract:
    • A key problem when doing automated fault analysis and fault prediction from information in a software change management database is how to determine which change reports represent software faults. In some change management systems, there is no simple way to distinguish fault reports from changes made to add new functionality or perform routine maintenance. This paper describes a comparison of two methods for classifying change reports for a large software system, and concludes that, for that particular system, the stage of development when the report was initialized is a more accurate indicator of its fault status than the presence of certain keywords in the report's natural language description.
21.
  • Weyuker, Elaine, et al. (author)
  • Comparing the Effectiveness of Several Modeling Methods for Fault Prediction
  • 2010
  • In: Empirical Software Engineering. - : Springer Science and Business Media LLC. - 1382-3256 .- 1573-7616. ; 15:3, pp. 277-295
  • Journal article (peer-reviewed), abstract:
    • We compare the effectiveness of four modeling methods-negative binomial regression, recursive partitioning, random forests and Bayesian additive regression trees-for predicting the files likely to contain the most faults for 28 to 35 releases of three large industrial software systems. Predictor variables included lines of code, file age, faults in the previous release, changes in the previous two releases, and programming language. To compare the effectiveness of the different models, we use two metrics-the percent of faults contained in the top 20% of files identified by the model, and a new, more general metric, the fault-percentile-average. The negative binomial regression and random forests models performed significantly better than recursive partitioning and Bayesian additive regression trees, as assessed by either of the metrics. For each of the three systems, the negative binomial and random forests models identified 20% of the files in each release that contained an average of 76% to 94% of the faults. 
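The first metric named in this abstract can be computed directly: rank files by predicted fault count and report the share of actual faults held by the top 20% of files. The data below is invented for illustration; the fault-percentile-average generalizes this idea by averaging over all possible cutoffs rather than fixing one at 20%.

```python
def top20_fault_percentage(predicted, actual):
    """predicted/actual: dicts mapping file name -> fault count.
    Returns the percent of actual faults in the top 20% of files
    as ranked by predicted fault count."""
    ranked = sorted(predicted, key=predicted.get, reverse=True)
    k = max(1, round(0.2 * len(ranked)))
    found = sum(actual.get(f, 0) for f in ranked[:k])
    return 100.0 * found / sum(actual.values())

# Hypothetical release with 10 files: the top 2 predicted files
# ("a" and "b") hold 16 of the 20 actual faults.
predicted = {"a": 9, "b": 7, "c": 3, "d": 2, "e": 1,
             "f": 1, "g": 0, "h": 0, "i": 0, "j": 0}
actual = {"a": 12, "b": 4, "c": 2, "d": 1, "e": 1}
print(top20_fault_percentage(predicted, actual))  # 80.0
```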
22.
  • Weyuker, Elaine (author)
  • Comparing the Effectiveness of Testing Techniques
  • 2008
  • In: Formal Methods and Testing. - Germany : Springer. - 9783540789178 ; pp. 271-297
  • Book chapter (peer-reviewed), abstract:
    • Testing software systems requires practitioners to decide how to select test data. This chapter discusses what it means for one test data selection criterion to be more effective than another. Several proposed comparison relations are discussed, highlighting the strengths and weaknesses of each. Also included are a discussion of how these relations evolved and an argument that large-scale empirical studies are needed.
23.
  • Weyuker, Elaine, et al. (author)
  • Do Too Many Cooks Spoil the Broth? Using the Number of Developers to Enhance Defect Prediction Models
  • 2008
  • In: Empirical Software Engineering. - : Springer Science and Business Media LLC. - 1382-3256 .- 1573-7616. ; 13:5, pp. 539-559
  • Journal article (peer-reviewed), abstract:
    • Fault prediction by negative binomial regression models is shown to be effective for four large production software systems from industry. A model developed originally with data from systems with regularly scheduled releases was successfully adapted to a system without releases to identify 20% of that system's files that contained 75% of the faults. A model with a pre-specified set of variables derived from earlier research was applied to three additional systems, and proved capable of identifying averages of 81, 94 and 76% of the faults in those systems. A primary focus of this paper is to investigate the impact on predictive accuracy of using data about the number of developers who access individual code units. For each system, including the cumulative number of developers who had previously modified a file yielded no more than a modest improvement in predictive accuracy. We conclude that while many factors can "spoil the broth" (lead to the release of software with too many defects), the number of developers is not a major influence.
25.
  • Weyuker, Elaine (author)
  • Empirical Software Engineering Research - The Good, The Bad, The Ugly
  • 2011
  • In: International Symposium on Empirical Software Engineering and Measurement, 2012. ; Article number 6092548
  • Conference paper (peer-reviewed), abstract:
    • The Software Engineering Research community has slowly recognized that empirical studies are an important way of validating ideas, and increasingly our community has stopped accepting the sufficiency of arguing that a smart person has come up with the idea and therefore it must be good. This has led to a flood of Software Engineering papers that contain at least some form of empirical study. However, not all empirical studies are created equal, and many may not even provide any useful information or value. We survey the gradual shift from essentially no empirical studies, to a small number of ones of questionable value, and look at what we need to do to ensure that our empirical studies really contribute to the state of knowledge in the field. Thus we have the good, the bad, and the ugly. What are we as a community doing correctly? What are we doing less well than we should be because we either don't have the necessary artifacts or because the time and resources required to do "the good" is perceived to be too great? And where are we missing the boat entirely in terms of not addressing critical questions, and often not even recognizing that these questions are central even if we don't know the answers? We look to see whether we can find some commonality in the projects that have really made the transition from research to widespread practice to see whether we can identify some common themes.
26.
  • Weyuker, Elaine, et al. (author)
  • Experiences with academic-industrial collaboration on empirical studies of software systems
  • 2017
  • In: 2017 IEEE 28TH INTERNATIONAL SYMPOSIUM ON SOFTWARE RELIABILITY ENGINEERING WORKSHOPS (ISSREW 2017). - : Institute of Electrical and Electronics Engineers Inc. ; pp. 164-168
  • Conference paper (peer-reviewed), abstract:
    • The authors have held both academic and industrial research positions, and have designed and carried out many empirical studies of large software systems that were built and maintained in industrial environments. Their experiences show that the most crucial component of a successful study is the participation of at least one industrial collaborator who is committed to the study’s goals and is able to provide advice and assistance throughout the course of the study. This paper describes studies carried out in three different industrial environments, discusses obstacles that arise, and how the authors have been able to overcome some of those obstacles. 
27.
  • Weyuker, Elaine (author)
  • Individual working memory capacity traced from multivariate pattern classification of EEG spectral power
  • 2018
  • In: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS. - : Institute of Electrical and Electronics Engineers Inc. - 9781538636466 ; pp. 4812-4815
  • Conference paper (peer-reviewed), abstract:
    • Working Memory (WM) processing is central for human cognitive behavior. Using neurofeedback training to enhance the individual WM capacity is a promising technique but requires careful consideration when choosing the feedback signal. Feedback in terms of univariate spectral power (specifically theta and alpha power) has yielded questionable behavioral effects. However, a promising new direction for WM neurofeedback training is by using a measure of WM that is extracted by multivariate pattern classification. This study recorded EEG oscillatory activity from 15 healthy participants while they were engaged in the n-back task, n ∈ [1,2]. Univariate measures of the theta, alpha, and theta-over-alpha power ratio and a measure of WM extracted from multivariate pattern classification (of n-back task load conditions) were compared in relation to individual n-back task performance. Results show that classification performance is positively correlated to individual 2-back task performance while theta, alpha and theta-over-alpha power ratio are not. These results suggest that the discriminability of multivariate EEG oscillatory patterns between two WM load conditions reflects individual WM capacity.
29.
  • Weyuker, Elaine, et al. (author)
  • Programmer-based Fault Prediction
  • 2010
  • In: ACM International Conference Proceeding Series. - New York, NY, USA : ACM. - 9781450304047
  • Conference paper (peer-reviewed), abstract:
    • Background: Previous research has provided evidence that a combination of static code metrics and software history metrics can be used to predict with surprising success which files in the next release of a large system will have the largest numbers of defects. In contrast, very little research exists to indicate whether information about individual developers can profitably be used to improve predictions. Aims: We investigate whether files in a large system that are modified by an individual developer consistently contain either more or fewer faults than the average of all files in the system. The goal of the investigation is to determine whether information about which particular developer modified a file is able to improve defect predictions. We also continue an earlier study to evaluate the use of counts of the number of developers who modified a file as predictors of the file's future faultiness. Method: We analyzed change reports filed by 107 programmers for 16 releases of a system with 1,400,000 LOC and 3100 files. A "bug ratio" was defined for programmers, measuring the proportion of faulty files in release R out of all files modified by the programmer in release R-1. The study compares the bug ratios of individual programmers to the average bug ratio, and also assesses the consistency of the bug ratio across releases for individual programmers. Results: Bug ratios varied widely among all the programmers, as well as for many individual programmers across all the releases that they participated in. We found a statistically significant correlation between the bug ratios for programmers for the first half of changed files versus the ratios for the second half, indicating a measurable degree of persistence in the bug ratio. However, when the computation was repeated with the bug ratio controlled not only by release, but also by file size, the correlation disappeared. 
In addition to the bug ratios, we confirmed that counts of the cumulative number of different developers changing a file over its lifetime can help to improve predictions, while other developer counts are not helpful.
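The "bug ratio" defined in the abstract above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: for each programmer, it takes the set of files they modified in release R-1 and the set of files found faulty in release R, and returns the faulty fraction.

```python
# Hypothetical sketch of the per-programmer "bug ratio" described above.
# File names and sets are illustrative, not from the studied system.

def bug_ratio(modified_prev, faulty_next):
    """modified_prev: files a programmer changed in release R-1.
    faulty_next: files with reported faults in release R."""
    if not modified_prev:
        return None  # programmer made no changes in release R-1
    return len(modified_prev & faulty_next) / len(modified_prev)

# Example: a programmer touched 4 files; 1 was faulty in the next release.
ratio = bug_ratio({"a.c", "b.c", "c.c", "d.c"}, {"b.c", "e.c"})
print(ratio)  # 0.25
```

The study then compares each programmer's ratio to the average over all programmers, and checks whether an individual's ratio persists from one half of their changed files to the other.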
  •  
30.
  • Weyuker, Elaine, et al. (författare)
  • Replicate, Replicate, Replicate
  • 2011
  • Konferensbidrag (refereegranskat)abstract
    • Replication is a standard part of scientific experimentation. Unfortunately, in software engineering, replication of experiments is often considered an inferior type of research, or not even research at all. In this paper we describe four different types of replication that we have been performing as part of validating the effectiveness and applicability of our software fault prediction research. We discuss replication over time, replication by using different subject systems, replication by changing the variables in prediction models, and replication by varying the modeling algorithms.
  •  
31.
  •  
32.
  •  
33.
  • Weyuker, Elaine, et al. (författare)
  • We’re Finding Most of the Bugs, but What Are We Missing?
  • 2010
  • Ingår i: ICST 2010 - 3rd International Conference on Software Testing, Verification and Validation. - 9780769539904 ; , s. 313-322
  • Konferensbidrag (refereegranskat)abstract
    • We compare two types of model that have been used to predict software fault-proneness in the next release of a software system. Classification models make a binary prediction that a software entity such as a file or module is likely to be either faulty or not faulty in the next release. Ranking models order the entities according to their predicted number of faults. They are generally used to establish a priority for more intensive testing of the entities that occur early in the ranking. We investigate ways of assessing both classification models and ranking models, and the extent to which metrics appropriate for one type of model are also appropriate for the other. Previous work has shown that ranking models are capable of identifying relatively small sets of files that contain 75-95% of the faults detected in the next release of large legacy systems. In our studies of the rankings produced by these models, the faults not contained in the predicted most fault-prone files are nearly always distributed across many of the remaining files; i.e., a single file that is in the lower portion of the ranking virtually never contains a large number of faults.
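A ranking model of the kind discussed above is typically assessed by asking what fraction of the next release's faults fall in the predicted most fault-prone files. The sketch below is an illustrative reconstruction of that metric under assumed data shapes (dictionaries mapping file names to fault counts), not the authors' evaluation code.

```python
# Illustrative metric for a ranking model: fraction of observed faults
# captured by the top `fraction` of files in the predicted ranking.

def faults_in_top(predicted, actual_faults, fraction=0.2):
    """predicted: {file: predicted fault count};
    actual_faults: {file: observed faults in the next release}."""
    ranked = sorted(predicted, key=predicted.get, reverse=True)
    top = ranked[: max(1, int(len(ranked) * fraction))]
    total = sum(actual_faults.values())
    return sum(actual_faults.get(f, 0) for f in top) / total if total else 0.0

# Made-up example: the single top-ranked file (top 20% of 5) holds 7 of 12 faults.
predicted = {"a": 9.1, "b": 4.0, "c": 2.5, "d": 0.3, "e": 0.1}
actual = {"a": 7, "b": 3, "c": 1, "d": 1, "e": 0}
print(faults_in_top(predicted, actual, 0.2))
```

The abstract's observation is that the faults *not* captured this way tend to be spread thinly across many remaining files, rather than concentrated in one missed file.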
  •  
34.
  • Weyuker, Elaine, et al. (författare)
  • What Can Fault Prediction Do For YOU?
  • 2008
  • Ingår i: Tests and Proofs. - Berlin, Heidelberg : Springer. - 9783540791232 ; , s. 18-29
  • Konferensbidrag (refereegranskat)abstract
    • It would obviously be very valuable to know in advance which files in the next release of a large software system are most likely to contain the largest numbers of faults. This is true whether the goal is to validate the system by testing or formally verifying it, or by using some hybrid approach. To accomplish this, we developed negative binomial regression models and used them to predict the expected number of faults in each file of the next release of a system. The predictions are based on code characteristics and fault and modification history data. This paper discusses what we have learned from applying the model to several large industrial systems, each with multiple years of field exposure. It also discusses our success in making accurate predictions and some of the issues that had to be considered.
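In a negative binomial regression model of the kind described above, the expected fault count for a file is the exponential of a linear combination of its characteristics (such as size, prior faults, and prior changes). The sketch below shows only that prediction step with made-up coefficients; the paper's models were fit to real code and change-history data, and the feature set and coefficients here are assumptions for illustration.

```python
import math

# Hedged sketch: how a *fitted* negative binomial regression turns file
# characteristics into an expected fault count. The coefficients and
# features (log LOC, prior faults, prior changes) are invented examples.

def expected_faults(features, coefs, intercept):
    """features and coefs are aligned lists; returns exp(intercept + coefs . features)."""
    eta = intercept + sum(b * x for b, x in zip(coefs, features))
    return math.exp(eta)

coefs, intercept = [0.5, 0.2, 0.05], -4.0
# A large, recently faulty, heavily changed file vs. a small quiet one.
big_churned = expected_faults([math.log(5000), 3, 10], coefs, intercept)
small_quiet = expected_faults([math.log(200), 0, 1], coefs, intercept)
print(big_churned > small_quiet)  # True
```

The fitted model is used to rank the files of the next release by expected fault count, concentrating testing or verification effort on the head of the ranking.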
  •  