SwePub
Search the SwePub database

Search: WFRF:(Meding Wilhelm)

  • Result 1-50 of 57
1.
  • Al Sabbagh, Khaled, 1987, et al. (author)
  • Improving Data Quality for Regression Test Selection by Reducing Annotation Noise
  • 2020
  • In: Proceedings - 46th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2020. ; , s. 191-194
  • Conference paper (peer-reviewed), abstract:
    • Big data and machine learning models have been increasingly used to support software engineering processes and practices. One example is the use of machine learning models to improve test case selection in continuous integration. However, one of the challenges in building such models is the identification and reduction of noise that often comes in large data. In this paper, we present a noise reduction approach that deals with the problem of contradictory training entries. We empirically evaluate the effectiveness of the approach in the context of selective regression testing. For this purpose, we use a curated training set as input to a tree-based machine learning ensemble and compare the classification precision, recall, and f-score against a non-curated set. Our study shows that using the noise reduction approach on the training instances gives better results in prediction with an improvement of 37% on precision, 70% on recall, and 59% on f-score.
  •  
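A minimal sketch of the noise-reduction idea described in the abstract above, i.e. dropping training entries whose feature values are identical but whose labels disagree. The column names and the toy data are illustrative, not taken from the paper:

```python
# Contradiction-based noise reduction, assuming a tabular training set
# where identical feature rows sometimes carry conflicting labels.
import pandas as pd

def drop_contradictions(df: pd.DataFrame, feature_cols, label_col):
    """Remove groups of rows that share feature values but disagree on the label."""
    consistent = df.groupby(feature_cols)[label_col].transform("nunique") == 1
    return df[consistent]

train = pd.DataFrame({
    "churn_tokens": ["a b c", "a b c", "d e f"],
    "verdict":      ["pass",  "fail",  "pass"],   # first two rows contradict
})
curated = drop_contradictions(train, ["churn_tokens"], "verdict")
print(curated)   # only the non-contradictory row remains
```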
2.
  • Al Sabbagh, Khaled, 1987, et al. (author)
  • Predicting Test Case Verdicts Using Textual Analysis of Committed Code Churns
  • 2019
  • In: CEUR Workshop Proceedings. - 1613-0073. ; 2476, s. 138-153
  • Conference paper (peer-reviewed), abstract:
    • Background: Continuous Integration (CI) is an agile software development practice that involves producing several clean builds of the software per day. The creation of these builds involves running excessive executions of automated tests, which is hampered by high hardware cost and reduced development velocity. Goal: The goal of our research is to develop a method that reduces the number of executed test cases at each CI cycle. Method: We adopt a design research approach with an infrastructure provider company to develop a method that exploits Machine Learning (ML) to predict test case verdicts for committed source code. We train five different ML models on two data sets and evaluate their performance using two simple retrieval measures: precision and recall. Results: While the results from training the ML models on the first data set of test executions revealed low performance, the curated data set for training showed an improvement in performance with respect to precision and recall. Conclusion: Our results indicate that the method is applicable when training the ML model on churns of small sizes.
  •  
3.
  • Al Sabbagh, Khaled, 1987, et al. (author)
  • Selective Regression Testing based on Big Data: Comparing Feature Extraction Techniques
  • 2020
  • In: IEEE Software. - 1937-4194 .- 0740-7459. ; , s. 322-329
  • Conference paper (peer-reviewed), abstract:
    • Regression testing is a necessary activity in continuous integration (CI) since it provides confidence that modified parts of the system are correct at each integration cycle. CI provides large volumes of data which can be used to support regression testing activities. By using machine learning, patterns about faulty changes in the modified program can be induced, allowing test orchestrators to make inferences about test cases that need to be executed at each CI cycle. However, one challenge in using learning models lies in finding a suitable way for characterizing source code changes and preserving important information. In this paper, we empirically evaluate the effect of three feature extraction algorithms on the performance of an existing ML-based selective regression testing technique. We designed and performed an experiment to empirically investigate the effect of Bag of Words (BoW), Word Embeddings (WE), and content-based feature extraction (CBF). We used stratified cross validation on the space of features generated by the three FE techniques and evaluated the performance of three machine learning models using the precision and recall metrics. The results from this experiment showed a significant difference between the models' precision and recall scores, suggesting that the BoW-fed model outperforms the other two models with respect to precision, whereas a CBF-fed model outperforms the rest with respect to recall.
  •  
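As a rough illustration of the comparison above, the sketch below feeds a Bag-of-Words representation of code changes to a tree-based model and scores it with precision and recall; the change texts, labels and split are invented for the example:

```python
# Sketch: BoW features for code changes feeding a tree ensemble,
# scored with precision and recall. Data and names are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

changes = ["if x is None return", "for i in range n",
           "open file read", "if y is None return"]
labels  = [1, 0, 0, 1]   # 1 = change historically triggered test failures

X = CountVectorizer().fit_transform(changes)          # BoW feature space
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.5,
                                          random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(precision_score(y_te, pred, zero_division=0),
      recall_score(y_te, pred, zero_division=0))
```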
4.
  •  
5.
  •  
6.
  • Antinyan, Vard, 1984, et al. (author)
  • Identifying Complex Functions : By Investigating Various Aspects of Code Complexity
  • 2015
  • In: Proceedings of 2015 Science and Information Conference (SAI). - : IEEE Press. - 9781479985470 - 9781479985487 - 9781479985463 ; , s. 879-888
  • Conference paper (peer-reviewed), abstract:
    • The complexity management of software code has become one of the major problems in the software development industry. With growing complexity the maintenance effort of code increases. Moreover, various aspects of complexity create difficulties for complexity assessment. The objective of this paper is to investigate the relationships of various aspects of code complexity and propose a method for identifying the most complex functions. We have conducted an action research project in two software development companies and complemented it with a study of three open source products. Four complexity metrics are measured, and their nature and mutual influence are investigated. The results and possible explanations are discussed with software engineers in industry. The results show that there are two distinguishable aspects of complexity of source code functions: internal and outbound complexities. Those have an inverse relationship. Moreover, the product of them does not seem to be greater than a certain limit, regardless of software size. We present a method that permits identification of the most complex functions considering the two aspects of complexities. The evaluation shows that the use of the method is effective in industry: It enables identification of the 0.5% most complex functions out of thousands of functions for reengineering.
  •  
7.
  • Antinyan, Vard, 1984, et al. (author)
  • Identifying risky areas of software code in Agile/Lean software development: An industrial experience report
  • 2014
  • In: 2014 Software Evolution Week - IEEE Conference on Software Maintenance, Reengineering, and Reverse Engineering, CSMR-WCRE 2014 - Proceedings. - : IEEE. - 9781479941742
  • Conference paper (peer-reviewed), abstract:
    • Modern software development relies on incremental delivery to facilitate quick response to customers' requests. In this dynamic environment the continuous modifications of software code can cause risks for software developers; when developing a new feature increment, the added or modified code may contain fault-prone or difficult-to-maintain elements. The outcome of these risks can be defective software or decreased development velocity. This study presents a method to identify the risky areas and assess the risk when developing software code in Lean/Agile environment. We have conducted an action research project in two large companies, Ericsson AB and Volvo Group Truck Technology. During the study we have measured a set of code properties and investigated their influence on risk. The results show that the superposition of two metrics, complexity and revisions of a source code file, can effectively enable identification and assessment of the risk. We also illustrate how this kind of assessment can be successfully used by software developers to manage risks on a weekly basis as well as release-wise. A measurement system for systematic risk assessment has been introduced to two companies. © 2014 IEEE.
  •  
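The core of the approach above, superposing file complexity and revision counts to flag risky code, can be illustrated in a few lines of Python; the metric values and the threshold are made up:

```python
# Rank files by combining cyclomatic complexity with revision count.
files = {
    # path: (max cyclomatic complexity, revisions in the release)
    "net/router.c":  (42, 31),
    "ui/dialog.c":   (12,  4),
    "core/parser.c": (55,  9),
}

def risk(complexity: int, revisions: int) -> int:
    # Simple superposition: a file is risky when BOTH metrics are high.
    return complexity * revisions

for path, (cx, rev) in sorted(files.items(), key=lambda kv: -risk(*kv[1])):
    flag = "REVIEW" if risk(cx, rev) > 400 else "ok"
    print(f"{path:15} complexity={cx:3} revisions={rev:3} -> {flag}")
```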
8.
  • Antinyan, Vard, 1984, et al. (author)
  • Monitoring Evolution of Code Complexity and Magnitude of Changes
  • 2014
  • In: Acta Cybernetica. - 0324-721X. ; 21:3, s. 367-382
  • Journal article (peer-reviewed), abstract:
    • Background: Complexity management has become a crucial activity in continuous software development. While the overall perceived complexity of a product grows rather insignificantly, the small units, such as functions and files, can have noticeable complexity growth with every increment of product features. This kind of evolution triggers risks of escalating fault-proneness and deteriorating maintainability. Goal: The goal of this research was to develop a measurement system which enables effective monitoring of complexity evolution. Method: An action research has been conducted in two large software development organizations. We have measured three complexity and two change properties of code for two large industrial products. The complexity growth has been measured for five consecutive releases of the products. Different patterns of growth have been identified and evaluated with software engineers in industry. Results: The results show that monitoring cyclomatic complexity evolution of functions and number of revisions of files focuses the attention of designers to potentially problematic files and functions for manual assessment and improvement. A measurement system was developed at Ericsson to support the monitoring process.
  •  
9.
  • Antinyan, Vard, et al. (author)
  • Monitoring evolution of code complexity and magnitude of changes
  • 2014
  • In: Acta Cybernetica. - : University of Szeged, Institute of Informatics. - 0324-721X. ; 21:3, s. 367-382
  • Journal article (peer-reviewed), abstract:
    • Complexity management has become a crucial activity in continuous software development. While the overall perceived complexity of a product grows rather insignificantly, the small units, such as functions and files, can have noticeable complexity growth with every increment of product features. This kind of evolution triggers risks of escalating fault-proneness and deteriorating maintainability. The goal of this research was to develop a measurement system which enables effective monitoring of complexity evolution. An action research has been conducted in two large software development organizations. We have measured three complexity and two change properties of code for two large industrial products. The complexity growth has been measured for five consecutive releases of the products. Different patterns of growth have been identified and evaluated with software engineers in industry. The results show that monitoring cyclomatic complexity evolution of functions and number of revisions of files focuses the attention of designers to potentially problematic files and functions for manual assessment and improvement. A measurement system was developed at Ericsson to support the monitoring process.
  •  
10.
  • Antinyan, Vard, 1984, et al. (author)
  • Monitoring Evolution of Code Complexity in Agile/Lean Software Development - A Case Study at Two Companies
  • 2013
  • In: 13th Symposium on Programming Languages and Software Tools, SPLST 2013 - Proceedings. - 9789633062289 ; , s. 1-15
  • Conference paper (peer-reviewed), abstract:
    • One of the distinguishing characteristics of Agile and Lean software development is that software products “grow” with new functionality with relatively small increments. Continuous customer demands of new features and the companies’ abilities to deliver on those demands are the two driving forces behind this kind of software evolution. Despite the numerous benefits there are a number of risks associated with this kind of growth. One of the main risks is the fact that the complexity of the software product grows slowly, but over time reaches scales which makes the product hard to maintain or evolve. The goal of this paper is to present a measurement system for monitoring the growth of complexity and drawing attention when it becomes problematic. The measurement system was developed during a case study at Ericsson and Volvo Group Truck Technology. During the case study we explored the evolution of size, complexity, revisions and number of designers of two large software products from the telecom and automotive domains. The results show that two measures needed to be monitored to keep the complexity development under control - McCabe’s complexity and number of revisions.
  •  
11.
  • Antinyan, Vard, 1984, et al. (author)
  • Profiling prerelease software product and organizational performance
  • 2014
  • In: Continuous software engineering. - Cham : Springer. - 9783319112831 ; , s. 167-182
  • Book chapter (peer-reviewed), abstract:
    • Background: Large software development organizations require effective means of quantifying excellence of products and improvement areas. A good quantification of excellence supports organizations in retaining market leadership. In addition, a good quantification of improvement areas is needed to continuously increase performance of products and processes. Objective: In this chapter we present a method for developing product and organizational performance profiles. The profiles are a means of quantifying prerelease properties of products and quantifying performance of software development processes. Method: We conducted two case studies at three companies: Ericsson, Volvo Group Truck Technology, and Volvo Car Corporation. The goal of the first case study was to identify risky areas of source code. We used a focus group to elicit and evaluate measures and indicators at Ericsson. Volvo Group Truck Technology was used to validate our profiling method. Results: The results of the first case study showed that profiling of product performance can be done by identifying risky areas of source code using a combination of two measures: McCabe complexity and number of revisions of files. The results of the second case study show that profiling change frequencies of models can help developers identify implicit architectural dependencies. Conclusions: We conclude that profiling is an effective tool for supporting improvements of product and organizational performance. The key for creating useful profiles is the close collaboration between research and development organizations. © 2014 Springer International Publishing Switzerland. All rights reserved.
  •  
12.
  • Berbyuk Lindström, Nataliya, 1978, et al. (author)
  • Understanding Metrics Team-Stakeholder Communication in Agile Metrics Service Delivery
  • 2021
  • In: APSEC (Asian Pacific Software Engineering conference), December 6-10, Taiwan-Virtual.. ; 2021-December, s. 401-409
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we explore challenges in communication between metrics teams and stakeholders in metrics service delivery. Drawing on interviews and interactive workshops with team members and stakeholders at two different Swedish agile software development organizations, we identify interrelated challenges such as aligning expectations, prioritizing demands, providing regular feedback, and maintaining continuous dialogue, which influence team-stakeholder interaction, relationships and performance. Our study shows the importance of understanding communicative hurdles and provides suggestions for their mitigation, therefore meriting further empirical research.
  •  
13.
  • Calikli, Gul, et al. (author)
  • Measure early and decide fast: Transforming quality management and measurement to continuous deployment
  • 2018
  • In: ACM International Conference Proceeding Series. - New York, NY, USA : ACM.
  • Conference paper (peer-reviewed), abstract:
    • Continuous deployment has become software companies' inevitable response to the economic pressures of the market. At the same time, software quality is crucial in order to meet customers' expectations and hence succeed in the market. Therefore, current quality management processes require transformation in order to keep up with the fast pace of the market while at the same time meeting customers' expectations. In order to figure out how the current quality management process should be transformed to keep up with the fast pace of the market while ensuring both product quality and continuous deployment, we conducted a qualitative study at a large infrastructure provider company. During the interviews we conducted with the quality manager, developer and test architect, we used a metrics portfolio consisting of 59 candidate metrics that can be used in the transformed quality management process. Our findings show that, out of these candidate metrics, 9 metrics should be used in the internal quality measurement dashboard for quality check at the end of the software development life-cycle (SDLC) before the software is released to customer site, while 3 metrics should be used by the quality manager to monitor earlier phases of the SDLC and 5 metrics should also be delegated to earlier phases of the SDLC but without the involvement of the quality manager. To summarize, our study supports the claim that quality managers should not only be gatekeepers, but also proactive controllers of quality by monitoring earlier phases of the SDLC. © 2018 Association for Computing Machinery.
  •  
14.
  • Horkoff, Jennifer, 1980, et al. (author)
  • A Method for Modeling Data Anomalies in Practice
  • 2021
  • In: Proceedings - 2021 47th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2021.
  • Conference paper (peer-reviewed), abstract:
    • As technology has allowed us to collect large amounts of industrial data, it has become critical to analyze and understand the data collected, in particular to find data anomalies. Anomaly analysis allows a company to detect, analyze and understand anomalous or unusual data patterns. This is an important activity to understand, for example, deviations in service which may indicate potential problems, or differing customer behavior which may reveal new business opportunities. Much previous work has focused on anomaly detection, in particular using machine learning. Such approaches allow clustering of data patterns by common attributes, and, although useful, clusters often do not correspond to the root causes of anomalies, meaning that more manual analysis is needed. In this paper we report on a design science study with two different teams, in a partner company which focuses on modeling and understanding the attributes and root causes of data anomalies. After iteration, for each team, we have created general and anomaly-specific UML class diagrams and goal models to capture anomaly details. We use our experiences to create an example taxonomy, classifying anomalies by their root causes, and to create a general method for modeling and understanding data anomalies. This work paves the way for a better understanding of anomalies and their root causes, leading towards creating a training set which may be used for machine learning approaches.
  •  
15.
  • Johansson, Ludvig, et al. (author)
  • An Industrial Case Study on Visualization of Dependencies between Software Measurements
  • 2007
  • In: 7th Conference on Software Engineering Research and Practice in Sweden. - 1654-4870. ; 1:2007:02, s. 23-33
  • Conference paper (peer-reviewed), abstract:
    • Managing large software projects requires working with a large set of measurements to plan, monitor, and control the projects. The measurements can be, and usually are, related to each other, which raises the issue of efficiently managing the measurements by identifying, quantifying, and comparing dependencies between measurements within a project or between projects. This paper presents a case study performed at one of the units of Ericsson. The case study was designed to elicit and evaluate viable methods for visualizing dependencies between software measurements from the perspective of project and quality managers. By developing a series of prototypes, and evaluating them in interviews, we get results showing the applicability of each visualization method in the context of the studied organization. The prototypes were used to visualize correlation coefficients, distribution dependencies, and project differences. The results show that even simple methods could significantly improve the work of quality managers and make the work with measurements more efficient in the organization.
  •  
16.
  • Knauss, Eric, 1977, et al. (author)
  • Supporting Continuous Integration by Code-Churn Based Test Selection
  • 2015
  • In: RCoSE - 2nd International Workshop on Rapid Continuous Software Engineering @ ICSE 2015 Florence, Italy.
  • Conference paper (other academic/artistic), abstract:
    • Continuous integration promises advantages in large-scale software development by enabling software development organizations to deliver new functions faster. However, implementing continuous integration in large software development organizations is challenging because of organizational, social and technical reasons. One of the technical challenges is the ability to rapidly prioritize the test cases which can be executed quickly and trigger the most failures as early as possible. In our research we propose and evaluate a method for selecting a suitable set of functional regression tests on system level. The method is based on analysis of correlations between test-case failures and source code changes and is evaluated by combining semi-structured interviews and workshops with practitioners at Ericsson and Axis Communications in Sweden. The results show that using measures of precision and recall, the test cases can be prioritized. The prioritization leads to finding an optimal test suite to execute before the integration.
  •  
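One possible reading of the correlation-based selection described above, sketched under the assumption that integration history records which files changed and which tests failed; the data is invented:

```python
# Churn-based test selection: pick the tests that historically failed
# together with changes to the files in the current commit.
from collections import Counter, defaultdict

# Illustrative history: (changed files, tests that failed) per integration.
history = [
    ({"a.c", "b.c"}, {"t1", "t3"}),
    ({"a.c"},        {"t1"}),
    ({"c.c"},        {"t2"}),
]

cooccur = defaultdict(Counter)           # file -> failing-test counts
for changed, failed in history:
    for f in changed:
        cooccur[f].update(failed)

def select_tests(changed_files, top_n=2):
    scores = Counter()
    for f in changed_files:
        scores.update(cooccur[f])
    return [t for t, _ in scores.most_common(top_n)]

print(select_tests({"a.c", "c.c"}))      # ['t1', 't3']
```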
17.
  • Meding, Wilhelm, et al. (author)
  • The Role of Design and Implementation Models in Establishing Mature Measurement Programs
  • 2009
  • In: 7th Nordic workshop on Model Driven Engineering/ Tampere University of Technology research report. - 1797-836X. - 9789521522123 ; 2009:5, s. 284-299
  • Conference paper (peer-reviewed), abstract:
    • Adoption of Model Driven Engineering is often caused by industrial needs for increased productivity and/or effective communication within teams and with the customers (thus leading to improved quality of the final products). Introducing measurement programs into large organizations, like Ericsson, can benefit from using models in order to overcome difficulties with establishing a common understanding of software metrics. Using automated transformations of domain-specific modeling of measurement systems (systems used to collect, analyze and present metric data) can decrease the time required to develop customized measurement systems for project, product and line managers. In this paper we present a set of experiences of how introducing models and Model Driven Engineering decreased the development time of measurement systems (by a factor of 2), established a mature metric culture in the organization and contributed to the large-scale spread of metric programs.
  •  
18.
  • Ochodek, M., et al. (author)
  • Automated Code Review Comment Classification to Improve Modern Code Reviews
  • 2022
  • In: Lecture Notes in Business Information Processing. - Cham : Springer International Publishing. - 1865-1356 .- 1865-1348. ; 439 LNBIP, s. 23-40
  • Conference paper (peer-reviewed), abstract:
    • Modern Code Reviews (MCRs) are a widely-used quality assurance mechanism in continuous integration and deployment. Unfortunately, in medium and large projects, the number of changes that need to be integrated, and consequently the number of comments triggered during MCRs could be overwhelming. Therefore, there is a need for quickly recognizing which comments are concerning issues that need prompt attention to guide the focus of the code authors, reviewers, and quality managers. The goal of this study is to design a method for automated classification of review comments to identify the needed change faster and with higher accuracy. We conduct a Design Science Research study on three open-source systems. We designed a method (CommentBERT) for automated classification of the code-review comments based on the BERT (Bidirectional Encoder Representations from Transformers) language model and a new taxonomy of comments. When applied to 2,672 comments from Wireshark, The Mono Framework, and Open Network Automation Platform (ONAP) projects, the method achieved accuracy, measured using Matthews Correlation Coefficient, of 0.46–0.82 (Wireshark), 0.12–0.8 (ONAP), and 0.48–0.85 (Mono). Based on the results, we conclude that the proposed method seems promising and could be potentially used to build machine-learning-based tools to support MCRs as long as there is a sufficient number of historical code-review comments to train the model.
  •  
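A rough sketch of BERT-based comment classification in the spirit of the paper above, not the actual CommentBERT implementation; the base model is generic, the label taxonomy is a placeholder, and a real setup would first fine-tune on labeled review comments:

```python
# NOT the paper's method: a generic BERT sequence-classification setup.
# The classification head here is untrained, so this only demonstrates
# the plumbing, not useful predictions.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

LABELS = ["style", "logic", "docs"]       # placeholder comment taxonomy
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

batch = tok(["Please rename this variable", "This loop never terminates"],
            padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
print([LABELS[i] for i in logits.argmax(dim=-1).tolist()])
```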
19.
  • Ochodek, Miroslaw, et al. (author)
  • On Identifying Similarities in Git Commit Trends—A Comparison Between Clustering and SimSAX
  • 2020
  • In: SWQD 2020: Software Quality: Quality Intelligence in Software and Systems Engineering. - Cham : Springer. - 1865-1348 .- 1865-1356. - 9783030355104
  • Conference paper (peer-reviewed), abstract:
    • Software products evolve increasingly fast as markets continuously demand new features and agility to customers' needs. This evolution of products triggers an evolution of software development practices in a different way. Compared to classical methods, where products were developed in projects, contemporary methods for continuous integration, delivery, and deployment develop products as part of continuous programs. In this context, software architects, designers, and quality engineers need to understand how the processes evolve over time since there is no natural start and stop of projects. For example, they need to know how similar two iterations of the same program are, or how similar two development programs are. In this paper, we compare three methods for calculating the degree of similarity between projects by comparing their Git commit series. We test three approaches—the DNA-motifs-inspired …
  •  
20.
  • Ochodek, Miroslaw, et al. (author)
  • Recognizing lines of code violating company-specific coding guidelines using machine learning: A Method and Its Evaluation
  • 2020
  • In: Empirical Software Engineering. - : Springer Science and Business Media LLC. - 1382-3256 .- 1573-7616. ; 25, s. 220-265
  • Journal article (peer-reviewed), abstract:
    • Software developers in big and medium-size companies are working with millions of lines of code in their codebases. Assuring the quality of this code has shifted from simple defect management to proactive assurance of internal code quality. Although static code analysis and code reviews have been at the forefront of research and practice in this area, code reviews are still an effort-intensive and interpretation-prone activity. The aim of this research is to support code reviews by automatically recognizing company-specific code guidelines violations in large-scale, industrial source code. In our action research project, we constructed a machine-learning-based tool for code analysis where software developers and architects in big and medium-sized companies can use a few examples of source code lines violating code/design guidelines (up to 700 lines of code) to train decision-tree classifiers to find similar …
  •  
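The few-examples idea above, training a decision-tree classifier on lines known to violate a guideline, might look roughly like this; the example lines and the "guideline" are invented:

```python
# Vectorize source lines and train a decision tree to flag lines
# resembling known guideline violations.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

violations = ["int x;", "float tmp;"]            # e.g. bans vague names
clean      = ["int counter;", "float average;"]

pipe = make_pipeline(CountVectorizer(analyzer="char", ngram_range=(1, 3)),
                     DecisionTreeClassifier(random_state=0))
pipe.fit(violations + clean, [1, 1, 0, 0])

# Flag new lines that resemble the violation examples.
print(pipe.predict(["int y;", "int total;"]))
```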
21.
  • Ochodek, Miroslaw, 1980, et al. (author)
  • Using Machine Learning to Design a Flexible LOC Counter
  • 2017
  • In: Workshop on Machine Learning Techniques for Software Quality Evaluation. - : IEEE. - 9781509065974
  • Conference paper (peer-reviewed), abstract:
    • Background: The results of counting the size of programs in terms of Lines-of-Code (LOC) depend on the rules used for counting (i.e. the definition of which lines should be counted). In the majority of the measurement tools, the rules are statically coded in the tool and the users of the measurement tools do not know which lines were counted and which were not. Goal: The goal of our research is to investigate how to use machine learning to teach a measurement tool which lines should be counted and which should not. Our interest is to identify which parameters of the learning algorithm can be used to classify lines to be counted. Method: Our research is based on the design science research methodology, where we construct a measurement tool based on machine learning and evaluate it on open source programs. As a training set, we use industry professionals to classify which lines should be counted. Results: The results show that classifying the lines as to be counted or not has an average accuracy varying between 0.90 and 0.99 measured as Matthews Correlation Coefficient and between 95% and nearly 100% measured as the percentage of correctly classified lines. Conclusions: Based on the results we conclude that using machine learning algorithms as the core of modern measurement instruments has a large potential and should be explored further.
  •  
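A toy version of such a learnable LOC counter, with hand-picked per-line features, an off-the-shelf classifier and the Matthews Correlation Coefficient used in the paper; the features and data are illustrative:

```python
# Simple per-line features and a classifier deciding which lines count.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef

def features(line: str):
    s = line.strip()
    return [len(s), s.startswith("//"), s in ("{", "}"), s == ""]

lines  = ["int a = 1;", "// comment", "{", "", "return a;", "}"]
counts = [1, 0, 0, 0, 1, 0]      # 1 = humans said "count this line"

clf = LogisticRegression().fit([features(l) for l in lines], counts)
pred = clf.predict([features(l) for l in lines])
print("MCC:", matthews_corrcoef(counts, pred))
```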
22.
  •  
23.
  •  
24.
  •  
25.
  • Rana, Rakesh, et al. (author)
  • Software defect prediction in automotive and telecom domain : A life-cycle approach
  • 2015
  • In: Software Technologies. - Cham : Springer. - 9783319255781 - 9783319255798 ; , s. 217-232
  • Book chapter (peer-reviewed), abstract:
    • Embedded software is playing an ever-increasing role in providing functionality and user experience. At the same time, the size and complexity of this software are also increasing, which brings new challenges for ensuring quality and dependability. Developing high-quality software with superior dependability characteristics requires an effective software development process with greater control. Methods of software defect prediction can help optimize the software verification and validation activities by providing useful information for test resource allocation and release planning decisions. We review the software development and testing process for two large companies from the automotive and telecom domain and map different defect prediction methods and their applicability to their lifecycle phases. Based on the overview and current trends we also identify possible directions for software defect prediction techniques and application in these domains. © Springer International Publishing Switzerland 2015.
  •  
26.
  •  
27.
  • Staron, Miroslaw, 1977, et al. (author)
  • A framework for developing measurement systems and its industrial evaluation
  • 2009
  • In: Information and Software Technology. ; 51:April, s. 721-737
  • Journal article (peer-reviewed), abstract:
    • As in every engineering discipline, metrics play an important role in software development, with the difference that almost all software projects need the customization of metrics used. In other engineering disciplines, the notion of a measurement system (i.e. a tool used to collect, calculate, and report quantitative data) is well known and defined, whereas it is not as widely used in software engineering. In this paper we present a framework for developing custom measurement systems and its industrial evaluation in a software development unit within Ericsson. The results include the framework for designing measurement systems and its evaluation in real life projects at the company. The results show that with the help of ISO/IEC standards, measurement systems can be effectively used in software industry and that the presented framework improves the way of working with metrics. This paper contributes with the presentation of how automation of metrics collection and processing can be successfully introduced into a large organization and shows the benefits of it: increased efficiency of metrics collection, increased adoption of metrics in the organization, independence from individuals and standardized nomenclature for metrics in the organization.
  •  
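The measurement-system notion recurring throughout this list (a tool that collects, calculates and reports quantitative data, per ISO/IEC 15939) can be pictured as a small layered computation from base measures through derived measures to an indicator with decision criteria; the names and thresholds below are invented:

```python
# Layering of an ISO/IEC 15939-style measurement system:
# base measures -> derived measure -> indicator with decision criteria.
from dataclasses import dataclass

@dataclass
class MeasurementSystem:
    defects_open: int          # base measure, e.g. from the defect tracker
    test_weeks_left: float     # base measure, e.g. from the project plan

    def defect_burn_rate_needed(self) -> float:      # derived measure
        return self.defects_open / max(self.test_weeks_left, 0.1)

    def release_indicator(self) -> str:              # indicator + criteria
        rate = self.defect_burn_rate_needed()
        return "green" if rate < 10 else "yellow" if rate < 20 else "red"

ms = MeasurementSystem(defects_open=35, test_weeks_left=2.5)
print(ms.defect_burn_rate_needed(), ms.release_indicator())  # 14.0 yellow
```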
28.
  • Staron, Miroslaw, 1977, et al. (author)
  • A Key Performance Indicator Quality Model and Its Industrial Evaluation
  • 2016
  • In: IWSM Mensura 2016.
  • Conference paper (peer-reviewed), abstract:
    • Background: Modern software development companies increasingly rely on quantitative data in their decision-making for product releases, organizational performance assessment and monitoring of product quality. KPIs (Key Performance Indicators) are a critical element in the transformation of raw data (numbers) into decisions (indicators). Goal: The goal of the paper is to develop, document and evaluate a quality model for KPIs, addressing the research question of what characterizes a good KPI. In this paper we consider a KPI to be “good” when it is actionable and supports the organization in achieving its strategic goals. Method: We use an action research collaborative project with an infrastructure provider company and an automotive OEM to develop and evaluate the model. We analyze a set of KPIs used at both companies and verify whether the organization's perception of these evaluated KPIs is aligned with the KPI's assessment according to our model. Results: The results show that the model organizes good practices of KPI development and that it is easily used by the stakeholders to improve the quality of the KPIs or reduce the number of the KPIs. Conclusions: Using the KPI quality model provides the possibility to increase the effect of the KPIs in the organization and decreases the risk of wasting resources for collecting KPI data which cannot be used in practice.
  •  
29.
  • Staron, Miroslaw, 1977, et al. (author)
  • A Modeling Language for Specifying and Visualizing Measurement Systems for Software Metrics
  • 2009
  • In: Tampere University of Technology research report, Nordic workshop on MDE. - 1797-836X. - 9789521522123 ; 2009:5, s. 300-315
  • Conference paper (peer-reviewed), abstract:
    • Using metrics in software engineering usually entails developing custom measurement systems dedicated for particular stakeholders (project, product, and line managers). As processes for collecting, analyzing and presenting metrics are usually not the main goal of software development companies, these processes should be as efficient as possible, and so should be the development of measurement systems. In this paper we present a domain-specific modeling language for modeling and developing measurement systems which makes these processes efficient. The efficiency is achieved by automatically generating measurement systems from the models of metrics (used when identifying important metrics together with stakeholders). The language has been successfully applied in industry and its use led to significant productivity improvements when developing measurement systems at Ericsson. By providing the possibility of using either a formalized language in MS Visual Studio or an informal notation in MS PowerPoint (reducing the need for specialized training), the language contributed to spreading the metric culture in the organization.
  •  
30.
  • Staron, Miroslaw, 1977, et al. (author)
  • A portfolio of internal quality metrics for software architects
  • 2017
  • In: Lecture Notes in Business Information Processing. Software Quality. Complexity and Challenges of Software Engineering in Emerging Technologies. SWQD 2017 (Winkler D., Biffl S., Bergsmann J., eds.). - Cham : Springer. - 1865-1348. - 9783319494203
  • Conference paper (peer-reviewed), abstract:
    • Maintaining high quality software in the age of interconnected systems, systems of systems and Internet of things requires efficient management of the evolution of the software. Evolving the architecture of the software together with the evolution of the design is one of the key areas in maintaining the high quality. In this paper we present the results of our studies at Software Center (nine companies and five universities) with the goal to identify the main information needs and quality metrics for the role of software architects. As a result of our studies we can describe the most commonly used metrics in software engineering in general and in software architecting in particular – e.g. architectural stability or component coupling. We present these metrics and their interpretation (what should be done and why based on the values of metrics). The theoretical framing for this chapter is the ISO/IEC 15939 standard -- Systems and Software Engineering, Measurement Processes.
  •  
31.
  • Staron, Miroslaw, 1977, et al. (author)
  • A portfolio of internal quality metrics for software architects
  • 2016
  • In: Software Quality Days 2017.
  • Conference paper (peer-reviewed), abstract:
    • Maintaining high quality software in the age of interconnected systems, systems of systems and Internet of things requires efficient management of the evolution of the software. Evolving the architecture of the software together with the evolution of the design is one of the key areas in maintaining the high quality. In this paper we present the results of our studies at Software Center (nine companies and five universities) with the goal to identify the main information needs and quality metrics for the role of software architects. As a result of our studies we can describe the most commonly used metrics in software engineering in general and in software architecting in particular – e.g. architectural stability or component coupling. We present these metrics and their interpretation (what should be done and why based on the values of metrics). The theoretical framing for this chapter is the ISO/IEC 15939 standard -- Systems and Software Engineering, Measurement Processes.
  •  
32.
  • Staron, Miroslaw, 1977, et al. (author)
  • Classifying obstructive and non-obstructive code clones of Type I using simplified classification scheme – A Case Study
  • 2015
  • In: Advances in Software Engineering. - : Hindawi Limited. - 1687-8655 .- 1687-8663.
  • Journal article (peer-reviewed), abstract:
    • Code cloning is a part of many commercial and open-source development products. Multiple methods for detecting code clones have been developed and finding the clones is often used in modern quality assurance tools in industry. There is no consensus whether the detected clones are negative for the product, and therefore the detected clones are often left unmanaged in the product code base. In this paper we investigate how obstructive code clones of Type I (duplicated exact code fragments) are in large software systems from the perspective of the quality of the product after the release. We conduct a case study at Ericsson and three of its large products that handle mobile data traffic. We use a newly developed classification scheme which categorizes code clones according to their potential obstructiveness. We show how to use automated analogy-based classification to decrease the classification effort required to determine whether a clone pair should be refactored or remain untouched. The automated method makes it possible to classify 96% of the Type I clones (both algorithms and data declarations), leaving the remaining 4% for manual classification. The results show that cloning is common in the studied commercial software, but that only 1% of these clones are potentially obstructive, i.e. can jeopardize the quality of the product if left unmanaged.
  •  
33.
  • Staron, Miroslaw, 1977, et al. (author)
  • Consequences of Mispredictions of Software Reliability: A Model and its Industrial Evaluation
  • 2014
  • In: IWSM Mensura 2014.
  • Conference paper (other academic/artistic), abstract:
    • Predicting reliability of software under development is an important part of estimations in software engineering projects. In many organizations, as the goal is that software products are released with no known defects, the process of finding and removing defects correlates with the effort for software projects. Software development projects estimate the resources needed to design, develop, test and release software products, and the number of defects which have to be handled. In this paper we present a model for consequence analysis of inaccurate predictions of quality in software projects. The model is a result of multiple case studies and is evaluated at two companies. The model recognizes the most common mispredictions – e.g. over- and under-prediction, early- and late-predictions – and the combinations of these. The results from the industrial evaluation show that the consequences can be grouped according to under- and over-predictions and that the late- and early-predictions have the same consequences. The results show also that mispredicting the shape of the reliability curve has a significant consequence with regard to assessment of release readiness and resource planning.
  •  
34.
  • Staron, Miroslaw, 1977, et al. (author)
  • Dashboards for continuous monitoring of quality for software product under development
  • 2014
  • In: System Qualities and Software Architecture (SQSA). - Amsterdam : Elsevier. - 9780124171688
  • Book chapter (peer-reviewed), abstract:
    • This chapter contributes with a systematic overview of good examples of how dashboards are used to monitor quality of software products under development – both using multiple measures and a single indicator which combines quality and development progress. In this chapter we extract recommendations for building such dashboards for practitioners by exploring how three companies use dashboards for monitoring and controlling external and internal quality of large software products under development. The dashboards presented by each company contain a number of indicators each, and have different premises due to the domain of the product, its purpose and the organization. We describe a number of common principles behind a set of measures, which address the challenge of quantifying readiness to deliver of software products to their end customers. The experiences presented in this chapter come from multiple case studies at Ericsson, two studies at Volvo Car Corporation (VCC) and one at Saab Electronic Defense Systems in Sweden. All companies have a long experience with software development and have undergone a transition into Agile and Lean software development; however, the experience with these new paradigms ranges from two to five years depending on the company. The difference in experience provides an opportunity to observe that companies with longer experience tend to focus on using measures to support self-organized teams whereas companies with shorter experience tend to focus on using measures to communicate the status from teams to management.
  •  
35.
  •  
36.
  • Staron, Miroslaw, 1977, et al. (author)
  • Developing measurement systems: an industrial case study
  • 2010
  • In: Journal of Software Maintenance and Evolution: Research and Practice. - : Wiley. - 1532-0618 .- 1532-060X.
  • Journal article (peer-reviewed), abstract:
    • The process of measuring in software engineering has already been standardized in the ISO/IEC 15939 standard, where activities related to identifying, creating, and evaluating of measures are described. In the process of measuring software entities, however, an organization usually needs to create custom measurement systems, which are intended to collect, analyze, and present data for a specific purpose. In this paper, we present a proven industrial process for developing measurement systems including the artifacts and deliverables important for a successful deployment of measurement systems in industry. The process was elicited during a case study at Ericsson and had been used in the organization for over three years when the paper was written. The process is supported by a framework that simplifies the implementation of the measurement systems and shortens the time from the initial idea to a working measurement system by a factor of 5 compared with using a standard development process not tailored for measurement systems. Copyright © 2010 John Wiley & Sons, Ltd.
  •  
37.
  • Staron, Miroslaw, 1977, et al. (author)
  • Ensuring Reliability of Information Provided by Measurement Systems
  • 2009
  • In: Software Process and Product Measurement. - Berlin, Heidelberg : Springer Berlin Heidelberg. ; , s. 1-16
  • Conference paper (peer-reviewed), abstract:
    • Controlling the development of large and complex software is usually done in a quantitative manner using software metrics as the foundation for decision making. Large projects usually collect large amounts of metrics although they present only a few key ones for daily project, product, and organization monitoring. The process of collecting, analyzing and presenting the key information is usually supported by automated measurement systems. Since in this process there is a transition from a lot of information (data) to a small number of indicators (metrics with decision criteria), the usual question which arises during discussions with managers is whether the stakeholders can “trust” the indicators w.r.t. the correctness of information and its timeliness. In this paper we present a method for addressing this question by assessing information quality for ISO/IEC 15939-based measurement systems. The method is realized and used in measurement systems at one of the units of Ericsson. In the paper, we also provide a short summary of the evaluation of this method through its use at Ericsson.
  •  
38.
  • Staron, Miroslaw, 1977, et al. (author)
  • Ensuring Sustainability of Knowledge
  • 2020
  • In: Action Research in Software Engineering. ; , s. 153-168
  • Book chapter (other academic/artistic)
  •  
39.
  • Staron, Miroslaw, 1977, et al. (author)
  • Factors Determining Long-term Success of a Measurement Program: An Industrial Case Study
  • 2011
  • In: e-Informatica, Software Engineering Journal. - : Walter de Gruyter GmbH. - 1897-7979. ; 5:1, s. 7-23
  • Journal article (peer-reviewed), abstract:
    • Introducing measurement programs into organizations is a lengthy process affected by organizational and technical constraints. There exist several aspects that determine whether a measurement program has the chances of succeeding, like management commitment or the existence of proper tool support. Establishing a program, however, is only a part of the success. As organizations are dynamic entities, the measurement programs should constantly be maintained and adapted in order to cope with the changing needs of the organizations. In this paper we study one of the measurement programs at Ericsson AB in Sweden and as a result we identify factors determining successful adoption and use of the measurement program. The results of our research in this paper are intended to support quality managers and project managers in establishing and maintaining successful metrics programs.
  •  
40.
  • Staron, Miroslaw, 1977, et al. (author)
  • Identifying Implicit Architectural Dependencies using Measures of Source Code Change Waves
  • 2013
  • In: 39th Euromicro Conference Series on Software Engineering and Advanced Applications, SEAA 2013; Santander; Spain; 4 September 2013 through 6 September 2013. - 9780769550916 ; , s. 325-332
  • Conference paper (peer-reviewed), abstract:
    • The principles of Agile software development are increasingly used in large software development projects, e.g. using Scrum of Scrums or combining Agile and Lean development methods. When large software products are developed by self-organized, usually feature-oriented teams, there is a risk that architectural dependencies between software components become uncontrolled. In particular there is a risk that the prescriptive architecture models in form of diagrams are outdated and implicit architectural dependencies may become more frequent than the explicit ones. In this paper we present a method for automated discovery of potential dependencies between software components based on analyzing revision history of software repositories. The result of this method is a map of implicit dependencies which is used by architects in decisions on the evolution of the architecture. The software architects can assess the validity of the dependencies and can prevent unwanted component couplings and design erosion hence minimizing the risk of post-release quality problems. Our method was evaluated in a case study at one large product at Saab Electronic Defense Systems (Saab EDS) and one large software product at Ericsson AB.
  •  
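The revision-history analysis described above boils down to counting how often pairs of files change together; a minimal sketch with invented commit data (in practice the file sets would come from the version-control log):

```python
# Mine implicit dependencies: files that repeatedly change in the same
# commits are flagged as candidate couplings.
from collections import Counter
from itertools import combinations

commits = [
    {"radio.c", "radio.h", "scheduler.c"},
    {"radio.c", "scheduler.c"},
    {"ui.c"},
    {"radio.c", "scheduler.c", "ui.c"},
]

pair_counts = Counter()
for files in commits:
    pair_counts.update(combinations(sorted(files), 2))

THRESHOLD = 3   # minimum co-changes to report a suspected dependency
for (a, b), n in pair_counts.most_common():
    if n >= THRESHOLD:
        print(f"possible implicit dependency: {a} <-> {b} ({n} co-changes)")
```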
41.
  • Staron, Miroslaw, 1977, et al. (author)
  • Improving Completeness of Measurement Systems for Monitoring Software Development Workflows
  • 2013
  • In: Lecture Notes in Business Information Processing. - 1865-1348. - 9783642357015 ; 133, s. 230-243
  • Conference paper (peer-reviewed), abstract:
    • Monitoring and controlling of software projects executed according to Lean or Agile software development requires, in principle, continuous measurement and use of indicators to monitor development areas and/or identify problem areas. Indicators are a specific kind of measures with associated analysis models and decision criteria (ISO/IEC 15939). Indicating/highlighting problems in processes is often used in Lean SW development, and despite obvious benefits there are also dangers with improper use of indicators – using inadequate indicators can mislead the stakeholders towards sub-optimizations/erroneous decisions. In this paper we present a method for assessing completeness of information provided by measurement systems (i.e. both measures and indicators). The method is a variation of value stream mapping modeling with an application in a software development organization in the telecom domain. We also show the use of this method at one of the units of Ericsson where it was applied to provide stakeholders with an early warning system about upcoming problems with software quality.
  •  
42.
  • Staron, Miroslaw, 1977, et al. (author)
  • Improving Quality of Code Review Datasets – Token-Based Feature Extraction Method
  • 2021
  • In: Lecture Notes in Business Information Processing. - Cham : Springer International Publishing. - 1865-1356 .- 1865-1348. ; 404, s. 81-93
  • Conference paper (peer-reviewed), abstract:
    • Machine learning is used increasingly frequently in software engineering to automate tasks and improve the speed and quality of software products. One of the areas where machine learning is starting to be used is the analysis of software code. The goal of this paper is to evaluate a new method for creating machine learning feature vectors, based on the content of a line of code. We designed a new feature extraction algorithm and evaluated it in an industrial case study. Our results show that using the new feature extraction technique improves the overall performance in terms of MCC (Matthews Correlation Coefficient) by 0.39 – from 0.31 to 0.70 – while reducing the precision by 0.05. The implication of this is that we can significantly improve overall prediction accuracy for both true positives and true negatives. This increases practitioners' trust in the predictions and contributes to deeper adoption of the method in practice.
  •  
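A guess at what line-content, token-based feature extraction could look like: count the tokens of each syntactic class on a line. The token classes and the regular-expression tokenizer are simplified stand-ins, not the paper's method:

```python
# Token-class feature vector for one line of code.
import re
from collections import Counter

KEYWORDS = {"if", "else", "for", "while", "return", "int", "float"}

def token_class_vector(line: str) -> Counter:
    vec = Counter()
    for tok in re.findall(r"[A-Za-z_]\w*|\d+|[^\sA-Za-z_0-9]", line):
        if tok in KEYWORDS:
            vec["keyword"] += 1
        elif tok.isdigit():
            vec["number"] += 1
        elif tok[0].isalpha() or tok[0] == "_":
            vec["identifier"] += 1
        else:
            vec["symbol"] += 1
    return vec

print(token_class_vector("if (count > 10) return total;"))
# Counter({'symbol': 4, 'keyword': 2, 'identifier': 2, 'number': 1})
```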
43.
  • Staron, Miroslaw, 1977, et al. (author)
  • Industrial Experiences from Evolving Measurement Systems into Self-Healing Systems for Improved Availability
  • 2018
  • In: Software, Practice & Experience. - : Wiley. - 0038-0644 .- 1097-024X. ; 48:3
  • Journal article (peer-reviewed), abstract:
    • Automated measurement programs are an efficient way of collecting, processing and visualizing measures in large software development companies. The number of measurements in these programs is usually large, which is caused by the diversity of the needs of the stakeholders. In this paper we present the application of self-healing concepts to assure the availability of measurements to stakeholders without the need for effort-intensive and costly manual interventions of the operators. We study the measurement infrastructure at one of the development units of a large infrastructure provider. In this paper we present how the MAPE-K model was instantiated in a simplistic manner in order to reduce the need for manual intervention in the operation of measurement systems. Based on the experiences from the two cases studied in this paper we show how an evolution towards self-healing measurement systems is done both with a dedicated failure taxonomy and with effective, straightforward handling of the most common errors in the execution. The mechanisms studied and presented in this paper show that self-healing provides significant improvements to the operation of the measurement program and reduces the need for daily oversight of the measurement systems by an operator.
  •  
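For readers unfamiliar with MAPE-K, the loop instantiated in the paper above has this general shape: Monitor, Analyze, Plan and Execute over a shared Knowledge base. The skeleton below is a generic illustration, not the paper's implementation; the failure check and repair action are placeholders, and `system` is a hypothetical object:

```python
# Generic MAPE-K loop skeleton for a measurement system.
import time

KNOWLEDGE = {"max_data_age_h": 24}      # shared knowledge base

def monitor(system):
    return {"data_age_h": system.data_age_hours()}   # hypothetical API

def analyze(symptoms):
    return symptoms["data_age_h"] > KNOWLEDGE["max_data_age_h"]

def plan(system):
    return system.rerun_data_collection                # hypothetical API

def execute(action):
    action()

def self_healing_loop(system, period_s=3600):
    while True:
        symptoms = monitor(system)
        if analyze(symptoms):           # stale data: a common failure mode
            execute(plan(system))
        time.sleep(period_s)
```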
44.
  • Staron, Miroslaw, 1977, et al. (author)
  • Industrial Self-Healing Measurement Systems
  • 2014
  • In: Continuous Software Engineering. ; , s. 183-200
  • Book chapter (other academic/artistic), abstract:
    • Automated measurement programs (i.e., placeholders for a large number of measurement systems) are an efficient way of collecting, processing, and visualizing measurements in large software development companies. The measurement programs rely both on the software for data collection, analysis, and visualization—measurement systems—and on humans for reporting of the data and for design and maintenance of the measurement systems. As the outcome of the measurement program—visualized measurement data—is an important input for decision making in the companies, it needs to be trustworthy and up to date. In this paper we present an experience report on the development, deployment, and use of a self-healing measurement systems infrastructure at Ericsson AB. The infrastructure has been in use for a number of years and handles over 4,000 measurement systems in a fully automated way. Monitoring and self-healing of the infrastructure lead to the availability of measurement systems 24/7 and reduce the costs of managing them.
  •  
45.
  • Staron, Miroslaw, 1977, et al. (author)
  • Measurement-as-a-Service - A New Way of Organizing Measurement Programs in Large Software Development Companies
  • 2015
  • In: International Conference on Software Measurement (Mensura).
  • Conference paper (peer-reviewed), abstract:
    • Modern software development companies focus on their primary business objectives, delivering customer value and customer satisfaction, which often leads to prioritization of core business areas over such areas as measurement. Although the companies recognize the need and importance of software measurement, they often do not have the competence and/or time to focus on software measurement. In this paper we address the challenge of optimizing the measurement processes in modern companies by using cloud computing and by providing measurement (process) as a service for the core business of the companies. Similar to the concept of Software-as-a-Service, we define the concept Measurement-as-a-Service and describe how to organize a measurement program according to this definition. The Measurement-as-a-Service concept is well-aligned with measurement programs developed according to ISO/IEC 15939 and can help the companies to increase the benefits obtained from the efficient use of metrics.
  •  
46.
  • Staron, Miroslaw, 1977, et al. (author)
  • Measuring and Visualizing Code Stability - A Case Study at Three Companies
  • 2013
  • In: IWSM/Mensura, Conference proceedings, IEEE. - : IEEE.
  • Conference paper (peer-reviewed), abstract:
    • Monitoring performance of software development organizations can be achieved from a number of perspectives – e.g. using such tools as Balanced Scorecards or corporate dashboards. However, automated monitoring through quantitative metrics usually gives the best cost-benefit trade-off. In this paper we present results from a study on using code stability indicators as a tool for monitoring product stability and organizational performance, conducted at three different software development companies – Ericsson AB, Saab AB Electronic Defense Systems (Saab) and Volvo Group Trucks Technology (Volvo Group). The results show that visualizing the source code changes using heatmaps and linking these visualizations to defect inflow profiles provide indicators of how stable the product under development is and whether quality assurance efforts should be directed to specific parts of the product. Observing the indicator and making decisions based on its visualization leads to shorter feedback loops between development and test, thus resulting in lower development costs, shorter lead time and increased quality. The industrial case study in the paper shows that the indicator and its visualization can show whether the modifications of software products are focused on parts of the code base or are spread widely throughout the product.
  •  
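The heatmap visualization mentioned above can be mocked up quickly with matplotlib; the file names and change counts are invented:

```python
# Code-stability heatmap: changed lines per file and week.
import matplotlib.pyplot as plt
import numpy as np

files = ["parser.c", "router.c", "ui.c"]
weeks = ["w1", "w2", "w3", "w4"]
changes = np.array([[120, 80, 10, 0],     # parser.c settles down
                    [30, 45, 60, 90],     # router.c keeps churning
                    [0, 5, 0, 2]])        # ui.c is stable

fig, ax = plt.subplots()
im = ax.imshow(changes, cmap="hot")
ax.set_xticks(range(len(weeks)))
ax.set_xticklabels(weeks)
ax.set_yticks(range(len(files)))
ax.set_yticklabels(files)
fig.colorbar(im, label="changed lines")
plt.show()
```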
47.
  • Staron, Miroslaw, 1977, et al. (author)
  • MeSRAM - A method for assessing robustness of measurement programs in large software development organizations and its industrial evaluation
  • 2016
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212. ; 113:March, s. 76-100
  • Journal article (peer-reviewed), abstract:
    • Measurement programs in large software development organizations contain a large number of indicators, base and derived measures to monitor products, processes and projects. The diversity and the number of these measures cause the measurement programs to become large, combining multiple needs, measurement tools and organizational goals. For the measurement program to effectively support the organization's goals, it should be scalable, automated, standardized and flexible – i.e. robust. In this paper we present a method for assessing the robustness of measurement programs. The method is based on a robustness model which has been developed in collaboration between seven companies and a university. The purpose of the method is to support the companies in optimizing the value obtained from the measurement programs and their cost. We evaluated the method at the seven companies; the results from applying the method to each company quantified the robustness of their programs, reflected the real-world status of the programs and pinpointed strengths and improvement areas of the programs. © 2015 Elsevier Inc. All rights reserved.
  •  
48.
  • Staron, Miroslaw, 1977, et al. (author)
  • MetricsCloud: Scaling-Up Metrics Dissemination in Large Organizations
  • 2014
  • In: Advances in Software Engineering. - : Hindawi Limited. - 1687-8655 .- 1687-8663. ; 2014
  • Journal article (peer-reviewed), abstract:
    • The evolving software development practices in modern software development companies often bring in more empowerment to software development teams. The empowered teams change the way in which software products and projects are measured and how the measures are communicated. In this paper we address the problem of dissemination of measurement information by designing a measurement infrastructure in a cloud environment. The described cloud system (MetricsCloud) utilizes file-sharing as the underlying mechanism to disseminate the measurement information at the company. Based on the types of measurement systems identified in this paper MetricsCloud realizes a set of synchronization strategies to fulfill the needs specific for these types. The results from migrating traditional, web-based, folder-sharing distribution of measurement systems to the cloud show that this type of measurement dissemination is flexible, reliable, and easy to use.
  •  
49.
  •  
50.
  • Staron, Miroslaw, 1977, et al. (author)
  • Predicting Monthly Defect Inflow in Large Software Projects – An Industrial Case Study
  • 2007
  • In: Industry Track Proceedings of the 27th International Symposium on Software Reliability Engineering. ; , s. 23-30
  • Conference paper (peer-reviewed), abstract:
    • One of the main aspects considered during planning of large software projects is the defect inflow. The curve of the defect inflow shows what can potentially happen if no countermeasures are taken in the project. The curve can support project and quality managers in making decisions in order to meet some of the required quality objectives. This paper presents a method for constructing prediction models of monthly defect inflow for the duration of an entire project. The projects for which we construct the models are large software projects at Ericsson, structured around a significant number of work packages that incrementally deliver new functionality for embedded software in a network node. The presented method is based on using similarity of projects for estimations and regression techniques, which result in equations describing defect inflow in a project. The independent variable in the equations is the month of the project. The method results in prediction models which we evaluate in two ongoing projects by comparing them to existing practices at Ericsson and alternative ways of constructing the models. The method is intended to be simple and allows adjusting the models as the project progresses. The results from these two projects show that the accuracy of predictions is higher for the models developed using our method than for such models as the Rayleigh model.
  •  
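The Rayleigh model that the paper above uses as a baseline can be fitted to a monthly defect-inflow series with a standard curve fit; the inflow numbers below are invented:

```python
# Fit a Rayleigh curve to monthly defect inflow.
import numpy as np
from scipy.optimize import curve_fit

def rayleigh(t, K, sigma):
    # Expected defects in month t, for total volume K and peak near sigma.
    return K * (t / sigma**2) * np.exp(-t**2 / (2 * sigma**2))

months = np.arange(1, 11)
inflow = np.array([5, 14, 25, 30, 28, 22, 15, 9, 5, 2])  # defects per month

(K, sigma), _ = curve_fit(rayleigh, months, inflow, p0=(150, 4))
print(f"fitted total ~{K:.0f} defects, peak near month {sigma:.1f}")
```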
Type of publication
conference paper (38)
journal article (13)
book chapter (5)
book (1)
Type of content
peer-reviewed (49)
other academic/artistic (8)
Author/Editor
Staron, Miroslaw, 19 ... (55)
Meding, Wilhelm (45)
Hansson, Jörgen, 197 ... (11)
Meding, Wilhelm, 197 ... (6)
Antinyan, Vard, 1984 (5)
Hebig, Regina (4)
Nilsson, Martin, 197 ... (4)
Rana, Rakesh, 1985 (4)
Hebig, Regina, 1984 (3)
Al Sabbagh, Khaled, ... (3)
Ochodek, Miroslaw (3)
Söder, Ola (3)
Staron, Miroslaw (2)
Meding, Wilhelm, 197 ... (2)
Al-Sabbagh, Kaled Wa ... (2)
Henriksson, Anders (2)
Ochodek, M. (2)
Berbyuk Lindström, N ... (1)
Karlsson, Göran (1)
Horkoff, Jennifer, 1 ... (1)
Nilsson, Martin (1)
Nilsson, Sven (1)
Nilsson, Christer (1)
Feldt, Robert, 1972 (1)
Berger, Christian, 1 ... (1)
Hansson, Jörgen (1)
Tichy, Matthias, 197 ... (1)
Knauss, Eric, 1977 (1)
Bosch, Jan, 1967 (1)
Henriksson, A (1)
Derehag, Jesper (1)
Runsten, Mattias (1)
Wikström, Erik (1)
Österström, P. (1)
Wikström, E. (1)
Wranker, J. (1)
Henriksson, Anders, ... (1)
Österström, Per, 197 ... (1)
Antinyan, Vard (1)
Österström, Per (1)
Österströ, Per, 1970 (1)
Bergenwall, Henric, ... (1)
Wranker, Johan, 1970 (1)
Henriksson, Anders, ... (1)
Nilsson, Agneta, 196 ... (1)
Koutsikouri, Dina, 1 ... (1)
Calikli, Gul (1)
Johansson, Ludvig (1)
Söder, Ola, 1977 (1)
Castell, Magnus, 197 ... (1)
University
University of Gothenburg (49)
Chalmers University of Technology (23)
University of Skövde (3)
Karolinska Institutet (1)
Language
English (57)
Research subject (UKÄ/SCB)
Natural sciences (54)
Engineering and Technology (4)
Social Sciences (2)
