SwePub
Search the SwePub database



Search: WFRF:(Staron Miroslaw)

  • Result 1-50 of 284
1.
  • Al Sabbagh, Khaled, 1987, et al. (author)
  • Selective Regression Testing based on Big Data: Comparing Feature Extraction Techniques
  • 2020
  • In: IEEE Software. - 1937-4194 .- 0740-7459. ; , s. 322-329
  • Conference paper (peer-reviewed)abstract
    • Regression testing is a necessary activity in continuous integration (CI) since it provides confidence that modified parts of the system are correct at each integration cycle. CI provides large volumes of data which can be used to support regression testing activities. By using machine learning, patterns about faulty changes in the modified program can be induced, allowing test orchestrators to make inferences about test cases that need to be executed at each CI cycle. However, one challenge in using learning models lies in finding a suitable way of characterizing source code changes while preserving important information. In this paper, we empirically evaluate the effect of three feature extraction algorithms on the performance of an existing ML-based selective regression testing technique. We designed and performed an experiment to empirically investigate the effect of Bag of Words (BoW), Word Embeddings (WE), and content-based feature extraction (CBF). We used stratified cross-validation on the space of features generated by the three FE techniques and evaluated the performance of three machine learning models using the precision and recall metrics. The results from this experiment showed a significant difference between the models' precision and recall scores, suggesting that the BoW-fed model outperforms the other two models with respect to precision, whereas the CBF-fed model outperforms the rest with respect to recall. (An illustrative sketch follows this entry.)
  •  
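To make the comparison concrete: a minimal sketch of comparing feature-extraction techniques under stratified cross-validation, assuming scikit-learn. The commit texts and labels are toy stand-ins, and a TF-IDF vectorizer stands in for the paper's word-embedding and content-based extractors purely to keep the sketch self-contained.

```python
# Hedged sketch: compare feature-extraction techniques for predicting whether
# a code change leads to test failures, using stratified cross-validation.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.tree import DecisionTreeClassifier

changes = ["fix null check in parser", "add cache eviction policy",
           "refactor logging calls", "fix off-by-one in loop"]  # toy commit texts
labels = [1, 0, 0, 1]  # 1 = change later caused a test failure (invented)

for name, vectorizer in [("BoW", CountVectorizer()), ("TF-IDF", TfidfVectorizer())]:
    X = vectorizer.fit_transform(changes)
    folds = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
    scores = cross_validate(DecisionTreeClassifier(random_state=0), X, labels,
                            cv=folds, scoring=("precision", "recall"))
    print(f"{name}: precision={scores['test_precision'].mean():.2f}, "
          f"recall={scores['test_recall'].mean():.2f}")
```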
2.
  • Jarzębowicz, Aleksander, et al. (author)
  • Preface
  • 2024
  • In: Conference proceedings-Software, System, and Service Engineering. - : Springer. - 9783031510748 - 9783031510755 ; , s. v-vi
  • Conference paper (other academic/artistic)
  •  
3.
  • Ochodek, Miroslaw, et al. (author)
  • LegacyPro: A DNA-inspired method for identifying process legacies in software development organizations
  • 2020
  • In: IEEE Software. - 0740-7459 .- 1937-4194. ; 37:6, s. 76-85
  • Journal article (peer-reviewed)abstract
    • Changing a software development process is a tricky task—the bigger the change, the trickier it gets. Large companies have process inertia: a process change takes time, happens over multiple releases, and proceeds at a different pace in different parts of the organization. Unfortunately, there are no effective tools available that help determine whether an organization has really adopted a proclaimed process change, and to what extent it is making progress towards the desired state. This paper presents a novel method for determining the factual adoption of new processes in software R&D organizations. We use a DNA-inspired analysis (motifs) to categorize parts of defect-inflow profiles and find similarities between projects. We applied the method to analyze projects from a large infrastructure provider and from open source, and show how the evolution of processes can be quantified. (An illustrative sketch follows this entry.)
  •  
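The motif idea can be illustrated compactly: discretize each defect-inflow profile into a symbol string (as in SAX-style discretization) and look for substrings that two projects share. This is a hedged sketch; the breakpoints, window length, and toy profiles are illustrative assumptions, not the paper's calibration.

```python
# Illustrative sketch: symbolize defect-inflow profiles and find shared motifs.
import numpy as np

def to_symbols(series, bins=("a", "b", "c")):
    """Map each value to a symbol by z-normalized magnitude (3 bins)."""
    z = (series - series.mean()) / (series.std() or 1.0)
    edges = [-0.43, 0.43]  # breakpoints for three equiprobable Gaussian bins
    return "".join(bins[np.searchsorted(edges, v)] for v in z)

def shared_motifs(s1, s2, w=4):
    """Return all windows of length w that occur in both symbol strings."""
    subs = lambda s: {s[i:i + w] for i in range(len(s) - w + 1)}
    return subs(s1) & subs(s2)

inflow_a = np.array([3, 4, 9, 12, 8, 5, 2, 1, 4, 9, 11, 7], dtype=float)
inflow_b = np.array([2, 2, 8, 11, 9, 4, 3, 2, 5, 10, 10, 6], dtype=float)
print(shared_motifs(to_symbols(inflow_a), to_symbols(inflow_b)))
```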
4.
  • Ochodek, Miroslaw, et al. (author)
  • Mining Task-Specific Lines of Code Counters
  • 2023
  • In: IEEE Access. - 2169-3536. ; 11, s. 100218-100233
  • Journal article (peer-reviewed)abstract
    • Context: Lines of code (LOC) is a fundamental software code measure that is widely used as a proxy for software development effort or as a normalization factor in many other software-related measures (e.g., defect density). Unfortunately, it is not clear which lines of code should be counted: all of them, or only some, depending on the project context and the task at hand? Objective: To design a generator of task-specific LOC measures and their counters, mined directly from data, that optimize the correlation between the LOC measures and the variables they proxy for (e.g., code-review duration). Method: We use Design Science Research as our research methodology to build and validate a generator of task-specific LOC measures and their counters. The generated LOC counters have the form of binary decision trees inferred from historical data using Genetic Programming. The proposed tool was validated on three tasks, i.e., mining LOC measures to proxy for code readability, the number of assertions in unit tests, and code-review duration. Results: Task-specific LOC measures showed a "strong" to "very strong" negative correlation with the code-readability score (Kendall's τ ranging from -0.83 to -0.76), compared to a "weak" to "strong" negative correlation for the best of the standard LOC measures (τ ranging from -0.36 to -0.13). For the problem of proxying for the number of assertions in unit tests, correlation coefficients were also higher for task-specific LOC measures by ca. 11% to 21% (τ ranged from 0.31 to 0.34). Finally, task-specific LOC measures showed a stronger correlation with code-review duration than the best of the standard LOC measures (τ = 0.31, 0.36, and 0.37 compared to 0.11, 0.08, and 0.16, respectively). Conclusions: Our study shows that it is possible to mine task-specific LOC counters from historical datasets using Genetic Programming. Task-specific LOC measures obtained in this way show stronger correlations with the variables they proxy for than the standard LOC measures. (An illustrative sketch follows this entry.)
  •  
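The evaluation step (correlating a candidate LOC counter with the variable it should proxy for) is easy to reproduce in outline. A minimal sketch assuming SciPy; the LOC variants and review durations are invented numbers.

```python
# Hedged sketch: score candidate LOC counters by Kendall's tau against the
# variable they should proxy for (here: code-review duration, invented data).
from scipy.stats import kendalltau

review_minutes = [12, 30, 45, 7, 22, 60]      # the proxied variable
loc_all = [100, 250, 300, 80, 150, 500]       # counter 1: every line
loc_non_blank = [80, 210, 290, 40, 140, 480]  # counter 2: a task-specific variant

for name, loc in [("all lines", loc_all), ("non-blank", loc_non_blank)]:
    tau, p = kendalltau(loc, review_minutes)
    print(f"{name}: tau={tau:.2f} (p={p:.3f})")
```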
5.
  • Ochodek, Miroslaw, et al. (author)
  • On Identifying Similarities in Git Commit Trends—A Comparison Between Clustering and SimSAX
  • 2020
  • In: SWQD 2020: Software Quality: Quality Intelligence in Software and Systems Engineering. - Cham : Springer. - 1865-1348 .- 1865-1356. - 9783030355104
  • Conference paper (peer-reviewed)abstract
    • Software products evolve increasingly fast as markets continuously demand new features and agility to customers' needs. This evolution of products triggers an evolution of software development practices. Compared to classical methods, where products were developed in projects, contemporary methods for continuous integration, delivery, and deployment develop products as part of continuous programs. In this context, software architects, designers, and quality engineers need to understand how the processes evolve over time, since there is no natural start and stop of projects. For example, they need to know how similar two iterations of the same program are, or how similar two development programs are. In this paper, we compare three methods for calculating the degree of similarity between projects by comparing their Git commit series. We test three approaches—the DNA-motifs-inspired …
  •  
6.
  • Ochodek, Miroslaw, et al. (author)
  • Recognizing lines of code violating company-specific coding guidelines using machine learning: A Method and Its Evaluation
  • 2020
  • In: Empirical Software Engineering. - : Springer Science and Business Media LLC. - 1382-3256 .- 1573-7616. ; 25, s. 220-265
  • Journal article (peer-reviewed)abstract
    • Software developers in big and medium-sized companies work with millions of lines of code in their codebases. Assuring the quality of this code has shifted from simple defect management to proactive assurance of internal code quality. Although static code analysis and code reviews have been at the forefront of research and practice in this area, code reviews are still an effort-intensive and interpretation-prone activity. The aim of this research is to support code reviews by automatically recognizing company-specific code guideline violations in large-scale, industrial source code. In our action research project, we constructed a machine-learning-based tool for code analysis where software developers and architects in big and medium-sized companies can use a few examples of source code lines violating code/design guidelines (up to 700 lines of code) to train decision-tree classifiers to find similar … (An illustrative sketch follows this entry.)
  •  
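The core training step can be prototyped in a few lines: vectorize example lines as bags of tokens and fit a decision-tree classifier on a handful of labeled violations. An illustrative sketch assuming scikit-learn; the guideline, example lines, and tokenizer are assumptions, and the paper's actual pipeline is considerably richer.

```python
# Hedged sketch: learn to flag lines violating a (hypothetical) guideline
# "no raw malloc/strcpy in C++ code" from a few labeled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

lines = ["int* x = malloc(10);",
         "auto p = std::make_unique<T>();",
         "char buf[SIZE]; strcpy(buf, s);",
         "std::string s2 = s;"]
violates = [1, 0, 1, 0]  # labels provided by a developer

vec = CountVectorizer(token_pattern=r"[A-Za-z_]+|\S")  # words or single symbols
clf = DecisionTreeClassifier(random_state=0)
clf.fit(vec.fit_transform(lines), violates)
print(clf.predict(vec.transform(["strcpy(dst, src);"])))  # likely [1]
```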
7.
  • Ochodek, Miroslaw, 1980, et al. (author)
  • Using Machine Learning to Design a Flexible LOC Counter
  • 2017
  • In: Workshop on Machine Learning Techniques for Software Quality Evaluation. - : IEEE. - 9781509065974
  • Conference paper (peer-reviewed)abstract
    • Background: The results of counting the size of programs in terms of Lines-of-Code (LOC) depend on the rules used for counting (i.e., the definition of which lines should be counted). In the majority of measurement tools, the rules are statically coded in the tool, and the users of the measurement tools do not know which lines were counted and which were not. Goal: The goal of our research is to investigate how to use machine learning to teach a measurement tool which lines should be counted and which should not. Our interest is to identify which parameters of the learning algorithm can be used to classify lines to be counted. Method: Our research is based on the design science research methodology, where we construct a measurement tool based on machine learning and evaluate it on open source programs. As a training set, we use industry professionals' classifications of which lines should be counted. Results: The results show that classifying lines as to be counted or not has an average accuracy between 0.90 and 0.99 measured as the Matthews Correlation Coefficient, and between 95% and nearly 100% measured as the percentage of correctly classified lines. Conclusions: Based on the results we conclude that using machine learning algorithms as the core of modern measurement instruments has large potential and should be explored further. (An illustrative sketch follows this entry.)
  •  
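Evaluating such a line classifier against the professionals' judgment with the Matthews Correlation Coefficient takes one scikit-learn call, as this sketch with invented labels shows.

```python
# Minimal sketch: MCC between human counting decisions and tool predictions.
from sklearn.metrics import matthews_corrcoef

human = [1, 1, 0, 0, 1, 0, 1, 1]  # professional: should the line be counted?
tool  = [1, 1, 0, 1, 1, 0, 1, 1]  # classifier's prediction
print(matthews_corrcoef(human, tool))  # 1.0 would mean perfect agreement
```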
8.
  •  
9.
  •  
10.
  •  
11.
  •  
12.
  • 7th International Workshop on Automotive System/Software Architecture (WASA 2021)
  • 2021
  • Editorial proceedings (other academic/artistic)abstract
    • This volume contains the papers presented at the 7th International Workshop on Automotive System/Software Architecture (WASA 2021) held on March 22, 2021, in Stuttgart, Germany. WASA was organized as part of the 18th IEEE International Conference on Software Architecture (ICSA 2021), the premier software architecture conference. Due to the worldwide SARS-CoV-2 pandemic, the main conference and the workshop were hosted virtually.
  •  
13.
  •  
14.
  • Abrahao, Silvia, et al. (author)
  • Modeling and Architecting of Complex Software Systems
  • 2024
  • In: IEEE SOFTWARE. - 0740-7459 .- 1937-4194. ; 41:3, s. 76-79
  • Journal article (peer-reviewed)abstract
    • This edition of the "Practitioners' Digest" covers recent papers on novel approaches and tools to assist developers in modeling and architecting software systems from two conferences: the 26th ACM/IEEE International Conference on Model Driven Engineering Languages and Systems (MODELS) and the 20th IEEE International Conference on Software Architecture (ICSA). Feedback or suggestions are welcome. Also, if you try or adopt any of the practices included in the column, please send us and the authors of the paper(s) a note about your experiences.
  •  
15.
  • Abrahao, Silvia, et al. (author)
  • Open Source Software: Communities and Quality
  • 2023
  • In: IEEE Software. - 1937-4194 .- 0740-7459. ; 40:4, s. 96-99
  • Journal article (peer-reviewed)abstract
    • This edition of the Practitioner's Digest features recent papers on open source software related to toxicity in open source discussions, newcomers in open source projects, quality of Ansible scripts, code review practices, orphan vulnerabilities in open source software, and the relationship between community and design smells.
  •  
16.
  •  
17.
  • Accelerating digital transformation : 10 years of software center
  • 2022
  • Editorial collection (other academic/artistic)abstract
    • This book celebrates the 10-year anniversary of Software Center (a collaboration between 18 European companies and five Swedish universities) by presenting some of the most impactful and relevant journal and conference papers that researchers in the center have published over the last decade. The book is organized around the five themes around which research in Software Center is organized, i.e. Continuous Delivery, Continuous Architecture, Metrics, Customer Data and Ecosystems Driven Development, and AI Engineering. The focus of the Continuous Delivery theme is to help companies continuously build high-quality products with the right degree of automation. The Continuous Architecture theme addresses challenges that arise when balancing the need for architectural quality against more agile ways of working with shorter development cycles. The Metrics theme studies and provides insights to understand, monitor and improve software processes, products and organizations. The fourth theme, Customer Data and Ecosystem Driven Development, helps companies make sense of the vast amounts of data that are continuously collected from products in the field. Finally, the AI Engineering theme addresses the challenge that many companies struggle with in terms of deploying machine- and deep-learning models in industrial contexts with production quality. Each theme has its own part in the book, and each part has an introduction chapter followed by a carefully selected reprint of the most important papers from that theme. This book mainly aims at researchers and advanced professionals in the areas of software engineering who would like to get an overview of the achievements made in various topics relevant for industrial large-scale software development and management – and to see how research benefits from a close cooperation between industry and academia.
  •  
18.
  •  
19.
  • Al Mamun, Md Abdullah, 1982, et al. (author)
  • Evolution of technical debt: An exploratory study
  • 2019
  • In: CEUR Workshop Proceedings. - : CEUR-WS. - 1613-0073. ; 2476, s. 87-102
  • Conference paper (peer-reviewed)abstract
    • Context: Technical debt is known to impact the maintainability of software. As source code files grow in size, maintainability becomes more challenging. Therefore, it is expected that the density of technical debt in larger files would be reduced for the sake of maintainability. Objective: This exploratory study investigates whether a newly introduced metric, 'technical debt density trend', helps to better understand and explain the evolution of technical debt. The 'technical debt density trend' metric is the slope of the line through two successive 'technical debt density' measures plotted against the 'lines of code' values of two consecutive revisions of a source code file. Method: This study used 11,822 commits or revisions of 4,013 Java source files from 21 open source projects. For the technical debt measure, the SonarQube tool is used with 138 code smells. Results: This study finds that the 'technical debt density trend' metric has interesting characteristics that make it particularly attractive for understanding the pattern of accrual and repayment of technical debt, because it breaks a technical debt measure down into multiple components; e.g., 'technical debt density' can be broken down into two components showing the mean density of revisions that accrue technical debt and the mean density of revisions that repay it. The use of the 'technical debt density trend' metric helps us understand the evolution of technical debt with greater insight. (An illustrative sketch follows this entry.)
  •  
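As described, the metric reduces to a two-point slope. A worked example with invented numbers:

```python
# Worked example of the 'technical debt density trend' as described above:
# the slope between two successive (LOC, TD density) points of one file.
def td_density_trend(loc_prev, dens_prev, loc_curr, dens_curr):
    """Slope of TD density with respect to LOC between two revisions."""
    if loc_curr == loc_prev:
        return 0.0  # assumption: treat an unchanged LOC count as a flat trend
    return (dens_curr - dens_prev) / (loc_curr - loc_prev)

# A file grew from 200 to 240 LOC while density rose from 0.050 to 0.055:
print(td_density_trend(200, 0.050, 240, 0.055))  # 0.000125 -> debt accruing
```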
20.
  • Al Mamun, Md Abdullah, 1982, et al. (author)
  • Improving Code Smell Predictions in Continuous Integration by Differentiating Organic from Cumulative Measures
  • 2019
  • In: The Fifth International Conference on Advances and Trends in Software Engineering. - 2519-8394. - 9781510883741 ; , s. 62-71
  • Conference paper (peer-reviewed)abstract
    • Continuous integration and deployment enable quick innovation cycles of software and systems through incremental releases of a product within short periods of time. If software qualities can be predicted for the next release, quality managers can plan ahead with resource allocation for issues of concern. Cumulative metrics are observed to have much higher correlation coefficients than non-cumulative metrics. Given this difference, this study investigates how metrics of the two categories differ in the correctness of predicting code smells, an internal software quality. This study considers 12 metrics from each measurement category, and 35 code smells collected from 36,217 software revisions (commits) of 242 open source Java projects. We build 8,190 predictive models and evaluate them to determine how the measurement categories of predictors and targets affect model accuracy in predicting code smells. To further validate our approach, we compared our results with Principal Component Analysis (PCA), a statistical procedure for dimensionality reduction. The results of the study show that, within the context of continuous integration, non-cumulative metrics as predictors build better predictive models with respect to model accuracy than cumulative metrics. When the results are compared with models built from extracted PCA components, we found better results using our approach. (An illustrative sketch follows this entry.)
  •  
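The distinction between the two categories is essentially that between a running total and its per-revision differences, as this tiny sketch with invented smell counts shows.

```python
# Minimal sketch: a cumulative metric series over commits vs. its organic
# (non-cumulative) per-commit deltas.
cumulative_smells = [10, 12, 12, 15, 14]  # running total after each commit
organic = [b - a for a, b in zip(cumulative_smells, cumulative_smells[1:])]
print(organic)  # [2, 0, 3, -1]: what each commit actually added or removed
```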
21.
  • Al Sabbagh, Khaled, 1987, et al. (author)
  • A classification of code changes and test types dependencies for improving machine learning based test selection
  • 2021
  • In: SIGPLAN Notices (ACM Special Interest Group on Programming Languages). - New York, NY, USA : ACM. - 0730-8566. ; , s. 40-49
  • Conference paper (peer-reviewed)abstract
    • Machine learning has been increasingly used to solve various software engineering tasks. One example of their usage is in regression testing, where a classifier is built using historical code commits to predict which test cases require execution. In this paper, we address the problem of how to link specific code commits to test types to improve the predictive performance of learning models in improving regression testing. We design a dependency taxonomy of the content of committed code and the type of a test case. The taxonomy focuses on two types of code commits: changing memory management and algorithm complexity. We reviewed the literature, surveyed experienced testers from three Swedish-based software companies, and conducted a workshop to develop the taxonomy. The derived taxonomy shows that memory management code should be tested with tests related to performance, load, soak, stress, volume, and capacity; the complexity changes should be tested with the same dedicated tests and maintainability tests. We conclude that this taxonomy can improve the effectiveness of building learning models for regression testing.
  •  
22.
  • Al Sabbagh, Khaled, 1987, et al. (author)
  • Improving Data Quality for Regression Test Selection by Reducing Annotation Noise
  • 2020
  • In: Proceedings - 46th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2020. ; , s. 191-194
  • Conference paper (peer-reviewed)abstract
    • Big data and machine learning models have been increasingly used to support software engineering processes and practices. One example is the use of machine learning models to improve test case selection in continuous integration. However, one of the challenges in building such models is the identification and reduction of noise that often comes with large data sets. In this paper, we present a noise reduction approach that deals with the problem of contradictory training entries. We empirically evaluate the effectiveness of the approach in the context of selective regression testing. For this purpose, we use a curated training set as input to a tree-based machine learning ensemble and compare the classification precision, recall, and f-score against a non-curated set. Our study shows that using the noise reduction approach on the training instances gives better results in prediction, with an improvement of 37% on precision, 70% on recall, and 59% on f-score. (An illustrative sketch follows this entry.)
  •  
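One simple realization of such a curation step is to drop feature values that occur with contradictory labels and to collapse exact duplicates. A hedged sketch assuming pandas; the column names and rows are hypothetical, and the paper's curation may differ in detail.

```python
# Hedged sketch: remove contradictory training entries and exact duplicates.
import pandas as pd

df = pd.DataFrame({
    "change_text": ["fix parser", "fix parser", "add cache",
                    "add cache", "refactor log"],
    "test_failed": [1, 0, 1, 1, 0],  # 'fix parser' appears with both labels
})

consistent = df.groupby("change_text").filter(
    lambda g: g["test_failed"].nunique() == 1)  # drop contradictions
curated = consistent.drop_duplicates()          # collapse exact duplicates
print(curated)
```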
23.
  • Al Sabbagh, Khaled, 1987, et al. (author)
  • Improving Software Regression Testing Using a Machine Learning-Based Method for Test Type Selection
  • 2022
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Cham : Springer International Publishing. - 1611-3349 .- 0302-9743. ; 13709 LNCS, s. 480-496
  • Conference paper (peer-reviewed)abstract
    • Since only a limited time is available for performing software regression testing, a subset of crucial test cases from the test suites has to be selected for execution. In this paper, we introduce a method that uses the relation between types of code changes and regression tests to select test types that require execution. We worked closely with a large power supply company to develop and evaluate the method and measured the total regression testing time taken by our method and its effectiveness in selecting the most relevant test types. The results show that the method reduces the total regression time by an average of 18.33% compared with the approach used by our industrial partner. The results also show that using a medium window size in the method configuration improves the recall rate from 61.11% to 83.33%, but does not considerably reduce testing time. We conclude that our method can potentially be used to steer the testing effort at software development companies by guiding testers as to which regression test types are essential for execution.
  •  
24.
  • Al-Sabbagh, Khaled, et al. (author)
  • Improving test case selection by handling class and attribute noise
  • 2022
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212. ; 183
  • Journal article (peer-reviewed)abstract
    • Big data and machine learning models have been increasingly used to support software engineering processes and practices. One example is the use of machine learning models to improve test case selection in continuous integration. However, one of the challenges in building such models is the large volume of noise that comes in data, which impedes their predictive performance. In this paper, we address this issue by studying the effect of two types of noise, called class and attribute, on the predictive performance of a test selection model. For this purpose, we analyze the effect of class noise by using an approach that relies on domain knowledge for relabeling contradictory entries and removing duplicate ones. Thereafter, an existing approach from the literature is used to experimentally study the effect of attribute noise removal on learning. The analysis results show that the best learning is achieved when training a model on class-noise cleaned data only - irrespective of attribute noise. Specifically, the learning performance of the model reported 81% precision, 87% recall, and 84% f-score compared with 44% precision, 17% recall, and 25% f-score for a model built on uncleaned data. Finally, no causality relationship between attribute noise removal and the learning of a model for test case selection was drawn. (C) 2021 The Author(s). Published by Elsevier Inc.
  •  
25.
  • Al Sabbagh, Khaled, 1987, et al. (author)
  • Predicting build outcomes in continuous integration using textual analysis of source code commits
  • 2022
  • In: PROMISE 2022 - Proceedings of the 18th International Conference on Predictive Models and Data Analytics in Software Engineering, co-located with ESEC/FSE 2022. - New York, NY, USA : ACM. ; , s. 42-51
  • Conference paper (peer-reviewed)abstract
    • Machine learning has been increasingly used to solve various software engineering tasks. One example of its usage is to predict the outcome of builds in continuous integration, where a classifier is built to predict whether new code commits will successfully compile. The aim of this study is to investigate the effectiveness of fifteen software metrics in building a classifier for build outcome prediction. Particularly, we implemented an experiment wherein we compared the effectiveness of a line-level metric and fourteen other traditional software metrics on 49,040 build records that belong to 117 Java projects. We achieved an average precision of 91% and recall of 80% when using the line-level metric for training, compared to 90% precision and 76% recall for the next best traditional software metric. In contrast, using file-level metrics was found to yield a higher predictive quality (average MCC for the best software metric = 68%) than the line-level metric (average MCC = 16%) for the failed builds. We conclude that file-level metrics are better predictors of build outcomes for the failed builds, whereas the line-level metric is a slightly better predictor of passed builds.
  •  
26.
  • Al Sabbagh, Khaled, 1987, et al. (author)
  • Predicting Test Case Verdicts Using Textual Analysis of Committed Code Churns
  • 2019
  • In: CEUR Workshop Proceedings. - 1613-0073. ; 2476, s. 138-153
  • Conference paper (peer-reviewed)abstract
    • Background: Continuous Integration (CI) is an agile software development practice that involves producing several clean builds of the software per day. The creation of these builds involves frequent executions of automated tests, which is hampered by high hardware cost and reduced development velocity. Goal: The goal of our research is to develop a method that reduces the number of executed test cases at each CI cycle. Method: We adopt a design research approach with an infrastructure provider company to develop a method that exploits Machine Learning (ML) to predict test case verdicts for committed source code. We train five different ML models on two data sets and evaluate their performance using two simple retrieval measures: precision and recall. Results: While the results from training the ML models on the first data set of test executions revealed low performance, the curated data set for training showed an improvement in performance with respect to precision and recall. Conclusion: Our results indicate that the method is applicable when training the ML model on churns of small sizes.
  •  
27.
  •  
28.
  • Al Sabbagh, Khaled, 1987, et al. (author)
  • The Effect of Class Noise on Continuous Test Case Selection: A Controlled Experiment on Industrial Data
  • 2020
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - 1611-3349 .- 0302-9743. ; 12562, s. 287-303
  • Conference paper (peer-reviewed)abstract
    • Continuous integration and testing produce a large amount of data about defects in code revisions, which can be utilized for training a predictive learner to effectively select a subset of test suites. One challenge in using predictive learners lies in the noise that comes with the training data, which often leads to a decrease in classification performance. This study examines the impact of one type of noise, called class noise, on a learner's ability to select test cases. Understanding the impact of class noise on the performance of a learner for test case selection would assist testers in deciding on the appropriateness of different noise handling strategies. For this purpose, we design and implement a controlled experiment using an industrial data set to measure the impact of class noise at six different levels on the predictive performance of a learner. We measure the learning performance using the Precision, Recall, F-score, and Matthews Correlation Coefficient (MCC) metrics. The results show a statistically significant relationship between class noise and the learner's performance for test case selection. Particularly, a significant difference between the three performance measures (Precision, F-score, and MCC) under all six noise levels and at the 0% level was found, whereas a similar relationship between recall and class noise was found only at levels above 30%. We conclude that higher class noise ratios lead to missing out more tests in the predicted subset of the test suite and increase the rate of false alarms when the class noise ratio exceeds 30%. (An illustrative sketch follows this entry.)
  •  
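The experimental manipulation can be mimicked on synthetic data: flip a growing fraction of training labels and observe MCC on a clean test set. A sketch assuming scikit-learn and NumPy; the dataset and model are stand-ins for the industrial setup.

```python
# Hedged sketch: inject class noise at several ratios and measure MCC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.1, 0.3, 0.5):
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < noise  # which labels to corrupt
    y_noisy[flip] = 1 - y_noisy[flip]
    clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_noisy)
    mcc = matthews_corrcoef(y_te, clf.predict(X_te))
    print(f"class noise {noise:.0%}: MCC = {mcc:.2f}")
```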
29.
  •  
30.
  •  
31.
  • Alkoutli, Anas, et al. (author)
  • Assessing Security of Internal Vehicle Networks
  • 2023
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - 0302-9743 .- 1611-3349. - 9783031368882 ; , s. 151-164
  • Book chapter (peer-reviewed)abstract
    • Automotive software grows exponentially in size. In premium vehicles, it can exceed 100 million lines of code. One of the challenges in such large software is how it is architecturally designed and whether this design leads to security vulnerabilities. In this paper, we report on a design science research study aimed at understanding the vulnerabilities of modern premium vehicles. We used machine learning to identify and reconstruct signals within the vehicle's communication networks. The results show that distributed software architectures can have security vulnerabilities due to the high connectivity of modern vehicles, and that security needs to be seen holistically – both when constructing the vehicle's software and when designing communication channels with cloud services. The paper proposes a number of measures that can help address the identified vulnerabilities.
  •  
32.
  • Antinyan, Vard, 1984, et al. (author)
  • A Complexity Measure for Textual Requirements
  • 2016
  • In: Proceedings of 2016 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement (Iwsm-Mensura). - : IEEE. - 9781509041473 - 9781509041480
  • Conference paper (peer-reviewed)abstract
    • Unequivocally understandable requirements are vital for the software design process. In practice, however, it is hard to achieve the desired level of understandability, because in large software products a substantial share of requirements tends to have ambiguous or complex descriptions. Over time such requirements decelerate the development speed and increase the risk of late design modifications; therefore, finding and improving them is an urgent task for software designers. Manual reviewing is one way of addressing the problem, but it is effort-intensive and critically slow for large products. Another way is using measurement, in which case one needs to design effective measures. In recent years there have been great endeavors in creating and validating measures for requirements understandability, most of them focused on ambiguity patterns. While ambiguity is one property with a major effect on understandability, complexity is another important property with a similar effect, yet it is relatively less investigated. In this paper we define a complexity measure for textual requirements through an action research project in a large software development organization. We also present its evaluation results in three large companies. The evaluation shows a significant correlation between the measurement values and the manual assessment values of practitioners. We recommend using this measure together with the earlier created ambiguity measures as a means for automated identification of complex specifications.
  •  
33.
  • Antinyan, V., et al. (author)
  • A Pragmatic View on Code Complexity Management
  • 2019
  • In: Computer. - : Institute of Electrical and Electronics Engineers (IEEE). - 0018-9162 .- 1558-0814. ; 52:2, s. 14-22
  • Journal article (peer-reviewed)abstract
    • This article endeavors to underpin complexity understanding by scrutinizing how developers experience code complexity and how certain code characteristics impact complexity. The results provide a distinction between essential and accidental code characteristics and help in evaluating the influence of these characteristics on complexity increase.
  •  
34.
  • Antinyan, Vard, 1984, et al. (author)
  • Defining technical risks in software development
  • 2014
  • In: Joint Conference of the 24th International Workshop on Software Measurement, IWSM 2014, and the 9th International Conference on Software Process and Product Measurement, Mensura 2014; Rotterdam, Netherlands; 6-8 October 2014. - : IEEE. - 9781479941742
  • Conference paper (peer-reviewed)abstract
    • The challenges of technical risk assessment are difficult to address, while successful assessment can benefit software organizations appreciably. The classical definition of risk as a 'combination of probability and impact of an adverse event' does not appear to work for technical risk assessment. The main reason is the nature of the adverse event's outcome, which is continuous rather than discrete. The objective of this study was to scrutinize different aspects of technical risks and provide a definition that supports effective risk assessment and management in software development organizations. In this study we defined risk considering the nature of actual risks that emerged in software development. Afterwards, we summarized the software engineers' view on technical risks based on three workshops with 15 engineers from four software development companies. The results show that technical risks can be viewed as a combination of the uncertainty and the magnitude of the difference between the actual and the optimal design of product artifacts and processes. The presented definition is congruent with practitioners' view on technical risk. It supports risk assessment in a quantitative manner and enables identification of potential product improvement areas.
  •  
35.
  •  
36.
  • Antinyan, Vard, 1984, et al. (author)
  • Identifying Complex Functions : By Investigating Various Aspects of Code Complexity
  • 2015
  • In: Proceedings of 2015 Science and Information Conference (SAI). - : IEEE Press. - 9781479985470 - 9781479985487 - 9781479985463 ; , s. 879-888
  • Conference paper (peer-reviewed)abstract
    • The complexity management of software code has become one of the major problems in the software development industry. With growing complexity, the maintenance effort of code increases. Moreover, various aspects of complexity create difficulties for complexity assessment. The objective of this paper is to investigate the relationships between various aspects of code complexity and to propose a method for identifying the most complex functions. We have conducted an action research project in two software development companies and complemented it with a study of three open source products. Four complexity metrics are measured, and their nature and mutual influence are investigated. The results and possible explanations are discussed with software engineers in industry. The results show that there are two distinguishable aspects of complexity of source code functions: internal and outbound complexity. These have an inverse relationship, and their product does not seem to exceed a certain limit, regardless of software size. We present a method that permits identification of the most complex functions considering the two aspects of complexity. The evaluation shows that the method is effective in industry: it enables identification of the 0.5% most complex functions out of thousands for reengineering.
  •  
37.
  • Antinyan, Vard, 1984, et al. (author)
  • Identifying risky areas of software code in Agile/Lean software development: An industrial experience report
  • 2014
  • In: 2014 Software Evolution Week - IEEE Conference on Software Maintenance, Reengineering, and Reverse Engineering, CSMR-WCRE 2014 - Proceedings. - : IEEE. - 9781479941742
  • Conference paper (peer-reviewed)abstract
    • Modern software development relies on incremental delivery to facilitate quick response to customers' requests. In this dynamic environment, the continuous modifications of software code can cause risks for software developers; when developing a new feature increment, the added or modified code may contain fault-prone or difficult-to-maintain elements. The outcome of these risks can be defective software or decreased development velocity. This study presents a method to identify the risky areas and assess the risk when developing software code in a Lean/Agile environment. We have conducted an action research project in two large companies, Ericsson AB and Volvo Group Truck Technology. During the study we measured a set of code properties and investigated their influence on risk. The results show that the superposition of two metrics, complexity and revisions of a source code file, can effectively enable identification and assessment of the risk. We also illustrate how this kind of assessment can be successfully used by software developers to manage risks on a weekly basis as well as release-wise. A measurement system for systematic risk assessment has been introduced to two companies. © 2014 IEEE. (An illustrative sketch follows this entry.)
  •  
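The superposition of the two measures can be sketched as a simple ranking. Normalizing each measure and multiplying them is one plausible combination; the exact formula used in the study may differ, and the file data below is invented.

```python
# Hedged sketch: rank files by combined (normalized) complexity and revisions.
files = {  # file -> (McCabe complexity, number of revisions); invented values
    "gateway.c": (42, 31),
    "parser.c": (18, 55),
    "util.c": (7, 4),
}

max_cx = max(c for c, _ in files.values())
max_rev = max(r for _, r in files.values())
risk = {f: (c / max_cx) * (r / max_rev) for f, (c, r) in files.items()}

for f, score in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{f}: risk={score:.2f}")
```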
38.
  • Antinyan, Vard, 1984, et al. (author)
  • Monitoring Evolution of Code Complexity and Magnitude of Changes
  • 2014
  • In: Acta Cybernetica. - 0324-721X. ; 21:3, s. 367-382
  • Journal article (peer-reviewed)abstract
    • Background: Complexity management has become a crucial activity in continuous software development. While the overall perceived complexity of a product grows rather insignificantly, the small units, such as functions and files, can have noticeable complexity growth with every increment of product features. This kind of evolution triggers risks of escalating fault-proneness and deteriorating maintainability. Goal: The goal of this research was to develop a measurement system which enables effective monitoring of complexity evolution. Method: An action research has been conducted in two large software development organizations. We have measured three complexity and two change properties of code for two large industrial products. The complexity growth has been measured for five consecutive releases of products. Different patterns of growth have been identified and evaluated with software engineers in industry. Results: The results show that monitoring cyclomatic complexity evolution of functions and number of revisions of files focuses the attention of designers to potentially problematic files and functions for manual assessment and improvement. A measurement system was developed at Ericsson to support the monitoring process.
  •  
39.
  • Antinyan, Vard, et al. (author)
  • Monitoring evolution of code complexity and magnitude of changes
  • 2014
  • In: Acta Cybernetica. - : University of Szeged, Institute of Informatics. - 0324-721X. ; 21:3, s. 367-382
  • Journal article (peer-reviewed)abstract
    • Complexity management has become a crucial activity in continuous software development. While the overall perceived complexity of a product grows rather insignificantly, the small units, such as functions and files, can have noticeable complexity growth with every increment of product features. This kind of evolution triggers risks of escalating fault-proneness and deteriorating maintainability. The goal of this research was to develop a measurement system which enables effective monitoring of complexity evolution. An action research has been conducted in two large software development organizations. We have measured three complexity and two change properties of code for two large industrial products. The complexity growth has been measured for five consecutive releases of the products. Different patterns of growth have been identified and evaluated with software engineers in industry. The results show that monitoring cyclomatic complexity evolution of functions and number of revisions of files focuses the attention of designers to potentially problematic files and functions for manual assessment and improvement. A measurement system was developed at Ericsson to support the monitoring process.
  •  
40.
  • Antinyan, Vard, 1984, et al. (author)
  • Monitoring Evolution of Code Complexity in Agile/Lean Software Development - A Case Study at Two Companies
  • 2013
  • In: 13th Symposium on Programming Languages and Software Tools, SPLST 2013 - Proceedings. - 9789633062289 ; , s. 1-15
  • Conference paper (peer-reviewed)abstract
    • One of the distinguishing characteristics of Agile and Lean software development is that software products “grow” with new functionality with relatively small increments. Continuous customer demands of new features and the companies’ abilities to deliver on those demands are the two driving forces behind this kind of software evolution. Despite the numerous benefits there are a number of risks associated with this kind of growth. One of the main risks is the fact that the complexity of the software product grows slowly, but over time reaches scales which makes the product hard to maintain or evolve. The goal of this paper is to present a measurement system for monitoring the growth of complexity and drawing attention when it becomes problematic. The measurement system was developed during a case study at Ericsson and Volvo Group Truck Technology. During the case study we explored the evolution of size, complexity, revisions and number of designers of two large software products from the telecom and automotive domains. The results show that two measures needed to be monitored to keep the complexity development under control - McCabe’s complexity and number of revisions.
  •  
41.
  • Antinyan, Vard, 1984, et al. (author)
  • Mythical Unit Test Coverage
  • 2018
  • In: IEEE Software. - 0740-7459. ; 35:3, s. 73-79
  • Journal article (peer-reviewed)abstract
    • It is a continuous struggle to understand how much a product should be tested before its delivery to the market. Ericsson, as a global software development company, decided to evaluate the adequacy of the unit-test-coverage criterion that it had employed for years as a guide for sufficiency of testing. Naturally, one can think that if increasing coverage decreases the number of defects significantly, then coverage can be considered a criterion for test sufficiency. To test this hypothesis in practice, we investigated the relationship of unit-test-coverage measures and post-unit-test defects in a large commercial product of Ericsson. The results indicate that high unit-test coverage did not seem to be any tangible help in producing defect-free software.
  •  
42.
  • Antinyan, Vard, 1984, et al. (author)
  • Profiling prerelease software product and organizational performance
  • 2014
  • In: Continuous software engineering. - Cham : Springer. - 9783319112831 ; , s. 167-182
  • Book chapter (peer-reviewed)abstract
    • Background: Large software development organizations require effective means of quantifying excellence of products and improvement areas. A good quantification of excellence supports organizations in retaining market leadership. In addition, a good quantification of improvement areas is needed to continuously increase performance of products and processes. Objective: In this chapter we present a method for developing product and organizational performance profiles. The profiles are a means of quantifying prerelease properties of products and quantifying performance of software development processes. Method: We conducted two case studies at three companies - Ericsson, Volvo Group Truck Technology, and Volvo Car Corporation. The goal of the first case study was to identify risky areas of source code. We used a focus group to elicit and evaluate measures and indicators at Ericsson. Volvo Group Truck Technology was used to validate our profiling method. Results: The results of the first case study showed that profiling of product performance can be done by identifying risky areas of source code using a combination of two measures: McCabe complexity and number of revisions of files. The results of the second case study show that profiling change frequencies of models can help developers identify implicit architectural dependencies. Conclusions: We conclude that profiling is an effective tool for supporting improvements of product and organizational performance. The key to creating useful profiles is close collaboration between research and development organizations. © 2014 Springer International Publishing Switzerland. All rights reserved.
  •  
43.
  • Antinyan, Vard, 1984, et al. (author)
  • Rendex: A method for automated reviews of textual requirements
  • 2017
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212. ; 131, s. 63-77
  • Journal article (peer-reviewed)abstract
    • Conducting requirements reviews before the start of software design is one of the central goals in requirements management. Fast and accurate reviews promise to facilitate the software development process and mitigate the technical risks of late design modifications. In large software development companies, however, it is difficult to conduct reviews as fast as needed, because the number of regularly incoming requirements is typically several thousand. Manually reviewing thousands of requirements is a time-consuming task and disrupts the process of continuous software development. As a consequence, software engineers review requirements in parallel with designing the software, thus partially accepting the technical risks. In this paper we present a measurement-based method for automating requirements reviews in large software development companies. The method, Rendex, was developed in an action research project in a large software development organization and evaluated in four large companies. The evaluation shows that the assessment results of Rendex have 73%-80% agreement with the manual assessment results of software engineers. Following the evaluation, Rendex was integrated with the requirements management environment in two of the collaborating companies and is regularly used for proactive reviews of requirements. © 2017 Elsevier Inc. All rights reserved.
  •  
44.
  • Antinyan, Vard, et al. (author)
  • Validating software measures using action research: a method and industrial experiences
  • 2016
  • In: EASE '16. - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450336918
  • Conference paper (peer-reviewed)abstract
    • Validating software measures for use in practice is a challenging task. Usually, more than one complementary validation method is applied to rigorously validate software measures: theoretical methods help with defining measures with expected properties, and empirical methods help with evaluating the predictive power of measures. Despite the variety of these methods, there remain cases where the validation of measures is difficult. Particularly when the response variables of interest are not accurately measurable and the practical context cannot be reduced to an experimental setup, the abovementioned methods are not effective. In this paper we present a complementary empirical method for validating measures. The method relies on action research principles and is meant to be used in combination with theoretical validation methods. The industrial experiences documented in this paper show that in many practical cases the method is effective. © 2016 ACM.
  •  
45.
  • Babu, Md Abu Ahammed, 1994, et al. (author)
  • Impact of Image Data Splitting on the Performance of Automotive Perception Systems
  • 2024
  • In: Lecture Notes in Business Information Processing. - 1865-1356 .- 1865-1348. ; 505 LNBIP, s. 91-111
  • Conference paper (peer-reviewed)abstract
    • Context: Training image recognition systems is one of the crucial elements of the AI Engineering process in general and for automotive systems in particular. The quality of data and of the training process can have a profound impact on the quality, performance, and safety of automotive software. Objective: Splitting data between train and test sets is one of the crucial elements in this process, as it can determine both how well the system learns and how well it generalizes to new data. Typical data splits take into consideration either randomness or timeliness of data points. However, in image recognition systems, the similarity of images is of equal importance. Methods: In this computational experiment, we study the impact of six data-splitting techniques. We use an industrial dataset with high-definition color images of driving sequences to train a YOLOv7 network. Results: The mean average precision (mAP) was 0.943 and 0.841 when the similarity-based and the frame-based splitting techniques were applied, respectively. However, the object-based splitting technique produces the worst mAP score (0.118). Conclusion: There are significant differences in the performance of object detection methods when applying different data-splitting techniques. The most positive results come from random selections, whereas the most objective ones are splits based on sequences that represent different geographical locations. (An illustrative sketch follows this entry.)
  •  
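Two of the compared strategies, a purely random split and a sequence-aware split that keeps all frames of one recording on the same side, can be sketched with scikit-learn. The frame identifiers are hypothetical.

```python
# Hedged sketch: random vs. sequence-grouped train/test splits of image frames.
from sklearn.model_selection import GroupShuffleSplit, train_test_split

frames = [f"seq{s}_frame{i}" for s in range(3) for i in range(4)]
sequences = [f.split("_")[0] for f in frames]  # recording each frame belongs to

# Random split: near-duplicate frames of one sequence can leak across sets.
_, random_test = train_test_split(frames, test_size=0.25, random_state=0)

# Grouped split: whole sequences stay together, preventing that leakage.
gss = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
(train_idx, test_idx), = gss.split(frames, groups=sequences)
grouped_test = [frames[i] for i in test_idx]

print("random:", random_test)
print("grouped:", grouped_test)
```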
46.
  • Berbyuk Lindström, Nataliya, 1978, et al. (author)
  • How to Succeed in Communicating Software Metrics in Organization?
  • 2022
  • In: AMCIS (Americas Conference on Information Systems), Minneapolis, MN, August 10-14. - : AMCIS 2022 TREO 80.
  • Conference paper (peer-reviewed)abstract
    • While software metrics are indispensable for quality assurance, using metrics in practice is complicated. Quality, productivity, speed, and efficiency are important factors to be considered in software development (Holmstrom et al. 2006; Svensson 2005). Measuring the correct metrics and using them in a right and transparent way contributes to pushing development in a desirable direction, leading to achieving projected goals and outcomes (Staron and Meding 2018). On the other hand, tracking the wrong metrics, and failing to interpret and communicate them properly, results in a stressful work environment, conflicts, distrust, lower engagement, and decreased productivity (de Sá Leitão Júnior 2018; Ellis et al. 1991; Staron 2012). To ensure proper and effective use of metrics in organizations, successful communication around metrics is essential (Lindström et al. 2021; Post et al. 2002; Staron and Meding 2015). The purpose of this study is to understand and improve communication about metrics in contexts of contemporary software development practice in organizations. This is achieved by identifying the bottlenecks in the process of communication around metrics and how to overcome them in practice. Drawing on 38 semi-structured interviews and interactive workshops with metrics team members and stakeholders from three organizations, we identify three interrelated challenges: limited knowledge about metrics and lack of terminology, uncoordinated use of multiple communication channels, and the sensitivity of metrics. These influence workplace communication, trust, and performance. Our study shows the importance of developing metrics terminology to ensure the development of a shared understanding of metrics. Further, it is essential to raise awareness about the affordances of channels commonly used in software organizations, such as dashboards, email, MS Teams meetings/chat, stand-up meetings, and reports, and about how they can be combined to successfully transfer information about metrics (Verhulsdonck and Shah 2020). This becomes especially important in remote work practices. Finally, though metrics are a powerful tool for decision making, enhancing transparency, and steering development in the desired direction, they can also turn into a tool for finger-pointing, blaming, and pressure, resulting in stress and conflicts (Streit and Pizka 2011). The findings also indicate the importance of creating a culture around metrics, clarifying, and informing about the purpose of metrics in the organization (Umarji and Seaman 2008). We plan to build on the early findings of this study to develop a comprehensive framework for successful software metrics communication within organizations.
  •  
47.
  • Berbyuk Lindström, Nataliya, 1978, et al. (author)
  • Understanding Metrics Team-Stakeholder Communication in Agile Metrics Service Delivery
  • 2021
  • In: APSEC (Asia-Pacific Software Engineering Conference), December 6-10, Taiwan (virtual). ; 2021-December, s. 401-409
  • Conference paper (peer-reviewed)abstract
    • In this paper, we explore challenges in communication between metrics teams and stakeholders in metrics service delivery. Drawing on interviews and interactive workshops with team members and stakeholders at two different Swedish agile software development organizations, we identify interrelated challenges such as aligning expectations, prioritizing demands, providing regular feedback, and maintaining continuous dialogue, which influence team-stakeholder interaction, relationships and performance. Our study shows the importance of understanding communicative hurdles and provides suggestions for their mitigation, therefore meriting further empirical research.
  •  
48.
  • Berbyuk Lindström, Nataliya, 1978, et al. (author)
  • Who are Metrics Team’s Stakeholders, What Do They Expect, and How to Communicate with Them? Conducting Stakeholder Mapping with Focus on Communication in Agile Software Development Organization
  • 2022
  • In: International Conference on Information Systems (ICIS), SIGITPROJMGMT 17th International Research Workshop on Information Technology Project Management, Copenhagen, December 10, 2022.
  • Conference paper (peer-reviewed)abstract
    • As an increasing number of organizations create metrics teams, conducting stakeholder mapping is pivotal for identifying and analyzing metrics stakeholders’ expectations for reducing the risks of miscommunication and project failure. Further, though team-stakeholder communication is essential for successful collaboration, few studies focus on it in software measurement context. This case study seeks to identify and analyze metrics team’s stakeholders, with a special focus on communication challenges in team-stakeholder contacts. Inspired by Bryson's Basic Stakeholder Analysis Techniques and Mitchell, Agle, and Wood's theoretical model for stakeholder identification, a stakeholder mapping exercise was conducted using interactive workshops and follow-up interviews with 16 metrics team members and their stakeholders. The results illustrate the complexity of identifying stakeholders in agile organizations, the importance of developing a metrics culture, and enhancing transparency in team-stakeholder communication. The study aims to contribute to the development of stakeholder theory and offers insights into communication in software engineering context.
  •  
49.
  • Block, Linda, et al. (author)
  • Cerebral ischemia detection using artificial intelligence (CIDAI) - A study protocol
  • 2020
  • In: Acta Anaesthesiologica Scandinavica. - : Wiley. - 0001-5172 .- 1399-6576. ; 64:9, s. 1335-1342
  • Journal article (peer-reviewed)abstract
    • Background: The onset of cerebral ischemia is difficult to predict in patients with altered consciousness using the methods available. We hypothesize that changes in Heart Rate Variability (HRV), Near-Infrared Spectroscopy (NIRS), and Electroencephalography (EEG), correlated with clinical data and processed by artificial intelligence (AI), can indicate the development of imminent cerebral ischemia and reperfusion, respectively. This study aimed to develop a method that enables detection of imminent cerebral ischemia in unconscious patients, noninvasively and with the support of AI. Methods: This prospective observational study will include patients undergoing elective surgery for carotid endarterectomy and patients undergoing acute endovascular embolectomy for cerebral arterial embolism. HRV, NIRS, and EEG measurements and clinical information on patient status will be collected and processed using machine learning. The study will take place at Sahlgrenska University Hospital, Gothenburg, Sweden. Inclusion will start in September 2020, and patients will be included until a robust model can be constructed. By analyzing changes in HRV, EEG, and NIRS measurements in conjunction with cerebral ischemia or cerebral reperfusion, it should be possible to train artificial neural networks to detect patterns of impending cerebral ischemia. The analysis will be performed using machine learning with long short-term memory artificial neural networks combined with convolutional layers to identify patterns consistent with cerebral ischemia and reperfusion. Discussion: Early signs of cerebral ischemia could be detected more rapidly by identifying patterns in integrated, continuously collected physiological data processed by AI. Clinicians could then be alerted, and appropriate actions could be taken to improve patient outcomes. (An illustrative sketch follows this entry.)
  •  
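The model family named in the protocol (convolutional layers feeding a long short-term memory network over multichannel physiological signals) can be outlined with Keras. All shapes and layer sizes below are assumptions for illustration, not the study's architecture.

```python
# Hedged sketch: a Conv1D + LSTM binary classifier for multichannel
# physiological time series (e.g., resampled HRV/EEG/NIRS channels).
import tensorflow as tf
from tensorflow.keras import layers

n_timesteps, n_channels = 512, 8  # hypothetical window length and channel count
model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_timesteps, n_channels)),
    layers.Conv1D(32, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # P(imminent ischemia)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```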
50.
  •  
Type of publication
conference paper (167)
journal article (68)
book chapter (21)
reports (10)
editorial collection (4)
book (4)
doctoral thesis (4)
licentiate thesis (3)
editorial proceedings (2)
research review (1)
Type of content
peer-reviewed (225)
other academic/artistic (59)
Author/Editor
Staron, Miroslaw, 19 ... (252)
Meding, Wilhelm (45)
Hansson, Jörgen, 197 ... (36)
Kuzniarz, Ludwik (36)
Staron, Miroslaw (30)
Berger, Christian, 1 ... (23)
Rana, Rakesh, 1985 (22)
Durisic, Darko, 1986 (17)
Antinyan, Vard, 1984 (11)
Abrahão, Silvia (10)
Meding, W. (10)
Nilsson, Martin, 197 ... (10)
Tichy, Matthias, 197 ... (9)
Ochodek, Miroslaw (9)
Törner, Fredrik, 197 ... (9)
Serebrenik, Alexande ... (8)
Penzenstadler, Birgi ... (8)
Nilsson, Martin (8)
Wohlin, Claes (8)
Hebig, Regina, 1984 (8)
Block, Linda (7)
Horkoff, Jennifer, 1 ... (7)
Al Sabbagh, Khaled, ... (7)
Hebig, Regina (6)
Meding, Wilhelm, 197 ... (6)
El-Merhi, Ali (6)
Ochodek, M. (6)
Kuzniarz, L. (6)
Bosch, Jan, 1967 (5)
Söder, Ola (5)
Carver, J. C. (5)
Vithal, Richard (5)
Penzenstadler, Birgi ... (4)
Capilla, R. (4)
Liljencrantz, Jaquet ... (4)
Hansson, Jörgen (4)
Schröder, Jan, 1986 (4)
Odenstedt Hergès, He ... (4)
Törner, Fredrik (4)
Pareto, Lars, 1966 (4)
Berbyuk Lindström, N ... (3)
Steghöfer, Jan-Phili ... (3)
Sandberg, Anna (3)
Kumar Pandey, Sushan ... (3)
Naredi, Silvana, 195 ... (3)
Pareto, Lars (3)
Elam, Mikael, 1956 (3)
Carver, Jeffrey C. (3)
Knauss, Alessia, 198 ... (3)
Durisic, Darko (3)
University
University of Gothenburg (227)
Chalmers University of Technology (105)
Blekinge Institute of Technology (30)
University of Skövde (12)
Mälardalen University (3)
Linköping University (2)
Karlstad University (2)
University West (1)
Linnaeus University (1)
Karolinska Institutet (1)
Language
English (283)
Swedish (1)
Research subject (UKÄ/SCB)
Natural sciences (265)
Engineering and Technology (44)
Social Sciences (12)
Medical and Health Sciences (6)
Humanities (1)



 