SwePub
Search the SwePub database

  Extended search

Result list for search "L773:0164 1212"

Search: L773:0164 1212

  • Result 1-25 of 260
1.
  • Höst, Martin, et al. (author)
  • Evaluation of Code Review Methods through Interviews and Experimentation
  • 2000
  • In: Journal of Systems and Software. - 0164-1212. ; 52:2-3, s. 113-120
  • Journal article (peer-reviewed), abstract:
    • This paper presents the results of a study where the effects of introducing code reviews in an organisational unit have been evaluated. The study was performed in an ongoing commercial project, mainly through interviews with developers and an experiment where the effects of introducing code reviews were measured. Two different checklist-based review methods have been evaluated. The objectives of the study are to analyse the effects of introducing code reviews in the organisational unit and to compare the two methods. The results indicate that many of the faults that normally are found in later test phases or operation are instead found in code reviews, but no difference could be found between the two methods. The results of the study are considered positive and the organisational unit has continued to work with code reviews.
  •  
2.
  • Höst, Martin, et al. (author)
  • Exploring Bottlenecks in Market-Driven Requirements Management Processes with Discrete Event Simulation
  • 2001
  • In: Journal of Systems and Software. - 0164-1212. ; 59:3, s. 323-332
  • Journal article (peer-reviewed), abstract:
    • This paper presents a study where a market-driven requirements management process is simulated. In market-driven software development, generic software packages are released to a market with many customers. New requirements are continuously issued, and the objective of the requirements management process is to elicit, manage, and prioritize the requirements. In the presented study, a specific requirements management process is modelled using discrete event simulation, and the parameters of the model are estimated based on interviews with people from the specific organisation where the process is used. Based on the results from simulations, conditions that result in an overload situation are identified. Simulations are also used to find process change proposals that can result in a non-overloaded process. The risk of overload can be avoided if the capacity of the requirements management process is increased, or if the number of incoming requirements is decreased, for example, through early rejection of low-priority requirements. (A minimal illustrative sketch follows this record.)
  •  
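The overload condition studied above can be illustrated with a toy simulation loop. This is not the paper's model: the weekly batching, the capacity, and the 30% low-priority share are all invented parameters; the sketch only shows how a backlog grows once arrivals exceed capacity and how early rejection relieves it.

    import random

    def simulate(weeks=100, arrivals_per_week=30, capacity=25,
                 reject_low_priority=False, seed=1):
        # Toy requirements-management queue: each week a batch of new
        # requirements arrives; the process can analyse at most `capacity`.
        # Optionally, low-priority requirements (a random 30% here) are
        # rejected on arrival, mirroring the paper's change proposal.
        rng = random.Random(seed)
        backlog = 0
        for _ in range(weeks):
            arrived = sum(1 for _ in range(arrivals_per_week)
                          if not (reject_low_priority and rng.random() < 0.3))
            backlog = max(0, backlog + arrived - capacity)
        return backlog

    print("backlog without early rejection:", simulate())
    print("backlog with early rejection:   ", simulate(reject_low_priority=True))

With these invented numbers the unmodified process accumulates a large backlog, while early rejection keeps arrivals below capacity and the backlog near zero.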
3.
  • Petersson, Håkan, et al. (author)
  • Capture-Recapture in Software Inspections after 10 Years Research : Theory, Evaluation and Application.
  • 2004
  • In: Journal of Systems and Software. - 0164-1212 .- 1873-1228. ; 72:2, s. 249-264
  • Journal article (peer-reviewed), abstract:
    • Software inspection is a method to detect faults in the early phases of the software life cycle. In order to estimate the number of faults not found, capture-recapture was introduced for software inspections in 1992 to estimate remaining faults after an inspection. Since then, several papers have been written in the area, concerning the basic theory, evaluation of models and application of the method. This paper summarizes the work done in capture-recapture for software inspections during these years. Furthermore, and more importantly, the contributions of the papers are classified as theory, evaluation or application, in order to analyse the performed research as well as to highlight the areas of research that need further work. It is concluded that (1) most of the basic theory is investigated within biostatistics, (2) most software engineering research is performed on evaluation, a majority ending up in recommendation of the Mh-JK model, and (3) there is a need for application experiences. In order to support the application, an inspection process is presented with decision points based on capture-recapture estimates. (A minimal illustrative sketch follows this record.)
  •  
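For a concrete feel of the capture-recapture idea surveyed above: with two reviewers, the classical two-sample (Lincoln-Petersen) estimator infers total fault content from each reviewer's find count and their overlap. The fault ids below are invented, and the models the surveyed studies actually evaluate (e.g. Mh-JK) are more elaborate.

    def lincoln_petersen(found_by_a, found_by_b):
        # Two-sample capture-recapture: N ~= n1 * n2 / m, where n1 and n2
        # are the reviewers' find counts and m the number of faults both
        # found. Requires a non-empty overlap.
        n1, n2 = len(found_by_a), len(found_by_b)
        m = len(set(found_by_a) & set(found_by_b))
        if m == 0:
            raise ValueError("no overlap: estimate undefined")
        return n1 * n2 / m

    # Invented data: A found faults 1-8, B found 5-12; the overlap is 4,
    # so the estimate is 8 * 8 / 4 = 16 faults in total, i.e. about 4 of
    # the 16 would remain after the 12 distinct faults found so far.
    print(lincoln_petersen(set(range(1, 9)), set(range(5, 13))))  # 16.0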
4.
  • Regnell, Björn, et al. (author)
  • Towards integration of use case modelling and usage-based testing
  • 2000
  • In: Journal of Systems and Software. - 0164-1212. ; 50:2, s. 117-130
  • Journal article (peer-reviewed), abstract:
    • This paper focuses on usage modelling as a basis for both requirements engineering (RE) and testing, and investigates the possibility of integrating the two disciplines of use case modelling (UCM) and statistical usage testing (SUT). The paper investigates the conceptual framework for each discipline, and discusses how they can be integrated to form a seamless transition from requirements models to test models for reliability certification. Two approaches for such an integration are identified: integration by model transformation and integration by model extension. The integration approaches are illustrated through an example, and advantages as well as disadvantages of each approach are discussed. Based on the fact that the two disciplines have models with common information and similar structure, it is argued that an integration may result in coordination benefits and reduced costs. Several areas of further research are identified. (A minimal illustrative sketch follows this record.)
  •  
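Statistical usage testing typically encodes expected usage as a state machine with transition probabilities (a Markov-style usage model) and draws test cases from it. The sketch below walks a hypothetical model; the states and probabilities are invented, not taken from the paper.

    import random

    # Hypothetical usage model: each state lists (next action, probability).
    USAGE_MODEL = {
        "start":  [("browse", 0.7), ("search", 0.3)],
        "browse": [("select", 0.5), ("search", 0.2), ("exit", 0.3)],
        "search": [("select", 0.6), ("exit", 0.4)],
        "select": [("buy", 0.4), ("browse", 0.4), ("exit", 0.2)],
        "buy":    [("exit", 1.0)],
    }

    def draw_test_case(model, rng, start="start", max_len=20):
        # A random walk through the usage model; the visited sequence is
        # one statistically representative test case.
        state, trace = start, [start]
        while state != "exit" and len(trace) < max_len:
            actions, weights = zip(*model[state])
            state = rng.choices(actions, weights=weights)[0]
            trace.append(state)
        return trace

    rng = random.Random(0)
    for _ in range(3):
        print(" -> ".join(draw_test_case(USAGE_MODEL, rng)))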
5.
  • Thelin, Thomas, et al. (author)
  • Applying sampling to improve software inspections
  • 2004
  • In: Journal of Systems and Software. - 0164-1212. ; 73:2, s. 257-269
  • Journal article (peer-reviewed), abstract:
    • The main objective of software inspections is to find faults in software documents. The benefits of inspections are reported from researchers as well as software organizations. However, inspections are time consuming and the resources may not be sufficient to inspect all documents. Sampling of documents in inspections provides a systematic way to select what is to be inspected when resources are not sufficient to inspect everything. The method presented in this paper uses sampling, inspection and resource scheduling to increase the efficiency of an inspection session. A pre-inspection phase is used in order to determine which documents need most inspection time, i.e., which documents contain the most faults. Then, the main inspection is focused on these documents. We describe the sampling method and provide empirical evidence, which indicates that the method is appropriate to use. A Monte Carlo simulation is used to evaluate the proposed method and a case study using industrial data is used to validate the simulation model. Furthermore, we discuss the results and important future research in the area of sampling of software inspections. (A minimal illustrative sketch follows this record.)
  •  
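The pre-inspection idea lends itself to a small Monte Carlo check of exactly the kind the abstract mentions: sample every document briefly, then spend the main inspection budget on the documents whose samples found the most faults. All fault densities, detection rates, and budgets below are invented; only the shape of the method mirrors the abstract.

    import random

    def trial(seed, n_docs=10, budget=4):
        rng = random.Random(seed)
        # Invented ground truth: fault content per document (unknown to us).
        faults = [rng.randint(0, 30) for _ in range(n_docs)]
        # Pre-inspection: a quick pass finding each fault with prob. 0.2.
        sampled = [sum(rng.random() < 0.2 for _ in range(f)) for f in faults]
        # Main inspection (assumed to find ~80% of faults) on the documents
        # whose pre-inspection samples were worst.
        chosen = sorted(range(n_docs), key=lambda d: -sampled[d])[:budget]
        guided = sum(round(0.8 * faults[d]) for d in chosen)
        # Baseline: the same budget spent on randomly chosen documents.
        baseline = sum(round(0.8 * faults[d])
                       for d in rng.sample(range(n_docs), budget))
        return guided, baseline

    runs = [trial(s) for s in range(1000)]
    print("mean faults found, sample-guided:", sum(g for g, _ in runs) / 1000)
    print("mean faults found, random docs:  ", sum(b for _, b in runs) / 1000)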
6.
  • Thelin, Thomas, et al. (author)
  • Robust estimations of fault content with capture-recapture and detection profile estimators
  • 2000
  • In: Journal of Systems and Software. - 0164-1212. ; 52:2, s. 139-148
  • Journal article (peer-reviewed), abstract:
    • Inspections are widely used in the software engineering community as efficient contributors to reduced fault content and improved product understanding. In order to measure and control the effect and use of inspections, the fault content after an inspection must be estimated. The capture-recapture method, with its origin in biological sciences, is a promising approach for estimation of the remaining fault content in software artefacts. However, a number of empirical studies show that the estimates are neither accurate nor robust. In order to find robust estimates, i.e., estimates with small bias and variations, the adherence to the prerequisites for different estimation models is investigated. The basic hypothesis is that a model should provide better estimates the closer the actual sample distribution is to the model's theoretical distribution. Firstly, a distance measure is evaluated; secondly, a χ²-based procedure is applied. Thirdly, smoothing algorithms are tried out, e.g., mean and median values of the estimates from a number of estimation models. Based on two different inspection experiments, we conclude that it is not possible to show a correlation between adherence to the model's theoretical distributions and the prediction capabilities of the models. This indicates that there are other factors that affect the estimation capabilities more than the prerequisites. Neither does the investigation point out any specific model to be superior. On the contrary, the Mh-JK model, which has been shown as the best alternative in a prior study, is inferior in this study. The most robust estimations are achieved by the smoothing algorithms. (A minimal illustrative sketch follows this record.)
  •  
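The smoothing algorithms that the study found most robust can be illustrated generically: run several fault-content estimators on the same inspection data and take the mean or median of their outputs, so that no single badly biased model dominates. The three estimators below are deliberately simplistic stand-ins (the real Mh-JK and χ²-adherence machinery is more involved); only the smoothing step reflects the abstract.

    from statistics import mean, median

    def estimators(finds):
        # `finds`: one set of fault ids per reviewer. Three crude
        # fault-content estimates, stand-ins for real model families.
        union = set().union(*finds)
        a, b = sorted(finds, key=len, reverse=True)[:2]
        lincoln_petersen = len(a) * len(b) / max(1, len(a & b))
        rate = sum(map(len, finds)) / (len(finds) * len(union))
        extrapolated = len(union) / (1 - (1 - rate) ** len(finds))
        lower_bound = float(len(union))
        return [lincoln_petersen, extrapolated, lower_bound]

    finds = [{1, 2, 3, 4, 5, 6}, {4, 5, 6, 7, 8}, {1, 5, 8, 9}]
    ests = estimators(finds)
    print("individual:     ", [round(e, 1) for e in ests])
    print("mean-smoothed:  ", round(mean(ests), 1))
    print("median-smoothed:", round(median(ests), 1))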
7.
  • Wohlin, Claes, et al. (author)
  • Strategies for Industrial Relevance in Software Engineering Education
  • 1999
  • In: Journal of Systems and Software. - 0164-1212. ; 49:2-3, s. 125-134
  • Journal article (peer-reviewed), abstract:
    • This paper presents a collection of experiences related to success factors in graduate and postgraduate education. The experiences are mainly concerned with how to make the education relevant from an industrial viewpoint. This is emphasized as a key issue in software engineering education and research, as the main objective is to give the students a good basis for large-scale software development in an industrial environment. The presentation is divided into experiences at the graduate and postgraduate levels, respectively. For each level a number of strategies to achieve industrial relevance are presented. On the graduate level a course in large-scale software development is described to exemplify how industrial relevance can be achieved on the graduate level. The strategies on the postgraduate level have been successful, but it is concluded that more can be done regarding industrial collaboration in the planning and conduct of experiments and case studies. Another interesting strategy for the future is a special postgraduate program for people employed in industry.
  •  
8.
  • Andersson, Niclas, et al. (author)
  • Overview and industrial application of code generator generators
  • 1996
  • In: Journal of Systems and Software. - : Elsevier. - 0164-1212 .- 1873-1228. ; 32:3, s. 185-214
  • Journal article (peer-reviewed), abstract:
    • During the past 10 to 15 years, there has been active research in the area of automatically generating the code generator part of compilers from formal specifications. However, little has been reported on the application of these systems in an industrial setting. This paper attempts to fill this gap, in addition to providing a tutorial overview of the most well-known methods. Four systems for automatic generation of code generators are described in this paper: CGSS, BEG, TWIG, and BURG. CGSS is an older Graham-Glanville style system based on pattern matching through parsing, whereas BEG, TWIG, and BURG are more recent systems based on tree pattern matching combined with dynamic programming. An industrial-strength code generator previously implemented for a special-purpose language using the CGSS system is described and compared in some detail to our new implementation based on the BEG system. Several problems of integrating local and global register allocations within automatically generated code generators are described, and some solutions are proposed. In addition, the specification of a full code generator for SUN SPARC with register windows using the BEG system is described. We finally conclude that current technology of automatically generating code generators is viable in an industrial setting. However, further research needs to be done on the problem of properly integrating register allocation and instruction scheduling with instruction selection, when both are generated from declarative specifications. (A minimal illustrative sketch follows this record.)
  •  
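The tree-pattern-matching-with-dynamic-programming technique behind BEG, TWIG, and BURG can be shown in miniature: compute, bottom-up, the cheapest instruction "tiling" of each subtree, letting a multi-node pattern compete with single-node ones. The target, patterns, and costs below are invented, not those of any cited system.

    from dataclasses import dataclass

    @dataclass
    class Node:
        op: str            # "const", "var", "add", "mul"
        kids: tuple = ()

    # Invented single-node patterns for a fictitious target: op -> (instr, cost).
    PATTERNS = {"const": ("loadi", 1), "var": ("load", 2),
                "add": ("add", 1), "mul": ("mul", 3)}

    def select(node):
        # Bottom-up dynamic programming: the best covering of a subtree is
        # the cheapest matching pattern plus the best coverings of its kids.
        kid_results = [select(k) for k in node.kids]
        instr, cost = PATTERNS[node.op]
        best_cost = cost + sum(c for c, _ in kid_results)
        best_code = [i for _, seq in kid_results for i in seq] + [instr]
        # One multi-node pattern: add(x, const) -> "addi" at cost 1 covers
        # both the add and the constant; the DP keeps whichever is cheaper.
        if node.op == "add" and node.kids[1].op == "const":
            lcost, lcode = kid_results[0]
            if lcost + 1 < best_cost:
                best_cost, best_code = lcost + 1, lcode + ["addi"]
        return best_cost, best_code

    tree = Node("mul", (Node("add", (Node("var"), Node("const"))), Node("var")))
    print(select(tree))   # (8, ['load', 'addi', 'load', 'mul'])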
9.
  • Berglund, Erik (author)
  • Designing electronic reference documentation for software component libraries
  • 2003
  • In: Journal of Systems and Software. - 0164-1212 .- 1873-1228. ; 68:1, s. 65-75
  • Journal article (peer-reviewed), abstract:
    • Contemporary software development is based on global sharing of software component libraries. As a result, programmers spend much time reading reference documentation rather than writing code, making library reference documentation a central programming tool. Traditionally, reference documentation is designed for textbooks even though it may be distributed online. However, the computer provides new dimensions of change, evolution, and adaptation that can be utilized to support efficiency and quality in software development. What is difficult to determine is how the electronic text dimensions best can be utilized in library reference documentation. This article presents a study of the design of electronic reference documentation for software component libraries. Results are drawn from a study in an industrial environment based on the use of an experimental electronic reference documentation (called Dynamic Javadoc or DJavadoc) used in a real-work situation for 4 months. The results from interviews with programmers indicate that the electronic library reference documentation does not require adaptation or evolution on an individual level. More importantly, reference documentation should facilitate the transfer of code from documentation to source files and also support the integration of multiple documentation sources.
  •  
10.
  • Fritzson, Peter (author)
  • Symbolic Debugging through Incremental Compilation in an Integrated Environment
  • 1983
  • In: Journal of Systems and Software. - : Elsevier. - 0164-1212 .- 1873-1228. ; 3:4, s. 285-294
  • Journal article (peer-reviewed), abstract:
    • It is demonstrated that fine-grained incremental compilation is a relevant technique when implementing powerful debuggers and incremental programming environments. A debugger and an incremental compiler for Pascal have been implemented in the DICE system (Distributed Incremental Compiling Environment). Incremental compilation is at the statement level, which makes it useful for the debugger, which also operates at the statement level. The quality of code produced by the incremental compiler approaches that required for production use. The algorithms involved in incremental compilation are not very complicated, but they require information that is easily available only in an integrated system, like DICE, where editor, compiler, linker, debugger and program database are well integrated into a single system. The extra information that has to be kept around, like the cross-reference database, can be used for multiple purposes, which makes total system economics favorable.
  •  
11.
  • Fritzson, Peter, et al. (author)
  • Using assertions in declarative and operational models for automated debugging
  • 1994
  • In: Journal of Systems and Software. - : Elsevier. - 0164-1212 .- 1873-1228. ; 25:3, s. 223-239
  • Journal article (peer-reviewed), abstract:
    • This article presents an improved method for semiautomatic bug localization, by extending our previous generalized algorithmic debugging technique (GADT) [Fritzson et al. 1991], which uses declarative assertions about program units such as procedures and operational assertions about program behavior. For example, functional properties are best expressed through declarative assertions about procedure units, whereas order-dependent properties, or sequencing constraints in general, are more easily expressed using operational semantics. A powerful assertion language, called FORMAN, has been developed to this end. Such assertions can be collected into assertion libraries, which can greatly increase the degree of automation in bug localization. The long-range goal of this work is a semiautomatic debugging and testing system, which can be used during large-scale program development of nontrivial programs. To our knowledge, the extended GADT (EGADT) presented here is the first method that uses powerful operational assertions integrated with algorithmic debugging. In addition to providing support for local-level bug localization within procedures (which is not handled well by basic algorithmic debugging), the operational assertions reduce the number of irrelevant questions to the programmer during bug localization, thus further improving bug localization. A prototype of the GADT, implemented in Pascal, supports debugging in a subset of Pascal. An interpreter of FORMAN assertions has also been implemented in Pascal. During bug localization, both declarative and operational assertions are evaluated on execution traces. (A minimal illustrative sketch follows this record.)
  •  
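The paper's central distinction can be made concrete in a few lines of generic Python (this illustrates the two assertion kinds, not FORMAN syntax or the GADT algorithm): a declarative assertion constrains a procedure's input/output relation, while an operational assertion constrains the order of events in an execution trace.

    def remove_duplicates(xs):
        # Declarative assertions: properties of the result, independent
        # of how the procedure computed it.
        result = list(dict.fromkeys(xs))
        assert set(result) == set(xs), "elements lost or invented"
        assert len(result) == len(set(result)), "duplicates remain"
        return result

    def check_open_close(trace):
        # Operational assertion: a sequencing constraint over a recorded
        # execution trace ("reads happen only between open and close").
        is_open = False
        for event in trace:
            if event == "open":
                assert not is_open, "nested open"
                is_open = True
            elif event == "read":
                assert is_open, "read outside open/close"
            elif event == "close":
                assert is_open, "close without open"
                is_open = False
        assert not is_open, "left open at end of trace"

    remove_duplicates([3, 1, 3, 2])
    check_open_close(["open", "read", "read", "close"])   # satisfied
    try:
        check_open_close(["open", "close", "read"])       # violated
    except AssertionError as err:
        print("operational assertion flagged:", err)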
12.
  • Lundell, Björn, et al. (author)
  • Changing perceptions of CASE-technology
  • 2004
  • In: Journal of Systems and Software. - : Elsevier. - 0164-1212 .- 1873-1228. ; 72:2, s. 271-280
  • Research review (peer-reviewed), abstract:
    • The level to which CASE technology has been successfully deployed in IS and software development organisations has been at best variable. Much has been written about an apparent mismatch between user expectations of the technology and the products which are developed for the growing marketplace. In this paper we explore how this tension has developed over time, with the aim of identifying and characterising the major factors contributing to it. We identify three primary themes: volatility and plurality in the marketplace; the close relationship between tools and development methods; and the context sensitivity of feature assessment. By exploring the tension and developing these themes we hope to further the debate on how to improve evaluation of CASE prior to adoption.
  •  
13.
  • Shahmehri, Nahid, et al. (author)
  • Usability criteria for automated debugging systems
  • 1995
  • In: Journal of Systems and Software. - : Elsevier. - 0164-1212 .- 1873-1228. ; 31:1, s. 55-70
  • Journal article (peer-reviewed), abstract:
    • Much of the current discussion around automated debugging systems is centered around various technical issues. In contrast, this paper focuses on user oriented usability criteria for automated debugging systems, and reviews several systems according to these criteria. We introduce four usability criteria: generality, cognitive plausibility, degree of automation and appreciation of the user's expertise. A debugging system which is general is able to understand a program without restrictive assumptions about the class of algorithms, the implementation, etc. A cognitively plausible debugging system supports debugging according to the user's mental model, e.g. by supporting several levels of abstraction and directions of bug localization. A high degree of automation means that fewer interactions with the user are required to find a bug. A debugging system that appreciates the user's expertise is suitable for both expert and novice programmers, and has the ability to take advantage of the additional knowledge of an expert programmer to speed up and improve the debugging process. Existing automated debugging systems fulfill these user-oriented requirements to a varying degree. However, many improvements are still needed to make automated debugging systems attractive to a broad range of users.
  •  
14.
  • Thelin, Thomas, et al. (author)
  • Applying Sampling to Improve Software Inspections.
  • 2004
  • In: Journal of Systems and Software. - New York : Elsevier. - 0164-1212. ; 73:2, s. 257-269
  • Journal article (peer-reviewed), abstract:
    • The main objective of software inspections is to find faults in software documents. The benefits of inspections are reported from researchers as well as software organizations. However, inspections are time consuming and the resources may not be sufficient to inspect all documents. Sampling of documents in inspections provides a systematic way to select what is to be inspected when resources are not sufficient to inspect everything. The method presented in this paper uses sampling, inspection and resource scheduling to increase the efficiency of an inspection session. A pre-inspection phase is used in order to determine which documents need most inspection time, i.e., which documents contain the most faults. Then, the main inspection is focused on these documents. We describe the sampling method and provide empirical evidence, which indicates that the method is appropriate to use. A Monte Carlo simulation is used to evaluate the proposed method and a case study using industrial data is used to validate the simulation model. Furthermore, we discuss the results and important future research in the area of sampling of software inspections.
  •  
15.
  • Abbas, Nadeem, 1980-, et al. (author)
  • ASPLe : a methodology to develop self-adaptive software systems with systematic reuse
  • 2020
  • In: Journal of Systems and Software. - : Elsevier. - 0164-1212 .- 1873-1228. ; 167, s. 1-19
  • Journal article (peer-reviewed), abstract:
    • More than two decades of research have demonstrated an increasing need for software systems to be self-adaptive. Self-adaptation is required to deal with runtime dynamics which are difficult to predict before deployment. A vast body of knowledge to develop Self-Adaptive Software Systems (SASS) has been established. We, however, discovered a lack of process support to develop self-adaptive systems with reuse. To that end, we propose a domain-engineering based methodology, Autonomic Software Product Lines engineering (ASPLe), which provides step-by-step guidelines for developing families of SASS with systematic reuse. The evaluation results from a case study show positive effects on quality and reuse for self-adaptive systems designed using the ASPLe compared to state-of-the-art engineering practices.
  •  
16.
  • Addazi, Lorenzo, et al. (author)
  • Blended graphical and textual modelling for UML profiles : A proof-of-concept implementation and experiment
  • 2021
  • In: Journal of Systems and Software. - : Elsevier Inc. - 0164-1212 .- 1873-1228. ; 175
  • Journal article (peer-reviewed), abstract:
    • Domain-specific modelling languages defined by extending or constraining the Unified Modelling Language (UML) through the profiling mechanism have historically relied on graphical notations to maximise human understanding and facilitate communication among stakeholders. Other notations, such as text-, form-, or table-based are, however, often preferred for specific modelling purposes, due to the nature of a specific domain or the available tooling, or for personal preference. Currently, the state of the art support for UML-based languages provides an almost completely detached, or even entirely mutually exclusive, use of graphical and textual modelling. This becomes inadequate when dealing with the development of modern systems carried out by heterogeneous stakeholders. Our intuition is that a modelling framework based on seamless blended multi-notations can disclose several benefits, among which: flexible separation of concerns, multi-view modelling based on multiple notations, convenient text-based editing operations (inside and outside the modelling environment), and eventually faster modelling activities. In this paper we report on: (i) a proof-of-concept implementation of a framework for UML and profiles modelling using blended textual and graphical notations, and (ii) an experiment on the framework, which eventually shows that blended multi-notation modelling performs better than standard single-notation modelling.
  •  
17.
  • Afzal, Wasif, et al. (author)
  • Software Test Process Improvement Approaches: A Systematic Literature Review and an Industrial Case Study
  • 2016
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212. ; 111, s. 1-33
  • Journal article (peer-reviewed), abstract:
    • Software Test Process Improvement (STPI) approaches are frameworks that guide software development organizations to improve their software testing process. We have identified existing STPI approaches and their characteristics (such as completeness of development, availability of information and assessment instruments, and domain limitations of the approaches) using a systematic literature review (SLR). Furthermore, two selected approaches (TPI Next and TMMi) are evaluated with respect to their content and assessment results in industry. As a result of this study, we have identified 18 STPI approaches and their characteristics. A detailed comparison of the content of TPI Next and TMMi is done. We found that many of the STPI approaches do not provide sufficient information or the approaches do not include assessment instruments. This makes it difficult to apply many approaches in industry. Greater similarities were found between TPI Next and TMMi and fewer differences. We conclude that numerous STPI approaches are available but not all are generally applicable for industry. One major difference between available approaches is their model representation. Even though the applied approaches generally show strong similarities, differences in the assessment results arise due to their different model representations.
  •  
18.
  • Al-Sabbagh, Khaled, et al. (author)
  • Improving test case selection by handling class and attribute noise
  • 2022
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212. ; 183
  • Journal article (peer-reviewed), abstract:
    • Big data and machine learning models have been increasingly used to support software engineering processes and practices. One example is the use of machine learning models to improve test case selection in continuous integration. However, one of the challenges in building such models is the large volume of noise that comes with the data, which impedes their predictive performance. In this paper, we address this issue by studying the effect of two types of noise, called class and attribute, on the predictive performance of a test selection model. For this purpose, we analyze the effect of class noise by using an approach that relies on domain knowledge for relabeling contradictory entries and removing duplicate ones. Thereafter, an existing approach from the literature is used to experimentally study the effect of attribute noise removal on learning. The analysis results show that the best learning is achieved when training a model on class-noise-cleaned data only, irrespective of attribute noise. Specifically, the model trained on cleaned data reported 81% precision, 87% recall, and 84% f-score, compared with 44% precision, 17% recall, and 25% f-score for a model built on uncleaned data. Finally, no causal relationship between attribute noise removal and the learning of a model for test case selection could be established. (A minimal illustrative sketch follows this record.)
  •  
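In its simplest form, the class-noise handling described above amounts to grouping identical feature rows, dropping exact duplicates, and reconciling contradictory labels. The sketch below uses invented records and resolves contradictions by majority vote, whereas the paper relies on domain knowledge for relabeling.

    from collections import Counter, defaultdict

    def clean_class_noise(rows):
        # rows: (feature_tuple, label) pairs. Duplicates collapse to one
        # entry; contradictory labels for identical features are relabeled
        # by majority vote (a stand-in for the paper's domain knowledge).
        by_features = defaultdict(list)
        for features, label in rows:
            by_features[features].append(label)
        return [(features, Counter(labels).most_common(1)[0][0])
                for features, labels in by_features.items()]

    noisy = [(("mod_a", "unit"), "select"),
             (("mod_a", "unit"), "select"),      # exact duplicate
             (("mod_a", "unit"), "skip"),        # contradictory label
             (("mod_b", "integration"), "skip")]
    print(clean_class_noise(noisy))
    # [(('mod_a', 'unit'), 'select'), (('mod_b', 'integration'), 'skip')]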
19.
  • Alahyari, Hiva, 1979, et al. (author)
  • A study of value in agile software development organizations
  • 2017
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212 .- 1873-1228. ; 125, s. 271-288
  • Journal article (peer-reviewed), abstract:
    • The Agile manifesto focuses on the delivery of valuable software. In Lean, the principles emphasise value, where every activity that does not add value is seen as waste. Despite the strong focus on value, and that the primary critical success factor for software intensive product development lies in the value domain, no empirical study has investigated specifically what value is. This paper presents an empirical study that investigates how value is interpreted and prioritised, and how value is assured and measured. Data was collected through semi-structured interviews with 23 participants from 14 agile software development organisations. The contribution of this study is fourfold. First, it examines how value is perceived amongst agile software development organisations. Second, it compares the perceptions and priorities of the perceived values by domains and roles. Third, it includes an examination of what practices are used to achieve value in industry, and what hinders the achievement of value. Fourth, it characterises what measurements are used to assure, and evaluate value-creation activities.
  •  
20.
  • Ali, Nauman bin, et al. (author)
  • FLOW-assisted value stream mapping in the early phases of large-scale software development
  • 2016
  • In: Journal of Systems and Software. - : Elsevier. - 0164-1212 .- 1873-1228. ; 111, s. 213-227
  • Journal article (peer-reviewed), abstract:
    • Value stream mapping (VSM) has been successfully applied in the context of software process improvement. However, its current adaptations from Lean manufacturing focus mostly on the flow of artifacts and have taken no account of the essential information flows in software development. A solution specifically targeted toward information flow elicitation and modeling is FLOW. This paper aims to propose and evaluate the combination of VSM and FLOW to identify and alleviate information and communication related challenges in large-scale software development. Using case study research, FLOW-assisted VSM was used for a large product at Ericsson AB, Sweden. Both the process and the outcome of FLOW-assisted VSM have been evaluated from the practitioners’ perspective. It was noted that FLOW helped to systematically identify challenges and improvements related to information flow. Practitioners responded favorably to the use of VSM and FLOW, acknowledged the realistic nature and impact on the improvement on software quality, and found the overview of the entire process using the FLOW notation very useful. The combination of FLOW and VSM presented in this study was successful in systematically uncovering issues and characterizing their solutions, indicating their practical usefulness for waste removal with a focus on information flow related issues.
  •  
21.
  • Ali, Nazakat, et al. (author)
  • Modeling and safety analysis for collaborative safety-critical systems using hierarchical colored Petri nets
  • 2024
  • In: Journal of Systems and Software. - : Elsevier Inc. - 0164-1212 .- 1873-1228. ; 210
  • Journal article (peer-reviewed), abstract:
    • Context: Collaborative systems enable multiple independent systems to work together towards a common goal. These systems can include both human-system and system-system interactions and can be found in a variety of settings, including smart manufacturing, smart transportation, and healthcare. Safety is an important consideration for collaborative systems because one system's failure can significantly impact the overall system performance and adversely affect other systems, humans or the environment. Goal: Fail-safe mechanisms for safety-critical systems are designed to bring the system to a safe state in case of a failure in the sensors or actuators. However, a collaborative safety-critical system must do better and be safe-operational: e.g., a failure of one of the members in a platoon of vehicles in the middle of a highway is not acceptable. Thus, failures must be compensated, and compliance with safety constraints must be ensured even under faults or failures of constituent systems. Method: In this paper, we model and analyze safety for collaborative safety-critical systems using hierarchical Coloured Petri nets (CPN). We used an automated Human Rescue Robot System (HRRS) as a case study, modeled it using hierarchical CPN, and injected some specified failures to check and confirm the safe behavior in case of unexpected scenarios. Results: The system behavior was observed after injecting three types of failures in constituent systems, and then safety mechanisms were applied to mitigate the effect of these failures. After applying safety mechanisms, the HRRS system's overall behavior was again observed both in terms of verification and validation, and the simulated results show that all the identified failures were mitigated and HRRS completed its mission. Conclusion: It was found that the approach based on formal methods (CPN modeling) can be used for the safety analysis, modeling, validation, and verification of collaborative safety-critical systems like HRRS. The hierarchical CPN provides a rigorous way of modeling to implement complex collaborative systems. (A minimal illustrative sketch follows this record.)
  •  
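A taste of what colored-Petri-net modelling involves: places hold typed ("colored") tokens, and a transition fires only when tokens satisfying its guards can be consumed. The sketch below is a deliberately tiny interpreter with an invented rescue-robot flavour; real hierarchical CPN tooling is far richer.

    def fire(marking, transition):
        # Fire if enabled: consume one guard-satisfying token per input
        # place, then produce tokens on the output places.
        picked = {}
        for place, guard in transition["inputs"].items():
            candidates = [t for t in marking[place] if guard(t)]
            if not candidates:
                return False                    # transition not enabled
            picked[place] = candidates[0]
        for place, token in picked.items():
            marking[place].remove(token)
        for place, produce in transition["outputs"].items():
            marking[place].append(produce(picked))
        return True

    # Invented example: assign a rescue task only to a robot whose battery
    # is at least 60% (the "color" of a token is its data).
    marking = {"idle": [{"id": "r1", "battery": 40}, {"id": "r2", "battery": 90}],
               "tasks": [{"task": "t7"}],
               "assigned": []}
    assign = {"inputs": {"idle": lambda r: r["battery"] >= 60,
                         "tasks": lambda t: True},
              "outputs": {"assigned":
                          lambda p: (p["idle"]["id"], p["tasks"]["task"])}}
    print(fire(marking, assign), marking["assigned"])   # True [('r2', 't7')]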
22.
  • Ampatzoglou, A., et al. (author)
  • Research state of the art on GoF design patterns: A mapping study
  • 2013
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212. ; 86:7, s. 1945-1964
  • Journal article (peer-reviewed), abstract:
    • Design patterns are used in software development to provide reusable and documented solutions to common design problems. Although many studies have explored various aspects of design patterns, no research summarizing the state of research related to design patterns existed up to now. This paper presents the results of a mapping study of about 120 primary studies, to provide an overview of the research efforts on Gang of Four (GoF) design patterns. The research questions of this study deal with (a) whether design pattern research can be further categorized into research subtopics, (b) which of these subtopics are the most active ones and (c) what the reported effect of GoF patterns on software quality attributes is. The results suggest that design pattern research can be further categorized into research on GoF pattern formalization, detection and application and on the effect of GoF patterns on software quality attributes. Concerning the intensity of research activity of the abovementioned subtopics, research on pattern detection and on the effect of GoF patterns on software quality attributes appears to be the most active. Finally, the reported research to date on the effect of GoF patterns on software quality attributes is controversial, because some studies identify a pattern's effect as beneficial whereas others report the same pattern's effect as harmful.
  •  
23.
  • Antinyan, Vard, 1984, et al. (author)
  • Rendex: A method for automated reviews of textual requirements
  • 2017
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212. ; 131, s. 63-77
  • Journal article (peer-reviewed), abstract:
    • Conducting requirements reviews before the start of software design is one of the central goals in requirements management. Fast and accurate reviews promise to facilitate the software development process and mitigate technical risks of late design modifications. In large software development companies, however, it is difficult to conduct reviews as fast as needed, because the number of regularly incoming requirements is typically several thousand. Manually reviewing thousands of requirements is a time-consuming task and disrupts the process of continuous software development. As a consequence, software engineers review requirements in parallel with designing the software, thus partially accepting the technical risks. In this paper we present a measurement-based method for automating requirements reviews in large software development companies. The method, Rendex, is developed in an action research project in a large software development organization and evaluated in four large companies. The evaluation shows that the assessment results of Rendex have 73%-80% agreement with the manual assessment results of software engineers. Following the evaluation, Rendex was integrated with the requirements management environment in two of the collaborating companies and is regularly used for proactive reviews of requirements. (A minimal illustrative sketch follows this record.)
  •  
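The flavour of measurement-based requirements review can be conveyed with a toy checker. The indicators below (vague words, missing modal verb, excessive length) are generic heuristics with invented thresholds, not Rendex's actual measures.

    import re

    VAGUE = {"fast", "user-friendly", "appropriate", "flexible", "easy", "etc"}

    def review(requirement, max_words=40):
        # Return automatically detectable issues, toy-heuristic style.
        words = re.findall(r"[a-z-]+", requirement.lower())
        issues = []
        found_vague = VAGUE & set(words)
        if found_vague:
            issues.append("vague terms: " + ", ".join(sorted(found_vague)))
        if "shall" not in words and "must" not in words:
            issues.append("no 'shall'/'must': obligation unclear")
        if len(words) > max_words:
            issues.append(f"too long ({len(words)} words); consider splitting")
        return issues

    print(review("The system should be fast and user-friendly."))
    print(review("The system shall respond to a query within 2 seconds."))  # []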
24.
  • Asplund, Fredrik, et al. (author)
  • The Discourse on Tool Integration Beyond Technology, A Literature Survey
  • 2015
  • In: Journal of Systems and Software. - : Elsevier. - 0164-1212 .- 1873-1228. ; 106, s. 117-131
  • Journal article (peer-reviewed), abstract:
    • The tool integration research area emerged in the 1980s. This survey focuses on those strands of tool integration research that discuss issues beyond technology. We reveal a discourse centered around six frequently mentioned non-functional properties. These properties have been discussed in relation to technology and high level issues. However, while technical details have been covered, high level issues and, by extension, the contexts in which tool integration can be found, are treated indifferently. We conclude that this indifference needs to be challenged, and research on a larger set of stakeholders and contexts initiated. An inventory of the use of classification schemes underlines the difficulty of evolving the classical classification scheme published by Wasserman. Two frequently mentioned redefinitions are highlighted to facilitate their wider use. A closer look at the limited number of research methods and the poor attention to research design indicates a need for a changed set of research methods. We propose more critical case studies and method diversification through theory triangulation. Additionally, among disparate discourses we highlight several focusing on standardization which are likely to contain relevant findings. This suggests that open communities employed in the context of (pre-)standardization could be especially important in furthering the targeted discourse.
  •  
25.
  • Avritzer, Alberto, et al. (author)
  • Methods and Opportunities for Rejuvenation in Aging Distributed Software
  • 2010
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212 .- 1873-1228. ; 83:9, s. 1568-1578
  • Journal article (peer-reviewed), abstract:
    • In this paper we describe several methods for detecting the need for software rejuvenation in mission critical systems that are subjected to worm infection, and introduce new software rejuvenation algorithms. We evaluate these algorithms' effectiveness using both simulation studies and analytic modeling, by assessing the probability of mission success. The system under study emulates a Mobile Ad-Hoc Network (MANET) of processing nodes. Our analysis determined that some of our rejuvenation algorithms are quite effective in maintaining a high probability of mission success while the system is under explicit attack by a worm infection. (A minimal illustrative sketch follows this record.)
  •  
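Rejuvenation trade-offs of the kind evaluated in the paper can be sketched as a toy availability simulation: nodes become infected at random, and periodically restarting (rejuvenating) a node cleans it at the cost of brief downtime. All rates and the policy below are invented; the paper's MANET model and algorithms are more sophisticated.

    import random

    def availability(rejuvenate_every, hours=72, n_nodes=20,
                     p_infect=0.02, seed=7):
        # Each hour a healthy node is infected with probability p_infect;
        # an infected node is useless until rejuvenated. Rejuvenation
        # cleans a node but costs it that hour of service.
        rng = random.Random(seed)
        infected = [False] * n_nodes
        usable = 0
        for hour in range(1, hours + 1):
            for i in range(n_nodes):
                if rejuvenate_every and hour % rejuvenate_every == 0:
                    infected[i] = False        # clean restart (node down 1h)
                    continue
                if not infected[i] and rng.random() < p_infect:
                    infected[i] = True
                if not infected[i]:
                    usable += 1
        return usable / (hours * n_nodes)

    print("no rejuvenation:      ", round(availability(0), 2))
    print("rejuvenate every 12 h:", round(availability(12), 2))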
Type of publication
journal article (254)
research review (6)
Type of content
peer-reviewed (248)
other academic/artistic (12)
Author/Editor
Bosch, Jan, 1967 (18)
Wohlin, Claes (17)
Petersen, Kai (11)
Weyns, Danny (10)
Šmite, Darja (10)
Staron, Miroslaw, 19 ... (8)
Feldt, Robert, 1972 (8)
Knauss, Eric, 1977 (8)
Gonzalez-Huerta, Jav ... (7)
Mirandola, Raffaela (7)
Runeson, Per (7)
Fucci, Davide, 1985- (6)
Gorschek, Tony, 1972 ... (6)
Torkar, Richard, 197 ... (6)
Steghöfer, Jan-Phili ... (6)
Pelliccione, Patrizi ... (6)
Wnuk, Krzysztof, 198 ... (5)
Lundberg, Lars (5)
Mendez, Daniel (5)
Felderer, Michael, 1 ... (5)
Perez-Palacin, Diego (5)
Heldal, Rogardt, 196 ... (5)
Lundell, Björn (5)
Besker, Terese, 1970 (5)
Mendes, Emilia (5)
Unterkalmsteiner, Mi ... (4)
Berger, Thorsten, 19 ... (4)
Carlson, Jan (4)
Moe, Nils Brede (4)
Berntsson Svensson, ... (4)
Wohlrab, Rebekka, 19 ... (4)
Martini, Antonio, 19 ... (4)
Alégroth, Emil, 1984 ... (4)
Baudry, Benoit (4)
Scandariato, Riccard ... (4)
Fritzson, Peter (4)
Thelin, Thomas (4)
Baldassarre, Maria T ... (3)
Chaudron, Michel, 19 ... (3)
Hebig, Regina (3)
Sjödin, Mikael (3)
Berger, Christian, 1 ... (3)
Gren, Lucas, 1984 (3)
Gorschek, Tony, 1973 (3)
Meding, W. (3)
Rönkkö, Kari (3)
Gamalielsson, Jonas (3)
Britto, Ricardo, 198 ... (3)
Romano, Simone (3)
Scanniello, Giuseppe (3)
University
Blekinge Institute of Technology (84)
Chalmers University of Technology (77)
University of Gothenburg (48)
Mälardalen University (30)
Linnaeus University (26)
Royal Institute of Technology (17)
RISE (14)
Lund University (13)
Linköping University (10)
Malmö University (7)
University of Skövde (7)
Luleå University of Technology (3)
Kristianstad University College (2)
Uppsala University (2)
Mid Sweden University (2)
Umeå University (1)
University of Gävle (1)
Örebro University (1)
Karlstad University (1)
Language
English (260)
Research subject (UKÄ/SCB)
Natural sciences (220)
Engineering and Technology (66)
Social Sciences (19)
Medical and Health Sciences (1)
