SwePub


Result list for query "L773:1573 1367 OR L773:0963 9314"


  • Results 1-47 of 47
   
1.
  • Abbaspour Asadollah, Sara, et al. (author)
  • 10 Years of research on debugging concurrent and multicore software : a systematic mapping study
  • 2017
  • In: Software quality journal. - : Springer Science and Business Media LLC. - 0963-9314 .- 1573-1367. ; 25:1, pp. 49-82
  • Journal article (peer-reviewed)
    • Debugging – the process of identifying, localizing and fixing bugs – is a key activity in software development. Due to issues such as non-determinism and difficulties of reproducing failures, debugging concurrent software is significantly more challenging than debugging sequential software. A number of methods, models and tools for debugging concurrent and multicore software have been proposed, but the body of work partially lacks a common terminology and a more recent view of the problems to solve. This suggests the need for a classification, and an up-to-date comprehensive overview of the area. This paper presents the results of a systematic mapping study in the field of debugging of concurrent and multicore software in the last decade (2005–2014). The study is guided by two objectives: (1) to summarize the recent publication trends and (2) to clarify current research gaps in the field. Through a multi-stage selection process, we identified 145 relevant papers. Based on these, we summarize the publication trend in the field by showing the distribution of publications with respect to year, publication venue, representation of academia and industry, and active research institutes. We also identify research gaps in the field based on attributes such as types of concurrency bugs, types of debugging processes, types of research, and research contributions. The main observations from the study are that during the years 2005–2014: (1) there is no focal conference or venue to publish papers in this area, hence a large variety of conference and journal venues (90) is used to publish relevant papers; (2) in terms of publication contribution, academia was more active in this area than industry; (3) most publications in the field address the data race bug; (4) bug identification is the most common stage of debugging addressed by articles in the period; (5) six types of research approaches are found, with solution proposals being the most common one; and (6) the published papers essentially focus on four different types of contributions, with "methods" being the most common type. We can further conclude that quite a number of aspects are still not sufficiently covered in the field, most notably (1) exploring correction and fixing bugs in terms of debugging process; (2) order violation, suspension and starvation in terms of concurrency bugs; (3) validation and evaluation research in the matter of research type; and (4) metrics in terms of research contribution. It is clear that the concurrent, parallel and multicore software community needs broader studies in debugging. This systematic mapping study can help direct such efforts.
2.
  • Alégroth, Emil, 1984-, et al. (author)
  • Characteristics that affect Preference of Decision Models for Asset Selection : An Industrial Questionnaire Survey
  • 2020
  • In: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367. ; 28:4, pp. 1675-1707
  • Journal article (peer-reviewed)
    • Modern software development relies on a combination of development and re-use of technical assets, e.g. software components, libraries and APIs. In the past, re-use was mostly conducted with internal assets, but today external assets – open source, commercial off-the-shelf (COTS) and assets developed through outsourcing – are also common. This access to more asset alternatives presents new challenges regarding which assets to optimally choose and how to make this decision. To support decision-makers, decision theory has been used to develop decision models for asset selection. However, very little industrial data has been presented in the literature about the usefulness, or even perceived usefulness, of these models. Additionally, only limited information has been presented about which model characteristics determine practitioner preference for one model over another. Objective: The objective of this work is to evaluate which characteristics of decision models for asset selection determine industrial practitioner preference of a model, when given the choice of a decision model of high precision or a model with high speed. Method: An industrial questionnaire survey is performed where a total of 33 practitioners, of varying roles, from 18 companies are tasked to compare two decision models for asset selection. Textual analysis and formal and descriptive statistics are then applied to the survey responses to answer the study's research questions. Results: The study shows that the practitioners had a clear preference for the decision model that emphasised speed over the one that emphasised decision precision. This preference was attributed to the model being perceived as faster, having lower complexity, being more flexible in use for different decisions, being more agile in how it could be used in operation, its emphasis on people, its emphasis on "good enough" precision, and its ability to fail fast if a decision was a failure. Hence, seven characteristics were identified that the practitioners considered important for their acceptance of the model. Conclusion: Industrial practitioner preference, which relates to acceptance, of decision models for asset selection is dependent on multiple characteristics that must be considered when developing a model for different types of decisions, such as operational day-to-day decisions as well as more critical tactical or strategic decisions. The main contribution of this work is the seven identified characteristics, which can serve as industrial requirements for future research on decision models for asset selection.
3.
  • Alferez, Mauricio, et al. (author)
  • Modeling variability in the video domain : language and experience report
  • 2019
  • In: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367. ; 27:1, pp. 307-347
  • Journal article (peer-reviewed)
    • In an industrial project, we addressed the challenge of developing a software-based video generator such that consumers and providers of video processing algorithms can benchmark them on a wide range of video variants. This article aims to report on our positive experience in modeling, controlling, and implementing software variability in the video domain. We describe how we have designed and developed a variability modeling language, called VM, resulting from two years of close collaboration with industrial partners. We expose the specific requirements and advanced variability constructs we developed and used to characterize and derive variations of video sequences. The results of our experiments and industrial experience show that our solution is effective for modeling complex variability information and supports the synthesis of hundreds of realistic video variants. From the software language perspective, we learned that basic variability mechanisms are useful but not enough; attributes and multi-features are of primary importance; meta-information and specific constructs are relevant for scalable and purposeful reasoning over variability models. From the video domain and software perspective, we report on the practical benefits of a variability approach. With more automation and control, practitioners can now envision benchmarking video algorithms over large, diverse, controlled, yet realistic datasets (videos that mimic real recorded videos) – something impossible at the beginning of the project.
4.
  • Andrews, A, et al. (author)
  • A framework for design tradeoffs
  • 2005
  • In: Software Quality Journal. - : Springer Science and Business Media LLC. - 0963-9314 .- 1573-1367. ; 13:4, pp. 377-405
  • Journal article (peer-reviewed)
    • Designs almost always require tradeoffs between competing design choices to meet system requirements. We present a framework for evaluating design choices with respect to meeting competing requirements. Specifically, we develop a model to estimate the performance of a UML design subject to changing levels of security and fault-tolerance. This analysis gives us a way to identify design solutions that are infeasible. Multi-criteria decision making techniques are applied to evaluate the remaining feasible alternatives. The method is illustrated with two examples: a small sensor network and a system for controlling traffic lights.
5.
  • Borg, Markus, et al. (author)
  • Ergo, SMIRK is safe : a safety case for a machine learning component in a pedestrian automatic emergency brake system
  • 2023
  • In: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367. ; 31:2, pp. 335-
  • Journal article (peer-reviewed)
    • Integration of machine learning (ML) components in critical applications introduces novel challenges for software certification and verification. New safety standards and technical guidelines are under development to support the safety of ML-based systems, e.g., ISO 21448 SOTIF for the automotive domain and the Assurance of Machine Learning for use in Autonomous Systems (AMLAS) framework. SOTIF and AMLAS provide high-level guidance but the details must be chiseled out for each specific case. We initiated a research project with the goal to demonstrate a complete safety case for an ML component in an open automotive system. This paper reports results from an industry-academia collaboration on safety assurance of SMIRK, an ML-based pedestrian automatic emergency braking demonstrator running in an industry-grade simulator. We demonstrate an application of AMLAS on SMIRK for a minimalistic operational design domain, i.e., we share a complete safety case for its integrated ML-based component. Finally, we report lessons learned and provide both SMIRK and the safety case under an open-source license for the research community to reuse. © 2023, The Author(s).
6.
7.
  • Butler, Simon, et al. (author)
  • On business adoption and use of reproducible builds for open and closed source software
  • 2023
  • In: Software quality journal. - : Springer Nature Switzerland AG. - 0963-9314 .- 1573-1367. ; 31:3, pp. 687-719
  • Journal article (peer-reviewed)
    • Reproducible builds (R-Bs) are software engineering practices that reliably create bit-for-bit identical binary executable files from specified source code. R-Bs are applied in some open source software (OSS) projects and distributions to allow verification that the distributed binary has been built from the released source code. The use of R-Bs has been advocated in software maintenance and R-Bs are applied in the development of some OSS security applications. Nonetheless, industry application of R-Bs appears limited, and we seek to understand whether awareness is low or if significant technical and business reasons prevent wider adoption. Through interviews with software practitioners and business managers, this study explores the utility of applying R-Bs in businesses in the primary and secondary software sectors and the business and technical reasons supporting their adoption. We find businesses use R-Bs in the safety-critical and security domains, and R-Bs are valuable for traceability and support collaborative software development. We also found that R-Bs are valued as engineering processes and are seen as a badge of software quality, but without a tangible value proposition. There are good engineering reasons to use R-Bs in industrial software development, and the principle of establishing correspondence between source code and binary offers opportunities for the development of further applications.
8.
  • Börstler, Jürgen, et al. (author)
  • Beauty and the Beast: on the readability of object-oriented example programs
  • 2016
  • In: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367. ; 24:2, pp. 231-246
  • Journal article (peer-reviewed)
    • Some solutions to a programming problem are more elegant or more simple than others and thus more understandable for students. We review desirable properties of example programs from a cognitive and a measurement point of view. Certain cognitive aspects of example programs are captured by common software measures, but they are not sufficient to capture a key aspect of understandability: readability. We propose and discuss a simple readability measure for software, SRES, and apply it to object-oriented textbook examples. Our results show that readability measures correlate well with human perceptions of quality. Compared with other readability measures, SRES is less sensitive to commenting and white-space. These results also have implications for software maintainability measures.
9.
  • Chatzipetrou, Panagiota, Assistant Professor, 1984-, et al. (author)
  • An experience-based framework for evaluating alignment of software quality goals
  • 2015
  • In: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367. ; 23:4, pp. 567-594
  • Journal article (peer-reviewed)
    • Efficient quality management of software projects requires knowledge of how various groups of stakeholders involved in software development prioritize the product and project goals. Agreements or disagreements among members of a team may originate from inherent groupings, depending on various professional or other characteristics. These agreements are not easily detected by conventional practices (discussions, meetings, etc.) since the natural language expressions are often obscuring, subjective, and prone to misunderstandings. It is therefore essential to have objective tools that can measure the alignment among the members of a team; especially critical for the software development is the degree of alignment with respect to the prioritization goals of the software product. The paper proposes an experience-based framework of statistical and graphical techniques for the systematic study of prioritization alignment, such as hierarchical cluster analysis, analysis of cluster composition, correlation analysis, and closest agreement-directed graph. This framework can provide a thorough and global picture of a team's prioritization perspective and can potentially aid managerial decisions regarding team composition and leadership. The framework is applied and illustrated in a study related to global software development where 65 individuals in different roles, geographic locations and professional relationships with a company, prioritize 24 goals from individual perception of the actual situation and for an ideal situation.
10.
  • Chatzipetrou, Panagiota, Assistant Professor, 1984-, et al. (author)
  • Component attributes and their importance in decisions and component selection
  • 2020
  • In: Software quality journal. - : Springer Science and Business Media LLC. - 0963-9314 .- 1573-1367. ; 28, pp. 567-593
  • Journal article (peer-reviewed)
    • Component-based software engineering is a common approach in the development and evolution of contemporary software systems. Different component sourcing options are available, such as: (1) Software developed internally (in-house), (2) Software developed outsourced, (3) Commercial off-the-shelf software, and (4) Open-Source Software. However, there is little available research on what attributes of a component are the most important ones when selecting new components. The objective of this study is to investigate what matters the most to industry practitioners when they decide to select a component. We conducted a cross-domain anonymous survey with industry practitioners involved in component selection. First, the practitioners selected the most important attributes from a list. Next, they prioritized their selection using the Hundred-Dollar ($100) test. We analyzed the results using compositional data analysis. The results of this exploratory analysis showed that cost was clearly considered to be the most important attribute for component selection. Other important attributes for the practitioners were: support of the component, longevity prediction, and level of off-the-shelf fit to product. Moreover, several practitioners still consider in-house software development to be the sole option when adding or replacing a component. On the other hand, there is a trend to complement it with other component sourcing options and, apart from cost, different attributes factor into their decision. Furthermore, in our analysis, nonparametric tests and biplots were used to further investigate the practitioners’ inherent characteristics. It seems that smaller and larger organizations have different views on what attributes are the most important, and the most surprising finding is their contrasting views on the cost attribute: larger organizations with mature products are considerably more cost aware.
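The Hundred-Dollar ($100) test mentioned above has each respondent distribute 100 points across the candidate attributes. A minimal sketch of how such allocations can be closed to proportions and aggregated into a ranking is shown below; the attribute names and numbers are invented for illustration, and the paper itself applies full compositional data analysis rather than this plain averaging.

```python
# Illustrative sketch of aggregating Hundred-Dollar ($100) test responses:
# each respondent distributes 100 points across attributes; allocations
# are closed to proportions and averaged to rank the attributes.

def close(allocation):
    """Normalize a point allocation so its values sum to 1."""
    total = sum(allocation.values())
    return {attr: pts / total for attr, pts in allocation.items()}

def rank_attributes(responses):
    """Average the closed allocations and sort attributes by weight."""
    attrs = responses[0].keys()
    closed = [close(r) for r in responses]
    mean = {a: sum(c[a] for c in closed) / len(closed) for a in attrs}
    return sorted(mean.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical survey responses (attribute names are placeholders).
responses = [
    {"cost": 50, "support": 30, "longevity": 20},
    {"cost": 40, "support": 20, "longevity": 40},
    {"cost": 60, "support": 30, "longevity": 10},
]
print(rank_attributes(responses))  # cost ranks first
```

Closing each allocation before averaging keeps respondents who used fewer than 100 points from being underweighted, which is the basic idea behind treating the responses as compositional data.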
11.
  • Ciccozzi, Federico, et al. (author)
  • Architecture optimization: Speed or accuracy? Both!
  • 2018
  • In: Software quality journal. - : Springer Science and Business Media LLC. - 0963-9314 .- 1573-1367. ; 26:2, pp. 661-684
  • Journal article (peer-reviewed)
    • Embedded systems are becoming more and more complex, thus demanding innovative means to tame their challenging development. Among others, early architecture optimization represents a crucial activity in the development of embedded systems to maximise the usage of their limited resources and to respect their real-time requirements. Typically, architecture optimization seeks good architecture candidates based on model-based analysis. Leveraging abstractions and estimates, this analysis usually produces approximations useful for comparing architecture candidates. Nonetheless, approximations do not provide enough accuracy in estimating crucial extra-functional properties. In this article, we provide an architecture optimization framework that profits from both the speed of model-based predictions and the accuracy of execution-based measurements. Model-based optimization rapidly finds a good architecture candidate, which is refined through optimization based on monitored executions of automatically generated code. Moreover, the framework enables the developer to leverage her optimization experience. More specifically, the developer can use runtime monitoring of generated code execution to manually adjust task allocation at modelling level, and commit the changes without halting execution. In the article, our architecture optimization mechanism is first described from a general point of view and then exploited for optimizing the allocation of software tasks to the processing cores of a multicore embedded system; we target extra-functional properties that can be concretely represented and automatically compared for different architectural alternatives (such as memory consumption, energy consumption, or response time).
12.
  • Eken, Beyza, et al. (author)
  • An empirical study on the effect of community smells on bug prediction
  • 2021
  • In: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367. ; 29, pp. 159-194
  • Journal article (peer-reviewed)
    • Community-aware metrics through socio-technical developer networks or organizational structures have already been studied in the software bug prediction field. Community smells are also proposed to identify communication and collaboration patterns in developer communities. Prior work reports a statistical association between community smells and code smells identified in software modules. We investigate the contribution of community smells to predicting bug-prone classes and compare their contribution with that of code smell-related information and state-of-the-art process metrics. We conduct our empirical analysis on ten open-source projects with varying sizes and buggy and smelly class ratios. We build seven different bug prediction models to answer three RQs: a baseline model using a state-of-the-art metric set, three models each incorporating one particular smell-related metric set (community smells, code smells, or code smell intensity) into the baseline, and three models incorporating combinations of smell-related metrics into the baseline. The performance of these models is reported in terms of recall, false positive rate, F-measure and AUC, and statistically compared using Scott-Knott ESD tests. Community smells improve the prediction performance of a baseline model by up to 3% in terms of AUC, while code smell intensity improves the baseline models by up to 40% in terms of F-measure and up to 17% in terms of AUC. The conclusions are significantly influenced by the validation strategies used, the algorithms, and the data characteristics of the selected projects. While the code smell intensity metric captures the most information about technical flaws in predicting bug-prone classes, the community smells also contribute to bug prediction models by revealing communication and collaboration flaws in software development teams. Future research is needed to capture the communication patterns through multiple channels and to understand whether socio-technical flaws could be used in a cross-project bug prediction setting.
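The evaluation measures named in this abstract (recall, false positive rate, F-measure, AUC) can be sketched in a few lines of plain Python; the labels and scores below are made up for illustration, and the study additionally compares models with Scott-Knott ESD tests, which this sketch does not cover.

```python
# Sketch of the bug prediction evaluation metrics from the abstract,
# computed from true labels (1 = bug-prone) and predicted scores.

def recall_fpr_f1(y_true, y_pred):
    """Recall, false positive rate, and F-measure from binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    recall = tp / (tp + fn)
    fpr = fp / (fp + tn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, fpr, f1

def auc(y_true, scores):
    """AUC as the probability a bug-prone class outranks a clean one."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical labels and model scores for six classes.
y_true = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.2, 0.6, 0.1]
y_pred = [1 if s >= 0.5 else 0 for s in scores]
print(recall_fpr_f1(y_true, y_pred), auc(y_true, scores))
```

The rank-based AUC formulation explains why AUC is insensitive to the 0.5 classification threshold that recall, FPR and F-measure depend on.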
13.
  • Engström, Emelie, et al. (author)
  • SERP-test : a taxonomy for supporting industry-academia communication
  • 2017
  • In: Software quality journal. - : Springer-Verlag New York. - 0963-9314 .- 1573-1367. ; 25:4, pp. 1269-1305
  • Journal article (peer-reviewed)
    • This paper presents the construction and evaluation of SERP-test, a taxonomy aimed to improve communication between researchers and practitioners in the area of software testing. SERP-test can be utilized for direct communication in industry–academia collaborations. It may also facilitate indirect communication between practitioners adopting software engineering research and researchers who are striving for industry relevance. SERP-test was constructed through a systematic and goal-oriented approach which included literature reviews and interviews with practitioners and researchers. SERP-test was evaluated through an online survey and by utilizing it in an industry–academia collaboration project. SERP-test comprises four facets along which both research contributions and practical challenges may be classified: Intervention, Scope, Effect target and Context constraints. This paper explains the available categories for each of these facets (i.e., their definitions and rationales) and presents examples of categorized entities. Several tasks may benefit from SERP-test, such as formulating research goals from a problem perspective, describing practical challenges in a researchable fashion, analyzing primary studies in a literature review, or identifying relevant points of comparison and generalization of research.
14.
  • Felderer, Michael, 1978-, et al. (author)
  • Comprehensibility of system models during test design : A controlled experiment comparing UML activity diagrams and state machines
  • 2019
  • In: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367. ; 27:1, pp. 125-147
  • Journal article (peer-reviewed)
    • UML activity diagrams and state machines are both used for modeling system behavior from the user perspective and are frequently the basis for deriving system test cases. In practice, system test cases are often derived manually from UML activity diagrams or state machines. For this task, comprehensibility of respective models is essential and a relevant question for practice to support model selection and design, as well as subsequent test derivation. Therefore, the objective of this paper is to compare the comprehensibility of UML activity diagrams and state machines during manual test case derivation. We investigate the comprehensibility of UML activity diagrams and state machines in a controlled student experiment. Three measures for comprehensibility have been investigated: (1) the self-assessed comprehensibility, (2) the actual comprehensibility measured by the correctness of answers to comprehensibility questions, and (3) the number of errors made during test case derivation. The experiment was performed and internally replicated with overall 84 participants divided into three groups at two institutions. Our experiment indicates that activity diagrams are more comprehensible but also more error-prone with regard to manual test case derivation, and we discuss how these results can improve system modeling and test case design.
15.
  • Flemström, Daniel, et al. (author)
  • Similarity-based prioritization of test case automation
  • 2018
  • In: Software quality journal. - : Springer Science and Business Media LLC. - 0963-9314 .- 1573-1367. ; 26:4, pp. 1421-1449
  • Journal article (peer-reviewed)
    • The importance of efficient software testing procedures is driven by an ever-increasing system complexity as well as global competition. In the particular case of manual test cases at the system integration level, where thousands of test cases may be executed before release, time must be well spent in order to test the system as completely and as efficiently as possible. Automating a subset of the manual test cases, i.e., translating the manual instructions to automatically executable code, is one way of decreasing the test effort. It is further common that test cases exhibit similarities, which can be exploited through reuse when automating a test suite. In this paper, we investigate the potential for reducing test effort by ordering the test cases before such automation, given that we can reuse already automated parts of test cases. In our analysis, we investigate several approaches for prioritization in a case study at a large Swedish vehicular manufacturer. The study analyzes the effects with respect to test effort, on four projects with a total of 3919 integration test cases constituting 35,180 test steps, written in natural language. The results show that for the four projects considered, the difference in expected manual effort between the best and the worst order found is on average 12 percentage points. The results also show that our proposed prioritization method is nearly as good as more resource demanding meta-heuristic approaches at a fraction of the computational time. Based on our results, we conclude that the order of automation is important when the set of test cases contains similar steps (instructions) that cannot be removed, but are possible to reuse. More precisely, the order is important with respect to how quickly the manual test execution effort decreases for a set of test cases that are being automated.
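The core idea – automating test cases in an order that maximizes reuse of already-automated steps – can be illustrated with a simple greedy heuristic. This is a sketch under assumed inputs (test cases as lists of natural-language steps, with invented names), not the paper's actual prioritization method, which differs in detail.

```python
# Illustrative greedy ordering for test case automation: automating a
# step makes it reusable by every later test case that shares it, so the
# greedy rule picks the test case with the fewest not-yet-automated
# steps first, driving manual effort down as quickly as possible.

def automation_order(test_cases):
    """Return an order that greedily maximizes reuse of automated steps."""
    automated = set()
    remaining = dict(test_cases)  # name -> list of steps
    order = []
    while remaining:
        # Cost of a test case = number of steps not yet automated.
        name = min(remaining,
                   key=lambda n: len(set(remaining[n]) - automated))
        order.append(name)
        automated.update(remaining.pop(name))
    return order

# Hypothetical integration test cases with shared steps.
suite = {
    "TC1": ["start engine", "check dashboard", "stop engine"],
    "TC2": ["start engine", "stop engine"],
    "TC3": ["open door", "close door"],
}
print(automation_order(suite))  # TC2 first: cheapest, and it seeds reuse for TC1
```

After TC2 is automated, TC1 costs only one new step, which is why a similarity-aware order reduces effort faster than an arbitrary one.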
16.
  • Fotrousi, Farnaz, et al. (author)
  • The effect of requests for user feedback on Quality of Experience
  • 2018
  • In: Software quality journal. - : SPRINGER. - 0963-9314 .- 1573-1367. ; 26:2, pp. 385-415
  • Journal article (peer-reviewed)
    • Companies are interested in knowing how users experience and perceive their products. Quality of Experience (QoE) is a measurement that is used to assess the degree of delight or annoyance in experiencing a software product. To assess QoE, we have used a feedback tool integrated into a software product to ask users about their QoE ratings and to obtain information about their rationales for good or bad QoEs. It is known that requests for feedback may disturb users; however, little is known about the subjective reasoning behind this disturbance or about whether this disturbance negatively affects the QoE of the software product for which the feedback is sought. In this paper, we present a mixed qualitative-quantitative study with 35 subjects that explores the relationship between feedback requests and QoE. The subjects experienced a requirement-modeling mobile product, which was integrated with a feedback tool. During and at the end of the experience, we collected the users' perceptions of the product and the feedback requests. Based on the users' rationales for being disturbed by the feedback requests, such as "early feedback," "interruptive requests," "frequent requests," and "apparently inappropriate content," we modeled feedback requests. The model defines feedback requests using a set of five-tuple variables: "task," "timing" of the task for issuing the feedback requests, user's "expertise-phase" with the product, the "frequency" of feedback requests about the task, and the "content" of the feedback request. Configuration of these parameters might drive the participants' perceived disturbances. We also found that the disturbances generated by triggering user feedback requests have negligible impacts on the QoE of software products. These results imply that software product vendors may trust users' feedback even when the feedback requests disturb the users.
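The five-tuple feedback-request model described in the abstract can be written out directly as a small data structure. The field types and the example values below are assumptions for illustration only; the paper defines the five variables conceptually, not as code.

```python
# The five-tuple feedback-request model (task, timing, expertise-phase,
# frequency, content) as a dataclass; types and values are illustrative.
from dataclasses import dataclass

@dataclass
class FeedbackRequest:
    task: str             # the user task the request is tied to
    timing: str           # when, relative to the task, the request is issued
    expertise_phase: str  # user's phase of expertise with the product
    frequency: int        # how often requests about this task are issued
    content: str          # what the request asks about

req = FeedbackRequest(task="model requirement", timing="after completion",
                      expertise_phase="novice", frequency=1,
                      content="rate your experience")
print(req.frequency)
```

Configurations of these five parameters are what the study relates to the perceived disturbance of each request.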
17.
  • Franke, Ulrik, et al. (author)
  • Availability of enterprise IT systems : an expert-based Bayesian framework
  • 2012
  • In: Software quality journal. - : Springer Science and Business Media LLC. - 0963-9314 .- 1573-1367. ; 20:2, pp. 369-394
  • Journal article (peer-reviewed)
    • Ensuring the availability of enterprise IT systems is a challenging task. The factors that can bring systems down are numerous, and their impact on various system architectures is difficult to predict. At the same time, maintaining high availability is crucial in many applications, ranging from control systems in the electric power grid, over electronic trading systems on the stock market to specialized command and control systems for military and civilian purposes. This paper describes a Bayesian decision support model, designed to help enterprise IT systems decision makers evaluate the consequences of their decisions by analyzing various scenarios. The model is based on expert elicitation from 50 experts on IT systems availability, obtained through an electronic survey. The Bayesian model uses a leaky Noisy-OR method to weigh together the expert opinions on 16 factors affecting systems availability. Using this model, the effect of changes to a system can be estimated beforehand, providing decision support for improvement of enterprise IT systems availability. The Bayesian model thus obtained is then integrated within a standard, reliability block diagram-style, mathematical model for assessing availability on the architecture level. In this model, the IT systems play the role of building blocks. The overall assessment framework thus addresses measures to ensure high availability both on the level of individual systems and on the level of the entire enterprise architecture. Examples are presented to illustrate how the framework can be used by practitioners aiming to ensure high availability.
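The leaky Noisy-OR combination named in the abstract has a compact closed form: each present causal factor independently triggers the effect with its own probability, and a leak term covers causes outside the model. The sketch below shows that computation; the factor names and probabilities are invented for illustration and are not taken from the paper's expert survey.

```python
# Minimal leaky Noisy-OR: P(effect) = 1 - (1 - leak) * prod(1 - p_i)
# over the active causes i. Factors and numbers are hypothetical.

def leaky_noisy_or(p_leak, causes):
    """P(effect) from (p_i, active?) pairs and a leak probability."""
    q = 1.0 - p_leak
    for p_i, active in causes:
        if active:
            q *= 1.0 - p_i
    return 1.0 - q

# Hypothetical availability factors: (probability of causing downtime, present?)
factors = [(0.30, True),   # e.g. lack of redundancy
           (0.20, True),   # e.g. poor change control
           (0.10, False)]  # e.g. no monitoring (absent in this scenario)
print(leaky_noisy_or(0.05, factors))
```

The appeal of Noisy-OR for expert elicitation is that each of the 16 surveyed factors needs only a single per-cause probability, rather than a full joint conditional table.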
18.
  • Helali Moghadam, Mahshid, et al. (author)
  • An autonomous performance testing framework using self-adaptive fuzzy reinforcement learning
  • 2022
  • In: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367. ; pp. 127-159
  • Journal article (peer-reviewed)
    • Test automation brings the potential to reduce costs and human effort, but several aspects of software testing remain challenging to automate. One such example is automated performance testing to find performance breaking points. Current approaches to tackle automated generation of performance test cases mainly involve using source code or system model analysis or use-case-based techniques. However, source code and system models might not always be available at testing time. On the other hand, if the optimal performance testing policy for the intended objective in a testing process instead could be learned by the testing system, then test automation without advanced performance models could be possible. Furthermore, the learned policy could later be reused for similar software systems under test, thus leading to higher test efficiency. We propose SaFReL, a self-adaptive fuzzy reinforcement learning-based performance testing framework. SaFReL learns the optimal policy to generate performance test cases through an initial learning phase, then reuses it during a transfer learning phase, while keeping the learning running and updating the policy in the long term. Through multiple experiments in a simulated performance testing setup, we demonstrate that our approach generates the target performance test cases for different programs more efficiently than a typical testing process and performs adaptively without access to source code and performance models. © 2021, The Author(s).
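At the heart of a framework like SaFReL is a reinforcement learning loop that learns which action to apply next while testing. The sketch below shows only the bare tabular Q-learning update and an epsilon-greedy policy; SaFReL's actual design adds fuzzy state abstraction and transfer learning on top, and the states, actions, and reward here are placeholders, not the paper's.

```python
# Bare tabular Q-learning step and epsilon-greedy action choice,
# with hypothetical performance-testing states and actions.
import random

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

def choose_action(q, state, epsilon=0.1, rng=random):
    """Epsilon-greedy policy over the Q-table."""
    if rng.random() < epsilon:
        return rng.choice(list(q[state]))
    return max(q[state], key=q[state].get)

# The agent picks which resource to stress until the system under
# test approaches a performance breaking point (placeholder model).
q = {"normal": {"add_cpu_load": 0.0, "add_mem_load": 0.0},
     "degraded": {"add_cpu_load": 0.0, "add_mem_load": 0.0}}
q_update(q, "normal", "add_cpu_load", reward=1.0, next_state="degraded")
print(q["normal"]["add_cpu_load"])
```

Because the learned Q-table is just data, it can be carried over to a similar system under test, which is the intuition behind the transfer learning phase described in the abstract.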
  •  
19.
  • Ivarsson, Martin, 1980, et al. (författare)
  • Tool support for disseminating and improving development practices
  • 2012
  • Ingår i: Software Quality Journal. - : Springer Science and Business Media LLC. - 1573-1367 .- 0963-9314. ; 20:1, s. 173-199
  • Tidskriftsartikel (refereegranskat)abstract
    • Knowledge management in software engineering and software process improvement activities pose challenges as initiatives are deployed. Most existing approaches are either too expensive to deploy or do not take an organization's specific needs into consideration. There is thus a need for scalable improvement approaches that leverage knowledge already residing in the organizations. This paper presents tool support for an Experience Factory approach for disseminating and improving practices used in an organization. Experiences from using practices in development projects are captured in postmortems and provide iteratively improved decision support for identifying what practices work well and what needs improvement. An initial evaluation of using the tool for organizational improvement has been performed utilizing both academia and industry. The results from the evaluation indicate that organizational characteristics influence how practices and experiences can be used. Experiences collected in postmortems are estimated to have little effect on improvements to practices used throughout the organization. However, in organizations where different practices are used in different parts of the organization, making practices available together with experiences from use, as well as having context information, can influence decisions on what practices to use in projects.
  •  
20.
  • Jabangwe, Ronald, et al. (författare)
  • Handover of managerial responsibilities in global software development : a case study of source code evolution and quality
  • 2015
  • Ingår i: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367. ; 23:4, s. 539-566
  • Tidskriftsartikel (refereegranskat)abstract
    • Studies report on the negative effect on quality in global software development (GSD) due to communication and coordination-related challenges. However, empirical studies reporting on the magnitude of the effect are scarce. This paper presents findings from an embedded explanatory case study on the change in quality over time, across multiple releases, for products that were developed in a GSD setting. The GSD setting involved periods of distributed development between geographically dispersed sites as well as a handover of project management responsibilities between the involved sites. Investigations were performed on two medium-sized products from a company that is part of a large multinational corporation. Quality is investigated quantitatively using defect data and measures that quantify two source code properties, size and complexity. Observations were triangulated with subjective views from company representatives. There were no observable indications that the distribution of work or handover of project management responsibilities had an impact on quality on both products. Among the product-, process- and people-related success factors, we identified well-designed product architectures, early handover planning and support from the sending site to the receiving site after the handover and skilled employees at the involved sites. Overall, these results can be useful input for decision-makers who are considering distributing development work between globally dispersed sites or handing over project management responsibilities from one site to another. Moreover, our study shows that analyzing the evolution of size and complexity properties of a product’s source code can provide valuable information to support decision-making during similar projects. 
Finally, the strategy used by the company to relocate responsibilities can also be considered as an alternative for software transfers, which have been linked with a decline in efficiency, productivity and quality.
  •  
21.
  • Karlsson, Stefan, et al. (författare)
  • Exploring API behaviours through generated examples
  • 2024
  • Ingår i: Software Quality Journal. - : SPRINGER. - 1573-1367 .- 0963-9314. ; 32:2, s. 729-763
  • Tidskriftsartikel (refereegranskat)abstract
    • Understanding the behaviour of a system's API can be hard. Giving users access to relevant examples of how an API behaves has been shown to make this easier for them. In addition, such examples can be used to verify expected behaviour or identify unwanted behaviours. Methods for automatically generating examples have existed for a long time. However, state-of-the-art methods rely on either white-box information, such as source code, or on formal specifications of the system behaviour. But what if you do not have access to either? This may be the case, for example, when interacting with a third-party API. In this paper, we present an approach to automatically generate relevant examples of behaviours of an API, without requiring either source code or a formal specification of behaviour. Evaluation on an industry-grade REST API shows that our method can produce small and relevant examples that can help engineers to understand the system under exploration.
  •  
22.
  • Kienle, Holger, et al. (författare)
  • System-specific static code analyses : a case study in the complex embedded systems domain
  • 2012
  • Ingår i: Software quality journal. - : Springer Science and Business Media LLC. - 0963-9314 .- 1573-1367. ; 20:2, s. 337-367
  • Tidskriftsartikel (refereegranskat)abstract
    • In this paper, we are exploring the approach to utilize system-specific static analyses of code with the goal to improve software quality for specific software systems. Specialized analyses, tailored for a particular system, make it possible to take advantage of system/domain knowledge that is not available to more generic analyses. Furthermore, analyses can be selected and/or developed in order to best meet the challenges and specific issues of the system at hand. As a result, such analyses can be used as a complement to more generic code analysis tools because they are likely to have a better impact on (business) concerns such as improving certain software quality attributes and reducing certain classes of failures. We present a case study of a large, industrial embedded system, giving examples of what kinds of analyses could be realized and demonstrate the feasibility of implementing such analyses. We synthesize lessons learned based on our case study and provide recommendations on how to realize system-specific analyses and how to get them adopted by industry.
  •  
23.
  • Lagerström, Robert, et al. (författare)
  • Architecture analysis of enterprise systems modifiability - A metamodel for software change cost estimation
  • 2010
  • Ingår i: Software quality journal. - : Springer Publishing Company. - 0963-9314 .- 1573-1367. ; 18:4, s. 437-468
  • Tidskriftsartikel (refereegranskat)abstract
    • Enterprise architecture models can be used in order to increase the general understanding of enterprise systems and specifically to perform various kinds of analysis. The present paper proposes a metamodel for enterprise systems modifiability analysis, i.e. assessing the cost of making changes to enterprise-wide systems. The enterprise architecture metamodel is formalized using probabilistic relational models, which enables the combination of regular entity-relationship modeling aspects with means to perform enterprise architecture analysis. The content of the presented metamodel is validated based on survey and workshop data and its estimation capability is tested with data from 21 software change projects. To illustrate the applicability of the metamodel an instantiated architectural model based on a software change project conducted at a large Nordic transportation company is detailed.
  •  
24.
  • Lagerström, Robert, et al. (författare)
  • Identifying factors affecting software development cost and productivity
  • 2012
  • Ingår i: Software Quality Journal. - : Springer Science and Business Media LLC. - 0963-9314 .- 1573-1367. ; 20:2, s. 395-417
  • Tidskriftsartikel (refereegranskat)abstract
    • Software systems of today are often complex, making development costs difficult to estimate. This paper uses data from 50 projects performed at one of the largest banks in Sweden to identify factors that have an impact on software development cost. Correlation analysis of the relationship between factor states and project costs was assessed using ANOVA and regression analysis. Ten out of the original 31 factors turned out to have an impact on software development project cost at the Swedish bank including the: number of function points, involved risk, number of budget revisions, primary platform, project priority, commissioning body's unit, commissioning body, number of project participants, project duration, and number of consultants. In order to be able to compare projects of different size and complexity, this study also considers the software development productivity defined as the amount of function points per working hour in a project. The study at the bank indicates that the productivity is affected by factors such as performance of estimation and prognosis efforts, project type, number of budget revisions, existence of testing conductor, presentation interface, and number of project participants. A discussion addressing how the productivity factors relate to cost estimation models and their factors is presented. Some of the factors found to have an impact on cost are already included in estimation models such as COCOMO II, TEAMATe, and SEER-SEM, for instance function points and software platform. Thus, this paper validates these well-known factors for cost estimation. However, several of the factors found in this study are not included in established models for software development cost estimation. Thus, this paper also provides indications for possible extensions of these models.
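The productivity measure this study uses is simple enough to state directly: function points delivered per working hour. The figures below are invented, purely to show the unit.

```python
def productivity(function_points, working_hours):
    """Productivity as defined in the study: function points per working hour."""
    return function_points / working_hours

# Invented: a 300-function-point project that consumed 1200 working hours.
p = productivity(300, 1200)
```

Normalizing by effort in this way is what allows projects of different size and complexity to be compared.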
  •  
25.
  • Lenhard, Jörg, et al. (författare)
  • Exploring the suitability of source code metrics for indicating architectural inconsistencies
  • 2018
  • Ingår i: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367.
  • Tidskriftsartikel (refereegranskat)abstract
    • Software architecture degradation is a phenomenon that frequently occurs during software evolution. Source code anomalies are one of the several aspects that potentially contribute to software architecture degradation. Many techniques for automating the detection of such anomalies are based on source code metrics. It is, however, unclear how accurate these techniques are in identifying the architecturally relevant anomalies in a system. The objective of this paper is to shed light on the extent to which source code metrics on their own can be used to characterize classes contributing to software architecture degradation. We performed a multi-case study on three open-source systems for each of which we gathered the intended architecture and data for 49 different source code metrics taken from seven different code quality tools. This data was analyzed to explore the links between architectural inconsistencies, as detected by applying reflexion modeling, and metric values indicating potential design problems at the implementation level. The results show that there does not seem to be a direct correlation between metrics and architectural inconsistencies. For many metrics, however, classes more problematic as indicated by their metric value seem significantly more likely to contribute to inconsistencies than less problematic classes. In particular, the fan-in, a class's public API, and method counts seem to be suitable indicators. The fan-in metric seems to be a particularly interesting indicator, as class size does not seem to have a confounding effect on this metric. This finding may be useful for focusing code restructuring efforts on architecturally relevant metrics in case the intended architecture is not explicitly specified and to further improve architecture recovery and consistency checking tool support.
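The fan-in metric the study singles out counts how many other classes depend on a given class. A minimal sketch over a hypothetical dependency list (the class names are invented):

```python
def fan_in(dependencies, target):
    """Fan-in of `target`: number of distinct classes that depend on it
    (self-dependencies excluded)."""
    return len({src for src, dst in dependencies
                if dst == target and src != target})

# Hypothetical dependency edges (source class, used class):
deps = [("OrderService", "Logger"), ("Billing", "Logger"),
        ("Logger", "Formatter"), ("OrderService", "Logger")]
```

A high fan-in flags a widely used class whose misplacement in the module structure would create many architectural inconsistencies at once.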
  •  
26.
  • Lindholm, Christin, et al. (författare)
  • A case study on software risk analysis and planning in medical device development
  • 2014
  • Ingår i: Software Quality Journal. - : Springer Science and Business Media LLC. - 0963-9314 .- 1573-1367. ; 22:3, s. 469-497
  • Tidskriftsartikel (refereegranskat)abstract
    • Software failures in medical devices can lead to catastrophic situations. Therefore, it is crucial to handle software-related risks when developing medical devices, and there is a need for further analysis of how this type of risk management should be conducted. The objective of this paper is to collect and summarise experiences from conducting risk management with an organisation developing medical devices. Specific focus is put on the first steps of the risk management process, i.e. risk identification, risk analysis, and risk planning. The research is conducted as action research, with the aim of analysing and giving input to the organisation’s introduction of a software risk management process. First, the method was defined based on already available methods and then used. The defined method focuses on user risks, based on scenarios describing the expected use of the medical device in its target environment. During the use of the method, different stakeholders, including intended users, were involved. Results from the case study show that there are challenging problems in the risk management process with respect to definition of the system boundary and system context, the use of scenarios as input to the risk identification, estimation of detectability during risk analysis, and action proposals during risk planning. It can be concluded that the risk management method has potential to be used in the development organisation, although future research is needed with respect to, for example, context limitation and how to allow for flexible updates of the product.
  •  
29.
  • Mendes, Emilia, et al. (författare)
  • Towards improving decision making and estimating the value of decisions in value-based software engineering : the VALUE framework
  • 2018
  • Ingår i: Software quality journal. - : Springer-Verlag New York. - 0963-9314 .- 1573-1367. ; 26:2, s. 607-656
  • Tidskriftsartikel (refereegranskat)abstract
    • To sustain growth, maintain competitive advantage and to innovate, companies must make a paradigm shift in which both short- and long-term value aspects are employed to guide their decision-making. Such a need is clearly pressing in innovative industries, such as ICT, and is also the core of Value-based Software Engineering (VBSE). The goal of this paper is to detail a framework called VALUE—improving decision-making relating to software-intensive products and services development—and to show its application in practice to a large ICT company in Finland. The VALUE framework includes a mixed-methods approach, as follows: to elicit key stakeholders’ tacit knowledge regarding factors used during a decision-making process, either transcripts from interviews with key stakeholders are analysed and validated in focus group meetings or focus-group meeting(s) are directly applied. These value factors are later used as input to a Web-based tool (Value tool) employed to support decision making. This tool was co-created with four industrial partners in this research via a design science approach that includes several case studies and focus-group meetings. Later, data on key stakeholders’ decisions gathered using the Value tool, plus additional input from key stakeholders, are used, in combination with the Expert-based Knowledge Engineering of Bayesian Network (EKEBN) process, coupled with the weighted sum algorithm (WSA) method, to build and validate a company-specific value estimation model. The application of our proposed framework to a real case, as part of an ongoing collaboration with a large software company (company A), is presented herein. Further, we also provide a detailed example, partially using real data on decisions, of a value estimation Bayesian network (BN) model for company A. 
This paper presents some empirical results from applying the VALUE Framework to a large ICT company; those relate to eliciting key stakeholders’ tacit knowledge, which is later used as input to a pilot study where these stakeholders employ the Value tool to select features for one of their company’s chief products. The data on decisions obtained from this pilot study is later applied to a detailed example on building a value estimation BN model for company A. We detail a framework—VALUE framework—to be used to help companies improve their value-based decisions and to go a step further and also estimate the overall value of each decision. © 2017 The Author(s)
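The EKEBN process and the factor weights in this paper are company-specific, but the weighted-sum aggregation step it mentions is generic and can be sketched directly. The weights and factor values below are invented.

```python
def weighted_sum(weights, values):
    """Weighted-sum aggregation of normalized value-factor scores into a single
    value estimate, of the kind the WSA step performs on elicited weights."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(w * v for w, v in zip(weights, values))

# Invented: three value factors, normalized to [0, 1], with elicited weights.
estimate = weighted_sum([0.5, 0.3, 0.2], [1.0, 0.5, 0.0])
```

In the framework, such per-decision estimates are what feed the company-specific Bayesian value model.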
  •  
30.
  • Minhas, Nasir Mehmood, et al. (författare)
  • Lessons learned from replicating a study on information-retrieval based test case prioritization
  • 2023
  • Ingår i: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367. ; 31:4, s. 1527-1559
  • Tidskriftsartikel (refereegranskat)abstract
    • Replication studies help solidify and extend knowledge by evaluating previous studies’ findings. Software engineering literature showed that too few replications are conducted focusing on software artifacts without the involvement of humans. This study aims to replicate an artifact-based study on software testing to address the gap related to replications. In this investigation, we focus on (i) providing a step-by-step guide of the replication, reflecting on challenges when replicating artifact-based testing research and (ii) evaluating the replicated study concerning the validity and robustness of the findings. We replicate a test case prioritization technique proposed by Kwon et al. We replicated the original study using six software programs, four from the original study and two additional software programs. We automated the steps of the original study using a Jupyter notebook to support future replications. Various general factors facilitating replications are identified, such as (1) the importance of documentation; (2) the need for assistance from the original authors; (3) issues in the maintenance of open-source repositories (e.g., concerning needed software dependencies, versioning); and (4) availability of scripts. We also noted observations specific to the study and its context, such as insights from using different mutation tools and strategies for mutant generation. We conclude that the study by Kwon et al. is partially replicable for small software programs and could be automated to facilitate software practitioners, given the availability of required information. However, it is hard to implement the technique for large software programs with the current guidelines. Based on lessons learned, we suggest that the authors of original studies need to publish their data and experimental setup to support the external replications. © 2023, The Author(s).
  •  
31.
  • Pernstal, J., et al. (författare)
  • FLEX-RCA: a lean-based method for root cause analysis in software process improvement
  • 2019
  • Ingår i: Software Quality Journal. - : Springer Science and Business Media LLC. - 1573-1367 .- 0963-9314. ; 27:1, s. 389-428
  • Tidskriftsartikel (refereegranskat)abstract
    • Software process improvement (SPI) is an instrument to increase the productivity of, and the quality of work, in software organizations. However, a majority of SPI frameworks are too extensive or provide guidance and potential improvement areas at a high level, indicating only the symptoms, not the causes. Motivated by the industrial need of two Swedish automotive companies to systematically uncover the underlying root causes of high-level improvement issues identified in an SPI project-assessing inter-departmental interactions in large-scale software systems development-this paper advances a root cause analysis (RCA) method building on Lean Six Sigma, called Flex-RCA. Flex-RCA is used to delve deeper into challenges identified to find root causes as a part of the evaluation and subsequent improvement activities. We also demonstrate and evaluate Flex-RCA's industrial applicability in a case study. An overall conclusion is that the use of Flex-RCA was successful, showing that it had the desired effect of both producing a broad base of causes on a high level and, more importantly, enabling an exploration of the underlying root causes.
  •  
32.
  • Solinski, Adam, et al. (författare)
  • Prioritizing agile benefits and limitations in relation to practice usage
  • 2016
  • Ingår i: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367. ; 24:2, s. 447-482
  • Tidskriftsartikel (refereegranskat)abstract
    • In recent years, there has been a significant shift from rigid development (RD) toward agile. However, it has also been observed that agile methodologies are hardly ever followed in their pure form. Hybrid processes as combinations of RD and agile practices emerge. In addition, agile adoption has been reported to result in both benefits and limitations. This exploratory study (a) identifies development models based on RD and agile practice usage by practitioners; (b) identifies agile practice adoption scenarios based on eliciting practice usage over time; (c) prioritizes agile benefits and limitations in relation to (a) and (b). Practitioners provided answers through a questionnaire. The development models are determined using hierarchical cluster analysis. The use of practices over time is captured through an interactive board with practices and time indication sliders. This study uses the extended hierarchical voting analysis framework to investigate benefit and limitation prioritization. Four types of development models and six adoption scenarios have been identified. Overall, 45 practitioners participated in the prioritization study. A common benefit among all models and adoption patterns is knowledge and learning, while high requirements on professional skills were perceived as the main limitation. Furthermore, significant variances in terms of benefits and limitations have been observed between models and adoption patterns. The most significant internal benefit categories from adopting agile are knowledge and learning, employee satisfaction, social skill development, and feedback and confidence. Professional skill-specific demands, scalability, and lack of suitability for specific product domains are the main limitations of agile practice usage. Having a balanced agile process makes it possible to achieve a high number of benefits. With respect to adoption, a big bang transition from RD to agile leads to poor quality in comparison with the alternatives.
  •  
33.
  • Song, Qunying, et al. (författare)
  • Critical scenario identification for realistic testing of autonomous driving systems
  • 2023
  • Ingår i: Software quality journal. - : Springer Nature. - 0963-9314 .- 1573-1367. ; 31:2, s. 441-469
  • Tidskriftsartikel (refereegranskat)abstract
    • Autonomous driving has become an important research area for road traffic, whereas testing of autonomous driving systems to ensure a safe and reliable operation remains an open challenge. Substantial real-world testing or massive driving data collection does not scale since the potential test scenarios in real-world traffic are infinite, and covering large shares of them in the test is impractical. Thus, critical ones have to be prioritized. We have developed an approach for critical test scenario identification and in this study, we implement the approach and validate it on two real autonomous driving systems from industry by integrating it into their tool-chain. Our main contribution in this work is the demonstration and validation of our approach for critical scenario identification for testing real autonomous driving systems.
  •  
34.
  • Sulaman, Sardar Muhammad, et al. (författare)
  • Comparison of the FMEA and STPA safety analysis methods : a case study
  • 2019
  • Ingår i: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367. ; 27:1, s. 349-387
  • Tidskriftsartikel (refereegranskat)abstract
    • As our society becomes more and more dependent on IT systems, failures of these systems can harm more and more people and organizations. Diligently performing risk and hazard analysis helps to minimize the potential harm of IT system failures on the society and increases the probability of their undisturbed operation. Risk and hazard analysis is an important activity for the development and operation of critical software intensive systems, but the increased complexity and size puts additional requirements on the effectiveness of risk and hazard analysis methods. This paper presents a qualitative comparison of two hazard analysis methods, failure mode and effect analysis (FMEA) and system theoretic process analysis (STPA), using case study research methodology. Both methods have been applied on the same forward collision avoidance system to compare the effectiveness of the methods and to investigate what are the main differences between them. Furthermore, this study also evaluates the analysis process of both methods by using qualitative criteria derived from the technology acceptance model (TAM). The results of the FMEA analysis were compared to the results of the STPA analysis, which were presented in a previous study. Both analyses were conducted on the same forward collision avoidance system. The comparison shows that FMEA and STPA deliver similar analysis results.
  •  
35.
  • Tihana, Galinac Grbac, et al. (författare)
  • Quantitative analysis of unit verification as predictor in large scale software engineering
  • 2016
  • Ingår i: Software Quality Journal. - : Springer Science and Business Media LLC. - 0963-9314 .- 1573-1367. ; 24:4, s. 967-995
  • Tidskriftsartikel (refereegranskat)abstract
    • Unit verification, including software inspections and unit tests, is usually the first code verification phase in the software development process. However, principles of unit verification are weakly explored, mostly due to the lack of data, since unit verification data are rarely systematically collected and only a few studies have been published with such data from industry. Therefore, we explore the theory of fault distributions, originating in the quantitative analysis by Fenton and Ohlsson, in the weakly explored context of unit verification in large-scale software development. We conduct a quantitative case study on a sequence of four development projects on consecutive releases of the same complex software product line system for telecommunication exchanges. We replicate the operationalization from earlier studies and analyze hypotheses related to the Pareto principle of fault distribution, persistence of faults, effects of module size, and quality in terms of fault densities, now from the perspective of unit verification. The patterns in unit verification results resemble those of later verification phases, e.g., regarding the Pareto principle, and may thus be used for prediction and planning purposes. Using unit verification results as predictors may improve the quality and efficiency of software verification.
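The Pareto principle examined in the study, that a small share of modules accounts for most faults, can be checked with a few lines. The per-module fault counts below are invented, not taken from the study's data.

```python
def pareto_share(fault_counts, module_share=0.2):
    """Share of all faults located in the most fault-prone `module_share`
    fraction of modules."""
    counts = sorted(fault_counts, reverse=True)
    k = max(1, round(len(counts) * module_share))
    return sum(counts[:k]) / sum(counts)

# Invented unit-verification fault counts for ten modules:
share = pareto_share([50, 20, 10, 5, 5, 4, 3, 2, 1, 0])
```

A share well above `module_share` (here 0.7 from the top 20% of modules) is the kind of skew that makes early unit-verification results usable for planning later phases.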
  •  
36.
  • Torkar, Richard, 1971, et al. (författare)
  • Prediction of faults-slip-through in large software projects: An empirical evaluation
  • 2014
  • Ingår i: Software quality journal. - Netherlands : Springer Science and Business Media LLC. - 0963-9314 .- 1573-1367. ; 22:1, s. 51-86
  • Tidskriftsartikel (refereegranskat)abstract
    • A large percentage of the cost of rework can be avoided by finding more faults earlier in a software test process. Therefore, determination of which software test phases to focus improvement work on has considerable industrial interest. We evaluate a number of prediction techniques for predicting the number of faults slipping through to unit, function, integration, and system test phases of a large industrial project. The objective is to quantify improvement potential in different test phases by striving toward finding the faults in the right phase. The results show that a range of techniques are found to be useful in predicting the number of faults slipping through to the four test phases; however, the group of search-based techniques (genetic programming, gene expression programming, artificial immune recognition system, and particle swarm optimization–based artificial neural network) consistently give better predictions, having a representation at all of the test phases. Human predictions are consistently better at two of the four test phases. We conclude that the human predictions regarding the number of faults slipping through to various test phases can be well supported by the use of search-based techniques. A combination of human and an automated search mechanism (such as any of the search-based techniques) has the potential to provide improved prediction results.
  •  
37.
  • Ulan, Maria, et al. (författare)
  • Copula-based software metrics aggregation
  • 2021
  • Ingår i: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367. ; 29, s. 863-899
  • Tidskriftsartikel (refereegranskat)abstract
    • A quality model is a conceptual decomposition of an abstract notion of quality into relevant, possibly conflicting characteristics and further into measurable metrics. For quality assessment and decision making, metrics values are aggregated to characteristics and ultimately to quality scores. Aggregation has often been problematic as quality models do not provide the semantics of aggregation. This makes it hard to formally reason about metrics, characteristics, and quality. We argue that aggregation needs to be interpretable and mathematically well defined in order to assess, to compare, and to improve quality. To address this challenge, we propose a probabilistic approach to aggregation and define quality scores based on joint distributions of absolute metrics values. To evaluate the proposed approach and its implementation under realistic conditions, we conduct empirical studies on bug prediction of ca. 5000 software classes, maintainability of ca. 15000 open-source software systems, and on the information quality of ca. 100000 real-world technical documents. We found that our approach is feasible, accurate, and scalable in performance.
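The paper defines quality scores on joint distributions of absolute metric values. Its copula construction is more involved than can be shown here, but a crude empirical stand-in illustrates the aggregation idea; the metric vectors below are invented.

```python
def empirical_joint_cdf(samples, point):
    """Empirical joint CDF: fraction of observed metric vectors that are
    componentwise <= `point`. A fitted copula would smooth this and separate
    the marginals from the dependence structure; this raw version only
    illustrates scoring against a joint distribution."""
    hits = sum(1 for row in samples if all(r <= p for r, p in zip(row, point)))
    return hits / len(samples)

# Invented metric vectors (e.g., complexity, coupling) for four classes:
data = [(1, 1), (2, 3), (3, 2), (4, 4)]
score = empirical_joint_cdf(data, (2, 3))
```

Scoring via the joint distribution, rather than a fixed weighted sum, is what gives the aggregation a well-defined probabilistic interpretation.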
  •  
38.
  • Wohlin, Claes, et al. (författare)
  • Assessing project success using subjective evaluation factors
  • 2001
  • Ingår i: Software quality journal. - DORDRECHT : KLUWER ACADEMIC PUBL. - 0963-9314 .- 1573-1367. ; 9:1, s. 43-70
  • Tidskriftsartikel (refereegranskat)abstract
    • Project evaluation is essential to understand and assess the key aspects of a project that make it either a success or failure. The latter is influenced by a large number of factors, and many times it is hard to measure them objectively. This paper addresses this by introducing a new method for identifying and assessing key project characteristics, which are crucial for a project's success. The method consists of a number of well-defined steps, which are described in detail. The method is applied to two case studies from different application domains and continents. It is concluded that patterns are possible to detect from the data sets. Further, the analysis of the two data sets shows that the proposed method using subjective factors is useful, since it provides an increased understanding, insight and assessment of which project factors might affect project success.
  •  
40.
  • Yoo, Shin, et al. (författare)
  • Guest editorial: special section on regression testing
  • 2014
  • In: Software Quality Journal. - : Springer Science and Business Media LLC. - 0963-9314 .- 1573-1367. ; 22:4, p. 699
  • Journal article (other scholarly/artistic)
  •  
42.
  • Johansson, Glenn, et al. (author)
  • Eco-innovations : a novel phenomenon?
  • 1998
  • In: Journal of Sustainable Product Design. - 1367-6679 .- 1573-1588. ; :7, pp. 7-15
  • Journal article (peer-reviewed). Abstract:
    • It has generally been accepted that significant changes will have to take place in order to reach sustainability. Eco-innovations, i.e. new products and processes that provide customer value while using fewer resources and causing less environmental impact, are therefore of great importance. On the basis of selected parts of existing innovation theory, this article explores the eco-innovation phenomenon. The theory is used to analyse two examples of eco-innovation: the struggle between steel and aluminium for use in lightweight car bodies, and the development of lawn mowers with improved environmental performance. The analysis shows that innovation theory is useful for creating a better understanding of the concept and development of eco-innovations. It is therefore concluded that innovation theory should be part of the frame of reference when analysing and managing eco-innovations.
  •  
43.
  • Johansson, Glenn, et al. (author)
  • Eco-innovations - a novel phenomenon?
  • 1998
  • In: Journal of Sustainable Product Design. - 1367-6679 .- 1573-1588. ; :7, pp. 7-15
  • Journal article (peer-reviewed)
  •  
44.
  • Kudran Pradhan, Shameer, et al. (author)
  • Identifying and managing data quality requirements: a design science study in the field of automated driving
  • 2023
  • In: Software Quality Journal. - 0963-9314.
  • Journal article (peer-reviewed). Abstract:
    • Good data quality is crucial for the effective and safe operation of any data-driven system. For safety-critical systems, the significance of data quality is even higher, since incorrect or low-quality data may cause fatal faults. However, identifying and managing data quality is challenging: in particular, there is no accepted process for defining and continuously testing the data quality needed to operate the system. This gap is problematic because safety-critical systems are increasingly dependent on data. Here, we propose a Candidate Framework for Data Quality Assessment and Maintenance (CaFDaQAM) to systematically manage data quality and related requirements, based on design science research. The framework is constructed around an advanced driver assistance system (ADAS) case study and draws on empirical data from a literature review, focus groups, and design workshops. The proposed framework consists of four components: a Data Quality Workflow, a List of Data Quality Challenges, a List of Data Quality Attributes, and Solution Candidates. Together, the components act as tools for data quality assessment and maintenance. The candidate framework and its components were validated in a focus group.
  •  
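The abstract above names a List of Data Quality Attributes as one framework component without enumerating the attributes themselves. A minimal sketch of such attribute checks, assuming two commonly cited attributes (completeness and range validity) that may or may not match the paper's actual list, with invented field names and data:

```python
def completeness(records, field):
    """Share of records where `field` is present and non-null."""
    present = sum(1 for r in records if r.get(field) is not None)
    return present / len(records)

def in_range(records, field, lo, hi):
    """Share of non-missing values of `field` that lie within [lo, hi]."""
    values = [r[field] for r in records if r.get(field) is not None]
    if not values:
        return 0.0
    return sum(lo <= v <= hi for v in values) / len(values)

# Hypothetical sensor readings, loosely inspired by an ADAS data pipeline.
readings = [
    {"speed_kmh": 52.0},
    {"speed_kmh": None},    # missing value -> fails completeness
    {"speed_kmh": 310.0},   # implausible  -> fails range check
    {"speed_kmh": 88.5},
]
print(completeness(readings, "speed_kmh"))      # → 0.75
print(in_range(readings, "speed_kmh", 0, 250))  # 2/3 of present values
```

Scores like these can then be tested continuously against thresholds derived from the system's data quality requirements, which is the kind of workflow the abstract describes.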
46.
  • Ritzén, Sofia, et al. (author)
  • Actions for Integrating Environmental Aspects into Product Development
  • 2001
  • In: Journal of Sustainable Product Design. - 1367-6679 .- 1573-1588. ; 1:2, pp. 91-102
  • Journal article (peer-reviewed). Abstract:
    • The paper is based on twenty-eight interviews in four Swedish companies, three of which regard themselves as best practitioners in Environmentally Conscious Design (ECD) and one that has recently begun to integrate environmental features into product development (referred to as the non-practice company). Based on earlier research, several factors important for successful change in product development were investigated. The need to integrate environmental concerns into the non-practice company's product development is described, along with the problems arising from the implementation of ECD and the solutions adopted by the best-practice companies. Lastly, six ways of integrating environmental aspects are identified, in the areas of management, individual behaviour, resources, formal product development procedures, support tools, and competence development.
  •  
47.
  • Näsvall, Pia, et al. (author)
  • Quality of life in patients with a permanent stoma after rectal cancer surgery
  • 2017
  • In: Quality of Life Research. - : Springer Science and Business Media LLC. - 0962-9343 .- 1573-2649. ; 26:1, pp. 55-64
  • Journal article (peer-reviewed). Abstract:
    • AIM: Health-related quality of life (HRQoL) assessment is important for understanding the patient's perspective and for decision-making in health care. HRQoL is often impaired in patients with a stoma. The aim was to evaluate HRQoL in rectal cancer patients with a permanent stoma compared to patients without a stoma. METHODS: 711 patients operated on for rectal cancer with abdomino-perineal resection or Hartmann's procedure and a control group (n = 275) operated on with anterior resection were eligible. Four QoL questionnaires were sent by mail. Comparisons of mean values between groups were made by Student's independent t test. Comparison was also made to a Swedish background population. RESULTS: 336 patients with a stoma and 117 without a stoma replied (453/986; 46 %). A bulging or a hernia around the stoma was present in 31.5 %. Operation due to parastomal hernia had been performed in 11.7 % of the stoma group. Mental health (p = 0.007), body image (p < 0.001), and physical (p = 0.016) and emotional function (p = 0.003) were inferior in patients with a stoma. Fatigue (p = 0.019) and loss of appetite (p = 0.027) were also more prominent in the stoma group. Sexual function was impaired in the non-stoma group (p = 0.034); however, in the stoma group, patients with a bulge/hernia had more sexual problems (p = 0.004). Pain was associated with bulge/hernia (p < 0.001), and fear of leakage decreased QoL (p < 0.001). HRQoL was impaired compared to the Swedish background population. CONCLUSION: Overall HRQoL in patients operated on for rectal cancer with a permanent stoma was inferior compared to patients without a stoma. In the stoma group, a bulge or a hernia around the stoma further impaired HRQoL.
  •  
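The methods section above compares group means with Student's independent t test. As a sketch of the underlying arithmetic only (the scores below are invented, not study data), the pooled-variance two-sample t statistic works out as:

```python
from math import sqrt
from statistics import mean, variance  # variance() is the sample (n-1) variance

def students_t(a, b):
    """Pooled-variance two-sample Student's t statistic for independent groups."""
    n1, n2 = len(a), len(b)
    # Pooled variance: weighted average of the two sample variances.
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / n1 + 1 / n2))

# Hypothetical HRQoL scores for two small groups (illustration only).
stoma    = [60, 55, 58, 62, 57]
no_stoma = [70, 66, 68, 72, 69]
t = students_t(stoma, no_stoma)
print(round(t, 2))  # → -6.76
```

A strongly negative t here reflects the first group's lower mean; in practice the statistic is compared against the t distribution with n1 + n2 - 2 degrees of freedom to obtain the p values reported in the abstract.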
Publication type
journal article (47)
Content type
peer-reviewed (44)
other scholarly/artistic (3)
Author/editor
Petersen, Kai (5)
Sundmark, Daniel (3)
Gorschek, Tony, 1972 ... (3)
Wohlin, Claes (3)
Helali Moghadam, Mah ... (2)
Hughes, John, 1958 (2)
Bui, Thanh (2)
Höst, Martin (2)
Börstler, Jürgen (2)
Felderer, Michael, 1 ... (2)
Afzal, Wasif (2)
Feldt, Robert, 1972 (2)
Johnson, Pontus (2)
Gorschek, Tony, 1973 (2)
Johansson, Glenn (2)
Mattsson, Michael (2)
Löwe, Welf (1)
Zhu, H. (1)
Ritzén, Sofia (1)
Saadatmand, Mehrdad, ... (1)
Jongeling, Robbert (1)
Hoffman, D (1)
Abbaspour Asadollah, ... (1)
Eldh, Sigrid (1)
Hansson, Hans (1)
Afza, Wasif (1)
Causevic, Adnan, 198 ... (1)
Wnuk, Krzysztof, 198 ... (1)
Nolte, Thomas (1)
Lundberg, Lars (1)
Carlson, Jan (1)
Crnkovic, Ivica (1)
Strigård, Karin (1)
Gunnarsson, Ulf (1)
Lisper, Björn (1)
Torkar, Richard, 197 ... (1)
Ciccozzi, Federico (1)
Feljan, Juraj (1)
Franke, Ulrik (1)
Bjarnason, Elizabeth (1)
Rutegård, Jörgen (1)
France, R. (1)
Alégroth, Emil, 1984 ... (1)
Alégroth, Emil (1)
Knauss, Eric, 1977 (1)
Alferez, Mauricio (1)
Acher, Mathieu (1)
Galindo, Jose A. (1)
Baudry, Benoit (1)
Benavides, David (1)
University
Blekinge Tekniska Högskola (19)
Mälardalens universitet (8)
Lunds universitet (8)
Chalmers tekniska högskola (8)
Kungliga Tekniska Högskolan (7)
RISE (4)
Göteborgs universitet (2)
Umeå universitet (2)
Örebro universitet (2)
Linnéuniversitetet (2)
Linköpings universitet (1)
Jönköping University (1)
Högskolan i Skövde (1)
Karlstads universitet (1)
Karolinska Institutet (1)
Language
English (47)
Research subject (UKÄ/SCB)
Natural sciences (37)
Engineering and technology (13)
Social sciences (3)
Medicine and health sciences (1)

Year
