SwePub
Search the SwePub database

  Advanced search

Hit list for the search "WFRF:(Sandahl Kristian)"

Search: WFRF:(Sandahl Kristian)

  • Results 1-50 of 59
1.
  • Accelerating digital transformation : 10 years of software center
  • 2022
  • Edited collection (editorship) (other academic/artistic), abstract:
    • This book celebrates the 10-year anniversary of Software Center (a collaboration between 18 European companies and five Swedish universities) by presenting some of the most impactful and relevant journal and conference papers that researchers in the center have published over the last decade. The book is organized around the five themes around which research in Software Center is structured, i.e. Continuous Delivery, Continuous Architecture, Metrics, Customer Data and Ecosystems Driven Development, and AI Engineering. The focus of the Continuous Delivery theme is to help companies continuously build high-quality products with the right degree of automation. The Continuous Architecture theme addresses challenges that arise when balancing the need for architectural quality with more agile ways of working in shorter development cycles. The Metrics theme studies and provides insights to understand, monitor and improve software processes, products and organizations. The fourth theme, Customer Data and Ecosystem Driven Development, helps companies make sense of the vast amounts of data that are continuously collected from products in the field. Finally, the AI Engineering theme addresses the challenge that many companies struggle with when deploying machine- and deep-learning models in industrial contexts with production quality. Each theme has its own part in the book, and each part has an introduction chapter followed by a carefully selected reprint of the most important papers from that theme. This book is mainly aimed at researchers and advanced professionals in software engineering who would like an overview of the achievements made in various topics relevant to industrial large-scale software development and management – and to see how research benefits from close cooperation between industry and academia.
2.
  • Ahmad, Azeem, et al. (author)
  • A Multi-factor Approach for Flaky Test Detection and Automated Root Cause Analysis
  • 2021
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - : IEEE Computer Society. - 1530-1362. ; pp. 338-348
  • Conference paper (peer-reviewed), abstract:
    • Developers often spend time determining whether test case failures are real failures or flaky. Flaky tests, also known as non-deterministic tests, switch their outcomes without any modification in the codebase, reducing the confidence of developers during maintenance as well as in the quality of a product. Re-running test cases to reveal flakiness is resource-consuming, unreliable, and does not reveal the root causes of test flakiness. Our paper evaluates a multi-factor approach to identify flaky test executions, implemented in a tool named MDFlaker. The four factors are: trace-back coverage, flaky frequency, number of test smells, and test size. Based on the extracted factors, MDFlaker uses k-Nearest Neighbor (KNN) to determine whether failed test executions are flaky. We investigate MDFlaker in a case study with 2166 test executions from different open-source repositories. We evaluate the effectiveness of our flaky detection tool, illustrate how the multi-factor approach can be used to reveal root causes of flakiness, and conduct a qualitative comparison between MDFlaker and other tools proposed in the literature. Our results show that the combination of different factors can be used to identify flaky tests. Each factor has its own trade-off, e.g., trace-back coverage leads to many true positives, while flaky frequency yields more true negatives. Therefore, specific combinations of factors enable classification for testers with limited information (e.g., not enough test history information).
3.
  • Ahmad, Azeem, et al. (author)
  • An Industrial Study on the Challenges and Effects of Diversity-Based Testing in Continuous Integration
  • 2023
  • In: IEEE International Conference on Software Quality, Reliability and Security, QRS. - 2693-9177. - 9798350319583
  • Conference paper (peer-reviewed), abstract:
    • Many test prioritisation techniques have been proposed in order to improve the test effectiveness of Continuous Integration (CI) pipelines. In particular, diversity-based testing (DBT) has shown promising and competitive results in improving test effectiveness. However, the technical and practical challenges of introducing test prioritisation in CI pipelines are rarely discussed, thus hindering the applicability and adoption of the proposed techniques. This research builds on our prior work in which we evaluated diversity-based techniques in an industrial setting. This work investigates the factors that influence the adoption of DBT, both in connection with improvements in test cost-effectiveness and with respect to the process- and human-related challenges of transferring and using DBT prioritisation in CI pipelines. We report on a case study considering the CI pipeline of Axis Communications in Sweden. We performed a thematic analysis of a focus group interview with senior practitioners at the company to identify the challenges and perceived benefits of using test prioritisation in their test process. Our thematic analysis reveals a list of ten challenges and seven perceived effects of introducing test prioritisation in CI cycles. For instance, our participants emphasized the importance of introducing comprehensible and transparent techniques that instil trust in their users. Moreover, practitioners prefer techniques compatible with their current test infrastructure (e.g., test framework and environments) in order to reduce instrumentation efforts and avoid disrupting their current setup. In conclusion, we have identified trade-offs between different test prioritisation techniques pertaining to the technical, process and human aspects of regression testing in CI. We summarize those findings in a list of seven advantages that refer to specific stakeholder interests and describe the effects of adopting DBT in CI pipelines.
4.
  • Ahmad, Azeem, 1984- (author)
  • Contributions to Improving Feedback and Trust in Automated Testing and Continuous Integration and Delivery
  • 2022
  • Doctoral thesis (other academic/artistic), abstract:
    • An integrated release version (also known as a release candidate in software engineering) is produced by merging, building, and testing code on a regular basis as part of the Continuous Integration and Continuous Delivery (CI/CD) practices. Several benefits, including improved software quality and shorter release cycles, have been claimed for CI/CD. On the other hand, recent research has uncovered a plethora of problems and bad practices related to CI/CD adoption, necessitating some optimization. Some of the problems addressed in this work include the ability to respond to practitioners’ questions and obtain quick and trustworthy feedback in CI/CD. To be more specific, our effort concentrated on: 1) identifying the information needs of software practitioners engaged in CI/CD; 2) adopting test optimization approaches to obtain faster feedback that are realistic for use in CI/CD environments without introducing excessive technical requirements; 3) identifying perceived causes and automated root cause analysis of test flakiness, thereby providing developers with guidance on how to resolve test flakiness; and 4) identifying challenges in addressing information needs, providing faster and more trustworthy feedback. The findings of the research reported in this thesis are based on data from three single-case studies and three multiple-case studies. The research uses quantitative and qualitative data collected via interviews, site visits, and workshops. To perform our analyses, we used data from firms producing embedded software as well as open-source repositories. The following are major research and practical contributions. Information Needs: The initial contribution to research is a list of information needs in CI/CD. This list contains 27 frequently asked questions on continuous integration and continuous delivery by software practitioners. The identified information needs have been classified as related to testing, code & commit, confidence, bug, and artifacts. 
We investigated how companies deal with information needs, what tools they use to deal with them, and who is interested in them. We concluded that there is a discrepancy between the identified needs and the techniques employed to meet them. Since some information needs cannot be met by current tools, manual inspections are required, which adds time to the process. Information about code & commit, confidence level, and testing is the most frequently sought and most important information. Evaluation of Diversity-Based Techniques/Tool: The contribution is a detailed examination of diversity-based techniques using industry test cases, to determine whether there is a difference between diversity functions in selecting integration-level automated tests, and how diversity-based testing compares to other optimization techniques used in industry in terms of fault detection rates, feature coverage, and execution time. This enables us to observe how coverage changes when we run fewer test cases. We concluded that some of the techniques can eliminate up to 85% of test cases (provided by the case company) while still covering all distinct features/requirements. The techniques are developed and made available as an open-source tool for further research and application. Test Flakiness Detection, Prediction & Automated Root Cause Analysis: We identified 19 factors that professionals perceive to affect test flakiness. These perceived factors are divided into four categories: test code, system under test, CI/test infrastructure, and organizational. We concluded that some of the perceived factors of test flakiness in closed-source development are directly related to non-determinism, whereas other perceived factors concern different aspects, e.g., lack of good properties of a test case (i.e., small, simple and robust), deviations from the established processes, etc.
To see if the developers' perceptions were in line with what they had labelled as flaky or not, we examined the test artifacts that were readily available. We verified that two of the identified perceived factors (i.e., test case size and simplicity) are indeed indicative of test flakiness. Furthermore, we proposed a lightweight technique named trace-back coverage to detect flaky tests. Trace-back coverage was combined with other factors, such as test smells indicating test flakiness, flakiness frequency and test case size, to investigate the effect on revealing test flakiness. When all factors are taken into consideration, the precision of flaky test detection increases from 57% (using a single factor) to 86% (a combination of different factors).
5.
  • Ahmad, Azeem, et al. (author)
  • Data visualisation in continuous integration and delivery : Information needs, challenges, and recommendations
  • 2022
  • In: IET Software. - : Wiley. - 1751-8806 .- 1751-8814. ; 16:3, pp. 331-349
  • Journal article (peer-reviewed), abstract:
    • Several operations, ranging from regular code updates to compiling, building, testing, and distribution to customers, are consolidated in continuous integration and delivery. During these tasks, professionals seek additional information to complete the task at hand. Developers who devote a large amount of time and effort to finding such information may become distracted from their work. By defining the types of information that software professionals seek, we will better understand the processes, procedures, and resources used to deliver a quality product on time. A deeper understanding of software practitioners' information needs has many advantages, including remaining competitive, growing knowledge of issues that can stymie a timely update, and creating a visualisation tool to assist practitioners in addressing their information needs. This is an extension of previous work by the authors. The authors conducted a multiple-case holistic study with six different companies (38 unique participants) to identify information needs in continuous integration and delivery. This study attempts to capture the importance, frequency, required effort (e.g. the sequence of actions required to collect information), current approach to handling, and associated stakeholders with respect to the identified needs. 27 information needs associated with different stakeholders (i.e. developers, testers, project managers, release team, and compliance authority) were identified. The identified needs were categorised as testing, code & commit, confidence, bug, and artefacts. Apart from identifying information needs, practitioners face several challenges in developing visualisation tools. Thus, eight challenges faced by practitioners in developing and maintaining visualisation tools for software teams were identified, and recommendations from practitioners who are experts in developing, maintaining, and providing visualisation services to software teams are listed.
6.
  • Ahmad, Azeem, et al. (author)
  • Empirical analysis of practitioners perceptions of test flakiness factors
  • 2021
  • In: Software Testing, Verification & Reliability. - : Wiley-Blackwell. - 0960-0833 .- 1099-1689. ; 31:8
  • Journal article (peer-reviewed), abstract:
    • Identifying the root causes of test flakiness is one of the challenges faced by practitioners during software testing; in other words, testing of the software is hampered by test flakiness. Since research about test flakiness in large-scale software engineering is scarce, an empirical case study is needed in which we can build a common and grounded understanding of the problem, as well as relevant remedies that can later be evaluated in a large-scale context. This study reports the findings from a multiple-case study. The authors conducted an online survey to investigate and catalogue the root causes of test flakiness and mitigation strategies. We attempted to understand how practitioners perceive test flakiness in closed-source development, such as how they define test flakiness and what they perceive can affect it. The perceptions of practitioners were compared with the available literature. We investigated whether practitioners' perceptions are reflected in the test artefacts, that is, the relationship between the perceived factors and properties of the test artefacts. This study reports 19 factors that are perceived by professionals to affect test flakiness. These perceived factors are categorized as test code, system under test, CI/test infrastructure, and organization-related. The authors concluded that some of the perceived factors of test flakiness in closed-source development are directly related to non-determinism, whereas other perceived factors concern different aspects, for example, lack of good properties of a test case, deviations from the established processes, and ad hoc decisions. Given the data set from the investigated cases, the authors concluded that two of the perceived factors (i.e., test case size and test case simplicity) have a strong effect on test flakiness.
7.
  • Ahmad, Azeem, et al. (author)
  • Identifying Randomness related Flaky Tests through Divergence and Execution Tracing
  • 2022
  • In: 2022 IEEE 15th International Conference on Software Testing, Verification and Validation Workshops (ICSTW 2022). - : IEEE Computer Society. - 9781665496285 ; pp. 293-300
  • Conference paper (peer-reviewed), abstract:
    • Developers often spend time determining whether test case failures are real failures or flaky. Flaky tests, also known as non-deterministic tests, change their outcomes without any changes in the codebase, thus reducing the trust of developers during a software release as well as in the quality of a product. While re-running test cases is a common approach, it is resource-intensive, unreliable, and does not uncover the actual cause of test flakiness. Our paper evaluates an approach to identify randomness-related flaky tests. We used a divergence algorithm and execution tracing techniques to identify flaky tests, which resulted in the FLAKYPY prototype. The paper discusses the cases where FLAKYPY successfully identified flaky tests as well as those where it failed, and how the reporting mechanism of FLAKYPY can help developers identify the root cause of randomness-related test flakiness. Thirty-two open-source projects were used in this study. We concluded that FLAKYPY can detect most randomness-related test flakiness, and that its reporting mechanism reveals sufficient information about possible root causes of test flakiness.
8.
  • Ahmad, Azeem, 1984-, et al. (author)
  • The Perceived Effects of Introducing Coaching on the Development of Student's Soft Skills Managing Software Quality.
  • 2021
  • In: Proceedings of the 4th Software Engineering Education Workshop (SEED 2021), co-located with APSEC 2021, 6 December 2021, Taipei, Taiwan.
  • Conference paper (peer-reviewed), abstract:
    • Technical abilities (also known as hard skills) are just as crucial as soft skills (such as communication, cooperation, teamwork, etc.) in attaining professional success. Therefore, it is important to pay close attention to soft skills when developing the curriculum of an engineering education. Many elements can have a direct or indirect impact on students' soft skills, including course topic, course module (i.e., laboratories, seminars, etc.), the medium of instruction, and learning activities. Many academics have investigated the development of soft skills in a variety of disciplines, including engineering, science, and business. The purpose of this study is to assess the perceived impact of coaching on the development of soft skills in MS and BS engineering students. During four planned sessions over a six-month period, MS students acted as coaches, while BS students received coaching from them. After each coaching session, all students were asked to complete a survey to evaluate their perception of how their soft skills had developed. The results of the perceived effects of introducing coaching activities are presented in this article, which is a first step, in the series of our investigations, in identifying students' perceptions about the development of soft skills. According to the survey, the MS engineering students, who acted as coaches, perceived that most of their soft skills improved. However, the BS students did not perceive a comparable improvement in their soft skills, prompting us to conduct additional research in the future to discover what hampered the growth of the BS students' soft skills and how the MS students' soft skills were enhanced.
9.
  • Ardi, Shanai, et al. (author)
  • A Case Study of Introducing Security Risk Assessment in Requirements Engineering in a Large Organization
  • 2023
  • In: SN Computer Science. - : Springer. - 2661-8907. ; 4:5
  • Journal article (peer-reviewed), abstract:
    • Software products are increasingly used in critical infrastructures, and verifying the security of these products has become a necessary part of every software development project. Effective and practical methods and processes are needed by software vendors and infrastructure operators to meet the existing extensive demand for security. This article describes a lightweight security risk assessment method that flags security issues as early as possible in the software project, namely during requirements analysis. The method requires minimal training effort, adds low overhead, and makes it possible to show immediate results to affected stakeholders. We present a longitudinal case study of how a large enterprise developing complex telecom products adopted this method all the way from pilot studies to full-scale regular use. Lessons learned from the case study provide knowledge about the impact that upskilling and training of requirements engineers have on reducing the risk of malfunctions or security vulnerabilities in situations where it is not possible to have security experts go through all requirements. The case study highlights the challenges of process changes in large organizations as well as the pros and cons of having centralized, distributed, or semi-distributed workforce for security assurance in requirements engineering.
10.
  • Ardi, Shanai, 1977- (author)
  • Vulnerability and Risk Analysis Methods and Application in Large Scale Development of Secure Systems
  • 2021
  • Doctoral thesis (other academic/artistic), abstract:
    • Since software products are heavily used in today's connected society, the design and implementation of such software products to make them resilient to security threats becomes crucial. This thesis addresses some of the challenges faced by software vendors when developing secure software. The approach is to reduce the risk of introducing security weaknesses to software products by providing solutions that support software developers during the software lifecycle. Software developers are usually not security experts. However, there are methods and tools, such as the ones introduced in this thesis, that can help developers build more secure software. The research is performed with a design science approach, where the risk-reducing method is the artifact that is iteratively developed. Chronologically, the research is divided into two parts. The first part provides security models as a means of developing a detailed understanding of the extent of potential security issues and their respective security mitigation activities. The purpose is to lower the risk of introducing vulnerabilities to the software during its lifecycle. This is facilitated by the Sustainable Software Security Process (S3P), which is a structured and generally applicable process aimed at minimizing the effort of using security models during all phases of the software development process. S3P achieves this in three steps. The first step uses a semi-formal modeling approach and identifies causes of known vulnerabilities in terms of defects and weaknesses in development activities that may introduce the vulnerability in the code. The second step identifies measures that, if in place, would address the causes and eliminate the underlying vulnerability, and supports selection of the most suitable measures.
The final step ensures that the selected measures are adopted into the development process to reduce the risk of having similar vulnerabilities in the future. Collaborative tools can be used in this process to ensure that software developers who are not security experts benefit from applying the S3P process and its components. For this thesis, proof-of-concept versions of collaboration tools were developed to support the three steps of the S3P. We present the results of our empirical evaluations of all three steps of S3P using various methods, such as surveys, case studies and expert opinion, to verify that the method is fully understandable and easy to perform, and is perceived by developers to provide value for software security. The last contribution of the first part of the research deals with improving product security during requirements engineering through integration of parts of S3P into Common Criteria (CC), thereby improving the accuracy of CC by systematically identifying the security objectives and proposing solutions to meet those objectives using S3P. The review and validation by an industrial partner leading in the CC area demonstrate the improved accuracy of CC. Based on the findings in the first part of the research, the second part focuses on the early phases of software development and vulnerability causes originating from requirements engineering. We study the challenges associated with introducing a specific security activity, i.e., Security Risk Assessment (SRA), into the requirements engineering process in a large-scale software development context. Specific attention is given to the possibility of bridging the gap between developers and security experts when using SRA, and we examine the pros and cons of organizing personnel working with SRA in a centralized, distributed, or semi-distributed unit.
As the journey of changing the way of working in a large corporation takes time and involves many factors, it was natural to perform a longitudinal case study - all the way from pilot studies to full-scale, regular use. The results of the case study clarify that the introduction of a specific security activity to the development process must evolve over time in order to achieve the desired results. The present design of the SRA method shows that it is worthwhile to work with risk assessment in the requirements phase with all types of requirements, even at a low level of abstraction. The method aligns well with a decentralized, agile development method with many teams working on the same product. During the study, we observed an increase in security awareness among the developers in the subject company. However, it was also observed that the involvement of security experts, to ensure acceptable quality of the risk assessment and to identify all risks, cannot be totally eliminated.
11.
  • Bager, Ninna, et al. (author)
  • Complex and monosomal karyotype are distinct cytogenetic entities with an adverse prognostic impact in paediatric acute myeloid leukaemia : A NOPHO-DBH-AML study
  • 2018
  • In: British Journal of Haematology. - : Wiley. - 0007-1048 .- 1365-2141. ; 183:4, pp. 618-628
  • Journal article (peer-reviewed), abstract:
    • Data on occurrence, genetic characteristics and prognostic impact of complex and monosomal karyotype (CK/MK) in children with acute myeloid leukaemia (AML) are scarce. We studied CK and MK in a large unselected cohort of childhood AML patients diagnosed and treated according to Nordic Society for Paediatric Haematology and Oncology (NOPHO)-AML protocols 1993-2015. In total, 800 patients with de novo AML were included. CK was found in 122 (15%) and MK in 41 (5%) patients. CK and MK patients were young (median age 2.1 and 3.3 years, respectively) and frequently had FAB M7 morphology (24% and 22%, respectively). Refractory disease was more common in MK patients (15% vs. 4%) and stem cell transplantation in first complete remission was more frequent (32% vs. 19%) compared with non-CK/non-MK patients. CK showed no association with refractory disease but was an independent predictor of an inferior event-free survival (EFS; hazard ratio [HR] 1.43, P = 0.03) and overall survival (OS; HR 1.48, P = 0.01). MK was associated with a poor EFS (HR 1.57, P = 0.03) but did not show an inferior OS compared to non-MK patients (HR 1.14, P = 0.62). In a large paediatric cohort, we characterized AML with non-recurrent abnormal karyotype and unravelled the adverse impact of CK and MK on prognosis.
12.
  • Berglund, Erik, 1971- (author)
  • Library Communication Among Programmers Worldwide
  • 2002
  • Doctoral thesis (other academic/artistic), abstract:
    • Programmers worldwide share components and jointly develop components on a global scale in contemporary software development. An important aspect of such library-based programming is the need for technical communication with regard to libraries – library communication. As part of their work, programmers must discover, study, and learn as well as debate problems and future development. In this sense, the electronic, networked medium has fundamentally changed programming by providing new mechanisms for communication and global interaction through global networks such as the Internet. Today, the baseline for library communication is hypertext documentation. Improvements in the quality, efficiency, and cost of the programming activity, and reductions in its frustration, can be expected from further developments in the electronic aspects of library communication. This thesis addresses the use of the electronic, networked medium in the activity of library communication and aims to discover design knowledge for communication tools and processes directed towards this particular area. A model of library communication is provided that describes interaction among programmers as webs of interrelated library communities, together with a discussion of electronic, networked tools and processes that match such a model. Furthermore, research results are provided from the design and industrial evaluation of electronic reference documentation for the Java domain. Surprisingly, the evaluation did not support individual adaptation (personalization). Global library communication processes have also been studied in relation to open-source documentation and user-related bug handling. Open-source documentation projects are still relatively uncommon, even in open-source software projects, and user-related bug handling does not address the passive behavior users have towards bugs.
Finally, the adaptive authoring process in electronic reference documentation is addressed and found to provide limited support for expressing the electronic, networked dimensions of authoring, which require programming skill from technical writers. Library communication is addressed here by providing engineering knowledge with regard to the construction of practical electronic, networked tools and processes in the area. Much of the work has been performed in relation to Java library communication, and the thesis therefore has particular relevance for the object-oriented programming domain. A practical contribution of the work is the DJavadoc tool, which contributes to the development of reference documentation by providing adaptive Java reference documentation.
13.
  • Borg, Andreas, 1976-, et al. (author)
  • A Method for Improving the Treatment of Capacity Requirements in Large Telecommunication Systems
  • Other publication (other academic/artistic), abstract:
    • Non-functional requirements crosscut functional models and are more difficult to enforce in system models. This paper describes a long-term research collaboration regarding capacity requirements between Linköping University and Ericsson AB. We describe an industrial case study on non-functional requirements as a background. Subsequent efforts dedicated to capacity include a detailed description of the term, a best-practice inventory within Ericsson, and a pragmatic approach for annotating UML models with capacity information. The results are also represented as a method plug-in to the OpenUP software process and an anatomy that makes it possible to assess and improve an organization's abilities to develop for capacity. The results combine into a method for improving the treatment of capacity requirements in large-scale software systems. Both product and process views are included, with emphasis on the latter.
14.
  • Borg, Andreas, 1976-, et al. (author)
  • Extending the OpenUP/Basic Requirements Discipline to Specify Capacity Requirements
  • 2007
  • In: Requirements Engineering Conference, 2007. RE '07. - : IEEE Computer Society. - 9780769529356 ; pp. 328-333
  • Conference paper (peer-reviewed), abstract:
    • Software processes, such as RUP and agile methods, focus their requirements engineering part on use cases and thus on functional requirements. Complex products, such as radio network control software, need special handling of non-functional requirements as well. We describe how we used the Eclipse Process Framework to augment the open and minimal OpenUP/Basic process with improvements found in the management of capacity requirements in a case study at Ericsson. The result is compared with another project improving RUP to handle performance requirements. The major differences between the improvements are that 1) they suggest a special, dedicated performance-manager role whereas we suggest that present roles are augmented, and 2) they suggest a bottom-up approach to performance verification while we focus on system performance first, i.e. top-down. Further, we suggest augmenting UML 2 models with capacity attributes to improve the information flow from requirements to implementation.
  •  
15.
  • Borg, Andreas, 1976-, et al. (författare)
  • Good Practice and Improvement Model of Handling Capacity Requirements of Large Telecommunication Systems
  • 2006
  • Ingår i: 14th IEEE International Requirements Engineering Conference (RE'06), Minneapolis/S:t Paul. - Los Alamitos, CA : IEEE Computer Society. - 0769525555 - 9780769525556 ; , s. 245-250
  • Konferensbidrag (refereegranskat)abstract
    • There is evidence to suggest that the software industry has not yet matured as regards the management of nonfunctional requirements (NFRs). Consequently, the cost of achieving required quality is unnecessarily high. To avoid this, the telecommunication systems provider Ericsson defined a research task to improve the management of requirements for capacity, which is one of the most critical NFRs. Linköping University joined in the effort and conducted an interview series to investigate good practice within different parts of the company. Inspired by the interviews and an ongoing process improvement project, a model for improvement was created and activities were synthesized. This paper contributes the results from the interview series and details the subprocesses of specification that should be improved. Such improvements are about understanding the relationship between numerical entities at all system levels, augmenting UML specifications to make NFRs visible, working with time budgets, and testing the subsystem-level components at the same level as they are specified.
  •  
16.
  • Borg, Andreas, 1976-, et al. (författare)
  • Integrating an Improvement Model of Handling Capacity Requirements with OpenUP/Basic Process
  • 2007
  • Ingår i: 13th International working conference on Requirements Engineering: Foundations for Software Quality (REFSQ'07), Trondheim, Norway. - Berlin Heidelberg : Springer. - 9783540730309 ; , s. 341-354
  • Konferensbidrag (refereegranskat)abstract
    • Contemporary software processes and modeling languages have a strong focus on Functional Requirements (FRs), whereas information about Non-Functional Requirements (NFRs) is managed through text-based documentation and the individual skills of the personnel. In order to get a better understanding of how capacity requirements are handled, we carried out an interview series with various branches of Ericsson. The analysis of this material revealed 18 Capacity Sub-Processes (CSPs) that need to be attended to in order to create capacity-oriented development. In this paper we describe all these sub-processes and their mapping into an extension of the OpenUP/Basic software process. Such an extension will support a process engineer in realizing the sub-processes, and has at the same time shown that there are no internal inconsistencies among the CSPs. The extension provides a context for continued research in using UML to support negotiation between requirements and existing design.
  •  
19.
  • Borg, Andreas, 1976- (författare)
  • Processes and Models for Capacity Requirements in Telecommunication Systems
  • 2009
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • Capacity is an essential quality factor in telecommunication systems. The ability to develop systems with the lowest cost per subscriber and transaction, that also meet the highest availability requirements and at the same time allow for scalability, is a true challenge for a telecommunication systems provider. This thesis describes a research collaboration between Linköping University and Ericsson AB aimed at improving the management, representation, and implementation of capacity requirements in large-scale software engineering. An industrial case study on non-functional requirements in general was conducted to provide the explorative research background, and a richer understanding of identified difficulties was gained by dedicating subsequent investigations to capacity. A best practice inventory within Ericsson regarding the management of capacity requirements and their refinement into design and implementation was carried out. It revealed that capacity requirements crosscut most of the development process and the system lifecycle, thus widening the research context considerably. The interview series resulted in the specification of 19 capacity sub-processes; these were represented as a method plug-in to the OpenUP software development process in order to construct a coherent package of knowledge as well as to communicate the results. They also provide the basis of an empirically grounded anatomy which has been validated in a focus group. The anatomy enables the assessment and stepwise improvement of an organization’s ability to develop for capacity, thus keeping the initial cost low. Moreover, the notion of capacity is discussed and a pragmatic approach for how to support model-based, function-oriented development with capacity information by its annotation in UML models is presented. The results combine into a method for how to improve the treatment of capacity requirements in large-scale software systems.
  •  
21.
  • Borg, Andreas, 1976-, et al. (författare)
  • The Bad Conscience of Requirements Engineering : An Investigation in Real-World Treatment of Non-Functional Requirements
  • 2003
  • Ingår i: Third Conference on Software Engineering Research and Practice in Sweden (SERPS'03), Lund. ; , s. 1-8
  • Konferensbidrag (refereegranskat)abstract
    • Even though non-functional requirements (NFRs) are critical in order to provide software of good quality, the literature of NFRs is relatively sparse. We describe how NFRs are treated in two development organizations, an Ericsson application center and the IT department of the Swedish Meteorological and Hydrological Institute. We have interviewed professionals about problems they face and their ideas on how to improve the situation. Both organizations are aware of NFRs and related problems but their main focus is on functional requirements, primarily because existing methods focus on these. The most tangible problems experienced are that many NFRs remain undiscovered and that NFRs are stated in non-measurable terms. It became clear that the size and structure of the organization require proper distribution of employees’ interest, authority and competence of NFRs. We argue that a feasible solution might be to strengthen the position of architectural requirements, which are more likely to emphasize NFRs.
  •  
22.
  • Borg Hammer, Anne Sofie, et al. (författare)
  • Hypodiploidy has unfavorable impact on survival in pediatric acute myeloid leukemia : An I-BFM Study Group collaboration
  • 2023
  • Ingår i: Blood Advances. - : American Society of Hematology. - 2473-9529 .- 2473-9537. ; 7:6, s. 1045-1055
  • Tidskriftsartikel (refereegranskat)abstract
    • Hypodiploidy, defined as modal numbers (MNs) 45 or lower, has not been independently investigated in pediatric acute myeloid leukemia (AML) but is a well-described high-risk factor in pediatric acute lymphoblastic leukemia. We aimed to characterize and study the prognostic impact of hypodiploidy in pediatric AML. In this retrospective cohort study, we included children below 18 years of age with de novo AML and a hypodiploid karyotype diagnosed from 2000 to 2015 in 14 childhood AML groups from the International Berlin-Frankfurt-Münster (I-BFM) framework. Exclusion criteria comprised constitutional hypodiploidy, monosomy 7, composite karyotype, and t(8;21) with concurring sex chromosome loss. Hypodiploidy occurred in 81 patients (1.3%), with MN 45 (n = 66), 44 (n = 10), and 43 (n = 5). The most frequently lost chromosomes were chromosome 9 and sex chromosomes. Five-year event-free survival (EFS) and overall survival (OS) were 34% and 52%, respectively, for the hypodiploid cohort. Children with MN≤44 (n = 15) had inferior EFS (21%) and OS (33%) compared with children with MN = 45 (n = 66; EFS, 37%; OS, 56%). Adjusted hazard ratios (HRs) were 4.9 (P = .001) and 6.1 (P = .003). Monosomal karyotype or monosomy 9 had particularly poor OS (43% and 15%, respectively). Allogeneic stem cell transplantation (SCT) in first complete remission (CR1) (n = 18) did not mitigate the unfavorable outcome of hypodiploidy (adjusted HR for OS was 1.5; P = .42). We identified pediatric hypodiploid AML as a rare subgroup with an inferior prognosis even in the patients treated with SCT in CR1.
  •  
23.
  • Bosch, Jan, 1967, et al. (författare)
  • Accelerating digital transformation: 10 years of software center
  • 2022
  • Bok (övrigt vetenskapligt/konstnärligt)abstract
    • This book celebrates the 10-year anniversary of Software Center (a collaboration between 18 European companies and five Swedish universities) by presenting some of the most impactful and relevant journal or conference papers that researchers in the center have published over the last decade. The book is organized around the five themes around which research in Software Center is organized, i.e. Continuous Delivery, Continuous Architecture, Metrics, Customer Data and Ecosystems Driven Development, and AI Engineering. The focus of the Continuous Delivery theme is to help companies to continuously build high quality products with the right degree of automation. The Continuous Architecture theme addresses challenges that arise when balancing the need for architectural quality and more agile ways of working with shorter development cycles. The Metrics theme studies and provides insight to understand, monitor and improve software processes, products and organizations. The fourth theme, Customer Data and Ecosystem Driven Development, helps companies make sense of the vast amounts of data that are continuously collected from products in the field. Eventually, the theme of AI Engineering addresses the challenge that many companies struggle with in terms of deploying machine- and deep-learning models in industrial contexts with production quality. Each theme has its own part in the book and each part has an introduction chapter and then a carefully selected reprint of the most important papers from that theme. This book mainly aims at researchers and advanced professionals in the areas of software engineering who would like to get an overview about the achievement made in various topics relevant for industrial large-scale software development and management - and to see how research benefits from a close cooperation between industry and academia.
  •  
24.
  • Broman, David, et al. (författare)
  • How can we make software engineering text books well-founded, up-to-date, and accessible to students?
  • 2011
  • Ingår i: Proceedings of the 24th IEEE-CS Conference on Software Engineering Education and Training (CSEE&T 2011). - : IEEE. - 9781457703492 - 9781457703478 ; , s. 386-390
  • Konferensbidrag (refereegranskat)abstract
    • When teaching software engineering courses it is highly important to have good text books that are well-founded, up-to-date, and easily accessible to students. However, currently available text books on the market are either very broad or highly specialized, making it hard to select appropriate books for specific software engineering courses. Moreover, due to the rapidly changing subject of software engineering, books tend to become obsolete, which makes students hesitate to buy books even if they are part of the listed course literature. In this paper, we briefly explain and discuss an approach of using a web-based system for creating collaborative and peer-reviewed text books that can be customized individually for specific courses. We describe and discuss the proposed system from a use case perspective.
  •  
25.
  • Broman, David, 1977-, et al. (författare)
  • The Company Approach to Software Engineering Project Courses
  • 2012
  • Ingår i: IEEE Transactions on Education. - : Institute of Electrical and Electronics Engineers (IEEE). - 0018-9359 .- 1557-9638. ; 55:4, s. 445-452
  • Tidskriftsartikel (refereegranskat)abstract
    • Teaching larger software engineering project courses at the end of a computing curriculum is a way for students to learn some aspects of real-world jobs in industry. Such courses, often referred to as capstone courses, are effective for learning how to apply the skills they have acquired in, for example, design, test, and configuration management. However, these courses are typically performed in small teams, giving only a limited realistic perspective of problems faced when working in real companies. This paper describes an alternative approach to classic capstone projects, with the aim of being more realistic from an organizational, process, and communication perspective. This methodology, called the company approach, is described by intended learning outcomes, teaching/learning activities, and assessment tasks. The approach is implemented and evaluated in a larger Masters student course.
  •  
26.
  • Carlshamre, Pär (författare)
  • A usability perspective on requirements engineering : from methodology to product development
  • 2001
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • Usability is one of the most important aspects of software. A multitude of methods and techniques intended to support the development of usable systems has been provided, but the impact on industrial software development has been limited. One of the reasons for this limited success is the gap between traditional academic theory generation and commercial practice. Another reason is the gap between usability engineering and established requirements engineering practice. This thesis is based on empirical research and puts a usability focus on three important aspects of requirements engineering: elicitation, specification and release planning. There are two main themes of investigation. The first is concerned with the development and introduction of a usability-oriented method for elicitation and specification of requirements, with an explicit focus on utilizing the skills of technical communicators. This longitudinal, qualitative study, performed in an industrial setting in the first half of the nineties, provides ample evidence in favor of a closer collaboration between technical communicators and system developers. It also provides support for the benefits of a task-oriented approach to requirements elicitation. The results are also reflected upon in a retrospective paper, and the experiences point in the direction of an increased focus on the specification part, in order to bridge the gap between usability engineering and established requirements management practice. The second represents a usability-oriented approach to understanding and supporting release planning in software product development. Release planning is an increasingly important part of requirements engineering, and it is complicated by intricate dependencies between requirements. A survey performed at five different companies gave an understanding of the nature and frequency of these interdependencies.
This knowledge was then turned into the design and implementation of a support tool, with the purpose of provoking a deeper understanding of the release planning task. This was done through a series of cooperative evaluation sessions with release planning experts. The results indicate that, although the tool was considered useful by the experts, the initial understanding of the task was overly simplistic. As a result, a number of design implications are proposed.
  •  
27.
  • Carlshamre, Pär, et al. (författare)
  • An Industrial Survey of Requirements Interdependencies in Software Product Release Planning
  • 2001
  • Ingår i: In Proc. Fifth IEEE Int. Symposium on Requirements Engineering (RE'01). - : IEEE. - 0769511252 ; , s. 84-91
  • Konferensbidrag (refereegranskat)abstract
    • The task of finding an optimal selection of requirements for the next release of a software system is difficult as requirements may depend on each other in complex ways. The paper presents the results from an in-depth study of the interdependencies within 5 distinct sets of requirements, each including 20 high-priority requirements of 5 distinct products from 5 different companies. The results show that: (1) roughly 20% of the requirements are responsible for 75% of the interdependencies; (2) only a few requirements are singular; (3) customer-specific bespoke development tends to include more functionality-related dependencies, whereas market-driven product development has an emphasis on value-related dependencies. Several strategies for reducing the effort needed for identifying and managing interdependencies are outlined. A technique for visualization of interdependencies with the aim of supporting release planning is also discussed. The complexity of requirements interdependency analysis is studied in relation to metrics of requirements coupling. Finally, a number of issues for further research are identified.
  •  
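The interdependencies surveyed above can be made concrete with a small sketch (a toy model, not the authors' method): selecting requirements by value alone breaks down once a chosen requirement REQUIRES another, because the whole dependency closure must fit the effort budget. All requirement names, values, and efforts below are invented for illustration.

```python
# Toy release planning with REQUIRES-style interdependencies.
def closure(req, requires):
    """Transitively expand `req` with everything it depends on."""
    seen, stack = set(), [req]
    while stack:
        r = stack.pop()
        if r not in seen:
            seen.add(r)
            stack.extend(requires.get(r, []))
    return seen

def plan_release(reqs, requires, budget):
    """Greedy heuristic: take dependency closures in order of value."""
    chosen = set()
    for r in sorted(reqs, key=lambda r: reqs[r]["value"], reverse=True):
        group = closure(r, requires) - chosen
        cost = sum(reqs[g]["effort"] for g in group)
        if cost <= budget:
            chosen |= group
            budget -= cost
    return chosen

reqs = {"search": {"value": 9, "effort": 3},
        "index":  {"value": 2, "effort": 2},   # low value, but "search" needs it
        "export": {"value": 5, "effort": 4}}
requires = {"search": ["index"]}
print(sorted(plan_release(reqs, requires, 6)))  # -> ['index', 'search']
```

Note how the low-value "index" requirement is pulled in by its dependent, while the singular "export" requirement no longer fits the budget; this is the kind of coupling effect the survey quantifies.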
28.
  • Dastgeer, Usman, 1986- (författare)
  • Skeleton Programming for Heterogeneous GPU-based Systems
  • 2011
  • Licentiatavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • In this thesis, we address issues associated with programming modern heterogeneous systems, focusing on a special kind of heterogeneous system that includes multicore CPUs and one or more GPUs, called GPU-based systems. We consider the skeleton programming approach to achieve high-level abstraction for efficient and portable programming of these GPU-based systems, and present our work on the SkePU library, which is a skeleton library for these systems. We extend the existing SkePU library with a two-dimensional (2D) data type and skeleton operations and implement several new applications using the newly made skeletons. Furthermore, we consider the algorithmic choice present in SkePU and implement support to specify and automatically optimize the algorithmic choice for a skeleton call on a given platform. To show how to achieve performance, we provide a case study on an optimized GPU-based skeleton implementation for 2D stencil computations and introduce two metrics to maximize resource utilization on a GPU. By devising a mechanism to automatically calculate these two metrics, performance can be retained while porting an application from one GPU architecture to another. Another contribution of this thesis is the implementation of runtime support for the SkePU skeleton library. This is achieved with the help of the StarPU runtime system. Through this implementation, support for dynamic scheduling and load balancing for SkePU skeleton programs is achieved. Furthermore, a capability to do hybrid execution by parallel execution on all available CPUs and GPUs in a system, even for a single skeleton invocation, is developed. SkePU initially supported only data-parallel skeletons. The first task-parallel skeleton (farm) in SkePU is implemented with support for performance-aware scheduling and hierarchical parallel execution by enabling all data-parallel skeletons to be usable as tasks inside the farm construct. Experimental evaluations are carried out and presented for the algorithmic selection, performance portability, dynamic scheduling, and hybrid execution aspects of our work.
  •  
29.
  • de Oliveira Neto, Francisco Gomes, et al. (författare)
  • Improving continuous integration with similarity-based test case selection
  • 2018
  • Ingår i: Proceedings of the 13th International Workshop on Automation of Software Test. - New York : ACM Digital Library. - 0270-5257. - 9781450357432 ; , s. 39-45
  • Konferensbidrag (refereegranskat)abstract
    • Automated testing is an essential component of Continuous Integration (CI) and Delivery (CD), such as scheduling automated test sessions on overnight builds. That allows stakeholders to execute entire test suites and achieve exhaustive test coverage, since running all tests is often infeasible during work hours, i.e., in parallel to development activities. On the other hand, developers also need test feedback from CI servers when pushing changes, even if not all test cases are executed. In this paper we evaluate similarity-based test case selection (SBTCS) on integration-level tests executed on continuous integration pipelines of two companies. We select test cases that maximise diversity of test coverage and reduce feedback time to developers. Our results confirm existing evidence that SBTCS is a strong candidate for test optimisation, by reducing feedback time (up to 92% faster in our case studies) while achieving full test coverage using only information from test artefacts themselves.
  •  
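The diversity-based selection evaluated in the paper above can be sketched roughly as follows: greedily pick the test whose coverage is most dissimilar (here, Jaccard distance) from everything selected so far, until the budget is spent. The test names and coverage sets are invented; the study works on industrial CI test artefacts, not on toy data like this.

```python
# Minimal similarity-based test case selection (SBTCS) sketch.
def jaccard_distance(a, b):
    """1 - |A∩B| / |A∪B|; 0.0 means identical coverage."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def select_diverse(tests, budget):
    """tests: dict name -> set of covered items; returns up to `budget` names."""
    remaining = dict(tests)
    # Seed with the test covering the most items.
    first = max(remaining, key=lambda t: len(remaining[t]))
    selected = [first]
    del remaining[first]
    while remaining and len(selected) < budget:
        # Maximize the minimum distance to the already selected tests.
        best = max(remaining, key=lambda t: min(
            jaccard_distance(remaining[t], tests[s]) for s in selected))
        selected.append(best)
        del remaining[best]
    return selected

tests = {"t1": {"a", "b", "c"},
         "t2": {"a", "b"},   # near-duplicate of t1, so it is skipped
         "t3": {"x", "y"},   # disjoint coverage, so it is preferred
         "t4": {"c", "x"}}
print(select_diverse(tests, 2))  # -> ['t1', 't3']
```

With a budget of two, the near-duplicate test is dropped in favor of one with disjoint coverage, which is how the approach shortens feedback time while keeping coverage diverse.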
30.
  • Gorschek, Tony, et al. (författare)
  • A controlled empirical evaluation of a requirements abstraction model
  • 2007
  • Ingår i: Information and Software Technology. - Newton, MA : Elsevier BV. - 0950-5849 .- 1873-6025. ; 49:7, s. 790-805
  • Tidskriftsartikel (refereegranskat)abstract
    • Requirement engineers in industry are faced with the complexity of handling large amounts of requirements as development moves from traditional bespoke projects towards market-driven development. There is a need for usable and useful models that recognize this reality and support the engineers in a continuous effort of choosing which requirements to accept and which to dismiss off hand using the goals and product strategies put forward by management. This paper presents an evaluation of such a model that is built based on needs identified in industry. The evaluation's primary goal is to test the model's usability and usefulness in a lab environment prior to large scale industry piloting, and is a part of a large technology transfer effort. The evaluation uses 179 subjects from three different Swedish Universities, which is a large portion of the university students educated in requirements engineering in Sweden during 2004 and 2005. The results provide a strong indication that the model is indeed both useful and usable and ready for industry trials.
  •  
32.
  • Jonsson, Leif, et al. (författare)
  • Automated Bug Assignment: Ensemble-based Machine Learning in Large Scale Industrial Contexts
  • 2015
  • Ingår i: Empirical Software Engineering. - : Springer Science and Business Media LLC. - 1573-7616 .- 1382-3256. ; 21:4
  • Tidskriftsartikel (refereegranskat)abstract
    • Bug report assignment is an important part of software maintenance. In particular, incorrect assignments of bug reports to development teams can be very expensive in large software development projects. Several studies propose automating bug assignment techniques using machine learning in open source software contexts, but no study exists for large-scale proprietary projects in industry. The goal of this study is to evaluate automated bug assignment techniques that are based on machine learning classification. In particular, we study the state-of-the-art ensemble learner Stacked Generalization (SG) that combines several classifiers. We collect more than 50,000 bug reports from five development projects from two companies in different domains. We implement automated bug assignment and evaluate the performance in a set of controlled experiments. We show that SG scales to large-scale industrial applications and that it outperforms the use of individual classifiers for bug assignment, reaching prediction accuracies from 50 % to 89 % when large training sets are used. In addition, we show how old training data can decrease the prediction accuracy of bug assignment. We advise industry to use SG for bug assignment in proprietary contexts, using at least 2,000 bug reports for training. Finally, we highlight the importance of not solely relying on results from cross-validation when evaluating automated bug assignment.
  •  
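The stacked generalization idea studied above can be illustrated with a toy sketch: level-0 classifiers each guess a team from a bug report, and a level-1 step combines their votes using weights fitted on held-out data. The teams, keyword rules, and reports here are all invented; the study itself trains standard text classifiers on more than 50,000 industrial bug reports.

```python
# Toy stacked generalization (SG) for bug-to-team assignment.
from collections import Counter

def keyword_clf(team_keywords):
    """Build a level-0 classifier: pick the team with most keyword matches."""
    def predict(report):
        words = set(report.lower().split())
        return max(team_keywords, key=lambda t: len(words & team_keywords[t]))
    return predict

base = [keyword_clf({"ui": {"button", "screen"}, "net": {"socket", "timeout"}}),
        keyword_clf({"ui": {"render", "screen"}, "net": {"packet", "timeout"}})]

def fit_meta(reports, labels):
    """Level-1 step: weight each base classifier by its held-out accuracy."""
    return [sum(clf(r) == y for r, y in zip(reports, labels)) / len(labels)
            for clf in base]

def stacked_predict(weights, report):
    """Combine weighted level-0 votes into one team suggestion."""
    votes = Counter()
    for w, clf in zip(weights, base):
        votes[clf(report)] += w
    return votes.most_common(1)[0][0]

w = fit_meta(["screen render glitch", "socket timeout on retry"], ["ui", "net"])
print(stacked_predict(w, "button timeout on screen"))  # -> ui
```

A real SG setup fits a full meta-classifier over the base learners' prediction vectors rather than simple accuracy weights; the sketch only shows the two-level structure.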
33.
  • Jonsson, Leif, et al. (författare)
  • Automatic Localization of Bugs to Faulty Components in Large Scale Software Systems using Bayesian Classification
  • 2016
  • Ingår i: 2016 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY (QRS 2016). - : IEEE. - 9781509041275 ; , s. 425-432
  • Konferensbidrag (refereegranskat)abstract
    • We suggest a Bayesian approach to the problem of reducing bug turnaround time in large software development organizations. Our approach is to use classification to predict where bugs are located in components. This classification is a form of automatic fault localization (AFL) at the component level. The approach only relies on historical bug reports and does not require detailed analysis of source code or detailed test runs. Our approach addresses two problems identified in user studies of AFL tools. The first problem concerns the trust in which the user can put in the results of the tool. The second problem concerns understanding how the results were computed. The proposed model quantifies the uncertainty in its predictions and all estimated model parameters. Additionally, the output of the model explains why a result was suggested. We evaluate the approach on more than 50,000 bugs.
  •  
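The component-level classification idea can be sketched, under simplifying assumptions, as a multinomial naive Bayes over historical bug reports with Laplace smoothing, returning a posterior per component so the uncertainty of each suggestion stays visible. The components and reports below are invented, and the paper's actual model is a richer Bayesian classifier; this only mirrors the "predict component, report uncertainty" structure.

```python
# Hypothetical component-level fault localization as Bayesian text classification.
import math
from collections import Counter

def train(examples):
    """examples: list of (component, report text) pairs."""
    counts, priors = {}, Counter()
    for comp, text in examples:
        priors[comp] += 1
        counts.setdefault(comp, Counter()).update(text.lower().split())
    return counts, priors

def posterior(counts, priors, report):
    """Posterior P(component | report words), normalized over components."""
    vocab = {w for c in counts.values() for w in c}
    logp = {}
    for comp, prior in priors.items():
        total = sum(counts[comp].values())
        lp = math.log(prior / sum(priors.values()))
        for w in report.lower().split():
            lp += math.log((counts[comp][w] + 1) / (total + len(vocab)))  # Laplace
        logp[comp] = lp
    top = max(logp.values())
    unnorm = {c: math.exp(v - top) for c, v in logp.items()}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

history = [("media", "codec crash on playback"),
           ("media", "playback stutter codec"),
           ("radio", "signal drop handover"),
           ("radio", "handover timer expired")]
counts, priors = train(history)
post = posterior(counts, priors, "crash during playback")
print(max(post, key=post.get))  # -> media
```

Because the output is a normalized distribution rather than a single label, a triager can see how confident the suggestion is, which speaks to the trust problem the abstract mentions.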
34.
  • Jonsson, Leif, 1973- (författare)
  • Machine Learning-Based Bug Handling in Large-Scale Software Development
  • 2018
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • This thesis investigates the possibilities of automating parts of the bug handling process in large-scale software development organizations. The bug handling process is a large part of the mostly manual, and very costly, maintenance of software systems. Automating parts of this time-consuming and very laborious process could save large amounts of time and effort wasted on dealing with bug reports. In this thesis we focus on two aspects of the bug handling process, bug assignment and fault localization. Bug assignment is the process of assigning a newly registered bug report to a design team or developer. Fault localization is the process of finding where in a software architecture the fault causing the bug report should be solved. The main reason these tasks are not automated is that they are considered hard to automate, requiring human expertise and creativity. This thesis examines the possibility of using machine learning techniques for automating at least parts of these processes. We call these automated techniques Automated Bug Assignment (ABA) and Automatic Fault Localization (AFL), respectively. We treat both of these problems as classification problems. In ABA, the classes are the design teams in the development organization. In AFL, the classes consist of the software components in the software architecture. We focus on a high-level fault localization that is suitable to integrate into the initial support flow of large software development organizations. The thesis consists of six papers that investigate different aspects of the AFL and ABA problems. The first two papers are empirical and exploratory in nature, examining the ABA problem using existing machine learning techniques but introducing ensembles into the ABA context. In the first paper we show that, as in many other contexts, ensembles such as the stacked generalizer (or stacking) improve classification accuracy compared to individual classifiers when evaluated using cross-fold validation. The second paper thoroughly explores many aspects of the ABA problem in the context of stacking, such as training set size, age of bug reports, and different types of evaluation. The second paper also expands upon the first in the number of industry bug reports used: roughly 50,000 from two large-scale industrial software development contexts. It is still, as far as we are aware, the largest study on real industry data on this topic to date. The third and sixth papers are theoretical, improving inference in a now classic machine learning technique for topic modeling called Latent Dirichlet Allocation (LDA). We show that, unlike the currently dominating approximate approaches, we can do parallel inference in the LDA model with a mathematically correct algorithm, without sacrificing efficiency or speed. The approaches are evaluated on standard research datasets, measuring various aspects such as sampling efficiency and execution time. Paper four, also theoretical, builds upon the LDA model and introduces a novel supervised Bayesian classification model that we call DOLDA. The DOLDA model deals with textual content as well as structured numeric and nominal inputs in the same model. The approach is evaluated on a new dataset extracted from IMDb, which contains both nominal and textual data. The model is evaluated in two ways: first by accuracy, using cross-fold validation, and second by comparing the simplicity of the final model with that of other approaches. In paper five we empirically study the performance, in terms of prediction accuracy, of the DOLDA model applied to the AFL problem. The DOLDA model was designed with the AFL problem in mind, since that problem has exactly the structure of a mix of nominal and numeric inputs in combination with unstructured text. We show that our DOLDA model exhibits many nice properties, among others interpretability, that the research community has identified as missing in current models for AFL.
  •  
35.
  • Jonsson, Leif, et al. (författare)
  • Towards Automated Anomaly Report Assignment in Large Complex Systems using Stacked Generalization
  • 2012
  • Ingår i: Proceedings of the Fifth International Conference on Software Testing, Verification and Validation (ICST 2012). - : IEEE. - 9781457719066 ; , s. 437-446
  • Konferensbidrag (refereegranskat)abstract
    • Maintenance costs can be substantial for organizations with very large and complex software systems. This paper describes research for reducing anomaly report turnaround time which, if successful, would contribute to reducing maintenance costs while maintaining a good customer perception. Specifically, we are addressing the problem of the manual, laborious, and inaccurate process of assigning anomaly reports to the correct design teams. In large organizations with complex systems this is particularly problematic because the receiver of the anomaly report from the customer may not have detailed knowledge of the whole system. As a consequence, anomaly reports may be wrongly routed around in the organization, causing delays and unnecessary work. We have developed and validated a machine learning approach, based on stacked generalization, to automatically route anomaly reports to the correct design teams in the organization. A research prototype has been implemented and evaluated on roughly one year of real anomaly reports on a large and complex system at Ericsson AB. The prediction accuracy of the automation is approaching that of humans, indicating that the anomaly report handling time could be significantly reduced by using our approach.
  •  
36.
  • Knudson, Dean, et al. (författare)
  • A Preliminary Report on Establishing an Industry Based International Capstone Exchange Program
  • 2012
  • Ingår i: 2012 Capstone Design Conference Proceedings.
  • Konferensbidrag (refereegranskat)abstract
    • This article presents preliminary results on the establishment of an industry based international capstone exchange program. North Dakota State University in the United States, the University of Applied Sciences and Arts Hannover, Germany and Linköping University in Sweden as well as industrial sponsors in each country will be participating. Three models for industry based capstone courses are presented along with characteristics of projects well suited for an international exchange program. At this point, the project exchange is ready to take place in the spring 2012 semester, so results at this time are mainly given regarding how to set up such an exchange. Some early conclusions and areas for potential improvement are included as well. More results from the first instance will be available at the time of the conference.
  •  
37.
  • Knudson, Dean, et al. (author)
  • Global software engineering experience through international capstone project exchanges
  • 2018
  • In: Proceedings - International Conference on Software Engineering. - New York : ACM Digital Library. - 9781450357173 ; pp. 54-58
  • Conference paper (peer-reviewed) abstract
    • Today it is very common for software systems to be built by teams located in more than one country. For example, a project team may be located in the US while the team lead resides in Sweden. How, then, should students be trained for this kind of work? Senior design or capstone projects offer students real-world, hands-on experience, but rarely while working internationally. One reason is that most instructors do not have international business contacts that allow them to find project sponsors in other countries. Another is the fear of having to invest a huge amount of time managing an international project. In this paper we present the general concepts related to "International Capstone Project Exchanges", the basic model behind the exchanges (student teams are led by an industry sponsor residing in a different country) and several alternative models that have been used in practice. We give examples from projects in the US, Germany, Sweden, Australia, and Colombia. We have extended the model beyond software projects to include engineering projects as well as marketing and journalism. We conclude with a description of an International Capstone Project Exchange website that we have developed to aid any university in establishing its own international project exchange.
  •  
38.
  • Lagerberg, Lina, et al. (author)
  • The impact of agile principles and practices on large-scale software development projects : A multiple-case study of two projects at Ericsson
  • 2013
  • In: Empirical Software Engineering and Measurement, 2013. - Los Alamitos : IEEE. - 9780769550565 ; pp. 348-356
  • Conference paper (peer-reviewed) abstract
    • BACKGROUND: Agile software development methods have a number of reported benefits on productivity, project visibility, software quality and other areas, but negative effects are also reported. However, the base of empirical evidence for the claimed effects needs more empirical studies. AIM: The purpose of the research was to contribute empirical evidence on the impact of using agile principles and practices in large-scale, industrial software development. The research focused on impacts within six areas: internal software documentation, knowledge sharing, project visibility, pressure and stress, coordination effectiveness, and productivity. METHOD: The research was carried out as a multiple-case study of two contemporary, large-scale software development projects with different levels of agile adoption at Ericsson. Empirical data was collected through a survey of project members. RESULTS AND CONCLUSIONS: Intentional implementation of agile principles and practices was found to: correlate with a more balanced use of internal software documentation, contribute to knowledge sharing, correlate with increased project visibility and coordination effectiveness, reduce the need for other types of coordination mechanisms, and possibly increase productivity. No correlation with an increase in pressure and stress was found.
  •  
39.
  • Martel-Duguech, Luciana Maria, et al. (author)
  • ESE audit on management of Adult Growth Hormone Deficiency in clinical practice.
  • 2021
  • In: European journal of endocrinology. - 1479-683X. ; 184:2, pp. 323-334
  • Journal article (peer-reviewed) abstract
    • Guidelines recommend that adults with pituitary disease in whom GH therapy is contemplated be tested for GH deficiency (AGHD); however, clinical practice is not uniform. The aims were (1) to record current practice of AGHD management throughout Europe and benchmark it against guidelines, and (2) to evaluate the educational status of healthcare professionals regarding AGHD. An on-line survey was conducted in endocrine centres throughout Europe, where endocrinologists voluntarily completed an electronic questionnaire regarding AGHD patients diagnosed or treated in 2017-2018. Twenty-eight centres from 17 European countries participated, including 2139 AGHD patients, 28% with childhood-onset GHD. The most frequent aetiologies were non-functioning pituitary adenoma (26%), craniopharyngioma (13%) and genetic/congenital mid-line malformations (13%). Diagnosis of GHD was confirmed by a stimulation test in 52% (GHRH+arginine, 45%; insulin tolerance, 42%; glucagon, 6%; GHRH alone and clonidine tests, 7%); in the remaining patients, ≥3 pituitary deficiencies together with low serum IGF-I were diagnostic. The initial GH dose was lower in older patients, but only women <26 years were prescribed a higher dose than men; dose titration was based on normal serum IGF-I, tolerance and side-effects. In one country, AGHD treatment was not approved. Full public reimbursement was not available in four countries, and in another it was available only for childhood-onset GHD. AGHD awareness was low among non-endocrine professionals and healthcare administrators, and postgraduate AGHD curriculum training deserves improvement. Despite guideline recommendations, GH replacement in AGHD is still not available or reimbursed in all European countries; knowledge among professionals and health administrators needs improvement to optimize care of adults with GHD.
  •  
40.
  • Moe, Johan, et al. (author)
  • Using execution trace data to improve distributed systems
  • 2002
  • In: Proceedings International Conference on Software Maintenance. ; pp. 640-648
  • Conference paper (peer-reviewed) abstract
    • Understanding the dynamic behavior of a system is a key determinant of successful system maintenance. This paper contributes two studies, conducted at Ericsson Radio Systems, of the perfective maintenance of large, distributed systems. Our approach is a holistic method based on tracing, and the technical solution for acquiring trace data is to use CORBA interceptors. The method has proven useful in solving a wide variety of problems in design as well as in implementation and test, all at a small cost. Examples of improvements include performance gains, new test cases and the merging of objects.
  •  
41.
  • Mårtensson, Torvald, et al. (author)
  • The Testing Hopscotch Model - Six Complementary Profiles Replacing the Perfect All-Round Tester
  • 2024
  • In: PRODUCT-FOCUSED SOFTWARE PROCESS IMPROVEMENT, PROFES 2023, PT I. - : SPRINGER INTERNATIONAL PUBLISHING AG. - 9783031492655 - 9783031492662 ; pp. 495-510
  • Conference paper (peer-reviewed) abstract
    • Contrasting the idea of a team of all-round testers, the Testing Hopscotch model includes six complementary profiles, tailored for different types of testing. The model is based on 60 interviews and three focus groups with 22 participants. The validation of the Testing Hopscotch model included ten validation workshops with 58 participants from six companies developing large-scale, complex software systems. The validation showed how the model provided valuable insights and promoted good discussions, helping companies identify what they need to do in order to improve testing in each individual case. The results from the validation workshops were confirmed at a cross-company workshop with 33 participants from seven companies and six universities. Given the diverse nature of the seven companies involved in the study, it is reasonable to expect that the Testing Hopscotch model is relevant to a large segment of the software industry. Overall, the validation showed that the model is novel, actionable and useful in practice.
  •  
42.
  • Nilsson, Sara, 1990-, et al. (author)
  • Empirical Study of Requirements Engineering in Cross Domain Development
  • 2018
  • In: DS 92: Proceedings of the DESIGN 2018 15th International Design Conference. - Glasgow : The Design Society. - 9789537738594 ; pp. 857-868
  • Conference paper (other academic/artistic) abstract
    • Shortened time-to-market cycles and increasingly complex systems are just some of the challenges faced by industry. The requirements engineering process needs to adapt to these challenges in order to guarantee that the end product fulfils customer expectations as well as the necessary safety norms. The goal of this paper is to investigate how engineers work in practice with requirements engineering processes at different stages of development, with a particular focus on the use of requirements in cross-domain development, and to compare this with existing theory in the domain.
  •  
43.
  • Patel, Mikael, et al. (author)
  • A Case Study in Assessing and Improving Capacity Using an Anatomy of Good Practice
  • 2007
  • In: The 6th joint meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE 2007), Dubrovnik, Croatia. - New York : ACM. - 9781595938114 ; pp. 509-512
  • Conference paper (peer-reviewed) abstract
    • Capacity in telecommunication systems is highly related to operator revenue. As a vendor of such systems, Ericsson AB is continuously improving its processes for estimating, specifying, tuning, and testing the capacity of delivered systems. In order to systematize process improvements, Ericsson AB and Linköping University joined forces to create an anatomy of Capacity Sub Processes (CSPs). The anatomy is the result of an interview series conducted to document good practices among organizations active in capacity improvement. In this paper we analyze four different development processes in terms of how far they have progressed in process maturity according to our anatomy, and we show possible improvement directions. Three of the processes are currently in use at Ericsson, and the fourth is the OpenUP/Basic process, which we have used as a reference process in earlier research. We also include an analysis of the observed good practices. The result mainly confirms the order of CSPs in the anatomy, but we need to use our information about the maturity of products and the major life cycle in the organization in order to fully explain the role of the anatomy in the planning of improvements.
  •  
44.
  • Rezaei, Hengameh, et al. (author)
  • Identifying and Managing Complex Modules in Executable Software Design Models - Empirical Assessment of a Large Telecom Software Product
  • 2014
  • In: 2014 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement (IWSM-MENSURA), Rotterdam, The Netherlands, October 6-8, 2014. - Los Alamitos : IEEE Computer Society. - 9781479941742 ; 1:1
  • Conference paper (peer-reviewed) abstract
    • Using design models instead of executable code has shown itself to be an efficient way of increasing the abstraction level of software development. However, applying established code-based software engineering methods to design models can be a challenge: due to the different abstraction levels, the same metrics as for code are not applicable to design models. One of the practical challenges in using metrics at the model level is applying complexity-prediction formulas developed using code-based metrics to design models. The existing formulas do not apply, as they do not take into consideration the behavior part of the models, e.g. state charts. In this paper we address this challenge by conducting a case study on one of the large telecom products at Ericsson, with the goal of identifying which metrics can predict complex, hard-to-understand and hard-to-maintain software modules based on their design models. We use both statistical methods, such as regression, to build prediction formulas and qualitative interviews to codify expert designers' perception of which software modules are complex. The results of this case study show that measures such as the number of non-self-transitions, transitions per state and state depth can be combined to identify software units that are perceived as complex by expert designers. Our conclusion is that these metrics can be used in other companies to predict complex modules, but the coefficients should be recalculated per product to increase prediction accuracy.
  •  
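The metric-based complexity prediction in the abstract above amounts to fitting a regression over state-machine metrics. The sketch below fits an ordinary least-squares model from three such metrics (non-self-transitions, transitions per state, state depth) to an expert complexity rating. The data points, and therefore the fitted coefficients, are invented for illustration; as the abstract itself notes, coefficients should be recalculated per product.

```python
# Hypothetical sketch of the regression approach described in the abstract.
# Features per module: (non-self-transitions, transitions per state, state depth).

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (A is small and square)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[col][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_linear(X, y):
    """Ordinary least squares via the normal equations X^T X w = X^T y,
    with an intercept column prepended."""
    Xb = [[1.0] + list(row) for row in X]
    k = len(Xb[0])
    XtX = [[sum(r[i] * r[j] for r in Xb) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xb, y)) for i in range(k)]
    return solve(XtX, Xty)

# Invented training data: module metrics -> expert complexity rating (1-10).
X = [(12, 1.5, 2), (40, 3.2, 4), (8, 1.1, 1), (55, 4.0, 5), (20, 2.0, 3)]
y = [2.0, 7.0, 1.0, 9.0, 4.0]
w = fit_linear(X, y)
predict = lambda m: w[0] + sum(wi * xi for wi, xi in zip(w[1:], m))
print(round(predict((30, 2.5, 3)), 1))
```

Using pure-stdlib normal equations keeps the example dependency-free; in practice one would fit and validate the model with a statistics package and hold-out modules, and refit the coefficients for each product.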
45.
  •  
46.
  • Sandahl, Kristian, 1959- (author)
  • Case studies in knowledge acquisition, migration and user acceptance of expert systems
  • 1987
  • Licentiate thesis (other academic/artistic) abstract
    • In recent years, expert systems technology has become commercially mature, but widespread delivery of systems in regular use is still slow. This thesis discusses three main difficulties in the development and delivery of expert systems, namely: (1) the knowledge acquisition bottleneck, i.e. the problem of formalizing the expert knowledge into a computer-based representation; (2) the migration problem, where we argue that the different requirements on a development environment and a delivery environment call for systematic methods to transfer knowledge bases between the environments; and (3) the user acceptance barrier, where we believe that user interface issues and concerns for a smooth integration into the end-user's working environment play a crucial role for the successful use of expert systems. In this thesis, each of these areas is surveyed and discussed in the light of experience gained from a number of expert system projects performed by us since 1983. Two of these projects, a spot-welding robot configuration system and an antibody analysis advisor, are presented in greater detail in the thesis.
  •  
47.
  •  
48.
  • Ståhl, Daniel, et al. (author)
  • An Eco-System Approach to Project-Based Learning in Software Engineering Education
  • 2022
  • In: IEEE Transactions on Education. - : IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC. - 0018-9359 .- 1557-9638. ; 65:4, pp. 514-523
  • Journal article (peer-reviewed) abstract
    • Contribution: This article identifies the participation of external stakeholders as a key contributing factor for positive outcomes in project-based software engineering courses. A model of overlapping virtuous circles of lasting positive impact on both stakeholders and students from such courses is proposed. Background: Project-based courses are widespread in software engineering education, and there are numerous designs for such courses presented in the literature. The needs and motivations of external stakeholders, from industry and government sectors, in these courses have received limited attention in related work. Intended Outcomes: A course design that prepares students for graduate-level studies and professional life, through close proximity to external stakeholders in a highly realistic setting, working on "live" projects. Application Design: Building on a long tradition of university-industry collaboration dating back to 1977, as well as findings in related work, students are assigned to live projects proposed by external stakeholders from industry and government, working in close proximity with their respective stakeholders throughout the project. The course places great emphasis on coaching over instruction, treating the many unforeseen challenges of such projects as a valuable part of the learning experience. Findings: Based on interviews with stakeholders and students, it is found that stakeholder and student outcomes are interdependent and build upon one another, and that positive outcomes for both groups are necessary for the sustainability of the course over multiple iterations.
  •  
49.
  • Svahnberg, Mikael, et al. (author)
  • Perspectives on Requirements Understandability : for Whom Does the Teacher's Bell Toll?
  • 2008
  • Conference paper (peer-reviewed) abstract
    • Software development decision makers use many different information sources as a basis for their decisions. One of these sources is the requirements specification, which is used in a large number of processes throughout the software development cycle. In order to make good decisions, the quality and completeness of the available information are important. Hence, requirements must be written in a way that is understandable to the different decision makers. However, requirements are rarely written with an explicit perception of how to make them understandable for different target usages. In this study we investigate the implicit assumptions of current and future requirements engineers and their teachers regarding which usages they perceive as most important when creating requirements. This is contrasted with industrial viewpoints on the relative importance of different requirements usages. The results indicate that the teachers and future requirements engineers have a strong focus on in-project perspectives, and very little in common with the perspectives of industry managers. Thus, we are training students to serve as software developers, not as software engineering managers.
  •  
50.
  •  
Publication type
conference paper (31)
journal article (12)
doctoral thesis (7)
licentiate thesis (4)
other publication (2)
edited collection (editorship) (1)
report (1)
book (1)
Content type
peer-reviewed (41)
other academic/artistic (18)
Author/editor
Sandahl, Kristian (30)
Sandahl, Kristian, P ... (7)
Ahmad, Azeem (6)
Leifler, Ola (6)
Börstler, Jürgen (4)
Eldh, Sigrid (3)
Broman, David, 1977- (3)
Eriksson, Magnus (3)
de Oliveira Neto, Fr ... (3)
Carlshamre, Pär (3)
Jonsson, Leif (3)
Staron, Miroslaw, 19 ... (2)
Zeller, Bernward (2)
Hasle, Henrik (2)
Abrahamsson, Jonas (2)
Jahnukainen, Kirsi (2)
Carlson, Jan (2)
Karlsson, J. (2)
Enoiu, Eduard Paul (2)
Ahmad, Azeem, 1984- (2)
Kjeldsen, Eigil (2)
Gustafsson, Mats (1)
Eriksson, M (1)
Olsson, J. (1)
Ohlsson, Kjell (1)
Ha, Shau-Yin (1)
Jonsson, Olafur G. (1)
Lausen, Birgitte (1)
De Moerloose, Barbar ... (1)
Palle, Josefine, 196 ... (1)
Goth, Miklos I (1)
Bosch, Jan (1)
Holmström Olsson, He ... (1)
Staron, Miroslaw (1)
Villani, Mattias (1)
Johannsson, Gudmundu ... (1)
Shi, Zhixiang (1)
Leifler, Ola, 1978- (1)
Mäntylä, Mika, Profe ... (1)
Held, Erik Norrestam (1)
Berglund, Aseel, 197 ... (1)
Kovacs, Gabor (1)
Magnusson, Måns (1)
Höybye, Charlotte (1)
Nilsson, Sara, 1990 (1)
Hägglund, Sture (1)
Eriksson, Henrik (1)
Bosch, Jan, 1967 (1)
Christ, Emanuel (1)
Moe, Johan (1)
Higher education institution
Linköpings universitet (48)
Kungliga Tekniska Högskolan (8)
Göteborgs universitet (5)
Umeå universitet (4)
Mälardalens universitet (4)
Blekinge Tekniska Högskola (3)
Karolinska Institutet (2)
Uppsala universitet (1)
Lunds universitet (1)
Chalmers tekniska högskola (1)
RISE (1)
VTI - Statens väg- och transportforskningsinstitut (1)
Language
English (58)
Swedish (1)
Research subject (UKÄ/SCB)
Natural sciences (43)
Social sciences (5)
Engineering and technology (3)
Medicine and health sciences (3)
