SwePub
Search the SwePub database


Result list for the search "WFRF:(Felderer Michael 1978)"

Search: WFRF:(Felderer Michael 1978)

  • Result 1-50 of 51
1.
  • Usman, Muhammad, 1978-, et al. (author)
  • Compliance Requirements in Large-Scale Software Development : An Industrial Case Study
  • 2020
  • In: Lecture Notes in Computer Science. - Cham : Springer-Verlag Tokyo Inc. - 9783030641474 ; , s. 385-401
  • Conference paper (peer-reviewed)abstract
    • Regulatory compliance is a well-studied area, including research on how to model, check, analyse, enact, and verify compliance of software. However, while the theoretical body of knowledge is vast, empirical evidence on challenges with regulatory compliance, as faced by industrial practitioners particularly in the Software Engineering domain, is still lacking. In this paper, we report on an industrial case study which aims at providing insights into common practices and challenges with checking and analysing regulatory compliance, and we discuss our insights in direct relation to the state of reported evidence. Our study is performed at Ericsson AB, a large telecommunications company, which must comply with both locally and internationally governing regulatory entities and standards such as GDPR. The main contributions of this work are empirical evidence on challenges experienced by Ericsson that complements the existing body of knowledge on regulatory compliance. © 2020, Springer Nature Switzerland AG.
  •  
2.
  • Fagerholm, F., et al. (author)
  • Cognition in Software Engineering: A Taxonomy and Survey of a Half-Century of Research
  • 2022
  • In: ACM Computing Surveys. - : Association for Computing Machinery (ACM). - 0360-0300 .- 1557-7341. ; 54:11
  • Journal article (peer-reviewed)abstract
    • Cognition plays a fundamental role in most software engineering activities. This article provides a taxonomy of cognitive concepts and a survey of the literature since the beginning of the Software Engineering discipline. The taxonomy comprises the top-level concepts of perception, attention, memory, cognitive load, reasoning, cognitive biases, knowledge, social cognition, cognitive control, and errors, and procedures to assess them both qualitatively and quantitatively. The taxonomy provides a useful tool to filter existing studies, classify new studies, and support researchers in getting familiar with a (sub) area. In the literature survey, we systematically collected and analysed 311 scientific papers spanning five decades and classified them using the cognitive concepts from the taxonomy. Our analysis shows that the most developed areas of research correspond to the four life-cycle stages, software requirements, design, construction, and maintenance. Most research is quantitative and focuses on knowledge, cognitive load, memory, and reasoning. Overall, the state of the art appears fragmented when viewed from the perspective of cognition. There is a lack of use of cognitive concepts that would represent a coherent picture of the cognitive processes active in specific tasks. Accordingly, we discuss the research gap in each cognitive concept and provide recommendations for future research.
  •  
3.
  • Adigun, Jubril Gbolahan, et al. (author)
  • Collaborative Artificial Intelligence Needs Stronger Assurances Driven by Risks
  • 2022
  • In: Computer. - : IEEE Computer Society. - 0018-9162 .- 1558-0814. ; 55:3, s. 52-63
  • Journal article (peer-reviewed)abstract
    • Collaborative artificial intelligence systems (CAISs) aim to work with humans in a shared space to achieve a common goal, but this can pose hazards that could harm human beings. We identify emerging problems in this context and report our vision of and progress toward a risk-driven assurance process for CAISs.
  •  
4.
  • Auer, Florian, et al. (author)
  • Controlled experimentation in continuous experimentation : Knowledge and challenges
  • 2021
  • In: Information and Software Technology. - : Elsevier B.V. - 0950-5849 .- 1873-6025. ; 134
  • Journal article (peer-reviewed)abstract
    • Context: Continuous experimentation and A/B testing is an established industry practice that has been researched for more than 10 years. Our aim is to synthesize the conducted research. Objective: We wanted to find the core constituents of a framework for continuous experimentation and the solutions that are applied within the field. Finally, we were interested in the challenges and benefits reported for continuous experimentation. Methods: We applied forward snowballing on a known set of papers and identified a total of 128 relevant papers. Based on this set of papers we performed two qualitative narrative syntheses and a thematic synthesis to answer the research questions. Results: The framework constituents for continuous experimentation include experimentation processes as well as supportive technical and organizational infrastructure. The solutions found in the literature were synthesized into nine themes, e.g. experiment design, automated experiments, or metric specification. Concerning the challenges of continuous experimentation, the analysis identified cultural, organizational, business, technical, statistical, ethical, and domain-specific challenges. Further, the study concludes that the benefits of experimentation are mostly implicit in the studies. Conclusion: The research on continuous experimentation has yielded a large body of knowledge on experimentation. The synthesis of published research presented within includes recommended infrastructure and experimentation process models, guidelines to mitigate the identified challenges, and what problems the various published solutions solve. © 2021 The Authors
  •  
5.
  • Auer, Florian, et al. (author)
  • From monolithic systems to Microservices : An assessment framework
  • 2021
  • In: Information and Software Technology. - : Elsevier B.V. - 0950-5849 .- 1873-6025. ; 137
  • Journal article (peer-reviewed)abstract
    • Context: Re-architecting monolithic systems with Microservices-based architecture is a common trend. Various companies are migrating to Microservices for different reasons. However, making such an important decision as re-architecting an entire system must be based on real facts and not only on gut feelings. Objective: The goal of this work is to propose an evidence-based decision support framework for companies that need to migrate to Microservices, based on the analysis of a set of characteristics and metrics they should collect before re-architecting their monolithic system. Method: We conducted a survey in the form of interviews with professionals to derive the assessment framework based on Grounded Theory. Results: We identified a set consisting of information and metrics that companies can use to decide whether to migrate to Microservices or not. The proposed assessment framework, based on the aforementioned metrics, could be useful for companies if they need to migrate to Microservices and do not want to run the risk of failing to consider some important information. © 2021 The Author(s)
  •  
6.
  • Auer, Florian, et al. (author)
  • Towards defining a microservice migration framework
  • 2018
  • In: ACM International Conference Proceeding Series. - New York, NY, USA : Association for Computing Machinery.
  • Conference paper (peer-reviewed)abstract
    • Microservices are more and more popular. As a result, some companies started to believe that microservices are the solution to all of their problems and rush to adopt microservices without sufficient knowledge about the impacts. Most of the time they expect to decrease their maintenance effort or to ease the deployment process. However, re-architecting a system to microservices is not always beneficial. In this work we propose a work-plan to identify a decision framework that supports practitioners in the understanding of possible migration based benefits and issues. This will lead to more reasoned decisions and mitigate the risk of migration. © 2018 Copyright held by the owner/author(s).
  •  
7.
  • Beer, Armin, et al. (author)
  • Measuring and improving testability of system requirements in an industrial context by applying the goal question metric approach
  • 2018
  • In: Proceedings - International Conference on Software Engineering. - New York, NY, USA : IEEE Computer Society. - 9781450357494 ; , s. 25-32
  • Conference paper (peer-reviewed)abstract
    • Testing is subject to two basic constraints, namely cost and quality. The cost depends on the efficiency of the testing activities as well as their quality and testability. The authors' practical experience in large-scale systems shows that if the requirements are adapted iteratively or the architecture is altered, testability decreases. However, what is often lacking is a root cause analysis of the testability degradations and the introduction of improvement measures during software development. In order to introduce agile practices into the rigid strategy of the V-model, good testability of software artifacts is vital. So testability is also the bridgehead towards agility. In this paper, we report on a case study in which we measure and improve testability on the basis of the Goal Question Metric Approach. © 2018 ACM.
  •  
8.
  • Bendler, Daniel, et al. (author)
  • Competency Models for Information Security and Cybersecurity Professionals : Analysis of Existing Work and a New Model
  • 2023
  • In: ACM Transactions on Computing Education. - : Association for Computing Machinery (ACM). - 1946-6226. ; 23:2
  • Journal article (peer-reviewed)abstract
    • Competency models are widely adopted frameworks that are used to improve human resource functions and education. However, the characteristics of competency models related to the information security and cybersecurity domains are not well understood. To bridge this gap, this study investigates the current state of competency models related to the security domain through qualitative content analysis. Additionally, based on the competency model analysis, an evidence-based competency model is proposed. Examining the content of 27 models, we found that the models can benefit target groups in many different ways, ranging from policymaking to performance management. Owing to their many uses, competency models can arguably help to narrow the skills gap from which the profession is suffering. Nonetheless, the models have their shortcomings. First, the models do not cover all of the topics specified by the Cybersecurity Body of Knowledge ( i.e., no model is complete). Second, by omitting social, personal, and methodological competencies, many models reduce the competency profile of a security expert to professional competencies. Addressing the limitations of previous work, the proposed competency model provides a holistic view of the competencies required by security professionals for job achievement and can potentially benefit both the education system and the labor market. To conclude, the implications of the competency model analysis and use cases of the proposed model are discussed.
  •  
9.
  • Doležel, Michal, et al. (author)
  • Organizational patterns between developers and testers : Investigating testers' autonomy and role identity
  • 2018
  • In: ICEIS 2018 - Proceedings of the 20th International Conference on Enterprise Information Systems. - : SciTePress. - 9789897582981 ; , s. 336-344
  • Conference paper (peer-reviewed)abstract
    • This paper deals with organizational patterns (configurations, set-ups) between developers/programmers and testers. We first discuss the key differences between these two Information Systems Development (ISD) occupations. Highlighting the origin of inevitable disagreements between them, we reflect on the nature of the software testing field, which is currently undergoing an essential change under the increasing influence of agile ISD approaches and methods. We also deal with the ongoing professionalization of software testing. More specifically, we propose that the concept of role identity anchored in (social) identity theory can be applied to the profession of software testers, and their activities studied accordingly. Furthermore, we conceptualize three organizational patterns (i.e. isolated testers, embedded testers, and eradicated testers) based on our selective literature review of research and practice sources in Information Systems (IS) and Software Engineering (SE) disciplines. After summarizing the key industrial challenges of these patterns, we conclude the paper by calling for more research evidence that would demonstrate the viability of the recently introduced novel organizational models. We also argue that especially the organizational model of "combined software engineering", where the roles of programmers and testers are reunited into a single role of "software engineer", deserves closer attention from IS and SE researchers in the future. © 2018 by SCITEPRESS - Science and Technology Publications, Lda.
  •  
10.
  • Felderer, Michael, 1978-, et al. (author)
  • A testability analysis framework for non-functional properties
  • 2018
  • In: 2018 IEEE 11th International Conference on Software Testing, Verification and Validation Workshops (ICSTW). - : Institute of Electrical and Electronics Engineers Inc. - 9781538663523 ; , s. 54-58
  • Conference paper (peer-reviewed)abstract
    • This paper presents the background, basic steps, and an example of a testability analysis framework for non-functional properties.
  •  
11.
  • Felderer, Michael, 1978-, et al. (author)
  • Artificial Intelligence Techniques in System Testing
  • 2023
  • In: Optimising the software development process with artificial intelligence. - : Springer Science and Business Media Deutschland GmbH. - 9789811999475 - 9789811999482 ; , s. 221-240
  • Book chapter (other academic/artistic)abstract
    • System testing is essential for developing high-quality systems, but the degree of automation in system testing is still low. Therefore, there is high potential for Artificial Intelligence (AI) techniques like machine learning, natural language processing, or search-based optimization to improve the effectiveness and efficiency of system testing. This chapter presents where and how AI techniques can be applied to automate and optimize system testing activities. First, we identify different system testing activities (i.e., test planning and analysis, test design, test execution, and test evaluation) and indicate how AI techniques can be applied to automate and optimize these activities. Furthermore, we present an industrial case study on test case analysis, where AI techniques are applied to encode and group natural-language test cases into clusters of similar test cases for cluster-based test optimization. Finally, we discuss the levels of autonomy of AI in system testing.
  •  
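The case study in the record above describes encoding natural-language test cases and grouping similar ones into clusters as a basis for cluster-based test optimization. As a purely illustrative sketch of that general idea, not the authors' tooling, the following Python snippet uses TF-IDF encoding and k-means; the test-case texts and the cluster count are invented assumptions.

    # Illustrative sketch only: encode free-text test cases and group similar ones.
    # Data and parameter choices are invented, not taken from the paper.
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    test_cases = [
        "Verify login succeeds with valid credentials",
        "Verify login fails with an invalid password",
        "Verify login is locked after three failed attempts",
        "Check that the invoice PDF is generated after checkout",
        "Check that the invoice total matches the cart total",
        "Verify the password reset email reaches the registered address",
    ]

    # Encode the natural-language descriptions as TF-IDF vectors.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(test_cases)

    # Group similar descriptions; the number of clusters is a tuning choice.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

    for cluster in sorted(set(labels)):
        print(f"Cluster {cluster}:")
        for text, label in zip(test_cases, labels):
            if label == cluster:
                print("  -", text)

A test optimization step could then, for example, select one representative case per cluster before running the full suite.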
12.
  • Felderer, Michael, 1978-, et al. (author)
  • Comprehensibility of system models during test design : A controlled experiment comparing UML activity diagrams and state machines
  • 2019
  • In: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367. ; 27:1, s. 125-147
  • Journal article (peer-reviewed)abstract
    • UML activity diagrams and state machines are both used for modeling system behavior from the user perspective and are frequently the basis for deriving system test cases. In practice, system test cases are often derived manually from UML activity diagrams or state machines. For this task, comprehensibility of the respective models is essential and a relevant question for practice to support model selection and design, as well as subsequent test derivation. Therefore, the objective of this paper is to compare the comprehensibility of UML activity diagrams and state machines during manual test case derivation. We investigate the comprehensibility of UML activity diagrams and state machines in a controlled student experiment. Three measures for comprehensibility have been investigated: (1) the self-assessed comprehensibility, (2) the actual comprehensibility measured by the correctness of answers to comprehensibility questions, and (3) the number of errors made during test case derivation. The experiment was performed and internally replicated with 84 participants overall, divided into three groups at two institutions. Our experiment indicates that activity diagrams are more comprehensible but also more error-prone with regard to manual test case derivation, and we discuss how these results can improve system modeling and test case design.
  •  
13.
  • Felderer, Michael, 1978-, et al. (author)
  • Formal methods in industrial practice - Bridging the gap (track summary)
  • 2018
  • In: Lecture Notes in Computer Science. - Cham : Springer Verlag. - 9783030034269 ; , s. 77-81
  • Conference paper (peer-reviewed)abstract
    • For many decades already, formal methods have been considered the way forward to help the software industry make more reliable and trustworthy software. However, despite this strong belief and many individual success stories, no real change in industrial software development seems to be happening. In fact, the software industry is moving fast forward itself, and the gap between what formal methods can achieve and daily software development practice does not seem to be getting smaller (and might even be growing).
  •  
14.
  •  
15.
  • Felderer, Michael, 1978-, et al. (author)
  • Introduction to the Special Issue on value and waste in software engineering
  • 2022
  • In: Information and Software Technology. - : Elsevier B.V. - 0950-5849 .- 1873-6025. ; 144
  • Journal article (peer-reviewed)abstract
    • In the context of software engineering, “value” and “waste” can mean different things to different stakeholders. While traditionally value and waste have been considered from a business or economic point of view, there has been a trend in recent years towards a broader perspective that also includes wider human and societal values. This Special Issue explores value and waste aspects in all areas of software engineering, including identifying, quantifying, reasoning about, and representing value and waste, driving value and avoiding waste, and managing value and waste. In this editorial we provide an introduction to the topic and provide an overview of the contributions included in this Special Issue. © 2021
  •  
16.
  • Felderer, Michael, 1978- (author)
  • Risk-based Software Quality and Security Engineering in Data-intensive Environments (Invited Keynote)
  • 2018
  • In: Future Data and Security Engineering, FDSE 2018. - Cham : Springer International Publishing AG. - 9783030031923 ; , s. 12-17
  • Conference paper (peer-reviewed)abstract
    • The concept of risk as a measure for the potential of gaining or losing something of value has successfully been applied in software quality engineering for years, e.g., for risk-based test case prioritization, and in security engineering, e.g., for security requirements elicitation. In practice, in both software quality engineering and security engineering, risks are typically assessed manually, which tends to be subjective, non-deterministic, error-prone and time-consuming. This often leads to the situation that risks are not explicitly assessed at all and further prevents the high potential of assessed risks to support decisions from being exploited. However, in modern data-intensive environments, e.g., open online environments, continuous software development or IoT, the online, system or development environments continuously deliver data, which makes it possible to automatically assess and utilize software and security risks. In this paper we first discuss the concept of risk in software quality and security engineering. Then, we provide two current examples from software quality engineering and security engineering, where data-driven risk assessment is a key success factor, i.e., risk-based continuous software quality engineering in continuous software development and risk-based security data extraction and processing in the open online web.
  •  
17.
  •  
18.
  • Felderer, Michael, 1978-, et al. (author)
  • The Evolution of Empirical Methods in Software Engineering
  • 2020
  • In: Contemporary Empirical Methods in Software Engineering. - Cham : Springer Nature. - 9783030324889 ; , s. 1-24
  • Book chapter (peer-reviewed)abstract
    • Empirical methods like experimentation have become a powerful means to drive the field of software engineering by creating scientific evidence on software development, operation, and maintenance, but also by supporting practitioners in their decision-making and learning. Today empirical methods are fully applied in software engineering. However, they have developed in several iterations since the 1960s. In this chapter we tell the history of empirical software engineering and present the evolution of empirical methods in software engineering in five iterations, i.e., (1) mid-1960s to mid-1970s, (2) mid-1970s to mid-1980s, (3) mid-1980s to end of the 1990s, (4) the 2000s, and (5) the 2010s. We present the five iterations of the development of empirical software engineering mainly from a methodological perspective and additionally take key papers, venues, and books, which are covered in chronological order in a separate section on recommended further readings, into account. We complement our presentation of the evolution of empirical software engineering by presenting the current situation and an outlook in Sect. 4 and the available books on empirical software engineering. Furthermore, based on the chapters covered in this book we discuss trends on contemporary empirical methods in software engineering related to the plurality of research methods, human factors, data collection and processing, aggregation and synthesis of evidence, and impact of software engineering research.
  •  
19.
  • Foidl, Harald, et al. (author)
  • Data Smells : Categories, Causes and Consequences, and Detection of Suspicious Data in AI-based Systems
  • 2022
  • In: Proceedings - 1st International Conference on AI Engineering - Software Engineering for AI, CAIN 2022. - New York, NY, USA : Institute of Electrical and Electronics Engineers (IEEE). - 9781450392754 ; , s. 229-239
  • Conference paper (peer-reviewed)abstract
    • High data quality is fundamental for today's AI-based systems. However, although data quality has been an object of research for decades, there is a clear lack of research on potential data quality issues (e.g., ambiguous, extraneous values). These kinds of issues are latent in nature and thus often not obvious. Nevertheless, they can be associated with an increased risk of future problems in AI-based systems (e.g., technical debt, data-induced faults). As a counterpart to code smells in software engineering, we refer to such issues as Data Smells. This article conceptualizes data smells and elaborates on their causes, consequences, detection, and use in the context of AI-based systems. In addition, a catalogue of 36 data smells divided into three categories (i.e., Believability Smells, Understandability Smells, Consistency Smells) is presented. Moreover, the article outlines tool support for detecting data smells and presents the result of an initial smell detection on more than 240 real-world datasets. 
  •  
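The record above catalogues "data smells", i.e., suspicious but not formally invalid data that can hint at future problems in AI-based systems. As a minimal, hypothetical sketch of what automated smell detection can look like (the column names, placeholder strings, and sentinel values below are assumptions, not the paper's detection tooling):

    # Illustrative sketch only: flag suspicious ("smelly") values in a tabular dataset.
    import pandas as pd

    df = pd.DataFrame({
        "age":   [34, 29, 999, 41, -1],            # 999 and -1 look like placeholder codes
        "city":  ["Berlin", "berlin ", "N/A", "Oslo", "Oslo"],
        "email": ["a@x.org", "b@x.org", "", "d@x.org", "d@x.org"],
    })

    def detect_smells(frame):
        findings = []
        for col in frame.columns:
            series = frame[col]
            if series.dtype == object:
                # Believability smells: placeholder-like strings and stray whitespace.
                placeholders = series.isin(["N/A", "unknown", "", "?"])
                if placeholders.any():
                    findings.append(f"{col}: {placeholders.sum()} placeholder-like value(s)")
                if (series.astype(str) != series.astype(str).str.strip()).any():
                    findings.append(f"{col}: values with leading/trailing whitespace")
            else:
                # Suspicious sentinel numbers that often silently encode 'missing'.
                sentinels = series.isin([-1, 999, 9999])
                if sentinels.any():
                    findings.append(f"{col}: {sentinels.sum()} sentinel-like value(s)")
        return findings

    for finding in detect_smells(df):
        print(finding)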
20.
  • Fucci, Davide, 1985-, et al. (author)
  • Evaluating software security maturity using OWASP SAMM : Different approaches and stakeholders perceptions
  • 2024
  • In: Journal of Systems and Software. - : Elsevier. - 0164-1212 .- 1873-1228. ; 214
  • Journal article (peer-reviewed)abstract
    • Background: Recent years have seen a surge in cyber-attacks, which can be prevented or mitigated using software security activities. OWASP SAMM is a maturity model providing a versatile way for companies to assess their security posture and plan for improvements. Objective: We perform an initial SAMM assessment in collaboration with a company in the financial domain. Our objective is to assess a holistic inventory of the company security-related activities, focusing on how different roles perform the assessment and how they perceive the instrument used in the process. Methodology: We perform a case study to collect data using SAMM in a lightweight and novel manner through assessment using an online survey with 17 participants and a focus group with seven participants. Results: We show that different roles perceive maturity differently and that the two assessments deviate only for specific practices making the lightweight approach a viable and efficient solution in industrial practice. Our results indicate that the questions included in the SAMM assessment tool are answered easily and confidently across most roles. Discussion: Our results suggest that companies can productively use a lightweight SAMM assessment. We provide nine lessons learned for guiding industrial practitioners in the evaluation of their current security posture as well as for academics wanting to utilize SAMM as a research tool in industrial settings. Editor's note: Open Science material was validated by the Journal of Systems and Software Open Science Board. © 2024 The Author(s)
  •  
21.
  • Gadner, Daniel, et al. (author)
  • The Collective Process Framework DTScrum for Integrating Design Thinking into Scrum
  • 2022
  • In: Design Thinking for Software Engineering. - Cham : Springer. - 9783030905941 ; , s. 85-101
  • Book chapter (peer-reviewed)abstract
    • The rapid progression of technological capabilities and fast-changing customer demand force today’s firms to adapt to a volatile environment and react quickly to change. Thus, the agile software development framework Scrum has already become a generally accepted framework for delivering work in small but consumable increments at a fast pace. One facet often neglected is that approaches such as Scrum are not yet able to cope with ill-defined problems. That is to say, while software development approaches aim at developing software products iteratively and incrementally, we often still need to shift our attention first to framing the actual problem. In this context, one design discipline can help to unveil the real problem, define it, and put it into a clear customer requirement. This discipline is better known as Design Thinking and originates from the search to complement the arts and sciences. Design Thinking utilizes knowledge from both professions alike, but in ways that are peculiarly adapted to the problems of the digital age. In recent years, it has also received much attention in the Software Engineering community. Despite showing obvious similarities with agile software development, little is yet known about how to make effective use of Design Thinking in the context of agile approaches. In this contribution, we depict how the basic principles and concepts of Design Thinking and Scrum cohere on a conceptual level while addressing the various, and to some extent competing, views and needs emerging from the professional environment. Important here are the synergies between problem understanding, reflected by Design Thinking, and problem-solving, reflected by Scrum.
  •  
22.
  • Garousi, Vahid, et al. (author)
  • A survey on software testability
  • 2019
  • In: Information and Software Technology. - : Elsevier. - 0950-5849 .- 1873-6025. ; 108, s. 35-64
  • Journal article (peer-reviewed)abstract
    • Context: Software testability is the degree to which a software system or a unit under test supports its own testing. To predict and improve software testability, a large number of techniques and metrics have been proposed by both practitioners and researchers in the last several decades. Reviewing and getting an overview of the entire state-of-the-art and state-of-the-practice in this area is often challenging for a practitioner or a new researcher. Objective: Our objective is to summarize the body of knowledge in this area and to benefit the readers (both practitioners and researchers) in preparing, measuring and improving software testability. Method: To address the above need, the authors conducted a survey in the form of a systematic literature mapping (classification) to find out what we as a community know about this topic. After compiling an initial pool of 303 papers, and applying a set of inclusion/exclusion criteria, our final pool included 208 papers (published between 1982 and 2017). Results: The area of software testability has been comprehensively studied by researchers and practitioners. Approaches for measurement of testability and improvement of testability are the most-frequently addressed in the papers. The two most often mentioned factors affecting testability are observability and controllability. Common ways to improve testability are testability transformation, improving observability, adding assertions, and improving controllability. Conclusion: This paper serves both researchers and practitioners as an "index" to the vast body of knowledge in the area of testability. The results could help practitioners measure and improve software testability in their projects. To assess potential benefits of this review paper, we shared its draft version with two of our industrial collaborators. They stated that they found the review useful and beneficial in their testing activities. Our results can also benefit researchers in observing the trends in this area and identifying the topics that require further investigation.
  •  
23.
  • Garousi, Vahid, et al. (author)
  • Aligning software engineering education with industrial needs : A meta-analysis
  • 2019
  • In: Journal of Systems and Software. - : Elsevier Inc. - 0164-1212 .- 1873-1228. ; 156, s. 65-83
  • Journal article (peer-reviewed)abstract
    • Context: According to various reports, many software engineering (SE) graduates often face difficulties when beginning their careers, which is mainly due to misalignment of the skills learned in university education with what is needed in the software industry. Objective: Our objective is to perform a meta-analysis to aggregate the results of the studies published in this area to provide a consolidated view on how to align SE education with industry needs, to identify the most important skills and also existing knowledge gaps. Method: To synthesize the body of knowledge, we performed a systematic literature review (SLR), in which we systematically selected a pool of 35 studies and then conducted a meta-analysis using data extracted from those studies. Results: Via a meta-analysis and using data from 13 countries and over 4,000 data points, highlights of the SLR include: (1) software requirements, design, and testing are the most important skills; and (2) the greatest knowledge gaps are in configuration management, SE models and methods, SE process, design (and architecture), as well as in testing. Conclusion: This paper provides implications for both educators and hiring managers by listing the most important SE skills and the knowledge gaps in the industry. © 2019 Elsevier Inc.
  •  
24.
  • Garousi, Vahid, et al. (author)
  • Benefitting from the Grey Literature in Software Engineering Research
  • 2020
  • In: Contemporary Empirical Methods in Software Engineering. - Cham : Springer Nature. - 9783030324889 ; , s. 385-413
  • Book chapter (peer-reviewed)abstract
    • Researchers generally place the most trust in peer-reviewed, published information, such as journals and conference papers. By contrast, software engineering (SE) practitioners typically do not have the time, access, or expertise to review and benefit from such publications. As a result, practitioners are more likely to turn to other sources of information that they trust, e.g., trade magazines, online blog posts, survey results, or technical reports, collectively referred to as grey literature (GL). Furthermore, practitioners also share their ideas and experiences as GL, which can serve as a valuable data source for research. While GL itself is not a new topic in SE, using, benefitting, and synthesizing knowledge from the GL in SE is a contemporary topic in empirical SE research and we are seeing that researchers are increasingly benefitting from the knowledge available within GL. The goal of this chapter is to provide an overview of GL in SE, together with insights on how SE researchers can effectively use and benefit from the knowledge and evidence available in the vast amount of GL.
  •  
25.
  • Garousi, Vahid, et al. (author)
  • Characterizing industry-academia collaborations in software engineering : evidence from 101 projects
  • 2019
  • In: Empirical Software Engineering. - : Springer New York LLC. - 1382-3256 .- 1573-7616. ; 24:4, s. 2540-2602
  • Journal article (peer-reviewed)abstract
    • Research collaboration between industry and academia supports improvement and innovation in industry and helps ensure the industrial relevance of academic research. However, many researchers and practitioners in the community believe that the level of joint industry-academia collaboration (IAC) projects in Software Engineering (SE) research is relatively low, creating a barrier between research and practice. The goal of the empirical study reported in this paper is to explore and characterize the state of IAC with respect to industrial needs, developed solutions, impacts of the projects and also a set of challenges, patterns and anti-patterns identified by a recent Systematic Literature Review (SLR) study. To address the above goal, we conducted an opinion survey among researchers and practitioners with respect to their experience in IAC. Our dataset includes 101 data points from IAC projects conducted in 21 different countries. Our findings include: (1) the most popular topics of the IAC projects, in the dataset, are: software testing, quality, process, and project managements; (2) over 90% of IAC projects result in at least one publication; (3) almost 50% of IACs are initiated by industry, busting the myth that industry tends to avoid IACs; and (4) 61% of the IAC projects report having a positive impact on their industrial context, while 31% report no noticeable impacts or were “not sure”. To improve this situation, we present evidence-based recommendations to increase the success of IAC projects, such as the importance of testing pilot solutions before using them in industry. This study aims to contribute to the body of evidence in the area of IAC, and benefit researchers and practitioners. Using the data and evidence presented in this paper, they can conduct more successful IAC projects in SE by being aware of the challenges and how to overcome them, by applying best practices (patterns), and by preventing anti-patterns. © 2019, The Author(s).
  •  
26.
  • Garousi, Vahid, et al. (author)
  • Closing the Gap Between Software Engineering Education and Industrial Needs
  • 2020
  • In: IEEE Software. - : IEEE Computer Society. - 0740-7459 .- 1937-4194. ; 37:2, s. 68-77
  • Journal article (peer-reviewed)abstract
    • According to different reports, many recent software engineering graduates often face difficulties when beginning their professional careers, due to misalignment of the skills learnt in their university education with what is needed in industry. To address that need, many studies have been conducted to align software engineering education with industry needs. To synthesize that body of knowledge, we present in this paper a systematic literature review (SLR) which summarizes the findings of 33 studies in this area. By doing a meta-analysis of all those studies and using data from 12 countries and over 4,000 data points, this study will enable educators and hiring managers to adapt their education / hiring efforts to best prepare the software engineering workforce. IEEE
  •  
27.
  • Garousi, Vahid, et al. (author)
  • Exploring the industry's challenges in software testing : An empirical study
  • 2020
  • In: Journal of Software: Evolution and Process. - : Wiley. - 2047-7473 .- 2047-7481. ; 32:8
  • Journal article (peer-reviewed)abstract
    • Context: Software testing is an important and costly software engineering activity in the industry. Despite the efforts of the software testing research community in the last several decades, various studies show that many practitioners in the industry still report challenges in their software testing tasks. Objective: To shed light on industry's challenges in software testing, we characterize and synthesize the challenges reported by practitioners. Such concrete challenges can then be used for a variety of purposes, e.g., research collaborations between industry and academia. Method: Our empirical research method is an opinion survey. By designing an online survey, we solicited practitioners' opinions about their challenges in different testing activities. Our dataset includes data from 72 practitioners from eight different countries. Results: Our results show that test management and test automation are considered the most challenging among all testing activities by practitioners. Our results also include a set of 104 concrete challenges in software testing that may need further investigations by the research community. Conclusion: We conclude that the focal points of industrial work and academic research in software testing differ. Furthermore, the paper at hand provides valuable insights concerning practitioners' "pain" points and, thus, provides researchers with a source of important research topics of high practical relevance.
  •  
28.
  • Garousi, Vahid, et al. (author)
  • Guidelines for including grey literature and conducting multivocal literature reviews in software engineering
  • 2019
  • In: Information and Software Technology. - : Elsevier B.V. - 0950-5849 .- 1873-6025. ; 106, s. 101-121
  • Journal article (peer-reviewed)abstract
    • Context: A Multivocal Literature Review (MLR) is a form of a Systematic Literature Review (SLR) which includes the grey literature (e.g., blog posts, videos and white papers) in addition to the published (formal) literature (e.g., journal and conference papers). MLRs are useful for both researchers and practitioners since they provide summaries of both the state-of-the-art and -practice in a given area. MLRs are popular in other fields and have recently started to appear in software engineering (SE). As more MLR studies are conducted and reported, it is important to have a set of guidelines to ensure high quality of MLR processes and their results. Objective: There are several guidelines to conduct SLR studies in SE. However, several phases of MLRs differ from those of traditional SLRs, for instance with respect to the search process and source quality assessment. Therefore, SLR guidelines are only partially useful for conducting MLR studies. Our goal in this paper is to present guidelines on how to conduct MLR studies in SE. Method: To develop the MLR guidelines, we benefit from several inputs: (1) existing SLR guidelines in SE, (2) a literature survey of MLR guidelines and experience papers in other fields, and (3) our own experiences in conducting several MLRs in SE. We took the popular SLR guidelines of Kitchenham and Charters as the baseline and extended/adopted them to conduct MLR studies in SE. All derived guidelines are discussed in the context of an already-published MLR in SE as the running example. Results: The resulting guidelines cover all phases of conducting and reporting MLRs in SE, from the planning phase, through conducting the review, to the final reporting of the review. In particular, we believe that incorporating and adopting a vast set of experience-based recommendations from MLR guidelines and experience papers in other fields has enabled us to propose a set of guidelines with solid foundations. Conclusion: Having been developed on the basis of several types of experience and evidence, the provided MLR guidelines will support researchers to effectively and efficiently conduct new MLRs in any area of SE. The authors recommend that researchers utilize these guidelines in their MLR studies and then share their lessons learned and experiences. © 2018
  •  
29.
  • Garousi, Vahid, et al. (author)
  • Introduction to the Special Issue on: Grey Literature and Multivocal Literature Reviews (MLRs) in software engineering
  • 2022
  • In: Information and Software Technology. - : Elsevier B.V. - 0950-5849 .- 1873-6025. ; 141
  • Journal article (peer-reviewed)abstract
    • In parallel to academic (peer-reviewed) literature (e.g., journal and conference papers), an enormous extent of grey literature (GL) has accumulated since the inception of software engineering (SE). GL is often defined as “literature that is not formally published in sources such as books or journal articles”, e.g., in the form of trade magazines, online blog-posts, technical reports, and online videos such as tutorial and presentation videos. GL is typically produced by SE practitioners. We have observed that researchers are increasingly using and benefitting from the knowledge available within GL. Related to the notion of GL is the notion of Multivocal Literature Reviews (MLRs) in SE, i.e., an MLR is a form of a Systematic Literature Review (SLR) which includes knowledge and/or evidence from the GL in addition to the peer-reviewed literature. MLRs are useful for both researchers and practitioners because they provide summaries of both the state-of-the-art and -practice in a given area. MLRs are popular in other fields and have started to appear in the SE community. A Special Issue (SI) focusing on GL and MLRs in SE is therefore timely. From the pool of 13 submitted papers, and after following a rigorous peer review process, seven papers were accepted for this SI. In this introduction we provide a brief overview of GL and MLRs in SE, and then a brief summary of the seven papers published in this SI. © 2021
  •  
30.
  • Garousi, Vahid, et al. (author)
  • Mining user reviews of COVID contact-tracing apps : An exploratory analysis of nine European apps
  • 2022
  • In: Journal of Systems and Software. - : Elsevier Inc. - 0164-1212 .- 1873-1228. ; 184
  • Journal article (peer-reviewed)abstract
    • Context: More than 78 countries have developed COVID contact-tracing apps to limit the spread of coronavirus. However, many experts and scientists cast doubt on the effectiveness of those apps. For each app, a large number of reviews have been entered by end-users in app stores. Objective: Our goal is to gain insights into the user reviews of those apps, and to find out the main problems that users have reported. Our focus is to assess the “software in society” aspects of the apps, based on user reviews. Method: We selected nine European national apps for our analysis and used a commercial app-review analytics tool to extract and mine the user reviews. For all the apps combined, our dataset includes 39,425 user reviews. Results: Results show that users are generally dissatisfied with the nine apps under study, except the Scottish (“Protect Scotland”) app. Some of the major issues that users have complained about are high battery drainage and doubts on whether apps are really working. Conclusion: Our results show that more work is needed by the stakeholders behind the apps (e.g., app developers, decision-makers, public health experts) to improve the public adoption, software quality and public perception of these apps. © 2021 Elsevier Inc.
  •  
31.
  • Garousi, Vahid, et al. (author)
  • NLP-assisted software testing : a systematic mapping of the literature
  • 2020
  • In: Information and Software Technology. - : Elsevier B.V. - 0950-5849 .- 1873-6025. ; 126
  • Research review (peer-reviewed)abstract
    • Context: To reduce manual effort of extracting test cases from natural-language requirements, many approaches based on Natural Language Processing (NLP) have been proposed in the literature. Given the large amount of approaches in this area, and since many practitioners are eager to utilize such techniques, it is important to synthesize and provide an overview of the state-of-the-art in this area. Objective: Our objective is to summarize the state-of-the-art in NLP-assisted software testing which could benefit practitioners to potentially utilize those NLP-based techniques. Moreover, this can benefit researchers in providing an overview of the research landscape. Method: To address the above need, we conducted a survey in the form of a systematic literature mapping (classification). After compiling an initial pool of 95 papers, we conducted a systematic voting, and our final pool included 67 technical papers. Results: This review paper provides an overview of the contribution types presented in the papers, types of NLP approaches used to assist software testing, types of required input requirements, and a review of tool support in this area. Some key results we have detected are: (1) only four of the 38 tools (11%) presented in the papers are available for download; (2) a larger ratio of the papers (30 of 67) provided a shallow exposure to the NLP aspects (almost no details). Conclusion: This paper would benefit both practitioners and researchers by serving as an “index” to the body of knowledge in this area. The results could help practitioners utilizing the existing NLP-based techniques; this in turn reduces the cost of test-case design and decreases the amount of human resources spent on test activities. After sharing this review with some of our industrial collaborators, initial insights show that this review can indeed be useful and beneficial to practitioners. © 2020 Elsevier B.V.
  •  
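The mapping study above surveys NLP-based approaches for deriving test cases from natural-language requirements. As a deliberately simple, hypothetical sketch of the underlying idea, and not any specific surveyed approach, the snippet below applies a single regular-expression rule to invented requirement sentences:

    # Illustrative sketch only: derive skeleton test cases from requirement sentences
    # written in a "when <trigger>, the system shall <response>" style. The pattern
    # and requirement texts are assumptions made for the sketch.
    import re

    requirements = [
        "When the user submits an empty form, the system shall display a validation error.",
        "When the payment is confirmed, the system shall send an order confirmation email.",
        "The system shall log every failed login attempt.",
    ]

    PATTERN = re.compile(r"when (?P<trigger>.+?), the system shall (?P<response>.+?)\.", re.IGNORECASE)

    for req in requirements:
        match = PATTERN.search(req)
        if match:
            print("Test case:")
            print("  Given/When:", match.group("trigger").strip())
            print("  Then: verify that the system will", match.group("response").strip())
        else:
            print("No trigger/response pattern found; needs manual test design:", req)

Real approaches in the surveyed literature are considerably richer (parsing, semantic models, machine learning), but the sketch shows how structured test skeletons can be pulled out of requirement text to reduce manual test-design effort.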
32.
  • Garousi, Vahid, et al. (author)
  • Testing embedded software : A survey of the literature
  • 2018
  • In: Information and Software Technology. - : Elsevier. - 0950-5849 .- 1873-6025. ; 104, s. 14-45
  • Journal article (peer-reviewed)abstract
    • Context: Embedded systems have overwhelming penetration around the world. Innovations are increasingly triggered by software embedded in automotive, transportation, medical-equipment, communication, energy, and many other types of systems. To test embedded software in an effective and efficient manner, a large number of test techniques, approaches, tools and frameworks have been proposed by both practitioners and researchers in the last several decades. Objective: However, reviewing and getting an overview of the entire state-of-the-art and the practice in this area is challenging for a practitioner or a (new) researcher. Unfortunately, as a result, we often see that many companies reinvent the wheel (by designing a test approach new to them, but existing in the domain) due to not having an adequate overview of what already exists in this area. Method: To address the above need, we conducted and report in this paper a systematic literature review (SLR) in the form of a systematic literature mapping (SLM) in this area. After compiling an initial pool of 588 papers, a systematic voting about inclusion/exclusion of the papers was conducted among the authors, and our final pool included 312 technical papers. Results: Among the various aspects that we aim at covering, our review covers the types of testing topics studied, types of testing activity, types of test artifacts generated (e.g., test inputs or test code), and the types of industries on which studies have focused, e.g., automotive and home appliances. Furthermore, we assess the benefits of this review by asking several active test engineers in the Turkish embedded software industry to review its findings and provide feedback as to how this review has benefitted them. Conclusion: The results of this review paper have already benefitted several of our industry partners in choosing the right test techniques/approaches for their embedded software testing challenges. We believe that it will also be useful for the large world-wide community of software engineers and testers in the embedded software industry, by serving as an "index" to the vast body of knowledge in this important area. Our results will also benefit researchers in observing the latest trends in this area and in identifying the topics which need further investigation.
  •  
33.
  • Garousi, Vahid, et al. (author)
  • What users think of COVID-19 contact-tracing apps : An analysis of eight European apps
  • 2022
  • In: IEEE Software. - : IEEE Computer Society. - 0740-7459 .- 1937-4194. ; 39:3, s. 22-30
  • Journal article (peer-reviewed)abstract
    • More than 64 countries and regions have, so far, developed COVID-19 contact-tracing apps to limit the spread of coronavirus. However, many experts and scientists cast doubt on the effectiveness of those apps. For each app, between a few hundred to a few thousand reviews have been entered by end-users in app stores. In this paper, we mine insights from the user reviews of contact-tracing apps of eight European countries to find out what end users think of COVID contact-tracing apps and the main problems that users have reported. IEEE
  •  
34.
  • Garousi, Vahid, et al. (author)
  • What We Know About Smells in Software Test Code
  • 2019
  • In: IEEE Software. - : IEEE Computer Society. - 0740-7459 .- 1937-4194. ; 36:3, s. 61-73
  • Journal article (peer-reviewed)abstract
    • Test smells are poorly designed tests and negatively affect the quality of test suites and production code. We present the largest catalog of test smells, along with a summary of guidelines, techniques, and tools used to deal with test smells.
  •  
35.
  • Grossmann, Juergen, et al. (author)
  • A Taxonomy to Assess and Tailor Risk-Based Testing in Recent Testing Standards
  • 2020
  • In: IEEE Software. - : IEEE Computer Society. - 0740-7459 .- 1937-4194. ; 37:1, s. 40-49
  • Journal article (peer-reviewed)abstract
    • This article provides a taxonomy for risk-based testing that serves as a tool to define, tailor, or assess such approaches. In this setting, the taxonomy is used to systematically identify deviations between the requirements from public standards and the individual testing approaches.
  •  
36.
  • Huber, Stefan, et al. (author)
  • A comparative study on the energy consumption of Progressive Web Apps
  • 2022
  • In: Information Systems. - : Elsevier Ltd. - 0306-4379 .- 1873-6076. ; 108
  • Journal article (peer-reviewed)abstract
    • Progressive Web Apps (PWAs) are a promising approach for developing mobile apps, especially when developing apps for multiple mobile systems. As mobile devices are limited with respect to battery capacity, developers should keep the energy footprint of a mobile app as low as possible. The goal of this study is to analyze the difference in energy consumption of PWAs compared to other mobile development approaches. As mobile apps are primarily interactive in nature, we focus on UI rendering and interaction scenarios. For this, we implemented five versions of the same app with different development approaches and examined their energy footprint on two Android devices with four execution scenarios. Additionally, we extended our research by analyzing multiple real-world mobile apps to include a more practical perspective. Regarding execution environments, we also compared the energy consumption of PWAs executed in different web-browsers. The results based on sample and real-world apps show that the used development approach influences the energy footprint of a mobile app. Native development shows the lowest energy consumption. PWAs, albeit having a higher energy consumption than native apps, are a viable alternative to other mobile cross-platform development (MCPD) approaches. The experiments could not assert an inherent technological disadvantage of PWAs in contrast to other MCPD approaches when considering UI energy consumption. Moreover, the web-browser engine used to execute the PWA has a significant influence on the energy footprint of the app. © 2022 Elsevier Ltd
  •  
37.
  • Lenz, Luca, et al. (author)
  • Explainable Priority Assessment of Software-Defects using Categorical Features at SAP HANA
  • 2020
  • In: ACM International Conference Proceeding Series. - New York, NY, USA : Association for Computing Machinery. - 9781450377317 ; , s. 366-367
  • Conference paper (peer-reviewed)abstract
    • We want to automate priority assessment of software defects. To do so we provide a tool which uses an explainability-driven framework and classical machine learning algorithms to keep the decisions transparent. Differing from other approaches we only use objective and categorical fields from the bug tracking system as features. This makes our approach lightweight and extremely fast. We perform binary classification with priority labels corresponding to deadlines. Additionally, we evaluate the tool on real data to ensure good performance in the practical use case. © 2020 ACM.
  •  
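The tool described in the record above performs binary priority classification of defects using only objective, categorical fields from the bug-tracking system and classical, transparent machine learning. A minimal sketch of that general setup follows; the field names, toy data, and the choice of one-hot encoding with logistic regression are assumptions for illustration, not SAP's implementation:

    # Illustrative sketch only: binary "urgent vs. not urgent" classification of
    # defect reports from categorical bug-tracker fields with a classical model.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder

    defects = pd.DataFrame({
        "component": ["storage", "sql", "sql", "ui", "storage", "ui"],
        "severity":  ["high", "medium", "high", "low", "high", "low"],
        "found_in":  ["test", "customer", "customer", "test", "customer", "test"],
        "urgent":    [1, 1, 1, 0, 1, 0],   # 1 = should be fixed before the next deadline
    })

    features = ["component", "severity", "found_in"]
    model = Pipeline([
        ("encode", ColumnTransformer([("onehot", OneHotEncoder(handle_unknown="ignore"), features)])),
        ("classify", LogisticRegression(max_iter=1000)),
    ])
    model.fit(defects[features], defects["urgent"])

    # The learned coefficients stay inspectable, which keeps each decision transparent.
    new_defect = pd.DataFrame([{"component": "sql", "severity": "high", "found_in": "customer"}])
    print("predicted urgent:", bool(model.predict(new_defect)[0]))

Keeping the model linear and the features one-hot encoded keeps each prediction attributable to individual bug-tracker fields, which matches the transparency goal the abstract emphasizes.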
38.
  • Molléri, Jefferson Seide, et al. (author)
  • Aligning the Views of Research Quality in Empirical Software Engineering
  • Other publication (other academic/artistic)abstract
    • Context: Research quality is intended to assess the design and reporting of studies. It comprises a series of concepts such as methodological rigor, practical relevance, and conformance to ethical standards. Depending on the perspective, different views of importance are given to the conceptual dimensions of research quality. Objective: We intend to assess the level of alignment between researchers with regard to a conceptual model of research quality. This includes aligning the definition of research quality and reasoning on the relative importance of quality characteristics. Method: We conducted a mixed methods approach comprising an internal case study and a complementary focus group. We carried out a hierarchical voting prioritization based on the conceptual model to collect relative values for importance. In the focus group, we also moderated discussions with experts to address potential misalignment. Results: The alignment at the research group level was higher compared to that at the community level. Moreover, the interdisciplinary conceptual quality model was seen to express the quality of research fairly, but presented limitations regarding its structure and components' description, which resulted in an updated model. Conclusion: The interdisciplinary model used was suitable for the software engineering context. The process used for reflecting on the alignment of quality with respect to definitions and priorities worked well.
  •  
39.
  • Molléri, Jefferson Seide, et al. (author)
  • Determining a core view of research quality in empirical software engineering
  • 2023
  • In: Computer Standards & Interfaces. - : Elsevier. - 0920-5489 .- 1872-7018. ; 84
  • Journal article (peer-reviewed)abstract
    • Context: Research quality is intended to appraise the design and reporting of studies. It comprises a set of standards such as methodological rigor, practical relevance, and conformance to ethical standards. Depending on the perspective, different views of importance are given to the standards for research quality. Objective: To investigate the suitability of a conceptual model of research quality to Software Engineering (SE), from the perspective of researchers engaged in Empirical Software Engineering (ESE) research, in order to understand the core value of research quality. Method: We conducted a mixed-methods approach with two distinct group perspectives: (i) a research group; and (ii) the empirical SE research community. Our data collection approach comprised a questionnaire survey and a complementary focus group. We carried out a hierarchical voting prioritization to collect relative values for importance of standards for research quality. Results: In the context of this research, ‘internally valid’, ‘relevant research idea’, and ‘applicable results’ are perceived as the core standards for research quality in empirical SE. The alignment at the research group level was higher compared to that at the community level. Conclusion: The conceptual model was seen to express fairly the standards for research quality in the SE context. It presented limitations regarding its structure and components’ description, which resulted in an updated model. © 2022
  •  
40.
  • Molléri, Jefferson Seide, et al. (author)
  • Reasoning about Research Quality Alignment in Software Engineering
  • Other publication (other academic/artistic)abstract
    • Context: Research quality is intended to assess the design and reporting of studies. It comprises a series of concepts such as methodological rigor, practical relevance, and conformance to ethical standards. Depending on the perspective, different views of importance are given to the conceptual dimensions of research quality. Objective: We aim to better understand what constitutes research quality from the perspective of the empirical software engineering community. In particular, we intend to assess the level of alignment between researchers with regard to a conceptual model of research quality. Method: We conducted a mixed methods approach comprising an internal case study and a complementary focus group. We carried out a hierarchical voting prioritization based on the conceptual model to collect relative values for importance. In the focus group, we also moderated discussions with experts to address potential misalignment. Results: We provide levels of alignment with regard to the importance of quality dimensions in the view of the participants. Moreover, the conceptual model fairly expresses the quality of research but has limitations with regard to the structure and description of its components. Conclusion: Based on the results, we revised the conceptual model and provided an updated version adjusted to the context of empirical software engineering research. We also discussed how to assess quality alignment in research using our approach, and how to use the revised model of quality to characterize an assessment instrument.
  •  
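The three Molléri et al. entries above all collect relative importance values for quality standards via hierarchical voting prioritization. The sketch below illustrates one common variant of that idea, hierarchical cumulative voting over a two-level quality model; the dimensions, standard names, and point allocations are invented for illustration and are not taken from the studies.

```python
# Illustrative sketch of hierarchical cumulative voting, one common way to
# obtain relative importance values over a two-level quality model.
# The dimensions, sub-standards, and point allocations below are invented
# for illustration only; they are not taken from the Molléri et al. studies.

def normalize(points):
    """Turn raw points into weights that sum to 1."""
    total = sum(points.values())
    return {name: value / total for name, value in points.items()}

def global_weights(top_level_points, sub_level_points):
    """Combine top-level and sub-level votes into global weights."""
    top = normalize(top_level_points)
    weights = {}
    for dimension, sub_points in sub_level_points.items():
        for standard, local in normalize(sub_points).items():
            weights[standard] = top[dimension] * local
    return weights

# One participant's (hypothetical) allocation of 100 points per level.
top_votes = {"rigor": 50, "relevance": 30, "ethics": 20}
sub_votes = {
    "rigor": {"internally valid": 60, "reliable": 40},
    "relevance": {"relevant research idea": 70, "applicable results": 30},
    "ethics": {"conforms to ethical standards": 100},
}

for standard, weight in sorted(global_weights(top_votes, sub_votes).items(),
                               key=lambda item: -item[1]):
    print(f"{standard}: {weight:.2f}")
```

In practice, such per-participant weights would be averaged across respondents before comparing alignment between a research group and the wider community.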
41.
  • Pekaric, Irdin, et al. (author)
  • A taxonomy of attack mechanisms in the automotive domain
  • 2021
  • In: Computer Standards & Interfaces. - : Elsevier B.V.. - 0920-5489 .- 1872-7018. ; 78
  • Journal article (peer-reviewed)abstract
    • In the last decade, the automotive industry incorporated multiple electronic components into vehicles introducing various capabilities for adversaries to generate diverse types of attacks. In comparison to older types of vehicles, where the biggest concern was physical security, modern vehicles might be targeted remotely. As a result, multiple attack vectors aiming to disrupt different vehicle components emerged. Research and practice lack a comprehensive attack taxonomy for the automotive domain. In this regard, we conduct a systematic literature study, wherein 48 different attacks were identified and classified according to the proposed taxonomy of attack mechanisms. The taxonomy can be utilized by penetration testers in the automotive domain as well as to develop more sophisticated attacks by chaining multiple attack vectors together. In addition, we classify the identified attack vectors based on the following five dimensions: (1) AUTOSAR layers, (2) attack domains, (3) information security principles, (4) attack surfaces, and (5) attacker profile. The results indicate that the most applied attack vectors identified in literature are GPS spoofing, message injection, node impersonation, sybil, and wormhole attack, which are mostly applied to application and services layers of the AUTOSAR architecture. © 2021 The Author(s)
  •  
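The Pekaric et al. taxonomy above classifies each attack vector along five dimensions. A minimal sketch of such a classification record is shown below; the AUTOSAR layer for GPS spoofing reflects the abstract, while the remaining dimension values are hypothetical placeholders.

```python
# Minimal sketch of a classification record over the five dimensions used by
# Pekaric et al.: AUTOSAR layer, attack domain, information security
# principle, attack surface, and attacker profile. The GPS-spoofing example
# uses the application/services layer mentioned in the abstract; the other
# dimension values are hypothetical placeholders, not results from the study.
from dataclasses import dataclass

@dataclass
class AttackClassification:
    attack: str
    autosar_layer: str
    attack_domain: str
    security_principle: str
    attack_surface: str
    attacker_profile: str

gps_spoofing = AttackClassification(
    attack="GPS spoofing",
    autosar_layer="application and services",  # from the abstract
    attack_domain="wireless communication",    # placeholder
    security_principle="integrity",            # placeholder
    attack_surface="GNSS receiver",            # placeholder
    attacker_profile="remote attacker",        # placeholder
)

print(gps_spoofing)
```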
42.
  • Rainer, Austen W., et al. (author)
  • Retrieving and mining professional experience of software practice from grey literature : An exploratory review
  • 2020
  • In: IET Software. - : John Wiley & Sons. - 1751-8806 .- 1751-8814. ; 14:6, s. 665-676
  • Research review (peer-reviewed)abstract
    • Retrieving and mining practitioners' self-reports of their professional experience of software practice could provide valuable evidence for research. The authors are, however, unaware of any existing reviews of research conducted in this area. The authors reviewed and classified previous research, and identified insights into the challenges research confronts when retrieving and mining practitioners' self-reports of their experience of software practice. They conducted an exploratory review to identify and classify 42 studies. They analysed a selection of those studies for insights on challenges to mining professional experience. They identified only one directly relevant study. Even then this study concerns the software professional's emotional experiences rather than the professional's reporting of behaviour and events occurring during software practice. They discussed the challenges concerning: the prevalence of professional experience; definitions, models and theories; the sparseness of data; units of discourse analysis; annotator agreement; evaluation of the performance of algorithms; and the lack of replications. No directly relevant prior research appears to have been conducted in this area. They discussed the value of reporting negative results in secondary studies. There are a range of research opportunities but also considerable challenges. They formulated a set of guiding questions for further research in this area. © 2020 Institution of Engineering and Technology. All rights reserved.
  •  
43.
  • Santoso, Ario, et al. (author)
  • Specification-driven predictive business process monitoring
  • 2020
  • In: Software and Systems Modeling. - : Springer Verlag. - 1619-1366 .- 1619-1374. ; 19:6, s. 1307-1343
  • Journal article (peer-reviewed)abstract
    • Predictive analysis in business process monitoring aims at forecasting future information about a running business process. The prediction is typically made based on a model extracted from historical process execution logs (event logs). In practice, different business domains might require different kinds of predictions. Hence, it is important to have a means for properly specifying the desired prediction tasks, and a mechanism to deal with these various tasks. Although there have been many studies in this area, they mostly focus on a specific prediction task. This work introduces a language for specifying the desired prediction tasks that allows various kinds of prediction tasks to be expressed. It also presents a mechanism for automatically creating the corresponding prediction model based on the given specification. Unlike previous studies, which focus on a particular prediction task, we present an approach that deals with various prediction tasks based on the given specification. We also provide an implementation of the approach, which is used to conduct experiments on real-life event logs. © 2019, The Author(s).
  •  
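The Santoso et al. entry above describes turning a prediction specification and an event log into a prediction model. A common baseline for that setting, sketched below, encodes trace prefixes from the log and trains a classifier to predict a specified outcome; the toy log, the bag-of-activities encoding, and the use of scikit-learn are assumptions for illustration and are not the specification language or mechanism proposed in the paper.

```python
# Illustrative baseline for predictive process monitoring: encode trace
# prefixes from an event log and train a classifier to predict a specified
# outcome. The toy log and encoding are invented for illustration; they are
# not the mechanism from the Santoso et al. paper.
from sklearn.ensemble import RandomForestClassifier

# Toy event log: each trace is a list of activities plus a label that a
# prediction specification might ask for (e.g. "was the case rejected?").
event_log = [
    (["register", "check", "approve", "ship"], 0),
    (["register", "check", "reject"], 1),
    (["register", "check", "recheck", "reject"], 1),
    (["register", "approve", "ship"], 0),
]

activities = sorted({a for trace, _ in event_log for a in trace})

def encode_prefix(prefix):
    """Bag-of-activities encoding of a trace prefix."""
    return [prefix.count(a) for a in activities]

# Build training data from all prefixes of all traces.
X, y = [], []
for trace, label in event_log:
    for k in range(1, len(trace) + 1):
        X.append(encode_prefix(trace[:k]))
        y.append(label)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Predict the specified outcome for a running (incomplete) trace.
running_trace = ["register", "check", "recheck"]
print(model.predict([encode_prefix(running_trace)]))
```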
44.
  • Sauerwein, Clemens, et al. (author)
  • An Analysis and Classification of Public Information Security Data Sources used in Research and Practice
  • 2019
  • In: Computers & security (Print). - : Elsevier. - 0167-4048 .- 1872-6208. ; 82, s. 140-155
  • Journal article (peer-reviewed)abstract
    • In order to counteract today’s sophisticated and increasing number of cyber threats, the timely acquisition of information regarding vulnerabilities, attacks, threats, countermeasures and risks is crucial. Therefore, employees tasked with information security risk management processes rely on a variety of information security data sources, ranging from inter-organizational threat intelligence sharing platforms to public information security data sources, such as mailing lists or expert blogs. However, research and practice lack a comprehensive overview of these public information security data sources, their characteristics and dependencies. Moreover, comprehensive knowledge about these sources would be beneficial for systematically using and integrating them into information security processes. In this paper, a triangulation study is conducted to identify and analyze public information security data sources. Furthermore, a taxonomy is introduced to classify and compare these data sources based on the following six dimensions: (1) Type of Information, (2) Integrability, (3) Timeliness, (4) Originality, (5) Type of Source, and (6) Trustworthiness. In total, 68 public information security data sources were identified and classified. The investigations showed that research and practice rely on a large variety of heterogeneous information security data sources, which makes it more difficult to integrate and use them for information security and risk management processes.
  •  
45.
  • Schlick, Rupert, et al. (author)
  • A proposal of an example and experiments repository to foster industrial adoption of formal methods
  • 2018
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Cham : Springer Verlag. - 9783030034269 ; , s. 249-272
  • Conference paper (peer-reviewed)abstract
    • Formal methods (in a broad sense) have been around almost since the beginning of computer science. Nonetheless, there is a perception in the formal methods community that take-up by industry is low considering the potential benefits. We take a look at possible reasons and give candidate explanations for this effect. To address the issue, we propose a repository of industry-relevant example problems with an accompanying open data storage for experiment results in order to document, disseminate and compare exemplary solutions from formal model based methods. This would allow potential users from industry to better understand the available solutions and to more easily select and adopt a formal method that fits their needs. At the same time, it would foster the adoption of open data and good scientific practice in this research field. © Springer Nature Switzerland AG 2018.
  •  
46.
  • Sillaber, Christian, et al. (author)
  • Laying the foundation for smart contract development : an integrated engineering process model
  • 2021
  • In: Information Systems and E-Business Management. - : Springer. - 1617-9846 .- 1617-9854. ; 19:3, s. 863-882
  • Journal article (peer-reviewed)abstract
    • Smart contracts are seen as the major building blocks for future autonomous blockchain- and Distributed Ledger Technology (DLT)-based applications. Engineering such contracts for trustless, append-only, and decentralized digital ledgers allows mutually distrustful parties to transform legal requirements into immutable and formalized rules. Previous experience shows this to be a challenging task due to demanding socio-technical ecosystems and the specificities of decentralized ledger technology. In this paper, we therefore develop an integrated process model for engineering DLT-based smart contracts that accounts for the specificities of DLT. This model was iteratively refined with the support of industry experts. The model explicitly accounts for the immutability of the trustless, append-only, and decentralized DLT ecosystem, and thereby overcomes certain limitations of traditional software engineering process models. More specifically, it consists of five successive and closely intertwined phases: conceptualization, implementation, approval, execution, and finalization. For each phase, the respective activities, roles, and artifacts are identified and discussed in detail. Applying such a model when engineering smart contracts will help software engineers and developers to better understand and streamline the engineering process of DLTs in general and blockchain in particular. Furthermore, this model serves as a generic framework which will support application development in all fields in which DLT can be applied. © 2020, The Author(s).
  •  
47.
  • Steidl, Monika, et al. (author)
  • The pipeline for the continuous development of artificial intelligence models-Current state of research and practice
  • 2023
  • In: Journal of Systems and Software. - : Elsevier. - 0164-1212 .- 1873-1228. ; 199
  • Research review (peer-reviewed)abstract
    • Companies struggle to continuously develop and deploy Artificial Intelligence (AI) models to complex production systems, due to the characteristics of AI, while assuring quality. To ease the development process, continuous pipelines for AI have become an active research area, where a consolidated and in-depth analysis of the terminology, triggers, tasks, and challenges is required. This paper includes a Multivocal Literature Review (MLR) in which we consolidated 151 relevant formal and informal sources. In addition, nine semi-structured interviews with participants from academia and industry verified and extended the obtained information. Based on these sources, this paper provides and compares terminologies for Development and Operations (DevOps) and Continuous Integration (CI)/Continuous Delivery (CD) for AI, Machine Learning Operations (MLOps), (end-to-end) lifecycle management, and Continuous Delivery for Machine Learning (CD4ML). Furthermore, the paper provides an aggregated list of potential triggers for reiterating the pipeline, such as alert systems or schedules. In addition, this work uses a taxonomy creation strategy to present a consolidated pipeline comprising tasks for the continuous development of AI. This pipeline consists of four stages: Data Handling, Model Learning, Software Development and System Operations. Moreover, we map challenges regarding pipeline implementation, adaptation, and usage for the continuous development of AI to these four stages. © 2023 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
  •  
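The Steidl et al. review above consolidates a four-stage pipeline (Data Handling, Model Learning, Software Development, System Operations) that is re-run on triggers such as schedules or alerts. The sketch below wires those stage names into a minimal re-triggerable pipeline; the stage bodies and the drift-alert trigger are hypothetical placeholders, not tooling from the paper.

```python
# Minimal sketch of a re-triggerable pipeline using the four stages
# consolidated by Steidl et al.: Data Handling, Model Learning, Software
# Development, and System Operations. The stage bodies and triggers below
# are hypothetical placeholders, not tooling from the paper.

def data_handling():
    print("collect, validate, and version the training data")

def model_learning():
    print("train and evaluate a candidate model")

def software_development():
    print("package the model and run integration tests")

def system_operations():
    print("deploy, monitor, and log the model in production")

PIPELINE = [data_handling, model_learning, software_development, system_operations]

def run_pipeline(trigger):
    """Run all stages in order whenever a trigger fires."""
    print(f"pipeline triggered by: {trigger}")
    for stage in PIPELINE:
        stage()

# Example triggers mentioned in the review: a schedule or an alert system.
run_pipeline("nightly schedule")
run_pipeline("data-drift alert")
```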
48.
  • Sulaman, Sardar Muhammad, et al. (author)
  • Comparison of the FMEA and STPA safety analysis methods : a case study
  • 2019
  • In: Software quality journal. - : Springer. - 0963-9314 .- 1573-1367. ; 27:1, s. 349-387
  • Journal article (peer-reviewed)abstract
    • As our society becomes more and more dependent on IT systems, failures of these systems can harm more and more people and organizations. Diligently performing risk and hazard analysis helps to minimize the potential harm of IT system failures on society and increases the probability of their undisturbed operation. Risk and hazard analysis is an important activity for the development and operation of critical software-intensive systems, but their increased complexity and size put additional requirements on the effectiveness of risk and hazard analysis methods. This paper presents a qualitative comparison of two hazard analysis methods, failure mode and effect analysis (FMEA) and system-theoretic process analysis (STPA), using case study research methodology. Both methods have been applied to the same forward collision avoidance system to compare their effectiveness and to investigate the main differences between them. Furthermore, this study also evaluates the analysis process of both methods using qualitative criteria derived from the technology acceptance model (TAM). The results of the FMEA analysis were compared to the results of the STPA analysis, which were presented in a previous study; both analyses were conducted on the same forward collision avoidance system. The comparison shows that FMEA and STPA deliver similar analysis results.
  •  
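The Sulaman et al. comparison above treats FMEA qualitatively; as background, FMEA analyses commonly rank failure modes with a risk priority number (RPN = severity × occurrence × detection). The failure modes and ratings in the sketch below are invented for illustration and are not results from the forward collision avoidance case study.

```python
# Background sketch: FMEA commonly ranks failure modes by a risk priority
# number, RPN = severity * occurrence * detection (each rated 1-10).
# The failure modes and ratings below are invented for illustration; they
# are not results from the forward collision avoidance case study.

failure_modes = [
    # (failure mode, severity, occurrence, detection)
    ("radar misses obstacle", 9, 3, 6),
    ("brake command delayed", 8, 2, 4),
    ("false positive braking", 6, 4, 3),
]

ranked = sorted(
    ((name, s * o * d) for name, s, o, d in failure_modes),
    key=lambda item: -item[1],
)

for name, rpn in ranked:
    print(f"{name}: RPN = {rpn}")
```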
49.
  • Tuzun, Eray, et al. (author)
  • Ground-Truth Deficiencies in Software Engineering : When Codifying the Past Can Be Counterproductive
  • 2022
  • In: IEEE Software. - : IEEE Computer Society. - 0740-7459 .- 1937-4194. ; 39:3, s. 85-95
  • Journal article (peer-reviewed)abstract
    • In software engineering, the objective function of human decision makers might be influenced by many factors. Relying on historical data as the ground truth may give rise to systems that automate software engineering decisions by mimicking past suboptimal behavior. We describe the problem and offer some strategies. ©IEEE.
  •  
50.
  • Wagner, Stefan, PhD, 1982-, et al. (author)
  • Challenges in Survey Research
  • 2020
  • In: Contemporary Empirical Methods in Software Engineering. - Cham : Springer Nature. - 9783030324889 ; , s. 93-125
  • Book chapter (peer-reviewed)abstract
    • While being an important and often used research method, survey research has been less often discussed on a methodological level in empirical software engineering than other types of research. This chapter compiles a set of important and challenging issues in survey research based on experiences with several large-scale international surveys. The chapter covers theory building, sampling, invitation and follow-up, statistical as well as qualitative analysis of survey data and the usage of psychometrics in software engineering surveys.
  •  