SwePub
Search the SwePub database

  Extended search

Boolean operators must be entered with CAPITAL LETTERS

Result list for search "hsv:(NATURAL SCIENCES) hsv:(Computer and Information Sciences) hsv:(Software Engineering) srt2:(2020-2024)"


  • Result 1-25 of 1447
1.
  • Lidstrom, D, et al. (author)
  • Agent based match racing simulations : Starting practice
  • 2022
  • In: SNAME 24th Chesapeake Sailing Yacht Symposium, CSYS 2022. - : Society of Naval Architects and Marine Engineers.
  • Conference paper (peer-reviewed), abstract:
    • Match racing starts in sailing are strategically complex and of great importance for the outcome of a race. With the return of the America's Cup to upwind starts and the World Match Racing Tour attracting young and development sailors, the tactical skills necessary to master the starts could be trained and learned by means of computer simulations that assess a large range of approaches to the starting box. This project used game theory to model the start of a match race, intending to develop and study strategies using Monte-Carlo tree search to estimate the utility of a player's potential moves throughout a race. Strategies that used the estimated utility in different ways were defined and tested against each other by means of simulation and with expert advice on match racing start strategy from a sailor's perspective. The results show that the strategies that put greater emphasis on what the opponent might do perform better than those that did not. It is concluded that Monte-Carlo tree search can provide a basis for decision making in match races and that it has potential for further use.
  •  
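The abstract above leans on Monte-Carlo tree search (MCTS). As a rough flavour of the idea, and not the authors' simulator, here is a minimal flat Monte-Carlo move evaluation, the simplest relative of MCTS, demonstrated on a toy take-away game; everything here is invented for illustration.

```python
# Flat Monte-Carlo move evaluation (a simplified relative of MCTS, for
# illustration only), on a toy Nim game: estimate each move's utility by
# random playouts and pick the move with the best average outcome.
import random

def legal_moves(sticks):
    return [m for m in (1, 2, 3) if m <= sticks]

def playout(sticks, my_turn):
    """Play random moves to the end; +1 if we take the last stick."""
    while sticks > 0:
        sticks -= random.choice(legal_moves(sticks))
        my_turn = not my_turn
    return 1.0 if not my_turn else -1.0

def best_move(sticks, playouts=2000):
    utilities = {}
    for m in legal_moves(sticks):
        results = (playout(sticks - m, my_turn=False) for _ in range(playouts))
        utilities[m] = sum(results) / playouts
    return max(utilities, key=utilities.get)

print(best_move(10))   # with enough playouts this prints 2 (leaving 8 sticks)
```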
2.
  • David, I., et al. (author)
  • Blended modeling in commercial and open-source model-driven software engineering tools: A systematic study
  • 2023
  • In: Software and Systems Modeling. - : Springer Science and Business Media LLC. - 1619-1366 .- 1619-1374. ; 22, s. 415-447
  • Journal article (peer-reviewed), abstract:
    • Blended modeling aims to improve the user experience of modeling activities by prioritizing seamless interaction with models through multiple notations over the consistency of the models. Inconsistency tolerance thus becomes an important aspect in such settings. To understand the potential of current commercial and open-source modeling tools to support blended modeling, we have designed and carried out a systematic study. We identify challenges and opportunities in the tooling aspect of blended modeling. Specifically, we investigate the user-facing and implementation-related characteristics of existing modeling tools that already support multiple types of notations, and map their support for other blended aspects, such as inconsistency tolerance and elevated user experience. For the sake of completeness, we have conducted a multivocal study encompassing an academic literature review and a grey literature review. We have reviewed nearly 5000 academic papers and nearly 1500 entries of grey literature. We have identified 133 candidate tools, and eventually selected 26 of them to represent the current spectrum of modeling tools.
  •  
3.
  • Hujainah, Fadhl Mohammad Omar, 1987, et al. (author)
  • SRPTackle: A semi-automated requirements prioritisation technique for scalable requirements of software system projects
  • 2021
  • In: Information and Software Technology. - : Elsevier BV. - 0950-5849. ; 131
  • Journal article (peer-reviewed), abstract:
    • Context: Requirement prioritisation (RP) is often used to select the most important system requirements as perceived by system stakeholders. RP plays a vital role in ensuring the development of a quality system with defined constraints. However, a closer look at existing RP techniques reveals that these techniques suffer from some key challenges, such as scalability, lack of quantification, insufficient prioritisation of participating stakeholders, overreliance on the participation of professional expertise, lack of automation and excessive time consumption. These key challenges serve as the motivation for the present research. Objective: This study aims to propose a new semiautomated scalable prioritisation technique called ‘SRPTackle’ to address the key challenges. Method: SRPTackle provides a semiautomated process based on a combination of a constructed requirement priority value formulation function using a multi-criteria decision-making method (i.e. the weighted sum model), clustering algorithms (K-means and K-means++) and a binary search tree to minimise the need for expert involvement and increase efficiency. The effectiveness of SRPTackle is assessed by conducting seven experiments using a benchmark dataset from a large real software project. Results: Experiment results reveal that SRPTackle can obtain 93.0% and 94.65% as minimum and maximum accuracy percentages, respectively. These values are better than those of alternative techniques. The findings also demonstrate the capability of SRPTackle to prioritise large-scale requirements with reduced time consumption and its effectiveness in addressing the key challenges in comparison with other techniques. Conclusion: With the time effectiveness, ability to scale well with numerous requirements, automation and clear implementation guidelines of SRPTackle, project managers can perform RP for large-scale requirements in a proper manner, without necessitating an extensive amount of effort (e.g. tedious manual processes, involvement of experts and a heavy time workload).
  •  
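Two of the ingredients the abstract names, a weighted sum model (WSM) for priority values and K-means clustering of the resulting scores, combine roughly as in the hedged sketch below; the criteria, weights and numbers are invented placeholders, not the published SRPTackle implementation.

```python
# Hedged sketch of two core ingredients named in the abstract: a weighted
# sum model scores each requirement, then K-means groups the scores into
# priority clusters (e.g. high/medium/low).
import numpy as np
from sklearn.cluster import KMeans

# One row per requirement, scored against criteria (e.g. value, cost, risk).
criteria_scores = np.array([[8, 3, 5], [2, 9, 4], [7, 7, 2], [4, 1, 9]])
weights = np.array([0.5, 0.3, 0.2])          # stakeholder-assigned weights

priority_values = criteria_scores @ weights  # weighted sum model (WSM)

km = KMeans(n_clusters=3, n_init=10, random_state=0)
groups = km.fit_predict(priority_values.reshape(-1, 1))
print(priority_values, groups)
```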
4.
  • Laaber, C., et al. (author)
  • Applying test case prioritization to software microbenchmarks
  • 2021
  • In: Empirical Software Engineering. - : Springer Science and Business Media LLC. - 1382-3256 .- 1573-7616. ; 26:6
  • Journal article (peer-reviewed), abstract:
    • Regression testing comprises techniques which are applied during software evolution to uncover faults effectively and efficiently. While regression testing is widely studied for functional tests, performance regression testing, e.g., with software microbenchmarks, is hardly investigated. Applying test case prioritization (TCP), a regression testing technique, to software microbenchmarks may help capture large performance regressions sooner in new versions. This may be especially beneficial for microbenchmark suites, because they take considerably longer to execute than unit test suites. However, it is unclear whether traditional unit testing TCP techniques work equally well for software microbenchmarks. In this paper, we empirically study coverage-based TCP techniques, employing total and additional greedy strategies, applied to software microbenchmarks along multiple parameterization dimensions, leading to 54 unique technique instantiations. We find that TCP techniques have a mean APFD-P (average percentage of fault-detection on performance) effectiveness between 0.54 and 0.71 and are able to capture the three largest performance changes after executing 29% to 66% of the whole microbenchmark suite. Our efficiency analysis reveals that the runtime overhead of TCP varies considerably depending on the exact parameterization. The most effective technique has an overhead of 11% of the total microbenchmark suite execution time, making TCP a viable option for performance regression testing. The results demonstrate that the total strategy is superior to the additional strategy. Finally, dynamic-coverage techniques should be favored over static-coverage techniques due to their acceptable analysis overhead; however, in settings where the time for prioritization is limited, static-coverage techniques provide an attractive alternative.
  •  
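The total and additional greedy strategies the paper parameterizes are the classic coverage-based prioritization algorithms; a generic sketch (not the authors' implementation), where each benchmark maps to the set of code units it covers:

```python
# Classic coverage-based TCP strategies, for illustration.
def total_greedy(coverage):
    """Order benchmarks by total number of covered units, descending."""
    return sorted(coverage, key=lambda b: len(coverage[b]), reverse=True)

def additional_greedy(coverage):
    """Repeatedly pick the benchmark covering the most not-yet-covered units."""
    remaining, covered, order = dict(coverage), set(), []
    while remaining:
        best = max(remaining, key=lambda b: len(remaining[b] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

cov = {"bench_a": {1, 2, 3}, "bench_b": {3, 4}, "bench_c": {5}}
print(total_greedy(cov))       # ['bench_a', 'bench_b', 'bench_c']
print(additional_greedy(cov))  # ['bench_a', 'bench_b', 'bench_c']
```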
5.
  • Mahmood, Wardah, 1992, et al. (author)
  • Effects of variability in models: a family of experiments
  • 2022
  • In: Empirical Software Engineering. - : Springer Science and Business Media LLC. - 1382-3256 .- 1573-7616. ; 27:3
  • Journal article (peer-reviewed), abstract:
    • The ever-growing need for customization creates a need to maintain software systems in many different variants. To avoid having to maintain different copies of the same model, developers of modeling languages and tools have recently started to provide implementation techniques for such variant-rich systems, notably variability mechanisms, which support implementing the differences between model variants. Available mechanisms follow either the annotative or the compositional paradigm, each of which has dedicated benefits and drawbacks. Currently, language and tool designers often select the variability mechanism to use based solely on intuition. A better empirical understanding of the comprehension of variability mechanisms would help them improve support for effective modeling. In this article, we present an empirical assessment of annotative and compositional variability mechanisms for three popular types of models. We report and discuss findings from a family of three experiments with 164 participants in total, in which we studied the impact of different variability mechanisms during model comprehension tasks. We experimented with three model types commonly found in modeling languages: class diagrams, state machine diagrams, and activity diagrams. We find that, in two out of three experiments, the annotative technique led to better developer performance. Use of the compositional mechanism correlated with impaired performance. For all three considered tasks, the annotative mechanism was preferred over the compositional one in all experiments. We present actionable recommendations concerning support for flexible, task-specific solutions, and the transfer of established best practices from the code domain to models.
  •  
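The two paradigms under study are easiest to see in their code-domain analogues (the experiments themselves concern models, not code); a minimal, invented example:

```python
# Code-domain analogy of the two studied paradigms. LOGGING plays the role
# of a feature/presence condition.
LOGGING = True

# Annotative: one artifact; variant parts are marked inline.
def save_annotative(data):
    if LOGGING:                 # annotation guards the variant part
        print("saving", data)
    return data

# Compositional: a base unit plus a separately defined feature module.
class Base:
    def save(self, data):
        return data

class LoggingFeature(Base):     # composed on top of Base when selected
    def save(self, data):
        print("saving", data)
        return super().save(data)

save_annotative("model")        # variant behaviour chosen by the annotation
LoggingFeature().save("model")  # variant behaviour chosen by composition
```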
6.
  • Penzenstadler, Birgit, 1981, et al. (author)
  • Bots in Software Engineering
  • 2022
  • In: IEEE Software. - 1937-4194 .- 0740-7459. ; 39:5, s. 101-104
  • Research review (peer-reviewed)
  •  
7.
  • Pir Muhammad, Amna, 1990 (author)
  • Managing Human Factors and Requirements in Agile Development of Automated Vehicles: An Exploration
  • 2022
  • Licentiate thesis (other academic/artistic), abstract:
    • Context: Automated Vehicle (AV) technology has evolved significantly in complexity and impact; it is expected to ultimately change urban transportation. However, research shows that vehicle automation can only live up to this expectation if it is defined with human capabilities and limitations in mind. Therefore, it is necessary to bring human factors knowledge to AV developers. Objective: This thesis aims to empirically study how we can effectively bring the required human factors knowledge into large-scale agile AV development. The research goals are 1) to explore requirements engineering and human factors in agile AV development, 2) to investigate the problems of requirements engineering, human factors, and agile ways of working in AV development, and 3) to demonstrate initial solutions to existing problems in agile AV development. Method: We conducted this research in close collaboration with industry, using different empirical methodologies to collect data, including interviews, workshops, and document analysis. To gain in-depth insights, we carried out a qualitative exploratory study to investigate the problem and used a design science approach to develop an initial solution in several iterations. Findings and Conclusions: We found that applying human factors knowledge effectively is one of the key problem areas that need to be solved in agile development of artificial intelligence (AI)-intense systems. This motivated us to do an in-depth interview study on how to manage human factors knowledge during AV development. From our data, we derived a working definition of human factors for AV development, discovered the relevant properties of agile and human factors, and defined implications for agile ways of working, managing human factors knowledge, and managing requirements. The design science approach allowed us to identify challenges related to agile requirements engineering in three case companies in iterations. Based on these three case studies, we developed a solution strategy to resolve the RE challenges in agile AV development. Moreover, we derived building blocks and described guidelines for the creation of a requirements strategy, which should describe how requirements are structured, how work is organized, and how RE is integrated into the agile work and feature flow. Future Outlook: In future work, I plan to define a concrete requirements strategy for human factors knowledge in large-scale agile AV development. It could help establish clear communication channels and practices for incorporating explicit human factors knowledge into AI-based large-scale agile AV development.
  •  
8.
  • Samoaa, Hazem Peter, et al. (author)
  • A systematic mapping study of source code representation for deep learning in software engineering
  • 2022
  • In: Iet Software. - : Institution of Engineering and Technology (IET). - 1751-8806 .- 1751-8814. ; 16:4, s. 351-385
  • Journal article (peer-reviewed), abstract:
    • The usage of deep learning (DL) approaches for software engineering has attracted much attention, particularly in source code modelling and analysis. However, in order to use DL, source code needs to be formatted to fit the expected input form of DL models. This problem is known as source code representation. Source code can be represented via different approaches, most importantly the tree-based, token-based, and graph-based approaches. We use a systematic mapping study to investigate in detail the representation approaches adopted in 103 studies that use DL in the context of software engineering. The studies were collected from 2014 to 2021 from 14 different journals and 27 conferences. We show that each way of representing source code can provide a different, yet orthogonal, view of the same source code. Thus, different software engineering tasks might require different (combinations of) code representation approaches, depending on the nature and complexity of the task. In particular, we show that it is crucial to define whether the DL approach requires lexical, syntactical, or semantic code information. Our analysis shows that a wide range of different representations and combinations of representations (hybrid representations) are used to solve a wide range of common software engineering problems. However, we also observe that current research does not generally attempt to transfer existing representations or models to other studies, even though there are other contexts in which these representations and models may also be useful. We believe that there is potential for more reuse and the application of transfer learning when applying DL to software engineering tasks.
  •  
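The token-based and tree-based representation families the study surveys can be illustrated with nothing more than the Python standard library; the snippet and its two views below are only an illustration of the distinction:

```python
# Two of the surveyed representation families applied to the same snippet:
# a flat token sequence and an abstract syntax tree (AST).
import ast, io, tokenize

src = "def add(a, b):\n    return a + b\n"

# Token-based view: a flat sequence of lexical tokens.
tokens = [tok.string
          for tok in tokenize.generate_tokens(io.StringIO(src).readline)
          if tok.string.strip()]
print(tokens)   # ['def', 'add', '(', 'a', ',', 'b', ')', ':', 'return', 'a', '+', 'b']

# Tree-based view: the syntactic structure of the same code.
print(ast.dump(ast.parse(src)))
```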
9.
  • Bergström, Gustav, et al. (author)
  • Evaluating the layout quality of UML class diagrams using machine learning
  • 2022
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212. ; 192
  • Journal article (peer-reviewed), abstract:
    • UML is the de facto standard notation for graphically representing software. UML diagrams are used in the analysis, construction, and maintenance of software systems. Mostly, UML diagrams capture an abstract view of a (piece of a) software system. A key purpose of UML diagrams is to share knowledge about the system among developers. The quality of the layout of UML diagrams plays a crucial role in their comprehension. In this paper, we present an automated method for evaluating the layout quality of UML class diagrams. We use machine learning based on features extracted from the class diagram images using image processing. Such an automated evaluator has several uses: (1) From an industrial perspective, this tool could be used for automated quality assurance for class diagrams (e.g., as part of a quality monitor integrated into a DevOps toolchain). For example, automated feedback can be generated once a UML diagram is checked in to the project repository. (2) In an educational setting, the evaluator can grade the layout aspect of student assignments in courses on software modeling, analysis, and design. (3) In the field of algorithm design for graph layouts, our evaluator can assess the layouts generated by such algorithms. In this way, this evaluator opens the way for using machine learning to learn good layout algorithms. Approach: We use machine learning techniques to build (linear) regression models based on features extracted from the class diagram images using image processing. As ground truth, we use a dataset of 600+ UML class diagrams for which experts manually labelled the quality of the layout. Contributions: This paper makes the following contributions: (1) We show the feasibility of the automatic evaluation of the layout quality of UML class diagrams. (2) We analyze which features of UML class diagrams are most strongly related to the quality of their layout. (3) We evaluate the performance of our layout evaluator. (4) We offer a dataset of labeled UML class diagrams. In this dataset, we supply for every diagram the following information: (a) a manually established ground truth of the quality of the layout, (b) an automatically established value for the layout quality of the diagram (produced by our classifier), and (c) the values of key features of the layout of the diagram (obtained by image processing). This dataset can be used for replication of our study and by others to build on and improve this work. Editor's note: Open Science material was validated by the Journal of Systems and Software Open Science Board.
  •  
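The paper's overall recipe, image-derived features fed to a (linear) regression against expert labels, reduces to a pipeline like the following hedged sketch; the feature columns and numbers are invented placeholders, not the study's data:

```python
# Minimal sketch of the recipe: image-derived layout features, a linear
# regression fit against expert quality labels (placeholder data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# One row per diagram: e.g. [edge_crossings, node_overlaps, whitespace_ratio]
X = np.array([[12, 3, 0.40], [2, 0, 0.65], [30, 9, 0.20], [5, 1, 0.55]])
y = np.array([2.0, 4.5, 1.0, 4.0])   # expert-labelled layout quality

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
model = LinearRegression().fit(X_train, y_train)
print(model.predict(X_test))         # predicted quality for held-out diagrams
```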
10.
  •  
11.
  • Sweidan, Dirar, et al. (author)
  • Predicting Customer Churn in Retailing
  • 2022
  • In: Proceedings 21st IEEE International Conference on Machine Learning and Applications ICMLA 2022. - : IEEE. - 9781665462839 - 9781665462846 ; , s. 635-640
  • Conference paper (peer-reviewed), abstract:
    • Customer churn is one of the most challenging problems for digital retailers. With significantly higher costs for acquiring new customers than retaining existing ones, knowledge about which customers are likely to churn becomes essential. This paper reports a case study where a data-driven approach to churn prediction is used for predicting churners and gaining insights about the problem domain. The real-world data set used contains approximately 200 000 customers, describing each customer using more than 50 features. In the pre-processing, exploration, modeling and analysis, attributes related to recency, frequency, and monetary concepts are identified and utilized. In addition, correlations and feature importance are used to discover and understand churn indicators. One important finding is that the churn rate highly depends on the number of previous purchases. In the segment consisting of customers with only one previous purchase, more than 75% will churn, i.e., not make another purchase in the coming year. For customers with at least four previous purchases, the corresponding churn rate is around 25%. Further analysis shows that churning customers in general, and as expected, make smaller purchases and visit the online store less often. In the experimentation, three modeling techniques are evaluated, and the results show that, in particular, Gradient Boosting models can predict churners with relatively high accuracy while obtaining a good balance between precision and recall. 
  •  
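A hedged sketch of the modelling step described above: recency/frequency/monetary (RFM) style features feeding a gradient boosting classifier. The data and feature choice are illustrative, not the case study's.

```python
# RFM-style features into a gradient boosting classifier (toy data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Columns: recency (days since last purchase), frequency (# purchases),
# monetary (total spend).
X = np.array([[400, 1, 30], [20, 12, 700], [250, 2, 80], [10, 25, 1500]])
y = np.array([1, 0, 1, 0])           # 1 = churned within the next year

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
print(clf.predict_proba([[300, 1, 40]])[:, 1])   # estimated churn probability
```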
12.
  • Al Sabbagh, Khaled, 1987, et al. (author)
  • Improving Data Quality for Regression Test Selection by Reducing Annotation Noise
  • 2020
  • In: Proceedings - 46th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2020. ; , s. 191-194
  • Conference paper (peer-reviewed), abstract:
    • Big data and machine learning models have been increasingly used to support software engineering processes and practices. One example is the use of machine learning models to improve test case selection in continuous integration. However, one of the challenges in building such models is the identification and reduction of the noise that often comes with large data sets. In this paper, we present a noise reduction approach that deals with the problem of contradictory training entries. We empirically evaluate the effectiveness of the approach in the context of selective regression testing. For this purpose, we use a curated training set as input to a tree-based machine learning ensemble and compare the classification precision, recall, and f-score against a non-curated set. Our study shows that using the noise reduction approach on the training instances gives better results in prediction, with an improvement of 37% in precision, 70% in recall, and 59% in f-score.
  •  
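One plausible reading of "contradictory training entries" (the paper's exact curation rule may differ) is identical feature vectors that carry conflicting labels; a minimal sketch of dropping them before training:

```python
# Drop feature vectors that appear with more than one distinct label
# (one plausible notion of "contradictory training entries").
from collections import defaultdict

def remove_contradictions(rows):
    """rows: list of (features_tuple, label) pairs."""
    labels = defaultdict(set)
    for features, label in rows:
        labels[features].add(label)
    return [(f, l) for f, l in rows if len(labels[f]) == 1]

data = [((1, 0), "pass"), ((1, 0), "fail"), ((0, 1), "pass")]
print(remove_contradictions(data))   # [((0, 1), 'pass')]
```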
13.
  • Blanch, Krister, 1991 (author)
  • Beyond-application datasets and automated fair benchmarking
  • 2023
  • Licentiate thesis (other academic/artistic), abstract:
    • Beyond-application perception datasets are generalised datasets that emphasise the fundamental components of good machine perception data. When analysing the history of perception datasets, notable trends suggest that the design of a dataset typically aligns with an application goal. Instead of focusing on a specific application, beyond-application datasets look at capturing high-quality, high-volume data from a highly kinematic environment, for the purpose of aiding algorithm development and testing in general. Algorithm benchmarking is a cornerstone of autonomous systems development, and allows developers to demonstrate their results in a comparative manner. However, most benchmarking systems allow developers to use their own hardware or select favourable data. There is also little focus on run-time performance and consistency, with benchmarking systems instead showcasing algorithm accuracy. In combining beyond-application dataset generation with methods for fair benchmarking, there is also the dilemma of how to provide the dataset to developers for this benchmarking, as the result of high-volume, high-quality dataset generation is a significant increase in dataset size compared to traditional perception datasets. This thesis presents the first results of attempting the creation of such a dataset. The dataset was built using a maritime platform, selected due to the highly dynamic environment presented on water. The design and initial testing of this platform are detailed, as well as methods of sensor validation. The thesis then presents a method of fair benchmarking that utilises remote containerisation in a way that allows developers to present their software to the dataset, instead of having to first store a copy locally. To test this dataset and automatic online benchmarking, a number of reference algorithms were required for initial results. Three algorithms were built, using the data from three different sensors captured on the maritime platform. Each algorithm calculates vessel odometry, and the automatic benchmarking system was utilised to show the accuracy and run-time performance of these algorithms. It was found that the containerised approach alleviated data management concerns, prevented inflated accuracy results, and demonstrated precisely how computationally intensive each algorithm was.
  •  
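The remote-containerisation idea can be pictured as a harness that runs a submitted algorithm in a container with the dataset mounted read-only while timing it; a hedged sketch in which the image name and paths are hypothetical:

```python
# Illustrative benchmarking harness: the submitted algorithm runs inside a
# container, the dataset is mounted read-only, and the harness records
# wall-clock run time. Image name and paths are hypothetical.
import subprocess, time

start = time.perf_counter()
subprocess.run(
    ["docker", "run", "--rm",
     "-v", "/srv/dataset:/data:ro",          # data never leaves the host
     "submitted-odometry-image",
     "--input", "/data", "--output", "/tmp/result.json"],
    check=True,
)
print("run time:", time.perf_counter() - start, "s")
```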
14.
  • John, Meenu Mary, et al. (author)
  • Towards an AI-driven business development framework: A multi-case study
  • 2023
  • In: Journal of Software: Evolution and Process. - : Wiley. - 2047-7481 .- 2047-7473. ; 35:6
  • Journal article (peer-reviewed), abstract:
    • Artificial intelligence (AI) and the use of machine learning (ML) and deep learning (DL) technologies are becoming increasingly popular in companies. These technologies enable companies to leverage big quantities of data to improve system performance and accelerate business development. However, despite the appeal of ML/DL, there is a lack of systematic and structured methods and processes to help data scientists and other company roles and functions to develop, deploy and evolve models. In this paper, based on multi-case study research in six companies, we explore practices and challenges practitioners experience in developing ML/DL models as part of large software-intensive embedded systems. Based on our empirical findings, we derive a conceptual framework in which we identify three high-level activities that companies perform in parallel with the development, deployment and evolution of models. Within this framework, we outline activities, iterations and triggers that optimize model design as well as roles and company functions. In this way, we provide practitioners with a blueprint for effectively integrating ML/DL model development into the business to achieve better results than other (algorithmic) approaches. In addition, we show how this framework helps companies solve the challenges we have identified and discuss checkpoints for terminating the business case.
  •  
15.
  • Abrahao, Silvia, et al. (author)
  • Open Source Software: Communities and Quality
  • 2023
  • In: IEEE Software. - 1937-4194 .- 0740-7459. ; 40:4, s. 96-99
  • Journal article (peer-reviewed), abstract:
    • This edition of the Practitioner's Digest features recent papers on open source software related to toxicity in open source discussions, newcomers in open source projects, quality of ansible scripts, code review practices, orphan vulnerabilities in open source software, and the relationship between community and design smells.
  •  
16.
  • Besker, Terese, 1970 (author)
  • Technical Debt: An empirical investigation of its harmfulness and on management strategies in industry
  • 2020
  • Doctoral thesis (other academic/artistic), abstract:
    • Background: In order to survive in today's fast-growing and ever-changing business environment, software companies need to continuously deliver customer value, both from a short- and a long-term perspective. However, the potential long-term and far-reaching negative effects of shortcuts and quick fixes made during the software development lifecycle, described as Technical Debt (TD), can impede the software development process. Objective: The overarching goal of this Ph.D. thesis is twofold. The first goal is to empirically study and understand in what way and to what extent TD influences today's software development work, specifically with the intention to provide more quantitative insight into the field. The second is to understand which different initiatives can reduce the negative effects of TD and which factors are important to consider when implementing such initiatives. Method: To achieve the objectives, a combination of quantitative and qualitative research methodologies is used, including interviews, surveys, a systematic literature review, a longitudinal study, analysis of documents, correlation analysis, and statistical tests. In seven of the eleven studies included in this Ph.D. thesis, a combination of multiple research methods is used to achieve high validity. Results: We present results showing that software suffering from TD will cause various negative effects on both the software and the developing process. These negative effects are illustrated from a technical, financial, and developer's working situational perspective. These studies also identify several initiatives that can be undertaken in order to reduce the negative effects of TD. Conclusion: The results show that software developers report that they waste 23% of their working time due to experiencing TD and that TD requires them to perform additional time-consuming work activities. This study also shows that, compared to all types of TD, architectural TD has the greatest negative impact on daily software development work, and that TD has negative effects on several different software quality attributes. Further, the results show that TD reduces developer morale. Moreover, the findings show that intentionally introducing TD in startup companies can allow the startups to cut development time, enabling faster feedback and increased revenue, preserving resources, and decreasing risk, thereby contributing to beneficial effects. This study also identifies several initiatives that can be undertaken in order to reduce the negative effects of TD, such as the introduction of a tracking process where TD items are introduced into an official backlog. The findings also indicate that there is unfulfilled potential regarding how managers can influence the manner in which software practitioners address TD.
  •  
17.
  • di Lucia, Lorenzo, 1977, et al. (author)
  • Decision-making fitness of methods to understand Sustainable Development Goal interactions
  • 2022
  • In: Nature Sustainability. - : Springer Science and Business Media LLC. - 2398-9629. ; 5:2, s. 131-138
  • Journal article (peer-reviewed), abstract:
    • The integrated nature of the Sustainable Development Goals (SDGs) presents a challenge to implementing the 2030 Agenda. Analytical methods to support decision-makers are often developed without explicitly incorporating decision-makers’ views and experience. Here, we investigate whether existing methods are fit-for-purpose in supporting decision-makers at national and subnational levels. We identify prominent methods for SDG interaction analysis, which we then evaluate by engaging directly (via a survey and interviews) with method developers and decision-makers in Sweden. We find that decision-makers prioritize methods that are simple and flexible to apply and able to provide directly actionable and understandable results. They are less concerned with the accuracy, precision, completeness or quantitative nature of the knowledge. Prominent categories of methods include self-assessment, expert judgement, literature-based, statistical analyses and modelling. Interviewed decision-makers consider these methods in line with the features prioritized in the survey but highlight low performance on features they value highly, such as the extent to which results are actionable and overall ease of use. Methods developers have limited awareness of decision-makers’ priorities and requirements, so hindering methodological advancement. They should focus on the practical value of applications to support decision-makers, resource-constrained organizations and those seeking to evaluate multiple cases.
  •  
18.
  • Giaimo, Federico, 1989, et al. (author)
  • Continuous Experimentation and the cyber-physical systems challenge: An overview of the literature and the industrial perspective.
  • 2020
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212. ; 170
  • Journal article (peer-reviewed), abstract:
    • Context: New software development patterns are emerging that aim at accelerating the process of delivering value. One is Continuous Experimentation, which allows developers to systematically deploy and run instrumented software variants during the development phase in order to collect data from the field of application. While this practice is currently used on a daily basis in web-based systems, technical difficulties challenge its adoption in fields where computational resources are constrained, e.g., cyber-physical systems and the automotive industry. Objective: This paper aims at providing an overview of engagement with the Continuous Experimentation practice in the context of cyber-physical systems. Method: A systematic literature review has been conducted to investigate the link between the practice and the field of application. Additionally, an industrial multiple case study is reported. Results: The study presents the current state of the art regarding Continuous Experimentation in the field of cyber-physical systems. The current perspective of Continuous Experimentation in industry is also reported. Conclusions: The field has not reached maturity yet. More conceptual analyses are found than solution proposals, and the state of practice is yet to be achieved. However, it is expected that in time an increasing number of solutions will be proposed and validated.
  •  
19.
  • Hammarstedt, Martin, et al. (author)
  • Sparv 5 Developer’s Guide
  • 2022
  • Reports (other academic/artistic), abstract:
    • The Sparv Pipeline developed by Språkbanken Text is a text analysis tool run from the command line. This Developer's Guide describes its general structure and key concepts and serves as API documentation. Most importantly, it describes how to write plugins for Sparv 5 so that you can add your own functions to the toolkit.
  •  
20.
  • Hatamian, Majid, et al. (author)
  • A privacy and security analysis of early-deployed COVID-19 contact tracing Android apps
  • 2021
  • In: Empirical Software Engineering. - : Springer Nature. - 1382-3256 .- 1573-7616. ; 26:3
  • Journal article (peer-reviewed), abstract:
    • As this article is being drafted, the SARS-CoV-2/COVID-19 pandemic is causing harm and disruption across the world. Many countries have aimed at supporting their contact tracers with digital contact tracing apps in order to manage and control the spread of the virus. The idea is the automatic registration of meetings between smartphone owners for quicker processing of infection chains. To date, many contact tracing apps have already been launched and used in 2020. There has been a lot of speculation about the privacy and security aspects of these apps and their potential violation of data protection principles. The developers of these apps are therefore constantly criticized for undermining users' privacy, neglecting essential privacy and security requirements, and developing apps under time pressure without considering privacy- and security-by-design. In this study, we analyze the privacy and security performance of 28 contact tracing apps available on the Android platform from various perspectives, including their code's privileges, the promises made in their privacy policies, and their static and dynamic performance. Our methodology is based on the collection of various types of data concerning these 28 apps, namely permission requests, privacy policy texts, run-time resource accesses, and existing security vulnerabilities. Based on the analysis of these data, we quantify and assess the impact of these apps on users' privacy. We aimed at providing a quick and systematic inspection of the earliest contact tracing apps that have been deployed on multiple continents. Our findings reveal that the developers of these apps need to take more cautionary steps to ensure code quality and to address security and privacy vulnerabilities. They should more consciously follow legal requirements with respect to apps' permission declarations, privacy principles, and privacy policy contents.
  •  
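One concrete slice of such an analysis, reading the permission requests out of a decoded AndroidManifest.xml, can be sketched generically (this is not the authors' toolchain):

```python
# List the permissions an app requests, given a decoded (plain-XML)
# AndroidManifest.xml extracted from its APK.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def requested_permissions(manifest_path):
    root = ET.parse(manifest_path).getroot()
    return [elem.attrib.get(ANDROID_NS + "name")
            for elem in root.iter("uses-permission")]

# e.g. ['android.permission.ACCESS_FINE_LOCATION', 'android.permission.BLUETOOTH']
print(requested_permissions("AndroidManifest.xml"))
```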
21.
  • Javed, Muhammad, et al. (author)
  • Safe and secure platooning of Automated Guided Vehicles in Industry 4.0
  • 2021
  • In: Journal of systems architecture. - Sweden : Elsevier B.V.. - 1383-7621 .- 1873-6165. ; 121
  • Journal article (peer-reviewed), abstract:
    • Automated Guided Vehicles (AGVs) are widely used for materials transportation. Operating them in a platooned manner has the potential to improve safety, security and efficiency, control overall traffic flow and reduce resource usage. However, the published studies on platooning focus mainly on the design of technical solutions in the context of the automotive domain. In this paper we focus on a largely unexplored theme of platooning in production sites transformed to Industry 4.0, with the aim of providing safety and security assurances. We present an overall approach for fault- and threat-tolerant platooning for materials transportation in production environments. Our functional use cases include platoon control for collision avoidance, data acquisition and processing by considering range, and connectivity with fog and cloud levels. To perform the safety and security analyses, the Hazard and Operability (HAZOP) and Threat and Operability (THROP) techniques are used. Based on the results obtained from them, safety and security requirements are derived for the identification and prevention/mitigation of potential platooning hazards, threats and vulnerabilities. Assurance cases are constructed to show the acceptable safety and security of materials transportation using AGV platooning. We leveraged a simulation-based digital twin for performing the verification, validation and fine-tuning of the platooning strategy. Simulation data is gathered from the digital twin to monitor platoon operations, identify unexpected or incorrect behaviour, evaluate the potential implications, trigger control actions to resolve them, and continuously update the assurance cases. The applicability of AGV platooning is demonstrated in the context of a quarry site.
  •  
22.
  • Karlsson, Stefan, et al. (author)
  • Exploring API behaviours through generated examples
  • 2024
  • In: Software Quality Journal. - : SPRINGER. - 1573-1367 .- 0963-9314. ; 32:2, s. 729-763
  • Journal article (peer-reviewed), abstract:
    • Understanding the behaviour of a system's API can be hard. Giving users access to relevant examples of how an API behaves has been shown to make this easier for them. In addition, such examples can be used to verify expected behaviour or identify unwanted behaviours. Methods for automatically generating examples have existed for a long time. However, state-of-the-art methods rely on either white-box information, such as source code, or on formal specifications of the system behaviour. But what if you do not have access to either? This may be the case, for example, when interacting with a third-party API. In this paper, we present an approach to automatically generate relevant examples of behaviours of an API, without requiring either source code or a formal specification of behaviour. Evaluation on an industry-grade REST API shows that our method can produce small and relevant examples that can help engineers to understand the system under exploration.
  •  
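In black-box form, the idea reduces to firing generated inputs at an endpoint and keeping the observed request/response pairs as candidate behaviour examples; a generic sketch (not the paper's tool) with a hypothetical URL and parameter:

```python
# Black-box example generation: send generated inputs to a REST endpoint
# and record the observed behaviours. URL and parameter are hypothetical.
import random
import requests

examples = []
for _ in range(20):
    params = {"quantity": random.randint(-5, 100)}    # generated input
    resp = requests.get("https://api.example.com/orders", params=params)
    examples.append((params, resp.status_code, resp.text[:80]))

# Surprising pairs (e.g. a negative quantity that is accepted) are candidate
# behaviour examples to show an engineer.
for ex in examples[:3]:
    print(ex)
```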
23.
  • Michael Ayas, Hamdy, 1994, et al. (author)
  • An empirical study of the systemic and technical migration towards microservices
  • 2023
  • In: Empirical Software Engineering. - 1382-3256 .- 1573-7616. ; 28:4
  • Journal article (peer-reviewed), abstract:
    • Context: As many organizations modernize their software architecture and transition to the cloud, migrations towards microservices become more popular. Even though such migrations help to achieve organizational agility and effectiveness in software development, they are also highly complex, long-running, and multi-faceted. Objective: In this study we aim to comprehensively map the journey towards microservices and describe in detail what such a migration entails. In particular, we aim to discuss not only the technical migration, but also the long-term journey of change, on a systemic level. Method: Our research method is an inductive, qualitative study of two data sources. Two main methodological steps take place: interviews and analysis of discussions from StackOverflow. The analysis of both the 19 interviews and the 215 StackOverflow discussions is based on techniques found in grounded theory. Results: Our results depict the migration journey as it materializes within the migrating organization, from structural changes to the specific technical changes that take place in the work of engineers. We provide an overview of how microservices migrations take place, as well as a deconstruction of high-level modes of change into specific solution outcomes. Our theory contains 2 modes of change taking place in migration iterations, 14 activities and 53 solution outcomes of engineers. One of our findings concerns the architectural change, which is iterative and needs both a long- and a short-term perspective, including both business and technical understanding. In addition, we found that a big proportion of the technical migration has to do with setting up supporting artifacts and changing the paradigm in which software is developed.
  •  
24.
  • Mohamad, Mazen, et al. (author)
  • Security assurance cases-state of the art of an emerging approach
  • 2021
  • In: Empirical Software Engineering. - : Springer Science and Business Media LLC. - 1382-3256 .- 1573-7616. ; 26:4
  • Journal article (peer-reviewed), abstract:
    • Security Assurance Cases (SAC) are a form of structured argumentation used to reason about the security properties of a system. After the successful adoption of assurance cases for safety, SAC have gained significant traction in recent years, especially in safety-critical industries (e.g., automotive), where there is increasing pressure to be compliant with several security standards and regulations. Accordingly, research in the field of SAC has flourished in the past decade, with different approaches being investigated. In an effort to systematize this active field of research, we conducted a systematic literature review (SLR) of the existing academic studies on SAC. Our review resulted in an in-depth analysis and comparison of 51 papers. Our results indicate that, while there are numerous papers discussing the importance of SAC and their usage scenarios, the literature is still immature with respect to concrete support for practitioners on how to build and maintain a SAC. More importantly, even though some methodologies are available, their validation and tool support are still lacking.
  •  
25.
  • Nass, Michel, et al. (author)
  • Improving Web Element Localization by Using a Large Language Model
  • 2024
  • In: Software Testing Verification and Reliability. - 0960-0833 .- 1099-1689. ; In Press
  • Journal article (peer-reviewed), abstract:
    • Web-based test automation heavily relies on accurately finding web elements. Traditional methods compare attributes but do not grasp the context and meaning of elements and words. The emergence of large language models (LLMs) like GPT-4, which can show human-like reasoning abilities on some tasks, offers new opportunities for software engineering and web element localization. This paper introduces and evaluates VON Similo LLM, an enhanced web element localization approach. Using an LLM, it selects the most likely web element from the top-ranked ones identified by the existing VON Similo method, ideally aiming to get closer to human-like selection accuracy. An experimental study was conducted using 804 web element pairs from 48 real-world web applications. We measured the number of correctly identified elements as well as the execution times, comparing the effectiveness and efficiency of VON Similo LLM against the baseline algorithm. In addition, motivations from the LLM were recorded and analysed for 140 instances. VON Similo LLM demonstrated improved performance, reducing failed localizations from 70 to 40 (out of 804), a 43% reduction. Despite its slower execution time and additional costs of using the GPT-4 model, the LLM's human-like reasoning showed promise in enhancing web element localization. LLM technology can enhance web element localization in GUI test automation, reducing false positives and potentially lowering maintenance costs. However, further research is necessary to fully understand LLMs' capabilities, limitations and practical use in GUI testing.
  •  
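The selection step described above can be sketched as a single prompt that asks an LLM to choose among the top-ranked candidates; the prompt shape, the candidate snippets, and the use of the OpenAI client are assumptions for illustration, not the paper's exact setup:

```python
# Ask an LLM to pick the intended web element among top-ranked candidates
# (illustrative prompt; not the VON Similo LLM implementation).
from openai import OpenAI

candidates = [
    '1: <button id="submit-order">Place order</button>',
    '2: <a class="btn" href="/order">Order history</a>',
    '3: <button id="cancel">Cancel</button>',
]
prompt = ("Target element from a previous app version: "
          '<button id="submitOrder">Place order</button>\n'
          "Which candidate below is most likely the same element? "
          "Answer with one number.\n" + "\n".join(candidates))

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)   # expected: "1"
```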
Type of publication
conference paper (649)
journal article (561)
book chapter (51)
licentiate thesis (46)
research review (44)
doctoral thesis (41)
reports (22)
editorial proceedings (14)
other publication (10)
book (6)
editorial collection (2)
artistic work (2)
Type of content
peer-reviewed (1224)
other academic/artistic (219)
pop. science, debate, etc. (3)
Author/Editor
Bosch, Jan, 1967 (84)
Olsson, Helena Holms ... (54)
Staron, Miroslaw, 19 ... (50)
Mendez, Daniel (50)
Knauss, Eric, 1977 (40)
Horkoff, Jennifer, 1 ... (37)
Gorschek, Tony, 1972 ... (37)
Runeson, Per (34)
Berger, Thorsten, 19 ... (32)
Feldt, Robert, 1972 (29)
Gay, Gregory, 1987 (29)
Borg, Markus (28)
Unterkalmsteiner, Mi ... (25)
Steghöfer, Jan-Phili ... (25)
Alégroth, Emil, 1984 ... (25)
Börstler, Jürgen, 19 ... (25)
Strüber, Daniel, 198 ... (24)
Söderberg, Emma (24)
Penzenstadler, Birgi ... (23)
Felderer, Michael, 1 ... (23)
Šmite, Darja (23)
Petersen, Kai (23)
Wnuk, Krzysztof, 198 ... (22)
Fucci, Davide, 1985- (22)
Gonzalez-Huerta, Jav ... (22)
Monperrus, Martin (21)
Ahmad, Muhammad Ovai ... (20)
Hebig, Regina, 1984 (19)
Pelliccione, Patrizi ... (19)
Torkar, Richard, 197 ... (18)
Baudry, Benoit (18)
Engström, Emelie (18)
Weyns, Danny (17)
Hebig, Regina (17)
Wohlrab, Rebekka, 19 ... (17)
Ali, Nauman bin, Dr. (17)
Mendes, Emilia (17)
Wohlin, Claes (16)
Lundell, Björn (16)
Abrahão, Silvia (15)
Gamalielsson, Jonas (15)
Britto, Ricardo, 198 ... (14)
Frattini, Julian, 19 ... (14)
Enoiu, Eduard Paul, ... (13)
Höst, Martin (13)
Ahmed, Bestoun S., 1 ... (13)
Gomes, Francisco, 19 ... (13)
Gren, Lucas, 1984 (13)
Ericsson, Morgan, Do ... (13)
Usman, Muhammad, 197 ... (13)
University
Chalmers University of Technology (512)
Blekinge Institute of Technology (297)
University of Gothenburg (264)
Lund University (130)
Royal Institute of Technology (96)
Mälardalen University (76)
Malmö University (61)
RISE (61)
Karlstad University (55)
Linnaeus University (45)
Uppsala University (33)
University of Skövde (33)
Linköping University (30)
Luleå University of Technology (24)
Örebro University (17)
Umeå University (13)
Mid Sweden University (11)
Jönköping University (9)
Halmstad University (7)
Stockholm University (5)
Stockholm School of Economics (5)
Karolinska Institutet (4)
University West (3)
Swedish University of Agricultural Sciences (3)
Kristianstad University College (2)
University of Borås (2)
VTI - The Swedish National Road and Transport Research Institute (2)
IVL Swedish Environmental Research Institute (2)
University of Gävle (1)
Högskolan Dalarna (1)
Stockholm University of the Arts (1)
Language
English (1435)
Swedish (10)
Undefined language (1)
Mongolian (1)
Research subject (UKÄ/SCB)
Natural sciences (1446)
Engineering and Technology (361)
Social Sciences (130)
Medical and Health Sciences (15)
Humanities (14)
Agricultural Sciences (4)
