SwePub
Search the SwePub database

  Advanced search

Result list for search "hsv:(NATURVETENSKAP) hsv:(Data och informationsvetenskap) hsv:(Programvaruteknik)"

Search: hsv:(NATURVETENSKAP) hsv:(Data och informationsvetenskap) hsv:(Programvaruteknik)

  • Results 1-50 of 4767
Sort/group the result list

Numbering | Reference | Cover image | Find
1.
  • Chatterjee, Bapi, 1982 (author)
  • Lock-free Concurrent Search
  • 2017
  • Doctoral thesis (other academic/artistic), abstract:
    • Contemporary computers typically consist of multiple computing cores with high compute power, and they make excellent concurrent asynchronous shared-memory systems. On the other hand, although many celebrated books on data structures and algorithms provide a comprehensive study of sequential search data structures, we unfortunately do not have such a luxury once concurrency enters the setting. The present dissertation aims to address this paucity. We describe novel lock-free algorithms for concurrent data structures that target a variety of search problems. (i) Point search (membership query, predecessor query, nearest neighbour query) for 1-dimensional data: lock-free linked-list; lock-free internal and external binary search trees (BST). (ii) Range search for 1-dimensional data: a range search method for lock-free ordered set data structures - linked-list, skip-list and BST. (iii) Point search for multi-dimensional data: lock-free kD-tree, in particular, a generic method for nearest neighbour search. We prove that the presented algorithms are linearizable, i.e. the concurrent data structure operations intuitively display their sequential behaviour to an observer of the concurrent system. The lock-freedom in the introduced algorithms guarantees overall progress in an asynchronous shared-memory system. We present an amortized analysis of the lock-free data structures to show their efficiency. Moreover, we provide sample implementations of the algorithms and test them over extensive micro-benchmarks. Our experiments demonstrate that the implementations are scalable and perform well when compared to related existing alternative implementations on common multi-core computers. Our focus is on propounding generic methodologies for efficient lock-free concurrent search. In this direction, we present the notion of help-optimality, which captures the optimization of the amortized step complexity of the operations. In addition, we explore the language-portable design of lock-free data structures that aims to simplify an implementation from the programmer's point of view. Finally, our techniques to implement lock-free linearizable range search and nearest neighbour search are independent of the underlying data structures and thus are adaptable to similar data structures.
  •  
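The CAS-based insertion step that such lock-free linked lists build on can be sketched in a few lines of Java. This is a minimal, insert-only illustration of the general technique, not the dissertation's actual algorithm; the class and method names are hypothetical.

```java
import java.util.concurrent.atomic.AtomicReference;

// Minimal sketch of a lock-free sorted linked-list insert using compare-and-swap (CAS).
// Illustrative only: deletion, and the marking/helping machinery a full algorithm
// needs to stay correct and lock-free under removals, are omitted.
public class LockFreeIntList {
    private static final class Node {
        final int key;
        final AtomicReference<Node> next;
        Node(int key, Node next) { this.key = key; this.next = new AtomicReference<>(next); }
    }

    // Sentinel head and tail keep the traversal code free of null checks.
    private final Node head = new Node(Integer.MIN_VALUE, new Node(Integer.MAX_VALUE, null));

    public boolean add(int key) {
        while (true) {
            // Find the window (pred, curr) with pred.key < key <= curr.key.
            Node pred = head, curr = head.next.get();
            while (curr.key < key) { pred = curr; curr = curr.next.get(); }
            if (curr.key == key) return false;               // key already present
            Node node = new Node(key, curr);
            // Linearization point: a successful CAS publishes the new node.
            if (pred.next.compareAndSet(curr, node)) return true;
            // CAS failed because of a concurrent update; retry from the head.
        }
    }

    public boolean contains(int key) {
        Node curr = head.next.get();
        while (curr.key < key) curr = curr.next.get();
        return curr.key == key;
    }
}
```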
2.
  •  
3.
  • Liu, Yuanhua, 1971, et al. (authors)
  • Considering the importance of user profiles in interface design
  • 2009
  • In: User Interfaces, pp. 23-
  • Book chapter (other academic/artistic), abstract:
    • User profile is a popular term widely employed during product design processes by industrial companies. Such a profile is normally intended to represent real users of a product. The ultimate purpose of a user profile is actually to help designers to recognize or learn about the real user by presenting them with a description of a real user's attributes, for instance, the user's gender, age, educational level, attitude, technical needs and skill level. The aim of this chapter is to provide information on the current knowledge and research about user profile issues, as well as to emphasize the importance of considering these issues in interface design. In this chapter, we mainly focus on how users' differences in expertise affect their performance or activity in various interaction contexts. Considering the complex interaction situations in practice, novice and expert users' interactions with medical user interfaces of different technical complexity will be analyzed as examples: one focuses on novice and expert users' differences when interacting with simple medical interfaces, and the other focuses on differences when interacting with complex medical interfaces. Four issues will be analyzed and discussed: (1) how novice and expert users differ in terms of performance during the interaction; (2) how novice and expert users differ in the perspective of cognitive mental models during the interaction; (3) how novice and expert users should be defined in practice; and (4) what the main differences between novice and expert users imply for interface design. Besides describing the effect of users' differences in expertise during the interface design process, we will also pinpoint some potential problems for the research on interface design, as well as some future challenges that academic researchers and industrial engineers should face in practice.
  •  
4.
  • Rumman, Nadine Abu, et al. (authors)
  • Skin deformation methods for interactive character animation
  • 2017
  • In: Communications in Computer and Information Science. - Cham : Springer International Publishing. - 1865-0937 .- 1865-0929. ; 693, pp. 153-174
  • Conference paper (peer-reviewed), abstract:
    • Character animation is a vital component of contemporary computer games, animated feature films and virtual reality applications. The problem of creating appealing character animation can best be described by the title of the animation bible: “The Illusion of Life”. The focus is not on completing a given motion task, but more importantly on how this motion task is performed by the character. This does not necessarily require realistic behavior, but behavior that is believable. This of course includes the skin deformations when the character is moving. In this paper, we focus on the existing research in the area of skin deformation, ranging from skeleton-based deformation and volume preserving techniques to physically based skinning methods. We also summarize the recent contributions in deformable and soft body simulations for articulated characters, and discuss various geometric and example-based approaches. © Springer International Publishing AG 2017.
  •  
5.
  • Scheuner, Joel, 1991, et al. (authors)
  • Performance Benchmarking of Infrastructure-as-a-Service (IaaS) Clouds with CloudWorkBench
  • 2019
  • In: ICPE 2019 - Companion of the 2019 ACM/SPEC International Conference on Performance Engineering. - New York, NY, USA : ACM. ; pp. 53-56
  • Conference paper (peer-reviewed), abstract:
    • The continuing growth of the cloud computing market has led to an unprecedented diversity of cloud services with different performance characteristics. To support service selection, researchers and practitioners conduct cloud performance benchmarking by measuring and objectively comparing the performance of different providers and configurations (e.g., instance types in different data center regions). In this tutorial, we demonstrate how to write performance tests for IaaS clouds using the Web-based benchmarking tool Cloud WorkBench (CWB). We will motivate and introduce benchmarking of IaaS clouds in general, demonstrate the execution of a simple benchmark in a public cloud environment, summarize the CWB tool architecture, and interactively develop and deploy a more advanced benchmark together with the participants.
  •  
6.
  •  
7.
  • Sweidan, Dirar, et al. (authors)
  • Predicting Customer Churn in Retailing
  • 2022
  • In: Proceedings 21st IEEE International Conference on Machine Learning and Applications ICMLA 2022. - : IEEE. - 9781665462839 - 9781665462846 ; pp. 635-640
  • Conference paper (peer-reviewed), abstract:
    • Customer churn is one of the most challenging problems for digital retailers. With significantly higher costs for acquiring new customers than retaining existing ones, knowledge about which customers are likely to churn becomes essential. This paper reports a case study where a data-driven approach to churn prediction is used for predicting churners and gaining insights about the problem domain. The real-world data set used contains approximately 200 000 customers, describing each customer using more than 50 features. In the pre-processing, exploration, modeling and analysis, attributes related to recency, frequency, and monetary concepts are identified and utilized. In addition, correlations and feature importance are used to discover and understand churn indicators. One important finding is that the churn rate highly depends on the number of previous purchases. In the segment consisting of customers with only one previous purchase, more than 75% will churn, i.e., not make another purchase in the coming year. For customers with at least four previous purchases, the corresponding churn rate is around 25%. Further analysis shows that churning customers in general, and as expected, make smaller purchases and visit the online store less often. In the experimentation, three modeling techniques are evaluated, and the results show that, in particular, Gradient Boosting models can predict churners with relatively high accuracy while obtaining a good balance between precision and recall. 
  •  
8.
  • Lu, Zhihan, et al. (authors)
  • Multimodal Hand and Foot Gesture Interaction for Handheld Devices
  • 2014
  • In: ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP). - : Association for Computing Machinery (ACM). - 1551-6857 .- 1551-6865. ; 11:1
  • Journal article (peer-reviewed), abstract:
    • We present a hand-and-foot-based multimodal interaction approach for handheld devices. Our method combines input modalities (i.e., hand and foot) and provides a coordinated output to both modalities along with audio and video. Human foot gestures are detected and tracked using contour-based template detection (CTD) and the Tracking-Learning-Detection (TLD) algorithm. 3D foot pose is estimated from the passive homography matrix of the camera. 3D stereoscopic rendering and vibrotactile feedback are used to enhance the immersive feeling. We developed a multimodal football game based on the multimodal approach as a proof-of-concept. We confirm our system's user satisfaction through a user study.
  •  
9.
  • Al Sabbagh, Khaled, 1987, et al. (authors)
  • Improving Data Quality for Regression Test Selection by Reducing Annotation Noise
  • 2020
  • In: Proceedings - 46th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2020. ; pp. 191-194
  • Conference paper (peer-reviewed), abstract:
    • Big data and machine learning models have been increasingly used to support software engineering processes and practices. One example is the use of machine learning models to improve test case selection in continuous integration. However, one of the challenges in building such models is the identification and reduction of noise that often comes in large data. In this paper, we present a noise reduction approach that deals with the problem of contradictory training entries. We empirically evaluate the effectiveness of the approach in the context of selective regression testing. For this purpose, we use a curated training set as input to a tree-based machine learning ensemble and compare the classification precision, recall, and f-score against a non-curated set. Our study shows that using the noise reduction approach on the training instances gives better results in prediction with an improvement of 37% on precision, 70% on recall, and 59% on f-score.
  •  
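One plausible reading of the noise-reduction step described above is to drop training rows whose identical feature vectors occur with conflicting labels. The Java sketch below illustrates that idea only; the types are hypothetical and the paper's actual procedure may differ.

```java
import java.util.*;

// Sketch: remove "contradictory entries", i.e. rows whose identical feature
// vectors appear with more than one label in the training set.
// Illustration only; the paper's exact noise-reduction procedure may differ.
public class ContradictionFilter {
    /** Returns the indices of rows whose feature vector always carries the same label. */
    public static List<Integer> consistentRows(List<int[]> features, List<Integer> labels) {
        Map<String, Set<Integer>> labelsPerVector = new HashMap<>();
        for (int i = 0; i < features.size(); i++) {
            labelsPerVector.computeIfAbsent(Arrays.toString(features.get(i)), k -> new HashSet<>())
                           .add(labels.get(i));
        }
        List<Integer> kept = new ArrayList<>();
        for (int i = 0; i < features.size(); i++) {
            if (labelsPerVector.get(Arrays.toString(features.get(i))).size() == 1) {
                kept.add(i); // no contradiction observed for this feature vector
            }
        }
        return kept;
    }
}
```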
10.
  • Bainomugisha, Engineer, et al. (authors)
  • Message from Chairs of SEiA 2018
  • 2018
  • In: Proceedings - International Conference on Software Engineering. - New York, NY, USA : ACM. - 0270-5257. ; 2018, pp. x-xi
  • Conference paper (other academic/artistic)
  •  
11.
  • Bergström, Gustav, et al. (authors)
  • Evaluating the layout quality of UML class diagrams using machine learning
  • 2022
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212. ; 192
  • Journal article (peer-reviewed), abstract:
    • UML is the de facto standard notation for graphically representing software. UML diagrams are used in the analysis, construction, and maintenance of software systems. Mostly, UML diagrams capture an abstract view of a (piece of a) software system. A key purpose of UML diagrams is to share knowledge about the system among developers. The quality of the layout of UML diagrams plays a crucial role in their comprehension. In this paper, we present an automated method for evaluating the layout quality of UML class diagrams. We use machine learning based on features extracted from the class diagram images using image processing. Such an automated evaluator has several uses: (1) From an industrial perspective, this tool could be used for automated quality assurance for class diagrams (e.g., as part of a quality monitor integrated into a DevOps toolchain). For example, automated feedback can be generated once a UML diagram is checked into the project repository. (2) In an educational setting, the evaluator can grade the layout aspect of student assignments in courses on software modeling, analysis, and design. (3) In the field of algorithm design for graph layouts, our evaluator can assess the layouts generated by such algorithms. In this way, this evaluator opens up the road for using machine learning to learn good layouting algorithms. Approach: We use machine learning techniques to build (linear) regression models based on features extracted from the class diagram images using image processing. As ground truth, we use a dataset of 600+ UML class diagrams for which experts manually label the quality of the layout. Contributions: This paper makes the following contributions: (1) We show the feasibility of the automatic evaluation of the layout quality of UML class diagrams. (2) We analyze which features of UML class diagrams are most strongly related to the quality of their layout. (3) We evaluate the performance of our layout evaluator. (4) We offer a dataset of labeled UML class diagrams. In this dataset, we supply for every diagram the following information: (a) a manually established ground truth of the quality of the layout, (b) an automatically established value for the layout quality of the diagram (produced by our classifier), and (c) the values of key features of the layout of the diagram (obtained by image processing). This dataset can be used for replication of our study and by others to build on and improve this work. Editor's note: Open Science material was validated by the Journal of Systems and Software Open Science Board.
  •  
12.
  • Laaber, C., et al. (authors)
  • An Evaluation of Open-Source Software Microbenchmark Suites for Continuous Performance Assessment
  • 2018
  • In: MSR '18 Proceedings of the 15th International Conference on Mining Software Repositories. - New York, NY, USA : ACM Digital Library. - 9781450357166 ; pp. 119-130
  • Book chapter (other academic/artistic), abstract:
    • Continuous integration (CI) emphasizes quick feedback to developers. This is at odds with the current practice of performance testing, which predominantly focuses on long-running tests against entire systems in production-like environments. Alternatively, software microbenchmarking attempts to establish a performance baseline for small code fragments in short time. This paper investigates the quality of microbenchmark suites with a focus on suitability to deliver quick performance feedback and CI integration. We study ten open-source libraries written in Java and Go with benchmark suite sizes ranging from 16 to 983 tests, and runtimes between 11 minutes and 8.75 hours. We show that our study subjects include benchmarks with result variability of 50% or higher, indicating that not all benchmarks are useful for reliable discovery of slow-downs. We further artificially inject actual slowdowns into public API methods of the study subjects and test whether test suites are able to discover them. We introduce a performance-test quality metric called the API benchmarking score (ABS). ABS represents a benchmark suite's ability to find slowdowns among a set of defined core API methods. Resulting benchmarking scores (i.e., fraction of discovered slowdowns) vary between 10% and 100% for the study subjects. This paper's methodology and results can be used to (1) assess the quality of existing microbenchmark suites, (2) select a set of tests to be run as part of CI, and (3) suggest or generate benchmarks for currently untested parts of an API.
  •  
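Reading the API benchmarking score (ABS) as the fraction of injected core-API slowdowns that a benchmark suite actually flags, a toy computation might look as follows. The data and names are hypothetical; see the paper for the precise definition of the metric.

```java
import java.util.*;

// Toy reading of an "API benchmarking score" (ABS): the fraction of core API
// methods with an injected slowdown that the benchmark suite actually flags.
// Hypothetical data; the paper defines the metric precisely.
public class ApiBenchmarkingScore {
    public static double abs(Set<String> coreApiMethodsWithSlowdown, Set<String> methodsFlaggedBySuite) {
        if (coreApiMethodsWithSlowdown.isEmpty()) return 1.0;
        long discovered = coreApiMethodsWithSlowdown.stream()
                .filter(methodsFlaggedBySuite::contains)
                .count();
        return (double) discovered / coreApiMethodsWithSlowdown.size();
    }

    public static void main(String[] args) {
        Set<String> injected = Set.of("List.add", "Map.get", "IO.read", "Json.parse");
        Set<String> flagged  = Set.of("List.add", "Json.parse");
        System.out.println(abs(injected, flagged)); // 0.5 -> 50% of slowdowns discovered
    }
}
```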
13.
  • Laaber, C., et al. (authors)
  • Applying test case prioritization to software microbenchmarks
  • 2021
  • In: Empirical Software Engineering. - : Springer Science and Business Media LLC. - 1382-3256 .- 1573-7616. ; 26:6
  • Journal article (peer-reviewed), abstract:
    • Regression testing comprises techniques which are applied during software evolution to uncover faults effectively and efficiently. While regression testing is widely studied for functional tests, performance regression testing, e.g., with software microbenchmarks, is hardly investigated. Applying test case prioritization (TCP), a regression testing technique, to software microbenchmarks may help capture large performance regressions sooner upon new versions. This may especially be beneficial for microbenchmark suites, because they take considerably longer to execute than unit test suites. However, it is unclear whether traditional unit testing TCP techniques work equally well for software microbenchmarks. In this paper, we empirically study coverage-based TCP techniques, employing total and additional greedy strategies, applied to software microbenchmarks along multiple parameterization dimensions, leading to 54 unique technique instantiations. We find that TCP techniques have a mean APFD-P (average percentage of fault-detection on performance) effectiveness between 0.54 and 0.71 and are able to capture the three largest performance changes after executing 29% to 66% of the whole microbenchmark suite. Our efficiency analysis reveals that the runtime overhead of TCP varies considerably depending on the exact parameterization. The most effective technique has an overhead of 11% of the total microbenchmark suite execution time, making TCP a viable option for performance regression testing. The results demonstrate that the total strategy is superior to the additional strategy. Finally, dynamic-coverage techniques should be favored over static-coverage techniques due to their acceptable analysis overhead; however, in settings where the time for prioritization is limited, static-coverage techniques provide an attractive alternative.
  •  
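As a point of reference for the "total" greedy strategy mentioned above, the basic idea is simply to rank benchmarks by how many coverage units each one covers on its own. A minimal Java sketch under that assumption (the study's actual techniques span many more parameterization dimensions):

```java
import java.util.*;

// Minimal sketch of coverage-based "total" greedy prioritization:
// rank benchmarks by how many coverage units (e.g., methods) each one covers.
// Illustrative only; not the paper's full tooling.
public class TotalGreedyTcp {
    public static List<String> prioritize(Map<String, Set<String>> coverage) {
        List<String> order = new ArrayList<>(coverage.keySet());
        order.sort(Comparator.comparingInt((String b) -> coverage.get(b).size()).reversed());
        return order;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> cov = Map.of(
            "benchA", Set.of("m1", "m2", "m3"),
            "benchB", Set.of("m2"),
            "benchC", Set.of("m1", "m4"));
        System.out.println(prioritize(cov)); // [benchA, benchC, benchB]
    }
}
```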
14.
  • Mallozzi, Piergiuseppe, 1990, et al. (authors)
  • A runtime monitoring framework to enforce invariants on reinforcement learning agents exploring complex environments
  • 2019
  • In: RoSE 2019, IEEE/ACM 2nd International Workshop on Robotics Software Engineering, pp. 5-12. - : IEEE. - 9781728122496
  • Conference paper (peer-reviewed), abstract:
    • © 2019 IEEE. Without prior knowledge of the environment, a software agent can learn to achieve a goal using machine learning. Model-free Reinforcement Learning (RL) can be used to make the agent explore the environment and learn to achieve its goal by trial and error. Discovering effective policies to achieve the goal in a complex environment is a major challenge for RL. Furthermore, in safety-critical applications, such as robotics, an unsafe action may cause catastrophic consequences for the agent or the environment. In this paper, we present an approach that uses runtime monitoring to prevent the reinforcement learning agent from performing 'wrong' actions and to exploit prior knowledge to smartly explore the environment. Each monitor is defined by a property that we want to enforce on the agent and a context. The monitors are orchestrated by a meta-monitor that activates and deactivates them dynamically according to the context in which the agent is learning. We have evaluated our approach by training the agent in randomly generated learning environments. Our results show that our approach blocks the agent from performing dangerous and safety-critical actions in all the generated environments. Besides, our approach helps the agent to achieve its goal faster by providing feedback and shaping its reward during learning.
  •  
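The core gating idea, a monitor that vetoes an agent's proposed action when an invariant would be violated in the current context, can be sketched as below. This is a simplified illustration with hypothetical types; the paper's framework additionally includes a meta-monitor and reward shaping.

```java
import java.util.List;
import java.util.function.BiPredicate;
import java.util.function.Predicate;

// Sketch of gating an RL agent's actions with runtime monitors: each monitor
// pairs an invariant with the context in which it applies, and a proposed
// action is replaced by a safe fallback if any active monitor rejects it.
// Hypothetical types and names; illustration only.
public class SafetyGate {
    /** A monitor: active in some states, and allows or rejects (state, action) pairs. */
    public static final class Monitor {
        final Predicate<double[]> context;             // when does this monitor apply?
        final BiPredicate<double[], Integer> allows;   // is the action safe here?
        Monitor(Predicate<double[]> context, BiPredicate<double[], Integer> allows) {
            this.context = context; this.allows = allows;
        }
    }

    private final List<Monitor> monitors;
    public SafetyGate(List<Monitor> monitors) { this.monitors = monitors; }

    /** Return the proposed action if all active monitors allow it, otherwise the fallback. */
    public int filter(double[] state, int proposedAction, int safeFallback) {
        for (Monitor m : monitors) {
            if (m.context.test(state) && !m.allows.test(state, proposedAction)) {
                return safeFallback; // block the unsafe action
            }
        }
        return proposedAction;
    }
}
```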
15.
  • Pir Muhammad, Amna, 1990 (author)
  • Managing Human Factors and Requirements in Agile Development of Automated Vehicles: An Exploration
  • 2022
  • Licentiate thesis (other academic/artistic), abstract:
    • Context: Automated Vehicle (AV) technology has evolved significantly in complexity and impact; it is expected to ultimately change urban transportation. However, research shows that vehicle automation can only live up to this expectation if it is defined with human capabilities and limitations in mind. Therefore, it is necessary to bring human factors knowledge to AV developers. Objective: This thesis aims to empirically study how we can effectively bring the required human factors knowledge into large-scale agile AV development. The research goals are 1) to explore requirements engineering and human factors in agile AV development, 2) to investigate the problems of requirements engineering, human factors, and agile ways of working in AV development, and 3) to demonstrate initial solutions to existing problems in agile AV development. Method: We conducted this research in close collaboration with industry, using different empirical methodologies to collect data, including interviews, workshops, and document analysis. To gain in-depth insights, we did a qualitative exploratory study to investigate the problem and used a design science approach to develop an initial solution in several iterations. Findings and Conclusions: We found that applying human factors knowledge effectively is one of the key problem areas that need to be solved in agile development of artificial intelligence (AI)-intense systems. This motivated us to do an in-depth interview study on how to manage human factors knowledge during AV development. From our data, we derived a working definition of human factors for AV development, discovered the relevant properties of agile and human factors, and defined implications for agile ways of working, managing human factors knowledge, and managing requirements. The design science approach allowed us to identify challenges related to agile requirements engineering in three case companies in iterations. Based on these three case studies, we developed a solution strategy to resolve the RE challenges in agile AV development. Moreover, we derived building blocks and described guidelines for the creation of a requirements strategy, which should describe how requirements are structured, how work is organized, and how RE is integrated into the agile work and feature flow. Future Outlook: In future work, I plan to define a concrete requirements strategy for human factors knowledge in large-scale agile AV development. It could help establish clear communication channels and practices for incorporating explicit human factors knowledge into AI-based large-scale agile AV development.
  •  
16.
  • Samoaa, Hazem Peter, et al. (authors)
  • A systematic mapping study of source code representation for deep learning in software engineering
  • 2022
  • In: IET Software. - : Institution of Engineering and Technology (IET). - 1751-8806 .- 1751-8814. ; 16:4, pp. 351-385
  • Journal article (peer-reviewed), abstract:
    • The usage of deep learning (DL) approaches for software engineering has attracted much attention, particularly in source code modelling and analysis. However, in order to use DL, source code needs to be formatted to fit the expected input form of DL models. This problem is known as source code representation. Source code can be represented via different approaches, most importantly, the tree-based, token-based, and graph-based approaches. We use a systematic mapping study to investigate in detail the representation approaches adopted in 103 studies that use DL in the context of software engineering. The studies are collected from 2014 to 2021 from 14 different journals and 27 conferences. We show that each way of representing source code can provide a different, yet orthogonal view of the same source code. Thus, different software engineering tasks might require different (combinations of) code representation approaches, depending on the nature and complexity of the task. In particular, we show that it is crucial to define whether the DL approach requires lexical, syntactical, or semantic code information. Our analysis shows that a wide range of different representations and combinations of representations (hybrid representations) are used to solve a wide range of common software engineering problems. However, we also observe that current research does not generally attempt to transfer existing representations or models to other studies even though there are other contexts in which these representations and models may also be useful. We believe that there is potential for more reuse and the application of transfer learning when applying DL to software engineering tasks.
  •  
17.
  • Sundell, Håkan, 1968, et al. (authors)
  • NOBLE: non-blocking programming support via lock-free shared abstract data types
  • 2009
  • In: SIGARCH Computer Architecture News. - : ACM, Association for Computing Machinery, Inc.. - 0163-5964 .- 1943-5851. ; 36:5, pp. 80-87
  • Journal article (peer-reviewed), abstract:
    • An essential part of programming for multi-core and multi-processor systems includes efficient and reliable means for sharing data. Lock-free data structures are known to be very suitable for this purpose, although they are experienced as very complex to design. In this paper, we present a software library of non-blocking abstract data types that have been designed to facilitate lock-free programming for non-experts. The system provides: i) efficient implementations of the most commonly used data types in concurrent and sequential software design, ii) a lock-free memory management system, and iii) a run-time system. The library provides clear semantics that are at least as strong as those of corresponding lock-based implementations of the respective data types. Our software library can be used for facilitating lock-free programming; its design enables the programmer to: i) replace lock-based components of sequential or parallel code easily and efficiently, ii) use well-tuned concurrent algorithms inside a software or hardware transactional system. In the paper we describe the design and functionality of the system. We also provide experimental results that show that the library can considerably improve the performance of software systems.
  •  
18.
  • Falkman, Göran, 1968-, et al. (authors)
  • SOMWeb - Towards an Infrastructure for Knowledge Sharing in Oral Medicine
  • 2005
  • In: Connecting Medical Informatics and Bio-Informatics: Proceedings of MIE2005 - The XIXth International Congress of the European Federation for Medical Informatics. - Amsterdam : IOS Press. - 1586035495 ; 116, pp. 527-532
  • Conference paper (peer-reviewed), abstract:
    • In a net-based society, clinicians can come together for cooperative work and distance learning around a common medical material. This requires suitable techniques for cooperative knowledge management and user interfaces that are adapted to both the group as a whole and to individuals. To support distributed management and sharing of clinical knowledge, we propose the development of an intelligent web community for clinicians within oral medicine. This virtual meeting place will support the ongoing work on developing a digital knowledge base, providing a foundation for a more evidence-based oral medicine. The presented system is founded on the use and development of web services and standards for knowledge modelling and knowledge-based systems. The work is conducted within the frame of a well-established cooperation between oral medicine and computer science.
  •  
19.
  • John, Meenu Mary, et al. (authors)
  • Towards an AI-driven business development framework: A multi-case study
  • 2023
  • In: Journal of Software: Evolution and Process. - : Wiley. - 2047-7481 .- 2047-7473. ; 35:6
  • Journal article (peer-reviewed), abstract:
    • Artificial intelligence (AI) and the use of machine learning (ML) and deep learning (DL) technologies are becoming increasingly popular in companies. These technologies enable companies to leverage big quantities of data to improve system performance and accelerate business development. However, despite the appeal of ML/DL, there is a lack of systematic and structured methods and processes to help data scientists and other company roles and functions to develop, deploy and evolve models. In this paper, based on multi-case study research in six companies, we explore practices and challenges practitioners experience in developing ML/DL models as part of large software-intensive embedded systems. Based on our empirical findings, we derive a conceptual framework in which we identify three high-level activities that companies perform in parallel with the development, deployment and evolution of models. Within this framework, we outline activities, iterations and triggers that optimize model design as well as roles and company functions. In this way, we provide practitioners with a blueprint for effectively integrating ML/DL model development into the business to achieve better results than other (algorithmic) approaches. In addition, we show how this framework helps companies solve the challenges we have identified and discuss checkpoints for terminating the business case.
  •  
20.
  • Peldszus, Sven, et al. (authors)
  • Secure Data-Flow Compliance Checks between Models and Code Based on Automated Mappings
  • 2019
  • In: Proceedings - 2019 ACM/IEEE 22nd International Conference on Model Driven Engineering Languages and Systems, MODELS 2019. ; pp. 23-33
  • Conference paper (peer-reviewed), abstract:
    • During the development of security-critical software, the system implementation must capture the security properties postulated by the architectural design. This paper presents an approach to support secure data-flow compliance checks between design models and code. To iteratively guide the developer in discovering such compliance violations we introduce automated mappings. These mappings are created by searching for correspondences between a design-level model (Security Data Flow Diagram) and an implementation-level model (Program Model). We limit the search space by considering name similarities between model elements and code elements as well as by the use of heuristic rules for matching data-flow structures. The main contributions of this paper are three-fold. First, the automated mappings support the designer in an early discovery of implementation absence, convergence, and divergence with respect to the planned software design. Second, the mappings also support the discovery of secure data-flow compliance violations in terms of illegal asset flows in the software implementation. Third, we present our implementation of the approach as a publicly available Eclipse plugin and its evaluation on five open source Java projects (including Eclipse secure storage).
  •  
21.
  • Aramrattana, Maytheewat, 1988-, et al. (authors)
  • Team Halmstad Approach to Cooperative Driving in the Grand Cooperative Driving Challenge 2016
  • 2018
  • In: IEEE Transactions on Intelligent Transportation Systems (Print). - Piscataway, N.J. : Institute of Electrical and Electronics Engineers Inc.. - 1524-9050 .- 1558-0016. ; 19:4, pp. 1248-1261
  • Journal article (peer-reviewed), abstract:
    • This paper is an experience report of team Halmstad from the participation in a competition organised by the i-GAME project, the Grand Cooperative Driving Challenge 2016. The competition was held in Helmond, The Netherlands, during the last weekend of May 2016. We give an overview of our car’s control and communication system that was developed for the competition following the requirements and specifications of the i-GAME project. In particular, we describe our implementation of cooperative adaptive cruise control, our solution to the communication and logging requirements, as well as the high level decision making support. For the actual competition we did not manage to completely reach all of the goals set out by the organizers as well as ourselves. However, this did not prevent us from outperforming the competition. Moreover, the competition allowed us to collect data for further evaluation of our solutions to cooperative driving. Thus, we discuss what we believe were the strong points of our system, and discuss post-competition evaluation of the developments that were not fully integrated into our system during competition time. © 2000-2011 IEEE.
  •  
22.
  • David, I., et al. (authors)
  • Blended modeling in commercial and open-source model-driven software engineering tools: A systematic study
  • 2023
  • In: Software and Systems Modeling. - : Springer Science and Business Media LLC. - 1619-1366 .- 1619-1374. ; 22, pp. 415-447
  • Journal article (peer-reviewed), abstract:
    • Blended modeling aims to improve the user experience of modeling activities by prioritizing the seamless interaction with models through multiple notations over the consistency of the models. Inconsistency tolerance thus becomes an important aspect in such settings. To understand the potential of current commercial and open-source modeling tools to support blended modeling, we have designed and carried out a systematic study. We identify challenges and opportunities in the tooling aspect of blended modeling. Specifically, we investigate the user-facing and implementation-related characteristics of existing modeling tools that already support multiple types of notations and map their support for other blended aspects, such as inconsistency tolerance and elevated user experience. For the sake of completeness, we have conducted a multivocal study encompassing an academic review and a grey literature review. We have reviewed nearly 5000 academic papers and nearly 1500 entries of grey literature. We have identified 133 candidate tools, and eventually selected 26 of them to represent the current spectrum of modeling tools.
  •  
23.
  • Hujainah, Fadhl Mohammad Omar, 1987, et al. (authors)
  • SRPTackle: A semi-automated requirements prioritisation technique for scalable requirements of software system projects
  • 2021
  • In: Information and Software Technology. - : Elsevier BV. - 0950-5849. ; 131
  • Journal article (peer-reviewed), abstract:
    • Context: Requirement prioritisation (RP) is often used to select the most important system requirements as perceived by system stakeholders. RP plays a vital role in ensuring the development of a quality system with defined constraints. However, a closer look at existing RP techniques reveals that these techniques suffer from some key challenges, such as scalability, lack of quantification, insufficient prioritisation of participating stakeholders, overreliance on the participation of professional expertise, lack of automation and excessive time consumption. These key challenges serve as the motivation for the present research. Objective: This study aims to propose a new semi-automated scalable prioritisation technique called ‘SRPTackle’ to address the key challenges. Method: SRPTackle provides a semi-automated process based on a combination of a constructed requirement priority value formulation function using a multi-criteria decision-making method (i.e. the weighted sum model), clustering algorithms (K-means and K-means++) and a binary search tree to minimise the need for expert involvement and increase efficiency. The effectiveness of SRPTackle is assessed by conducting seven experiments using a benchmark dataset from a large actual software project. Results: Experiment results reveal that SRPTackle can obtain 93.0% and 94.65% as minimum and maximum accuracy percentages, respectively. These values are better than those of alternative techniques. The findings also demonstrate the capability of SRPTackle to prioritise large-scale requirements with reduced time consumption and its effectiveness in addressing the key challenges in comparison with other techniques. Conclusion: With the time effectiveness, ability to scale well with numerous requirements, automation and clear implementation guidelines of SRPTackle, project managers can perform RP for large-scale requirements in a proper manner, without necessitating an extensive amount of effort (e.g. tedious manual processes, need for the involvement of experts and time workload).
  •  
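The weighted sum model (WSM) step named in the abstract reduces to a dot product between criterion weights and a requirement's scores. A small illustrative sketch with made-up numbers (SRPTackle additionally combines this with stakeholder weighting, clustering, and a binary search tree, which are not shown here):

```java
// Sketch of the weighted-sum step behind a requirement priority value:
// each criterion (or stakeholder) gets a weight, each requirement a score per
// criterion, and the priority value is the weighted sum. Hypothetical data.
public class WeightedSumPriority {
    public static double priorityValue(double[] weights, double[] scores) {
        double value = 0.0;
        for (int j = 0; j < weights.length; j++) {
            value += weights[j] * scores[j];   // weighted sum model (WSM)
        }
        return value;
    }

    public static void main(String[] args) {
        double[] criterionWeights  = {0.5, 0.3, 0.2};
        double[] requirementScores = {4.0, 2.0, 5.0};   // e.g., ratings on a 1-5 scale
        System.out.println(priorityValue(criterionWeights, requirementScores)); // 3.6
    }
}
```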
24.
  • Lidstrom, D, et al. (authors)
  • Agent based match racing simulations : Starting practice
  • 2022
  • In: SNAME 24th Chesapeake Sailing Yacht Symposium, CSYS 2022. - : Society of Naval Architects and Marine Engineers.
  • Conference paper (peer-reviewed), abstract:
    • Match racing starts in sailing are strategically complex and of great importance for the outcome of a race. With the return of the America's Cup to upwind starts and the World Match Racing Tour attracting young and development sailors, the tactical skills necessary to master the starts could be trained and learned by means of computer simulations to assess a large range of approaches to the starting box. This project used game theory to model the start of a match race, intending to develop and study strategies using Monte-Carlo tree search to estimate the utility of a player's potential moves throughout a race. Strategies that utilised the estimated utility in different ways were defined and tested against each other by means of simulation and assessed with expert advice on match racing start strategy from a sailor's perspective. The results show that the strategies that put greater emphasis on what the opponent might do perform better than those that did not. It is concluded that Monte-Carlo tree search can provide a basis for decision making in match races and that it has potential for further use.
  •  
25.
  • Mahmood, Wardah, 1992, et al. (authors)
  • Effects of variability in models: a family of experiments
  • 2022
  • In: Empirical Software Engineering. - : Springer Science and Business Media LLC. - 1382-3256 .- 1573-7616. ; 27:3
  • Journal article (peer-reviewed), abstract:
    • The ever-growing need for customization creates a need to maintain software systems in many different variants. To avoid having to maintain different copies of the same model, developers of modeling languages and tools have recently started to provide implementation techniques for such variant-rich systems, notably variability mechanisms, which support implementing the differences between model variants. Available mechanisms either follow the annotative or the compositional paradigm, each of which has dedicated benefits and drawbacks. Currently, language and tool designers select the used variability mechanism often solely based on intuition. A better empirical understanding of the comprehension of variability mechanisms would help them in improving support for effective modeling. In this article, we present an empirical assessment of annotative and compositional variability mechanisms for three popular types of models. We report and discuss findings from a family of three experiments with 164 participants in total, in which we studied the impact of different variability mechanisms during model comprehension tasks. We experimented with three model types commonly found in modeling languages: class diagrams, state machine diagrams, and activity diagrams. We find that, in two out of three experiments, the annotative technique led to better developer performance. Use of the compositional mechanism correlated with impaired performance. For all three considered tasks, the annotative mechanism was preferred over the compositional one in all experiments. We present actionable recommendations concerning support of flexible, task-specific solutions, and the transfer of established best practices from the code domain to models.
  •  
26.
  • Penzenstadler, Birgit, 1981, et al. (authors)
  • Bots in Software Engineering
  • 2022
  • In: IEEE Software. - 1937-4194 .- 0740-7459. ; 39:5, pp. 101-104
  • Research review (peer-reviewed)
  •  
27.
  • Sandklef, Henrik, 1971, et al. (authors)
  • Programming with Java
  • 2017
  • Book (other academic/artistic), abstract:
    • Course literature in programming with Java
  •  
28.
  • Elmqvist, Niklas, 1977, et al. (authors)
  • Employing Dynamic Transparency for 3D Occlusion Management: Design Issues and Evaluation
  • 2007
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - 1611-3349 .- 0302-9743. - 9783540747949 ; 4662, pp. 532-545
  • Conference paper (peer-reviewed), abstract:
    • Recent developments in occlusion management for 3D environments often involve the use of dynamic transparency, or virtual “X-ray vision”, to promote target discovery and access in complex 3D worlds. However, there are many different approaches to achieving this effect and their actual utility for the user has yet to be evaluated. Furthermore, the introduction of semi-transparent surfaces adds additional visual complexity that may actually have a negative impact on task performance. In this paper, we report on an empirical user study comparing dynamic transparency to standard viewpoint controls. Our implementation of the technique is an image-space algorithm built using modern programmable shaders to achieve real-time performance and visually pleasing results. Results from the user study indicate that dynamic transparency is superior for perceptual tasks in terms of both efficiency and correctness.
  •  
29.
  • Blanch, Krister, 1991 (author)
  • Beyond-application datasets and automated fair benchmarking
  • 2023
  • Licentiate thesis (other academic/artistic), abstract:
    • Beyond-application perception datasets are generalised datasets that emphasise the fundamental components of good machine perception data. When analysing the history of perception datasets, notable trends suggest that the design of a dataset typically aligns with an application goal. Instead of focusing on a specific application, beyond-application datasets instead look at capturing high-quality, high-volume data from a highly kinematic environment, for the purpose of aiding algorithm development and testing in general. Algorithm benchmarking is a cornerstone of autonomous systems development, and allows developers to demonstrate their results in a comparative manner. However, most benchmarking systems allow developers to use their own hardware or select favourable data. There is also little focus on run-time performance and consistency, with benchmarking systems instead showcasing algorithm accuracy. By combining both beyond-application dataset generation and methods for fair benchmarking, there is also the dilemma of how to provide the dataset to developers for this benchmarking, as the result of high-volume, high-quality dataset generation is a significant increase in dataset size when compared to traditional perception datasets. This thesis presents the first results of attempting the creation of such a dataset. The dataset was built using a maritime platform, selected due to the highly dynamic environment presented on water. The design and initial testing of this platform are detailed, as well as methods of sensor validation. Continuing, the thesis then presents a method of fair benchmarking, by utilising remote containerisation in a way that allows developers to present their software to the dataset, instead of having to first locally store a copy. To test this dataset and automatic online benchmarking, a number of reference algorithms were required for initial results. Three algorithms were built, using the data from three different sensors captured on the maritime platform. Each algorithm calculates vessel odometry, and the automatic benchmarking system was utilised to show the accuracy and run-time performance of these algorithms. It was found that the containerised approach alleviated data management concerns, prevented inflated accuracy results, and demonstrated precisely how computationally intensive each algorithm was.
  •  
30.
  • Nguyen, Björnborg, 1992, et al. (authors)
  • Systematic benchmarking for reproducibility of computer vision algorithms for real-time systems: The example of optic flow estimation
  • 2019
  • In: IEEE International Conference on Intelligent Robots and Systems. - : IEEE. - 2153-0858 .- 2153-0866. ; pp. 5264-5269
  • Conference paper (peer-reviewed), abstract:
    • Until now there have been few formalized methods for conducting systematic benchmarking aimed at reproducible results when it comes to computer vision algorithms. This is evident from the lists of algorithms submitted to prominent datasets: authors of a novel method in many cases primarily state the performance of their algorithms in relation to a shallow description of the hardware system where it was evaluated. There are significant problems linked to this non-systematic approach of reporting performance, especially when comparing different approaches, and when it comes to the reproducibility of claimed results. It is also unclear how to conduct retrospective performance analysis, such as assessing an algorithm's suitability for embedded real-time systems over time as the underlying hardware and software change. This paper proposes and demonstrates a systematic way of addressing such challenges by adopting containerization of software, aiming at formalization and reproducibility of benchmarks. Our results encourage maintainers of broadly accepted datasets in the computer vision community to strive for systematic comparison and reproducibility of submissions to increase the value and adoption of computer vision algorithms in the future.
  •  
31.
  • Casado, Lander, 1985, et al. (authors)
  • ContikiSec: A Secure Network Layer for Wireless Sensor Networks under the Contiki Operating System
  • 2009
  • In: Proceedings of the 14th Nordic Conference on Secure IT Systems (NordSec 2009), Lecture Notes in Computer Science. - 1611-3349. - 9783642047657 ; 5838, pp. 133-147
  • Conference paper (peer-reviewed), abstract:
    • In this paper we introduce ContikiSec, a secure network layer for wireless sensor networks, designed for the Contiki Operating System. ContikiSec has a configurable design, providing three security modes starting from confidentiality and integrity, and expanding to confidentiality, authentication, and integrity. ContikiSec has been designed to balance low energy consumption and security while conforming to a small memory footprint. Our design was based on performance evaluation of existing security primitives and is part of the contribution of this paper. Our evaluation was performed on the Modular Sensor Board hardware platform for wireless sensor networks, running Contiki. Contiki is an open source, highly portable operating system for wireless sensor networks (WSN) that is widely used in WSNs.
  •  
32.
  • Chatterjee, Bapi, 1982 (author)
  • Efficient Implementation of Concurrent Data Structures on Multi-core and Many-core Architectures
  • 2015
  • Licentiate thesis (other academic/artistic), abstract:
    • Synchronization of concurrent threads is the central problem in designing efficient concurrent data structures. The compute systems widely available on the market are increasingly becoming heterogeneous, involving multi-core Central Processing Units (CPUs) and many-core Graphics Processing Units (GPUs). This thesis contributes to the research on efficient synchronization in concurrent data structures in more than one way. It is divided into two parts. In the first part, a novel design of a Set Abstract Data Type (ADT) based on an efficient lock-free Binary Search Tree (BST), with improved amortized bounds on the time complexity of the set operations Add, Remove and Contains, is presented. In the second part, a comprehensive evaluation of concurrent Queue implementations on multi-core CPUs as well as many-core GPUs is presented. Efficient lock-free BST: To the best of our knowledge, the lock-free BST presented in this thesis is the first to achieve an amortized complexity of O(H(n)+c) for all Set operations, where H(n) is the height of a BST on n nodes and c is the contention measure. Also, the presented lock-free BST algorithm comes with improved disjoint-access-parallelism compared to the previously existing concurrent BST algorithms. The algorithm uses single-word compare-and-swap (CAS) primitives and is linearizable. We implemented the algorithm in Java and it shows good scalability. Evaluation of concurrent data structures: We have evaluated the performance of a number of concurrent FIFO Queue algorithms on multi-core CPUs and many-core GPUs. We studied the portability of existing designs of concurrent Queues from CPUs to GPUs, which are inherently designed for SIMD programs. We observed that, in general, concurrent queues lend themselves to efficient implementation on GPUs with faster cache memory and better support for atomic synchronization primitives such as CAS. To the best of our knowledge, this is the first attempt to evaluate a concurrent data structure on GPUs.
  •  
33.
  • Ha, Phuong, 1976, et al. (authors)
  • The Synchronization Power of Coalesced Memory Accesses
  • 2008
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Berlin, Heidelberg : Springer Berlin Heidelberg. - 1611-3349 .- 0302-9743. - 9783540877783 ; 5218, pp. 320-334
  • Conference paper (peer-reviewed), abstract:
    • Multicore processor architectures have established themselves as the new generation of processor architectures. As part of the one-core-to-many-cores evolution, memory access mechanisms have advanced rapidly. Several new memory access mechanisms have been implemented in many modern commodity multicore processors. Memory access mechanisms, by devising how processing cores access the shared memory, directly influence the synchronization capabilities of the multicore processors. Therefore, it is crucial to investigate the synchronization power of these new memory access mechanisms. This paper investigates the synchronization power of coalesced memory accesses, a family of memory access mechanisms introduced in recent large multicore architectures like the CUDA graphics processors. We first design three memory access models to capture the fundamental features of the new memory access mechanisms. Subsequently, we prove the exact synchronization power of these models in terms of their consensus numbers. These tight results show that the coalesced memory access mechanisms can facilitate strong synchronization between the threads of multicore processors, without the need of synchronization primitives other than reads and writes. In the case of the contemporary CUDA processors, our results imply that the coalesced memory access mechanisms have consensus numbers up to sixteen.
  •  
34.
  • Ha, Phuong, 1976, et al. (authors)
  • Wait-free Programming for General Purpose Computations on Graphics Processors
  • 2008
  • In: Proceedings of the 22nd International Parallel and Distributed Processing Symposium (IPDPS 2008). - 1530-2075. - 9781424416936 ; pp. 1-12
  • Conference paper (peer-reviewed), abstract:
    • The fact that graphics processors (GPUs) are today's most powerful computational hardware for the dollar has motivated researchers to utilize the ubiquitous and powerful GPUs for general-purpose computing. Recent GPUs feature the single-program multiple-data (SPMD) multicore architecture instead of the single-instruction multiple-data (SIMD). However, unlike CPUs, GPUs devote their transistors mainly to data processing rather than data caching and flow control, and consequently most of the powerful GPUs with many cores do not support any synchronization mechanisms between their cores. This prevents GPUs from being deployed more widely for general-purpose computing. This paper aims at bridging the gap between the lack of synchronization mechanisms in recent GPU architectures and the need of synchronization mechanisms in parallel applications. Based on the intrinsic features of recent GPU architectures, we construct strong synchronization objects like wait-free and t-resilient read-modify-write objects for a general model of recent GPU architectures without strong hardware synchronization primitives like test-and-set and compare-and-swap. Accesses to the wait-free objects have time complexity O(N), where N is the number of processes. Our result demonstrates that it is possible to construct wait-free synchronization mechanisms for GPUs without the need of strong synchronization primitives in hardware and that wait-free programming is possible for GPUs.
  •  
35.
  • Elmqvist, Niklas, 1977, et al. (authors)
  • CiteWiz: a tool for the visualization of scientific citation networks
  • 2007
  • In: Information Visualization. - 1473-8716 .- 1473-8724. ; 6:3, pp. 215-232
  • Journal article (peer-reviewed), abstract:
    • We present CiteWiz, an extensible framework for visualization of scientific citation networks. The system is based on a taxonomy of citation database usage for researchers, and provides a timeline visualization for overviews and an influence visualization for detailed views. The timeline displays the general chronology and importance of authors and articles in a citation database, whereas the influence visualization is implemented using the Growing Polygons technique, suitably modified to the context of browsing citation data. Using the latter technique, hierarchies of articles with potentially very long citation chains can be graphically represented. The visualization is augmented with mechanisms for parent–child visualization and suitable interaction techniques for interacting with the view hierarchy and the individual articles in the dataset. We also provide an interactive concept map for keywords and co-authorship using a basic force-directed graph layout scheme. A formal user study indicates that CiteWiz is significantly more efficient than traditional database interfaces for high-level analysis tasks relating to influence and overviews, and equally efficient for low-level tasks such as finding a paper and correlating bibliographical data.
  •  
36.
  • Elmqvist, Niklas, 1977, et al. (authors)
  • DataMeadow: a visual canvas for analysis of large-scale multivariate data
  • 2008
  • In: Information Visualization. - : SAGE Publications. - 1473-8716 .- 1473-8724. ; 7:1, pp. 18-33
  • Journal article (peer-reviewed), abstract:
    • Supporting visual analytics of multiple large-scale multidimensional data sets requires a high degree of interactivity and user control beyond the conventional challenges of visualizing such data sets. We present the DataMeadow, a visual canvas providing rich interaction for constructing visual queries using graphical set representations called DataRoses. A DataRose is essentially a starplot of selected columns in a data set displayed as multivariate visualizations with dynamic query sliders integrated into each axis. The purpose of the DataMeadow is to allow users to create advanced visual queries by iteratively selecting and filtering into the multidimensional data. Furthermore, the canvas provides a clear history of the analysis that can be annotated to facilitate dissemination of analytical results to stakeholders. A powerful direct manipulation interface allows for selection, filtering, and creation of sets, subsets, and data dependencies. We have evaluated our system using a qualitative expert review involving two visualization researchers. Results from this review are favorable for the new method.
  •  
37.
  • Paçacı, Görkem, et al. (authors)
  • Towards a visual compositional relational programming methodology
  • 2012
  • In: Diagrams 2012. ; pp. 17-19
  • Conference paper (peer-reviewed), abstract:
    • We present a new visual programming method, based on Combilog, a compositional relational programming language. In this paper we focus on the compositional aspect of Combilog, the make operator, visually implementing it via a modification of Higraph diagrams, in an attempt to overcome the obscurity and complexity in the textual representation of this operator.
  •  
38.
  •  
39.
  • Menghi, Claudio, 1987, et al. (författare)
  • Poster: Property specification patterns for robotic missions
  • 2018
  • Ingår i: Proceedings - International Conference on Software Engineering. - New York, NY, USA : ACM. - 0270-5257. ; Part F137351, s. 434-435
  • Konferensbidrag (refereegranskat)abstract
    • Engineering dependable software for mobile robots is becoming increasingly important. A core asset in engineering mobile robots is the mission specification: a formal description of the goals that mobile robots shall achieve. Such mission specifications are used, among others, to synthesize, verify, simulate, or guide the engineering of robot software. Development of precise mission specifications is challenging. Engineers need to translate the mission requirements into specification structures expressed in a logical language, a laborious and error-prone task. To mitigate this problem, we present a catalog of mission specification patterns for mobile robots. Our focus is on robot movement, one of the most prominent and recurrent specification problems for mobile robots. Our catalog maps common mission specification problems to recurrent solutions, which we provide as templates that can be used by engineers. The patterns are the result of analyzing missions extracted from the literature. For each pattern, we describe usage intent, known uses, relationships to other patterns, and, most importantly, a template representing the solution as a logical formula in temporal logic. Our specification patterns constitute reusable building blocks that can be used by engineers to create complex mission specifications while reducing specification mistakes. We believe that our patterns support researchers working on tool support and techniques to synthesize and verify mission specifications, and language designers creating rich domain-specific languages for mobile robots, incorporating our patterns as language concepts.
  •  
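To give a flavour of the kind of temporal-logic templates such a catalog contains, here are two generic mission formulations in LTL. They are illustrative only and are not quoted from the paper's catalog:

    \begin{align*}
      \mathit{SequencedVisit}(l_1, l_2, l_3) &= \Diamond\bigl(l_1 \wedge \Diamond(l_2 \wedge \Diamond\, l_3)\bigr)\\
      \mathit{Patrolling}(l_1, l_2)          &= \Box\Diamond\, l_1 \;\wedge\; \Box\Diamond\, l_2
    \end{align*}

Reading the diamond as "eventually" and the box as "always", the first formula requires visiting l1, l2 and l3 in that order, while the second requires returning to both locations infinitely often.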
40.
  • Alahdab, Mohannad, et al. (författare)
  • Empirical Analysis of Hidden Technical Debt Patterns in Machine Learning Software
  • 2019
  • Ingår i: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Cham : Springer International Publishing. - 1611-3349 .- 0302-9743. ; 11915 LNCS, s. 195-202
  • Konferensbidrag (refereegranskat)abstract
    • Context/Background Machine Learning (ML) software is particularly prone to accumulating technical debt, since it suffers from ML-specific issues in addition to all the problems of regular code. The term “Hidden Technical Debt” (HTD) was coined by Sculley et al. to address maintainability issues in ML software as an analogy to technical debt in traditional software. Goal The aim of this paper is to empirically analyse how HTD patterns emerge during the early development phase of ML software, namely the prototyping phase. Method We conducted a case study in which the subject systems are ML models planned to be integrated into the software system owned by Västtrafik, the public transportation agency in western Sweden. Results During our case study, we could detect the HTD patterns that have the potential to emerge in ML prototypes, except for “Legacy Features”, “Correlated features”, and “Plain Old Data Type Smell”. Conclusion Preliminary results indicate that a significant number of HTD patterns can emerge during the prototyping phase. However, the generalizability of our results requires analyses of further ML systems from various domains.
  •  
41.
  • Alshareef, Hanaa, 1985, et al. (författare)
  • Transforming data flow diagrams for privacy compliance
  • 2021
  • Ingår i: MODELSWARD 2021 - Proceedings of the 9th International Conference on Model-Driven Engineering and Software Development. - : SCITEPRESS - Science and Technology Publications. ; , s. 207-215
  • Konferensbidrag (refereegranskat)abstract
    • Most software design notations, for instance Data Flow Diagrams (DFDs), focus on functional aspects and thus cannot model non-functional aspects like privacy. In this paper, we provide an explicit algorithm and a proof-of-concept implementation to transform DFDs into so-called Privacy-Aware Data Flow Diagrams (PA-DFDs). Our tool systematically inserts privacy checks into a DFD, generating a PA-DFD. We apply our approach to two realistic applications from the construction and online retail sectors.
  •  
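The abstract above does not spell out the transformation itself, so the following is only a loose, hypothetical sketch of the general idea of systematically inserting privacy checks into a DFD represented as a set of flows; the node names and the check policy are invented:

    # Flows are (source, target, metadata) triples; flows carrying personal data
    # get a privacy-check node spliced in between producer and consumer.
    flows = [
        ("WebShop", "OrderDB", {"personal": True}),
        ("OrderDB", "Analytics", {"personal": True}),
        ("Analytics", "Dashboard", {"personal": False}),
    ]

    def to_pa_dfd(flows):
        transformed = []
        for source, target, meta in flows:
            if meta.get("personal"):
                check = f"PrivacyCheck[{source}->{target}]"
                transformed.append((source, check, meta))
                transformed.append((check, target, meta))
            else:
                transformed.append((source, target, meta))
        return transformed

    for edge in to_pa_dfd(flows):
        print(edge)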
42.
  • Bender, Benedikt, et al. (författare)
  • Patterns in the Press Releases of Trade Unions: How to Use Structural Topic Models in the Field of Industrial Relations
  • 2022
  • Ingår i: Industrielle Beziehungen. - : Verlag Barbara Budrich GmbH. - 0943-2779 .- 1862-0035. ; 29:2, s. 91-116
  • Tidskriftsartikel (refereegranskat)abstract
    • Quantitative text analysis and the use of large data sets have received only limited attention in the field of Industrial Relations. This is unfortunate, given the variety of opportunities and possibilities these methods can address. We demonstrate the use of one promising technique of quantitative text analysis – the Structural Topic Model (STM) – to test the Insider-Outsider theory. This technique allowed us to find underlying topics in a text corpus of nearly 2,000 German trade union press releases (from 2000 to 2014). We provide a step-by-step overview of how to use STM since we see this method as useful to the future of research in the field of Industrial Relations. Until now the methodological publications regarding STM mostly focus on the mathematics of the method and provide only a minimal discussion of their implementation. Instead, we provide a practical application of STM and apply this method to one of the most prominent theories in the field of Industrial Relations. Contrary to the original Insider-Outsider arguments, but in line with the current state of research, we show that unions do in fact use topics within their press releases which are relevant for both Insider and Outsider groups.
  •  
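Structural Topic Models are usually fitted with the R package stm; as a rough, hedged stand-in for the overall workflow (build a document-term matrix, fit a topic model, inspect the top terms per topic), here is plain LDA with scikit-learn. The texts and the number of topics are invented purely for illustration:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    press_releases = [
        "wage agreement negotiated for permanent staff",
        "protection against dismissal for temporary agency workers",
        "pension reform affects long term insiders",
        "minimum wage campaign for outsiders in precarious jobs",
    ]

    # Document-term matrix, then a two-topic model over it.
    vectorizer = CountVectorizer()
    dtm = vectorizer.fit_transform(press_releases)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

    terms = vectorizer.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top = [terms[i] for i in topic.argsort()[-4:][::-1]]
        print(f"topic {k}: {', '.join(top)}")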
43.
  • Dobslaw, Felix, 1983, et al. (författare)
  • Boundary Value Exploration for Software Analysis
  • 2020
  • Ingår i: Proceedings - 2020 IEEE 13th International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2020. - : IEEE. ; , s. 346-353
  • Konferensbidrag (refereegranskat)abstract
    • For software to be reliable and resilient, it is widely accepted that tests must be created and maintained alongside the software itself. One safeguard from vulnerabilities and failures in code is to ensure correct behavior on the boundaries between subdomains of the input space. So-called boundary value analysis (BVA) and boundary value testing (BVT) techniques aim to exercise those boundaries and increase test effectiveness. However, the concepts of BVA and BVT themselves are not generally well defined, and it is not clear how to identify relevant sub-domains, and thus the boundaries delineating them, given a specification. This has limited adoption and hindered automation. We clarify BVA and BVT and introduce Boundary Value Exploration (BVE) to describe techniques that support them by helping to detect and identify boundary inputs. Additionally, we propose two concrete BVE techniques based on information-theoretic distance functions: (i) an algorithm for boundary detection and (ii) the usage of software visualization to explore the behavior of the software under test and identify its boundary behavior. As an initial evaluation, we apply these techniques on a much used and well-tested date handling library. Our results reveal questionable behavior at boundaries highlighted by our techniques. In conclusion, we argue that the boundary value exploration that our techniques enable is a step towards automated boundary value analysis and testing, which can foster their wider use and improve test effectiveness and efficiency.
  •  
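A minimal sketch of the detection idea described above, not the paper's algorithm: scan pairs of adjacent inputs and flag those where a small input distance produces a disproportionately large output distance. The program under test and the distance functions are toy stand-ins:

    def days_in_month(month: int) -> int:
        # toy "software under test" (non-leap year)
        return [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]

    def output_distance(a: int, b: int) -> int:
        return abs(a - b)

    candidates = []
    for m in range(1, 12):
        d_in = 1
        d_out = output_distance(days_in_month(m), days_in_month(m + 1))
        if d_out / d_in >= 2:          # spike in the output/input distance ratio
            candidates.append((m, m + 1, d_out))

    # pairs of adjacent months that straddle a behavioural boundary
    print(candidates)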
44.
  • Furia, Carlo A, 1979, et al. (författare)
  • Applying Bayesian Analysis Guidelines to Empirical Software Engineering Data: The Case of Programming Languages and Code Quality
  • 2022
  • Ingår i: ACM Transactions on Software Engineering and Methodology. - : Association for Computing Machinery (ACM). - 1049-331X .- 1557-7392. ; 31:3
  • Tidskriftsartikel (refereegranskat)abstract
    • Statistical analysis is the tool of choice to turn data into information and then information into empirical knowledge. However, the process that goes from data to knowledge is long, uncertain, and riddled with pitfalls. To be valid, it should be supported by detailed, rigorous guidelines that help ferret out issues with the data or model and lead to qualified results that strike a reasonable balance between generality and practical relevance. Such guidelines are being developed by statisticians to support the latest techniques for Bayesian data analysis. In this article, we frame these guidelines in a way that is apt to empirical research in software engineering. To demonstrate the guidelines in practice, we apply them to reanalyze a GitHub dataset about code quality in different programming languages. The dataset's original analysis [Ray et al. 55] and a critical reanalysis [Berger et al. 6] have attracted considerable attention, in no small part because they target a topic (the impact of different programming languages) on which strong opinions abound. The goals of our reanalysis are largely orthogonal to this previous work, as we are concerned with demonstrating, on data in an interesting domain, how to build a principled Bayesian data analysis and to showcase its benefits. In the process, we will also shed light on some critical aspects of the analyzed data and of the relationship between programming languages and code quality, such as the impact of project-specific characteristics other than the used programming language. The high-level conclusions of our exercise will be that Bayesian statistical techniques can be applied to analyze software engineering data in a way that is principled, flexible, and leads to convincing results that inform the state of the art while highlighting the boundaries of its validity. The guidelines can support building solid statistical analyses and connecting their results. Thus, they can help buttress continued progress in empirical software engineering research.
  •  
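As a deliberately minimal illustration of Bayesian reasoning on this kind of data, and not the hierarchical model developed in the paper, one can compare the defect proportions of two languages with a conjugate Beta-Binomial posterior; the commit counts below are invented:

    import numpy as np

    rng = np.random.default_rng(1)

    # invented data: (bug-fix commits, total commits) per language
    observed = {"lang_a": (120, 1000), "lang_b": (95, 1000)}

    posteriors = {}
    for lang, (bugs, total) in observed.items():
        # Beta(1, 1) prior + Binomial likelihood -> Beta posterior, sampled directly
        posteriors[lang] = rng.beta(1 + bugs, 1 + total - bugs, size=20_000)

    prob_a_buggier = np.mean(posteriors["lang_a"] > posteriors["lang_b"])
    print(f"P(defect rate A > defect rate B | data) ~ {prob_a_buggier:.2f}")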
45.
  • Furia, Carlo A, 1979, et al. (författare)
  • Bayesian Data Analysis in Empirical Software Engineering Research
  • 2021
  • Ingår i: IEEE Transactions on Software Engineering. - 0098-5589 .- 1939-3520. ; 47:9, s. 1786-1810
  • Tidskriftsartikel (refereegranskat)abstract
    • Statistics comes in two main flavors: frequentist and Bayesian. For historical and technical reasons, frequentist statistics have traditionally dominated empirical data analysis, and certainly remain prevalent in empirical software engineering. This situation is unfortunate because frequentist statistics suffer from a number of shortcomings, such as lack of flexibility and results that are unintuitive and hard to interpret, that curtail their effectiveness when dealing with the heterogeneous data that is increasingly available for empirical analysis of software engineering practice. In this paper, we pinpoint these shortcomings, and present Bayesian data analysis techniques that provide tangible benefits, as they can provide clearer results that are simultaneously robust and nuanced. After a short, high-level introduction to the basic tools of Bayesian statistics, we present the reanalysis of two empirical studies on the effectiveness of automatically generated tests and the performance of programming languages, respectively. By contrasting the original frequentist analyses with our new Bayesian analyses, we demonstrate the concrete advantages of the latter. To conclude we advocate a more prominent role for Bayesian statistical techniques in empirical software engineering research and practice.
  •  
46.
  • Gerdes, Alex, 1978, et al. (författare)
  • Understanding formal specifications through good examples
  • 2018
  • Ingår i: Erlang 2018 - Proceedings of the 17th ACM SIGPLAN International Workshop on Erlang, co-located with ICFP 2018. - New York, NY, USA : ACM. ; , s. 13-24
  • Konferensbidrag (refereegranskat)abstract
    • Formal specifications of software applications are hard to understand, even for domain experts. Because a formal specification is abstract, reading it does not immediately convey the expected behaviour of the software. Carefully chosen examples of the software’s behaviour, on the other hand, are concrete and easy to understand—but poorly-chosen examples are more confusing than helpful. In order to understand formal specifications, software developers need good examples. We have created a method that automatically derives a suite of good examples from a formal specification. Each example is judged by our method to illustrate one feature of the specification. The generated examples give users a good understanding of the behaviour of the software. We evaluated our method by measuring how well students understood an API when given different sets of examples; the students given our examples showed significantly better understanding.
  •  
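The paper above derives examples from QuickCheck-style specifications in Erlang. As a hedged analogue in Python, the hypothesis library can search for a small concrete input that exhibits one particular feature of a specification; the property below (deleting one occurrence does not remove duplicates) is a classic illustration, not taken from the paper:

    from hypothesis import find, strategies as st

    def delete_first(x, xs):
        # removes only the first occurrence of x
        xs = list(xs)
        xs.remove(x)
        return xs

    # Find a minimal list showing that x can still be present after deletion.
    example = find(
        st.lists(st.integers(), min_size=1),
        lambda xs: xs[0] in delete_first(xs[0], xs),
    )
    print(example)  # a small illustrative input, e.g. [0, 0]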
47.
  • Hebig, Regina, 1984, et al. (författare)
  • How do students experience and judge software comprehension techniques?
  • 2020
  • Ingår i: IEEE International Conference on Program Comprehension. - New York, NY, USA : ACM. ; , s. 425-435
  • Konferensbidrag (refereegranskat)abstract
    • Today, there is a wide range of techniques to support software comprehension. However, we do not yet fully understand what techniques really help novices to comprehend a software system. In this paper, we present a master-level project course on software evolution, which has a large focus on software comprehension. We collected data about students' experience with diverse comprehension techniques during focus group discussions over the course of two years. Our results indicate that systematic code reading can be supported by additional techniques to guide reading efforts. Most techniques are considered valuable for gaining an overview, and some techniques are judged to be helpful only in later stages of software comprehension efforts.
  •  
48.
  • Holtmann, Jörg, 1979, et al. (författare)
  • Exploiting Meta-Model Structures in the Generation of Xtext Editors
  • 2023
  • Ingår i: Proceedings of the 11th International Conference on Model-Based Software and Systems Engineering. - Lisbon, Portugal : SCITEPRESS - Science and Technology Publications. - 9789897586330 ; 1, s. 218-225
  • Konferensbidrag (refereegranskat)abstract
    • When generating textual editors for large and highly structured meta-models, it is possible to extend Xtext’s generator capabilities and the default implementations it provides. These extensions provide additional features such as formatters and more precise scoping for cross-references. However, for large metamodels in particular, the realization of such extensions typically is a time-consuming, awkward, and repetitive task. For some of these tasks, we motivate, present, and discuss in this position paper automatic solutions that exploit the structure of the underlying metamodel. Furthermore, we demonstrate how we used them in the development of a textual editor for EATXT, a textual concrete syntax for the automotive architecture description language EAST-ADL. This work in progress contributes to our larger goal of building a language workbench for blended modelling.
  •  
49.
  • Hovey, Gary, 1955, et al. (författare)
  • A framework for RFI simulation and performance verification
  • 2019
  • Ingår i: RFI 2019 - Proceedings of 2019 Radio Frequency Interference: Coexisting with Radio Frequency Interference.
  • Konferensbidrag (refereegranskat)abstract
    • Modern radio telescopes, like the proposed Square Kilometre Array (SKA), are extremely sensitive, and the faint signals they receive can easily be contaminated irreversibly by stray radio frequency interference (RFI). Understanding how radio telescope performance is degraded by RFI is important. In this paper we describe an RFI simulation framework that can be used to generate test stimuli and verify a telescope's performance. The framework can be used during design to investigate the impact of various RFI scenarios and develop mitigation strategies. It can also be used to exercise and test hardware and firmware after a system is installed. A prototype of the framework was implemented in the Python programming language to demonstrate the key concepts. We outline the framework requirements, describe a suitable software structure and discuss the prototype implementation. We also present measurements made to verify that the software generates correct test stimuli for RFI from aircraft distance measuring equipment (DME). The work described was carried out to evaluate the impact of RFI on the Square Kilometre Array, an international effort to build the largest, most sensitive radio telescope.
  •  
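A toy stimulus generator in the spirit of the framework described above: a DME interrogation is approximated as a pair of Gaussian pulses. The 12 microsecond pair spacing and 3.5 microsecond pulse width are nominal DME figures used here for illustration and are not taken from the paper:

    import numpy as np

    fs = 10e6                                  # sample rate [Hz]
    t = np.arange(0, 100e-6, 1 / fs)           # 100 microsecond window

    def gaussian_pulse(t, centre, width=3.5e-6):
        sigma = width / 2.355                  # half-amplitude width -> std deviation
        return np.exp(-0.5 * ((t - centre) / sigma) ** 2)

    def dme_pulse_pair(t, start, spacing=12e-6):
        return gaussian_pulse(t, start) + gaussian_pulse(t, start + spacing)

    stimulus = dme_pulse_pair(t, start=30e-6)
    print(stimulus.max(), stimulus.argmax() / fs)  # peak amplitude and its time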
50.
  • John, Stefan, et al. (författare)
  • Searching for optimal models: Comparing two encoding approaches
  • 2019
  • Ingår i: Journal of Object Technology. - : AITO - Association Internationale pour les Technologies Objets. - 1660-1769. ; 18:3, s. 1-22
  • Tidskriftsartikel (refereegranskat)abstract
    • Search-Based Software Engineering (SBSE) is about solving software development problems by formulating them as optimization problems. In recent years, combining SBSE and Model-Driven Engineering (MDE), where models and model transformations are treated as key artifacts in the development of complex systems, has become increasingly popular. While search-based techniques have often been applied successfully to tackle MDE problems, a recent line of research investigates how a model-driven design can make optimization more easily accessible to a wider audience. In previous model-driven optimization efforts, a major design decision concerns the way in which solutions are encoded. Two main options have been explored: a model-based encoding representing candidate solutions as models, and a rule-based encoding representing them as sequences of transformation rule applications. While both encodings have been applied to different use cases, no study has yet compared them systematically. To close this gap, we evaluate both approaches on a common set of optimization problems, investigating their impact on the optimization performance. Additionally, we discuss their differences, strengths, and weaknesses, laying the foundation for a knowledgeable choice of the right encoding for the right problem.
  •  
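A hedged illustration of the two encodings being compared, using a toy assignment problem rather than the paper's case studies: the model-based genome is the candidate model itself, while the rule-based genome is a sequence of rule applications replayed on an initial model to obtain the candidate:

    import random

    FEATURES = ["login", "cart", "payment", "search"]

    # model-based encoding: the genome is directly a mapping feature -> component
    model_genome = {f: random.choice(["A", "B"]) for f in FEATURES}

    # rule-based encoding: the genome is a sequence of rule applications,
    # decoded by replaying them on an empty assignment
    rule_genome = [("assign", f, random.choice(["A", "B"])) for f in FEATURES]

    def decode(rules):
        model = {}
        for rule, feature, component in rules:
            if rule == "assign":
                model[feature] = component
        return model

    print(model_genome)
    print(decode(rule_genome))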
Typ av publikation
konferensbidrag (2546)
tidskriftsartikel (1316)
licentiatavhandling (180)
rapport (176)
bokkapitel (157)
doktorsavhandling (145)
forskningsöversikt (72)
annan publikation (63)
proceedings (redaktörskap) (51)
bok (32)
samlingsverk (redaktörskap) (27)
konstnärligt arbete (2)
patent (1)
Typ av innehåll
refereegranskat (3836)
övrigt vetenskapligt/konstnärligt (911)
populärvet., debatt m.m. (17)
Författare/redaktör
Wohlin, Claes (186)
Bosch, Jan, 1967 (184)
Staron, Miroslaw, 19 ... (130)
Weyns, Danny (126)
Petersen, Kai (117)
Runeson, Per (100)
Knauss, Eric, 1977 (93)
Šmite, Darja (87)
Gorschek, Tony (85)
Feldt, Robert, 1972 (84)
Gorschek, Tony, 1972 ... (78)
Bosch, Jan (76)
Feldt, Robert (73)
Berger, Christian, 1 ... (72)
Olsson, Helena Holms ... (69)
Mendes, Emilia (67)
Börstler, Jürgen (66)
Unterkalmsteiner, Mi ... (65)
Wnuk, Krzysztof, 198 ... (64)
Horkoff, Jennifer, 1 ... (61)
Berger, Thorsten, 19 ... (57)
Mendez, Daniel (55)
Torkar, Richard, 197 ... (54)
Borg, Markus (54)
Fricker, Samuel (52)
Herold, Sebastian (51)
Svahnberg, Mikael (45)
Pelliccione, Patrizi ... (44)
Felderer, Michael, 1 ... (44)
Chaudron, Michel, 19 ... (44)
Pelliccione, Patrizi ... (44)
Leitner, Philipp, 19 ... (43)
Torkar, Richard (42)
Steghöfer, Jan-Phili ... (42)
Gren, Lucas, 1984 (42)
Heldal, Rogardt, 196 ... (42)
Börstler, Jürgen, 19 ... (42)
Lundberg, Lars (41)
Afzal, Wasif (40)
Lundell, Björn (39)
Gonzalez-Huerta, Jav ... (38)
Regnell, Björn (38)
Höst, Martin (37)
Wnuk, Krzysztof (37)
Alégroth, Emil, 1984 ... (36)
Mattsson, Michael (35)
Tichy, Matthias, 197 ... (34)
Hebig, Regina (33)
Engström, Emelie (33)
Crnkovic, Ivica, 195 ... (33)
Lärosäte
Chalmers tekniska högskola (1505)
Blekinge Tekniska Högskola (1475)
Göteborgs universitet (794)
Lunds universitet (302)
Linnéuniversitetet (291)
Kungliga Tekniska Högskolan (265)
Uppsala universitet (202)
Mälardalens universitet (197)
Karlstads universitet (143)
Linköpings universitet (119)
RISE (118)
Malmö universitet (103)
Högskolan i Skövde (97)
Umeå universitet (86)
Örebro universitet (40)
Högskolan i Halmstad (29)
Luleå tekniska universitet (28)
Stockholms universitet (25)
Jönköping University (24)
Högskolan Kristianstad (15)
Mittuniversitetet (13)
Högskolan i Borås (12)
Karolinska Institutet (12)
Högskolan Väst (10)
Handelshögskolan i Stockholm (9)
Sveriges Lantbruksuniversitet (8)
Södertörns högskola (7)
Högskolan Dalarna (3)
VTI - Statens väg- och transportforskningsinstitut (3)
Högskolan i Gävle (2)
IVL Svenska Miljöinstitutet (2)
Stockholms konstnärliga högskola (1)
Språk
Engelska (4732)
Svenska (27)
Tyska (5)
Odefinierat språk (1)
Kinesiska (1)
Mongoliskt språk (1)
Forskningsämne (UKÄ/SCB)
Naturvetenskap (4767)
Teknik (737)
Samhällsvetenskap (367)
Medicin och hälsovetenskap (39)
Humaniora (27)
Lantbruksvetenskap (8)
