SwePub
Search the SwePub database

  Advanced search

Result list for the search "AMNE:(NATURVETENSKAP Data- och informationsvetenskap Programvaruteknik)"

Search: AMNE:(NATURVETENSKAP Data- och informationsvetenskap Programvaruteknik)

  • Results 1-50 of 4878
1.
  •  
2.
  • Chatterjee, Bapi, 1982 (author)
  • Lock-free Concurrent Search
  • 2017
  • Doctoral thesis (other academic/artistic), abstract:
    • Contemporary computers typically consist of multiple computing cores with high compute power. Such computers make excellent concurrent asynchronous shared memory systems. On the other hand, although many celebrated books on data structures and algorithms provide a comprehensive study of sequential search data structures, we unfortunately have no such luxury once concurrency enters the setting. The present dissertation aims to address this paucity. We describe novel lock-free algorithms for concurrent data structures that target a variety of search problems. (i) Point search (membership query, predecessor query, nearest neighbour query) for 1-dimensional data: lock-free linked-list; lock-free internal and external binary search trees (BST). (ii) Range search for 1-dimensional data: a range search method for lock-free ordered set data structures - linked-list, skip-list and BST. (iii) Point search for multi-dimensional data: lock-free kD-tree, in particular a generic method for nearest neighbour search. We prove that the presented algorithms are linearizable, i.e., the concurrent data structure operations intuitively display their sequential behaviour to an observer of the concurrent system. The lock-freedom of the introduced algorithms guarantees overall progress in an asynchronous shared memory system. We present an amortized analysis of the lock-free data structures to show their efficiency. Moreover, we provide sample implementations of the algorithms and test them over extensive micro-benchmarks. Our experiments demonstrate that the implementations are scalable and perform well when compared to related existing alternative implementations on common multi-core computers. Our focus is on propounding generic methodologies for efficient lock-free concurrent search. In this direction, we present the notion of help-optimality, which captures the optimization of the amortized step complexity of the operations. In addition, we explore the language-portable design of lock-free data structures, which aims to simplify an implementation from the programmer's point of view. Finally, our techniques to implement lock-free linearizable range search and nearest neighbour search are independent of the underlying data structures and are thus adaptable to similar data structures.
  •  
3.
  • Liu, Yuanhua, 1971, et al. (authors)
  • Considering the importance of user profiles in interface design
  • 2009
  • In: User Interfaces. ; , pp. 23-
  • Book chapter (other academic/artistic), abstract:
    • User profile is a popular term widely employed during product design processes by industrial companies. Such a profile is normally intended to represent real users of a product. The ultimate purpose of a user profile is to help designers recognize or learn about the real user by presenting them with a description of a real user's attributes, for instance the user's gender, age, educational level, attitude, technical needs and skill level. The aim of this chapter is to provide information on the current knowledge and research about user profile issues, as well as to emphasize the importance of considering these issues in interface design. In this chapter, we mainly focus on how users' differences in expertise affect their performance or activity in various interaction contexts. Considering the complex interaction situations in practice, novice and expert users' interactions with medical user interfaces of different technical complexity will be analyzed as examples: one focuses on novice and expert users' differences when interacting with simple medical interfaces, and the other focuses on differences when interacting with complex medical interfaces. Four issues will be analyzed and discussed: (1) how novice and expert users differ in terms of performance during the interaction; (2) how novice and expert users differ in terms of cognitive mental models during the interaction; (3) how novice and expert users should be defined in practice; and (4) what the main differences between novice and expert users imply for interface design. Besides describing the effect of users' expertise differences during the interface design process, we will also pinpoint some potential problems for research on interface design, as well as some future challenges that academic researchers and industrial engineers should face in practice.
  •  
4.
  • Blanch, Krister, 1991 (author)
  • Beyond-application datasets and automated fair benchmarking
  • 2023
  • Licentiate thesis (other academic/artistic), abstract:
    • Beyond-application perception datasets are generalised datasets that emphasise the fundamental components of good machine perception data. When analysing the history of perception datasets, notable trends suggest that the design of a dataset typically aligns with an application goal. Instead of focusing on a specific application, beyond-application datasets instead look at capturing high-quality, high-volume data from a highly kinematic environment, for the purpose of aiding algorithm development and testing in general. Algorithm benchmarking is a cornerstone of autonomous systems development, and allows developers to demonstrate their results in a comparative manner. However, most benchmarking systems allow developers to use their own hardware or select favourable data. There is also little focus on run-time performance and consistency, with benchmarking systems instead showcasing algorithm accuracy. By combining both beyond-application dataset generation and methods for fair benchmarking, there is also the dilemma of how to provide the dataset to developers for this benchmarking, as the result of high-volume, high-quality dataset generation is a significant increase in dataset size when compared to traditional perception datasets. This thesis presents the first results of attempting the creation of such a dataset. The dataset was built using a maritime platform, selected due to the highly dynamic environment presented on water. The design and initial testing of this platform are detailed, as well as methods of sensor validation. Continuing, the thesis then presents a method of fair benchmarking, by utilising remote containerisation in a way that allows developers to present their software to the dataset, instead of having to first locally store a copy. To test this dataset and automatic online benchmarking, a number of reference algorithms were required for initial results. Three algorithms were built, using the data from three different sensors captured on the maritime platform. Each algorithm calculates vessel odometry, and the automatic benchmarking system was utilised to show the accuracy and run-time performance of these algorithms. It was found that the containerised approach alleviated data management concerns, prevented inflated accuracy results, and demonstrated precisely how computationally intensive each algorithm was.
  •  
5.
  • Bayram, Firas, et al. (authors)
  • DQSOps : Data Quality Scoring Operations Framework for Data-Driven Applications
  • 2023
  • In: EASE '23: Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering. - : Association for Computing Machinery (ACM). - 9798400700446 ; , pp. 32-41
  • Conference paper (peer-reviewed), abstract:
    • Data quality assessment has become a prominent component in the successful execution of complex data-driven artificial intelligence (AI) software systems. In practice, real-world applications generate huge volumes of data at high velocity. These data streams require analysis and preprocessing before being permanently stored or used in a learning task. Therefore, significant attention has been paid to the systematic management and construction of high-quality datasets. Nevertheless, managing voluminous and high-velocity data streams is usually performed manually (i.e. offline), making it an impractical strategy in production environments. To address this challenge, DataOps has emerged to achieve life-cycle automation of data processes using DevOps principles. However, determining the data quality based on a fitness scale constitutes a complex task within the framework of DataOps. This paper presents a novel Data Quality Scoring Operations (DQSOps) framework that yields a quality score for production data in DataOps workflows. The framework incorporates two scoring approaches: an ML prediction-based approach that predicts the data quality score, and a standard-based approach that periodically produces the ground-truth scores based on assessing several data quality dimensions. We deploy the DQSOps framework in a real-world industrial use case. The results show that DQSOps achieves significant computational speedup rates compared to the conventional approach of data quality scoring while maintaining high prediction performance.
  •  
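The DQSOps record above pairs a slow, standard-based scorer that derives ground-truth quality scores from several data quality dimensions with a fast ML predictor that approximates those scores in production. The following is a minimal sketch of that two-path idea, not the authors' implementation; the quality dimensions, the batch features, and the RandomForestRegressor surrogate are illustrative assumptions.

    # Hypothetical sketch of a two-path data quality scorer (not the DQSOps code).
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    def standard_based_score(df: pd.DataFrame) -> float:
        """Ground-truth style score from simple, illustrative quality dimensions."""
        completeness = 1.0 - df.isna().mean().mean()          # share of non-missing cells
        uniqueness = df.drop_duplicates().shape[0] / len(df)  # share of unique rows
        validity = (df.select_dtypes("number") >= 0).mean().mean()  # toy validity rule
        return float(np.mean([completeness, uniqueness, validity]))

    # Train a cheap ML surrogate on batches that were scored the slow, standard-based way.
    rng = np.random.default_rng(0)
    batches = [pd.DataFrame(rng.normal(size=(200, 5))).mask(rng.random((200, 5)) < p)
               for p in (0.0, 0.05, 0.1, 0.2)]
    features = np.array([[b.isna().mean().mean(), len(b)] for b in batches])
    targets = np.array([standard_based_score(b) for b in batches])
    surrogate = RandomForestRegressor(n_estimators=50, random_state=0).fit(features, targets)

    # Score a new production batch with the fast path only.
    new_batch = pd.DataFrame(rng.normal(size=(200, 5))).mask(rng.random((200, 5)) < 0.15)
    predicted = surrogate.predict([[new_batch.isna().mean().mean(), len(new_batch)]])[0]
    print(f"predicted quality score: {predicted:.3f}")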
6.
  • Nilsson, R. Henrik, 1976, et al. (authors)
  • Mycobiome diversity: high-throughput sequencing and identification of fungi.
  • 2019
  • In: Nature reviews. Microbiology. - : Springer Science and Business Media LLC. - 1740-1534 .- 1740-1526. ; 17, pp. 95-109
  • Research review (peer-reviewed), abstract:
    • Fungi are major ecological players in both terrestrial and aquatic environments by cycling organic matter and channelling nutrients across trophic levels. High-throughput sequencing (HTS) studies of fungal communities are redrawing the map of the fungal kingdom by hinting at its enormous - and largely uncharted - taxonomic and functional diversity. However, HTS approaches come with a range of pitfalls and potential biases, cautioning against unwary application and interpretation of HTS technologies and results. In this Review, we provide an overview and practical recommendations for aspects of HTS studies ranging from sampling and laboratory practices to data processing and analysis. We also discuss upcoming trends and techniques in the field and summarize recent and noteworthy results from HTS studies targeting fungal communities and guilds. Our Review highlights the need for reproducibility and public data availability in the study of fungal communities. If the associated challenges and conceptual barriers are overcome, HTS offers immense possibilities in mycology and elsewhere.
  •  
7.
  • Wilhelmsson, Kenneth (author)
  • Automatic Question Generation from Swedish Documents as a Tool for Information Extraction
  • 2011
  • In: Proceedings of the 18th Nordic Conference of Computational Linguistics NODALIDA 2011. ; , pp. 323-326
  • Conference paper (peer-reviewed), abstract:
    • An implementation of automatic question generation (QG) from raw Swedish text is presented. QG is here chosen as an alternative to natural query systems, where any query can be posed and no indication is given of whether the current text database includes the information sought. The program builds on parsing with grammatical functions, from which corresponding questions are generated, and it incorporates the article database of Swedish Wikipedia. The pilot system is meant to work with a text shown in the GUI and auto-completes user input to help find available questions. The act of question generation is described here together with early test results regarding the currently produced questions.
  •  
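The record above derives questions from a grammatical-function parse of raw Swedish text. Purely as an illustration of the final generation step (the described system performs its own parsing of Swedish and interfaces with Swedish Wikipedia), the sketch below turns already-parsed subject/verb/object triples into candidate questions; the triples and question templates are invented for the example.

    # Illustrative only: turn pre-parsed (subject, verb, object) triples into questions.
    # The system described above parses raw Swedish text; here the parse is simply given.
    from typing import List, Tuple

    def generate_questions(triples: List[Tuple[str, str, str]]) -> List[str]:
        questions = []
        for subj, verb, obj in triples:
            # Ask about the subject.
            questions.append(f"Who or what {verb}s {obj}?")
            # Ask about the object, keeping the subject.
            questions.append(f"What does {subj} {verb}?")
        return questions

    triples = [("the parser", "produce", "grammatical functions"),
               ("the system", "use", "the Swedish Wikipedia article database")]
    for q in generate_questions(triples):
        print(q)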
8.
  • Lu, Zhihan, et al. (authors)
  • Multimodal Hand and Foot Gesture Interaction for Handheld Devices
  • 2014
  • In: ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP). - : Association for Computing Machinery (ACM). - 1551-6857 .- 1551-6865. ; 11:1
  • Journal article (peer-reviewed), abstract:
    • We present a hand-and-foot-based multimodal interaction approach for handheld devices. Our method combines input modalities (i.e., hand and foot) and provides a coordinated output to both modalities along with audio and video. Human foot gestures are detected and tracked using contour-based template detection (CTD) and the Tracking-Learning-Detection (TLD) algorithm. 3D foot pose is estimated from the passive homography matrix of the camera. 3D stereoscopic rendering and vibrotactile feedback are used to enhance the immersive feeling. We developed a multimodal football game based on this approach as a proof of concept. We confirm our system's user satisfaction through a user study.
  •  
9.
  • Bergström, Gustav, et al. (authors)
  • Evaluating the layout quality of UML class diagrams using machine learning
  • 2022
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212. ; 192
  • Journal article (peer-reviewed), abstract:
    • UML is the de facto standard notation for graphically representing software. UML diagrams are used in the analysis, construction, and maintenance of software systems. Mostly, UML diagrams capture an abstract view of a (piece of a) software system. A key purpose of UML diagrams is to share knowledge about the system among developers. The quality of the layout of UML diagrams plays a crucial role in their comprehension. In this paper, we present an automated method for evaluating the layout quality of UML class diagrams. We use machine learning based on features extracted from the class diagram images using image processing. Such an automated evaluator has several uses: (1) From an industrial perspective, this tool could be used for automated quality assurance for class diagrams (e.g., as part of a quality monitor integrated into a DevOps toolchain). For example, automated feedback can be generated once a UML diagram is checked in the project repository. (2) In an educational setting, the evaluator can grade the layout aspect of student assignments in courses on software modeling, analysis, and design. (3) In the field of algorithm design for graph layouts, our evaluator can assess the layouts generated by such algorithms. In this way, this evaluator opens up the road for using machine learning to learn good layout algorithms. Approach: We use machine learning techniques to build (linear) regression models based on features extracted from the class diagram images using image processing. As ground truth, we use a dataset of 600+ UML class diagrams for which experts manually labeled the quality of the layout. Contributions: This paper makes the following contributions: (1) We show the feasibility of the automatic evaluation of the layout quality of UML class diagrams. (2) We analyze which features of UML class diagrams are most strongly related to the quality of their layout. (3) We evaluate the performance of our layout evaluator. (4) We offer a dataset of labeled UML class diagrams. In this dataset, we supply for every diagram the following information: (a) a manually established ground truth of the quality of the layout, (b) an automatically established value for the layout quality of the diagram (produced by our classifier), and (c) the values of key features of the layout of the diagram (obtained by image processing). This dataset can be used for replication of our study and by others to build on and improve this work. Editor's note: Open Science material was validated by the Journal of Systems and Software Open Science Board.
  •  
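The record above fits (linear) regression models to image-derived layout features against expert-assigned quality labels. The sketch below shows the general shape of such a pipeline on synthetic data; the four features and the label rule are invented stand-ins, not the paper's feature set or its 600+ diagram dataset.

    # Hypothetical sketch: regress a layout-quality label on image-derived features.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(42)
    n_diagrams = 200
    # Invented stand-ins for features extracted from class diagram images:
    # [edge crossings, edge bends, node overlap area, share of drawing area used]
    X = rng.random((n_diagrams, 4))
    # Invented ground truth: experts dislike crossings and overlaps (plus noise).
    y = 1.0 - 0.5 * X[:, 0] - 0.3 * X[:, 2] + 0.05 * rng.normal(size=n_diagrams)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LinearRegression().fit(X_train, y_train)
    print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
    print("feature weights:", model.coef_)  # which layout features matter most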
10.
  • Abbas, Nadeem, 1980-, et al. (authors)
  • ASPLe : a methodology to develop self-adaptive software systems with reuse
  • 2017
  • Report (other academic/artistic), abstract:
    • Advances in computing technologies are pushing software systems and their operating environments to become more dynamic and complex. The growing complexity of software systems coupled with uncertainties induced by runtime variations leads to challenges in software analysis and design. Self-Adaptive Software Systems (SASS) have been proposed as a solution to address design time complexity and uncertainty by adapting software systems at runtime. A vast body of knowledge on engineering self-adaptive software systems has been established. However, to the best of our knowledge, no or little work has considered systematic reuse of this knowledge. To that end, this study contributes an Autonomic Software Product Lines engineering (ASPLe) methodology. The ASPLe is based on a multi-product lines strategy which leverages systematic reuse through separation of application and adaptation logic. It provides developers with repeatable process support to design and develop self-adaptive software systems with reuse across several application domains. The methodology is composed of three core processes, and each process is organized for requirements, design, implementation, and testing activities. To exemplify and demonstrate the use of the ASPLe methodology, three application domains are used as running examples throughout the report.
  •  
11.
  • Abbas, Nadeem, 1980- (author)
  • Designing Self-Adaptive Software Systems with Reuse
  • 2018
  • Doctoral thesis (other academic/artistic), abstract:
    • Modern software systems are increasingly more connected, pervasive, and dynamic; as such, they are subject to more runtime variations than legacy systems. Runtime variations affect system properties, such as performance and availability. The variations are difficult to anticipate and thus mitigate in the system design. Self-adaptive software systems were proposed as a solution to monitor and adapt systems in response to runtime variations. Research has established a vast body of knowledge on engineering self-adaptive systems. However, there is a lack of systematic process support that leverages such engineering knowledge and provides for systematic reuse for self-adaptive systems development. This thesis proposes the Autonomic Software Product Lines (ASPL), which is a strategy for developing self-adaptive software systems with systematic reuse. The strategy exploits the separation of a managed and a managing subsystem and describes three steps that transform and integrate a domain-independent managing system platform into a domain-specific software product line for self-adaptive software systems. Applying the ASPL strategy is however not straightforward as it involves challenges related to variability and uncertainty. We analyzed variability and uncertainty to understand their causes and effects. Based on the results, we developed the Autonomic Software Product Lines engineering (ASPLe) methodology, which provides process support for the ASPL strategy. The ASPLe has three processes: 1) ASPL Domain Engineering, 2) Specialization, and 3) Integration. Each process maps to one of the steps in the ASPL strategy and defines roles, work products, activities, and workflows for requirements, design, implementation, and testing. The focus of this thesis is on requirements and design. We validate the ASPLe through demonstration and evaluation. We developed three demonstrator product lines using the ASPLe. We also conducted an extensive case study to evaluate key design activities in the ASPLe with experiments, questionnaires, and interviews. The results show a statistically significant increase in quality and reuse levels for self-adaptive software systems designed using the ASPLe compared to current engineering practices.
  •  
12.
  • Mallozzi, Piergiuseppe, 1990, et al. (authors)
  • A runtime monitoring framework to enforce invariants on reinforcement learning agents exploring complex environments
  • 2019
  • In: RoSE 2019, IEEE/ACM 2nd International Workshop on Robotics Software Engineering, pp. 5-12. - : IEEE. - 9781728122496
  • Conference paper (peer-reviewed), abstract:
    • Without prior knowledge of the environment, a software agent can learn to achieve a goal using machine learning. Model-free Reinforcement Learning (RL) can be used to make the agent explore the environment and learn to achieve its goal by trial and error. Discovering effective policies to achieve the goal in a complex environment is a major challenge for RL. Furthermore, in safety-critical applications, such as robotics, an unsafe action may cause catastrophic consequences in the agent or in the environment. In this paper, we present an approach that uses runtime monitoring to prevent the reinforcement learning agent from performing 'wrong' actions and to exploit prior knowledge to explore the environment smartly. Each monitor is defined by a property that we want to enforce on the agent and a context. The monitors are orchestrated by a meta-monitor that activates and deactivates them dynamically according to the context in which the agent is learning. We have evaluated our approach by training the agent in randomly generated learning environments. Our results show that our approach blocks the agent from performing dangerous and safety-critical actions in all the generated environments. Besides, our approach helps the agent to achieve its goal faster by providing feedback and shaping its reward during learning.
  •  
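The record above wraps the learning agent with monitors, each defined by a property to enforce and a context, plus a meta-monitor that activates monitors by context. The sketch below is a generic, hand-rolled rendering of that structure with an invented toy environment; it is not the framework from the paper.

    # Generic sketch of runtime monitors guarding an RL agent's actions (invented environment).
    import random

    class Monitor:
        def __init__(self, name, context, is_allowed):
            self.name, self.context, self.is_allowed = name, context, is_allowed

    def meta_monitor(monitors, state):
        """Activate only the monitors whose context matches the current state."""
        return [m for m in monitors if m.context(state)]

    def safe_step(state, proposed_action, monitors):
        for m in meta_monitor(monitors, state):
            if not m.is_allowed(state, proposed_action):
                return "noop"          # block the unsafe action before execution
        return proposed_action

    # Toy invariant: never move left when standing at the edge (x == 0).
    near_edge = Monitor("no-fall", context=lambda s: s["x"] == 0,
                        is_allowed=lambda s, a: a != "left")
    state = {"x": 0}
    for _ in range(3):
        action = random.choice(["left", "right", "up"])
        print(action, "->", safe_step(state, action, [near_edge]))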
13.
  • Falkman, Göran, 1968-, et al. (authors)
  • SOMWeb - Towards an Infrastructure for Knowledge Sharing in Oral Medicine
  • 2005
  • In: Connecting Medical Informatics and Bio-Informatics: Proceedings of MIE2005 - The XIXth International Congress of the European Federation for Medical Informatics. - Amsterdam : IOS Press. - 1586035495 ; 116, pp. 527-532
  • Conference paper (peer-reviewed), abstract:
    • In a net-based society, clinicians can come together for cooperative work and distance learning around a common medical material. This requires suitable techniques for cooperative knowledge management and user interfaces that are adapted to both the group as a whole and to individuals. To support distributed management and sharing of clinical knowledge, we propose the development of an intelligent web community for clinicians within oral medicine. This virtual meeting place will support the ongoing work on developing a digital knowledge base, providing a foundation for a more evidence-based oral medicine. The presented system is founded on the use and development of web services and standards for knowledge modelling and knowledge-based systems. The work is conducted within the frame of a well-established cooperation between oral medicine and computer science.
  •  
14.
  • Chatterjee, Bapi, 1982 (author)
  • Efficient Implementation of Concurrent Data Structures on Multi-core and Many-core Architectures
  • 2015
  • Licentiate thesis (other academic/artistic), abstract:
    • Synchronization of concurrent threads is the central problem in designing efficient concurrent data structures. The compute systems widely available on the market are increasingly becoming heterogeneous, involving multi-core Central Processing Units (CPUs) and many-core Graphics Processing Units (GPUs). This thesis contributes to the research on efficient synchronization in concurrent data structures in more than one way. It is divided into two parts. In the first part, a novel design of a Set Abstract Data Type (ADT) based on an efficient lock-free Binary Search Tree (BST) with improved amortized bounds on the time complexity of the set operations - Add, Remove and Contains - is presented. In the second part, a comprehensive evaluation of concurrent Queue implementations on multi-core CPUs as well as many-core GPUs is presented. Efficient lock-free BST - To the best of our knowledge, the lock-free BST presented in this thesis is the first to achieve an amortized complexity of O(H(n)+c) for all Set operations, where H(n) is the height of a BST on n nodes and c is the contention measure. Also, the presented lock-free BST algorithm comes with improved disjoint-access parallelism compared to previously existing concurrent BST algorithms. This algorithm uses single-word compare-and-swap (CAS) primitives. The presented algorithm is linearizable. We implemented the algorithm in Java and it shows good scalability. Evaluation of concurrent data structures - We have evaluated the performance of a number of concurrent FIFO Queue algorithms on multi-core CPUs and many-core GPUs. We studied the portability of existing concurrent Queue designs from CPUs to GPUs, which are inherently designed for SIMD programs. We observed that, in general, concurrent queues lend themselves to efficient implementation on GPUs with faster cache memory and better performance support for atomic synchronization primitives such as CAS. To the best of our knowledge, this is the first attempt to evaluate a concurrent data structure on GPUs.
  •  
15.
  • Aramrattana, Maytheewat, 1988-, et al. (authors)
  • Team Halmstad Approach to Cooperative Driving in the Grand Cooperative Driving Challenge 2016
  • 2018
  • In: IEEE transactions on intelligent transportation systems (Print). - Piscataway, N.J. : Institute of Electrical and Electronics Engineers Inc. - 1524-9050 .- 1558-0016. ; 19:4, pp. 1248-1261
  • Journal article (peer-reviewed), abstract:
    • This paper is an experience report of team Halmstad from the participation in a competition organised by the i-GAME project, the Grand Cooperative Driving Challenge 2016. The competition was held in Helmond, The Netherlands, during the last weekend of May 2016. We give an overview of our car's control and communication system that was developed for the competition following the requirements and specifications of the i-GAME project. In particular, we describe our implementation of cooperative adaptive cruise control, our solution to the communication and logging requirements, as well as the high-level decision-making support. For the actual competition we did not manage to completely reach all of the goals set out by the organizers as well as ourselves. However, this did not prevent us from outperforming the competition. Moreover, the competition allowed us to collect data for further evaluation of our solutions to cooperative driving. Thus, we discuss what we believe were the strong points of our system, and discuss post-competition evaluation of the developments that were not fully integrated into our system during competition time.
  •  
16.
  • Bainomugisha, Engineer, et al. (authors)
  • Message from Chairs of SEiA 2018
  • 2018
  • In: Proceedings - International Conference on Software Engineering. - New York, NY, USA : ACM. - 0270-5257. ; 2018, pp. x-xi
  • Conference paper (other academic/artistic)
  •  
17.
  • Javed, Muhammad, et al. (authors)
  • Safe and secure platooning of Automated Guided Vehicles in Industry 4.0
  • 2021
  • In: Journal of systems architecture. - Sweden : Elsevier B.V. - 1383-7621 .- 1873-6165. ; 121
  • Journal article (peer-reviewed), abstract:
    • Automated Guided Vehicles (AGVs) are widely used for materials transportation. Operating them in a platooned manner has the potential to improve safety, security and efficiency, control overall traffic flow and reduce resource usage. However, the published studies on platooning focus mainly on the design of technical solutions in the context of the automotive domain. In this paper we focus on a largely unexplored theme of platooning in production sites transformed to Industry 4.0, with the aim of providing safety and security assurances. We present an overall approach for fault- and threat-tolerant platooning for materials transportation in production environments. Our functional use cases include the platoon control for collision avoidance, data acquisition and processing by considering range, and connectivity with fog and cloud levels. To perform the safety and security analyses, the Hazard and Operability (HAZOP) and Threat and Operability (THROP) techniques are used. Based on the results obtained from them, the safety and security requirements are derived for the identification and prevention/mitigation of potential platooning hazards, threats and vulnerabilities. The assurance cases are constructed to show the acceptable safety and security of materials transportation using AGV platooning. We leveraged a simulation-based digital twin for performing the verification and validation as well as fine-tuning of the platooning strategy. Simulation data is gathered from the digital twin to monitor platoon operations, identify unexpected or incorrect behaviour, evaluate the potential implications, trigger control actions to resolve them, and continuously update assurance cases. The applicability of AGV platooning is demonstrated in the context of a quarry site.
  •  
18.
  •  
19.
  • Sweidan, Dirar, et al. (authors)
  • Predicting Customer Churn in Retailing
  • 2022
  • In: Proceedings 21st IEEE International Conference on Machine Learning and Applications ICMLA 2022. - : IEEE. - 9781665462839 - 9781665462846 ; , pp. 635-640
  • Conference paper (peer-reviewed), abstract:
    • Customer churn is one of the most challenging problems for digital retailers. With significantly higher costs for acquiring new customers than retaining existing ones, knowledge about which customers are likely to churn becomes essential. This paper reports a case study where a data-driven approach to churn prediction is used for predicting churners and gaining insights about the problem domain. The real-world data set used contains approximately 200 000 customers, describing each customer using more than 50 features. In the pre-processing, exploration, modeling and analysis, attributes related to recency, frequency, and monetary concepts are identified and utilized. In addition, correlations and feature importance are used to discover and understand churn indicators. One important finding is that the churn rate highly depends on the number of previous purchases. In the segment consisting of customers with only one previous purchase, more than 75% will churn, i.e., not make another purchase in the coming year. For customers with at least four previous purchases, the corresponding churn rate is around 25%. Further analysis shows that churning customers in general, and as expected, make smaller purchases and visit the online store less often. In the experimentation, three modeling techniques are evaluated, and the results show that, in particular, Gradient Boosting models can predict churners with relatively high accuracy while obtaining a good balance between precision and recall. 
  •  
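The churn study above reports that Gradient Boosting gave a good balance between precision and recall. The sketch below reproduces that kind of setup on synthetic data; the recency/frequency/monetary features and the churn rule merely echo the concepts mentioned in the abstract and are not the real retail data set.

    # Hypothetical churn model on synthetic RFM-style features (not the study's data).
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import precision_score, recall_score

    rng = np.random.default_rng(1)
    n = 5000
    recency = rng.exponential(90, n)           # days since last purchase
    frequency = rng.poisson(3, n) + 1          # number of previous purchases
    monetary = rng.gamma(2.0, 50.0, n)         # total spend
    # Invented churn rule echoing the abstract: few purchases => high churn risk.
    churn = (rng.random(n) < np.where(frequency <= 1, 0.75, 0.25)).astype(int)

    X = np.column_stack([recency, frequency, monetary])
    X_tr, X_te, y_tr, y_te = train_test_split(X, churn, random_state=0)
    clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print("precision:", precision_score(y_te, pred), "recall:", recall_score(y_te, pred))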
20.
  • Rumman, Nadine Abu, et al. (authors)
  • Skin deformation methods for interactive character animation
  • 2017
  • In: Communications in Computer and Information Science. - Cham : Springer International Publishing. - 1865-0937 .- 1865-0929. ; 693, pp. 153-174
  • Conference paper (peer-reviewed), abstract:
    • Character animation is a vital component of contemporary computer games, animated feature films and virtual reality applications. The problem of creating appealing character animation can best be described by the title of the animation bible: "The Illusion of Life". The focus is not on completing a given motion task, but more importantly on how this motion task is performed by the character. This does not necessarily require realistic behavior, but behavior that is believable. This of course includes the skin deformations when the character is moving. In this paper, we focus on the existing research in the area of skin deformation, ranging from skeleton-based deformation and volume-preserving techniques to physically based skinning methods. We also summarize the recent contributions in deformable and soft body simulations for articulated characters, and discuss various geometric and example-based approaches.
  •  
21.
  • Tuma, Katja, 1991, et al. (authors)
  • Flaws in Flows : Unveiling Design Flaws via Information Flow Analysis
  • 2019
  • In: Proceedings - 2019 IEEE International Conference on Software Architecture, ICSA 2019. - : IEEE. - 9781728105284 ; , pp. 191-200
  • Conference paper (peer-reviewed), abstract:
    • This paper presents a practical and formal approach to analyze security-centric information flow policies at the level of the design model. Specifically, we focus on data confidentiality and data integrity objectives. In its guiding principles, the approach is meant to be amenable for designers (e.g., software architects) that have very limited or no background in formal models, logics, and the like. To this aim, we provide an intuitive graphical notation, which is based on the familiar Data Flow Diagrams, and which requires as little effort as possible in terms of extra security-centric information the designer has to provide. The result of the analysis algorithm is the early discovery of design flaws in the form of violations of the intended security properties. The approach is implemented as a publicly available plugin for Eclipse and evaluated with four real-world case studies from publicly available literature.
  •  
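The record above checks confidentiality and integrity objectives by analyzing where labeled data can flow in an extended Data Flow Diagram. The sketch below is a minimal, generic reachability ("taint propagation") check over a hand-built flow graph; the element names and labels are illustrative, and the analysis is far simpler than the paper's formal approach.

    # Minimal illustrative check: can confidential data reach an untrusted sink in a DFD?
    from collections import deque

    flows = {                      # directed data flows between DFD elements (invented)
        "user_db": ["auth_service"],
        "auth_service": ["logger", "web_ui"],
        "logger": ["third_party_analytics"],
    }
    confidential_sources = {"user_db"}
    untrusted_sinks = {"third_party_analytics"}

    def reachable(sources, graph):
        seen, queue = set(sources), deque(sources)
        while queue:
            node = queue.popleft()
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    leaks = reachable(confidential_sources, flows) & untrusted_sinks
    print("confidentiality violations:", leaks or "none")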
22.
  • David, I., et al. (authors)
  • Blended modeling in commercial and open-source model-driven software engineering tools: A systematic study
  • 2023
  • In: Software and Systems Modeling. - : Springer Science and Business Media LLC. - 1619-1366 .- 1619-1374. ; 22, pp. 415-447
  • Journal article (peer-reviewed), abstract:
    • Blended modeling aims to improve the user experience of modeling activities by prioritizing the seamless interaction with models through multiple notations over the consistency of the models. Inconsistency tolerance, thus, becomes an important aspect in such settings. To understand the potential of current commercial and open-source modeling tools to support blended modeling, we have designed and carried out a systematic study. We identify challenges and opportunities in the tooling aspect of blended modeling. Specifically, we investigate the user-facing and implementation-related characteristics of existing modeling tools that already support multiple types of notations and map their support for other blended aspects, such as inconsistency tolerance, and elevated user experience. For the sake of completeness, we have conducted a multivocal study, encompassing an academic review, and grey literature review. We have reviewed nearly 5000 academic papers and nearly 1500 entries of grey literature. We have identified 133 candidate tools, and eventually selected 26 of them to represent the current spectrum of modeling tools.
  •  
23.
  • Abbas, Nadeem, 1980-, et al. (authors)
  • Smart Forest Observatories Network : A MAPE-K Architecture Based Approach for Detecting and Monitoring Forest Damage
  • 2023
  • In: Proceedings of the Conference Digital solutions for detecting and monitoring forest damage.
  • Conference paper (other academic/artistic), abstract:
    • Forests are essential for life, providing various ecological, social, and economic benefits worldwide. However, one of the main challenges faced by the world is forest damage caused by biotic and abiotic factors. In any case, forest damage threatens the environment, biodiversity, and the ecosystem. Climate change and anthropogenic activities, such as illegal logging and industrial waste, are among the principal elements contributing to forest damage. To achieve the United Nations' Sustainable Development Goals (SDGs) related to forests and climate change, it is essential to detect and analyze forest damage and take appropriate measures to prevent or reduce it. To that end, we envision establishing a Smart Forest Observatories (SFOs) network, which can be either a local area or a wide area network involving remote forests. The basic idea is to use the Monitor, Analyze, Plan, Execute, and Knowledge (MAPE-K) architecture from the autonomic computing and self-adaptive software systems domain to design and develop the SFOs network. The SFOs are planned to collect, analyze, and share the collected data and analysis results using state-of-the-art methods. The principal objective of the SFOs network is to provide accurate and real-time data to policymakers and forest managers, enabling them to develop effective policies and management strategies for global forest conservation that help to achieve the SDGs related to forests and climate change.
  •  
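The record above builds on the MAPE-K (Monitor, Analyze, Plan, Execute over shared Knowledge) loop from autonomic computing. The sketch below shows only the skeleton of such a loop with invented sensor readings and thresholds; it is a structural illustration, not the SFO design.

    # Skeleton of a MAPE-K loop with invented forest-sensor data and thresholds.
    knowledge = {"dryness_threshold": 0.7, "alerts": []}

    def monitor():                      # Monitor: gather raw observations
        return {"soil_dryness": 0.82, "bark_beetle_traps": 12}

    def analyze(obs):                   # Analyze: detect symptoms using shared knowledge
        return {"drought_risk": obs["soil_dryness"] > knowledge["dryness_threshold"]}

    def plan(symptoms):                 # Plan: choose adaptation actions
        return ["notify_forest_manager"] if symptoms["drought_risk"] else []

    def execute(actions):               # Execute: apply actions, update knowledge
        knowledge["alerts"].extend(actions)

    execute(plan(analyze(monitor())))
    print(knowledge["alerts"])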
24.
  • Robinson, Jonathan, 1986, et al. (authors)
  • An atlas of human metabolism
  • 2020
  • In: Science Signaling. - : American Association for the Advancement of Science (AAAS). - 1945-0877 .- 1937-9145. ; 13:624
  • Journal article (peer-reviewed), abstract:
    • Genome-scale metabolic models (GEMs) are valuable tools to study metabolism and provide a scaffold for the integrative analysis of omics data. Researchers have developed increasingly comprehensive human GEMs, but the disconnect among different model sources and versions impedes further progress. We therefore integrated and extensively curated the most recent human metabolic models to construct a consensus GEM, Human1. We demonstrated the versatility of Human1 through the generation and analysis of cell- and tissue-specific models using transcriptomic, proteomic, and kinetic data. We also present an accompanying web portal, Metabolic Atlas (https://www.metabolicatlas.org/), which facilitates further exploration and visualization of Human1 content. Human1 was created using a version-controlled, open-source model development framework to enable community-driven curation and refinement. This framework allows Human1 to be an evolving shared resource for future studies of human health and disease.
  •  
25.
  • Menghi, Claudio, 1987, et al. (authors)
  • Poster: Property specification patterns for robotic missions
  • 2018
  • In: Proceedings - International Conference on Software Engineering. - New York, NY, USA : ACM. - 0270-5257. ; Part F137351, pp. 434-435
  • Conference paper (peer-reviewed), abstract:
    • Engineering dependable software for mobile robots is becoming increasingly important. A core asset in engineering mobile robots is the mission specification, a formal description of the goals that mobile robots shall achieve. Such mission specifications are used, among others, to synthesize, verify, simulate, or guide the engineering of robot software. Development of precise mission specifications is challenging. Engineers need to translate the mission requirements into specification structures expressed in a logical language, a laborious and error-prone task. To mitigate this problem, we present a catalog of mission specification patterns for mobile robots. Our focus is on robot movement, one of the most prominent and recurrent specification problems for mobile robots. Our catalog maps common mission specification problems to recurrent solutions, which we provide as templates that can be used by engineers. The patterns are the result of analyzing missions extracted from the literature. For each pattern, we describe usage intent, known uses, relationships to other patterns, and, most importantly, a template representing the solution as a logical formula in temporal logic. Our specification patterns constitute reusable building blocks that can be used by engineers to create complex mission specifications while reducing specification mistakes. We believe that our patterns support researchers working on tool support and techniques to synthesize and verify mission specifications, and language designers creating rich domain-specific languages for mobile robots, incorporating our patterns as language concepts.
  •  
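The catalog described above provides mission specification templates as temporal-logic formulas over robot movement. As a generic illustration of what such a template looks like (not necessarily one of the catalog's own), a patrolling mission that requires a robot to visit locations l1 and l2 infinitely often can be written in LTL as

    \mathbf{G}\,\mathbf{F}\,l_1 \;\land\; \mathbf{G}\,\mathbf{F}\,l_2

where G ("globally") and F ("eventually") are the standard LTL operators: at every future point, each location is eventually visited again.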
26.
  • Nguyen, Björnborg, 1992, et al. (authors)
  • Systematic benchmarking for reproducibility of computer vision algorithms for real-time systems: The example of optic flow estimation
  • 2019
  • In: IEEE International Conference on Intelligent Robots and Systems. - : IEEE. - 2153-0858 .- 2153-0866. ; , pp. 5264-5269
  • Conference paper (peer-reviewed), abstract:
    • Until now there have been few formalized methods for conducting systematic benchmarking aimed at reproducible results when it comes to computer vision algorithms. This is evident from the lists of algorithms submitted to prominent datasets: authors of a novel method in many cases primarily state the performance of their algorithms in relation to a shallow description of the hardware system where it was evaluated. There are significant problems linked to this non-systematic approach to reporting performance, especially when comparing different approaches and when it comes to the reproducibility of claimed results. It is also unclear how to conduct retrospective performance analysis, such as assessing an algorithm's suitability for embedded real-time systems over time as the underlying hardware and software change. This paper proposes and demonstrates a systematic way of addressing such challenges by adopting containerization of software, aiming at formalization and reproducibility of benchmarks. Our results encourage maintainers of broadly accepted datasets in the computer vision community to strive for systematic comparison and reproducibility of submissions to increase the value and adoption of computer vision algorithms in the future.
  •  
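The record above argues for containerizing submissions so that hardware and software dependencies are pinned and runs become repeatable and comparable. The sketch below is a minimal harness in that spirit that times a containerized command through the Docker CLI; the image and command are placeholders, and a real benchmark would pin image digests and dataset versions.

    # Minimal, hypothetical harness: time a containerized benchmark run via the Docker CLI.
    import json
    import subprocess
    import time

    def run_containerized(image: str, command: list, runs: int = 3) -> dict:
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(["docker", "run", "--rm", image, *command], check=True)
            timings.append(time.perf_counter() - start)
        return {"image": image, "runs": runs,
                "mean_s": sum(timings) / runs, "all_s": timings}

    if __name__ == "__main__":
        # Placeholder image/command standing in for a real vision benchmark container.
        report = run_containerized("ubuntu:22.04", ["sh", "-c", "sleep 1"])
        print(json.dumps(report, indent=2))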
27.
  • Casado, Lander, 1985, et al. (authors)
  • ContikiSec: A Secure Network Layer for Wireless Sensor Networks under the Contiki Operating System
  • 2009
  • In: Proceedings of the 14th Nordic Conference on Secure IT Systems (NordSec 2009), Lecture Notes in Computer Science. - 1611-3349. - 9783642047657 ; 5838, pp. 133-147
  • Conference paper (peer-reviewed), abstract:
    • In this paper we introduce ContikiSec, a secure network layer for wireless sensor networks, designed for the Contiki Operating System. ContikiSec has a configurable design, providing three security modes starting from confidentiality and integrity, and expanding to confidentiality, authentication, and integrity. ContikiSec has been designed to balance low energy consumption and security while conforming to a small memory footprint. Our design was based on a performance evaluation of existing security primitives and is part of the contribution of this paper. Our evaluation was performed on the Modular Sensor Board hardware platform for wireless sensor networks, running Contiki. Contiki is an open source, highly portable operating system for wireless sensor networks (WSN) that is widely used in WSNs.
  •  
28.
  • Smedberg, Henrik, et al. (authors)
  • Mimer : A Web-Based Tool for Knowledge Discovery in Multi-Criteria Decision Support
  • 2024
  • In: IEEE Computational Intelligence Magazine. - : Institute of Electrical and Electronics Engineers (IEEE). - 1556-603X .- 1556-6048. ; 19:3, pp. 73-87
  • Journal article (peer-reviewed), abstract:
    • Practitioners of multi-objective optimization currently lack open tools that provide decision support through knowledge discovery. There exist many software platforms for multi-objective optimization, but they often fall short of implementing methods for rigorous post-optimality analysis and knowledge discovery from the generated solutions. This paper presents Mimer, a multi-criteria decision support tool for solution exploration, preference elicitation, knowledge discovery, and knowledge visualization. Mimer is openly available as a web-based tool and uses state-of-the-art web-technologies based on WebAssembly to perform heavy computations on the client-side. Its features include multiple linked visualizations and input methods that enable the decision maker to interact with the solutions, knowledge discovery through interactive data mining and graph-based knowledge visualization. It also includes a complete Python programming interface for advanced data manipulation tasks that may be too specific for the graphical interface. Mimer is evaluated through a user study in which the participants are asked to perform representative tasks simulating practical analysis and decision making. The participants also complete a questionnaire about their experience and the features available in Mimer. The survey indicates that participants find Mimer useful for decision support. The participants also offered suggestions for enhancing some features and implementing new features to extend the capabilities of the tool.
  •  
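Mimer, described above, lets a decision maker explore and mine solutions produced by multi-objective optimization. A basic building block of any such tool is non-dominated (Pareto) filtering of candidate solutions; the sketch below implements it for minimization objectives on invented data and is not part of Mimer itself.

    # Illustrative Pareto (non-dominated) filter for minimization objectives.
    import numpy as np

    def pareto_front(points: np.ndarray) -> np.ndarray:
        """Return a boolean mask of points not dominated by any other point."""
        n = len(points)
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            if not keep[i]:
                continue
            dominates_i = (np.all(points <= points[i], axis=1)
                           & np.any(points < points[i], axis=1))
            if dominates_i.any():
                keep[i] = False
        return keep

    solutions = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
    print(solutions[pareto_front(solutions)])  # [3, 4] is dominated by [2, 3] and dropped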
29.
  • Lwakatare, Lucy, 1987, et al. (authors)
  • On the experiences of adopting automated data validation in an industrial machine learning project
  • 2021
  • In: Proceedings - International Conference on Software Engineering. - 0270-5257. ; , pp. 248-257
  • Conference paper (peer-reviewed), abstract:
    • Background: Data errors are a common challenge in machine learning (ML) projects and generally cause significant performance degradation in ML-enabled software systems. To ensure early detection of erroneous data and avoid training ML models using bad data, research and industrial practice suggest incorporating a data validation process and tool in ML system development process. Aim: The study investigates the adoption of a data validation process and tool in industrial ML projects. The data validation process demands significant engineering resources for tool development and maintenance. Thus, it is important to identify the best practices for their adoption especially by development teams that are in the early phases of deploying ML-enabled software systems. Method: Action research was conducted at a large-software intensive organization in telecommunications, specifically within the analytics R&D organization for an ML use case of classifying faults from returned hardware telecommunication devices. Results: Based on the evaluation results and learning from our action research, we identified three best practices, three benefits, and two barriers to adopting the data validation process and tool in ML projects. We also propose a data validation framework (DVF) for systematizing the adoption of a data validation process. Conclusions: The results show that adopting a data validation process and tool in ML projects is an effective approach of testing ML-enabled software systems. It requires having an overview of the level of data (feature, dataset, cross-dataset, data stream) at which certain data quality tests can be applied.
  •  
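The record above notes that data quality tests apply at different levels: feature, dataset, cross-dataset, and data stream. The sketch below shows hand-rolled checks of the first three flavours on a toy data frame; the expectations and thresholds are invented, and this is not the validation tool used in the study.

    # Hand-rolled examples of feature-, dataset-, and cross-dataset-level checks (illustrative).
    import pandas as pd

    def validate(batch: pd.DataFrame, reference: pd.DataFrame) -> list:
        errors = []
        # Feature-level: schema and value-range expectations.
        if set(batch.columns) != set(reference.columns):
            errors.append("schema mismatch")
        if (batch["temperature"] < -80).any() or (batch["temperature"] > 80).any():
            errors.append("temperature out of physical range")
        # Dataset-level: missingness budget for the whole batch.
        if batch.isna().mean().mean() > 0.05:
            errors.append("too many missing values")
        # Cross-dataset: mean drift against the reference (training) data.
        drift = batch["temperature"].mean() - reference["temperature"].mean()
        if abs(drift) > 2 * reference["temperature"].std():
            errors.append("temperature distribution drift")
        return errors

    reference = pd.DataFrame({"temperature": [20.0, 21.5, 19.8, 22.1]})
    batch = pd.DataFrame({"temperature": [150.0, 21.0, None, 20.5]})
    print(validate(batch, reference) or "batch accepted")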
30.
  • Heyn, Hans-Martin, 1987, et al. (authors)
  • An Investigation of Challenges Encountered When Specifying Training Data and Runtime Monitors for Safety Critical ML Applications
  • 2023
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - : Springer. - 1611-3349 .- 0302-9743. ; 13975 LNCS, pp. 206-222
  • Conference paper (peer-reviewed), abstract:
    • [Context and motivation] The development and operation of critical software that contains machine learning (ML) models requires diligence and established processes. Especially the training data used during the development of ML models has a major influence on the later behaviour of the system. Runtime monitors are used to provide guarantees for that behaviour. [Question/problem] We see major uncertainty in how to specify training data and runtime monitoring for critical ML models, and thereby in how to specify the final functionality of the system. In this interview-based study we investigate the underlying challenges behind these difficulties. [Principal ideas/results] Based on ten interviews with practitioners who develop ML models for critical applications in the automotive and telecommunication sectors, we identified 17 underlying challenges in 6 challenge groups that relate to the challenge of specifying training data and runtime monitoring. [Contribution] The article provides a list of the identified underlying challenges related to the difficulties practitioners experience when specifying training data and runtime monitoring for ML models. Furthermore, interconnections between the challenges were found, and based on these connections recommendations are proposed to overcome the root causes of the challenges.
  •  
31.
  • Scheuner, Joel, 1991, et al. (authors)
  • Performance Benchmarking of Infrastructure-as-a-Service (IaaS) Clouds with CloudWorkBench
  • 2019
  • In: ICPE 2019 - Companion of the 2019 ACM/SPEC International Conference on Performance Engineering. - New York, NY, USA : ACM. ; , pp. 53-56
  • Conference paper (peer-reviewed), abstract:
    • The continuing growth of the cloud computing market has led to an unprecedented diversity of cloud services with different performance characteristics. To support service selection, researchers and practitioners conduct cloud performance benchmarking by measuring and objectively comparing the performance of different providers and configurations (e.g., instance types in different data center regions). In this tutorial, we demonstrate how to write performance tests for IaaS clouds using the Web-based benchmarking tool Cloud WorkBench (CWB). We will motivate and introduce benchmarking of IaaS clouds in general, demonstrate the execution of a simple benchmark in a public cloud environment, summarize the CWB tool architecture, and interactively develop and deploy a more advanced benchmark together with the participants.
  •  
32.
  • Sundell, Håkan, 1968, et al. (authors)
  • NOBLE: non-blocking programming support via lock-free shared abstract data types
  • 2009
  • In: SIGARCH Computer Architecture News. - : ACM, Association for Computing Machinery, Inc. - 0163-5964 .- 1943-5851. ; 36:5, pp. 80-87
  • Journal article (peer-reviewed), abstract:
    • An essential part of programming for multi-core and multi-processor systems includes efficient and reliable means for sharing data. Lock-free data structures are known to be very suitable for this purpose, although they are experienced as very complex to design. In this paper, we present a software library of non-blocking abstract data types that have been designed to facilitate lock-free programming for non-experts. The system provides: i) efficient implementations of the most commonly used data types in concurrent and sequential software design, ii) a lock-free memory management system, and iii) a run-time system. The library provides clear semantics that are at least as strong as those of corresponding lock-based implementations of the respective data types. Our software library can be used for facilitating lock-free programming; its design enables the programmer to: i) replace lock-based components of sequential or parallel code easily and efficiently, ii) use well-tuned concurrent algorithms inside a software or hardware transactional system. In the paper we describe the design and functionality of the system. We also provide experimental results that show that the library can considerably improve the performance of software systems.
  •  
33.
  • Palyvos-Giannas, Dimitrios, 1991 (author)
  • Explainable and Resource-Efficient Stream Processing Through Provenance and Scheduling
  • 2022
  • Doctoral thesis (other academic/artistic), abstract:
    • In our era of big data, information is captured at unprecedented volumes and velocities, with technologies such as Cyber-Physical Systems making quick decisions based on the processing of streaming, unbounded datasets. In such scenarios, it can be beneficial to process the data in an online manner, using the stream processing paradigm implemented by Stream Processing Engines (SPEs). While SPEs enable high-throughput, low-latency analysis, they are faced with challenges connected to evolving deployment scenarios, like the increasing use of heterogeneous, resource-constrained edge devices together with cloud resources and the increasing user expectations for usability, control, and resource-efficiency, on par with features provided by traditional databases. This thesis tackles open challenges regarding making stream processing more user-friendly, customizable, and resource-efficient. The first part outlines our work, providing high-level background information, descriptions of the research problems, and our contributions. The second part presents our three state-of-the-art frameworks for explainable data streaming using data provenance , which can help users of streaming queries to identify important data points, explain unexpected behaviors, and aid query understanding and debugging. (A) GeneaLog provides backward provenance allowing users to identify the inputs that contributed to the generation of each output of a streaming query. (B) Ananke is the first framework to provide a duplicate-free graph of live forward provenance, enabling easy bidirectional tracing of input-output relationships in streaming queries and identifying data points that have finished contributing to results. (C) Erebus is the first framework that allows users to define expectations about the results of a streaming query, validating whether these expectations are met or providing explanations in the form of why-not provenance otherwise. The third part presents techniques for execution efficiency through custom scheduling , introducing our state-of-the-art scheduling frameworks that control resource allocation and achieve user-defined performance goals. (D) Haren is an SPE-agnostic user-level scheduler that can efficiently enforce user-defined scheduling policies. (E) Lachesis is a standalone scheduling middleware that requires no changes to SPEs but, instead, directly guides the scheduling decisions of the underlying Operating System. Our extensive evaluations using real-world SPEs and workloads show that our work significantly improves over the state-of-the-art while introducing only small performance overheads.
  •  
34.
  • Peldszus, Sven, et al. (authors)
  • Secure Data-Flow Compliance Checks between Models and Code Based on Automated Mappings
  • 2019
  • In: Proceedings - 2019 ACM/IEEE 22nd International Conference on Model Driven Engineering Languages and Systems, MODELS 2019. ; , pp. 23-33
  • Conference paper (peer-reviewed), abstract:
    • During the development of security-critical software, the system implementation must capture the security properties postulated by the architectural design. This paper presents an approach to support secure data-flow compliance checks between design models and code. To iteratively guide the developer in discovering such compliance violations we introduce automated mappings. These mappings are created by searching for correspondences between a design-level model (Security Data Flow Diagram) and an implementation-level model (Program Model). We limit the search space by considering name similarities between model elements and code elements as well as by the use of heuristic rules for matching data-flow structures. The main contributions of this paper are three-fold. First, the automated mappings support the designer in an early discovery of implementation absence, convergence, and divergence with respect to the planned software design. Second, the mappings also support the discovery of secure data-flow compliance violations in terms of illegal asset flows in the software implementation. Third, we present our implementation of the approach as a publicly available Eclipse plugin and its evaluation on five open source Java projects (including Eclipse secure storage).
  •  
35.
  • Mahmood, Wardah, 1992, et al. (authors)
  • Effects of variability in models: a family of experiments
  • 2022
  • In: Empirical Software Engineering. - : Springer Science and Business Media LLC. - 1382-3256 .- 1573-7616. ; 27:3
  • Journal article (peer-reviewed), abstract:
    • The ever-growing need for customization creates a need to maintain software systems in many different variants. To avoid having to maintain different copies of the same model, developers of modeling languages and tools have recently started to provide implementation techniques for such variant-rich systems, notably variability mechanisms, which support implementing the differences between model variants. Available mechanisms either follow the annotative or the compositional paradigm, each of which has dedicated benefits and drawbacks. Currently, language and tool designers select the variability mechanism to use often solely based on intuition. A better empirical understanding of the comprehension of variability mechanisms would help them in improving support for effective modeling. In this article, we present an empirical assessment of annotative and compositional variability mechanisms for three popular types of models. We report and discuss findings from a family of three experiments with 164 participants in total, in which we studied the impact of different variability mechanisms during model comprehension tasks. We experimented with three model types commonly found in modeling languages: class diagrams, state machine diagrams, and activity diagrams. We find that, in two out of three experiments, the annotative technique led to better developer performance. Use of the compositional mechanism correlated with impaired performance. For all three considered tasks, the annotative mechanism was preferred over the compositional one in all experiments. We present actionable recommendations concerning support of flexible, task-specific solutions, and the transfer of established best practices from the code domain to models.
  •  
36.
  • Tuzun, Eray, et al. (författare)
  • Ground-Truth Deficiencies in Software Engineering : When Codifying the Past Can Be Counterproductive
  • 2022
  • Ingår i: IEEE Software. - : IEEE Computer Society. - 0740-7459 .- 1937-4194. ; 39:3, s. 85-95
  • Tidskriftsartikel (refereegranskat)abstract
    • In software engineering, the objective function of human decision makers might be influenced by many factors. Relying on historical data as the ground truth may give rise to systems that automate software engineering decisions by mimicking past suboptimal behavior. We describe the problem and offer some strategies.
  •  
37.
  • Alshareef, Hanaa, 1985, et al. (författare)
  • Transforming data flow diagrams for privacy compliance
  • 2021
  • Ingår i: MODELSWARD 2021 - Proceedings of the 9th International Conference on Model-Driven Engineering and Software Development. - : SCITEPRESS - Science and Technology Publications. ; , s. 207-215
  • Konferensbidrag (refereegranskat)abstract
    • Most software design tools, for instance Data Flow Diagrams (DFDs), are focused on functional aspects and thus cannot model non-functional aspects like privacy. In this paper, we provide an explicit algorithm and a proof-of-concept implementation to transform DFDs into so-called Privacy-Aware Data Flow Diagrams (PA-DFDs). Our tool systematically inserts privacy checks into a DFD, generating a PA-DFD. We apply our approach to two realistic applications from the construction and online retail sectors.
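As a rough illustration of the transformation idea only (not the authors' algorithm or their proof-of-concept tool), the sketch below represents a DFD as a list of flows and inserts a privacy-check node on every flow that carries personal data; the node, flow, and data names are invented.

```python
# Toy DFD as a list of flows (source, target, data); all names are invented.
flows = [
    ("WebForm", "OrderService", "email"),       # personal data
    ("OrderService", "Warehouse", "order_id"),  # non-personal data
    ("OrderService", "Mailer", "email"),        # personal data
]
PERSONAL_DATA = {"email"}

def to_pa_dfd(flows):
    """Insert a privacy-check node on every flow carrying personal data."""
    pa_flows = []
    for i, (src, dst, data) in enumerate(flows):
        if data in PERSONAL_DATA:
            check = f"PrivacyCheck_{i}"
            pa_flows.append((src, check, data))
            pa_flows.append((check, dst, data))
        else:
            pa_flows.append((src, dst, data))
    return pa_flows

for flow in to_pa_dfd(flows):
    print(flow)
```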
  •  
38.
  • Sweidan, Dirar (författare)
  • Data-driven decision support in digital retailing
  • 2023
  • Licentiatavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • In the digital era, and with the advent of artificial intelligence, digital retailing has emerged as a notable shift in commerce. It empowers e-tailers with data-driven insights and predictive models to navigate a variety of challenges, driving informed decision-making and strategic formulation. While predictive models are fundamental for making data-driven decisions, this thesis focuses on binary classifiers. These classifiers are studied in two real-world problems, each marked by particular properties. Specifically, when binary decisions are made based on predictions, relying solely on predicted class labels is insufficient because of variations in classification accuracy. Furthermore, different prediction mistakes carry different costs, which impacts the utility of the decisions. To confront these challenges, probabilistic predictions, often unexplored or uncalibrated, are a promising alternative to class labels. Therefore, machine learning modelling and calibration techniques are explored, employing benchmark data sets alongside empirical studies grounded in industrial contexts. These studies analyse predictions and their associated probabilities across diverse data segments and settings. As a proof of concept, the thesis finds that some algorithms are inherently well calibrated, while others demonstrate reliability once their probabilities are calibrated. In both cases, the thesis concludes that acting on the top predictions with the highest probabilities increases precision and minimises false positives, and that adopting well-calibrated probabilities is a powerful alternative to mere class labels. Consequently, by transforming probabilities into reliable confidence values through classification with a rejection option, a pathway emerges wherein confident and reliable predictions take centre stage in decision-making. This enables e-tailers to form distinct strategies based on these predictions and optimise their utility. The thesis highlights the value of calibrated models and probabilistic predictions and emphasises their significance in enhancing decision-making. The findings have practical implications for e-tailers leveraging data-driven decision support. Future research should focus on an automated system that prioritises predictions with high, well-calibrated probabilities, discards the rest, and optimises utility based on the costs and gains associated with the different prediction outcomes, to further enhance decision support for e-tailers.
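As a generic illustration of the combination the thesis argues for, calibrated probabilities plus classification with a rejection option (not the thesis's specific models, data, or thresholds), the sketch below calibrates a classifier with scikit-learn and only acts on predictions whose top probability clears a confidence threshold.

```python
# Generic sketch: probability calibration plus a rejection option; synthetic
# data stands in for the retail data sets used in the thesis.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Calibrate the classifier's probabilities (Platt scaling in this example).
model = CalibratedClassifierCV(RandomForestClassifier(random_state=0),
                               method="sigmoid", cv=5)
model.fit(X_train, y_train)

# Reject predictions whose top probability is below the confidence threshold.
THRESHOLD = 0.8  # assumed cut-off; in practice chosen from costs and gains
probabilities = model.predict_proba(X_test)
confident = probabilities.max(axis=1) >= THRESHOLD
print(f"Acted on {confident.sum()} of {len(X_test)} predictions; "
      f"rejected {len(X_test) - confident.sum()} as too uncertain.")
```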
  •  
39.
  • Ulan, Maria, et al. (författare)
  • Quality Models Inside Out : Interactive Visualization of Software Metrics by Means of Joint Probabilities
  • 2018
  • Ingår i: Proceedings of the 2018 Sixth IEEE Working Conference on Software Visualization, (VISSOFT), Madrid, Spain, 2018. - : IEEE. - 9781538682920 - 9781538682937 ; , s. 65-75
  • Konferensbidrag (refereegranskat)abstract
    • Assessing software quality, in general, is hard; each metric has a different interpretation, scale, range of values, or measurement method. Combining these metrics automatically is especially difficult, because they measure different aspects of software quality, and creating a single global final quality score limits the evaluation of the specific quality aspects and trade-offs that exist when looking at different metrics. We present a way to visualize multiple aspects of software quality. In general, software quality can be decomposed hierarchically into characteristics, which can be assessed by various direct and indirect metrics. These characteristics are then combined and aggregated to assess the quality of the software system as a whole. We introduce an approach for quality assessment based on joint distributions of metrics values. Visualizations of these distributions allow users to explore and compare the quality metrics of software systems and their artifacts, and to detect patterns, correlations, and anomalies. Furthermore, it is possible to identify common properties and flaws, as our visualization approach provides rich interactions for visual queries to the quality models’ multivariate data. We evaluate our approach in two use cases based on: 30 real-world technical documentation projects with 20,000 XML documents, and an open source project written in Java with 1000 classes. Our results show that the proposed approach allows an analyst to detect possible causes of bad or good quality.
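A very rough sketch of the underlying idea of aggregating metrics through their joint empirical distribution is given below; it is not the authors' quality model or visualization, and the metric values are invented. Each artifact is scored by the empirical probability that a randomly chosen artifact is at least as bad on every metric.

```python
import numpy as np

# Rows = artifacts, columns = metrics where lower is better (invented values).
metrics = np.array([
    [12.0, 0.3, 5.0],
    [30.0, 0.9, 9.0],
    [15.0, 0.4, 4.0],
    [25.0, 0.8, 8.0],
])

def joint_score(values):
    """For each artifact, the fraction of artifacts that are at least as bad on
    every metric, i.e. a score derived from the joint empirical distribution."""
    n = values.shape[0]
    return np.array([(values >= values[i]).all(axis=1).mean() for i in range(n)])

print(joint_score(metrics))  # higher score = more artifacts are at least as bad
```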
  •  
40.
  • Kim, Jinhan, et al. (författare)
  • Guiding Deep Learning System Testing Using Surprise Adequacy
  • 2019
  • Ingår i: Proceedings - International Conference on Software Engineering. - : IEEE. - 0270-5257. ; 2019-May, s. 1039-1049
  • Konferensbidrag (refereegranskat)abstract
    • Deep Learning (DL) systems are rapidly being adopted in safety and security critical domains, urgently calling for ways to test their correctness and robustness. Testing of DL systems has traditionally relied on manual collection and labelling of data. Recently, a number of coverage criteria based on neuron activation values have been proposed. These criteria essentially count the number of neurons whose activation during the execution of a DL system satisfied certain properties, such as being above predefined thresholds. However, existing coverage criteria are not sufficiently fine grained to capture subtle behaviours exhibited by DL systems. Moreover, evaluations have focused on showing correlation between adversarial examples and proposed criteria rather than evaluating and guiding their use for actual testing of DL systems. We propose a novel test adequacy criterion for testing of DL systems, called Surprise Adequacy for Deep Learning Systems (SADL), which is based on the behaviour of DL systems with respect to their training data. We measure the surprise of an input as the difference in DL system's behaviour between the input and the training data (i.e., what was learnt during training), and subsequently develop this as an adequacy criterion: a good test input should be sufficiently but not overtly surprising compared to training data. Empirical evaluation using a range of DL systems from simple image classifiers to autonomous driving car platforms shows that systematic sampling of inputs based on their surprise can improve classification accuracy of DL systems against adversarial examples by up to 77.5% via retraining.
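The SADL metrics are defined over neuron activation traces in likelihood- and distance-based variants; the snippet below shows only a simplified distance-based notion of surprise (the distance from a new input's activation vector to its nearest neighbour among the training activations), with made-up activation data rather than a real DL system.

```python
import numpy as np

# Made-up activation traces: rows are activation vectors of training inputs at
# some chosen layer, plus one new test input.
rng = np.random.default_rng(0)
train_activations = rng.normal(size=(1000, 32))
test_activation = rng.normal(loc=3.0, size=32)  # deliberately unusual input

def distance_surprise(train_acts, test_act):
    """Simplified distance-based surprise: distance to the nearest training
    activation (larger = more surprising)."""
    distances = np.linalg.norm(train_acts - test_act, axis=1)
    return distances.min()

print(f"surprise = {distance_surprise(train_activations, test_activation):.2f}")
```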
  •  
41.
  • Lidberg, Simon, MSc. 1986-, et al. (författare)
  • A Knowledge Extraction Platform for Reproducible Decision-Support from Multi-Objective Optimization Data
  • 2022
  • Ingår i: SPS2022. - Amsterdam; Berlin; Washington, DC : IOS Press. - 9781643682686 - 9781643682693 ; 21, s. 725-736
  • Konferensbidrag (refereegranskat)abstract
    • Simulation and optimization enable companies to make decisions based on data and allow prescriptive analysis of current and future production scenarios, creating a competitive edge. However, it can be difficult to visualize and extract knowledge from the large amounts of data generated by many-objective genetic optimization algorithms, especially when objectives conflict. Existing tools offer capabilities for extracting knowledge in the form of clusters, rules, and connections. Although powerful, most existing software is proprietary and therefore difficult to obtain, modify, and deploy, which also hinders a reproducible workflow. We propose an open-source web-based application, built on commonly available packages in the R programming language, to extract knowledge from data generated by simulation-based optimization. This application is then verified by replicating the experimental methodology of a peer-reviewed paper on knowledge extraction. Finally, further work is discussed, focusing on method improvements and reproducible results.
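The platform itself is an R web application; purely to illustrate one kind of knowledge-extraction step such tools support (clustering optimization output into trade-off regions), the sketch below clusters solutions from an invented two-objective data set. The objectives, values, and cluster count are assumptions for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented solutions from a two-objective optimization (e.g., cost vs. lead time).
rng = np.random.default_rng(1)
solutions = np.vstack([
    rng.normal([1.0, 8.0], 0.2, size=(50, 2)),  # cheap but slow region
    rng.normal([5.0, 2.0], 0.2, size=(50, 2)),  # expensive but fast region
])

# Cluster the solutions to surface distinct trade-off regions for decision support.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(solutions)
for label in np.unique(clusters):
    centre = solutions[clusters == label].mean(axis=0)
    print(f"cluster {label}: cost={centre[0]:.1f}, lead time={centre[1]:.1f}")
```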
  •  
42.
  • Al Sabbagh, Khaled, 1987, et al. (författare)
  • Improving Data Quality for Regression Test Selection by Reducing Annotation Noise
  • 2020
  • Ingår i: Proceedings - 46th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2020. ; , s. 191-194
  • Konferensbidrag (refereegranskat)abstract
    • Big data and machine learning models have been increasingly used to support software engineering processes and practices. One example is the use of machine learning models to improve test case selection in continuous integration. However, one of the challenges in building such models is the identification and reduction of noise that is often present in large data sets. In this paper, we present a noise reduction approach that deals with the problem of contradictory training entries. We empirically evaluate the effectiveness of the approach in the context of selective regression testing. For this purpose, we use a curated training set as input to a tree-based machine learning ensemble and compare the classification precision, recall, and F-score against a non-curated set. Our study shows that applying the noise reduction approach to the training instances yields better predictions, with improvements of 37% in precision, 70% in recall, and 59% in F-score.
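As one simple way to handle the contradictory-entry problem described above (not necessarily the paper's exact curation approach), the sketch below drops feature combinations that occur with conflicting labels before training; the feature and label names are invented.

```python
import pandas as pd

# Invented training set: identical feature rows with conflicting labels are noise.
data = pd.DataFrame({
    "changed_module": ["net", "net", "ui", "db", "db"],
    "lines_changed":  [120,   120,   15,   40,   40],
    "test_failed":    [1,     0,     0,    1,    1],   # label
})
features = ["changed_module", "lines_changed"]

# A feature combination is contradictory if it occurs with more than one label.
label_counts = (data.groupby(features)["test_failed"]
                    .nunique()
                    .reset_index(name="n_labels"))
consistent_keys = label_counts[label_counts["n_labels"] == 1][features]

# Keep only the consistent entries as the curated training set.
curated = data.merge(consistent_keys, on=features, how="inner")
print(curated)
```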
  •  
43.
  • Munappy, Aiswarya Raj, 1990 (författare)
  • Data management and Data Pipelines: An empirical investigation in the embedded systems domain
  • 2021
  • Licentiatavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • Context: Companies are increasingly collecting data from all possible sources to extract insights that help in data-driven decision-making. Increased data volume, variety, and velocity, together with the impact of poor-quality data on the development of data products, are leading companies to look for an improved data management approach that can accelerate the development of high-quality data products. Further, AI is being applied in a growing number of fields, and thus it is evolving as a horizontal technology. Consequently, AI components are increasingly being integrated into embedded systems along with electronics and software. We refer to these systems as AI-enhanced embedded systems. Given the strong dependence of AI on data, this expansion also creates a new space for applying data management techniques. Objective: The overall goal of this thesis is to empirically identify the data management challenges encountered during the development and maintenance of AI-enhanced embedded systems, propose an improved data management approach, and empirically validate the proposed approach. Method: To achieve the goal, we conducted this research in close collaboration with Software Center companies using a combination of different empirical research methods: case studies, literature reviews, and action research. Results and conclusions: This research provides five main results. First, it identifies key data management challenges specific to Deep Learning models developed at embedded system companies. Second, it examines practices such as DataOps and data pipelines that help to address data management challenges. We observed that DataOps is the best data management practice for improving data quality and reducing the time to develop data products. The data pipeline is the critical component of DataOps that manages the data life cycle activities. The study also identifies the potential faults at each step of the data pipeline and the corresponding mitigation strategies. Finally, the data pipeline model is realized in a small data pipeline, and the percentage of saved data dumps is calculated through the implementation. Future work: As future work, we plan to realize the conceptual data pipeline model so that companies can build customized, robust data pipelines. We also plan to analyze the impact and value of data pipelines in cross-domain AI systems and data applications, and to develop an AI-based fault detection and mitigation system suitable for data pipelines.
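As a minimal, generic illustration of per-step fault handling in a data pipeline (not the thesis's conceptual pipeline model), the sketch below chains two steps and quarantines records that fail a step instead of dropping the whole batch; the record format and step names are invented.

```python
# Minimal pipeline sketch: each step may fail per record, and failing records
# are quarantined with the step name and error instead of being silently lost.
import json

raw_records = ['{"sensor": 1, "value": "3.5"}', '{"sensor": 2}', 'not json']

def parse(record):
    return json.loads(record)

def validate(record):
    if "value" not in record:
        raise ValueError("missing value field")
    record["value"] = float(record["value"])
    return record

def run_pipeline(records, steps):
    good, quarantined = list(records), []
    for step in steps:
        next_good = []
        for record in good:
            try:
                next_good.append(step(record))
            except Exception as err:  # mitigation: isolate the record, keep going
                quarantined.append((record, step.__name__, str(err)))
        good = next_good
    return good, quarantined

delivered, quarantined = run_pipeline(raw_records, [parse, validate])
print("delivered:", delivered)
print("quarantined:", quarantined)
```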
  •  
44.
  • Jalali, Amin, 1980- (författare)
  • Aspect-Oriented Business Process Management
  • 2016
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • Separation of concerns has long been considered an effective and efficient strategy to deal with complexity in information systems. One sort of concern, like security and privacy, crosses over other concerns in a system. Such concerns are called cross-cutting concerns. As a result, the realization of these concerns is scattered through the whole system, which makes their management difficult. Aspect Orientation is a paradigm in information systems which aims to modularize cross-cutting concerns. This paradigm is well researched in the programming area, where many aspect-oriented programming languages have been developed, e.g., AspectJ. It has also been investigated in other areas, such as requirement engineering and service composition. In the Business Process Management (BPM) area, Aspect Oriented Business Process Modeling aims to specify how this modularization technique can support encapsulating cross-cutting concerns in process models. However, it is not clear how these models should be supported in the whole BPM lifecycle. In addition, the support for designing these models has only been limited to imperative process models that support rigid business processes. Neither has it been investigated how this modularization technique can be supported through declarative or hybrid models to support the separation of cross-cutting concerns for flexible business processes. Therefore, this thesis investigates how aspect orientation can be supported over the whole BPM lifecycle using imperative aspect-oriented business process models. It also investigates how declarative and hybrid aspect-oriented business process models can support the separation of cross-cutting concerns in the BPM area. This thesis has been carried out following the design science framework, and the result is presented as a set of artifacts (in the form of constructs, models, methods, and instantiations) and empirical findings. The artifacts support modeling, analysis, implementation/configuration, enactment, monitoring, adjustment, and mining cross-cutting concerns while supporting business processes using Business Process Management Systems. Thus, it covers the support for the management of these concerns over the whole BPM lifecycle. The use of these artifacts and their application shows that they can reduce the complexity of process models by separating different concerns.
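The thesis targets business process models rather than code, but the core idea of modularizing a cross-cutting concern can be shown with a small programming analogy: below, logging is factored into one aspect-like decorator instead of being scattered through every business task. This is an illustration only, written in Python rather than an aspect-oriented or BPM language, and the task names are invented.

```python
# Programming analogy: a cross-cutting concern (logging) modularized in one
# place instead of being scattered through every business task.
import functools
import logging

logging.basicConfig(level=logging.INFO)

def logged(task):
    """Aspect-like advice applied around every task it decorates."""
    @functools.wraps(task)
    def wrapper(*args, **kwargs):
        logging.info("starting %s", task.__name__)
        result = task(*args, **kwargs)
        logging.info("finished %s", task.__name__)
        return result
    return wrapper

@logged
def approve_invoice(invoice_id):
    return f"invoice {invoice_id} approved"

@logged
def ship_order(order_id):
    return f"order {order_id} shipped"

print(approve_invoice(42))
print(ship_order(7))
```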
  •  
45.
  • Scheuner, Joel, 1991, et al. (författare)
  • Transpiling Applications into Optimized Serverless Orchestrations
  • 2019
  • Ingår i: Proceedings - 2019 IEEE 4th International Workshops on Foundations and Applications of Self* Systems, FAS*W 2019. - : IEEE. ; June 2019, s. 72-73
  • Konferensbidrag (refereegranskat)abstract
    • The serverless computing paradigm promises increased development productivity by abstracting the underlying hardware infrastructure and software runtime when building distributed cloud applications. However, composing a serverless application consisting of many tiny functions is still a cumbersome and inflexible process due to the lack of a unified source code view and strong coupling to non-standardized function-level interfaces for code and configuration. In our vision, developers can focus on writing readable source code in a logical structure, which then gets transformed into an optimized multi-function serverless orchestration. Our idea involves transpilation (i.e., source-to-source transformation) based on an optimization model (e.g., cost optimization) by dynamically deciding which set of methods will be grouped into individual deployment units. A successful implementation of our vision would enable a broader range of serverless applications and allow for dynamic deployment optimization based on monitoring runtime metrics. Further, we would expect increased developer productivity by using more familiar abstractions and facilitating clean coding practices and code reuse.
  •  
46.
  • Elmqvist, Niklas, 1977, et al. (författare)
  • DataMeadow: a visual canvas for analysis of large-scale multivariate data
  • 2008
  • Ingår i: Information Visualization. - : SAGE Publications. - 1473-8716 .- 1473-8724. ; 7:1, s. 18-33
  • Tidskriftsartikel (refereegranskat)abstract
    • Supporting visual analytics of multiple large-scale multidimensional data sets requires a high degree of interactivity and user control beyond the conventional challenges of visualizing such data sets. We present the DataMeadow, a visual canvas providing rich interaction for constructing visual queries using graphical set representations called DataRoses. A DataRose is essentially a starplot of selected columns in a data set displayed as multivariate visualizations with dynamic query sliders integrated into each axis. The purpose of the DataMeadow is to allow users to create advanced visual queries by iteratively selecting and filtering into the multidimensional data. Furthermore, the canvas provides a clear history of the analysis that can be annotated to facilitate dissemination of analytical results to stakeholders. A powerful direct manipulation interface allows for selection, filtering, and creation of sets, subsets, and data dependencies. We have evaluated our system using a qualitative expert review involving two visualization researchers. Results from this review are favorable for the new method.
  •  
47.
  • Henriksson, Jens, 1991, et al. (författare)
  • Performance analysis of out-of-distribution detection on trained neural networks
  • 2020
  • Ingår i: Information and Software Technology. - : Elsevier B.V.. - 0950-5849 .- 1873-6025.
  • Tidskriftsartikel (refereegranskat)abstract
    • Context: Deep Neural Networks (DNN) have shown great promise in various domains, for example to support pattern recognition in medical imagery. However, DNNs need to be tested for robustness before being deployed in safety critical applications. One common challenge occurs when the model is exposed to data samples outside of the training data domain, which can yield outputs with high confidence even though the model has no prior knowledge of the given input. Objective: The aim of this paper is to investigate how the performance of detecting out-of-distribution (OOD) samples changes for outlier detection methods (e.g., supervisors) when DNNs become better on training samples. Method: Supervisors are components aiming at detecting out-of-distribution samples for a DNN. The experimental setup in this work compares the performance of supervisors using metrics and datasets that reflect the most common setups in related works. Four different DNNs with three different supervisors are compared during different stages of training, to detect at what point during training the performance of the supervisors begins to deteriorate. Results: We found that the outlier detection performance of the supervisors increased as the accuracy of the underlying DNN improved. However, all supervisors showed a large variation in performance, even for variations of network parameters that marginally changed the model accuracy. The results showed that understanding the relationship between training results and supervisor performance is crucial to improve a model's robustness. Conclusion: Analyzing DNNs for robustness is a challenging task. The results showed that variations in model parameters that cause only small changes in model predictions can have a large impact on the out-of-distribution detection performance. This kind of behavior needs to be addressed when DNNs are part of a safety critical application, and hence the necessary safety argumentation for such systems needs to be structured accordingly.
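As an illustration of what a simple supervisor can look like, the sketch below implements the common maximum-softmax-probability baseline, which is not necessarily one of the three supervisors evaluated in the paper; the logits and threshold are made up.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the class dimension."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Made-up logits for three inputs; the last one is deliberately ambiguous.
logits = np.array([
    [9.0, 1.0, 0.5],
    [0.2, 7.5, 1.0],
    [1.1, 1.0, 0.9],
])

THRESHOLD = 0.7  # assumed: below this top-class probability, raise a flag
top_prob = softmax(logits).max(axis=1)
for i, p in enumerate(top_prob):
    verdict = "flag as possibly out-of-distribution" if p < THRESHOLD else "accept"
    print(f"input {i}: max softmax {p:.2f} -> {verdict}")
```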
  •  
48.
  • Hammarstedt, Martin, et al. (författare)
  • Sparv 5 Developer’s Guide
  • 2022
  • Rapport (övrigt vetenskapligt/konstnärligt)abstract
    • The Sparv Pipeline developed by Språkbanken Text is a text analysis tool run from the command line. This Developer's Guide describes its general structure and key concepts, and serves as API documentation. Most importantly, it describes how to write plugins for Sparv 5 so that you can add your own functions to the toolkit.
  •  
49.
  • Hyrynsalmi, Sami, et al. (författare)
  • Towards a Data Business Maturity Model for Software-intensive Embedded System Companies
  • 2023
  • Ingår i: Proceedings of the 29th International Conference on Engineering, Technology, and Innovation: Shaping the Future, ICE 2023. - 9798350315172
  • Konferensbidrag (refereegranskat)abstract
    • Data is quickly becoming the fuel, the new oil, of growth and prosperity for companies in the modern age. With useful data and sufficient tools, companies can enhance their current products, present new innovations and services, and generate new revenue streams with a secondary customer base. While there are ongoing efforts to develop machine learning and data science techniques, little attention has been paid to understanding and characterizing data-related business activities in software-intensive companies. This multiple-case study examines four large international embedded system companies to explore how they are utilizing data and how they have proceeded in their journey in the data business. The study identifies six distinct stages, each with unique challenges, that seem to be common for embedded system companies in their data business. As a result, this study presents an initial data business maturity model for software-intensive embedded system companies. Additionally, this research provides a foundation for future efforts to support software-intensive embedded system companies in establishing data businesses.
  •  
50.
  • John, Meenu Mary, et al. (författare)
  • Towards an AI-driven business development framework: A multi-case study
  • 2023
  • Ingår i: Journal of Software: Evolution and Process. - : Wiley. - 2047-7481 .- 2047-7473. ; 35:6
  • Tidskriftsartikel (refereegranskat)abstract
    • Artificial intelligence (AI) and the use of machine learning (ML) and deep learning (DL) technologies are becoming increasingly popular in companies. These technologies enable companies to leverage big quantities of data to improve system performance and accelerate business development. However, despite the appeal of ML/DL, there is a lack of systematic and structured methods and processes to help data scientists and other company roles and functions to develop, deploy and evolve models. In this paper, based on multi-case study research in six companies, we explore practices and challenges practitioners experience in developing ML/DL models as part of large software-intensive embedded systems. Based on our empirical findings, we derive a conceptual framework in which we identify three high-level activities that companies perform in parallel with the development, deployment and evolution of models. Within this framework, we outline activities, iterations and triggers that optimize model design as well as roles and company functions. In this way, we provide practitioners with a blueprint for effectively integrating ML/DL model development into the business to achieve better results than other (algorithmic) approaches. In addition, we show how this framework helps companies solve the challenges we have identified and discuss checkpoints for terminating the business case.
  •  
Typ av publikation
konferensbidrag (2608)
tidskriftsartikel (1352)
licentiatavhandling (182)
rapport (177)
bokkapitel (164)
doktorsavhandling (147)
forskningsöversikt (73)
annan publikation (62)
proceedings (redaktörskap) (51)
bok (32)
samlingsverk (redaktörskap) (28)
konstnärligt arbete (2)
patent (1)
Typ av innehåll
refereegranskat (3934)
övrigt vetenskapligt/konstnärligt (922)
populärvet., debatt m.m. (19)
Författare/redaktör
Wohlin, Claes (186)
Bosch, Jan, 1967 (186)
Staron, Miroslaw, 19 ... (132)
Weyns, Danny (130)
Petersen, Kai (117)
Runeson, Per (103)
Knauss, Eric, 1977 (96)
Šmite, Darja (87)
Feldt, Robert, 1972 (85)
Gorschek, Tony (85)
Gorschek, Tony, 1972 ... (79)
Bosch, Jan (77)
Feldt, Robert (74)
Olsson, Helena Holms ... (73)
Berger, Christian, 1 ... (72)
Mendes, Emilia (67)
Börstler, Jürgen (66)
Unterkalmsteiner, Mi ... (65)
Wnuk, Krzysztof, 198 ... (64)
Horkoff, Jennifer, 1 ... (62)
Berger, Thorsten, 19 ... (58)
Mendez, Daniel (57)
Borg, Markus (55)
Torkar, Richard, 197 ... (54)
Fricker, Samuel (52)
Herold, Sebastian (51)
Steghöfer, Jan-Phili ... (46)
Pelliccione, Patrizi ... (45)
Svahnberg, Mikael (45)
Leitner, Philipp, 19 ... (45)
Pelliccione, Patrizi ... (44)
Felderer, Michael, 1 ... (44)
Chaudron, Michel, 19 ... (44)
Gren, Lucas, 1984 (44)
Börstler, Jürgen, 19 ... (44)
Heldal, Rogardt, 196 ... (43)
Torkar, Richard (42)
Lundberg, Lars (41)
Afzal, Wasif (41)
Lundell, Björn (40)
Löwe, Welf (39)
Gonzalez-Huerta, Jav ... (38)
Regnell, Björn (38)
Dittrich, Yvonne (38)
Höst, Martin (37)
Wnuk, Krzysztof (37)
Tichy, Matthias, 197 ... (36)
Alégroth, Emil, 1984 ... (36)
Engström, Emelie (36)
Mattsson, Michael (35)
Lärosäte
Chalmers tekniska högskola (1531)
Blekinge Tekniska Högskola (1484)
Göteborgs universitet (800)
Linnéuniversitetet (332)
Lunds universitet (309)
Kungliga Tekniska Högskolan (274)
Uppsala universitet (203)
Mälardalens universitet (202)
Karlstads universitet (146)
Linköpings universitet (122)
RISE (122)
Malmö universitet (111)
Högskolan i Skövde (100)
Umeå universitet (87)
Örebro universitet (40)
Luleå tekniska universitet (30)
Högskolan i Halmstad (29)
Stockholms universitet (25)
Jönköping University (24)
Högskolan Kristianstad (15)
Mittuniversitetet (13)
Karolinska Institutet (13)
Högskolan i Borås (12)
Högskolan Väst (10)
Handelshögskolan i Stockholm (9)
Sveriges Lantbruksuniversitet (8)
Södertörns högskola (7)
Högskolan Dalarna (3)
VTI - Statens väg- och transportforskningsinstitut (3)
Högskolan i Gävle (2)
IVL Svenska Miljöinstitutet (2)
Stockholms konstnärliga högskola (1)
Språk
Engelska (4843)
Svenska (27)
Tyska (5)
Odefinierat språk (1)
Kinesiska (1)
Mongoliskt språk (1)
Forskningsämne (UKÄ/SCB)
Naturvetenskap (4877)
Teknik (754)
Samhällsvetenskap (373)
Medicin och hälsovetenskap (42)
Humaniora (28)
Lantbruksvetenskap (9)
