SwePub
Search the SwePub database


Hit list for search: L773:0928 8910 OR L773:1573 7535

  • Result 1-9 of 9
1.
  • Bernardi, Simona, et al. (author)
  • DICE simulation : a tool for software performance assessment at the design stage
  • 2022
  • In: Automated Software Engineering. - : Springer. - 0928-8910 .- 1573-7535. ; 29
  • Journal article (peer-reviewed). Abstract:
    • In recent years, we have seen many performance fiascos in the deployment of new systems, such as the US health insurance web site. This paper describes the functionality and architecture, as well as success stories, of a tool that helps address these types of issues. The tool allows assessing software designs regarding quality, in particular performance and reliability. Starting from a UML design with quality annotations, the tool applies model-transformation techniques to yield analyzable models. Such models are then leveraged by the tool to compute quality metrics. Finally, quality results, over the design, are presented to the engineer, in terms of the problem domain. Hence, the tool is an asset for the software engineer to evaluate system quality through software designs. While leveraging the Eclipse platform, the tool uses UML and the MARTE, DAM and DICE profiles for the system design and the quality modeling.
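
A rough sense of what such a design-stage performance assessment computes can be given with a toy Python sketch. This is not the DICE tool itself: the annotation format below (component mapped to service and arrival rates) and the component names are invented, and each component is approximated as an M/M/1 queue.

    # Hypothetical sketch, not the DICE tool: derive simple performance
    # metrics from quality annotations attached to a software design.
    # Assumed annotation format: component -> (service rate, arrival rate),
    # both in requests per second.
    annotations = {
        "web_frontend": (120.0, 90.0),
        "booking_service": (80.0, 60.0),
        "database": (200.0, 150.0),
    }

    def assess(annotations):
        results = {}
        for name, (mu, lam) in annotations.items():
            rho = lam / mu                           # utilization
            if rho >= 1.0:
                results[name] = (rho, float("inf"))  # saturated component
            else:
                results[name] = (rho, 1.0 / (mu - lam))  # M/M/1 mean response time
        return results

    for name, (rho, resp) in assess(annotations).items():
        print(f"{name}: utilization {rho:.2f}, mean response time {resp * 1000:.1f} ms")

A real transformation would target a full queueing network or Petri net, but the pattern of annotate, transform, compute, and report back to the design is the same.
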
2.
  • Lavesson, Niklas, et al. (author)
  • A method for evaluation of learning components
  • 2014
  • In: Automated Software Engineering. - : Springer. - 0928-8910 .- 1573-7535. ; 21:1, pp. 41-63
  • Journal article (peer-reviewed). Abstract:
    • Today, it is common to include machine learning components in software products. These components offer specific functionalities such as image recognition, time series analysis, and forecasting but may not satisfy the non-functional constraints of the software products. It is difficult to identify suitable learning algorithms for a particular task and software product because the non-functional requirements of the product affect algorithm suitability. A particular suitability evaluation may thus require the assessment of multiple criteria to analyse trade-offs between functional and non-functional requirements. For this purpose, we present a method for APPlication-Oriented Validation and Evaluation (APPrOVE). This method comprises four sequential steps that address the stated evaluation problem. The method provides a common ground for different stakeholders and enables a multi-expert and multi-criteria evaluation of machine learning algorithms prior to inclusion in software products. Essentially, the problem addressed in this article concerns how to choose the appropriate machine learning component for a particular software product.
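
The multi-expert, multi-criteria trade-off such a method evaluates can be illustrated with a minimal Python sketch. The weighted-sum aggregation, criteria names, and expert roles below are illustrative assumptions and do not reproduce APPrOVE's four steps.

    # Hypothetical sketch: score candidate learning components against both
    # functional (accuracy) and non-functional (latency, memory) criteria,
    # normalized to [0, 1], with one weight vector per expert.
    candidates = {
        "random_forest": {"accuracy": 0.90, "latency": 0.40, "memory": 0.50},
        "linear_model": {"accuracy": 0.78, "latency": 0.95, "memory": 0.95},
    }

    expert_weights = [
        {"accuracy": 0.6, "latency": 0.2, "memory": 0.2},  # data scientist
        {"accuracy": 0.3, "latency": 0.5, "memory": 0.2},  # product engineer
    ]

    def score(profile, weights):
        # Weighted sum over the criteria an expert cares about.
        return sum(w * profile[criterion] for criterion, w in weights.items())

    for name, profile in candidates.items():
        per_expert = [score(profile, w) for w in expert_weights]
        print(name, "per-expert:", per_expert, "mean:", sum(per_expert) / len(per_expert))

On this invented data the simpler model wins once non-functional criteria are weighted in, which is exactly the kind of trade-off the abstract says a suitability evaluation must surface.
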
3.
  • Liparas, Dimitris, et al. (author)
  • Applying the Mahalanobis-Taguchi Strategy for Software Defect Diagnosis
  • 2012
  • In: Automated Software Engineering. - : Springer Science and Business Media LLC. - 1573-7535 .- 0928-8910. ; 19:2, pp. 141-165
  • Journal article (peer-reviewed). Abstract:
    • The Mahalanobis-Taguchi (MT) strategy combines mathematical and statistical concepts like Mahalanobis distance, Gram-Schmidt orthogonalization and experimental designs to support diagnosis and decision-making based on multivariate data. The primary purpose is to develop a scale to measure the degree of abnormality of cases, compared to “normal” or “healthy” cases, i.e. a continuous scale from a set of binary classified cases. An optimal subset of variables for measuring abnormality is then selected and rules for future diagnosis are defined based on them and the measurement scale. This maps well to problems in software defect prediction based on a multivariate set of software metrics and attributes. In this paper, the MT strategy, combined with a cluster analysis technique for determining the most appropriate training set, is described and applied to well-known datasets in order to evaluate the fault-proneness of software modules. The measurement scale resulting from the MT strategy is evaluated using ROC curves and shows that it is a promising technique for software defect diagnosis. It compares favorably to previously evaluated methods on a number of publicly available data sets. The special characteristic of the MT strategy, that it quantifies the level of abnormality, can also stimulate and inform discussions with engineers and managers in different defect prediction situations.
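
The measurement at the core of the MT strategy, the Mahalanobis distance of a case from the "normal" group, can be sketched in a few lines of Python. The module metrics and values below are hypothetical, and the variable-selection and experimental-design steps of the full strategy are omitted.

    # Minimal sketch of the MT measurement scale: how far is a module from
    # the "normal" group, accounting for correlations between metrics?
    import numpy as np

    # Hypothetical per-module metrics: [lines of code, complexity, fan-out].
    normal = np.array([[120, 4, 3], [200, 6, 5], [90, 3, 2],
                       [150, 5, 4], [170, 4, 6]], dtype=float)
    suspect = np.array([900, 25, 14], dtype=float)

    mean = normal.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(normal, rowvar=False))

    diff = suspect - mean
    md_squared = diff @ inv_cov @ diff  # squared Mahalanobis distance
    print("squared Mahalanobis distance:", md_squared)

Larger values mean greater abnormality; the MT strategy builds its continuous scale from this quantity and then selects the metric subset that best separates faulty from healthy modules.
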
4.
  • Magnusson, Eva, et al. (author)
  • Demand-driven evaluation of collection attributes
  • 2009
  • In: Automated Software Engineering. - : Springer Science and Business Media LLC. - 1573-7535 .- 0928-8910. ; 16:2, pp. 291-322
  • Journal article (peer-reviewed). Abstract:
    • In order to make attribute grammars useful for complicated analysis tasks, a number of extensions to the original Knuth formalism have been suggested. One such extension is the collection attribute mechanism, which allows the value of an attribute to be defined as a combination of contributions from distant nodes in the abstract syntax tree. Another extension that has proven useful is circular attributes, evaluated using fixed-point iteration. In this paper we show how collection attributes and the combined formalism, circular collection attributes, have been implemented in our declarative meta-programming system JastAdd, and how they can be used for a variety of applications including devirtualization analysis, metrics and flow analysis. A number of evaluation algorithms are introduced and compared for applicability and efficiency. The key design criterion for our algorithms is that they work well with demand evaluation, i.e., defined properties are computed only if they are actually needed for a particular program. We show that the best algorithms work well on large practical problems including the analysis of large Java programs.
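
Demand-driven evaluation of a collection attribute can be sketched outside JastAdd; the Python classes below are illustrative assumptions (JastAdd itself generates Java). The collection, here the set of call sites referring to each declaration, is computed by a single tree survey only when first requested, then memoized.

    # Illustrative sketch: a demand-evaluated collection attribute over an AST.
    class Node:
        def __init__(self, kind, name=None, children=()):
            self.kind, self.name, self.children = kind, name, list(children)

    class Analyzer:
        def __init__(self, root):
            self.root = root
            self._call_sites = None  # memo: the survey runs at most once

        def call_sites(self, decl_name):
            if self._call_sites is None:  # demand: compute only when needed
                self._call_sites = {}
                self._collect(self.root)
            return self._call_sites.get(decl_name, set())

        def _collect(self, node):
            if node.kind == "call":       # contribution from a distant node
                self._call_sites.setdefault(node.name, set()).add(id(node))
            for child in node.children:
                self._collect(child)

    tree = Node("block", children=[Node("call", "f"), Node("call", "g"), Node("call", "f")])
    print(len(Analyzer(tree).call_sites("f")))  # -> 2, computed on first demand
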
5.
  • Mukelabai, Mukelabai, 1985, et al. (author)
  • Tackling Combinatorial Explosion: A Study of Industrial Needs and Practices for Analyzing Highly Configurable Systems
  • 2018
  • In: Automated Software Engineering. - New York, NY, USA : ACM. - 1573-7535 .- 0928-8910. ; pp. 155-166
  • Conference paper (peer-reviewed). Abstract:
    • Highly configurable systems are complex pieces of software. To tackle this complexity, hundreds of dedicated analysis techniques have been conceived, many of which are able to analyze system properties for all possible system configurations, as opposed to traditional, single-system analyses. Unfortunately, it is largely unknown whether these techniques are adopted in practice, whether they address actual needs, or what strategies practitioners actually apply to analyze highly configurable systems. We present a study of analysis practices and needs in industry. It relied on a survey with 27 practitioners engineering highly configurable systems and follow-up interviews with 15 of them, covering 18 different companies from eight countries. We confirm that typical properties considered in the literature (e.g., reliability) are relevant, that consistency between variability models and artifacts is critical, but that the majority of analyses for specifications of configuration options (a.k.a. variability model analysis) is not perceived as needed. We identified rather pragmatic analysis strategies, including practices to avoid the need for analysis. For instance, testing with experience-based sampling is the most commonly applied strategy, while systematic sampling is rarely applicable. We discuss analyses that are missing and synthesize our insights into suggestions for future research.
6.
  • Narasimhan, Krishna, et al. (author)
  • Cleaning up copy–paste clones with interactive merging
  • 2018
  • In: Automated Software Engineering. - : Springer Science and Business Media LLC. - 0928-8910 .- 1573-7535. ; 25:3, pp. 627-673
  • Journal article (peer-reviewed). Abstract:
    • Copy-paste-modify is a form of software reuse in which developers explicitly duplicate source code. This duplicated source code, amounting to a code clone, is adapted for a new purpose. Copy-paste-modify is popular among software developers; however, empirical evidence shows that it complicates software maintenance and increases the frequency of bugs. To allow developers to use copy-paste-modify without having to worry about these concerns, we propose an approach that automatically merges similar pieces of code by creating suitable abstractions. Because different kinds of abstractions may be beneficial in different contexts, our approach offers multiple abstraction mechanisms, which were selected based on a study of popular open-source repositories. To demonstrate the feasibility of our approach, we have designed and implemented a prototype merging tool for C++ and evaluated it on a number of code clones exhibiting some variation, i.e., near-miss clones, in popular Open Source packages. We observed that maintainers find our algorithmically created abstractions to be largely preferable to the existing duplicated code.
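
The kind of merge the paper describes can be shown with a toy analogue. The prototype targets C++; the Python functions below are hypothetical near-miss clones whose single point of variation, the field being summed, is abstracted into a parameter.

    # Before: two copy-paste-modified clones that differ only in a field name.
    def total_price(items):
        result = 0
        for item in items:
            result += item["price"]
        return result

    def total_weight(items):
        result = 0
        for item in items:
            result += item["weight"]
        return result

    # After merging: the variation is abstracted into a parameter.
    def total(items, field):
        result = 0
        for item in items:
            result += item[field]
        return result

    items = [{"price": 3, "weight": 1}, {"price": 5, "weight": 2}]
    assert total(items, "price") == total_price(items) == 8
    assert total(items, "weight") == total_weight(items) == 3
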
7.
  • Sattler, Florian, et al. (author)
  • Lifting inter-app data-flow analysis to large app sets
  • 2018
  • In: Automated Software Engineering : An International Journal. - : Springer Science and Business Media LLC. - 0928-8910. ; 25:2, pp. 315-346
  • Journal article (peer-reviewed). Abstract:
    • Mobile apps process increasing amounts of private data, giving rise to privacy concerns. Such concerns do not arise only from single apps, which might—accidentally or intentionally—leak private information to untrusted parties, but also from multiple apps communicating with each other. Certain combinations of apps can create critical data flows not detectable by analyzing single apps individually. While sophisticated tools exist to analyze data flows inside and across apps, none of these scale to large numbers of apps, given the combinatorial explosion of possible (inter-app) data flows. We present a scalable approach to analyze data flows across Android apps. At the heart of our approach is a graph-based data structure that represents inter-app flows efficiently. Following ideas from product-line analysis, the data structure exploits redundancies among flows and thereby tames the combinatorial explosion. Instead of focusing on specific installations of app sets on mobile devices, we lift traditional data-flow analysis approaches to analyze and represent data flows of all possible combinations of apps. We developed the tool Sifta and applied it to several existing app benchmarks and real-world app sets, demonstrating its scalability and accuracy.
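
The graph-based core of the approach can be sketched briefly. The app names and flow edges below are invented for illustration; Sifta's actual representation and product-line-style lifting are more involved.

    # Hypothetical sketch: one shared flow graph answers leak queries for any
    # combination of apps by reachability, instead of analyzing each app set.
    flows = {
        "ContactsApp": ["MessengerApp"],
        "MessengerApp": ["CloudSyncApp", "AdLibrary"],
        "CloudSyncApp": [],
        "AdLibrary": ["RemoteServer"],  # assumed untrusted sink
    }

    def reachable(graph, start):
        seen, stack = set(), [start]
        while stack:
            for nxt in graph.get(stack.pop(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    # A critical inter-app flow exists if an untrusted sink is reachable
    # from a private source.
    print("RemoteServer" in reachable(flows, "ContactsApp"))  # -> True
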
8.
  • Szilagyi, G., et al. (author)
  • Static and Dynamic Slicing of Constraint Logic Programs
  • 2002
  • In: Automated Software Engineering. - 0928-8910 .- 1573-7535. ; 9:1, pp. 41-65
  • Journal article (peer-reviewed). Abstract:
    • Slicing is a program analysis technique originally developed for imperative languages. It facilitates understanding of data flow and debugging. This paper discusses slicing of Constraint Logic Programs. Constraint Logic Programming (CLP) is an emerging software technology with a growing number of applications. Data flow in constraint programs is not explicit, and for this reason the concepts of slice and the slicing techniques of imperative languages are not directly applicable. This paper formulates declarative notions of slice suitable for CLP. They provide a basis for defining slicing techniques (both dynamic and static) based on variable sharing. The techniques are further extended by using groundness information. A prototype dynamic slicer of CLP programs implementing the presented ideas is briefly described together with the results of some slicing experiments.
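
What a backward slice computes is easiest to see with explicit dependences; the hypothetical Python sketch below slices over a given data-dependence graph. The paper's contribution is defining such slices declaratively for CLP, where data flow is implicit, which this sketch does not attempt.

    # Hypothetical sketch: backward slicing over an explicit dependence graph.
    deps = {  # statement -> statements whose values it depends on
        "s4: print(z)": ["s3: z = x + y"],
        "s3: z = x + y": ["s1: x = 1", "s2: y = 2"],
        "s5: w = 7": [],  # unrelated to the slicing criterion
    }

    def backward_slice(criterion):
        sliced, work = set(), [criterion]
        while work:
            stmt = work.pop()
            if stmt not in sliced:
                sliced.add(stmt)
                work.extend(deps.get(stmt, []))
        return sliced

    # Slicing on the output statement keeps s1-s4 and drops the unrelated s5.
    for stmt in sorted(backward_slice("s4: print(z)")):
        print(stmt)
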
9.
  • Österlund, Erik, et al. (author)
  • Self-adaptive concurrent components
  • 2018
  • In: Automated Software Engineering. - : Springer. - 0928-8910 .- 1573-7535. ; 25:1, pp. 47-99
  • Journal article (peer-reviewed). Abstract:
    • Selecting the optimum component implementation variant is sometimes difficult since it depends on the component's usage context at runtime, e.g., on the concurrency level of the application using the component, call sequences to the component, actual parameters, the hardware available etc. A conservative selection of implementation variants leads to suboptimal performance, e.g., if a component is conservatively implemented as thread-safe while during the actual execution it is only accessed from a single thread. In general, an optimal component implementation variant cannot be determined before runtime and a single optimal variant might not even exist since the usage contexts can change significantly over the runtime. We introduce self-adaptive concurrent components that automatically and dynamically change not only their internal representation and operation implementation variants but also their synchronization mechanism based on a possibly changing usage context. The most suitable variant is selected at runtime rather than at compile time. The decision is revised if the usage context changes, e.g., if a single-threaded context changes to a highly contended concurrent context. As a consequence, programmers can focus on the semantics of their systems and, e.g., conservatively use thread-safe components to ensure consistency of their data, while deferring implementation and optimization decisions to context-aware runtime optimizations. We demonstrate the effect on performance with self-adaptive concurrent queues, sets, and ordered sets. In all three cases, experimental evaluation shows close to optimal performance regardless of actual contention.
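
The adaptation idea can be illustrated with a toy Python component that starts on an unsynchronized fast path and installs a lock once it observes a second thread. This is a deliberate simplification: the transition itself is not made atomic here, which a real implementation (and the paper's components) must handle.

    # Hypothetical sketch: a component that adapts its synchronization
    # mechanism to the observed usage context. Simplified: the switch from
    # unlocked to locked is itself racy and would need care in real code.
    import threading

    class AdaptiveCounter:
        def __init__(self):
            self._value = 0
            self._owner = None  # first thread seen using the component
            self._lock = None   # installed lazily on contention

        def increment(self):
            me = threading.get_ident()
            if self._lock is None:
                if self._owner is None:
                    self._owner = me
                elif self._owner != me:      # context changed: go thread-safe
                    self._lock = threading.Lock()
            if self._lock is not None:
                with self._lock:
                    self._value += 1
            else:
                self._value += 1             # single-threaded fast path

    counter = AdaptiveCounter()
    counter.increment()                      # fast path, no synchronization
    t = threading.Thread(target=counter.increment)
    t.start(); t.join()                      # second thread triggers the switch
    print(counter._value, "lock installed:", counter._lock is not None)
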