SwePub

Result list for search "WFRF:(Lundberg Jonas) ;pers:(Löwe Welf)"

Search: WFRF:(Lundberg Jonas) > Löwe Welf

  • Results 1-10 of 28
1.
  • Danylenko, Antonina, 1986- (author)
  • Decision Algebra : A General Approach to Learning and Using Classifiers
  • 2015
  • Doctoral thesis (other academic/artistic), abstract:
    • Processing decision information is a vital part of Computer Science fields in which pattern recognition problems arise. Decision information can be generalized as alternative decisions (or classes), attributes and attribute values, which are the basis for classification. Different classification approaches exist, such as decision trees, decision tables and Naïve Bayesian classifiers, which capture and manipulate decision information in order to construct a specific decision model (or classifier). These approaches are often tightly coupled to learning strategies, special data structures, and the special characteristics of the decision information captured. They are also tied to the way certain problems, e.g., memory consumption or low accuracy, are addressed. This situation makes it hard to simply choose, compare, combine and manipulate different decision models learned over the same or different samples of decision information. Choosing and comparing decision models is not merely a matter of picking the model with the higher prediction accuracy and comparing prediction accuracies: a decision model, when used in a certain application, often also has an impact on the application's performance. Moreover, the combination and manipulation of different decision models are often implementation- or application-specific, and thus lack the generality needed to construct decision models with combined or modified decision information; they are also difficult to transfer from one application domain to another. In order to unify different approaches, we define Decision Algebra, a theoretical framework that presents decision models as higher order decision functions that abstract from their implementation details. Decision Algebra defines the operations necessary to decide, combine, approximate, and manipulate decision functions, along with operation signatures and general algebraic laws. Due to its algebraic completeness (i.e., a complete algebraic semantics of its operations) and its implementation efficiency, defining and developing decision models is simple: an instance requires implementing just one core operation, from which the other operations can be derived. Another advantage of Decision Algebra is composability: it allows combining decision models constructed using different approaches, and the accuracy and learning convergence properties of the combined model can be proven regardless of the actual approach. In addition, applications that process decision information can be defined using Decision Algebra regardless of the underlying classification approach. For example, we use Decision Algebra in the context-aware composition domain, where we show that context-aware applications improve performance when using Decision Algebra. In addition, we suggest an approach to integrate this context-aware component into legacy applications.
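The thesis abstracts a classifier as a higher-order decision function: one core operation is implemented, and the remaining operations are derived from it. The Java sketch below illustrates that shape under assumed names (DecisionFunction, decide, merge); it is a minimal illustration of the idea, not the thesis's actual API, and the tie-breaking merge is only a placeholder for a real merge over decision information.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: classifiers as values of a function type, with
// operations as combinators over that type. Names and merge semantics are
// illustrative assumptions, not the thesis's definitions.
public class DecisionAlgebraSketch {

    @FunctionalInterface
    interface DecisionFunction<C, D> {
        D decide(C context); // apply the classifier to a context

        // Derived operation, implemented once in terms of decide():
        default Map<C, D> decideAll(List<C> contexts) {
            Map<C, D> out = new LinkedHashMap<>();
            for (C c : contexts) out.put(c, decide(c));
            return out;
        }
    }

    // Toy combinator: where the two classifiers disagree, fall back to a
    // tie-breaking default (a placeholder for a real merge of models).
    static <C, D> DecisionFunction<C, D> merge(
            DecisionFunction<C, D> a, DecisionFunction<C, D> b, D tieBreak) {
        return c -> {
            D da = a.decide(c), db = b.decide(c);
            return da.equals(db) ? da : tieBreak;
        };
    }

    public static void main(String[] args) {
        DecisionFunction<Integer, String> f = x -> x < 10 ? "low" : "high";
        DecisionFunction<Integer, String> g = x -> x < 20 ? "low" : "high";
        DecisionFunction<Integer, String> h = merge(f, g, "unknown");
        System.out.println(h.decideAll(List.of(5, 15, 25))); // {5=low, 15=unknown, 25=high}
    }
}
```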
2.
  • Danylenko, Antonina, 1986- (author)
  • Decisions : Algebra and Implementation
  • 2011
  • Licentiate thesis (other academic/artistic), abstract:
    • Processing decision information is a constitutive part in a number of applications in Computer Science fields. In general, decision information can be used to deduce the relationship between a certain context and a certain decision. Decision information is represented by a decision model that captures this information. Frequently used examples of decision models are decision tables and decision trees. The choice of an appropriate decision model has an impact on application performance in terms of memory consumption and execution time. High memory expenses can occur due to redundancy in a decision model, and high execution time is often a consequence of an unsuitable decision model. Applications in different domains try to overcome these problems by introducing new data structures or algorithms for implementing decision models. These solutions are usually domain-specific and hard to transfer from one domain to another. Different application domains of Computer Science often process decision information in a similar way and, hence, have similar problems. We should thus be able to present a unifying approach that is applicable in all application domains for capturing and manipulating decision information. Therefore, the goal of this thesis is (i) to suggest a general structure (Decision Algebra) which provides a common theoretical framework that captures decision information and defines operations (signatures) for storing, accessing, merging, approximating, and manipulating such information, along with some general algebraic laws, regardless of the implementation used. Our Decision Algebra allows defining different construction strategies for decision models and data structures that capture decision information as implementation variants, and it simplifies experimental comparisons between them. Additionally, this thesis presents (ii) an implementation of Decision Algebra that captures the information in a non-redundant way and performs the operations efficiently. In fact, we show that existing decision models that originated in the field of Data Mining and Machine Learning, and variants thereof as exploited in special algorithms, can be understood as alternative implementation variants of the Decision Algebra, obtained by varying the implementations of the Decision Algebra operations. Hence, this work (iii) contributes to a classification of existing technology for processing decision information in different application domains of Computer Science.
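The abstract attributes high memory cost to redundancy in decision models. A small, self-contained illustration of what such redundancy can look like in a decision table: four rows collapse to two once an attribute that never changes the decision is dropped. The table layout and all attribute names are our assumptions, not the thesis's implementation.

```java
import java.util.*;

// Illustrative only: decision information stored as a table keyed by
// attribute-value tuples. Redundancy shows up as groups of rows that
// differ only in attributes that do not change the decision.
public class DecisionTableSketch {
    public static void main(String[] args) {
        // Rows: (weather, day) -> activity
        Map<List<String>, String> table = new LinkedHashMap<>();
        table.put(List.of("sunny", "weekday"), "walk");
        table.put(List.of("sunny", "weekend"), "walk");
        table.put(List.of("rainy", "weekday"), "stay");
        table.put(List.of("rainy", "weekend"), "stay");

        // The 'day' attribute never changes the decision here, so the four
        // rows carry the same information as two: weather alone decides.
        Map<String, Set<String>> byWeather = new LinkedHashMap<>();
        table.forEach((key, decision) ->
            byWeather.computeIfAbsent(key.get(0), k -> new HashSet<>()).add(decision));
        byWeather.forEach((weather, decisions) -> {
            if (decisions.size() == 1)
                System.out.println(weather + " -> " + decisions.iterator().next());
        });
        // prints: sunny -> walk, rainy -> stay (a non-redundant capture)
    }
}
```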
3.
  • Danylenko, Antonina, et al. (author)
  • Decisions : Algebra, Implementation, and First Experiments
  • 2014
  • In: Journal of Universal Computer Science (Online), ISSN 0948-695X, 0948-6968; 20:9, pp. 1174-1231
  • Journal article (peer-reviewed), abstract:
    • Classification is a constitutive part of many different fields of Computer Science. Several approaches exist that capture and manipulate classification information in order to construct a specific classification model. These approaches are often tightly coupled to certain learning strategies, to special data structures for capturing the models, and to how common problems, e.g., fragmentation, replication and model overfitting, are addressed. In order to unify these different classification approaches, we define a Decision Algebra which defines models for classification as higher order decision functions, abstracting from their implementations using decision trees (or similar), decision rules, decision tables, etc. Decision Algebra defines operations for learning, applying, storing, merging, approximating, and manipulating models for classification, along with some general algebraic laws, regardless of the implementation used. The Decision Algebra abstraction has several advantages. First, several useful Decision Algebra operations (e.g., learning and deciding) can be derived from the implementation of a few core operations (including merging and approximating). Second, applications using classification can be defined regardless of the different approaches. Third, certain properties of Decision Algebra operations can be proved regardless of the actual implementation. For instance, we show that the merger of a series of probably accurate decision functions is even more accurate, which can be exploited for efficient and general online learning. As a proof of the Decision Algebra concept, we compare decision trees with decision graphs, an efficient implementation of the Decision Algebra core operations which captures classification models in a non-redundant way. Compared to classical decision tree implementations, decision graphs are 20% faster in learning and classification without accuracy loss, and reduce memory consumption by 44%. This is the result of experiments on a number of standard benchmark data sets comparing accuracy, access time, and size of decision graphs and trees as constructed by the standard C4.5 algorithm. Finally, in order to test our hypothesis about increased accuracy when merging decision functions, we merged a series of decision graphs constructed over the data sets. The results show that at each step the accuracy of the merged decision graph increases, with a final accuracy gain of up to 16%.
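The article's online-learning result rests on merging a series of decision functions learned over successive batches. The sketch below shows that usage pattern in Java; the majority vote is a stand-in for the paper's actual merge operation, and all model and method names are illustrative.

```java
import java.util.*;
import java.util.function.Function;

// Sketch of the online-learning pattern from the abstract: learn a model per
// batch, then fold the models together. Majority vote is a placeholder for
// the paper's merge over decision information.
public class OnlineMergeSketch {

    // Here a classifier is simply a function from a feature to a label.
    static <C, D> Function<C, D> majorityMerge(List<Function<C, D>> models) {
        return c -> {
            Map<D, Integer> votes = new HashMap<>();
            for (Function<C, D> m : models)
                votes.merge(m.apply(c), 1, Integer::sum);
            return Collections.max(votes.entrySet(),
                    Map.Entry.comparingByValue()).getKey();
        };
    }

    public static void main(String[] args) {
        // Three per-batch models with slightly different thresholds:
        List<Function<Integer, String>> batchModels = List.of(
                x -> x < 9 ? "low" : "high",
                x -> x < 10 ? "low" : "high",
                x -> x < 12 ? "low" : "high");
        Function<Integer, String> merged = majorityMerge(batchModels);
        System.out.println(merged.apply(11)); // two of three say "high"
    }
}
```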
4.
  • Danylenko, Antonina, 1986-, et al. (author)
  • Decisions : Algebra and Implementation
  • 2011
  • In: Machine Learning and Data Mining in Pattern Recognition. Berlin, Heidelberg: Springer. ISBN 9783642231988, 9783642231995; pp. 31-45
  • Book chapter (peer-reviewed), abstract:
    • This paper presents a generalized theory for capturing and manipulating classification information. We define a decision algebra which models decision-based classifiers as higher order decision functions, abstracting from implementations using decision trees (or similar), decision rules, and decision tables. As a proof of the decision algebra concept, we compare decision trees with decision graphs, yet another instantiation of the proposed theoretical framework, which implement the decision algebra operations efficiently and capture classification information in a non-redundant way. Compared to classical decision tree implementations, decision graphs speed up learning and classification by up to 20% without accuracy loss, and reduce memory consumption by 44%. This is confirmed by experiments.
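Decision graphs capture classification information non-redundantly. One generic way to obtain such sharing is hash-consing: structurally identical subtrees are stored once, turning a tree into a DAG. The sketch below demonstrates that technique; it is an assumption about how such graphs could be realized, not necessarily the paper's construction.

```java
import java.util.*;

// Hash-consing sketch: structurally identical decision subtrees are stored
// once, so a tree becomes a graph (DAG). A generic technique, offered only
// as one possible realization of non-redundant decision graphs.
public class DecisionGraphSketch {
    // A node tests an attribute; children are indexed by attribute value.
    // Leaves carry a label and have no children.
    record Node(String attrOrLabel, List<Node> children) {}

    static final Map<Node, Node> POOL = new HashMap<>();

    static Node mk(String attrOrLabel, List<Node> children) {
        Node n = new Node(attrOrLabel, children);
        return POOL.computeIfAbsent(n, k -> k); // reuse identical nodes
    }

    public static void main(String[] args) {
        Node low = mk("low", List.of());
        Node high = mk("high", List.of());
        // Two tests with identical subtrees collapse to one shared node:
        Node t1 = mk("x<10", List.of(low, high));
        Node t2 = mk("x<10", List.of(low, high));
        System.out.println(t1 == t2);    // true: one node, two references
        System.out.println(POOL.size()); // 3 nodes instead of 6
    }
}
```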
5.
  • Edvinsson, Marcus, et al. (author)
  • Parallel Data-Flow Analysis for Multi-Core Machines
  • 2011
  • Conference paper (peer-reviewed), abstract:
    • Static program analysis supporting software development is often part of edit-compile cycles, and precise program analysis is time-consuming. Points-to analysis is a data-flow-based static program analysis used to find object references in programs. Its applications include test case generation, compiler optimizations, and program understanding. Recent increases in the processing power of desktop computers come mainly from multiple cores. Parallel algorithms are vital for the simultaneous use of multiple cores. An efficient parallel points-to analysis requires sufficient work for each processing unit. The present paper presents a parallel points-to analysis of object-oriented programs. It exploits the facts that (1) different target methods of polymorphic calls and (2) independent control-flow branches can be analyzed in parallel. Carefully selected thresholds guarantee that each parallel thread has sufficient work to do and that only little work is redundant with other threads. Our experiments show that this approach achieves a maximum speed-up of 4.5.
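The paper's strategy is to analyze the target methods of a polymorphic call as independent tasks, but only when a task promises enough work. Below is a skeletal Java sketch of that scheduling policy; estimateWork, analyzeMethod, and the threshold value are placeholders, not the paper's implementation.

```java
import java.util.List;
import java.util.concurrent.*;

// Skeleton of the parallelization strategy from the abstract: possible
// targets of a polymorphic call become tasks, but only if they exceed a work
// threshold; small targets run inline to avoid thread overhead.
public class ParallelPointsToSketch {
    static final int WORK_THRESHOLD = 4_000; // assumed tuning constant
    static final ExecutorService POOL =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    static int estimateWork(String method) { return method.length() * 500; } // placeholder metric

    static void analyzeMethod(String method) {
        System.out.println("analyzed " + method + " on " + Thread.currentThread().getName());
    }

    static void analyzeCallTargets(List<String> targets) throws Exception {
        List<Future<?>> futures = new java.util.ArrayList<>();
        for (String m : targets) {
            if (estimateWork(m) >= WORK_THRESHOLD)
                futures.add(POOL.submit(() -> analyzeMethod(m))); // big enough: own task
            else
                analyzeMethod(m);                                 // too small: run inline
        }
        for (Future<?> f : futures) f.get(); // join before merging results
    }

    public static void main(String[] args) throws Exception {
        analyzeCallTargets(List.of("A.toString", "B.toString", "C.hash"));
        POOL.shutdown();
    }
}
```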
6.
7.
  • Edvinsson, Marcus, et al. (author)
  • Parallel Reachability and Escape Analysis
  • 2010
  • In: 2010 10th IEEE Working Conference on Source Code Analysis and Manipulation (SCAM). IEEE Press. ISBN 9781424486557; pp. 125-134
  • Conference paper (peer-reviewed), abstract:
    • Static program analysis usually consists of a number of steps, each producing partial results. For example, the points-to analysis step, calculating object references in a program, usually just provides the input for larger client analyses like reachability and escape analyses. All these analyses are computationally intense, and it is therefore vital to create parallel approaches that make use of the processing power that comes from multiple cores in modern desktop computers. The present paper presents two parallel approaches to increase the efficiency of reachability analysis and escape analysis, based on a parallel points-to analysis. The experiments show that the two parallel approaches achieve a speed-up of 1.5 for reachability analysis and 3.8 for escape analysis on 8 cores for a benchmark suite of Java programs.
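Both client analyses consume the points-to result and are independent of each other, so they can run concurrently once the points-to step has finished. A minimal sketch of that pipeline shape follows; all analysis bodies, names, and numbers are placeholders.

```java
import java.util.Set;
import java.util.concurrent.CompletableFuture;

// Sketch of the pipeline structure described in the abstract: points-to
// analysis runs first; the two client analyses, being independent consumers
// of its result, run in parallel. All bodies are placeholders.
public class ParallelClientsSketch {
    record PointsTo(Set<String> edges) {}

    static PointsTo pointsToAnalysis() { return new PointsTo(Set.of("a->o1", "b->o2")); }
    static String reachability(PointsTo pt) { return "reachable edges: " + pt.edges().size(); }
    static String escape(PointsTo pt) { return "escape facts computed over " + pt.edges().size() + " edges"; }

    public static void main(String[] args) {
        PointsTo pt = pointsToAnalysis(); // shared, read-only input
        CompletableFuture<String> r = CompletableFuture.supplyAsync(() -> reachability(pt));
        CompletableFuture<String> e = CompletableFuture.supplyAsync(() -> escape(pt));
        System.out.println(r.join());
        System.out.println(e.join());
    }
}
```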
8.
  • Gutzmann, Tobias (author)
  • Benchmarking Points-to Analysis
  • 2013
  • Doctoral thesis (other academic/artistic), abstract:
    • Points-to analysis is a static program analysis that, simply put, computes which objects created at certain points of a given program might show up at which other points of the same program. In particular, it computes possible targets of a call and possible objects referenced by a field. Such information is essential input to many client applications in optimizing compilers and software engineering tools. Comparing experimental results with respect to accuracy and performance is required in order to distinguish the promising from the less promising approaches to points-to analysis. Unfortunately, comparing the accuracy of two different points-to analysis implementations is difficult, as there are many pitfalls in the details. In particular, there are no standardized means to perform such a comparison, i.e., no benchmark suite (a set of programs with well-defined rules for how to compare different points-to analysis results) exists. Therefore, different researchers use their own means to evaluate their approaches to points-to analysis. To complicate matters, even the same researchers do not stick to the same evaluation methods, which often makes it impossible to take two research publications and reliably tell which one describes the more accurate points-to analysis. In this thesis, we define a methodology for benchmarking points-to analysis. We create a benchmark suite, compare three different points-to analysis implementations with each other based on this methodology, and explain differences in analysis accuracy. We also argue for the need for a Gold Standard, i.e., a set of benchmark programs with exact analysis results. Such a Gold Standard is often required to compare points-to analysis results, and it also makes it possible to assess the exact accuracy of points-to analysis results. Since such a Gold Standard cannot be computed automatically, it needs to be created semi-automatically by the research community. We propose a process for creating a Gold Standard based on under-approximating it through optimistic (dynamic) analysis and over-approximating it through conservative (static) analysis. With the help of improved static and dynamic points-to analysis and expert knowledge about benchmark programs, we present a first attempt towards a Gold Standard. We also provide a Web-based benchmarking platform through which researchers can compare their own experimental results with those of other researchers, and can contribute towards the creation of a Gold Standard.
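The proposed Gold Standard is bracketed between a dynamic under-approximation and a static over-approximation: facts observed at run time are certainly in it, facts outside the conservative static result are certainly not, and the remainder needs expert review. A toy illustration of that bracketing with made-up facts:

```java
import java.util.Set;

// Sketch of the bracketing idea from the abstract: dynamic analysis
// under-approximates the Gold Standard (everything it observes really
// happens); conservative static analysis over-approximates it. Facts in
// between must be settled by hand. The string facts are placeholders.
public class GoldStandardSketch {
    public static void main(String[] args) {
        Set<String> dynamicSet = Set.of("x->o1", "y->o2");          // observed at run time
        Set<String> staticSet  = Set.of("x->o1", "y->o2", "y->o3"); // statically possible

        for (String fact : staticSet) {
            if (dynamicSet.contains(fact))
                System.out.println(fact + ": in the Gold Standard (observed)");
            else
                System.out.println(fact + ": undecided, needs expert review");
        }
        // Anything outside the static over-approximation is provably absent.
    }
}
```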
9.
  • Gutzmann, Tobias, et al. (author)
  • Collections Frameworks for Points-to Analysis
  • 2012
  • In: IEEE 12th International Working Conference on Source Code Analysis and Manipulation (SCAM) 2012. IEEE. ISBN 9781467323987; pp. 4-13
  • Conference paper (peer-reviewed), abstract:
    • Points-to information is the basis for many analyses and transformations, e.g., for program understanding and optimization. Collections frameworks are part of most modern programming languages' infrastructures and are used by many applications. The richness of features and the inherent structure of collection classes affect both the performance and the precision of points-to analysis negatively. In this paper, we discuss how to replace original collections frameworks with versions specialized for points-to analysis. We implement such a replacement for the Java Collections Framework and demonstrate its benefits for points-to analysis by applying it to three different points-to analysis implementations. In experiments, context-sensitive points-to analyses require, on average, 16-24% less time while at the same time being more precise. Context-insensitive analysis in conjunction with inlining also benefits in both precision and analysis cost.
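As a flavor of why an analysis-specialized collections framework helps, the hypothetical replacement class below keeps its storage in a single flat array field and has no iterator or view machinery, so a points-to analysis has far fewer fields and methods to track than in java.util.ArrayList. This is our illustration of the idea, not the paper's actual replacement code.

```java
// Hypothetical analysis-friendly list: one heap field, no inner classes.
public class SimpleList<E> {
    private Object[] elems = new Object[8]; // the only storage field to model
    private int size;

    public void add(E e) {
        if (size == elems.length) {          // grow by doubling
            Object[] bigger = new Object[2 * size];
            System.arraycopy(elems, 0, bigger, 0, size);
            elems = bigger;
        }
        elems[size++] = e;
    }

    @SuppressWarnings("unchecked")
    public E get(int i) {
        if (i < 0 || i >= size) throw new IndexOutOfBoundsException(String.valueOf(i));
        return (E) elems[i];
    }

    public int size() { return size; }

    public static void main(String[] args) {
        SimpleList<String> l = new SimpleList<>();
        l.add("o1"); l.add("o2");
        System.out.println(l.get(1) + ", size " + l.size()); // o2, size 2
    }
}
```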
10.
  • Gutzmann, Tobias, et al. (author)
  • Feedback-driven Points-to Analysis
  • 2010
  • Report (other academic/artistic), abstract:
    • Points-to analysis is a static program analysis that extracts reference information from a given input program. Its accuracy is limited due to abstractions that any such analysis needs to make. Further, the exact analysis results are unknown, i.e., no so-called Gold Standard exists for points-to analysis. This hinders the assessment of new ideas for points-to analysis, as results can be compared only relative to results obtained by other inaccurate analyses. In this paper, we present feedback-driven points-to analysis. We suggest performing (any classical) points-to analysis with the points-to results at certain program points guarded by a-priori upper bounds. Such upper bounds can come from other points-to analyses (this is of interest when different approaches are not strictly ordered in terms of accuracy) and from human insight, i.e., manual proofs that certain points-to relations are infeasible for every program run. This gives us a tool to compute very accurate points-to analyses and, ultimately, to manually create a Gold Standard.
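The report's guarding idea: whenever the analysis updates the points-to set of a guarded variable, the result is intersected with the a-priori upper bound for that variable, cutting infeasible targets out of the fixed-point computation. A toy sketch of that step, with placeholder variable and object names:

```java
import java.util.*;

// Sketch of upper-bound guarding: updates to a guarded variable's points-to
// set are intersected with an a-priori bound (from another analysis or a
// manual proof). All names and facts are placeholders.
public class FeedbackPointsToSketch {
    static final Map<String, Set<String>> UPPER_BOUND =
            Map.of("x", Set.of("o1", "o2")); // a-priori bound: x points nowhere else

    static final Map<String, Set<String>> POINTS_TO = new HashMap<>();

    static void addTargets(String var, Set<String> newTargets) {
        Set<String> set = POINTS_TO.computeIfAbsent(var, v -> new HashSet<>());
        set.addAll(newTargets);
        Set<String> bound = UPPER_BOUND.get(var);
        if (bound != null) set.retainAll(bound); // feedback: drop infeasible targets
    }

    public static void main(String[] args) {
        addTargets("x", Set.of("o1", "o3"));     // o3 lies outside the bound
        System.out.println(POINTS_TO.get("x"));  // [o1]
    }
}
```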