SwePub
Search the SwePub database


Result list for search "L4X0:1101 1335"

Search: L4X0:1101 1335

  • Result 1-25 of 46
1.
  • Jacobsson, Mattias, 1979- (author)
  • Tinkering with Interactive Materials : Studies, Concepts and Prototypes
  • 2013
  • Doctoral thesis (other academic/artistic)abstract
    • The concept of tinkering is a central practice within research in the field of Human Computer Interaction, dealing with new interactive forms and technologies. In this thesis, tinkering is discussed not only as a practice for interaction design in general, but as an attitude that calls for a deeper reflection on research practices, knowledge generation and the recent movements in the direction of materials and materiality within the field. The presented research exemplifies practices and studies in relation to interactive technology through a number of projects, all revolving around the design of and interaction with physical interactive artifacts. In particular, nearly all projects are focused around robotic artifacts for consumer settings. Three main contributions are presented in terms of studies, prototypes and concepts, together with a conceptual discussion around tinkering framed as an attitude within interaction design. The results from this research revolve around how grounding is achieved, partly through studies of existing interaction and partly through how tinkering-oriented activities generate knowledge in relation to design concepts, built prototypes and real-world interaction.
  •  
2.
  • Cakici, Baki, 1984- (author)
  • The Informed Gaze : On the Implications of ICT-Based Surveillance
  • 2013
  • Doctoral thesis (other academic/artistic)abstract
    • Information and communication technologies are not value-neutral. I examine two domains, public health surveillance and sustainability, in five papers covering: (i) the design and development of a software package for computer-assisted outbreak detection; (ii) a workflow for using simulation models to provide policy advice and a list of challenges for its practice; (iii) an analysis of design documents from three smart home projects presenting intersecting visions of sustainability; (iv) an analysis of EU-financed projects dealing with sustainability and ICT; (v) an analysis of the consequences of design choices when creating surveillance technologies. My contributions include three empirical studies of surveillance discourses where I identify the forms of action that are privileged and the values that are embedded into them. In these discourses, the presence of ICT entails increased surveillance, privileging technological expertise, and prioritising centralised forms of knowledge.
  •  
3.
  • Cöster, Rickard, 1973- (author)
  • Algorithms and Representations for Personalised Information Access
  • 2005
  • Doctoral thesis (other academic/artistic)abstract
    • Personalised information access systems use historical feedback data, such as implicit and explicit ratings for textual documents and other items, to better locate the right or relevant information for individual users. Three topics in personalised information access are addressed: learning from relevance feedback and document categorisation by the use of concept-based text representations, the need for scalable and accurate algorithms for collaborative filtering, and the integration of textual and collaborative information access. Two concept-based representations are investigated that both map a sparse high-dimensional term space to a dense concept space. For learning from relevance feedback, it is found that the representation combined with the proposed learning algorithm can improve the results of novel queries, when queries are more elaborate than a few terms. For document categorisation, the representation is found useful as a complement to a traditional word-based one. For collaborative filtering, two algorithms are proposed: the first for the case where there are a large number of users and items, and the second for use in a mobile device. It is demonstrated that memory-based collaborative filtering can be more efficiently implemented using inverted files, with equal or better accuracy, and that there is little reason to use the traditional in-memory vector approach when the data is sparse. An empirical evaluation of the algorithm for collaborative filtering on mobile devices shows that it can generate accurate predictions at a high speed using a small amount of resources. For integration, a system architecture is proposed where various combinations of content-based and collaborative filtering can be implemented. The architecture is general in the sense that it provides an abstract representation of documents and user profiles, and provides a mechanism for incorporating new retrieval and filtering algorithms at any time. In conclusion, this thesis demonstrates that information access systems can be personalised using scalable and accurate algorithms and representations for the increased benefit of the user. (A schematic sketch of the inverted-file idea follows this entry.)
  •  
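The collaborative-filtering result above (entry 3) can be illustrated with a small sketch: a user-based predictor driven by an inverted file that maps each item to the users who rated it. The toy data, the overlap-count weighting and the function names are illustrative assumptions, not the thesis's actual algorithm or evaluation setup.

```python
# Minimal, illustrative sketch of memory-based (user-based) collaborative
# filtering driven by an inverted index (item -> users who rated it).
# NOT the thesis's exact algorithm; the neighbour weighting is simplified.
from collections import defaultdict

ratings = {  # user -> {item: rating}; toy data
    "u1": {"a": 5, "b": 3, "c": 4},
    "u2": {"a": 4, "b": 2, "d": 5},
    "u3": {"b": 1, "c": 2, "d": 4},
}

# Inverted file: for each item, the users that rated it.
inverted = defaultdict(list)
for user, items in ratings.items():
    for item, r in items.items():
        inverted[item].append((user, r))

def predict(user, target_item):
    """Predict user's rating for target_item from co-rating neighbours."""
    overlap = defaultdict(int)          # neighbour -> number of shared items
    for item in ratings[user]:
        for other, _ in inverted[item]:
            if other != user:
                overlap[other] += 1
    num = den = 0.0
    for other, _ in inverted.get(target_item, []):
        if other in overlap:            # weight neighbours by overlap count
            num += overlap[other] * ratings[other][target_item]
            den += overlap[other]
    return num / den if den else None

print(predict("u1", "d"))
```

The point of the inverted file is that, for sparse data, only users who co-rated something with the active user are ever touched, instead of scanning a full user-item matrix.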
4.
  •  
5.
  • Laaksolahti, Jarmo (author)
  • Plot, Spectacle, and Experience : Contributions to the Design and Evaluation of Interactive Storytelling
  • 2008
  • Doctoral thesis (other academic/artistic)abstract
    • Interactive storytelling is a new form of storytelling emerging at the crossroads of many scholarly, artistic, and industrial traditions. In interactive stories the reader/spectator moves from being a receiver of a story to an active participant. By allowing participants to influence the progression and outcome of the story, new experiences will arise. This thesis has worked on three aspects of interactive storytelling: plot, spectacle, and experience. The first aspect is concerned with finding methods for combining the linear structure of a story with the freedom of action required for an interactive experience. Our contribution has focused on a method for avoiding unwanted plot twists by predicting the progression of a story and altering its course if such twists are detected. The second aspect is concerned with supporting the storytelling process at the level of spectacle. In Aristotelian terms, spectacle refers to the sensory display that meets the audience of a drama and is ultimately what causes the experience. Our contribution focuses on graphically making changing emotions and social relations, important elements of dramatic stories in our vision, salient to players at the level of spectacle. As a result we have broadened the view of what is important for interactive storytelling, as well as what makes characters believable. So far, not much research has been done on evaluating interactive stories. Experience, the third aspect, is concerned with finding qualitative methods for evaluating the experience of playing an interactive story. In particular we were interested in finding methods that could tell us something about how a player's experience evolved over time, in addition to qualities such as agency that have been claimed to be characteristic of interactive stories. Our contribution consists of two methods that we have developed and adapted for the purposes of evaluating interactive stories that can provide such information. The methods have been evaluated on three different interactive storytelling-type games.
  •  
6.
  • Sundström, Petra, 1976- (author)
  • Designing Affective Loop Experiences
  • 2010
  • Doctoral thesis (other academic/artistic)abstract
    • There has been a lack of attention to the emotional and physical aspects of communication in how we have, up to now, approached communication between people in the field of Human Computer Interaction (HCI). As designers of digital communication tools we need to consider altering the underlying model for communication that has prevailed in HCI: the information transfer model. Communication is about so much more than transferring information. It is about getting to know yourself, who you are and what part you play in the communication as it unfolds. It is also about the experience of a communication process, what it feels like, how that feeling changes, when it changes, why and perhaps by whom the process is initiated, altered, or disrupted. The idea of Affective Loop experiences in design aims to create new expressive and experiential media for whole users, embodied with the social and physical world they live in, and where communication not only is about getting the message across but also about living the experience of communication - feeling it. An Affective Loop experience is an emerging, in the moment, emotional experience where the inner emotional experience, the situation at hand and the social and physical context act together to create one complete embodied experience. The loop perspective comes from how this experience takes place in communication and how there is a rhythmic pattern in communication where those involved take turns in both expressing themselves and standing back interpreting the moment. To allow for Affective Loop experiences with or through a computer system, the user needs to be allowed to express herself in rich personal ways involving our many ways of expressing and sensing emotions – muscle tensions, facial expressions and more. For the user to become further engaged in interaction, the computer system needs the capability to return relevant, either diminishing, enforcing or disruptive feedback to those emotions expressed by the user so that she wants to continue expressing herself by either strengthening, changing or keeping her expression. We describe how we used the idea of Affective Loop experiences as a conceptual tool to navigate a design space of gestural input combined with rich instant feedback. In our design journey, we created two systems, eMoto and FriendSense.
  •  
7.
  • Abrahamsson, Henrik (author)
  • Network overload avoidance by traffic engineering and content caching
  • 2012
  • Doctoral thesis (other academic/artistic)abstract
    • The Internet traffic volume continues to grow at a great rate, now driven by video and TV distribution. For network operators it is important to avoid congestion in the network, and to meet service level agreements with their customers. This thesis presents work on two methods operators can use to reduce link loads in their networks: traffic engineering and content caching. This thesis studies access patterns for TV and video and the potential for caching. The investigation is done both using simulation and by analysis of logs from a large TV-on-Demand system over four months. The results show that there is a small set of programs that account for a large fraction of the requests and that a comparatively small local cache can be used to significantly reduce the peak link loads during prime time. The investigation also demonstrates how the popularity of programs changes over time and shows that the access pattern in a TV-on-Demand system very much depends on the content type. For traffic engineering, the objective is to avoid congestion in the network and to make better use of available resources by adapting the routing to the current traffic situation. The main challenge for traffic engineering in IP networks is to cope with the dynamics of Internet traffic demands. This thesis proposes L-balanced routings that route the traffic on the shortest paths possible but make sure that no link is utilised to more than a given level L. L-balanced routing gives efficient routing of traffic and controlled spare capacity to handle unpredictable changes in traffic. We present an L-balanced routing algorithm and a heuristic search method for finding L-balanced weight settings for the legacy routing protocols OSPF and IS-IS. We show that the search and the resulting weight settings work well in real network scenarios. (A toy sketch of the L-balance constraint follows this entry.)
  •  
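As a rough illustration of the L-balance constraint described in entry 7, the sketch below places each demand on its shortest candidate path unless some link would exceed a utilisation level L, in which case a longer candidate is tried. The topology, demands and helper names are made up for illustration; the thesis works with OSPF/IS-IS weight settings rather than explicit per-demand path placement.

```python
# Illustrative sketch of the L-balance idea: place each demand on its
# shortest candidate path unless that would push some link above a
# utilisation level L, in which case try the next-shortest candidate.
# Candidate paths are given as data here; a real implementation would
# compute them and tune routing-protocol weights, as the thesis describes.

capacity = {("A", "B"): 10, ("B", "C"): 10, ("A", "C"): 10}
load = {link: 0.0 for link in capacity}

def fits(path, volume, L):
    """True if routing `volume` over `path` keeps every link <= L * capacity."""
    return all(load[link] + volume <= L * capacity[link] for link in path)

def place(demand, candidates, volume, L):
    for path in candidates:              # candidates ordered shortest first
        if fits(path, volume, L):
            for link in path:
                load[link] += volume
            return path
    raise RuntimeError(f"no L-balanced path for {demand}")

# Demand A->C of 8 units: the direct link fits under L = 0.9 ...
print(place("A->C #1", [[("A", "C")], [("A", "B"), ("B", "C")]], 8, L=0.9))
# ... a second 8-unit demand must take the two-hop detour to stay balanced.
print(place("A->C #2", [[("A", "C")], [("A", "B"), ("B", "C")]], 8, L=0.9))
```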
8.
  •  
9.
  • Al-Shishtawy, Ahmad, 1978- (author)
  • Self-Management for Large-Scale Distributed Systems
  • 2012
  • Doctoral thesis (other academic/artistic)abstract
    • Autonomic computing aims at making computing systems self-managing by using autonomic managers in order to reduce obstacles caused by management complexity. This thesis presents results of research on self-management for large-scale distributed systems. This research was motivated by the increasing complexity of computing systems and their management. In the first part, we present our platform, called Niche, for programming self-managing component-based distributed applications. In our work on Niche, we have faced and addressed the following four challenges in achieving self-management in a dynamic environment characterized by volatile resources and high churn: resource discovery, robust and efficient sensing and actuation, management bottleneck, and scale. We present results of our research on addressing the above challenges. Niche implements the autonomic computing architecture, proposed by IBM, in a fully decentralized way. Niche supports a network-transparent view of the system architecture simplifying the design of distributed self-management. Niche provides a concise and expressive API for self-management. The implementation of the platform relies on the scalability and robustness of structured overlay networks. We proceed by presenting a methodology for designing the management part of a distributed self-managing application. We define design steps that include partitioning of management functions and orchestration of multiple autonomic managers. In the second part, we discuss robustness of management and data consistency, which are necessary in a distributed system. Dealing with the effect of churn on management increases the complexity of the management logic and thus makes its development time-consuming and error-prone. We propose the abstraction of Robust Management Elements, which are able to heal themselves under continuous churn. Our approach is based on replicating a management element using finite state machine replication with a reconfigurable replica set. Our algorithm automates the reconfiguration (migration) of the replica set in order to tolerate continuous churn. For data consistency, we propose a majority-based distributed key-value store supporting multiple consistency levels that is based on a peer-to-peer network. The store enables the tradeoff between high availability and data consistency. Using a majority avoids potential drawbacks of master-based consistency control, namely a single point of failure and a potential performance bottleneck. In the third part, we investigate self-management for Cloud-based storage systems with the focus on elasticity control using elements of control theory and machine learning. We have conducted research on a number of different designs of an elasticity controller, including a State-Space feedback controller and a controller that combines feedback and feedforward control. We describe our experience in designing an elasticity controller for a Cloud-based key-value store using a state-space model that enables trading off performance for cost. We describe the steps in designing an elasticity controller. We continue by presenting the design and evaluation of ElastMan, an elasticity controller for Cloud-based elastic key-value stores that combines feedforward and feedback control. (A toy quorum read/write sketch follows this entry.)
  •  
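Entry 9 mentions a majority-based key-value store; the sketch below shows only the generic quorum idea (write to a majority with a version, read from a majority and keep the newest version seen). The replica count, class names and versioning scheme are assumptions for illustration, not the thesis's protocol or its multiple consistency levels.

```python
# Toy sketch of majority (quorum) reads/writes over a set of replicas.
# Each write carries a version; a read consults a majority and returns the
# newest (version, value) pair it sees. Generic illustration only.
import random

class Replica:
    def __init__(self):
        self.store = {}                      # key -> (version, value)

    def write(self, key, version, value):
        cur = self.store.get(key, (0, None))
        if version > cur[0]:                 # keep only the newest version
            self.store[key] = (version, value)

    def read(self, key):
        return self.store.get(key, (0, None))

replicas = [Replica() for _ in range(5)]
MAJORITY = len(replicas) // 2 + 1

def quorum_write(key, version, value):
    for r in random.sample(replicas, MAJORITY):   # reach any majority
        r.write(key, version, value)

def quorum_read(key):
    answers = [r.read(key) for r in random.sample(replicas, MAJORITY)]
    return max(answers)                           # newest (version, value) wins

quorum_write("x", 1, "old")
quorum_write("x", 2, "new")
print(quorum_read("x"))   # any two majorities intersect, so version 2 is seen
```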
10.
  • Ardelius, John, 1978- (author)
  • On the Performance Analysis of Large Scale, Dynamic, Distributed and Parallel Systems.
  • 2013
  • Doctoral thesis (other academic/artistic)abstract
    • Evaluating the performance of large distributed applications is an important and non-trivial task. With the onset of Internet wide applications there is an increasing need to quantify reliability, dependability and performance of these systems, both as a guide in system design as well as a means to understand the fundamental properties of large-scale distributed systems. Previous research has mainly focused on either formalised models where system properties can be deduced and verified using rigorous mathematics or on measurements and experiments on deployed applications. Our aim in this thesis is to study models on an abstraction level lying between the two ends of this spectrum. We adopt a model of distributed systems inspired by methods used in the study of large-scale systems of particles in physics and model the application nodes as a set of interacting particles each with an internal state whose actions are specified by the application program. We apply our modeling and performance evaluation methodology to four different distributed and parallel systems. The first system is the distributed hash table (DHT) Chord running in a dynamic environment. We study the system under two scenarios. First we study how performance (in terms of lookup latency) is affected on a network with finite communication latency. We show that an average delay in conjunction with other parameters describing changes in the network (such as timescales for network repair and join and leave processes) induces fundamentally different system performance. We also verify our analytical predictions via simulations. In the second scenario we introduce network address translators (NATs) to the network model. This makes the overlay topology non-transitive and we explore the implications of this fact for various performance metrics such as lookup latency, consistency and load balance. The latter analysis is mainly simulation based. Even though these two studies focus on a specific DHT, many of our results can easily be translated to other similar ring-based DHTs with long-range links, and the same methodology can be applied even to DHTs based on other geometries. The second type of system studied is an unstructured gossip protocol running a distributed version of the famous Bellman-Ford algorithm. The algorithm, called GAP, generates a spanning tree over the participating nodes and the question we set out to study is how reliable this structure is (in terms of generating accurate aggregate values at the root) in the presence of node churn. All our analytical results are also verified using simulations. The third system studied is a content distribution network (CDN) of interconnected caches in an aggregation access network. In this model, content that sits at the leaves of the cache hierarchy tree is requested by end users. Requests can then either be served by the first cache level or sent further up the tree. We study the performance of the whole system under two cache eviction policies, namely LRU and LFU. We compare our analytical results with traces from related caching systems. The last system is a work stealing heuristic for task distribution in the TileraPro64 chip. This system has access to a shared memory and is therefore classified as a parallel system. We create a model for the dynamic generation of tasks as well as how they are executed and distributed among the participating nodes. We study how the heuristic scales when the number of nodes exceeds the number of processors on the chip as well as how different work stealing policies compare with each other. The work on this model is mainly simulation-based. (A generic work-stealing sketch follows this entry.)
  •  
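Entry 10 studies a work-stealing heuristic; the following sketch shows the textbook form of work stealing (run tasks from your own queue, steal from a random victim when idle). All names and the toy task set are illustrative assumptions; the thesis models chip-specific task generation and stealing policies that are not reproduced here.

```python
# Minimal sketch of the classic work-stealing idea: a worker runs tasks from
# its own deque (LIFO end) and, when idle, steals the oldest task from a
# randomly chosen victim. Generic illustration only.
import random
from collections import deque

N_WORKERS = 4
queues = [deque() for _ in range(N_WORKERS)]
done = []  # (worker, task) pairs in completion order

# Seed worker 0 with every task: an intentionally unbalanced starting point.
queues[0].extend(f"task-{i}" for i in range(12))

def step(worker):
    """One scheduling step for `worker`; returns False if it stayed idle."""
    if queues[worker]:
        done.append((worker, queues[worker].pop()))       # run own newest task
        return True
    victim = random.randrange(N_WORKERS)
    if victim != worker and queues[victim]:
        done.append((worker, queues[victim].popleft()))   # steal victim's oldest
        return True
    return False

while any(queues):
    for w in range(N_WORKERS):
        step(w)

per_worker = [sum(1 for w, _ in done if w == i) for i in range(N_WORKERS)]
print(len(done), "tasks executed; per-worker counts:", per_worker)
```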
11.
  • Armstrong, Joe, 1950- (author)
  • Making reliable distributed systems in the presence of software errors
  • 2003
  • Doctoral thesis (other academic/artistic)abstract
    • The work described in this thesis is the result of a research program started in 1981 to find better ways of programming Telecom applications. These applications are large programs which despite careful testing will probably contain many errors when the program is put into service. We assume that such programs do contain errors, and investigate methods for building reliable systems despite such errors. The research has resulted in the development of a new programming language (called Erlang), together with a design methodology, and set of libraries for building robust systems (called OTP). At the time of writing the technology described here is used in a number of major Ericsson and Nortel products. A number of small companies have also been formed which exploit the technology. The central problem addressed by this thesis is the problem of constructing reliable systems from programs which may themselves contain errors. Constructing such systems imposes a number of requirements on any programming language that is to be used for the construction. I discuss these language requirements, and show how they are satisfied by Erlang. Problems can be solved in a programming language, or in the standard libraries which accompany the language. I argue how certain of the requirements necessary to build a fault-tolerant system are solved in the language, and others are solved in the standard libraries. Together these form a basis for building fault-tolerant software systems. No theory is complete without proof that the ideas work in practice. To demonstrate that these ideas work in practice I present a number of case studies of large commercially successful products which use this technology. At the time of writing the largest of these projects is a major Ericsson product, having over a million lines of Erlang code. This product (the AXD301) is thought to be one of the most reliable products ever made by Ericsson. Finally, I ask if the goal of finding better ways to program Telecom applications was fulfilled, and I point to areas where I think the system could be improved.
  •  
12.
  • Aronsson, Martin, 1963- (author)
  • GCLA : the design, use and implementation of a program development system
  • 1993
  • Doctoral thesis (other academic/artistic)abstract
    • We present a program development system, GCLA (Generalized horn Clause LAnguage), which is based on a generalization of Horn clauses (e.g. Prolog). This generalization takes a quite different view of the meaning of a logic program - a "definitional" view rather than the traditional logical view. GCLA is based on the formalism of partial inductive definitions, an extension of ordinary inductive definitions. To each partial inductive definition a sequent-like calculus is associated. One of the most important derivation rules of partial inductive definitions is the rule of definitional reflection (also named D-). The principle of definitional reflection enables us to derive conclusions from assumed assertions, and it includes the computation of a minimal substitution from a given atom and the heads of a given definition, called the generation of an A-sufficient substitution. Generating an A-sufficient substitution is a quite complex operation, but it can be reduced to a look-up table at runtime by a precomputation at compile time. The possibility of nesting hypothetical conditions makes GCLA suitable for different kinds of hypothetical reasoning, such as expert systems and default reasoning, but it also includes functional programming and a kind of object oriented programming as special cases. GCLA has been used as a prototyping tool in an application for planning the construction of a building. This planning process is divided into two main phases, the choice-of-method phase and the scheduling phase. The choice-of-method phase gives a set of activities, which when scheduled form the plan to construct the building. The scheduling phase is a simulation of the construction, where resources are allocated to activities to determine their duration, and the different interdependencies of the activities are taken into account. The drawback of having a powerful formalism as the basis for a programming system is that the issue of control becomes important. The approach taken in GCLA is to develop two different levels of representation: one object level which encodes the declarative content of the application, and one control level which encodes the procedural content of the application. The control level defines inference rules and search strategies for drawing conclusions from the object level definition. The control level can be compiled to a Prolog program, which interprets the object level definition.
  •  
13.
  •  
14.
  • Castañeda Lozano, Roberto, 1986- (author)
  • Constraint-Based Register Allocation and Instruction Scheduling
  • 2018
  • Doctoral thesis (other academic/artistic)abstract
    • Register allocation (mapping variables to processor registers or memory) and instruction scheduling (reordering instructions to improve latency or throughput) are central compiler problems. This dissertation proposes a combinatorial optimization approach to these problems that delivers optimal solutions according to a model, captures trade-offs between conflicting decisions, accommodates processor-specific features, and handles different optimization criteria. The use of constraint programming and a novel program representation enables a compact model of register allocation and instruction scheduling. The model captures the complete set of global register allocation subproblems (spilling, assignment, live range splitting, coalescing, load-store optimization, multi-allocation, register packing, and rematerialization) as well as additional subproblems that handle processor-specific features beyond the usual scope of conventional compilers. The approach is implemented in Unison, an open-source tool used in industry and research that complements the state-of-the-art LLVM compiler. Unison applies general and problem-specific constraint solving methods to scale to medium-sized functions, solving functions of up to 647 instructions optimally and improving functions of up to 874 instructions. The approach is evaluated experimentally using different processors (Hexagon, ARM and MIPS), benchmark suites (MediaBench and SPEC CPU2006), and optimization criteria (speed and code size reduction). The results show that Unison generates code of slightly to significantly better quality than LLVM, depending on the characteristics of the targeted processor (1% to 9.3% mean estimated speedup; 0.8% to 3.9% mean code size reduction). Additional experiments for Hexagon show that its estimated speedup has a strong monotonic relationship to the actual execution speedup, resulting in a mean speedup of 5.4% across MediaBench applications. The approach contributed by this dissertation is the first of its kind that is practical (it captures the complete set of subproblems, scales to medium-sized functions, and generates executable code) and effective (it generates better code than the LLVM compiler, fulfilling the promise of combinatorial optimization). It can be applied to trade compilation time for code quality beyond the usual optimization levels, explore and exploit processor-specific features, and identify improvement opportunities in conventional compilers.
  •  
15.
  • El-Ansary, Sameh, 1975- (author)
  • Designs and analyses in structured peer-to-peer systems
  • 2005
  • Doctoral thesis (other academic/artistic)abstract
    • Peer-to-Peer (P2P) computing is a recent hot topic in the areas of networking and distributed systems. Work on P2P computing was triggered by a number of ad-hoc systems that made the concept popular. Later, academic research efforts started to investigate P2P computing issues based on scientific principles. Some of that research produced a number of structured P2P systems that were collectively referred to by the term "Distributed Hash Tables" (DHTs). However, the research occurred in a diversified way, leading to the appearance of similar concepts that lacked a common perspective and were not heavily analyzed. In this thesis we present a number of papers representing our research results in the area of structured P2P systems, grouped as two sets labeled respectively "Designs" and "Analyses". The contribution of the first set of papers is as follows. First, we present the principle of distributed k-ary search (DKS) and argue that it serves as a framework for most of the recent P2P systems known as DHTs. That is, given the DKS framework, understanding existing DHT systems is done simply by seeing how they are instances of that framework. We argue that by perceiving systems as instances of the DKS framework, one can optimize some of them. We illustrate that by applying the framework to the Chord system, one of the most established DHT systems. Second, we show how the DKS framework helps in the design of P2P algorithms by two examples: (a) the DKS(n;k;f) system, which is a system designed from the beginning on the principles of distributed k-ary search; (b) two broadcast algorithms that take advantage of the distributed k-ary search tree. The contribution of the second set of papers is as follows. We account for two approaches that we used to evaluate the performance of a particular class of DHTs, namely the one adopting periodic stabilization for topology maintenance. The first approach was of an intrinsic empirical nature. In that approach, we tried to perceive a DHT as a physical system and account for its properties in a size-independent manner. The second approach was of a more analytical nature. In this approach we applied the technique of Master equations, which is a widely used technique in the analysis of natural systems. The application of the technique led to a highly accurate description of the behavior of structured overlays. Additionally, the thesis contains a primer on structured P2P systems that tries to capture the main ideas that are prevailing in the field and enumerates a subset of the current hot and open research issues.
  •  
16.
  • Espinoza, Fredrik (author)
  • Individual service provisioning
  • 2003. - 3
  • Doctoral thesis (other academic/artistic)abstract
    • Computer usage is once again going through changes. Leaving behind the experiences of mainframes with terminal access and personal computers with graphical user interfaces, we are now headed for handheld devices and ubiquitous computing; we are facing the prospect of interacting with electronic services. These network-enabled functional components provide benefit to users regardless of their whereabouts, access method, or access device. The market place is also changing, from suppliers of monolithic off-the-shelf applications, to open source and collaboratively developed specialized services. It is within this new arena of computing that we describe Individual Service Provisioning, a design and implementation that enables end users to create and provision their own services. Individual Service Provisioning consists of three components: a personal service environment, in which users can access and manage their services; ServiceDesigner, a tool with which to create new services; and the provisioning system, which turns end users into service providers.
  •  
17.
  • Frecon, Emmanuel (author)
  • DIVE on the internet
  • 2004. - 3
  • Doctoral thesis (other academic/artistic)abstract
    • This dissertation reports research and development of a platform for Collaborative Virtual Environments (CVEs). It has particularly focused on two major challenges: supporting the rapid development of scalable applications and easing their deployment on the Internet. This work employs a research method based on prototyping and refinement and promotes the use of this method for application development. A number of the solutions herein are in line with other CVE systems. One of the strengths of this work consists in a global approach to the issues raised by CVEs and the recognition that such complex problems are best tackled using a multi-disciplinary approach that understands both user and system requirements. CVE application deployment is aided by an overlay network that is able to complement any IP multicast infrastructure in place. Apart from complementing a weakly deployed worldwide multicast, this infrastructure provides for a certain degree of introspection, remote controlling and visualisation. As such, it forms an important aid in assessing the scalability of running applications. This scalability is further facilitated by specialised object distribution algorithms and an open framework for the implementation of novel partitioning techniques. CVE application development is eased by a scripting language, which enables rapid development and favours experimentation. This scripting language interfaces many aspects of the system and enables the prototyping of distribution-related components as well as user interfaces. It is the key construct of a distributed environment to which components, written in different languages, connect and onto which they operate in a network abstracted manner. The solutions proposed are exemplified and strengthened by three collaborative applications. The Dive room system is a virtual environment modelled after the room metaphor and supporting asynchronous and synchronous cooperative work. WebPath is a companion application to a Web browser that seeks to make the current history of page visits more visible and usable. Finally, the London travel demonstrator supports travellers by providing an environment where they can explore the city, utilise group collaboration facilities, rehearse particular journeys and access tourist information data.
  •  
18.
  • Fredlund, Lars-Åke (author)
  • A framework for reasoning about Erlang code
  • 2001. - 4
  • Doctoral thesis (other academic/artistic)abstract
    • We present a framework for formal reasoning about the behaviour of software written in Erlang, a functional programming language with prominent support for process based concurrency, message passing communication and distribution. The framework contains the following key ingredients: a specification language based on the mu-calculus and first-order predicate logic, a hierarchical small-step structural operational semantics of Erlang, a judgement format allowing parameterised behavioural assertions, and a Gentzen style proof system for proving validity of such assertions. The proof system supports property decomposition through a cut rule and handles program recursion through well-founded induction. An implementation is available in the form of a proof assistant tool for checking the correctness of proof steps. The tool offers support for automatic proof discovery through higher-level rules tailored to Erlang. As illustrated in several case
  •  
19.
  • Gillblad, Daniel, 1975- (author)
  • On practical machine learning and data analysis
  • 2008
  • Doctoral thesis (other academic/artistic)abstract
    • This thesis discusses and addresses some of the difficulties associated with practical machine learning and data analysis. Introducing data-driven methods in e.g. industrial and business applications can lead to large gains in productivity and efficiency, but the cost and complexity are often overwhelming. Creating machine learning applications in practice often involves a large amount of manual labour, which often needs to be performed by an experienced analyst without significant experience with the application area. We will here discuss some of the hurdles faced in a typical analysis project and suggest measures and methods to simplify the process. One of the most important issues when applying machine learning methods to complex data, such as e.g. industrial applications, is that the processes generating the data are modelled in an appropriate way. Relevant aspects have to be formalised and represented in a way that allows us to perform our calculations in an efficient manner. We present a statistical modelling framework, Hierarchical Graph Mixtures, based on a combination of graphical models and mixture models. It allows us to create consistent, expressive statistical models that simplify the modelling of complex systems. Using a Bayesian approach, we allow for encoding of prior knowledge and make the models applicable in situations when relatively little data are available. Detecting structures in data, such as clusters and dependency structure, is very important both for understanding an application area and for specifying the structure of e.g. a hierarchical graph mixture. We will discuss how this structure can be extracted for sequential data. By using the inherent dependency structure of sequential data we construct an information theoretical measure of correlation that does not suffer from the problems most common correlation measures have with this type of data. In many diagnosis situations it is desirable to perform a classification in an iterative and interactive manner. The matter is often complicated by very limited amounts of knowledge and examples when a new system to be diagnosed is initially brought into use. We describe how to create an incremental classification system based on a statistical model that is trained from empirical data, and show how the limited available background information can still be used initially for a functioning diagnosis system. To minimise the effort with which results are achieved within data analysis projects, we need to address not only the models used, but also the methodology and applications that can help simplify the process. We present a methodology for data preparation and a software library intended for rapid analysis, prototyping, and deployment. Finally, we will study a few example applications, presenting tasks within classification, prediction and anomaly detection. The examples include demand prediction for supply chain management, approximating complex simulators for increased speed in parameter optimisation, and fraud detection and classification within a media-on-demand system.
  •  
20.
  •  
21.
  • Kreuger, Per (author)
  • Computational Issues in Calculi of Partial Inductive Definitions
  • 1995. - 1
  • Doctoral thesis (other academic/artistic)abstract
    • We study the properties of a number of algorithms proposed to explore the computational space generated by a very simple and general idea: the notion of a mathematical definition and a number of suggested formal interpretations of this idea. Theories of partial inductive definitions (PID) constitute a class of logics based on the notion of an inductive definition. Formal systems based on this notion can be used to generalize Horn-logic and naturally allow and suggest extensions which differ in interesting ways from generalizations based on first order predicate calculus. E.g. the notion of completion generated by a calculus of PID and the resulting notion of negation is completely natural and does not require externally motivated procedures such as "negation as failure". For this reason, computational issues arising in these calculi deserve closer inspection. This work discusses a number of finitary theories of PID and analyzes the algorithmic and semantical issues that arise in each of them. There has been significant work on implementing logic programming languages in this setting and we briefly present the programming language and knowledge modelling tool GCLA II, in which many of the computational problems discussed arise naturally in practice.
  •  
22.
  • Montelius, Johan (author)
  • Exploiting Fine-grain Parallelism in Concurrent Constraint Languages
  • 1997. - 6
  • Doctoral thesis (other academic/artistic)abstract
    • This dissertation presents the design, implementation, and evaluation of a system that exploits fine-grain implicit parallelism in a concurrent constraint programming language. The system is able to outperform a C implementation of an algorithm with complex dependencies without any user annotations. The concurrent constraint programming language AKL is used as a source programming language. A program is divided during runtime into tasks that are distributed over available processors. The system is unique in that it handles both and-parallel execution of goals and or-parallel execution of encapsulated search. A parallel binding scheme for a hierarchical constraint store is presented. The binding scheme allows encapsulated search to be performed in parallel. The design is justified with empirical data from the implementation. The scheme is the most efficient parallel scheme yet presented for deep concurrent constraint systems. The system was implemented on a high-performance shared-memory multiprocessor. Extensive measurements were done on the system using both smaller benchmarks and real-life programs. The evaluation includes detailed instruction-level simulation, including cache performance, to explain the behavior of the system.
  •  
23.
  • Nobili, Serena, 1971- (author)
  • Tests for systematic effects in supernova cosmology
  • 2004
  • Doctoral thesis (other academic/artistic)abstract
    • Type Ia supernovae are used as standard candles to measure the energy density components of the universe. This led to the new paradigm in cosmology: only about 30% of the universe is made of ordinary pressure-less matter; the rest is associated with an unknown form of energy with a negative equation of state parameter, called dark energy, able to drive the acceleration of the universe. The importance of this discovery requires fully understanding and controlling the possible systematic effects affecting both current and future measurements, aiming at probing the equation of state parameter, i.e. the nature of dark energy. In this thesis, we tackle systematic effects involved in several aspects of supernova cosmology. Studies of supernova colours are used for investigating the homogeneity of the standard candle and to improve the spectral templates used for K-corrections. We have measured the intrinsic colour dispersion and assessed its correlation between different epochs of supernova evolution. We develop a technique for fitting the I-band lightcurve and present studies of correlations of its properties with the SN luminosity. Moreover, we present a pioneering study of the restframe I-band Hubble diagram extended to redshift ~0.5. This is found to be a valuable complementary tool for cosmological studies, and the results found are consistent with the concordance model of the universe, though the uncertainties are large. The presence of grey dust in the intergalactic medium is investigated both using the I-band Hubble diagram and supernova colour excess. Although the low statistics of the high-redshift sample used do not allow firm conclusions to be drawn, both methods are tested and shown to be useful for probing the presence of intergalactic dust. The hypothesis of supernova population drift is tested in two different ways, both by studying restframe I-band lightcurve properties and by comparing spectra of high-redshift supernovae with those of nearby SNe. One distant supernova, SN 2002fd (z=0.279), shows spectral similarities with 1991T/1999aa-like objects. No signs of evolution in supernova properties are found in these studies, strengthening our confidence in the measured cosmological parameters.
  •  
24.
  • Nylander, Stina, 1972- (author)
  • Design and Implementation of Multi-Device Services
  • 2007
  • Doctoral thesis (other academic/artistic)abstract
    • We present a method for developing multi-device services which allows for the creation of services that are adapted to a wide range of devices. Users have a wide selection of electronic services at their disposal such as shopping, banking, gaming, and messaging. They interact with these services using the computing devices they prefer or have access to, which can vary between situations. In some cases, the services that they want to use function with the device they have access to, and sometimes they do not. Thus, in order for users to experience their full benefits, electronic services will need to become more flexible. They will need to be multi-device services, i.e. be accessible from different devices. We show that multi-device services are often used in different ways on different devices due to variations in device capabilities, purpose of use, context of use, and usability. This suggests that multi-device services not only need to be accessible from more than one device, they also need to be able to present functionality and user interfaces that suit various devices and situations of use. The key problem addressed in this work is that there are too many device-service combinations for developing a service version for each device. Instead, there is a need for new methods for developing multi-device services which allow the creation of services that are adapted to various devices and situations. The challenge of designing and implementing multi-device services has been addressed in two ways in the present work: through the study of real-life use of multi-device services and through the creation of a development method for multi-device services. Studying use of multi-device services has generated knowledge about how to design such services so that they give users the best worth. The work with development methods has resulted in a design model building on the separation of form and content, thus making it possible to create different presentations of the same content. In concrete terms, the work has resulted in design guidelines for multi-device services and a system prototype based on the principles of separation between form and content, and presentation control.
  •  
25.
  • Olsson, Fredrik (author)
  • Bootstrapping Named Entity Annotation by Means of Active Machine Learning: A Method for Creating Corpora
  • 2008. - 1
  • Doctoral thesis (other academic/artistic)abstract
    • This thesis describes the development and in-depth empirical investigation of a method, called BootMark, for bootstrapping the marking up of named entities in textual documents. The reason for working with documents, as opposed to for instance sentences or phrases, is that the BootMark method is concerned with the creation of corpora. The claim made in the thesis is that BootMark requires a human annotator to manually annotate fewer documents in order to produce a named entity recognizer with a given performance, than would be needed if the documents forming the basis for the recognizer were randomly drawn from the same corpus. The intention is then to use the created named entity recognizer as a pre-tagger and thus eventually turn the manual annotation process into one in which the annotator reviews system-suggested annotations rather than creating new ones from scratch. The BootMark method consists of three phases: (1) manual annotation of a set of documents; (2) bootstrapping – active machine learning for the purpose of selecting which document to annotate next; (3) marking up the remaining unannotated documents of the original corpus using pre-tagging with revision. Five emerging issues are identified, described and empirically investigated in the thesis. Their common denominator is that they all depend on the realization of the named entity recognition task, and as such, require the context of a practical setting in order to be properly addressed. The emerging issues are related to: (1) the characteristics of the named entity recognition task and the base learners used in conjunction with it; (2) the constitution of the set of documents annotated by the human annotator in phase one in order to start the bootstrapping process; (3) the active selection of the documents to annotate in phase two; (4) the monitoring and termination of the active learning carried out in phase two, including a new intrinsic stopping criterion for committee-based active learning; and (5) the applicability of the named entity recognizer created during phase two as a pre-tagger in phase three. The outcomes of the empirical investigations concerning the emerging issues support the claim made in the thesis. The results also suggest that while the recognizer produced in phases one and two is as useful for pre-tagging as a recognizer created from randomly selected documents, the applicability of the recognizer as a pre-tagger is best investigated by conducting a user study involving real annotators working on a real named entity recognition task. (A schematic committee-based selection sketch follows this entry.)
  •  
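Entry 25 describes committee-based active learning for choosing the next document to annotate; the sketch below shows a generic query-by-committee selection step using vote entropy. The toy committee, labels and documents are assumptions for illustration, not BootMark's actual learners or its stopping criterion.

```python
# Schematic query-by-committee selection: among unlabelled documents, pick
# the one the committee disagrees on most (highest vote entropy).
import math
from collections import Counter

def vote_entropy(votes):
    """Disagreement of a committee's label votes for one document."""
    counts = Counter(votes)
    total = len(votes)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def select_next(unlabelled, committee):
    """Return the document whose predicted labels are most contested."""
    def disagreement(doc):
        return vote_entropy([member(doc) for member in committee])
    return max(unlabelled, key=disagreement)

# Toy committee: three "classifiers" that label a document by crude rules.
committee = [
    lambda doc: "PER" if "Mr." in doc else "O",
    lambda doc: "PER" if doc.istitle() else "O",
    lambda doc: "O",
]
docs = ["Mr. Smith visited Uppsala", "the meeting was cancelled"]
print(select_next(docs, committee))   # the first doc splits the committee
```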
Type of publication
doctoral thesis (46)
Type of content
other academic/artistic (46)
Author/Editor
Haridi, Seif, Profes ... (5)
Haridi, Seif (2)
Karlgren, Jussi (2)
Boman, Magnus, Profe ... (2)
Voigt, Thiemo (1)
Van Roy, Peter, Prof ... (1)
Aberer, Karl (1)
Holst, Anders, Docen ... (1)
Olsson, Fredrik (1)
Abrahamsson, Henrik (1)
Kreuger, Per (1)
Ahlgren, Bengt (1)
Björkman, Mats, prof ... (1)
Ahlgren, Bengt, PhD (1)
Muscariello, Luca, D ... (1)
Goobar, Ariel, Profe ... (1)
Olsson, Tomas (1)
Smeets, Ben (1)
Carlsson, Mats (1)
Gunningberg, Per, Do ... (1)
Nivre, Joakim (1)
Funk, Peter, Profess ... (1)
Al-Shishtawy, Ahmad, ... (1)
Vlassov, Vlassov, As ... (1)
Brand, Per, Dr. (1)
Navarro Moldes, Lean ... (1)
Asker, Lars (1)
Sanches, Pedro (1)
Höök, Kristina, 1964 ... (1)
Sadighi, Babak (1)
Sandblad, Bengt (1)
Laaksolahti, Jarmo (1)
Sjölinder, Marie (1)
Waern, Annika (1)
Payberah, Amir H., 1 ... (1)
Ståhl, Anna (1)
Ardelius, John, 1978 ... (1)
Krishnamurthy, Supri ... (1)
Jelasity, Mark, Asso ... (1)
Armstrong, Joe, 1950 ... (1)
Aronsson, Martin, 19 ... (1)
Fredlund, Lars-Åke (1)
Voigt, Thiemo, Profe ... (1)
Rasmusson, Lars (1)
Sander, Ingo, Profes ... (1)
Lansner, Anders, Pro ... (1)
Sahlgren, Magnus, 19 ... (1)
Rahimian, Fatemeh (1)
Löwgren, Jonas (1)
Steinert, Rebecca (1)
University
RISE (39)
Royal Institute of Technology (15)
Stockholm University (11)
Uppsala University (5)
Mälardalen University (2)
Lund University (1)
Language
English (46)
Research subject (UKÄ/SCB)
Natural sciences (40)
Engineering and Technology (9)
Social Sciences (2)
Humanities (2)
