SwePub
Search the SwePub database


Results for search "hsv:(NATURVETENSKAP) hsv:(Data och informationsvetenskap) hsv:(Systemvetenskap informationssystem och informatik) ;mspu:(doctoralthesis)"

Search: hsv:(NATURVETENSKAP) hsv:(Data och informationsvetenskap) hsv:(Systemvetenskap informationssystem och informatik) > Doctoral thesis

  • Results 1-10 of 340
1.
  • Chatterjee, Bapi, 1982 (author)
  • Lock-free Concurrent Search
  • 2017
  • Doctoral thesis (other academic/artistic) abstract
    • Contemporary computers typically consist of multiple computing cores with high compute power, making them excellent concurrent asynchronous shared-memory systems. Yet while many celebrated books on data structures and algorithms provide a comprehensive study of sequential search data structures, no such luxury exists once concurrency enters the setting. The present dissertation aims to address this paucity. We describe novel lock-free algorithms for concurrent data structures that target a variety of search problems. (i) Point search (membership query, predecessor query, nearest neighbour query) for 1-dimensional data: lock-free linked-list; lock-free internal and external binary search trees (BST). (ii) Range search for 1-dimensional data: a range search method for lock-free ordered set data structures - linked-list, skip-list and BST. (iii) Point search for multi-dimensional data: lock-free kD-tree, in particular a generic method for nearest neighbour search. We prove that the presented algorithms are linearizable, i.e. the concurrent data structure operations intuitively display their sequential behaviour to an observer of the concurrent system. The lock-freedom of the introduced algorithms guarantees overall progress in an asynchronous shared-memory system. We present an amortized analysis of the lock-free data structures to show their efficiency. Moreover, we provide sample implementations of the algorithms and test them over extensive micro-benchmarks. Our experiments demonstrate that the implementations are scalable and perform well compared to related existing alternative implementations on common multi-core computers. Our focus is on propounding generic methodologies for efficient lock-free concurrent search. In this direction, we present the notion of help-optimality, which captures the optimization of the amortized step complexity of the operations.
In addition, we explore the language-portable design of lock-free data structures, which aims to simplify implementation from the programmer's point of view. Finally, our techniques for implementing lock-free linearizable range search and nearest neighbour search are independent of the underlying data structures and thus adapt to similar data structures.
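The retry loop around a compare-and-swap (CAS) is the core of such lock-free inserts. Below is a minimal Python sketch of a sorted linked-list insert in this style; since Python lacks a hardware CAS instruction, a lock-backed `AtomicRef` stands in for it purely to illustrate the retry-loop shape (class and method names are ours, not the thesis's, and deletion is omitted).

```python
import threading

class AtomicRef:
    """Simulated single-word CAS; real lock-free code uses the hardware
    compare-and-swap instruction, which Python does not expose."""
    def __init__(self, value=None):
        self._value = value
        self._lock = threading.Lock()

    def get(self):
        return self._value

    def compare_and_set(self, expected, new):
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

class Node:
    def __init__(self, key, nxt=None):
        self.key = key
        self.next = AtomicRef(nxt)

class LockFreeSortedList:
    """Sorted set supporting CAS-retry-style insert (no deletion)."""
    def __init__(self):
        self.head = Node(float("-inf"))  # sentinel

    def _search(self, key):
        # Find adjacent nodes pred, curr with pred.key < key <= curr.key.
        pred = self.head
        curr = pred.next.get()
        while curr is not None and curr.key < key:
            pred, curr = curr, curr.next.get()
        return pred, curr

    def insert(self, key):
        while True:                      # retry loop: restart on CAS failure
            pred, curr = self._search(key)
            if curr is not None and curr.key == key:
                return False             # key already present
            node = Node(key, curr)
            if pred.next.compare_and_set(curr, node):
                return True              # linearization point: successful CAS

    def contains(self, key):
        _, curr = self._search(key)
        return curr is not None and curr.key == key
```

A failed CAS means another thread changed `pred.next` concurrently; the operation simply re-searches and retries, which is what gives the system-wide progress guarantee.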
2.
  • Brunetta, Carlo, 1992 (author)
  • Cryptographic Tools for Privacy Preservation
  • 2021
  • Doctoral thesis (other academic/artistic) abstract
    • Data permeates every aspect of our daily life and is the backbone of our digitalized society. Smartphones, smartwatches and many more smart devices measure, collect, modify and share data in what is known as the Internet of Things. Often, these devices do not have enough computation power or storage space and thus outsource some aspects of data management to the Cloud. Outsourcing computation or storage to a third party poses natural questions regarding the security and privacy of the shared sensitive data. Intuitively, cryptography is a toolset of primitives and protocols whose security properties are formally proven, while privacy typically captures additional social and legislative requirements that relate more to the concept of "trust" between people, "how" data is used and/or "who" has access to data. This thesis separates the two concepts by introducing an abstract model that classifies data leaks into different types of breaches. Each class represents a specific requirement or goal related to cryptography, e.g. confidentiality or integrity, or related to privacy, e.g. liability, sensitive data management and more. The thesis contains cryptographic tools designed to provide privacy guarantees for different application scenarios.
In more detail, the thesis: (a) defines new encryption schemes that provide formal privacy guarantees, such as theoretical privacy definitions like Differential Privacy (DP), or concrete privacy-oriented applications covered by existing regulations such as the European General Data Protection Regulation (GDPR); (b) proposes new tools and procedures for providing verifiable computation guarantees in concrete scenarios for post-quantum cryptography or generalisations of signature schemes; (c) proposes a methodology for utilising Machine Learning (ML) to analyse the effective security and privacy of a crypto-tool and, dually, proposes a secure primitive that allows computing a specific ML algorithm in a privacy-preserving way; (d) provides an alternative protocol for secure communication between two parties, based on the idea of communicating in a periodically timed fashion.
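As one concrete instance of the DP tools mentioned in (a), the classic Laplace mechanism releases a query result with noise whose scale is the query's L1 sensitivity divided by epsilon. The sketch below is a textbook illustration, not code from the thesis; it samples Laplace noise as the difference of two exponential draws.

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release true_value under epsilon-differential privacy by adding
    Laplace noise with scale b = sensitivity / epsilon."""
    scale = sensitivity / epsilon
    # Laplace(0, b) equals the difference of two independent Exp(1/b) draws.
    noise = scale * (rng.expovariate(1.0) - rng.expovariate(1.0))
    return true_value + noise
```

Smaller epsilon (stronger privacy) means larger noise scale; over many releases the noise averages out to zero, so aggregate statistics remain usable.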
3.
  • Tsaloli, Georgia, 1993 (author)
  • Secure and Privacy-Preserving Cloud-Assisted Computing
  • 2022
  • Doctoral thesis (other academic/artistic) abstract
    • Smart devices such as smartphones, wearables, and smart appliances collect significant amounts of data and transmit them over the network, forming the Internet of Things (IoT). Many applications in our daily lives (e.g., health, smart grid, traffic monitoring) involve IoT devices that often have low computational capabilities. Consequently, powerful cloud servers are employed to process the data collected from these devices. Nevertheless, security and privacy concerns arise in cloud-assisted computing settings. Collected data can be sensitive, and it is essential to protect its confidentiality. Additionally, outsourcing computations to untrusted cloud servers creates the need to ensure that servers perform the computations as requested and that any misbehavior can be detected, safeguarding security. Cryptographic primitives and protocols are the foundation for designing secure and privacy-preserving solutions that address these challenges. This thesis focuses on providing privacy and security guarantees when outsourcing heavy computations on sensitive data to untrusted cloud servers. More concretely, this work: (a) provides solutions for outsourcing the secure computation of the sum and product functions in the multi-server, multi-client setting, protecting the sensitive data of the data owners even against potentially untrusted cloud servers; (b) provides integrity guarantees for the proposed protocols by enabling anyone to verify the correctness of the computed function values - more precisely, the employed servers or the clients (depending on the proposed solution) provide specific values which serve as proofs that the computed results are correct; (c) designs decentralized settings, where multiple cloud servers are employed to perform the requested computations, as opposed to relying on a single server that might fail or lose connection; (d) suggests ways to protect individual privacy and provide integrity.
More precisely, we propose a verifiable differentially private solution that provides verifiability and avoids any leakage of information regardless of whether some individual's sensitive data participates in the computation or not.
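The multi-server sum outsourcing in (a) can be illustrated with plain additive secret sharing: each client splits its input into random shares, one per server, so that no single server learns anything, yet the per-server sums combine to the true total. This minimal sketch omits the verifiability proofs the thesis adds; the modulus and function names are our own choices.

```python
import random

P = 2**61 - 1  # public modulus; all arithmetic is mod P

def share(value, n_servers, rng=random):
    """Split value into n additive shares summing to value mod P.
    Any n-1 shares are uniformly random and reveal nothing."""
    shares = [rng.randrange(P) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def aggregate(all_shares):
    """Each server sums the one share it received from every client;
    adding the per-server sums recovers the sum of all client inputs."""
    server_sums = [sum(col) % P for col in zip(*all_shares)]
    return sum(server_sums) % P
```

Because each server only ever sees uniformly random values, confidentiality holds even if all but one server collude, as long as one stays honest about not pooling shares.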
4.
  • Nairat, Malik, 1973 (author)
  • Generative comics - A computational approach to creating comics material
  • 2021
  • Doctoral thesis (other academic/artistic) abstract
    • Digital storytelling can be employed as a tool that combines human creativity with technology. It synthesizes multimedia-based elements to create engaging stories and compelling narratives. To this end, this research presents an approach that can be used as an assistant tool for comics artists. It focuses on generating comics-based narratives through a system that integrates three main components in the creation process: an agent-based system which generates raw narrative material based on the behavior of the system's agents, an interactive evolution process in which the author participates in the creation process, and a comics-generating engine that creates the final comics as output. The general scope of the research is to construct a generative system able to create comics and fictional characters. The research follows the method of Research through Design (RtD), which favors evolution and iteration of the construction of the artifact based on trial and error to better solve complex design problems (Smith & Dean, 2014). Relevant aspects of computer science, visual arts, comics and storytelling have been combined into a unified research project that can answer the research questions: how can digital technology be employed in generating comics; how can it contribute to the creation of novel art forms; and how can it help artists in their creative practice?
Through a review of research on generative comics, four categories are identified: Unified Comics Generators, which investigate methods for generating both the story structure and its visual comics-based representation; Comics Elements Generators, which explore techniques for generating or employing particular comics elements such as panels, splashes and speech bubbles; Visual Representation Generators, which rely on importing content from other narrative sources such as video games, video streaming, or chat conversations on social media; and Generative Comics Installations, which produce and present comic stories as exhibited installations by capturing and manipulating live pictures of the audience. Research findings are discussed in terms of story characterization, the generated stories, and the comics' visual representation. The constructed system showed high flexibility, scalability and competency, qualifying it for use in various applications for different purposes.
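The agent-based component described above, where agent behavior produces raw narrative material, can be sketched as a toy simulation in which each step one agent acts toward another and the pair's relationship shifts. The verbs and the relationship rule are illustrative inventions, not the system the thesis builds.

```python
import random

def generate_events(agents, n_events, rng):
    """Toy 'raw narrative material' generator: agents interact in turn,
    and each interaction nudges the pair's relationship, which in turn
    selects the verb for the next encounter."""
    relations = {}   # (actor, target) -> running relationship score
    events = []
    for _ in range(n_events):
        actor, target = rng.sample(agents, 2)
        mood = relations.get((actor, target), 0)
        verb = "greets" if mood >= 0 else "confronts"
        events.append(f"{actor} {verb} {target}")
        relations[(actor, target)] = mood + rng.choice([-1, 1])
    return events
```

In the thesis's pipeline such an event stream would be the input that the interactive evolution step and the comics-generating engine refine into panels.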
5.
  • Hausknecht, Daniel, 1986 (author)
  • Web Application Content Security
  • 2018
  • Doctoral thesis (other academic/artistic) abstract
    • The web has become ubiquitous in modern life. People go online to stay in contact with their friends or to manage their bank accounts. With so much sensitive information handled by web applications, securing them naturally becomes important. In this thesis we analyze the state of the art in client-side web security, empirically study real-world deployments, analyze best practices and actively contribute to improving the security of the web platform. We explore how password meters and password generators are included in web applications and how this should be done, in particular when external code is used. Next, we investigate if and how browser extensions modify Content Security Policy (CSP) HTTP headers, by analyzing a large set of real-world browser extensions. We implement a mechanism which allows web servers to react to CSP header modifications by browser extensions. Is CSP meant to prevent data exfiltration on the web? We discuss the different positions in the security community with respect to this question. Without choosing a side, we show that the current CSP standard does in fact not prevent data exfiltration, and we provide possible solutions. Since login pages are the points of authentication to a web service, their security is particularly relevant. In a large-scale empirical study we automatically identify and analyze login page security configurations on the web, and discuss measures to improve the security of login pages. Last, we analyze Origin Manifest, a standard proposal for origin-wide security configurations. We implement a mechanism to automatically generate such configurations, extend the mechanism, implement a prototype and run several large-scale empirical studies to evaluate the standard proposal.
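To make the CSP discussion concrete, here is a much-simplified sketch of how a CSP header decomposes into directives and source lists, including the fallback to `default-src` that fetch directives use. It handles only literal source matching (no wildcards, schemes, nonces or hashes) and is an illustration, not code from the thesis.

```python
def parse_csp(header):
    """Parse a Content-Security-Policy header into {directive: [sources]}."""
    policy = {}
    for part in header.split(";"):
        tokens = part.split()
        if tokens:
            policy[tokens[0].lower()] = tokens[1:]
    return policy

def allows(policy, directive, source):
    """Check whether a source is allowed for a fetch directive,
    falling back to default-src when the directive is absent."""
    sources = policy.get(directive, policy.get("default-src", []))
    return source in sources
```

A browser extension that injects or rewrites such a header changes the effective `policy` dict the page is evaluated against, which is exactly the class of modification the thesis measures and lets servers detect.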
6.
  • Jandinger, Magnus (author)
  • On a Need to Know Basis : A Conceptual and Methodological Framework for Modelling and Analysis of Information Demand in an Enterprise Context
  • 2016
  • Doctoral thesis (other academic/artistic) abstract
    • While the amount of information readily available to workers in information- and knowledge-intensive business and industrial contexts only seems to increase every day, those workers continue to have difficulties finding and managing relevant and needed information, despite the numerous technical, organisational, and practical approaches promising a remedy to the situation. This dissertation claims that the main reason for the shortcomings of such approaches is a lack of understanding of the underlying information demand that people and organisations have in relation to performing work tasks. Furthermore, it is argued that while this issue would remain a complex problem even with a better understanding of the underlying mechanisms, it would at least be manageable. To facilitate the development of demand-driven information solutions and organisational change with respect to information demand, the dissertation first provides the empirical and theoretical foundation for a method for modelling and analysing information demand in enterprise contexts, and then presents an actual method. As part of this effort, a conceptual framework for reasoning about information demand is presented together with experiences from a number of empirical cases focusing on both method generation and validation. A methodological framework is then defined, based on principles and ideas grounded in the empirical background, and finally a number of method components are introduced in terms of notations, conceptual focus, and procedural approaches for capturing and representing various aspects of information demand. The dissertation ends with a discussion of the validity of the presented method and results in terms of utility, relevance, and applicability with respect to industrial contexts and needs, as well as possible and planned future improvements and developments of the method.
7.
  • Besker, Terese, 1970 (author)
  • Technical Debt: An empirical investigation of its harmfulness and on management strategies in industry
  • 2020
  • Doctoral thesis (other academic/artistic) abstract
    • Background: In order to survive in today's fast-growing and ever-changing business environment, software companies need to continuously deliver customer value, from both a short- and a long-term perspective. However, the potential long-term and far-reaching negative effects of shortcuts and quick fixes made during the software development lifecycle, described as Technical Debt (TD), can impede the software development process. Objective: The overarching goal of this Ph.D. thesis is twofold. The first goal is to empirically study and understand in what way, and to what extent, TD influences today's software development work, specifically with the intention of providing more quantitative insight into the field. The second is to understand which initiatives can reduce the negative effects of TD and which factors are important to consider when implementing such initiatives. Method: To achieve these objectives, a combination of quantitative and qualitative research methodologies is used, including interviews, surveys, a systematic literature review, a longitudinal study, analysis of documents, correlation analysis, and statistical tests. In seven of the eleven studies included in this Ph.D. thesis, a combination of multiple research methods is used to achieve high validity. Results: We present results showing that software suffering from TD causes various negative effects on both the software and the development process. These negative effects are illustrated from a technical, financial, and developer's working-situation perspective. The studies also identify several initiatives that can be undertaken to reduce the negative effects of TD. Conclusion: The results show that software developers report wasting 23% of their working time due to TD and that TD requires them to perform additional time-consuming work activities.
The results also show that, compared to all other types of TD, architectural TD has the greatest negative impact on daily software development work, and that TD negatively affects several different software quality attributes. Further, the results show that TD reduces developer morale. Moreover, the findings show that intentionally introducing TD in startup companies can allow the startups to cut development time, enabling faster feedback and increased revenue, preserving resources, and decreasing risk, thereby contributing to beneficial effects. The thesis also identifies several initiatives that can be undertaken to reduce the negative effects of TD, such as introducing a tracking process in which TD items are recorded in an official backlog. The findings also indicate an unfulfilled potential in how managers can influence the manner in which software practitioners address TD.
8.
  • Sundvall, Erik, 1973- (author)
  • Scalability and Semantic Sustainability in Electronic Health Record Systems
  • 2013
  • Doctoral thesis (other academic/artistic) abstract
    • This work is a small contribution to the greater goal of making software systems used in healthcare more useful and sustainable. To come closer to that goal, health record data will need to be more computable and easier to exchange between systems. Interoperability refers to getting systems to work together, and semantics concerns the study of meanings. If semantic interoperability is achieved, then information entered in one information system is usable in other systems and reusable for many purposes. Scalability refers to the extent to which a system can gracefully grow by adding more resources, while sustainability refers more to how best to use available limited resources. Both aspects are important. The main focus and aim of the thesis is to increase knowledge about how to support scalability and semantic sustainability. It reports explorations of how to apply aspects of the above to Electronic Health Record (EHR) systems, associated infrastructure, data structures, terminology systems, user interfaces and their mutual boundaries. Using terminology systems is one way to improve the computability and comparability of data. Modern complex ontologies and terminology systems can contain hundreds of thousands of concepts, each with many kinds of relationships to multiple other concepts. This makes visualization challenging. Many visualization approaches designed to show the local neighbourhood of a single concept node do not scale well to larger sets of nodes. The interactive TermViz approach described in this thesis is designed to help users navigate and comprehend the context of several nodes simultaneously. Two applications are presented where TermViz aids management of the boundary between EHR data structures and the terminology system SNOMED CT. The amount of available time from people skilled in health informatics is limited; adequate methods and tools are required to develop, maintain and reuse health-IT solutions in a sustainable way.
Multi-level modelling, with a fixed reference model and another layer of flexible, reusable 'archetypes' for domain-specific data structures, is an approach with that aim used in openEHR and the ISO 13606 standard. This approach, including learning, implementing and managing it, is explored from different angles in this thesis. An architecture applying Representational State Transfer (REST) to archetype-based EHR systems, in order to address scalability, is presented. Combined with archetyping, this architecture also aims to enable a sustainable way of continuously evolving multi-vendor EHR solutions. An experimental open source implementation of it, aimed at learning and prototyping, is also presented. Manually changing database storage structures every time new versions of archetypes and associated data structures appear is unlikely to be a sustainable activity; thus storage systems that can handle change with minimal manual intervention are desirable. Initial explorations of performance and scalability in such systems are also reported. Graphical user interfaces focused on EHR navigation, time perspectives and highlighting of EHR content are also presented, illustrating what can be done with computable health record data and the presented approaches. Desirable aspects of semantic sustainability are discussed, including sustainable use of limited resources (such as the available time of skilled people) and reduction of unnecessary risks. A semantic sustainability perspective should be inspired and informed by research in complex systems theory, and should also strive for high awareness of when and where technical debt is being built up. Semantic sustainability is a shared responsibility. The combined results contribute to increasing knowledge about ways to support scalability and semantic sustainability in the context of electronic health record systems.
Supporting tools, architectures and approaches are additional contributions.
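The two-level modelling idea above can be sketched as a toy validator in which the "reference model" is simply a dict of fields and the "archetype" supplies the domain constraints (required fields, types, numeric ranges). This illustrates the principle only; it is not the openEHR Archetype Object Model, and the field names are invented examples.

```python
def validate(instance, archetype):
    """Check a flat data instance against archetype-style constraints.
    Returns a list of human-readable constraint violations (empty if valid)."""
    errors = []
    for field, rule in archetype.items():
        if field not in instance:
            if rule.get("required", False):
                errors.append(f"missing required field: {field}")
            continue
        value = instance[field]
        if not isinstance(value, rule["type"]):
            errors.append(f"{field}: expected {rule['type'].__name__}")
        elif "range" in rule:
            lo, hi = rule["range"]
            if not lo <= value <= hi:
                errors.append(f"{field}: {value} outside [{lo}, {hi}]")
    return errors
```

The point of the layering is that when a domain constraint changes, only the archetype dict changes; the generic validator (the "reference model" software) stays fixed, which is what makes the approach sustainable across archetype versions.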
9.
  • Götze, Jana (author)
  • Talk the walk : Empirical studies and data-driven methods for geographical natural language applications
  • 2016
  • Doctoral thesis (other academic/artistic) abstract
    • Finding the way in known and unknown city environments is a task that all pedestrians carry out regularly. Current technology allows the use of smart devices as aids that can give automatic verbal route directions based on the pedestrian's current position. Many such systems only give route directions, but are unable to interact with the user to answer clarifications or understand other verbal input. Furthermore, they rely mainly on conveying the quantitative information that can be derived directly from geographic map representations: 'In 300 meters, turn into High Street'. However, humans reason about space predominantly in a qualitative manner, and it is less cognitively demanding for them to understand route directions that express such qualitative information, such as 'At the church, turn left' or 'You will see a café'. This thesis addresses three challenges that an interactive wayfinding system faces in the context of natural language generation and understanding: in a given situation, it must decide whether it is appropriate to give an instruction based on a relative direction, it must be able to select salient landmarks, and it must be able to resolve the user's references to objects. To address these challenges, this thesis takes a data-driven approach: data was collected in a large-scale city environment to derive decision-making models from pedestrians' behavior. As a representation of the geographical environment, all studies use the crowd-sourced OpenStreetMap database. The thesis presents methodologies for how the geographical and language data can be used to derive models that can be incorporated into an automatic route direction system.
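The quantitative-to-qualitative step described above can be sketched as a mapping from a numeric turn angle to the verbal instruction a pedestrian finds easier to follow. The angle thresholds are illustrative assumptions, not values derived in the thesis.

```python
def qualitative_turn(current_heading, target_bearing, tolerance=30):
    """Map a quantitative turn (degrees, compass convention) to a
    qualitative instruction. Normalizes the difference to (-180, 180]
    so that, e.g., heading 0 toward bearing 270 reads as a left turn."""
    delta = (target_bearing - current_heading + 180) % 360 - 180
    if abs(delta) <= tolerance:
        return "go straight"
    if abs(delta) >= 180 - tolerance:
        return "turn around"
    return "turn right" if delta > 0 else "turn left"
```

A full system such as the one studied would combine this with landmark selection, e.g. emitting "At the church, turn left" only when a salient landmark is available at the decision point.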
10.
  • Homem, Irvin, 1985- (author)
  • Advancing Automation in Digital Forensic Investigations
  • 2018
  • Doctoral thesis (other academic/artistic) abstract
    • Digital forensics is used to aid traditional preventive security mechanisms when they fail to curtail sophisticated and stealthy cybercrime events. The digital forensic investigation process is largely manual in nature, or at best quasi-automated, requiring a highly skilled labour force and a sizeable time investment. Industry-standard tools are evidence-centric, automate only a few precursory tasks (e.g. parsing and indexing), have limited capabilities for integrating multiple evidence sources, and are always human-driven. These challenges are exacerbated in today's increasingly computerized and highly networked environment. The volumes of digital evidence to be collected and analyzed have increased, and so has the diversity of digital evidence sources involved in a typical case. This further handicaps digital forensics practitioners, labs and law enforcement agencies, causing delays in investigations and legal systems due to backlogs of cases. Improved efficiency of the digital investigation process is needed, in terms of increasing speed and reducing human effort. This study aims to achieve this time and effort reduction by advancing automation within the digital forensic investigation process. Using a Design Science research approach, artifacts are designed and developed to address these practical problems. In summary, the requirements and architecture of a system for automating digital investigations in highly networked environments are designed. The architecture initially focuses on automating the identification and acquisition of digital evidence, while later versions focus on full automation and self-organization of devices for all phases of the digital investigation process. Part of the remote evidence acquisition capability of this system architecture is implemented as a proof of concept.
The speed and reliability of capturing digital evidence from remote mobile devices over a client-server paradigm are evaluated. A method for the uniform representation and integration of multiple diverse evidence sources, enabling automated correlation, simple reasoning and querying, is developed and tested; this method is aimed at automating the analysis phase of digital investigations. Machine Learning (ML)-based triage methods are developed and tested to evaluate the feasibility and performance of using such techniques to automate the identification of priority digital evidence fragments. Models from these ML methods are evaluated in identifying network protocols within DNS-tunneled network traffic. A large dataset is also created for future research in ML-based triage for identifying suspicious processes for memory forensics. An ex ante evaluation shows that the designed system architecture enables individual devices to participate in the entire digital investigation process, contributing their processing power towards alleviating the burden on the human analyst. Experiments show that remote evidence acquisition from mobile devices over networks is feasible, but that a single-TCP-connection paradigm scales poorly. A proof-of-concept experiment demonstrates the viability of automated integration, correlation and reasoning over multiple diverse evidence sources using semantic web technologies. Experimentation also shows that ML-based triage methods can prioritize certain digital evidence sources, for acquisition or analysis, with up to 95% accuracy. The artifacts developed in this study provide concrete ways to enhance automation in the digital forensic investigation process, increasing investigation speed and reducing the amount of costly human intervention needed.
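One simple feature commonly used in DNS-tunneling triage of the kind evaluated above is that payloads encoded into subdomain labels have unusually high character entropy compared with natural hostnames. The sketch below is a hand-rolled heuristic for illustration; the threshold is an assumption of ours, not a value from the thesis's ML models.

```python
import math
from collections import Counter

def label_entropy(label):
    """Shannon entropy (bits per character) of a DNS label; tunneled
    payloads encoded as subdomains tend to score higher than real names."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def triage(queries, threshold=3.5):
    """Rank DNS query names by first-label entropy and flag likely
    tunneling candidates for priority acquisition or analysis."""
    scored = [(label_entropy(q.split(".")[0]), q) for q in queries]
    scored.sort(reverse=True)
    return [q for score, q in scored if score > threshold]
```

An ML-based triage model would replace the fixed threshold with a classifier trained over many such features, which is how accuracies like the 95% reported above become reachable.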
Publication type
Content type
other academic/artistic (340)
Author/editor
Johannesson, Paul, P ... (11)
Révay, Péter, Profes ... (5)
Wangler, Benkt, Prof ... (5)
Sandblad, Bengt (5)
Nilsson, Anders G. (5)
Ekenberg, Love, Prof ... (5)
Goldkuhl, Göran (5)
Grönlund, Åke, Profe ... (4)
Holmström, Jonny, Pr ... (4)
Goldkuhl, Göran, Pro ... (4)
Yngström, Louise, Pr ... (4)
Nilsson, Anders G., ... (4)
Danielson, Mats, Pro ... (4)
Johansson, Björn, As ... (3)
Ekenberg, Love (3)
Persson, Anne (3)
Ståhlbröst, Anna, 19 ... (3)
Carlsson, Sven (3)
Karlsson, Fredrik, P ... (3)
Persson, Anne, Profe ... (3)
Johnson, Pontus, Pro ... (3)
Rusu, Lazar, Profess ... (2)
Yngström, Louise (2)
Lundh Snis, Ulrika, ... (2)
Timpka, Toomas (2)
Johannesson, Paul (2)
Peng, Zebo (2)
Hansson, Henrik, Ass ... (2)
Fischer-Hübner, Simo ... (2)
Kjellin, Harald, Pro ... (2)
Elragal, Ahmed (2)
Hägglund, Sture (2)
Carlsson, Christer, ... (2)
Ståhlbröst, Anna, Pr ... (2)
Boman, Magnus, Profe ... (2)
Ng, Amos H. C., 1970 ... (2)
Carlsson, Sven, Prof ... (2)
Asker, Lars, Docent (2)
Nordström, Lars, Pro ... (2)
Gulliksen, Jan (2)
Zhang, TingTing (2)
Juell-Skielse, Gusta ... (2)
Carlsson, Sten (2)
Sundblad, Yngve (2)
Åhlfeldt, Rose-Mhari ... (2)
Sandkuhl, Kurt, Prof ... (2)
Cronholm, Stefan (2)
Gröndahl, Fredrik, A ... (2)
Kjellberg, Torsten (2)
Bergquist, Magnus, P ... (2)
University
Stockholms universitet (69)
Göteborgs universitet (38)
Kungliga Tekniska Högskolan (36)
Linköpings universitet (31)
Chalmers tekniska högskola (30)
Umeå universitet (27)
Högskolan i Skövde (26)
Uppsala universitet (20)
Linnéuniversitetet (16)
Karlstads universitet (15)
Örebro universitet (13)
Mittuniversitetet (13)
Högskolan i Halmstad (11)
RISE (8)
Jönköping University (7)
Högskolan Dalarna (6)
Högskolan Väst (5)
Lunds universitet (5)
Luleå tekniska universitet (4)
Högskolan i Gävle (4)
Södertörns högskola (4)
Högskolan i Borås (4)
Mälardalens universitet (3)
Malmö universitet (1)
Handelshögskolan i Stockholm (1)
Försvarshögskolan (1)
Blekinge Tekniska Högskola (1)
VTI - Statens väg- och transportforskningsinstitut (1)
Language
English (312)
Swedish (26)
German (1)
Portuguese (1)
Research subject (UKÄ/SCB)
Natural sciences (339)
Social sciences (59)
Engineering (31)
Humanities (5)
Medicine and health sciences (4)

Year


 