SwePub
Results list for the search "hsv:(NATURVETENSKAP) hsv:(Data och informationsvetenskap) ;mspu:(licentiatethesis)"

Search: hsv:(NATURVETENSKAP) hsv:(Data och informationsvetenskap) > Licentiate thesis

  • Results 1-10 of 1645
1.
  • Pir Muhammad, Amna, 1990 (author)
  • Managing Human Factors and Requirements in Agile Development of Automated Vehicles: An Exploration
  • 2022
  • Licentiate thesis (other academic/artistic) abstract
    • Context: Automated Vehicle (AV) technology has evolved significantly in complexity and impact; it is expected to ultimately change urban transportation. However, research shows that vehicle automation can only live up to this expectation if it is defined with human capabilities and limitations in mind. Therefore, it is necessary to bring human factors knowledge to AV developers. Objective: This thesis aims to empirically study how we can effectively bring the required human factors knowledge into large-scale agile AV development. The research goals are 1) to explore requirements engineering and human factors in agile AV development, 2) to investigate the problems of requirements engineering, human factors, and agile ways of working in AV development, and 3) to demonstrate initial solutions to existing problems in agile AV development. Method: We conducted this research in close collaboration with industry, using different empirical methodologies to collect data, including interviews, workshops, and document analysis. To gain in-depth insights, we did a qualitative exploratory study to investigate the problem and used a design science approach to develop initial solutions in several iterations. Findings and Conclusions: We found that applying human factors knowledge effectively is one of the key problem areas that need to be solved in agile development of artificial intelligence (AI)-intense systems. This motivated us to do an in-depth interview study on how to manage human factors knowledge during AV development. From our data, we derived a working definition of human factors for AV development, discovered the relevant properties of agile and human factors, and defined implications for agile ways of working, managing human factors knowledge, and managing requirements. The design science approach allowed us to identify challenges related to agile requirements engineering in three case companies over several iterations. Based on these three case studies, we developed a solution strategy to resolve the RE challenges in agile AV development. Moreover, we derived building blocks and described guidelines for the creation of a requirements strategy, which should describe how requirements are structured, how work is organized, and how RE is integrated into the agile work and feature flow. Future Outlook: In future work, I plan to define a concrete requirements strategy for human factors knowledge in large-scale agile AV development. It could help establish clear communication channels and practices for incorporating explicit human factors knowledge into AI-based large-scale agile AV development.
2.
  • Blanch, Krister, 1991 (author)
  • Beyond-application datasets and automated fair benchmarking
  • 2023
  • Licentiate thesis (other academic/artistic) abstract
    • Beyond-application perception datasets are generalised datasets that emphasise the fundamental components of good machine perception data. When analysing the history of perception datasets, notable trends suggest that the design of a dataset typically aligns with an application goal. Instead of focusing on a specific application, beyond-application datasets look at capturing high-quality, high-volume data from a highly kinematic environment, for the purpose of aiding algorithm development and testing in general. Algorithm benchmarking is a cornerstone of autonomous systems development, and allows developers to demonstrate their results in a comparative manner. However, most benchmarking systems allow developers to use their own hardware or select favourable data. There is also little focus on run-time performance and consistency, with benchmarking systems instead showcasing algorithm accuracy. When combining beyond-application dataset generation with methods for fair benchmarking, there is also the dilemma of how to provide the dataset to developers for this benchmarking, as the result of high-volume, high-quality dataset generation is a significant increase in dataset size compared to traditional perception datasets. This thesis presents the first results of attempting the creation of such a dataset. The dataset was built using a maritime platform, selected due to the highly dynamic environment presented on water. The design and initial testing of this platform are detailed, as well as methods of sensor validation. The thesis then presents a method of fair benchmarking, utilising remote containerisation in a way that allows developers to present their software to the dataset, instead of having to first store a local copy. To test this dataset and automatic online benchmarking, a number of reference algorithms were required for initial results. Three algorithms were built, using the data from three different sensors captured on the maritime platform. Each algorithm calculates vessel odometry, and the automatic benchmarking system was utilised to show the accuracy and run-time performance of these algorithms. It was found that the containerised approach alleviated data management concerns, prevented inflated accuracy results, and demonstrated precisely how computationally intensive each algorithm was. A hedged sketch of the containerised benchmarking step follows this entry.
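As a rough, hedged illustration of the containerised benchmarking idea above, the Java sketch below launches a developer-supplied container against a read-only mount of the dataset and measures wall-clock run time on the host's fixed hardware. The image name and dataset path are illustrative assumptions, not the thesis's actual infrastructure.

```java
import java.io.IOException;

// Hypothetical harness: the submitted algorithm comes to the data, so the
// dataset never has to be copied out, and timing happens on fixed hardware.
public class ContainerBenchmark {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "docker", "run", "--rm",
                "--network", "none",                 // no network: data cannot be exfiltrated
                "-v", "/datasets/maritime:/data:ro", // dataset mounted read-only (assumed path)
                "submitted-odometry-image");         // developer-supplied image (assumed name)
        pb.inheritIO();

        long start = System.nanoTime();
        int exit = pb.start().waitFor();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.printf("exit=%d, wall time=%d ms%n", exit, elapsedMs);
    }
}
```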
3.
  • Chatterjee, Bapi, 1982 (author)
  • Efficient Implementation of Concurrent Data Structures on Multi-core and Many-core Architectures
  • 2015
  • Licentiate thesis (other academic/artistic) abstract
    • Synchronization of concurrent threads is the central problem in designing efficient concurrent data structures. The compute systems widely available on the market are increasingly heterogeneous, involving multi-core Central Processing Units (CPUs) and many-core Graphics Processing Units (GPUs). This thesis contributes to the research on efficient synchronization in concurrent data structures in more than one way. It is divided into two parts. In the first part, a novel design of a Set Abstract Data Type (ADT) based on an efficient lock-free Binary Search Tree (BST) with improved amortized bounds on the time complexity of the set operations Add, Remove and Contains is presented. In the second part, a comprehensive evaluation of concurrent Queue implementations on multi-core CPUs as well as many-core GPUs is presented. Efficient lock-free BST: To the best of our knowledge, the lock-free BST presented in this thesis is the first to achieve an amortized complexity of O(H(n)+c) for all Set operations, where H(n) is the height of a BST on n nodes and c is the contention measure. The presented lock-free BST algorithm also comes with improved disjoint-access-parallelism compared to previously existing concurrent BST algorithms. The algorithm uses single-word compare-and-swap (CAS) primitives and is linearizable. We implemented the algorithm in Java, and it shows good scalability. Evaluation of concurrent data structures: We evaluated the performance of a number of concurrent FIFO Queue algorithms on multi-core CPUs and many-core GPUs. We studied the portability of existing concurrent Queue designs from CPUs to GPUs, which are inherently designed for SIMD programs. We observed that, in general, concurrent queues lend themselves to efficient implementation on GPUs, thanks to faster cache memory and better performance of atomic synchronization primitives such as CAS. To the best of our knowledge, this is the first attempt to evaluate a concurrent data structure on GPUs. A minimal CAS illustration follows this entry.
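The thesis's BST relies on single-word compare-and-swap (CAS). As a minimal illustration of CAS-based lock-free updates in Java, here is a classic Treiber stack rather than the far more involved tree algorithm itself; a generic sketch, not the thesis's data structure.

```java
import java.util.concurrent.atomic.AtomicReference;

// Generic CAS illustration: a lock-free Treiber stack.
public class TreiberStack<T> {
    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    public void push(T value) {
        Node<T> node = new Node<>(value);
        Node<T> current;
        do {
            current = top.get();
            node.next = current;
            // CAS succeeds only if no other thread changed `top` in between;
            // on failure we re-read and retry.
        } while (!top.compareAndSet(current, node));
    }

    public T pop() {
        Node<T> current, next;
        do {
            current = top.get();
            if (current == null) return null; // empty stack
            next = current.next;
        } while (!top.compareAndSet(current, next));
        return current.value;
    }
}
```

The retry loop is what makes the structure lock-free: a failed CAS means another thread made progress, so the operation simply re-reads and tries again.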
4.
  • Tossou, Aristide, 1989 (author)
  • Privacy in the Age of Artificial Intelligence
  • 2017
  • Licentiate thesis (other academic/artistic) abstract
    • An increasing number of people are using the Internet in their daily life. Indeed, more than 40% of the world population have access to the Internet, while Facebook (one of the top social networks on the web) is actively used by more than 1.3 billion users each day (Statista 2017). This huge number of users creates an abundance of user data containing personal information. These data are becoming valuable to companies and are used in various ways to enrich user experience or increase revenue. This has led many citizens and politicians to be concerned about their privacy on the Internet, to such an extent that the European Union issued a "Right to be Forgotten" ruling, reflecting the desire of many individuals to restrict the use of their information. As a result, many online companies pledged to collect or share user data anonymously. However, anonymisation is not enough and makes no sense in many cases. For example, an MIT graduate was able to easily re-identify the private medical data of Governor William Weld of Massachusetts from supposedly anonymous records released by the Group Insurance Commission. All she did was link the insurance data with the publicly available voter registration list and some background knowledge (Ohm 2009). These shortcomings have led to the development of a more rigorous mathematical framework for privacy: differential privacy. Its main characteristic is to bound the information one can gain from released data, no matter what side information is available. In this thesis, we present differentially private algorithms for the multi-armed bandit problem. This is a well-known multi-round game that originally stemmed from clinical trials applications and is now one promising solution to enrich user experience in the booming online advertising and recommendation systems. However, as recommendation systems are inherently based on user data, there is always some private information leakage. In our work, we show how to minimise this privacy loss while maintaining the effectiveness of such algorithms. In addition, we show how one can take advantage of the correlation structure inherent in a user graph, such as the one arising from a social network. A generic sketch of a differentially private bandit follows this entry.
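To make the combination of bandits and differential privacy concrete, here is a hedged Java sketch of arm selection where each released empirical mean is perturbed with Laplace noise. It is a generic illustration assuming rewards in [0, 1], not the algorithms developed in the thesis.

```java
import java.util.Random;

// Generic sketch: greedy arm selection over Laplace-noised empirical means.
public class PrivateBandit {
    private final double[] sums;
    private final int[] counts;
    private final double epsilon; // privacy budget per released mean
    private final Random rng = new Random();

    public PrivateBandit(int arms, double epsilon) {
        this.sums = new double[arms];
        this.counts = new int[arms];
        this.epsilon = epsilon;
    }

    // Laplace(0, b) noise via inverse-CDF sampling.
    private double laplace(double b) {
        double u = rng.nextDouble() - 0.5;
        return -b * Math.signum(u) * Math.log(1 - 2 * Math.abs(u));
    }

    // The mean of n rewards in [0,1] has sensitivity 1/n, so the scale is 1/(n*epsilon).
    private double noisyMean(int arm) {
        int n = Math.max(counts[arm], 1);
        return sums[arm] / n + laplace(1.0 / (n * epsilon));
    }

    public int selectArm() {
        int best = 0;
        double bestValue = noisyMean(0);
        for (int arm = 1; arm < sums.length; arm++) {
            double v = noisyMean(arm);
            if (v > bestValue) { bestValue = v; best = arm; }
        }
        return best;
    }

    public void update(int arm, double reward) {
        sums[arm] += reward;
        counts[arm]++;
    }
}
```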
5.
  • Brunetta, Carlo, 1992 (author)
  • Cryptographic Tools for Privacy Preservation and Verifiable Randomness
  • 2018
  • Licentiate thesis (other academic/artistic) abstract
    • Our society revolves around communication. The Internet is the biggest, cheapest and fastest digital communication channel used nowadays. Due to the continuous increase of daily communication among people worldwide, more and more data might be stolen, misused or tampered with. We need to protect our communications and data by achieving privacy and confidentiality. Although the two terms "privacy" and "confidentiality" are often used as synonyms, in cryptography they are modelled in very different ways. Intuitively, cryptography can be seen as a tool-box in which every scheme, protocol or primitive is a tool that can be used to solve specific problems and provide specific communication security guarantees such as confidentiality. Privacy is instead not easy to describe and capture, since it often depends on "which" information is available, "how" these data are used and/or "who" has access to our data. This licentiate thesis raises research questions and proposes solutions related to: the possibility of defining encryption schemes that provide both strong security and privacy guarantees; the importance of designing cryptographic protocols that are compliant with real-life privacy laws or regulations; and the necessity of defining a post-quantum mechanism to achieve the verifiability of randomness. In more detail, the thesis achievements are: (a) defining a new class of encryption schemes, by weakening the correctness property, that achieves Differential Privacy (DP), i.e., a mathematically sound definition of privacy; (b) formalizing a security model for a subset of articles in the European General Data Protection Regulation (GDPR), and designing and implementing a cryptographic protocol based on the proposed GDPR-oriented security model; and (c) proposing a methodology to compile a post-quantum interactive protocol for proving the correct computation of a pseudorandom function into a non-interactive one, yielding a post-quantum mechanism for verifiable randomness. The standard definition of differential privacy is recalled below.
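For reference, the standard definition of ε-differential privacy that both this abstract and the previous one appeal to: a randomized mechanism M is ε-differentially private if, for all datasets D and D' differing in a single record and all sets S of outputs,

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S].
```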
6.
  • Dolonius, Dan, 1985 (author)
  • Sparse Voxel DAGs for Shadows and for Geometry with Colors
  • 2018
  • Licentiate thesis (other academic/artistic) abstract
    • Triangles are probably the most common format for shapes in computer graphics. Nevertheless, when high detail is desired, Sparse Voxel Octrees (SVO) and Sparse Voxel Directed Acyclic Graphs (DAG) can be considerably more memory efficient. One of the first practical use cases for DAGs was to use the structure to represent precomputed shadows. However, previous methods were very time consuming in building the DAG and did not support any attributes other than discretized geometry. Furthermore, when used for scene object representation, the DAGs lacked proper support for properties such as object colors. The focus of this thesis is to speed up the build times of the DAG and to allow other, important, attributes such as colors to be encoded. This thesis is a collection of three papers. In Paper I, we solve the problem of slow construction times while also further compressing the DAG, allowing much faster feedback to an artist making changes to a scene and also opening up the possibility of recomputing the DAG at run time for slowly moving shadows. If a unique color per voxel is desired, which uncompressed would require 3 bytes per voxel, we realize that the benefit from compressing the geometry (down to or even below one bit per voxel) is rendered practically useless; we thus need to find a way to compress the colors as well (a worked memory calculation follows this entry). In Paper IIA, we solve this issue by mapping the voxel colors to a texture, allowing for the use of conventional compression algorithms, as well as a novel format designed for real-time performance. In Paper IIB, we further significantly improve the compression.
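To see why uncompressed per-voxel colors dwarf compressed geometry, take an illustrative 1024^3 grid (the resolution is an assumption chosen for round numbers), using the abstract's figures of 3 bytes per voxel for colors and 1 bit per voxel for geometry:

```latex
N = 1024^{3} = 2^{30} \text{ voxels:} \qquad
\underbrace{3N \text{ B} = 3\,\mathrm{GiB}}_{\text{uncompressed colors}}
\quad \text{vs.} \quad
\underbrace{N/8 \text{ B} = 128\,\mathrm{MiB}}_{\text{geometry at 1 bit/voxel}}
```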
7.
  • Hagström, Lovisa, 1995 (author)
  • A Picture is Worth a Thousand Words: Natural Language Processing in Context
  • 2023
  • Licentiate thesis (other academic/artistic) abstract
    • Modern NLP models learn language from lexical co-occurrences. While this method has allowed for significant breakthroughs, it has also exposed potential limitations of modern NLP methods. For example, NLP models are prone to hallucinate, represent a biased world view, and may learn spurious correlations that solve the dataset rather than the task at hand. This is to some extent a consequence of training the models exclusively on text. In text, concepts are only defined by the words that accompany them, and the information in text is incomplete due to reporting bias. In this work, we investigate whether additional context in the form of multimodal information can be used to improve on the representations of modern NLP models. Specifically, we consider BERT-based vision-and-language models that receive additional context from images. We hypothesize that visual training should primarily improve the visual commonsense knowledge of the models, i.e. obvious knowledge about visual properties. To probe for this knowledge we develop the evaluation tasks Memory Colors and Visual Property Norms. Generally, we find that the vision-and-language models considered do not outperform their unimodal counterparts. In addition, we find that the models switch their answer depending on the prompt when evaluated for the same type of knowledge. We conclude that more work is needed on understanding and developing vision-and-language models, and that extra focus should be put on how to successfully fuse image and language processing. We also reconsider the usefulness of measuring commonsense knowledge in models that cannot represent factual knowledge.
8.
  • Munappy, Aiswarya Raj, 1990 (author)
  • Data management and Data Pipelines: An empirical investigation in the embedded systems domain
  • 2021
  • Licentiate thesis (other academic/artistic) abstract
    • Context: Companies are increasingly collecting data from all possible sources to extract insights that help in data-driven decision-making. Increased data volume, variety, and velocity, and the impact of poor-quality data on the development of data products, are leading companies to look for an improved data management approach that can accelerate the development of high-quality data products. Further, AI is being applied in a growing number of fields, and thus it is evolving as a horizontal technology. Consequently, AI components are increasingly being integrated into embedded systems along with electronics and software. We refer to these systems as AI-enhanced embedded systems. Given the strong dependence of AI on data, this expansion also creates a new space for applying data management techniques. Objective: The overall goal of this thesis is to empirically identify the data management challenges encountered during the development and maintenance of AI-enhanced embedded systems, propose an improved data management approach, and empirically validate the proposed approach. Method: To achieve the goal, we conducted this research in close collaboration with Software Center companies using a combination of different empirical research methods: case studies, literature reviews, and action research. Results and conclusions: This research provides five main results. First, it identifies key data management challenges specific to Deep Learning models developed at embedded system companies. Second, it examines practices such as DataOps and data pipelines that help to address data management challenges. We observed that DataOps is the data management practice that best improves data quality and reduces the time to develop data products. The data pipeline is the critical component of DataOps that manages the data life cycle activities. The study also identifies the potential faults at each step of the data pipeline and the corresponding mitigation strategies. Finally, the data pipeline model was realized as a small data pipeline, and the percentage of saved data dumps was calculated through the implementation. Future work: As future work, we plan to realize the conceptual data pipeline model so that companies can build customized robust data pipelines. We also plan to analyze the impact and value of data pipelines in cross-domain AI systems and data applications, and to develop an AI-based fault detection and mitigation system suitable for data pipelines. A minimal pipeline sketch with per-stage fault checks follows this entry.
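As a hedged illustration of the pipeline idea, the Java sketch below chains stages that each validate their own output, echoing the thesis's observation that faults can arise at every pipeline step and need mitigation. The stage names and checks are invented for illustration.

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Illustrative two-stage pipeline with per-stage fault detection.
public class MiniPipeline {
    record Stage<I, O>(String name, Function<I, O> transform, Predicate<O> check) {}

    static <I, O> O runStage(Stage<I, O> stage, I input) {
        O out = stage.transform().apply(input);
        if (!stage.check().test(out)) {
            // Mitigation here is just failing fast; a real pipeline might
            // retry, quarantine the batch, or fall back to a cached dump.
            throw new IllegalStateException("Fault detected at stage: " + stage.name());
        }
        return out;
    }

    public static void main(String[] args) {
        Stage<List<String>, List<String>> clean = new Stage<>("clean",
                rows -> rows.stream().filter(r -> !r.isBlank()).toList(),
                rows -> !rows.isEmpty());
        Stage<List<String>, List<Double>> parse = new Stage<>("parse",
                rows -> rows.stream().map(Double::parseDouble).toList(),
                vals -> vals.stream().noneMatch(v -> v.isNaN()));

        System.out.println(runStage(parse, runStage(clean, List.of("1.0", "", "2.5"))));
        // prints [1.0, 2.5]
    }
}
```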
9.
  • Norlund, Tobias, 1991 (author)
  • Improving Language Models Using Augmentation and Multi-Modality
  • 2023
  • Licentiate thesis (other academic/artistic) abstract
    • Language models have become a core component in modern Natural Language Processing (NLP) as they constitute a powerful base that is easily adaptable to many language processing tasks. Part of their strength lies in their ability to embed associations representing general world knowledge. However, the associations formed by these models are brittle, even when the models are scaled to huge sizes and trained on massive amounts of data. This, in combination with other problems such as lack of attributability and high costs, motivates us to investigate other methods to improve on these aspects. In this thesis, we investigate methods that augment language models with additional contextual information, for the purpose of simplifying the language modeling problem and increasing the formation of desirable associations. We also investigate whether multi-modal data can assist in forming such associations that could otherwise be difficult to obtain from textual data only. In our experiments, we show augmentation to be effective toward these ends, in both a textual and a multi-modal case. We also demonstrate that visual data can assist in forming knowledge-representing associations in a language model.
10.
  • Sweidan, Dirar (author)
  • Data-driven decision support in digital retailing
  • 2023
  • Licentiate thesis (other academic/artistic) abstract
    • In the digital era and the advent of artificial intelligence, digital retailing has emerged as a notable shift in commerce. It empowers e-tailers with data-driven insights and predictive models to navigate a variety of challenges, driving informed decision-making and strategic formulation. While predictive models are fundamental for making data-driven decisions, this thesis spotlights binary classifiers as a central focus. These classifiers reveal the complexities of two real-world problems, marked by their particular properties. Specifically, when binary decisions are made based on predictions, relying solely on predicted class labels is insufficient because of variations in classification accuracy. Furthermore, different prediction mistakes carry different costs, which impacts the utility. To confront these challenges, probabilistic predictions, often unexplored or uncalibrated, are a promising alternative to class labels. Therefore, machine learning modelling and calibration techniques are explored, employing benchmark data sets alongside empirical studies grounded in industrial contexts. These studies analyse predictions and their associated probabilities across diverse data segments and settings. As a proof of concept, the thesis found that certain algorithms are inherently calibrated, while others demonstrate reliability once their probabilities are calibrated. In both cases, the thesis concludes that utilising the top predictions with the highest probabilities increases the precision level and minimises false positives. In addition, adopting well-calibrated probabilities is a powerful alternative to mere class labels. Consequently, by transforming probabilities into reliable confidence values through classification with a rejection option, a pathway emerges wherein confident and reliable predictions take centre stage in decision-making; a small sketch of this rejection option follows this entry. This enables e-tailers to form distinct strategies based on these predictions and optimise their utility. This thesis highlights the value of calibrated models and probabilistic prediction and emphasises their significance in enhancing decision-making. The findings have practical implications for e-tailers leveraging data-driven decision support. Future research should focus on producing an automated system that prioritises high and well-calibrated probability predictions while discarding others, and optimising utilities based on the costs and gains associated with the different prediction outcomes, to enhance decision support for e-tailers.
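A minimal Java sketch of the rejection option described above: act on a prediction only when its (assumed well-calibrated) probability clears a confidence threshold, and defer otherwise. The threshold value is an illustrative assumption.

```java
// Classification with a rejection option over calibrated probabilities.
public class RejectOption {
    public enum Decision { POSITIVE, NEGATIVE, REJECT }

    // p is the calibrated probability of the positive class.
    public static Decision decide(double p, double threshold) {
        if (p >= threshold) return Decision.POSITIVE;
        if (1 - p >= threshold) return Decision.NEGATIVE;
        return Decision.REJECT; // too uncertain: defer to a human or a fallback
    }

    public static void main(String[] args) {
        System.out.println(decide(0.97, 0.9)); // POSITIVE
        System.out.println(decide(0.55, 0.9)); // REJECT
    }
}
```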
Type of publication
Type of content
other academic/artistic (1645)
Author/editor
Björkman, Mats, prof ... (9)
Hansson, Hans, Profe ... (8)
Peng, Zebo, Professo ... (7)
Eles, Petru, Profess ... (7)
Fischer-Hübner, Simo ... (6)
Haridi, Seif, Profes ... (6)
Eriksson, Henrik, Pr ... (6)
Axelsson, Karin (6)
Liwicki, Marcus (5)
Yi, Wang, Professor (5)
Doherty, Patrick, Pr ... (5)
Lavesson, Niklas, Pr ... (4)
Grahn, Håkan, Profes ... (4)
Kassler, Andreas, 19 ... (4)
Afzal, Wasif (4)
Lindskog, Stefan (4)
Peng, Zebo (4)
Eles, Petru (4)
Sjödin, Mikael, Prof ... (4)
Nadjm-Tehrani, Simin (4)
Enlund, Nils (4)
Fritzson, Peter (4)
Doherty, Patrick (4)
Brunström, Anna, 196 ... (3)
Ekenberg, Love (3)
Unterkalmsteiner, Mi ... (3)
Andersson, Karl, 197 ... (3)
Boeva, Veselka, Prof ... (3)
Björkman, Mats (3)
Jacobsson, Andreas (3)
Lisper, Björn (3)
Kågström, Bo, Profes ... (3)
Anderberg, Peter (3)
Nolte, Thomas, Profe ... (3)
Lindskog, Stefan, 19 ... (3)
Fleyeh, Hasan (3)
Hurtig, Per, 1980- (3)
Zander, Jens, Profes ... (3)
Jönsson, Arne (3)
Fischer-Hübner, Simo ... (3)
Persson, Jan A. (3)
Alégroth, Emil, 1984 ... (3)
Elmroth, Erik (3)
Elmroth, Erik, Profe ... (3)
Johansson, Mikael (3)
Schelén, Olov (3)
Lundgren, Jan, Profe ... (3)
Johannesson, Paul, P ... (3)
Kruse, Björn (3)
Håkansson, Johan (3)
Higher education institution
Chalmers tekniska högskola (481)
Linköpings universitet (314)
Kungliga Tekniska Högskolan (130)
Blekinge Tekniska Högskola (108)
Uppsala universitet (106)
Mälardalens universitet (84)
Lunds universitet (73)
Luleå tekniska universitet (55)
Karlstads universitet (55)
Göteborgs universitet (45)
Umeå universitet (36)
Mittuniversitetet (31)
Linnéuniversitetet (30)
Örebro universitet (29)
Högskolan i Halmstad (28)
Stockholms universitet (26)
Högskolan i Skövde (24)
RISE (22)
Jönköping University (21)
Högskolan Dalarna (20)
Högskolan i Borås (7)
Malmö universitet (6)
Högskolan Väst (4)
Högskolan i Gävle (3)
Södertörns högskola (2)
Högskolan Kristianstad (1)
Konstfack (1)
Sveriges Lantbruksuniversitet (1)
VTI - Statens väg- och transportforskningsinstitut (1)
Language
English (1576)
Swedish (69)
Research subject (UKÄ/SCB)
Natural Sciences (1645)
Engineering and Technology (260)
Social Sciences (71)
Humanities (17)
Medical and Health Sciences (11)
Agricultural Sciences (2)

Year
