SwePub
Search the SwePub database

  Advanced search

Results list for search "L4X0:0345 7524 srt2:(2020-2024)"


  • Results 1-25 of 292
1.
  • Caporaletti, Francesca, 1990- (author)
  • MYC and MexR interactions with DNA : a Small Angle Scattering perspective
  • 2022
  • Doctoral thesis (other academic/artistic) abstract
    • Protein-DNA complexes govern transcription, that is, the cellular mechanism that converts the information stored in DNA into proteins. These complexes need to be highly dynamic to respond to the external factors that regulate their functions according to the cell's needs at any given time. Macromolecular X-ray crystallography is very useful for structural studies of large molecular assemblies, but its general application is limited by the difficulty of crystallising highly dynamic and transient complexes. Furthermore, crystal lattices constrain the macromolecular conformation and do not fully reveal the conformational ensemble adopted by protein-DNA complexes in solution. Small-Angle X-ray Scattering (SAXS) and Small-Angle Neutron Scattering (SANS) are two complementary techniques known jointly as Small-Angle Scattering (SAS). SAS is a powerful tool for analysing the shape of molecules in solution, and changes to that shape, in their native state. It complements high-resolution methods such as NMR and crystallography, particularly when conformational variability or disorder is present. With SANS, we can explore non-crystallisable protein-DNA complexes in solution without the restrictions of artificially symmetrised DNA or the limitations of a protein sequence. Neutrons are well-suited probes for studying protein-DNA complexes because they scatter differently from the common atoms in biomolecules and can thereby distinguish between hydrogen and deuterium. Together with varying the solvent deuterium ratio, this contrast-variation approach can reveal the shapes of distinct components within a macromolecular complex. The goal of this thesis is to explore uncharted territories of regulatory protein-DNA interactions by studying such complexes with SAS, with a specific focus on the flexibility of the complexes.
In my study of the MexR-DNA complex, I try to elucidate the molecular mechanism by which the MexR repressor regulates the expression of the MexAB-OprM efflux pump through DNA binding. This pump is one of the multidrug-resistance tools of the pathogen Pseudomonas aeruginosa. It can extrude antibacterial drugs from the bacterium, enabling it to survive in hostile environments. In the second project, I explore the MYC:MAX:DNA complex. This heterodimer assembly functions as a central hub in cellular growth control by regulating many biological functions, including proliferation, apoptosis, differentiation and transformation. Overexpression or deregulation of MYC is observed in up to 70% of aggressive human cancers, including prostate and breast cancers. By combining SAS with biophysical methods, the work presented in this thesis reveals novel information on the shape and dynamics of biomolecular assemblies critical to health and disease. This thesis comprises five chapters, each dealing with a different aspect of the work. The first chapter introduces the reader to the motivations of this research and gives a brief state of the art of the two projects. The second chapter provides the theoretical instruments needed to understand the methods used in this thesis: first an overview of proteins and their capability to bind other macromolecules, and then the basics of the small-angle technique, focusing on neutron contrast variation, the fundamental technique used throughout this thesis, and on ab initio modelling. In the third chapter, Methods, I discuss the SAS measurements and the requirements for the experiments themselves, the procedure for data reduction, and the data processing and analysis used to obtain the structural information. The fourth chapter is a summary of the results of the submitted papers and my contributions:
– Small-angle X-ray and neutron scattering of MexR and its complex with DNA supports a conformational selection binding model
– Resolving the DNA interaction of the MexR antibiotic resistance regulatory protein
– Upgraded D22 SEC-SANS set-up dedicated to the biology community
– SAS studies on the regulation of MYC303:MAX:DNA and MAX:MAX:DNA binding in cancer.
  •  
2.
  • Stenlund, Jörgen, 1959- (author)
  • Visualizing the abyss of time : Students’ interpretation of visualized deep evolutionary time
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • Deep evolutionary time (DET) involves immense time scales and is a threshold concept in biology; interpreting the temporal aspects of DET is demanding. DET is communicated through various visualizations, including static two-dimensional representations, low-interactivity animations, and high-interactivity interfaces. Given the importance of DET as fundamental scientific knowledge with potential societal application, there is a need for educational research on students' interpretation of visually communicated DET. This thesis explores students' interpretation of different forms of visualized DET along a continuum of interactivity. The research aim is four-fold, probing how students interpret DET visualizations in terms of temporal aspects, communicated evolutionary concepts, degree of visualization interactivity, and generated affective responses. The work comprises four studies which, as a collective, adopt exploratory and multi-method designs. A total of 505 students participated. Data were collected from questionnaires, task-based questions, and semi-structured interviews. Data analysis was qualitative and quantitative, and incorporated deductive and inductive approaches. In analysing students' interpretation of static two-dimensional DET visualizations, an instrument for measuring knowledge about the visual representation of deep evolutionary time (DET-Vis) was developed. The emergence of a unidimensional construct during validation represents knowledge about the visual communication of DET. Inspection of item performance suggests that interpreting visualized DET requires both procedural and declarative knowledge. Analysis of students' interpretation of a low-interactivity DET animation communicating hominin evolution revealed five temporal aspects influencing interpretation: events at specific times, relative order, concurrent events, time intervals, and time interval durations.
A further shift across the continuum involved analysing students' interpretation of a high-interactivity DET visualization of a three-dimensional phylogenetic tree, DeepTree. Finger-based zooming was interpreted either as movement within the tree itself or as movement in time, and these interpretations were related to identified misinterpretations. Further analysis showed that interpreting DeepTree evoked the epistemic affective responses of awe, curiosity, surprise, and confusion. Affective responses were expressed in relation to five evolutionary conceptual themes, namely biological relationships, evolutionary time, biological diversity, common descent, and biological structure or terminology. The thesis findings have implications for teaching, visualization design, and future research. Exposing students to various DET visualizations across the continuum could support DET teaching. Visual communication of temporal aspects should be carefully considered in DET visualization design. Future work on the relationships between affect, highly interactive visualizations, and evolution concepts will provide further insight for leveraging the learning and teaching of DET.
  •  
3.
  • Abedini, Fereshteh, 1989- (author)
  • 2D and 3D Halftoning for Appearance Reproduction
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • The appearance of an object in its surrounding environment is determined by its chromatic and geometric qualities, described by four optical parameters: color, gloss, translucency, and surface texture. Reproducing the appearance of objects is of great importance in many applications, including the creative industries, packaging, fine-art reproduction, medical simulation, and prosthesis-making. Printers are reproduction devices capable of replicating objects' appearance in 2D and 3D forms. With the introduction of new printing technologies, new inks and materials, and demands for innovative applications, creating an accurate reproduction of the desired visual appearance has become challenging. Thus, the appearance reproduction workflow requires improvements and adaptations. Accurate color reproduction is a critical quality measure in any printing process. However, printers are devices with a limited number of inks that can either print a dot or leave it blank at a specific position on a substrate; hence, to reproduce different colors, optimal placement of the available inks is needed. Halftoning is a technique that deals with this challenge by generating a spatial distribution of the available inks that creates the illusion of the target color when viewed from a sufficiently large distance. Halftoning is a fundamental part of the color reproduction task in any full-color printing pipeline, and it is an effective technique for increasing the potential of printing realistic and complex appearances. Although halftoning has been used in 2D printing for many decades, it still requires improvements in reproducing the fine details and structures of images. Moreover, the emergence of new technologies in 3D printing introduces a higher degree of freedom and more parameters to the field of appearance reproduction.
Therefore, there is a critical need for extensive studies that revisit existing halftoning algorithms and develop novel approaches to produce high-quality prints that match the target appearance faithfully. This thesis aims at developing halftoning algorithms to improve appearance reproduction in 2D and 3D printing. The contribution of this thesis in the 2D domain is a dynamic sharpness-enhancing halftoning approach, which adaptively varies the local sharpness of the halftone image based on the different textures in the original image for realistic appearance printing. The results show improvements in halftone quality in terms of sharpness, preserved structural similarity, and decreased color reproduction error. The main contribution of this thesis in 3D printing is extending a high-quality 2D halftoning algorithm to the 3D domain. The proposed method is then integrated with a multi-layer printing approach, where ink is deposited at variable depths to improve the reproduction of tones and fine details. The results demonstrate that the proposed method accurately reproduces the tones and details of the target appearance. Another contribution of this thesis is a study of the effect of halftoning on the perceived appearance of 3D-printed surfaces. According to the results, changing the dot placement based on the elevation variation of the underlying geometry can potentially control the perception of the 3D-printed appearance. This implies that the choice of halftone may help eliminate unwanted artifacts, enhance the object's geometric features, and produce a more accurate 3D appearance. The proposed methods in this thesis have been evaluated using different printing techniques.
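As an illustration of the dot-placement principle described above, a classic error-diffusion halftoning scheme (Floyd-Steinberg, a textbook algorithm, not one of the thesis's proposed methods) can be sketched as follows:

```python
import numpy as np

def floyd_steinberg(gray):
    """Classic error-diffusion halftoning: quantize each pixel to
    black (0.0) or white (1.0) and push the quantization error onto
    not-yet-processed neighbours, so the local average tone of the
    original image is preserved by the binary dot pattern."""
    img = gray.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0  # print a dot or leave blank
            img[y, x] = new
            err = old - new
            # Distribute the error with the standard 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img

# A flat 50% grey patch halftones to a roughly even mix of black and white dots.
patch = np.full((16, 16), 0.5)
halftone = floyd_steinberg(patch)
```

On a flat mid-grey patch the output is purely binary yet its average stays near 0.5, which is exactly the "illusion of the target color at a distance" that halftoning relies on.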
  •  
4.
  • Abrahamsson, Tobias, 1991- (author)
  • Synthetic Functionalities for Ion and Electron Conductive Polymers : Applications in Organic Electronics and Biological Interfaces
  • 2021
  • Doctoral thesis (other academic/artistic) abstract
    • In the search for ways to understand and communicate with all biological systems, in humans, animals, plants, and even microorganisms, we find a common language: all communicate via electrons, ions and molecules. Since the advent of organic electronics, the ability to bridge the gap and communicate between modern technology and biology has emerged. Organic chemistry provides us with tools for understanding and a material platform of polymer electronics for communication. Such insights give us not only the ability to observe fundamental phenomena but also to actively design and construct materials with chemical functionalities for better interfaces and applications. Organic electronic materials and devices have found their way into the field of medicine for diagnostic and therapeutic purposes, but also into water purification and the monumental task of creating the next generation of sustainable energy production and storage. Ultimately, it is safe to say that organic electronics will not replace our traditional technology based on inorganic materials; rather, the two fields can complement each other for various purposes and applications. Compared to conventional silicon-based technology, the production of carbon-based organic electronic polymer materials is very cheap, and devices can even be made flexible and soft, with great compatibility with biology. The main focus of this thesis has been developing and synthesizing new types of organic electronic and ionically conductive polymeric materials. Rational chemical design and modification of the materials have been used to introduce specific functionalities that facilitate ion- and electron-conductive charge transport for organic electronics and the implementation of the polymer materials at biological interfaces.
Multi-functional ionically conductive hyperbranched polyglycerol polyelectrolytes (dendrolytes) were developed, comprising both ionically charged groups and cross-linkable groups. The hyperbranched polyglycerol core structure provides a hydrophilic solvating platform for both ions and retained solvent molecules, while being biocompatible. Coupled with the peripheral charged ionic functionalities of the polymer, the dendrolyte materials are highly ionically conductive and selective towards cationic and anionic species, from single ions to large charged molecules, when implemented as ion-exchange membranes. Homogeneous ion-exchange membrane casting has been achieved through the cross-linkable functionalities in the dendrolytes, using robust click chemistry for efficient micro- and macrofabrication of ion-exchange membranes for organic electronic devices. The ion-exchange membrane material was implemented in electrophoretic drug-delivery devices (organic electronic ion pumps), which deliver ions and neurotransmitters with spatiotemporal resolution and can be used for therapeutic drug delivery at biological interfaces. The dendrolyte materials were also able to form free-standing membranes, making them suitable for fuel-cell and desalination applications. Trimeric conjugated thiophene pre-polymer structures were also developed and synthesized in this thesis, intended to form electrically conductive polymer structures in vivo and thereby create electrodes that ultimately connect with the central nervous system. The conjugated pre-polymers, being both water-soluble and enzymatically polymerizable, serve as a platform to realize this concept. In addition, modifying the trimeric structure with cross-linkable functionality enabled better interfaces and improved stability in biological environments.
  •  
5.
  • Abramian, David, 1992- (author)
  • Modern multimodal methods in brain MRI
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • Magnetic resonance imaging (MRI) is one of the pillars of modern medical imaging, providing a non-invasive means to generate 3D images of the body with high soft-tissue contrast. Furthermore, the possibilities afforded by the design of MRI sequences enable the signal to be sensitized to a multitude of physiological tissue properties, resulting in a wide variety of distinct MRI modalities for clinical and research use. This thesis presents a number of advanced brain MRI applications, which fulfill, to differing extents, two complementary aims. On the one hand, they explore the benefits of a multimodal approach to MRI, combining structural, functional and diffusion MRI, in a variety of contexts. On the other, they emphasize the use of advanced mathematical and computational tools in the analysis of MRI data, such as deep learning, Bayesian statistics, and graph signal processing. Paper I introduces an anatomically-adapted extension to previous work in Bayesian spatial priors for functional MRI data, where anatomical information is introduced from a T1-weighted image to compensate for the low anatomical contrast of functional MRI data. It has been observed that the spatial correlation structure of the BOLD signal in brain white matter follows the orientation of the underlying axonal fibers. Paper II examines the implications of this fact for the ideal shape of spatial filters for the analysis of white matter functional MRI data. By using axonal orientation information extracted from diffusion MRI, and leveraging the possibilities afforded by graph signal processing, a graph-based description of the white matter structure is introduced, which, in turn, enables the definition of spatial filters whose shape is adapted to the underlying axonal structure, and demonstrates the increased detection power resulting from their use.
One of the main clinical applications of functional MRI is functional localization of the eloquent areas of the brain prior to brain surgery. This practice is widespread for various invasive surgeries, but is less common for stereotactic radiosurgery (SRS), a non-invasive surgical procedure wherein tissue is ablated by concentrating several beams of high-energy radiation. Paper III describes an analysis and processing pipeline for functional MRI data that enables its use for functional localization and delineation of organs-at-risk for Elekta GammaKnife SRS procedures. Paper IV presents a deep learning model for super-resolution of diffusion MRI fiber ODFs, which outperforms standard interpolation methods in estimating local axonal fiber orientations in white matter. Finally, Paper V demonstrates that some popular methods for anonymizing facial data in structural MRI volumes can be partially reversed by applying generative deep learning models, highlighting one way in which the enormous power of deep learning models can potentially be put to use for harmful purposes. 
  •  
6.
  • Achieng, Pauline, 1990- (author)
  • Reconstruction of solutions of Cauchy problems for elliptic equations in bounded and unbounded domains using iterative regularization methods
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • Cauchy problems for elliptic equations arise in applications in science and engineering. These problems often involve recovering important information about an elliptic system from indirect or incomplete measurements. Cauchy problems for elliptic equations are known to be ill-posed in the sense that a small perturbation in the input can result in a large error in the output. Regularization methods are therefore usually required to find stable solutions. In this thesis we study the Cauchy problem for elliptic equations in both bounded and unbounded domains using iterative regularization methods. In Papers I and II, we focus on an iterative regularization technique that involves solving a sequence of well-posed mixed boundary value problems for the same elliptic equation. The original version of this alternating iterative technique alternates between Dirichlet-Neumann and Neumann-Dirichlet boundary value problems, and is known not to converge in general for the Helmholtz equation. We instead study a modified version that alternates between Dirichlet-Robin and Robin-Dirichlet boundary value problems. First, we study the Cauchy problem for general second-order elliptic equations with variable coefficients in a bounded domain. We then extend the analysis to unbounded domains for the Cauchy problem for the Helmholtz equation. For the Cauchy problem in the case of general elliptic equations, we show that the iterative method, based on Dirichlet-Robin iterations, is convergent provided that the parameters in the Robin condition are chosen appropriately.
In the case of an unbounded domain, we derive necessary and sufficient conditions for convergence of the Robin-Dirichlet iterations, based on an analysis of the spectrum of the Laplacian operator with Dirichlet and Robin boundary conditions. In the numerical tests, we investigate the precise behaviour of the Dirichlet-Robin iterations for different values of the wave number in the Helmholtz equation; the results show that the convergence rate depends on the choice of the Robin parameter. In the case of an unbounded domain, the numerical experiments show that an appropriate truncation of the domain and an appropriate choice of the Robin parameter lead to convergence of the Robin-Dirichlet iterations. In the presence of noise, additional regularization techniques have to be implemented for the alternating iterative procedure to converge. Therefore, in Papers III and IV we focus on iterative regularization methods for solving the Cauchy problem for the Helmholtz equation in a semi-infinite strip, assuming that the data contain measurement noise. In addition, we reconstruct a radiation condition at infinity from the given Cauchy data. For the reconstruction of the radiation condition, we solve a well-posed problem for the Helmholtz equation in a semi-infinite strip. The remaining solution is obtained by solving an ill-posed problem. In Paper III, we consider the ordinary Helmholtz equation and use separation of variables to analyze the problem. We show that the radiation condition is described by a non-linear well-posed problem that provides a stable oscillatory solution to the Cauchy problem. Furthermore, we show that the ill-posed problem can be regularized using Landweber's iterative method and the discrepancy principle. Numerical tests show that the approach works well. Paper IV extends the theory from Paper III to the case of variable coefficients. Theoretical analysis of this Cauchy problem shows that, with suitable bounds on the coefficients, iterative regularization methods can be used to stabilize the ill-posed Cauchy problem.
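The Landweber iteration with the discrepancy principle, mentioned above for regularizing the ill-posed part of the problem, can be sketched in a generic linear setting. This is a minimal toy illustration on a small matrix problem, assumed here purely for demonstration; the papers treat the Helmholtz equation, not this system:

```python
import numpy as np

def landweber(A, b, noise_level, tau=1.1, max_iter=10000):
    """Landweber iteration x_{k+1} = x_k + w * A^T (b - A x_k),
    stopped by Morozov's discrepancy principle: iterate while
    ||A x_k - b|| > tau * noise_level."""
    w = 1.0 / np.linalg.norm(A, 2) ** 2  # step size below 2 / ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tau * noise_level:
            break  # residual has reached the noise level: stop
        x = x + w * (A.T @ r)
    return x

# Mildly ill-conditioned toy problem with synthetic measurement noise.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.0], [0.0, 0.1]])
x_true = np.array([1.0, 1.0])
noise = 1e-4
b = A @ x_true + noise * rng.standard_normal(2)
x_rec = landweber(A, b, noise_level=noise * np.sqrt(2))
```

The discrepancy principle stops the iteration as soon as the residual falls to the noise level, which is what provides the regularizing effect; iterating further would start fitting the noise.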
  •  
7.
  • Adam, Rania Elhadi, 1978- (author)
  • Synthesis and Characterization of Some Nanostructured Materials for Visible Light-driven Photo Processes
  • 2020
  • Doctoral thesis (other academic/artistic) abstract
    • Nanostructured materials for visible-light-driven photo-processes, such as the photodegradation of organic pollutants and photoelectrochemical (PEC) water oxidation for hydrogen production, are very attractive because of their positive impact on the environment. Metal-oxide-based nanostructures are widely used in these photo-processes due to their unique properties, but a single nanostructured metal oxide may suffer from low efficiency and instability in aqueous solutions under visible light. These facts make it important to have an efficient and reliable nanocomposite for the photo-processes. Combining different nanomaterials into a composite configuration can produce a material with new properties. These new properties, which arise from the synergetic effect, are a combination of the properties of all the constituents of the nanocomposite. Zinc oxide (ZnO) has unique optical and electrical properties that allow it to be used in optoelectronics, sensors, solar cells, nanogenerators, and photocatalysis. Although ZnO absorbs some visible light from the sun due to its deep-level band, it mainly absorbs ultraviolet wavelengths, which constitute a small portion of the solar spectrum. ZnO also suffers from a high recombination rate of the photogenerated electrons. These problems might reduce its applicability to photo-processes. Therefore, our aim is to develop and investigate different nanocomposite materials based on ZnO nanostructures for the enhancement of photocatalysis processes using visible solar light as a green source of energy. Two photo-processes were used to examine the developed nanocomposites: (1) the photodegradation of organic dyes, and (2) PEC water splitting. In the first photo-process, we used ZnO nanoparticles (NPs), magnesium (Mg)-doped ZnO NPs, and a plasmonic ZnO/graphene-based nanocomposite for the decomposition of some organic dyes used in industry.
For the second photo-process, a ZnO photoelectrode combined with different silver-based semiconductors was used in PEC water-splitting experiments to enhance the performance of the ZnO photoelectrode. The characterization and photocatalysis experiments showed a remarkable enhancement in the photocatalytic efficiency of the synthesized nanocomposites. The observed improved properties of ZnO are due to synergetic effects caused by the addition of the other nanomaterials. Hence, the present thesis addresses the synthesis and characterization of some ZnO-based nanostructured composite materials that are promising candidates for visible-light-driven photo-processes.
  •  
8.
  • Ahmad, Azeem, 1984- (author)
  • Contributions to Improving Feedback and Trust in Automated Testing and Continuous Integration and Delivery
  • 2022
  • Doctoral thesis (other academic/artistic) abstract
    • An integrated release version (also known as a release candidate in software engineering) is produced by merging, building, and testing code on a regular basis as part of the Continuous Integration and Continuous Delivery (CI/CD) practices. Several benefits, including improved software quality and shorter release cycles, have been claimed for CI/CD. On the other hand, recent research has uncovered a plethora of problems and bad practices related to CI/CD adoption, necessitating some optimization. Some of the problems addressed in this work include the ability to respond to practitioners’ questions and obtain quick and trustworthy feedback in CI/CD. To be more specific, our effort concentrated on: 1) identifying the information needs of software practitioners engaged in CI/CD; 2) adopting test optimization approaches to obtain faster feedback that are realistic for use in CI/CD environments without introducing excessive technical requirements; 3) identifying perceived causes and automated root cause analysis of test flakiness, thereby providing developers with guidance on how to resolve test flakiness; and 4) identifying challenges in addressing information needs, providing faster and more trustworthy feedback. The findings of the research reported in this thesis are based on data from three single-case studies and three multiple-case studies. The research uses quantitative and qualitative data collected via interviews, site visits, and workshops. To perform our analyses, we used data from firms producing embedded software as well as open-source repositories. The following are major research and practical contributions. Information Needs: The initial contribution to research is a list of information needs in CI/CD. This list contains 27 frequently asked questions on continuous integration and continuous delivery by software practitioners. The identified information needs have been classified as related to testing, code & commit, confidence, bug, and artifacts. 
We investigated how companies deal with these information needs, what tools they use to deal with them, and who is interested in them. We concluded that there is a discrepancy between the identified needs and the techniques employed to meet them. Since some information needs cannot be met by current tools, manual inspections are required, which adds time to the process. Information about code & commit, confidence level, and testing is the most frequently sought and most important. Evaluation of Diversity-Based Techniques/Tool: The contribution is a detailed examination of diversity-based techniques using industry test cases to determine whether there is a difference between diversity functions in selecting integration-level automated tests, and how diversity-based testing compares to other optimization techniques used in industry in terms of fault detection rates, feature coverage, and execution time. This enables us to observe how coverage changes when we run fewer test cases. We concluded that some of the techniques can eliminate up to 85% of test cases (provided by the case company) while still covering all distinct features/requirements. The techniques are developed and made available as an open-source tool for further research and application. Test Flakiness Detection, Prediction & Automated Root Cause Analysis: We identified 19 factors that practitioners perceive to affect test flakiness. These perceived factors are divided into four categories: test code, system under test, CI/test infrastructure, and organizational. We concluded that some of the perceived factors of test flakiness in closed-source development are directly related to non-determinism, whereas other perceived factors concern different aspects, e.g., the lack of good properties in a test case (i.e., being small, simple, and robust), deviations from established processes, etc.
To see whether the developers' perceptions were in line with what they had labelled as flaky or not, we examined the test artifacts that were readily available. We verified that two of the identified perceived factors (i.e., test case size and simplicity) are indeed indicative of test flakiness. Furthermore, we proposed a lightweight technique named trace-back coverage to detect flaky tests. Trace-back coverage was combined with other factors, such as test smells indicating test flakiness, flakiness frequency, and test case size, to investigate the effect on revealing test flakiness. When all factors are taken into consideration, the precision of flaky test detection increases from 57% (using a single factor) to 86% (combining different factors).
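As a hypothetical sketch of the diversity-based test selection idea discussed above (the test names and feature-coverage sets are invented for illustration; this does not reproduce the thesis's open-source tool), a greedy farthest-first selection over Jaccard distances might look like:

```python
def jaccard_distance(a, b):
    """1 - |a ∩ b| / |a ∪ b| between two feature-coverage sets."""
    union = a | b
    return (1.0 - len(a & b) / len(union)) if union else 0.0

def select_diverse(tests, budget):
    """Greedy diversity-based selection (farthest-first traversal):
    repeatedly pick the test whose minimum Jaccard distance to the
    already-selected tests is largest, so near-duplicate tests are
    chosen last and a small budget still spreads over the features."""
    names = list(tests)
    selected = [names[0]]
    while len(selected) < min(budget, len(names)):
        best = max(
            (n for n in names if n not in selected),
            key=lambda n: min(jaccard_distance(tests[n], tests[s]) for s in selected),
        )
        selected.append(best)
    return selected

# Hypothetical feature-coverage sets per test case.
suite = {
    "t1": {"login", "db"},
    "t2": {"login", "db", "cache"},   # nearly duplicates t1
    "t3": {"payment", "api"},
    "t4": {"report"},
}
picked = select_diverse(suite, budget=3)
```

With a budget of three, the near-duplicate test is the one left out, which mirrors the observation above that a large fraction of test cases can be eliminated while still covering all distinct features.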
  •  
9.
  • Ait Ali, Abderrahman, 1991- (author)
  • Methods for Capacity Allocation in Deregulated Railway Markets
  • 2020
  • Doctoral thesis (other academic/artistic) abstract
    • Faced with increasing challenges, railways around Europe have recently undergone major reforms aiming to improve the efficiency and competitiveness of the railway sector. New market structures such as vertical separation, deregulation and open access can allow for reduced public expenditure, increased market competition, and more efficient railway systems. However, these structures have introduced new challenges for managing infrastructure and operations. Railway capacity allocation, previously performed internally within monopolistic national companies, is now entrusted to an infrastructure manager, who is responsible for the transparent and efficient allocation of available capacity among the different (often competing) licensed railway undertakings. This thesis aims to develop a number of methods that can help allocate capacity in a deregulated (vertically separated) railway market. It focuses on efficiency in terms of social welfare, and on transparency in terms of clarity and fairness. The work is concerned with the successive allocation of capacity for publicly controlled and commercial traffic within a segmented railway market. The contributions include cost-benefit analysis methods that allow public transport authorities to assess the social welfare of their traffic and create efficient schedules. The thesis also describes a market-based, transparent capacity allocation in which infrastructure managers price commercial train paths to resolve capacity conflicts with publicly controlled traffic. Additionally, solution methods are developed to help estimate passenger demand, which is a necessary input both for resolving conflicts and for creating efficient timetables. Future capacity allocation in deregulated markets may include solution methods from this thesis. However, further experimentation is still required to address concerns such as data, legislation and acceptability.
Moreover, future work can include prototyping and pilot projects based on the proposed solutions, and investigating legal and digitalisation strategies to facilitate their implementation.
  •  
10.
  • Akram Hassan, Kahin, 1990- (author)
  • It’s About Time : User-centered Evaluation of Visual Representations for Temporal Data
  • 2021
  • Doctoral thesis (other academic/artistic) abstract
    • The primary goal for collecting and analyzing temporal data differs between individuals and their domains of expertise; e.g., forecasting might be the goal in meteorology, while anomaly detection might be the goal in finance. While the goal differs, one common denominator is the need for exploratory analysis of the temporal data, as this can aid the search for useful information. However, as temporal data can be challenging to understand and visualize, selecting appropriate visual representations for the domain and data at hand becomes a challenge. Moreover, while many visual representations can show a single variable that changes over time, displaying multiple variables in a clear and easily accessible way is much harder; yet inference-making and pattern recognition often require visualization of multiple variables. Additionally, as the aim of visualization is to provide insight, it becomes crucial to investigate whether the representations used actually help users gain this insight. Furthermore, to create effective and efficient visual analysis tools, it is vital to understand the structure of the data, how this data can be represented, and the user needs. Developing useful visual representations can be challenging, but through close collaboration and involvement of end-users throughout the entire process, useful results can be accomplished. This thesis aims to investigate the usability of different visual representations for different types of multivariate temporal data, users, and tasks. Five user studies have been conducted to investigate different representation spaces, layouts, and interaction methods, and how well the representations support users in analyzing and exploring such temporal datasets. The first study investigated and evaluated the experience of different radial design ideas for finding and comparison tasks when presenting hourly data based on an analog clock metaphor. The second study investigated 2D and 3D parallel coordinates for pattern finding. In the third study, the usability of three linear visual representations for presenting indoor climate data was investigated with domain experts. The fourth study built on the third study and developed and evaluated a visual analytics tool with different visual representations and interaction techniques together with domain experts. Finally, in the fifth study, another visual analytics tool presenting visual representations of temporal data was developed and evaluated with domain experts working and conducting experiments in Antarctica. The research conducted within the scope of this thesis concludes that it is vital to understand the characteristics of the temporal data and the user needs in order to select suitable representations. Without this knowledge, it becomes much harder to choose visual representations that help users gain insight from the data. It is also crucial to evaluate the perception and usability of the chosen visual representations.
  •  
11.
  • Akram, Usman, 1984- (author)
  • Closing nutrient cycles
  • 2020
  • Doctoral thesis (other academic/artistic) abstract
    • Adequate and balanced crop nutrition – with nitrogen (N), phosphorus (P), and potassium (K) – is vital for sustainable crop production. Inadequate and imbalanced crop nutrition contributes to crop yield gaps – the difference between actual and potential crop yield. The yield gap is one of the many causes of insufficient food production, thus aggravating hunger and malnourishment across the globe. On the other hand, an oversupply of nutrients is highly unsustainable, in terms of both resource conservation and global environmental health. Decreasing excreta recycling in crop production is one of the many reasons for nutrient imbalances in agriculture. Previous studies show that increasing agricultural specialization leads to spatial separation of crop and animal production. Increasing distance between excreta production and crop needs is one of the leading factors that cause reduced excreta recycling. Studies focusing on excreta recycling show that a substantial barrier to more efficient excreta nutrient reuse is the expensive transportation of bulky volumes of excreta over long distances. In order to overcome that barrier, more detailed spatial estimates of distances between excreta production and crop nutrient needs, and the associated costs for complete excreta transport in an entire country, are needed. Hence, the overall aim of this thesis was to quantify the amount of nutrients in the excreta resources compared to the crop nutrient needs at multiple scales (global, national, subnational, and local), and to analyze the need for excreta transports, total distances and costs, to meet the crop nutrient needs in a country. On the global scale, annual (2000-2016) excreta supply (livestock and human) could provide at least 48% of N, 57% of P, and 81% of K crop needs. Although excreta supply was not enough to cover the annual crop nutrient needs at the global scale, at least 29 countries for N, 41 for P, and 71 for K had an excreta nutrient surplus. When including the annual use of synthetic fertilizers, at least 42 additional countries had an N surplus, with the equivalent figures being 17 additional countries for P and 8 for K. At the same time, when accounting for the use of synthetic fertilizers, each year at least 57 countries had an N deficit, 70 a P deficit, and 51 a K deficit, in total equivalent to 14% of global N and 16% of each of the P and K crop needs. The total surplus in other countries during the period was always higher than the deficit in the countries with net nutrient deficits, except for P in some years. Unfortunately, both the deficits of the deficit countries and the surpluses of the surplus countries increased substantially during the 17 years. Such global divergence in nutrient deficits and surpluses has clear implications for global food security and environmental health. A district-scale investigation of Pakistan showed that the country had a national deficit of 0.62 million tons of P and 0.59 million tons of K, but an oversupply of N. The spatial separation was not significant at this resolution; only 6% of the excreta N supply needed to be transported between districts. Recycling all excreta, within and between districts, could cut the use of synthetic N to 43% of its current use and eliminate the need for synthetic K, but there would be an additional need of 0.28 million tons of synthetic P to meet the crop nutrient needs in the entire country. The need for synthetic fertilizers to supplement the recycled excreta nutrients would cost USD 2.77 billion. However, it might not be prohibitively expensive to correct for P deficiencies because of the savings on the costs of synthetic N and K.
Excreta recycling could promote balanced crop nutrition at the national scale in Pakistan, which in turn could eliminate the nutrient-related crop yield gaps in the country. The municipal-scale investigation using Swedish data showed that the country had a national oversupply of 110,000 tons of N, 6,000 tons of P, and 76,000 tons of K. Excreta could provide up to 75% of N, 81% of P, and more than 100% of the K crop needs in the country. The spatial separation was pronounced at the municipal scale: just 40% of the municipalities produced over 50% of the excreta N and P. Nutrient balance calculations showed that excreta recycling within municipalities could provide 63% of the P crop needs. Another 18% of the P crop needs would have to be transported from surplus municipalities to deficit municipalities. Nationally, an optimized reallocation of surplus excreta P towards the P-deficit municipalities would cost USD 192 million for a total of 24,079 km of truck transport. The cost was 3.7 times the total NPK fertilizer value transported, which met the crop nutrient needs. It was concluded that Sweden could potentially reduce its dependence on synthetic fertilizers, but covering the costs of improved excreta reuse would require valuing the additional benefits of recycling. An investigation was also done to understand the effect of the input data resolution on the results (transport needs and distances) from a model to optimize excreta redistribution. The results showed that the transport needs, distances, and spatial patterns of the excreta transports all changed. Increasing the resolution of the spatial data, from political boundaries in Sweden and Pakistan to 0.083-degree grids (approximately 10 km by 10 km at the equator), showed that the transport needs for excreta-N increased by 12% in Pakistan, and the transport needs for excreta-P increased by 14% in Sweden. The effect of the increased resolution on the transport analysis was inconsistent in terms of total excreta nutrient transportation distance: the average distance decreased by 67% (to 44 km) in Pakistan but increased by 1 km in Sweden. A further increase in the data resolution to 5 km by 5 km grids for Sweden showed that the average transportation distance decreased by 9 km. In both countries, increasing the input data resolution resulted in more favorable cost-to-fertilizer-value ratios. In Pakistan, the cost of transport was only 13% of the NPK fertilizer value transported at the higher resolution. In Sweden, the costs decreased from 3.7 times (at the political resolution) to slightly more than three times the fertilizer value transported in excreta at the higher data resolution. This Ph.D. thesis shows that we could potentially reduce the total use of synthetic fertilizers in the world and still reduce the yield gaps if we can create more efficient recycling of nutrients both within and between countries, and a more demand-adapted use of synthetic fertilizers.
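The optimized reallocation of surplus excreta nutrients to deficit regions described above has the shape of a classic transportation linear program. A minimal sketch of that formulation, using `scipy.optimize.linprog`, is given below; all region names, quantities and costs are hypothetical illustration values, not data from the thesis.

```python
# Transportation-LP sketch: move surplus phosphorus (tonnes) from surplus
# regions to deficit regions at minimum total transport cost.
# All figures are hypothetical illustration values, not thesis data.
import numpy as np
from scipy.optimize import linprog

surplus = np.array([120.0, 80.0])        # t P available in regions A, B
deficit = np.array([70.0, 60.0, 50.0])   # t P needed in regions C, D, E
# cost[i, j]: USD per tonne transported from surplus region i to deficit region j
cost = np.array([[15.0, 40.0, 60.0],
                 [55.0, 20.0, 25.0]])

n_s, n_d = cost.shape
c = cost.ravel()  # decision variables x[i, j], flattened row-major

# Supply constraints: sum_j x[i, j] <= surplus[i]
A_ub = np.zeros((n_s, n_s * n_d))
for i in range(n_s):
    A_ub[i, i * n_d:(i + 1) * n_d] = 1.0

# Demand constraints: sum_i x[i, j] == deficit[j]
A_eq = np.zeros((n_d, n_s * n_d))
for j in range(n_d):
    A_eq[j, j::n_d] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=surplus, A_eq=A_eq, b_eq=deficit,
              bounds=(0, None), method="highs")
flows = res.x.reshape(n_s, n_d)
print(f"total cost: USD {res.fun:.0f}")
```

The thesis-scale problem differs mainly in size (thousands of grid cells instead of two or three regions) and in using road distances rather than a fixed cost matrix, but the structure of the optimization is the same.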
  •  
12.
  • Al-Shujary, Ahmed, 1980- (author)
  • Kähler-Poisson Algebras
  • 2020
  • Doctoral thesis (other academic/artistic) abstract
    • In this thesis, we introduce Kähler-Poisson algebras and study their basic properties. The motivation comes from differential geometry, where one can show that the Riemannian geometry of an almost Kähler manifold can be formulated in terms of the Poisson algebra of smooth functions on the manifold. It turns out that one can identify an algebraic condition in the Poisson algebra (together with a metric) implying that most geometric objects can be given a purely algebraic formulation. This leads to the definition of a Kähler-Poisson algebra, which consists of a Poisson algebra and a metric fulfilling an algebraic condition. We show that every Kähler-Poisson algebra admits a unique Levi-Civita connection on its module of inner derivations and, furthermore, that the corresponding curvature operator has all the classical symmetries. Moreover, we present a construction procedure which allows one to associate a Kähler-Poisson algebra to a large class of Poisson algebras. From a more algebraic perspective, we introduce basic notions, such as morphisms and subalgebras, as well as direct sums and tensor products. Finally, we initiate a study of the moduli space of Kähler-Poisson algebras; i.e., for a given Poisson algebra, one considers classes of metrics giving rise to non-isomorphic Kähler-Poisson algebras. As it turns out, even the simple case of a Poisson algebra generated by two variables gives rise to a nontrivial classification problem.
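For readers unfamiliar with the underlying structure, the Poisson algebra that the thesis equips with a metric satisfies the standard textbook axioms (these are the general definitions, not notation taken from the thesis itself): a commutative algebra $\mathcal{A}$ with a bilinear bracket $\{\cdot,\cdot\}$ such that, for all $a, b, c \in \mathcal{A}$,

```latex
\begin{align*}
  \{a, b\} &= -\{b, a\} && \text{(antisymmetry)} \\
  \{a, \{b, c\}\} + \{b, \{c, a\}\} + \{c, \{a, b\}\} &= 0 && \text{(Jacobi identity)} \\
  \{a, b c\} &= \{a, b\}\, c + b\, \{a, c\} && \text{(Leibniz rule)}
\end{align*}
```

A Kähler-Poisson algebra then adds a metric subject to the algebraic condition described in the abstract.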
  •  
13.
  • Alarcón, Alvaro, 1991- (author)
  • All-Fiber System for Photonic States Carrying Orbital Angular Momentum : A Platform for Classical and Quantum Information Processing
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • The protection of confidential data is a fundamental need in the society in which we live. This task becomes even more relevant when observing that data traffic increases exponentially every day, as does the number of attacks on the telecommunication infrastructure. From the natural sciences, it has been strongly argued that quantum communication has great potential to solve this problem, to such an extent that various governmental and industrial entities believe the protection provided by quantum communications will be an important layer in the field of information security in the coming decades. However, integrating quantum technologies both into current optical networks and into industrial systems is not a trivial task, considering that a large part of current quantum optical systems is based on bulk optical devices, which could become an important limitation. Throughout this thesis we present an all-in-fiber optical platform that enables a wide range of tasks aiming to take a step forward in the generation and detection of photonic states. Among the main features, the generation and detection of photonic quantum states carrying orbital angular momentum stand out. The platform can also be configured for the generation of random numbers from quantum mechanical measurements, a central aspect of future information tasks. Our scheme is based on the use of new space-division-multiplexing (SDM) technologies such as few-mode fibers and photonic lanterns. Furthermore, our platform can be scaled to high dimensions, it operates at 1550 nm (the telecommunications band), and all the components used for its implementation are commercially available. The results presented in this thesis can be a solid alternative to guarantee the compatibility of new SDM technologies in emerging experiments on optical networks, and open up new possibilities for quantum communication.
  •  
14.
  • Andersson, Angelica, 1990- (author)
  • Modelling long-distance travel demand by combining mobile phone and survey data
  • 2024
  • Doctoral thesis (other academic/artistic) abstract
    • Forecasts of the demand for long-distance travel are a key component enabling the calculation of social costs and benefits of policy actions such as infrastructure investments. Traditionally, such forecasting models have been based on travel survey data. However, response rates to travel surveys have been in decline for decades, calling into question whether the sample of respondents is really representative of the full population. As such, there is a need to explore alternative data sources. One promising alternative is mobile phone network data, which is collected without the need for active participation from the traveller. However, the mobile phone network data in this thesis lacks trip- and traveller-specific information such as trip purpose, socio-economic information, travel party size and mode. Furthermore, it is difficult to distinguish between bus and car trips even at a later stage of data processing, as the two modes share the same infrastructure. The objective of this thesis is to investigate the use of mobile phone network data for long-distance mode choice modelling. More specifically, in the first research paper of this thesis we investigate the specific aspects of mobile phone network data as a source of mode choice information, how uncertainties connected to the identification of the mode used matter, and how they can be handled in the model. In the second research paper, a full-scale Multinomial Logit mode choice model is implemented and evaluated, including the development of methods for handling data-specific challenges, such as the lack of distinction between bus and car trips and the lack of trip purpose information. Once this full-scale mode choice model based only on mobile phone network data has been evaluated, a method for combining mobile phone network data with survey data is proposed in the third research paper, and the joint model is compared to the mobile phone network data model in terms of behavioural credibility. Finally, in the fourth research paper, it is investigated whether machine learning can be useful in modelling mode choices using the two data sources. From the results of the papers included in this thesis, it is clear that it is possible to model mode choice based only on mobile phone network data, but that it is preferable to combine mobile phone network data with survey data rather than to use either data source separately. Either Multinomial Logit (MNL) models or Artificial Neural Networks (ANNs) can be used to model mode choices based on the two data sources. However, if an ANN is selected for mode choice modelling, it is advisable to formulate the network based on the mode-choice-specific principles developed in the last paper of this thesis.
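The Multinomial Logit model mentioned above turns alternative-specific utilities into choice probabilities via a softmax. A minimal sketch follows; the utility specification, coefficient values and mode set are hypothetical illustrations, not the estimated model from the thesis.

```python
# MNL mode choice sketch: choice probability of each mode is
# exp(V_mode) / sum_over_modes exp(V).  All coefficients are illustrative.
import math

def mnl_probabilities(utilities):
    """Softmax over alternative utilities -> choice probabilities."""
    m = max(utilities.values())  # subtract max for numerical stability
    exp_u = {mode: math.exp(u - m) for mode, u in utilities.items()}
    total = sum(exp_u.values())
    return {mode: e / total for mode, e in exp_u.items()}

# Hypothetical linear-in-parameters utilities: V = asc + b_time*time + b_cost*cost
b_time, b_cost = -0.01, -0.004           # per minute, per SEK (illustrative)
trip = {                                  # (in-vehicle time [min], cost [SEK])
    "train": (180, 400),
    "car":   (240, 550),
    "air":   (90, 900),
}
asc = {"train": 0.0, "car": 0.3, "air": -0.5}  # alternative-specific constants

V = {mode: asc[mode] + b_time * t + b_cost * c for mode, (t, c) in trip.items()}
P = mnl_probabilities(V)
for mode, p in sorted(P.items(), key=lambda kv: -kv[1]):
    print(f"{mode}: {p:.3f}")
```

In an estimated model the coefficients would be fitted by maximum likelihood against observed choices; here they are fixed only to show the probability computation.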
  •  
15.
  • Andersson, Håkan, 1970- (author)
  • A Co-Simulation Tool Applied to Hydraulic Percussion Units
  • 2022
  • Doctoral thesis (other academic/artistic) abstract
    • In this dissertation, a co-simulation tool is presented that provides a comprehensive environment for modelling and simulation of hydraulic percussion units, which are used in hydraulic hammers and rock drills. These units generate the large impact forces needed to demolish concrete structures in the construction industry or to fragment rock when drilling blast holes in mine drifting. This type of machinery is driven by fluid power and therefore depends on coupled fluid-structure mechanisms for its operation. The tool consists of a 1D fluid system model, a 3D structural mechanics model and an interface to establish the fluid-structure couplings, and has in this work been applied to a hydraulic hammer. This approach enables virtual prototyping during product development, with an ambition to reduce the need for testing of physical prototypes, but also to facilitate more detailed studies of internal mechanisms. The tool has been implemented on top of two well-known simulation tools, and a co-simulation interface to enable communication between them has been developed. The fluid system is simulated using the Hopsan simulation tool and the structural parts are simulated using the FE-simulation software LS-DYNA. The implementation of the co-simulation interface is based on the Functional Mock-up Interface standard in Hopsan and on the User Defined Feature module in LS-DYNA. The basic functions of the tool were first verified for a simple but relevant model comprising co-simulation of one component, after which co-simulation of two components was verified. These models were based on rigid body and linear elastic representations of the structural components. Further, the tool was experimentally validated using an existing hydraulic hammer product, where the responses from the experiments were compared to the corresponding simulated responses. To investigate the effects of a parameter change, the hammer was operated and simulated at four different running conditions. Dynamic simulation of the sealing gap, which is a fundamental mechanism used for controlling the percussive motion, was implemented to further enhance the simulated responses of the percussion unit. This implementation is based on a parametrisation of the deformed FE-model, where the gap height and the eccentric position are estimated from the deformed geometry in the sealing gap region, and the parameters are then sent to the fluid simulation for a more accurate calculation of the leakage flow. Wear in percussion units is an undesirable type of damage, which may cause significant reduction in performance or complete breakdown, and today there is no methodology available to evaluate such damage on virtual prototypes. A method to study wear was developed using the co-simulation tool to simulate the fundamental behaviour of the percussion unit, and the wear routines in LS-DYNA were utilised for the calculation of wear.
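The coupling idea underlying such a tool can be sketched as a fixed-step co-simulation loop in which a fluid model and a structural model exchange coupling variables every macro step. The two models below are toy stand-ins (a constant-pressure chamber driving a damped piston), not Hopsan or LS-DYNA; every numerical value is a hypothetical illustration.

```python
# Fixed-step co-simulation loop sketch: the "fluid" side hands a force
# to the "structure" side, which hands back position and velocity.
def fluid_step(x, v, dt):
    """Toy fluid model: constant chamber pressure acting on the piston face."""
    p_supply = 10e6           # Pa, supply pressure (illustrative)
    area = 1e-3               # m^2, piston face area
    return p_supply * area    # N, force handed to the structural solver

def structure_step(x, v, force, dt):
    """Toy structural model: explicit step of a damped piston."""
    m, damping = 5.0, 200.0   # kg, N*s/m (illustrative)
    a = (force - damping * v) / m
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

x, v = 0.0, 0.0
dt = 1e-5                     # macro time step for data exchange
for _ in range(1000):         # 10 ms of simulated time
    force = fluid_step(x, v, dt)            # fluid -> structure coupling
    x, v = structure_step(x, v, force, dt)  # structure -> fluid coupling
print(f"piston position after 10 ms: {x*1000:.2f} mm, velocity: {v:.2f} m/s")
```

In the real tool, each `*_step` call would be a solver step behind the Functional Mock-up Interface, and the exchanged quantities (forces, displacements, gap geometry) would be far richer than in this sketch.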
  •  
16.
  • Andersson, Jonathan, 1992- (author)
  • Bifurcations and Exchange of Stability with Density Dependence in a Coinfection Model and an Age-structured Population Model
  • 2022
  • Doctoral thesis (other academic/artistic) abstract
    • In nature, many pathogens, and in particular strains of pathogens with negative effects on species, coexist. For simplicity, this is often ignored in epidemiological models. It is nevertheless of interest to gain a deeper understanding of how this coexistence affects the dynamics of the disease. There are several ways in which coexistence can influence the dynamics. Coinfection, which is the simultaneous infection of two or more pathogens, can cause increased detrimental health effects on the host. Pathogens can also limit each other's growth through cross immunity as well as by promoting isolation. Conversely, one pathogen can also aid another by making the host more vulnerable to disease, as well as more inclined to spread it. The spread of disease depends on the density of the population. Whether a pathogen is able to spread or not is strongly correlated with how often individuals interact with each other, which in turn depends on how many individuals live in a given area. The aim of papers I-III is to provide an understanding of how different factors, including the carrying capacity of the host population, affect the dynamics of two coexisting diseases. In papers I-III we investigate how the parameters affect the long-term solution in the form of a stable equilibrium point. In particular, we want to provide an understanding of how changes in the carrying capacity affect the long-term existence of each disease as well as the occurrence of coinfection. The model studied in papers I-III is a generalization of the standard susceptible, infected, recovered (SIR) compartmental model. The SIR model is generalized by the introduction of a second infected compartment as well as a coinfection compartment. We also use a logistic growth term à la Verhulst with associated carrying capacity K. In papers I and II we make the simplifying assumption that a coinfected individual has to, if anything, transmit both of the diseases and not just one of them. This restriction is relaxed in paper III. In all papers I-III, however, we restrict ourselves by letting all transmission rates that involve scenarios where the newly infected individual does not move to the same compartment as the infector be small. By small we mean that the results at least hold when the relevant parameters are small enough. In all papers I-III it turns out that for each set of parameters excluding K there exists a unique branch of mostly stable equilibrium points depending continuously on K. We differentiate the equilibrium points of the branch by which compartments are non-zero, which we refer to as the type of the equilibrium. The way the equilibrium point changes its type with K is made clear with the use of transition diagrams together with graphs of the stable susceptible population over K. In paper IV we consider a model for a single age-structured population à la McKendrick-von Foerster, with the addition of differing density dependence in the birth and death rates. Each vital rate is a function of age as well as a weighting of the population, also referred to as a size. The size influencing the birth rate and the size influencing the death rate can be weighted differently, allowing different age groups to influence the birth and death rates in different proportions. It is commonly assumed that an increase in population density is detrimental to the survival of each individual. However, for various reasons, it is known that for some species survival is positively correlated with population density when the population is small. This is called the Allee effect, and our model includes this scenario. It is shown that the trivial equilibrium, which signifies extinction, is locally stable if the basic reproductive rate $R_0$ is less than 1. This implies global stability, with certain extinction, if no Allee effect is present. However, if the Allee effect is present, we show that the population can persist even if $R_0 < 1$.
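A schematic version of the compartmental structure described above (susceptibles with logistic growth, two singly infected classes, and a coinfected class) can be integrated numerically. The rate values and the simplified transmission structure below are illustrative only; the papers' models contain additional cross-infection terms and different parameterizations.

```python
# Schematic SIR-type coinfection model: S with logistic growth toward
# carrying capacity K, two infected classes I1, I2, a coinfected class
# I12, and a recovered class R.  All rates are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

r, K = 1.0, 1000.0                    # logistic growth rate, carrying capacity
b1, b2, b12 = 0.002, 0.0015, 0.001    # transmission rates (illustrative)
g1, g2, g12 = 0.3, 0.25, 0.2          # recovery rates
mu = 0.05                             # background death rate

def rhs(t, y):
    S, I1, I2, I12, R = y
    N = S + I1 + I2 + I12 + R
    dS = r * S * (1 - N / K) - (b1 * I1 + b2 * I2 + b12 * I12) * S
    dI1 = b1 * I1 * S - (g1 + mu) * I1
    dI2 = b2 * I2 * S - (g2 + mu) * I2
    dI12 = b12 * I12 * S - (g12 + mu) * I12
    dR = g1 * I1 + g2 * I2 + g12 * I12 - mu * R
    return [dS, dI1, dI2, dI12, dR]

sol = solve_ivp(rhs, (0, 200), [900, 5, 5, 1, 0], rtol=1e-8)
S, I1, I2, I12, R = sol.y[:, -1]
print(f"state at t=200: S={S:.1f}, I1={I1:.1f}, I2={I2:.1f}, "
      f"I12={I12:.1f}, R={R:.1f}")
```

Sweeping K in such a simulation is the numerical counterpart of the transition diagrams in the papers: as K grows, the stable equilibrium changes type as successive compartments become non-zero.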
  •  
17.
  • Andrei, Mariana, 1981- (author)
  • The role of industrial energy management in the transition toward sustainable energy systems : Exploring practices, knowledge dynamics and policy evaluation
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • Mitigating climate change represents one of the most pressing challenges of our time. The EU has set the goal of reaching climate neutrality by 2050. The transition of manufacturing organizations is essential for reaching the EU's goal, since industry accounts for circa 25% of the total final energy use and about one-fifth of the EU's GHG emissions. Energy efficiency stands as one of the essential pillars of industrial decarbonization, with energy management playing a pivotal role in reaching its full potential. To remain competitive in the long term and align with the EU's carbon neutrality goal for 2050, the manufacturing industry must enhance energy efficiency in a cost-effective way. Manufacturing companies are exploring new ways of working with energy management in order to meet the requirements for both the radical and incremental innovations needed to achieve the climate neutrality goal. However, due to the high complexity of industrial energy systems and their high diversity among sectors, improving energy efficiency is a difficult task. Knowledge, especially extensive knowledge, is a key factor for adopting innovations in energy efficiency and industrial processes. The aim of this thesis is to explore the role of industrial energy management in the transition toward sustainable energy systems using an extended system approach. Employing top-down and bottom-up approaches, this thesis specifically focuses on three key aspects: industrial energy management practices, knowledge dynamics in industrial energy management, and policy evaluation. These aspects have been studied by means of mixed methods, such as literature reviews, interviews, a case study with an action research approach, a survey, and evaluations. This thesis advocates that energy management practices (EnMPs) include activities beyond energy efficiency improvements. Specifically, they incorporate activities related to the decarbonization of industrial processes, including, at the very least, energy supply (own and purchased) and fuel conversion. The results show that internal EnMPs revolve around a focus on technologies, processes, and leadership, for which knowledge creation is an ongoing and evolving process. EnMPs encompass a comprehensive set of strategies and actions undertaken by manufacturing organizations to enhance energy efficiency, reduce greenhouse gas emissions, and navigate the transition towards sustainable energy systems. Such practices consist of the following components: energy conservation, energy efficiency, process innovation, energy supply and compensation measures. Furthermore, this thesis has shown that external EnMPs are connected to participation in energy policy programs and voluntary initiatives, which is a common practice in energy management work. Organizations often employ a combination of these strategies to achieve climate neutrality and align with environmental sustainability goals. Successful implementation of EnMPs is contingent upon deep process knowledge, especially in the case of radical process innovations, which necessitate a thorough understanding of interdependencies and interconnected processes. Collaboration with external sources of knowledge, including universities and stakeholders, is essential to drive innovation and adapt to evolving energy systems. Leadership plays a vital role in navigating these complexities and ensuring a strategic approach to EnMP implementation. This thesis contributes to the field of research on energy management in several ways: i. reviewing the role of energy management in the current context of the transition toward sustainable energy systems, ii. advancing theoretical and practical understanding of energy management in manufacturing organizations, iii. strengthening the knowledge-creation perspective within energy management practices to support the adoption of both energy efficiency and process innovation, and iv. advancing theoretical understanding of the knowledge-creation process for energy management through the development of a knowledge-based framework.
  •  
18.
  • Arbring Sjöström, Theresia, 1987- (author)
  • Organic Bioelectronics for Neurotransmitter Release at the Speed of Life
  • 2020
  • Doctoral thesis (other academic/artistic) abstract
    • The signaling dynamics in neuronal networks include processes ranging from lifelong neuromodulation to direct synaptic neurotransmission. In chemical synapses, the time it takes to pass a signal from one neuron to the next is less than a millisecond. At the post-synaptic neuron, further signaling is either up- or down-regulated, depending on the specific neurotransmitter and receptor. While this up- and down-regulation of signals usually runs perfectly well and enables complex performance, even a minor dysfunction of this signaling system can cause major complications in the form of neurological disorders. The field of organic bioelectronics has the ability to interface neurons with high spatiotemporal recording and stimulation techniques. Local chemical stimulation, i.e. local release of neurotransmitters, makes it possible to artificially alter the chemical environment in dysfunctional signaling pathways to regain or restore neural function. To successfully interface the biological nervous system with electronics, a range of demands must be met. Organic bioelectronic techniques and materials are capable of meeting these demands on the biological as well as the electronic side of the interface. The demands span from high-performance biocompatible materials, to miniaturized and specific device architectures, and high dose control on demand within milliseconds. The content of this thesis is a continuation of the development of organic bioelectronic devices for neurotransmitter delivery. Organic materials are utilized to electrically control the dose of charged neurotransmitters by translating electric charge into controlled artificial release. The first part of the thesis, Papers 1 and 2, includes further development of the resistor-type release device called the organic electronic ion pump. This part includes material evaluation, microfluidic incorporation, and device design considerations. The aim of the second part of this thesis, Papers 3 and 4, is to enhance temporal performance, i.e. to reduce the delay between the electrical signal and neurotransmitter delivery to the corresponding delay in biological neural signaling, while retaining tight dosage control. Diffusion of neurotransmitters between nerve cells is a slow process, but since it is restricted to short distances, the total time delay is short. In our organic bioelectronic devices, several orders of magnitude in speed can be gained by switching from lateral to vertical delivery geometries. This is realized by two different types of vertical diodes combined with a lateral preload-and-waste configuration. The vertical diode assembly was further expanded with a control electrode that enables individual addressing of each of several combined release sites. These integrated circuits allow for release of neurotransmitters with high on/off release ratios, approaching delivery times on par with biological neurotransmission.
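The "translating electric charge into controlled artificial release" mentioned above can be made concrete with a back-of-the-envelope calculation: in an idealized electrophoretic delivery device, the amount of a singly charged ion released is set by the driven charge via Faraday's law. The 100% delivery efficiency assumed below is an idealization that real devices only approach, and the current and pulse length are illustrative values.

```python
# Idealized charge-to-dose translation for electrophoretic delivery:
# n = Q / (z * F), with Q = I * t, for a monovalent ion.
F = 96485.332          # C/mol, Faraday constant
z = 1                  # charge number of the delivered ion

def moles_delivered(current_A, time_s, efficiency=1.0):
    """Moles of monovalent ion released for a given current pulse."""
    charge = current_A * time_s           # Q = I * t  [C]
    return efficiency * charge / (z * F)  # n = Q / (zF)  [mol]

# A 100 nA pulse for 1 s delivers about 1 pmol:
n = moles_delivered(100e-9, 1.0)
print(f"{n * 1e12:.2f} pmol")
```

This relation is why the device's on/off ratio and switching speed matter so much: the delivered dose tracks the integrated current, so fast, clean current control translates directly into tight dosage control.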
  •  
19.
  • Ardi, Shanai, 1977- (author)
  • Vulnerability and Risk Analysis Methods and Application in Large Scale Development of Secure Systems
  • 2021
  • Doctoral thesis (other academic/artistic) abstract
    • Since software products are heavily used in today's connected society, designing and implementing such software products to make them resilient to security threats becomes crucial. This thesis addresses some of the challenges faced by software vendors when developing secure software. The approach is to reduce the risk of introducing security weaknesses into software products by providing solutions that support software developers during the software lifecycle. Software developers are usually not security experts. However, there are methods and tools, such as the ones introduced in this thesis, that can help developers build more secure software. The research is performed with a design science approach, where the risk-reducing method is the artifact that is iteratively developed. Chronologically, the research is divided into two parts. The first part provides security models as a means of developing a detailed understanding of the extent of potential security issues and their respective security mitigation activities. The purpose is to lower the risk of introducing vulnerabilities into the software during its lifecycle. This is facilitated by the Sustainable Software Security Process (S3P), a structured and generally applicable process aimed at minimizing the effort of using security models during all phases of the software development process. S3P achieves this in three steps. The first step uses a semi-formal modeling approach and identifies causes of known vulnerabilities in terms of defects and weaknesses in development activities that may introduce the vulnerability in the code. The second step identifies measures that, if in place, would address the causes and eliminate the underlying vulnerability, and supports selection of the most suitable measures.
The final step ensures that the selected measures are adopted into the development process to reduce the risk of similar vulnerabilities in the future. Collaborative tools can be used in this process to ensure that software developers who are not security experts benefit from applying the S3P process and its components. For this thesis, proof-of-concept versions of collaboration tools were developed to support the three steps of the S3P. We present the results of our empirical evaluations of all three steps of S3P, using methods such as surveys, case studies, and expert opinion, to verify that the method is fully understandable and easy to perform and is perceived by developers to provide value for software security. The last contribution of the first part of the research deals with improving product security during requirements engineering by integrating parts of S3P into the Common Criteria (CC), thereby improving the accuracy of CC through systematically identifying security objectives and proposing solutions to meet those objectives using S3P. Review and validation by an industrial partner leading in the CC area demonstrate the improved accuracy of CC. Based on the findings in the first part of the research, the second part focuses on the early phases of software development and on vulnerability causes originating from requirements engineering. We study the challenges associated with introducing a specific security activity, Security Risk Assessment (SRA), into the requirements engineering process in a large-scale software development context. Specific attention is given to the possibility of bridging the gap between developers and security experts when using SRA, and to the pros and cons of organizing personnel working with SRA in a centralized, distributed, or semi-distributed unit.
As the journey of changing the way of working in a large corporation takes time and involves many factors, it was natural to perform a longitudinal case study, all the way from pilot studies to full-scale, regular use. The results of the case study make clear that the introduction of a specific security activity into the development process must evolve over time in order to achieve the desired results. The present design of the SRA method shows that it is worthwhile to work with risk assessment in the requirements phase with all types of requirements, even at a low level of abstraction. The method aligns well with a decentralized, agile development method with many teams working on the same product. During the study, we observed an increase in security awareness among the developers in the subject company. However, we also observed that the involvement of security experts, to ensure acceptable quality of the risk assessment and to identify all risks, cannot be totally eliminated.
  •  
20.
  • Arnström, Daniel, 1994- (author)
  • Real-Time Certified MPC : Reliable Active-Set QP Solvers
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • In Model Predictive Control (MPC), optimization problems are solved recurrently to produce control actions. When MPC is used in real time to control safety-critical systems, it is important to solve these optimization problems with guarantees on the worst-case execution time. In this thesis, we take aim at such worst-case guarantees through two complementary approaches: (i) by developing methods that determine exact worst-case bounds on the computational complexity and execution time of deployed optimization solvers; (ii) by developing efficient optimization solvers that are tailored to the given application and the hardware at hand. We focus on linear MPC, which means that the optimization problems in question are quadratic programs (QPs) that depend on parameters such as system states and reference signals. For solving such QPs, we consider active-set methods: a popular class of optimization algorithms used in real-time applications. The first part of the thesis concerns complexity certification of well-established active-set methods. First, we propose a certification framework that determines the sequence of subproblems that a class of active-set algorithms needs to solve, for every possible QP instance that might arise from a given linear MPC problem (i.e., for every possible state and reference signal). By knowing these sequences, one can exactly bound the number of iterations and/or floating-point operations required to compute a solution. In a second contribution, we use this framework to determine the exact worst-case execution time (WCET) for linear MPC. This requires factors such as hardware and software implementation/compilation to be accounted for in the analysis. The framework is further extended in a third contribution by accounting for internal numerical errors in the solver that is certified.
In a similar vein, a fourth contribution extends the framework to handle proximal-point iterations, which can be used to improve the numerical stability of QP solvers, furthering their reliability. The second part of the thesis concerns efficient solvers for real-time MPC. We propose an efficient active-set solver that is covered by the above-mentioned complexity-certification framework. In addition to being real-time certifiable, the solver is efficient, simple to implement, easy to warm-start, and numerically stable, all of which are important properties for a solver used in real-time MPC applications. As a final contribution, we use this solver to exemplify how the complexity-certification framework developed in the first part can be used to tailor active-set solvers to a given linear MPC application. Specifically, we do this by constructing and certifying parameter-varying initializations of the solver.
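As a toy illustration of the certification idea (not the thesis's framework, which bounds iterations exactly over the whole parameter set rather than sampling it), consider a scalar parametric QP with box constraints: a primal active-set method starting from the unconstrained minimizer needs a parameter-dependent number of iterations, and sweeping the parameter gives an empirical worst-case iteration count.

```python
# Toy sketch of complexity certification for a scalar parametric QP:
#     minimize 0.5*h*u^2 + f*u   subject to  lb <= u <= ub,
# where the linear term f plays the role of the state-dependent parameter.
# The exact framework partitions the parameter space; this sketch merely
# samples f on a grid (an approximation, for illustration only).

def active_set_solve(h, f, lb, ub):
    """Tiny active-set method: take the unconstrained Newton step, then add
    the violated bound (if any) to the working set. Returns (u*, iterations)."""
    u = -f / h          # iteration 1: solve with an empty working set
    iters = 1
    if u < lb:          # lower bound violated -> activate it
        u, iters = lb, iters + 1
    elif u > ub:        # upper bound violated -> activate it
        u, iters = ub, iters + 1
    return u, iters

# Empirical worst-case iteration count over a grid of parameter values.
grid = [-3.0 + 6.0 * k / 600 for k in range(601)]
worst_case = max(active_set_solve(1.0, f, -1.0, 1.0)[1] for f in grid)
print("empirical worst-case iterations:", worst_case)   # 2 on this grid
```

The parameter-to-iteration map is piecewise constant here (1 iteration when the unconstrained optimum is feasible, 2 otherwise), which is the structure that makes exact, region-by-region certification possible for real active-set solvers.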
  •  
21.
  • Azeez, Ahmed, 1991- (author)
  • High-Temperature Durability Prediction of Ferritic-Martensitic Steel
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • Materials used for high-temperature steam turbine sections are generally subjected to harsh environments with temperatures up to 625 °C. The superior creep resistance of 9–12 % Cr ferritic-martensitic steels makes them desirable for these critical steam turbine components. Additionally, the demand for fast and frequent steam turbine start-ups, i.e. flexible operation, causes accelerated fatigue damage in critical locations, such as grooves and notches, in the high-temperature inner steam turbine casing. A durability assessment is necessary to understand the material behaviour under such high temperatures and repeated loading, and it is essential for life prediction. An accurate and less conservative fatigue life prediction approach is achieved by going past the crack initiation stage and allowing controlled growth of cracks within safe limits. In addition, beneficial load-temperature history effects, i.e. warm pre-stressing, must be utilised to enhance the fracture resistance to cracks. This dissertation presents the high-temperature durability assessment of FB2 steel, a 9–12 % Cr ferritic-martensitic steam turbine steel. Initially, isothermal low-cycle fatigue testing was performed on FB2 steel samples. A fatigue life model based on finite element strain range partitioning was utilised to predict fatigue life within the crack initiation phase. Two fatigue damage regimes were identified, i.e. plastic- and creep-dominated damage, and the transition between them depended on temperature and applied total strain. Cyclic deformation and stress relaxation behaviour were investigated to produce an elastic-plastic and creep material model that predicts the initial and mid-life cyclic behaviour of the FB2 steel. Furthermore, the thermomechanical fatigue crack growth behaviour of FB2 steel was studied. Crack closure behaviour was observed and accounted for numerically and experimentally, whereby the crack growth rate curves collapsed onto a single curve.
Interestingly, the collapsed crack growth curves coincided with isothermal crack growth tests performed at the minimum temperature of the thermomechanical crack growth tests. In addition, hold times and changes in the minimum temperature of the thermomechanical fatigue cycle did not influence the crack closure behaviour. Finally, warm pre-stressing effects were explored for FB2 steel. A numerical model was produced to predict the increase in apparent fracture toughness. Warm pre-stressing effects can benefit turbine life by enhancing fracture resistance and allowing longer fatigue cracks to grow within safe limits.
  •  
22.
  • Bairagi, Samiran, 1990- (author)
  • Optical studies of AlN and GaO based nanostructures using Mueller matrix spectroscopic ellipsometry
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • This thesis explores the diverse optical properties manifested when light interacts with various materials, with an emphasis on circular polarization- and bandgap-related phenomena. The studies in this work are centered around Mueller matrix spectroscopic ellipsometry, with the objective of synthesizing and characterizing nanostructured and high-quality thin films to expand our understanding of the optical properties arising from their underlying structure and electronic transitions, respectively. Papers I, II, and III of the research address the optical properties associated with circular polarization, emphasizing the importance of the morphology and structure of the sculptured thin films used. To clarify this, AlN-based chiral sculptured thin films are synthesized using glancing angle deposition and magnetron sputtering. The discussion explores the impact of different growth parameters on the morphology and crystal structure of the films. By examining these thin film samples, it is shown how their structure and crystallographic orientation can be designed to reflect narrow spectral bands of circularly polarized light at specific wavelengths. The research also tackles how thin films preferentially reflect one handedness of circularly polarized light over the other with a high degree of circular polarization. A combination of theoretical and experimental studies offers insights into the nuances of growth and light-matter interactions, particularly in complex photonic structures. Papers IV and V investigate the optical properties that arise from electronic transitions in thin films, focusing on the complex dielectric function and optical bandgap phenomena. These properties are explored using high-quality, single-crystalline, homogeneous thin films of ZnGaO grown using metal-organic chemical vapor deposition. Various formalisms for calculating bandgap values are evaluated for their precision and applicability.
The modified Cody formalism stands out as the preferred choice due to its ability to provide the most linear region for extrapolating bandgap energy values. Through both theoretical calculations and experiments, a critical analysis is provided of the evolution of the crystal structure and optical properties of these thin films when exposed to elevated temperatures. These findings explain the interplay between the structural characteristics of the thin films and their subsequent influence on bandgap properties. Altogether, this thesis provides a fundamental understanding of the structural and intrinsic properties of materials that govern light-matter interactions. This research paves the way for the further development of thin-film-based polarization filters and advanced optoelectronic device technologies.
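The extrapolation step mentioned above can be sketched as follows: in a Cody-type plot, (α/E)^(1/2) is plotted against photon energy E, and the linear region is extrapolated to the energy axis to read off the bandgap. The snippet uses synthetic data with a known gap; the fit window, parameter values, and the simple Cody form α ∝ E(E − Eg)² are illustrative assumptions, not the thesis's modified formalism.

```python
import numpy as np

def cody_bandgap(energy_ev, alpha, fit_lo, fit_hi):
    """Estimate the optical bandgap from a Cody plot: fit a line to
    sqrt(alpha/E) vs E inside [fit_lo, fit_hi] and return its x-intercept."""
    y = np.sqrt(alpha / energy_ev)
    mask = (energy_ev >= fit_lo) & (energy_ev <= fit_hi)
    slope, intercept = np.polyfit(energy_ev[mask], y[mask], 1)
    return -intercept / slope  # zero crossing of the extrapolated line

# Synthetic absorption spectrum with a known 5.0 eV gap (illustrative values).
E = np.linspace(4.5, 6.0, 300)                                    # photon energy, eV
Eg_true = 5.0
alpha = np.where(E > Eg_true, 1e5 * E * (E - Eg_true) ** 2, 0.0)  # cm^-1

Eg_est = cody_bandgap(E, alpha, 5.1, 5.8)
print(f"estimated bandgap: {Eg_est:.3f} eV")
```

On real spectra the choice of fit window matters, which is exactly why a formalism yielding the most linear region (as argued for the modified Cody approach above) gives the most reproducible extrapolation.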
  •  
23.
  • Balian, Alien, 1988- (author)
  • Nuclease Activity as a Biomarker in Cancer Detection
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • Nucleases are a group of enzymes that cleave the phosphodiester bonds in nucleic acids. As such, nucleases act as biological scissors that fulfil a plethora of fundamental roles in prokaryotes and eukaryotes, dependent or not on their catalytic capability. Thus, a differential status of nucleases between healthy and disease conditions is not surprising and can be exploited in disease detection. Specifically, there is a growing body of research demonstrating the potential of nucleases as diagnostic biomarkers in several types of cancer. Biomarkers for early diagnosis are an immense need in the diagnostic landscape of cancer. In this sense, nucleases are promising biomolecules, and they possess the unique feature of catalytic activity, which could be exploited for diagnosis and future therapeutic strategies. In this thesis we aim to demonstrate the use of nucleases as biomarkers associated with cancer, and the capability of oligonucleotide substrates to target a specific nuclease. The thesis work begins with a comprehensive review of nucleases as promising biomarkers in cancer diagnosis (paper I). Then, in paper II, we provide a methodological study in which we propose a flexible approach for the detection of disease-associated nuclease activity using oligonucleotides as substrates. The probes utilized here are flanked with a fluorophore at the 5’-end and a quencher at the 3’-end. Upon cleavage by nucleases, the fluorescent signal increases in proportion to nuclease activity. This platform is suitable for detecting any disease in which nuclease activity is altered. We applied this method in paper III, using 75 probes as substrates to screen breast cancer cells, along with controls, for nuclease activity. We identified a probe (DNA PolyAT) that discriminates between BT-474 breast cancer cells and healthy cells based on a nuclease activity profile associated with the cell membrane.
Next, we screened tissue samples from breast tumors for nuclease activity, and we identified a set of probes capable of discriminating between breast tumor and healthy tissue in 89% of the cases (paper IV). To take a step forward towards non-invasive diagnosis, we developed an activatable magnetic resonance imaging (MRI) probe (paper V). The MRI probe is oligonucleotide-based, works like a contrast agent, and is activated only in the presence of a specific nuclease. MRI probes provide advantages over fluorescent probes, such as high spatial resolution and unlimited tissue penetration. In conclusion, our findings suggest the utility of nuclease activity as a biomarker in cancer detection. Moreover, we demonstrate the applicability of nuclease activity-based approaches in imaging modalities such as MRI. Our future aim is to translate these findings into non-invasive detection of breast cancer by utilizing breast cancer-activatable MRI probes.
  •  
24.
  • Becirovic, Ema, 1992- (author)
  • Signal Processing Aspects of Massive MIMO
  • 2022
  • Doctoral thesis (other academic/artistic) abstract
    • Massive MIMO (multiple-input multiple-output) is a technology that uses an antenna array with a massive number of antennas at the wireless base station. It has shown widespread benefit and has become an inescapable solution for the future of wireless communication. The mainstream literature focuses on cases where high data rates for a handful of devices are the priority. In reality, due to the diversity of applications, no solution is one-size-fits-all. This thesis provides signal-processing solutions for three challenging situations. The first challenging situation deals with the acquisition of channel estimates when the signal-to-noise ratio (SNR) is low. The benefits of massive MIMO are unlocked by having good channel estimates. By virtue of reciprocity in time-division duplex, the estimates are obtained by transmitting pilots on the uplink. However, if the uplink SNR is low, the quality of the channel estimates will suffer and, consequently, so will the spectral efficiency. This thesis studies two cases where the channel estimates can be improved: one where the device is stationary, such that the channel is constant over many coherence blocks, and one where the device has access to accurate channel estimates, such that it can design its pilots based on knowledge of the channel. The thesis provides algorithms and methods that exploit these structures to improve the spectral efficiency. Next, the thesis considers massive machine-type communications, where a large number of simple devices, such as sensors, communicate with the base station. The thesis provides a quantitative study of the benefits massive MIMO can provide in this communication scenario: many devices can be spatially multiplexed and their battery life can be increased.
Further, activity detection is also studied, and it is shown that the channel hardening and favorable propagation properties of massive MIMO can be exploited to design efficient detection algorithms. The third part of the thesis studies a more specific application of massive MIMO, namely federated learning. In federated learning, the goal is for the devices to collectively train a machine learning model based on their local data by transmitting only model updates to the base station. Sum channel estimation has been advocated for blind over-the-air federated learning since fewer communication resources are required to obtain such estimates. On the contrary, this thesis shows that individually estimating each device's channel can save substantial resources, since it allows for individual processing, such as gradient sparsification, which more than compensates for the channel estimation overhead.
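The pilot-based estimation discussed above can be sketched with generic least-squares estimation (a textbook baseline, not the thesis's tailored algorithms): the base station correlates its received uplink block with the known pilot sequence, and the estimation error shrinks with pilot energy, which is why low uplink SNR hurts. The array size, pilot length, and SNR below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ls_channel_estimate(Y, pilot):
    """Least-squares channel estimate from Y = h * pilot^T + noise:
    correlate the received block with the known pilot sequence."""
    return Y @ pilot.conj() / np.linalg.norm(pilot) ** 2

M, tau, snr = 64, 8, 10.0   # antennas, pilot length, linear uplink SNR (assumed)

# Rayleigh-fading channel and a unit-modulus pilot sequence.
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
pilot = np.exp(2j * np.pi * rng.random(tau))

# Received uplink block: rank-one pilot signal plus white Gaussian noise.
noise = (rng.standard_normal((M, tau)) + 1j * rng.standard_normal((M, tau))) / np.sqrt(2 * snr)
Y = np.outer(h, pilot) + noise

h_hat = ls_channel_estimate(Y, pilot)
nmse = np.linalg.norm(h_hat - h) ** 2 / np.linalg.norm(h) ** 2
print(f"NMSE: {nmse:.4f}")   # roughly 1/(snr*tau) = 0.0125 here
```

The 1/(snr·tau) scaling of the error makes the two improvement routes in the abstract concrete: a stationary device can average over many coherence blocks (effectively increasing tau), and a device that already knows its channel can shape its pilots to concentrate energy where it matters.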
  •  
25.
  • Belcastro, Luigi, 1992- (author)
  • Multi-frequency SFDI : depth-resolved scattering models of wound healing
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • Optical techniques refer to a group of methods that use light to perform measurements on matter. Spatial frequency domain imaging (SFDI) is an optical technique that operates in the spatial frequency domain. The technique uses sinusoidal patterns of light for illumination, to study the reflectance of the target as a function of the spatial frequency (ƒx) of the patterns. By analysing the frequency-specific response with the aid of light transport models, we are able to determine the intrinsic optical properties of the material, such as the absorption coefficient (μa) and the reduced scattering coefficient (μ's). In biological applications, these optical properties can be correlated with physiological structures and molecules, providing a useful tool for researchers and clinicians alike in understanding the phenomena happening in biological tissue. The objective of this work is to contribute to the development of SFDI so that the technique can be used as a diagnostic tool to study the process of wound healing in tissue. In paper I we introduce the concept of cross-channels, given by the spectral overlap of the broadband LED light sources and the RGB camera sensors used in the SFDI instrumentation. The purpose of cross-channels is to improve the limited spectral information of RGB devices, allowing a larger number of biological molecules to be detected. One of the biggest limitations of SFDI is that it works on the assumption of light diffusing through a homogeneous, thick layer of material. This assumption loses validity when we want to examine biological tissue, which comprises multiple thin layers with different properties. In paper IV we developed a new method to process SFDI data that we call multi-frequency SFDI. In this new approach, we make use of the different penetration depths of the light patterns, depending on their ƒx, to obtain depth-sensitive measurements.
We also defined a two-layer model of light scattering that imitates the physiology of a wound, to calculate the partial volume contributions of the single layers to μ's. The two-layer model is based on analytical formulations of light fluence. We compared the performance of three fluence models, one of which we derived ourselves as an improvement over an existing formulation. In paper II we were able to test our new multi-frequency SFDI method by participating in an animal study on stem-cell-based regenerative therapies. We contributed by performing SFDI measurements on healing wounds, in order to provide an additional evaluation metric that complemented the clinical evaluation and cell histology performed in the study. The analysis of the SFDI data at different ƒx highlighted different processes happening at the surface compared to the deeper tissue. In paper V we further refine the technique introduced in paper IV by developing an inverse-solver algorithm to isolate the thickness of the thin layer and the layer-specific μ's. The reconstructed parameters were tested on both thin silicone optical phantoms and ex vivo burn wounds treated with stem cells.
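The frequency-specific measurement underlying SFDI can be sketched with the standard three-phase demodulation scheme, a textbook SFDI step assumed here for illustration; multi-frequency processing of the kind described above builds on such demodulated amplitudes obtained at several ƒx. The scene and reflectance values below are synthetic.

```python
import numpy as np

def demodulate(i1, i2, i3):
    """Recover AC (modulated) and DC (planar) reflectance amplitudes from
    three images taken with sinusoidal illumination shifted by 0, 120, 240 deg."""
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    dc = (i1 + i2 + i3) / 3.0
    return ac, dc

# Synthetic 1-D "images" of a homogeneous target (illustrative values).
x = np.linspace(0.0, 10.0, 500)       # lateral position, mm
fx = 0.2                              # spatial frequency, mm^-1
r_ac, r_dc = 0.3, 0.6                 # assumed diffuse reflectance amplitudes

phases = (0.0, 2 * np.pi / 3, 4 * np.pi / 3)
images = [r_dc + r_ac * np.cos(2 * np.pi * fx * x + p) for p in phases]

ac, dc = demodulate(*images)
print(ac.mean(), dc.mean())   # recovers 0.3 and 0.6 at every pixel
```

Because higher ƒx probes shallower tissue, repeating this demodulation over a set of spatial frequencies yields the depth-sensitive amplitude data that a layered scattering model can then be fitted to.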
  •  
Publication type
doctoral thesis (292)
artistic work (1)
Content type
other academic/artistic (292)
Author/editor
Berggren, Magnus, Pr ... (18)
Gao, Feng, Professor ... (9)
Larsson, Erik G., Pr ... (6)
Crispin, Xavier, Pro ... (6)
Inganäs, Olle, Profe ... (6)
Uvdal, Kajsa, Profes ... (5)
Eklund, Per, Associa ... (5)
Krus, Petter, Profes ... (5)
Felsberg, Michael, P ... (5)
Sunnerhagen, Maria, ... (5)
Fahlman, Mats, Profe ... (5)
Unger, Jonas, Profes ... (5)
Zhang, Fengling, Pro ... (5)
Lambrix, Patrick, Pr ... (4)
Nur, Omer, Associate ... (4)
Simon, Daniel, Assoc ... (4)
Wennergren, Uno, Pro ... (4)
Eriksson, Lars, Prof ... (4)
Leidermark, Daniel, ... (4)
Simonsson, Kjell, Pr ... (4)
Rosén, Johanna, Prof ... (4)
Singull, Martin, Pro ... (4)
Darakchieva, Vanya, ... (4)
Simon, Daniel T, Ass ... (4)
Pedersen, Henrik, Pr ... (4)
Jensen, Per, Profess ... (4)
Wright, Dominic, Pro ... (4)
Engquist, Isak, Asso ... (4)
Solin, Niclas, Assoc ... (4)
Björnson, Emil, Prof ... (3)
Eklund, Anders, Asso ... (3)
Doherty, Patrick, Pr ... (3)
Magnusson, Thomas, P ... (3)
Moverare, Johan, Pro ... (3)
Björnson, Emil, Asso ... (3)
Frisk, Erik, Profess ... (3)
Gustafsson, Fredrik, ... (3)
Birch, Jens, Profess ... (3)
Sundin, Erik, Profes ... (3)
Tybrandt, Klas, Asso ... (3)
Högberg, Hans, Assoc ... (3)
Gustafsson, Mika, Pr ... (3)
Zozoulenko, Igor, Pr ... (3)
Cedersund, Gunnar, A ... (3)
Ynnerman, Anders, Pr ... (3)
Eles, Petru Ion, Pro ... (3)
Peng, Zebo, Professo ... (3)
Chen, Weimin, Profes ... (3)
Forssén, Per-Erik, A ... (3)
Eilertsen, Gabriel, ... (3)
University
Linköpings universitet (292)
Örebro universitet (1)
Language
English (290)
Swedish (2)
Research subject (UKÄ/SCB)
Natural sciences (157)
Engineering and technology (110)
Social sciences (21)
Medical and health sciences (10)
Humanities (3)
Agricultural sciences (2)

Year
