SwePub
Search the SwePub database

  Advanced search

Result list for search "WFRF:(Nettelblad Carl)"

Search: WFRF:(Nettelblad Carl)

  • Results 1-50 of 60
Sort/group the result list
   
Numbering · Reference · Cover image · Find
1.
  • Akram, Adeel (author)
  • Towards a realistic hyperon reconstruction with PANDA at FAIR
  • 2021
  • Licentiate thesis (other academic/artistic); abstract:
    • The goal of the PANDA (anti-Proton ANnihilation at DArmstadt) experiment at FAIR (Facility for Anti-proton and Ion Research) is to study strong interactions in the confinement domain. In PANDA, a continuous beam of anti-protons (p̄) will impinge on a fixed hydrogen (p) target inside the High Energy Storage Ring (HESR), a feature intended to attain high interaction rates for various physics studies, e.g. hyperon production. Two types of hydrogen targets are under development: a pellet target and a cluster-jet target, where either high-density pellets or clusters of cooled hydrogen gas will be injected at the interaction point. The residual gas from the target system is expected to dissipate along the beam pipe, resulting in a target that is effectively extended outside the designed interaction point. The realistic density profile of the target and residual gas has implications for physics studies, e.g. in the ability to select signals of interest and, at the same time, suppress background. All hyperon simulations in PANDA until now have been performed under ideal conditions. In this work, I will for the first time implement more realistic conditions for the beam-target interaction and carry out simulations using a benchmark hyperon channel. The impact of the different configurations of the vacuum system will be discussed in detail. In addition, I will present tests of some of PANDA's particle track finders that are not based on ideal pattern recognition approaches. The results will provide important guidance for future tracking developments within PANDA.
2.
  • Akram, Adeel (author)
  • Towards Realistic Hyperon Reconstruction in PANDA : From Tracking with Machine Learning to Interactions with Residual Gas
  • 2023
  • Doctoral thesis (other academic/artistic); abstract:
    • The PANDA (anti-Proton ANnihilation at DArmstadt) experiment at FAIR (Facility for Anti-proton and Ion Research) aims to study strong interactions in the confinement domain. In PANDA, a continuous beam of anti-protons will impinge on a fixed hydrogen target inside the High Energy Storage Ring (HESR), a feature intended to attain high interaction rates for various physics studies, e.g. hyperon production. This thesis addresses the challenges of running PANDA under realistic conditions. The focus is two-fold: developing deep learning methods to reconstruct particle trajectories, and reconstructing hyperons using realistic target profiles. Two approaches are used: (i) standard deep learning models, such as dense networks, and (ii) geometric deep learning models, such as interaction graph neural networks. The deep learning methods have given promising results, especially when it comes to (i) reconstruction of low-momentum particles that frequently occur in hadron physics experiments and (ii) reconstruction of tracks originating far from the interaction point. Both points are critical in many hyperon studies. However, further studies are needed to mitigate e.g. the high clone rate. For the realistic target profiles, these pioneering simulations address the effect of residual gas on hyperon reconstruction. The results have shown that the signal-to-background ratio becomes worse by about a factor of 2 compared to the ideal target; however, the background level is still sufficiently low for these studies to be feasible. Further improvements can be made on the target side, to achieve a better vacuum in the beam pipe, and on the analysis side, to improve the event selection. Finally, solutions are suggested to improve results, especially for the geometric deep learning method in handling the low-momentum particles contributing to the high clone rate. In addition, a better way to build the ground truth can improve the performance of our approach.
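A hedged sketch of the edge-classification idea behind the tracking approaches in entry 2: detector hits become graph nodes, candidate edges connect hits on adjacent layers, and a small network scores each edge as "same track" or not. The geometry, sizes and the edge_mlp model below are illustrative assumptions in Python/PyTorch, not PANDA's actual pipeline.

import torch
import torch.nn as nn

# Small MLP scoring one candidate edge from the (x, y, z) coordinates of its
# two endpoint hits; an interaction GNN would additionally pass messages
# along edges before classifying them.
edge_mlp = nn.Sequential(
    nn.Linear(6, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

hits_layer1 = torch.randn(50, 3)   # fake hit positions on two adjacent layers
hits_layer2 = torch.randn(60, 3)

# All layer-1 x layer-2 pairs as candidate edges; real pipelines prune this
# with geometric cuts to keep the graph sparse.
pairs = torch.cartesian_prod(torch.arange(50), torch.arange(60))
edges = torch.cat([hits_layer1[pairs[:, 0]], hits_layer2[pairs[:, 1]]], dim=1)

scores = edge_mlp(edges).squeeze(1)   # edges above a threshold would then be
print(scores.shape)                   # linked into track candidates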
3.
4.
  • Ausmees, Kristiina, et al. (author)
  • A deep learning framework for characterization of genotype data
  • 2022
  • In: G3. - Oxford University Press (OUP). - 2160-1836. ; 12:3
  • Journal article (peer-reviewed); abstract:
    • Dimensionality reduction is a data transformation technique widely used in various fields of genomics research. The application of dimensionality reduction to genotype data is known to capture genetic similarity between individuals, and is used for visualization of genetic variation, identification of population structure as well as ancestry mapping. Among frequently used methods are principal component analysis, which is a linear transform that often misses more fine-scale structures, and neighbor-graph based methods which focus on local relationships rather than large-scale patterns. Deep learning models are a type of nonlinear machine learning method in which the features used in data transformation are decided by the model in a data-driven manner, rather than by the researcher, and have been shown to present a promising alternative to traditional statistical methods for various applications in omics research. In this study, we propose a deep learning model based on a convolutional autoencoder architecture for dimensionality reduction of genotype data. Using a highly diverse cohort of human samples, we demonstrate that the model can identify population clusters and provide richer visual information in comparison to principal component analysis, while preserving global geometry to a higher extent than t-SNE and UMAP, yielding results that are comparable to an alternative deep learning approach based on variational autoencoders. We also discuss the use of the methodology for more general characterization of genotype data, showing that it preserves spatial properties in the form of decay of linkage disequilibrium with distance along the genome and demonstrating its use as a genetic clustering method, comparing results to the ADMIXTURE software frequently used in population genetic studies.
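As a rough illustration of the convolutional-autoencoder approach in entry 4, the sketch below compresses genotypes coded 0/1/2 into a two-dimensional embedding (playing the role of PCA scores) and reconstructs them. All layer sizes and names are my own assumptions, not the authors' published architecture.

import torch
import torch.nn as nn

class GenotypeAutoencoder(nn.Module):
    def __init__(self, n_snps: int, latent_dim: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, stride=2, padding=2),   # (B,1,L) -> (B,8,L/2)
            nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, stride=2, padding=2),  # -> (B,16,L/4)
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * (n_snps // 4), latent_dim),  # 2-D embedding for visualization
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * (n_snps // 4)),
            nn.ReLU(),
            nn.Unflatten(1, (16, n_snps // 4)),
            nn.ConvTranspose1d(16, 8, kernel_size=5, stride=2, padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(8, 1, kernel_size=5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)              # latent coordinates per individual
        return self.decoder(z), z

# Toy usage: 64 individuals, 1000 SNPs coded 0/1/2, rescaled to [0, 1].
x = torch.randint(0, 3, (64, 1, 1000)).float() / 2.0
recon, embedding = GenotypeAutoencoder(n_snps=1000)(x)
loss = nn.functional.mse_loss(recon, x)   # training would minimize this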
5.
  • Ausmees, Kristiina, et al. (author)
  • Achieving improved accuracy for imputation of ancient DNA
  • 2023
  • In: Bioinformatics. - Oxford University Press. - 1367-4803, 1367-4811. ; 39:1
  • Journal article (peer-reviewed); abstract:
    • Motivation: Genotype imputation has the potential to increase the amount of information that can be gained from the often limited biological material available in ancient samples. As many widely used tools have been developed with modern data in mind, their design is not necessarily reflective of the requirements in studies of ancient DNA. Here, we investigate if an imputation method based on the full probabilistic Li and Stephens model of haplotype frequencies might be beneficial for the particular challenges posed by ancient data. Results: We present an implementation called prophaser and compare imputation performance to two alternative pipelines that have been used in the ancient DNA community, based on the Beagle software. Considering empirical ancient data downsampled to lower coverages as well as present-day samples with artificially thinned genotypes, we show that the proposed method is advantageous at lower coverages, where it yields improved accuracy and ability to capture rare variation. The software prophaser is optimized for running in a massively parallel manner and achieved reasonable runtimes on the experiments performed when executed on a GPU.
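Entry 5 builds on the Li and Stephens haplotype-copying model. Below is a toy haploid forward pass for that model; the real prophaser handles diploid genotype likelihoods, phasing and GPU parallelism, so the parameters (rho, theta) and the uniform recombination prior are simplifying assumptions.

import numpy as np

def li_stephens_forward(haps, obs, rho=0.01, theta=0.01):
    """haps: (K, M) reference haplotypes of 0/1 alleles; obs: (M,) observed
    alleles; rho: per-step switch probability; theta: mismatch probability."""
    K, M = haps.shape
    alpha = np.where(haps[:, 0] == obs[0], 1 - theta, theta) / K
    for m in range(1, M):
        # Either stay on the same reference haplotype or recombine to a
        # uniformly chosen one, then emit the observed allele.
        alpha = (1 - rho) * alpha + rho * alpha.sum() / K
        alpha *= np.where(haps[:, m] == obs[m], 1 - theta, theta)
        alpha /= alpha.sum()              # rescale to avoid underflow
    return alpha                          # copying-state posterior at last marker

haps = np.random.randint(0, 2, (100, 50))        # toy reference panel
obs = haps[0].copy()                             # a "study" haplotype
print(li_stephens_forward(haps, obs).argmax())   # points back near haplotype 0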
6.
  • Ausmees, Kristiina, et al. (author)
  • An empirical evaluation of genotype imputation of ancient DNA
  • 2022
  • In: G3. - Oxford University Press. - 2160-1836. ; 12:6
  • Journal article (peer-reviewed); abstract:
    • With capabilities of sequencing ancient DNA to high coverage often limited by sample quality or cost, imputation of missing genotypes presents a possibility to increase the power of inference as well as cost-effectiveness for the analysis of ancient data. However, the high degree of uncertainty often associated with ancient DNA poses several methodological challenges, and performance of imputation methods in this context has not been fully explored. To gain further insights, we performed a systematic evaluation of imputation of ancient data using Beagle v4.0 and reference data from phase 3 of the 1000 Genomes project, investigating the effects of coverage, phased reference, and study sample size. Making use of five ancient individuals with high-coverage data available, we evaluated imputed data for accuracy, reference bias, and genetic affinities as captured by principal component analysis. We obtained genotype concordance levels of over 99% for data with 1× coverage, and similar levels of accuracy and reference bias at levels as low as 0.75×. Our findings suggest that using imputed data can be a realistic option for various population genetic analyses even for data in coverage ranges below 1×. We also show that a large and varied phased reference panel as well as the inclusion of low- to moderate-coverage ancient individuals in the study sample can increase imputation performance, particularly for rare alleles. In-depth analysis of imputed data with respect to genetic variants and allele frequencies gave further insight into the nature of errors arising during imputation, and can provide practical guidelines for postprocessing and validation prior to downstream analysis.
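The two headline metrics of entry 6, genotype concordance and reference bias, come down to a few lines of array arithmetic. The toy implementation below is my own reading of those definitions, not the paper's evaluation pipeline.

import numpy as np

def concordance(true_gt, imputed_gt):
    """Fraction of genotypes (coded 0/1/2 alternative-allele counts) that agree."""
    return float(np.mean(true_gt == imputed_gt))

def reference_bias(true_gt, imputed_gt):
    """At truly heterozygous sites, how much more often imputation drifts
    toward homozygous reference (0) than toward homozygous alternative (2)."""
    het = true_gt == 1
    return float(np.mean(imputed_gt[het] == 0) - np.mean(imputed_gt[het] == 2))

true_gt = np.random.choice([0, 1, 2], size=10_000, p=[0.49, 0.42, 0.09])
imputed = true_gt.copy()
imputed[np.random.rand(true_gt.size) < 0.01] = 0   # 1% errors, skewed to reference
print(concordance(true_gt, imputed), reference_bias(true_gt, imputed))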
7.
  • Ausmees, Kristiina, et al. (author)
  • An Empirical Evaluation of Genotype Imputation of Ancient DNA
  • 2019
  • Report (other academic/artistic); abstract:
    • With capabilities of sequencing ancient DNA to high coverage often limited by sample quality or cost, imputation of missing genotypes presents a possibility to increase power of inference as well as cost-effectiveness in analysis of ancient data. However, the high degree of uncertainty often associated with ancient DNA poses several methodological challenges, and performance of imputation methods in this context has not been fully explored. To gain further insights, we performed a systematic evaluation of imputation of ancient data using BEAGLE 4.0 and reference data from phase 3 of the 1000 Genomes project, investigating the effects of coverage, phased reference and study sample size. Making use of five ancient samples with high-coverage data available, we evaluated imputed data with respect to accuracy, reference bias and genetic affinities as captured by PCA. We obtained genotype concordance levels of over 99% for data with 1x coverage, and similar levels of accuracy and reference bias at levels as low as 0.75x. Our findings suggest that using imputed data can be a realistic option for various population genetic analyses even for data in coverage ranges below 1x. We also show that a large and varied phased reference set as well as the inclusion of low- to moderate-coverage ancient samples can increase imputation performance, particularly for rare alleles. In-depth analysis of imputed data with respect to genetic variants and allele frequencies gave further insight into the nature of errors arising during imputation, and can provide practical guidelines for post-processing and validation prior to downstream analysis.
8.
9.
  • Ausmees, Kristiina (author)
  • Efficient computational methods for applications in genomics
  • 2019
  • Licentiate thesis (other academic/artistic); abstract:
    • During the last two decades, advances in molecular technology have facilitated the sequencing and analysis of ancient DNA recovered from archaeological finds, contributing to novel insights into human evolutionary history. As more ancient genetic information has become available, the need for specialized methods of analysis has also increased. In this thesis, we investigate statistical and computational models for analysis of genetic data, with a particular focus on the context of ancient DNA. The main focus is on imputation, or the inference of missing genotypes based on observed sequence data. We present results from a systematic evaluation of a common imputation pipeline on empirical ancient samples, and show that imputed data can constitute a realistic option for population-genetic analyses. We also discuss preliminary results from a simulation study comparing two methods of phasing and imputation, which suggest that the parametric Li and Stephens framework may be more robust to extremely low levels of sparsity than the parsimonious Browning and Browning model. An evaluation of methods to handle missing data in the application of PCA for dimensionality reduction of genotype data is also presented. We illustrate that non-overlapping sequence data can lead to artifacts in projected scores, and evaluate different methods for handling unobserved genotypes. In genomics, as in other fields of research, increasing sizes of data sets are placing larger demands on efficient data management and compute infrastructures. The last part of this thesis addresses the use of cloud resources for facilitating such analysis. We present two different cloud-based solutions, and exemplify them on applications from genomics.
10.
  • Ausmees, Kristiina (author)
  • Methodology and Infrastructure for Statistical Computing in Genomics : Applications for Ancient DNA
  • 2022
  • Doctoral thesis (other academic/artistic); abstract:
    • This thesis concerns the development and evaluation of computational methods for analysis of genetic data. A particular focus is on ancient DNA recovered from archaeological finds, the analysis of which has contributed to novel insights into human evolutionary and demographic history, while also introducing new challenges and the demand for specialized methods. A main topic is that of imputation, or the inference of missing genotypes based on observed sequence data. We present results from a systematic evaluation of a common imputation pipeline on empirical ancient samples, and show that imputed data can constitute a realistic option for population-genetic analyses. We also develop a tool for genotype imputation that is based on the full probabilistic Li and Stephens model for haplotype frequencies and show that it can yield improved accuracy on particularly challenging data. Another central subject in genomics and population genetics is that of data characterization methods that allow for visualization and exploratory analysis of complex information. We discuss challenges associated with performing dimensionality reduction of genetic data, demonstrating how the use of principal component analysis is sensitive to incomplete information and performing an evaluation of methods to handle unobserved genotypes. We also discuss the use of deep learning models as an alternative to traditional methods of data characterization in genomics and propose a framework based on convolutional autoencoders that we exemplify on the applications of dimensionality reduction and genetic clustering. In genomics, as in other fields of research, increasing sizes of data sets are placing larger demands on efficient data management and compute infrastructures. The final part of this thesis addresses the use of cloud resources for facilitating data analysis in scientific applications. We present two different cloud-based solutions, and exemplify them on applications from genomics.
11.
12.
  • Bielecki, Johan, 1982, et al. (author)
  • Electrospray sample injection for single-particle imaging with x-ray lasers
  • 2019
  • In: Science Advances. - American Association for the Advancement of Science (AAAS). - 2375-2548. ; 5:5
  • Journal article (peer-reviewed); abstract:
    • The possibility of imaging single proteins constitutes an exciting challenge for x-ray lasers. Despite encouraging results on large particles, imaging small particles has proven to be difficult for two reasons: not quite high enough pulse intensity from currently available x-ray lasers and, as we demonstrate here, contamination of the aerosolized molecules by nonvolatile contaminants in the solution. The amount of contamination on the sample depends on the initial droplet size during aerosolization. Here, we show that, with our electrospray injector, we can decrease the size of aerosol droplets and demonstrate virtually contaminant-free sample delivery of organelles, small virions, and proteins. The results presented here, together with the increased performance of next-generation x-ray lasers, constitute an important stepping stone toward the ultimate goal of protein structure determination from imaging at room temperature and high temporal resolution.
13.
  • Clouard, Camille (author)
  • A computational and statistical framework for cost-effective genotyping combining pooling and imputation
  • 2024
  • Doctoral thesis (other academic/artistic); abstract:
    • The information conveyed by genetic markers, such as single nucleotide polymorphisms (SNPs), has been widely used in biomedical research to study human diseases and is increasingly valued in agriculture for genomic selection purposes. Specific markers can be identified as a genetic signature that correlates with certain characteristics in a living organism, e.g. a susceptibility to disease or high-yield traits. Capturing these signatures with sufficient statistical power often requires large volumes of data, with thousands of samples to be analysed and potentially millions of genetic markers to be screened. Relevant effects are particularly delicate to detect when the genetic variations involved occur at low frequencies. The cost of producing such marker genotype data is therefore a critical part of the analysis. Despite recent technological advances, production costs can still be prohibitive on a large scale, and genotype imputation strategies have been developed to address this issue. Genotype imputation methods have been extensively studied in human data and, to a lesser extent, in crop and animal species. A recognised weakness of imputation methods is their lower accuracy in predicting the genotypes for rare variants, whereas those can be highly informative in association studies and improve the accuracy of genomic selection. In this respect, pooling strategies can be well suited to complement imputation, as pooling is efficient at capturing the low-frequency items in a population. Pooling also reduces the number of genotyping tests required, making its use in combination with imputation a cost-effective compromise between accurate but expensive high-density genotyping of each sample individually and stand-alone imputation. However, due to the nature of genotype data and the limitations of genotype testing techniques, decoding pooled genotypes into unique data resolutions is challenging. In this work, we study the characteristics of decoded genotype data from pooled observations with a specific pooling scheme, using the examples of a human cohort and a population of inbred wheat lines. We propose different inference strategies to reconstruct the genotypes before providing them as input to imputation, and we reflect on how the reconstructed distributions affect the results of imputation methods such as tree-based haplotype clustering or coalescent models.
14.
  • Clouard, Camille, et al. (author)
  • A joint use of pooling and imputation for genotyping SNPs
  • 2022
  • In: BMC Bioinformatics. - Springer Nature. - 1471-2105. ; 23
  • Journal article (peer-reviewed); abstract:
    • Background: Despite continuing technological advances, the cost for large-scale genotyping of a high number of samples can be prohibitive. The purpose of this study is to design a cost-saving strategy for SNP genotyping. We suggest making use of pooling, a group testing technique, to reduce the number of SNP arrays needed. We believe that this will be of the greatest importance for non-model organisms with more limited resources in terms of cost-efficient large-scale chips and high-quality reference genomes, with applications such as wildlife monitoring and plant and animal breeding, though the approach is in essence species-agnostic. The proposed approach consists of grouping and mixing individual DNA samples into pools before testing these pools on bead-chips, such that the number of pools is less than the number of individual samples. We present a statistical estimation algorithm, based on the pooling outcomes, for inferring marker-wise the most likely genotype of every sample in each pool. Finally, we input these estimated genotypes into existing imputation algorithms. We compare the imputation performance from pooled data with the Beagle algorithm, and a local likelihood-aware phasing algorithm closely modeled on MaCH that we implemented. Results: We conduct simulations based on human data from the 1000 Genomes Project, to aid comparison with other imputation studies. Based on the simulated data, we find that pooling impacts the genotype frequencies of the directly identifiable markers, without imputation. We also demonstrate how a combinatorial estimation of the genotype probabilities from the pooling design can improve the prediction performance of imputation models. Our algorithm achieves 93% concordance in predicting unassayed markers from pooled data, thus outperforming the Beagle imputation model, which reaches 80% concordance. We observe that the pooling design gives higher concordance for the rare variants than traditional low-density to high-density imputation commonly used for cost-effective genotyping of large cohorts. Conclusions: We present promising results for combining a pooling scheme for SNP genotyping with computational genotype imputation on human data. These results could find potential applications in any context where the genotyping costs form a limiting factor on the study size, such as in marker-assisted selection in plant breeding.
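The pooling idea of entry 14 can be pictured with a toy row/column design: 16 samples in a 4 x 4 grid genotyped with 8 pooled tests instead of 16 individual ones, after which only the ambiguous intersections need statistical decoding and imputation. This layout is a simplified stand-in, not the exact design of the paper.

import numpy as np

rng = np.random.default_rng(1)
# Genotypes at one rare biallelic SNP, coded 0/1/2 alternative-allele counts.
G = rng.choice([0, 1, 2], size=(4, 4), p=[0.90, 0.09, 0.01])

row_positive = (G > 0).any(axis=1)   # 4 row pools
col_positive = (G > 0).any(axis=0)   # 4 column pools: 8 tests instead of 16

# Samples at the intersection of a positive row and a positive column are
# candidate carriers; all other samples are confidently homozygous reference.
candidate_carriers = np.outer(row_positive, col_positive)
print(G, candidate_carriers, sep="\n")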
15.
  • Clouard, Camille (author)
  • Computational statistical methods for genotyping biallelic DNA markers from pooled experiments
  • 2022
  • Licentiate thesis (other academic/artistic); abstract:
    • The information conveyed by genetic markers such as Single Nucleotide Polymorphisms (SNPs) has been widely used in biomedical research for studying human diseases, but also increasingly in agriculture by plant and animal breeders for selection purposes. Specific identified markers can act as a genetic signature that is correlated to certain characteristics in a living organism, e.g. a sensitivity to a disease or high-yield traits. Capturing these signatures with sufficient statistical power often requires large volumes of data, with thousands of samples to analyze and possibly millions of genetic markers to screen. Establishing statistical significance for effects from genetic variations is especially delicate when they occur at low frequencies. The production cost of such marker genotype data is therefore a critical part of the analysis. Despite recent technological advances, the production cost can still be prohibitive and genotype imputation strategies have been developed for addressing this issue. The genotype imputation methods have been widely investigated on human data and to a smaller extent on crop and animal species. In the case where only a few reference genomes are available for imputation purposes, such as for non-model organisms, the imputation results can be less accurate. Group testing strategies, also called pooling strategies, can be well suited for complementing imputation in large populations and decreasing the number of genotyping tests required compared to the single testing of every individual. Pooling is especially efficient for genotyping the low-frequency variants. However, because of the particular nature of genotype data and because of the limitations inherent to the genotype testing techniques, decoding pooled genotypes into unique data resolutions is a challenge. Overall, the decoding problem with pooled genotypes can be described as an inference problem in Missing Not At Random data with nonmonotone missingness patterns. Specific inference methods such as variations of the Expectation-Maximization algorithm can be used for resolving the pooled data into estimates of the genotype probabilities for every individual. However, the non-randomness of the undecoded data impacts the outcomes of the inference process. This impact is propagated to imputation if the inferred genotype probabilities are to be provided as input to classical imputation methods for genotypes. In this work, we propose a study of the specific characteristics of a pooling scheme on genotype data, as well as how it affects the results of imputation methods such as tree-based haplotype clustering or coalescent models.
16.
  • Clouard, Camille, et al. (author)
  • Consistency Study of a Reconstructed Genotype Probability Distribution via Clustered Bootstrapping in NORB Pooling Blocks
  • 2022
  • Report (other academic/artistic); abstract:
    • For applications with biallelic genetic markers, group testing techniques, synonymous with pooling techniques, are usually applied to decrease the cost of large-scale testing, e.g. when detecting carriers of rare genetic variants. In some configurations, the results of the grouped tests cannot be decoded and the pooled items are missing. Inference of these missing items can be performed with specific statistical methods, for example ones related to the Expectation-Maximization algorithm. Pooling has also been applied for determining the genotype of markers in large populations. The particularity of full genotype data for diploid organisms in the context of group testing is the ternary outcomes (two homozygous genotypes and one heterozygous), as well as the distribution of these three outcomes in a population, which is often ruled by the Hardy-Weinberg Equilibrium and depends on the allele frequency in such situations. When using a nonoverlapping repeated block (NORB) pooling design, the missing items are only observed in particular arrangements. Overall, a data set of pooled genotypes can be described as an inference problem in Missing Not At Random data with nonmonotone missingness patterns. This study presents a preliminary investigation of the consistency of various iterative methods estimating the most likely genotype probabilities of the missing items in pooled data. We use the Kullback-Leibler divergence and the L2 distance between the genotype distribution computed from our estimates and a simulated empirical distribution as measures of distributional consistency.
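Entry 16 scores distributional consistency with the Kullback-Leibler divergence and the L2 distance between genotype distributions, i.e. probability vectors over the three genotypes 0/1/2. In toy form, assuming Hardy-Weinberg proportions for the empirical side:

import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

def l2_distance(p, q):
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

empirical = [0.49, 0.42, 0.09]   # Hardy-Weinberg with alternative-allele frequency 0.3
estimated = [0.52, 0.38, 0.10]   # hypothetical reconstructed distribution
print(kl_divergence(empirical, estimated), l2_distance(empirical, estimated))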
17.
  • Clouard, Camille, et al. (author)
  • Genotyping of SNPs in bread wheat at reduced cost from pooled experiments and imputation
  • 2024
  • In: Theoretical and Applied Genetics. - Springer Nature. - 0040-5752, 1432-2242. ; 137:1
  • Journal article (peer-reviewed); abstract:
    • The plant breeding industry has shown growing interest in using the genotype data of relevant markers for performing selection of new competitive varieties. The selection usually benefits from large amounts of marker data, and it is therefore crucial to have access to data collection methods that are both cost-effective and reliable. Computational methods such as genotype imputation have been proposed earlier in several plant science studies for addressing the cost challenge. Genotype imputation methods have, however, been used more frequently and investigated more extensively in human genetics research. The various algorithms that exist have shown lower accuracy at inferring the genotype of genetic variants occurring at low frequency, while these rare variants can have great significance and impact in the genetic studies that underlie selection. In contrast, pooling is a technique that can efficiently identify low-frequency items in a population, and it has been successfully used for detecting the samples that carry rare variants in a population. In this study, we propose to combine pooling and imputation, and demonstrate this by simulating a hypothetical microarray for genotyping a population of recombinant inbred lines in a cost-effective and accurate manner, even for rare variants. We show that with an adequate imputation model, it is feasible to accurately and time-effectively predict the individual genotypes at lower cost than sample-wise genotyping. Moreover, we provide code resources for reproducing the results presented in this study in the form of a containerized workflow.
18.
  • Clouard, Camille, et al. (author)
  • Using feedback in pooled experiments augmented with imputation for high genotyping accuracy at reduced cost
  • Other publication (other academic/artistic); abstract:
    • Conducting genomic selection in plant breeding programs can substantially speed up the development of new varieties. Genomic selection provides more reliable insights when it is based on dense marker data, in which the rare variants can be particularly informative while being delicate to capture with sufficient statistical power. Despite the release of new, better-performing technologies, the cost of large-scale genotyping remains a major limitation to the implementation of genomic selection. We suggest combining pooled genotyping with population-based imputation as a cost-effective computational strategy for genotyping SNPs. Pooling saves genotyping tests and has proven to accurately capture the rare variants that are usually missed by imputation. In this study, we investigate an extension to our joint model of pooling and imputation via iterative coupling. In each iteration, the imputed genotype probabilities serve as feedback input for rectifying the decoded data, before running a new imputation on these adjusted data. Such a flexible setup indirectly imposes consistency between the imputed genotypes and the pooled observations. We demonstrate that repeated cycles of feedback can take full advantage of the strengths of both pooling and imputation. The iterations improve greatly upon the initial genotype predictions, achieving very high genotype accuracy for both low- and high-frequency variants. We enhance the average concordance from 94.5% to 98.4% at a very limited computational cost and without requiring any additional genotype testing. We believe that these results could be of interest for plant breeders and crop scientists.
19.
  • Crooks, Lucy, et al. (author)
  • An Improved Method for Estimating Chromosomal Line Origin in QTL Analysis of Crosses Between Outbred Lines
  • 2011
  • In: G3. - Oxford University Press (OUP). - 2160-1836. ; 1, pp. 57-64
  • Journal article (peer-reviewed); abstract:
    • Estimating the line origin of chromosomal sections from marker genotypes is a vital step in quantitative trait loci analyses of outbred line crosses. The original, and most commonly used, algorithm can only handle moderate numbers of partially informative markers. The advent of high-density genotyping with SNP chips motivates a new method, because the generic sets of markers on SNP chips typically result in long stretches of partially informative markers. We validated a new method for inferring line origin, triM (tracing inheritance with Markov models), with simulated data. A realistic pattern of marker information was achieved by replicating the linkage disequilibrium from an existing chicken intercross. There were approximately 1500 SNP markers and 800 F2 individuals. The performance of triM was compared to GridQTL, which uses a variant of the original algorithm modified for larger datasets. triM estimated the line origin with an average error of 2%, was 10% more accurate than GridQTL, considerably faster, and better at inferring positions of recombination. GridQTL could not analyze all simulated replicates and did not estimate line origin for around a third of individuals at many positions. The study shows that triM has computational benefits and improved estimation over available algorithms and is valuable for analyzing the large datasets that will be standard in the future.
20.
  • Daurer, Benedikt J., et al. (author)
  • Experimental strategies for imaging bioparticles with femtosecond hard X-ray pulses
  • 2017
  • In: IUCrJ. - International Union of Crystallography. - 2052-2525. ; 4, pp. 251-262
  • Journal article (peer-reviewed); abstract:
    • This study explores the capabilities of the Coherent X-ray Imaging Instrument at the Linac Coherent Light Source to image small biological samples. The weak signal from small samples puts a significant demand on the experiment. Aerosolized Omono River virus particles of ~40 nm in diameter were injected into the submicrometre X-ray focus at a reduced pressure. Diffraction patterns were recorded on two area detectors. The statistical nature of the measurements from many individual particles provided information about the intensity profile of the X-ray beam, phase variations in the wavefront and the size distribution of the injected particles. The results point to a wider than expected size distribution (from ~35 to ~300 nm in diameter). This is likely owing to nonvolatile contaminants from larger droplets during aerosolization and droplet evaporation. The results suggest that the concentration of nonvolatile contaminants and the ratio between the volumes of the initial droplet and the sample particles are critical in such studies. The maximum beam intensity in the focus was found to be 1.9 × 10¹² photons per µm² per pulse. The full width of the focus at half maximum was estimated to be 500 nm (assuming 20% beamline transmission), and this width is larger than expected. Under these conditions, the diffraction signal from a sample-sized particle remained above the average background to a resolution of 4.25 nm. The results suggest that reducing the size of the initial droplets during aerosolization is necessary to bring small particles into the scope of detailed structural studies with X-ray lasers.
21.
22.
  • Daurer, Benedikt J., et al. (author)
  • Ptychographic wavefront characterization for single-particle imaging at x-ray lasers
  • 2021
  • In: Optica. - Optical Society of America. - 2334-2536. ; 8:4, pp. 551-562
  • Journal article (peer-reviewed); abstract:
    • A well-characterized wavefront is important for many x-ray free-electron laser (XFEL) experiments, especially for single-particle imaging (SPI), where individual biomolecules randomly sample a nanometer region of highly focused femtosecond pulses. We demonstrate high-resolution multiple-plane wavefront imaging of an ensemble of XFEL pulses, focused by Kirkpatrick–Baez mirrors, based on mixed-state ptychography, an approach letting us infer and reduce experimental sources of instability. From the recovered wavefront profiles, we show that while local photon fluence correction is crucial and possible for SPI, a small diversity of phase tilts likely has no impact. Our detailed characterization will aid interpretation of data from past and future SPI experiments and provides a basis for further improvements to experimental design and reconstruction algorithms.
23.
  • Daurer, Benedikt J, et al. (author)
  • Wavefront sensing of individual XFEL pulses using ptychography
  • Other publication (other academic/artistic); abstract:
    • The characterization of the wavefront dynamics is important for many X-ray free-electron laser (XFEL) experiments, in particular for coherent diffractive imaging (CDI), as the reconstructed image is always the product of the incoming wavefront with the object. An accurate understanding of the wavefront is also important for any experiment wishing to achieve peak power densities, making use of the tightest possible focal spots. With the use of ptychography we demonstrate high-resolution imaging of the Linac Coherent Light Source (LCLS) beam focused at the endstation for Atomic, Molecular and Optical (AMO) experiments, including its phase and intensity at every plane along its propagation axis, for each individual pulse. Using a mixed-state approach, we have reconstructed the most dominant beam components that constitute an ensemble of pulses, and from the reconstructed components determined their respective contribution in each of the individual pulses. This enabled us to obtain complete wavefront information about each individual pulse. We hope that our findings aid interpretation of data from past and future LCLS experiments and we propose this method to be used routinely for XFEL beam diagnostics. 
24.
  • Ekeberg, Tomas, 1983-, et al. (author)
  • Observation of a single protein by ultrafast X-ray diffraction
  • 2024
  • In: Light. - Springer Nature. - 2095-5545, 2047-7538. ; 13:1
  • Journal article (peer-reviewed); abstract:
    • The idea of using ultrashort X-ray pulses to obtain images of single proteins frozen in time has fascinated and inspired many. It was one of the arguments for building X-ray free-electron lasers. According to theory, the extremely intense pulses provide sufficient signal to dispense with using crystals as an amplifier, and the ultrashort pulse duration permits capturing the diffraction data before the sample inevitably explodes. This was first demonstrated on biological samples a decade ago on the giant mimivirus. Since then, a large collaboration has been pushing the limit of the smallest sample that can be imaged. The ability to capture snapshots on the timescale of atomic vibrations, while keeping the sample at room temperature, may allow probing the entire conformational phase space of macromolecules. Here we show the first observation of an X-ray diffraction pattern from a single protein, that of Escherichia coli GroEL, which at 14 nm in diameter is the smallest biological sample ever imaged by X-rays, and demonstrate that the concept of diffraction before destruction extends to single proteins. From the pattern, it is possible to determine the approximate orientation of the protein. Our experiment demonstrates the feasibility of ultrafast imaging of single proteins, opening the way to single-molecule time-resolved studies on the femtosecond timescale.
25.
26.
  • Gorkhover, Tais, et al. (author)
  • Femtosecond X-ray Fourier holography imaging of free-flying nanoparticles
  • 2018
  • In: Nature Photonics. - Springer Science and Business Media LLC. - 1749-4885, 1749-4893. ; 12:3, pp. 150-153
  • Journal article (peer-reviewed); abstract:
    • Ultrafast X-ray imaging on individual fragile specimens such as aerosols [1], metastable particles [2], superfluid quantum systems [3] and live biospecimens [4] provides high-resolution information that is inaccessible with conventional imaging techniques. Coherent X-ray diffractive imaging, however, suffers from intrinsic loss of phase, and therefore structure recovery is often complicated and not always uniquely defined [4,5]. Here, we introduce the method of in-flight holography, where we use nanoclusters as reference X-ray scatterers to encode relative phase information into diffraction patterns of a virus. The resulting hologram contains an unambiguous three-dimensional map of a virus and two nanoclusters with the highest lateral resolution so far achieved via single-shot X-ray holography. Our approach unlocks the benefits of holography for ultrafast X-ray imaging of nanoscale, non-periodic systems and paves the way to direct observation of complex electron dynamics down to the attosecond timescale.
27.
28.
29.
30.
  • Kurta, Ruslan P., et al. (author)
  • Correlations in Scattered X-Ray Laser Pulses Reveal Nanoscale Structural Features of Viruses
  • 2017
  • In: Physical Review Letters. - American Physical Society. - 0031-9007, 1079-7114. ; 119:15
  • Journal article (peer-reviewed); abstract:
    • We use extremely bright and ultrashort pulses from an x-ray free-electron laser (XFEL) to measure correlations in x rays scattered from individual bioparticles. This allows us to go beyond the traditional crystallography and single-particle imaging approaches for structure investigations. We employ angular correlations to recover the three-dimensional (3D) structure of nanoscale viruses from x-ray diffraction data measured at the Linac Coherent Light Source. Correlations provide us with a comprehensive structural fingerprint of a 3D virus, which we use both for model-based and ab initio structure recovery. The analyses reveal a clear indication that the structure of the viruses deviates from the expected perfect icosahedral symmetry. Our results anticipate exciting opportunities for XFEL studies of the structure and dynamics of nanoscale objects by means of angular correlations.
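The angular correlations of entry 30 are, at their core, circular autocorrelations of the intensity around rings of constant |q|, averaged over many diffraction patterns. A minimal sketch using the Wiener-Khinchin FFT identity, with fake photon counts standing in for measured rings (not the paper's actual pipeline):

import numpy as np

def angular_correlation(ring_intensity):
    """ring_intensity: (n_patterns, n_phi) intensities on one q-ring.
    Returns C(delta_phi) averaged over patterns: the circular
    autocorrelation is the inverse FFT of the power spectrum."""
    I = ring_intensity - ring_intensity.mean(axis=1, keepdims=True)
    power = np.abs(np.fft.fft(I, axis=1)) ** 2
    corr = np.fft.ifft(power, axis=1).real / I.shape[1]
    return corr.mean(axis=0)

rings = np.random.poisson(5.0, size=(1000, 360)).astype(float)
c = angular_correlation(rings)   # for icosahedral particles, peaks would appear
print(c[:5])                     # at symmetry-related values of delta_phi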
31.
  • Li, Haoyuan, et al. (author)
  • Diffraction data from aerosolized Coliphage PR772 virus particles imaged with the Linac Coherent Light Source
  • 2020
  • In: Scientific Data. - Nature Research. - 2052-4463. ; 7:1
  • Journal article (peer-reviewed); abstract:
    • Single Particle Imaging (SPI) with intense coherent X-ray pulses from X-ray free-electron lasers (XFELs) has the potential to produce molecular structures without the need for crystallization or freezing. Here we present a dataset of 285,944 diffraction patterns from aerosolized Coliphage PR772 virus particles injected into the femtosecond X-ray pulses of the Linac Coherent Light Source (LCLS). Additional exposures with background information are also deposited. The diffraction data were collected at the Atomic, Molecular and Optical Science Instrument (AMO) of the LCLS in four experimental beam times over a period of four years. The photon energy was either 1.2 or 1.7 keV and the pulse energy was between 2 and 4 mJ in a focal spot of about 1.3 µm × 1.7 µm full width at half maximum (FWHM). The X-ray laser pulses captured the particles in random orientations. The data offer insight into aerosolised virus particles in the gas phase, contain information relevant to improving experimental parameters, and provide a basis for developing algorithms for image analysis and reconstruction.
32.
33.
34.
  • Lundholm, Ida V., et al. (author)
  • Considerations for three-dimensional image reconstruction from experimental data in coherent diffractive imaging
  • 2018
  • In: IUCrJ. - International Union of Crystallography. - 2052-2525. ; 5, pp. 531-541
  • Journal article (peer-reviewed); abstract:
    • Diffraction before destruction using X-ray free-electron lasers (XFELs) has the potential to determine radiation-damage-free structures without the need for crystallization. This article presents the three-dimensional reconstruction of the Melbournevirus from single-particle X-ray diffraction patterns collected at the LINAC Coherent Light Source (LCLS) as well as reconstructions from simulated data exploring the consequences of different kinds of experimental sources of noise. The reconstruction from experimental data suffers from a strong artifact in the center of the particle. This could be reproduced with simulated data by adding experimental background to the diffraction patterns. In those simulations, the relative density of the artifact increases linearly with background strength. This suggests that the artifact originates from the Fourier transform of the relatively flat background, concentrating all power in a central feature of limited extent. We support these findings by significantly reducing the artifact through background removal before the phase-retrieval step. Large amounts of blurring in the diffraction patterns were also found to introduce diffuse artifacts, which could easily be mistaken as biologically relevant features. Other sources of noise such as sample heterogeneity and variation of pulse energy did not significantly degrade the quality of the reconstructions. Larger data volumes, made possible by the recent inauguration of high repetition-rate XFELs, allow for increased signal-to-background ratio and provide a way to minimize these artifacts. The anticipated development of three-dimensional Fourier-volume-assembly algorithms which are background aware is an alternative and complementary solution, which maximizes the use of data.
35.
36.
  • Mahjani, Behrang, 1981- (author)
  • Methods from Statistical Computing for Genetic Analysis of Complex Traits
  • 2016
  • Doctoral thesis (other academic/artistic); abstract:
    • The goal of this thesis is to explore, improve and implement some advanced modern computational methods in statistics, focusing on applications in genetics. The thesis has three major directions. First, we study likelihoods for genetic analysis of experimental populations. Here, maximizing the likelihood can be viewed as a computational global optimization problem. We introduce a faster optimization algorithm called PruneDIRECT, and explain how it can be parallelized for permutation testing using the Map-Reduce framework. We have implemented PruneDIRECT as an open source R package, and also as Software as a Service for cloud infrastructures (QTLaaS). The second part of the thesis focuses on using sparse matrix methods for solving linear mixed models with large correlation matrices. For populations with known pedigrees, we show that the inverse of the covariance matrix is sparse. We describe how to use this sparsity to develop a new method to maximize the likelihood and calculate the variance components. In the final part of the thesis we study computational challenges of psychiatric genetics, using only pedigree information. The aim is to investigate the existence of maternal effects in obsessive-compulsive behavior. We add the maternal effects to the linear mixed model used in the second part of this thesis, and we describe the computational challenges of working with binary traits.
37.
38.
  • Munke, Anna, et al. (author)
  • Data Descriptor : Coherent diffraction of single Rice Dwarf virus particles using hard X-rays at the Linac Coherent Light Source
  • 2016
  • In: Scientific Data. - Nature Publishing Group. - 2052-4463. ; 3
  • Journal article (peer-reviewed); abstract:
    • Single particle diffractive imaging data from Rice Dwarf Virus (RDV) were recorded using the Coherent X-ray Imaging (CXI) instrument at the Linac Coherent Light Source (LCLS). RDV was chosen as it is a well-characterized model system, useful for proof-of-principle experiments, system optimization and algorithm development. RDV, an icosahedral virus of about 70 nm in diameter, was aerosolized and injected into the approximately 0.1 µm diameter focused hard X-ray beam at the CXI instrument of LCLS. Diffraction patterns from RDV with signal to 5.9 Å were recorded. The diffraction data are available through the Coherent X-ray Imaging Data Bank (CXIDB) as a resource for algorithm development, the contents of which are described here.
39.
  • Nettelblad, Carl, et al. (author)
  • Assessing orthogonality and statistical properties of linear regression methods for interval mapping with partial information
  • 2010
  • Report (other academic/artistic); abstract:
    • Background: Mapping quantitative trait loci (QTL) has become a widely used tool in genetic research. In such experiments, it is desirable to obtain orthogonal estimates of genetic effects, for reasons concerning both the biological meaning of the estimated locations and effects and the clarity and robustness of the statistical analysis. The currently used statistical methods, however, are not optimized for orthogonality, especially in cases involving interval mapping between markers and/or incomplete datasets. This is an adverse limitation for the application of such methods to QTL scans involving model selection over putative complex gene networks. Results: We describe how deviations from orthogonality arise in currently used methods. We demonstrate one option for obtaining orthogonal estimates of genetic effects using multiple imputations per individual in an otherwise unchanged regression context. Our proposed IRIM method avoids inflated values for explainable variance and genetic effect variables, while showing a clear preference for marker locations in a fine-mapping context. Despite possible shortcomings, similar results to linear regression are demonstrated for our proposed approach (IRIM) in an experimental dataset. Conclusions: Imputation-based methods can be used to enhance the statistical dissectability of effects, as well as computational performance. We exemplify how Haley-Knott regression not only distorts the explainable variance, but also point out how the estimated phenotype values between classes, and the resulting effects, become dependent. This illustrates the need for a more radical departure in the chosen approach in order to achieve orthogonality.
40.
  • Nettelblad, Carl (author)
  • Breakdown of methods for phasing and imputation in the presence of double genotype sharing
  • 2012
  • Report (other academic/artistic); abstract:
    • In genome-wide association studies, results have been improved through imputation of a denser marker set based on reference haplotypes, and through phasing of the genotype data. To better handle very large sets of reference haplotypes, pre-phasing with only study individuals has been suggested. We present a possible problem which is aggravated when pre-phasing strategies are used, and suggest a modification avoiding these issues, with application to the MaCH tool. We evaluate the effectiveness of our remedy on a subset of HapMap data, comparing the original version of MaCH and our modified approach. Improvements are demonstrated on the original data (phase switch error rate decreasing by 10%), but the differences are more pronounced in cases where the data are augmented to represent the presence of closely related individuals, especially when siblings are present (30% reduction in switch error rate in the presence of children, 47% reduction in the presence of siblings). When introducing siblings, the switch error rate in results from the unmodified version of MaCH increases significantly compared to the original data. The main conclusion of this investigation is that existing statistical methods for phasing and imputation of unrelated individuals might give subpar-quality results if a subset of the study individuals are nonetheless related. As the populations collected for general genome-wide association studies grow in size, including relatives might become more common. If a general GWAS framework for unrelated individuals were employed on datasets where sub-populations originally collected as familial case-control sets are included, caution should also be taken regarding the quality of haplotypes. Our modification to MaCH is available on request and straightforward to implement. We hope that this mode, if found to be of use, could be integrated as an option in future standard distributions of MaCH.
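The phase switch error rate quoted in entry 40 counts how often the relative phase of consecutive heterozygous sites flips between the true and the inferred haplotypes. A minimal toy implementation (my own, not the MaCH evaluation code):

import numpy as np

def switch_error_rate(true_h1, inferred_h1, genotypes):
    """Haplotypes as 0/1 allele arrays; phase is only defined at
    heterozygous sites (genotype coded 1 on the 0/1/2 scale)."""
    het = np.asarray(genotypes) == 1
    agree = np.asarray(true_h1)[het] == np.asarray(inferred_h1)[het]
    switches = np.sum(agree[1:] != agree[:-1])   # each flip is one switch
    return switches / max(agree.size - 1, 1)

genotypes = [1, 1, 2, 1, 1, 0, 1]
true_h1   = [0, 1, 1, 0, 1, 0, 0]
infer_h1  = [0, 1, 1, 1, 0, 0, 1]   # phase flips once among the het sites
print(switch_error_rate(true_h1, infer_h1, genotypes))   # 0.25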
41.
42.
43.
44.
45.
46.
  • Nettelblad, Carl (author)
  • Inferring haplotypes and parental genotypes in larger full sib-ships and other pedigrees with missing or erroneous genotype data
  • 2012
  • Report (other academic/artistic); abstract:
    • Background: In many contexts, pedigrees for individuals are known even though not all individuals have been fully genotyped. In one extreme case, the genotypes for a set of full siblings are known, with no knowledge of parental genotypes. We propose a method for inferring phased haplotypes and genotypes for all individuals, even those with missing data, in such pedigrees, allowing a multitude of classic and recent methods for linkage and genome analysis to be used more efficiently.Results: By artificially removing the founder generation genotype data from a well-studied simulated dataset, the quality of reconstructed genotypes in that generation can be verified. For the full structure of repeated matings with 15 offspring per mating, 10 dams per sire, 99.89% of all founder markers were phased correctly, given only the unphased genotypes for offspring. The accuracy was reduced only slightly, to 99.51%, when introducing a 2% error rate in offspring genotypes. When reduced to only 5 full-sib offspring in a single sire-dam mating, the corresponding percentage is 92.62%, which compares favorably with 89.28% from the leading Merlin package. Furthermore, Merlin is unable to handle more than approximately 10 sibs, as the number of states tracked rises exponentially with family size, while our approach has no such limit and handles 150 half-sibs with ease in our experiments.Conclusions: Our method is able to reconstruct genotypes for parents when genotype data is only available for offspring individuals, as well as haplotypes for all individuals. Compared to the Merlin package, we can handle larger pedigrees and produce superior results, mainly due to the fact that Merlin uses the Viterbi algorithm on the state space to infer the genotype sequence. Tracking of haplotype and allele origin can be used in any application where the marker set does not directly influence genotype variation influencing traits. Inference of genotypes can also reduce the effects of genotyping errors and missing data. The cnF2freq codebase implementing our approach is available under a BSD-style license.
47.
48.
  • Nettelblad, Carl, et al. (author)
  • Stochastically Guaranteed Global Optimums Achievable with a Divide-and-Conquer Approach to Multidimensional QTL Searches
  • 2010
  • Report (other academic/artistic); abstract:
    • The problem of searching for multiple quantitative trait loci (QTL) in an experimental cross population of considerable size poses a significant challenge, if general interactions are to be considered. Different global optimization approaches have been suggested, but without an analysis of the mathematical properties of the objective function, it is hard to devise reasonable criteria for when the optimum found in a search is truly global.We reformulate the standard residual sum of squares objective function for QTL analysis by a simple transformation, and show that the transformed function will be Lipschitz continuous in an infinite-size population, with a well-defined Lipschitz constant. We discuss the different deviations possible in an experimental finite-size population, suggesting a simple bound for the minimum value found in the vicinity of any point in the model space.Using this bound, we modify the DIRECT optimization algorithm to exclude regions where the optimum cannot be found according to the bound. This makes the algorithm more attractive than previously realized, since optimality is now in practice guaranteed. The consequences are realized in permutation testing, used to determine the significance of QTL results. DIRECT previously failed in attaining the correct thresholds. In addition, the knowledge of a candidate QTL for which significance is tested allows spectacular increases in permutation test performance, as most searches can be abandoned at an early stage.
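The bound of entry 48 can be pictured in one dimension: if the transformed objective satisfies |f(x) - f(y)| <= L|x - y|, then a sampled value f(x) excludes a whole neighborhood of x from containing anything better than the current optimum, which is what makes DIRECT-style exclusion safe. The objective, grid and constant below are illustrative assumptions, not the paper's QTL model.

import numpy as np

def lipschitz_prune(f, xs, L, best_so_far):
    """Keep only grid points whose cell may still hold a value below
    best_so_far: within distance d of x, f is at least f(x) - L * d."""
    spacing = xs[1] - xs[0]
    radius = (np.array([f(x) for x in xs]) - best_so_far) / L
    return xs[radius < spacing]   # cells with radius >= spacing are excluded

f = lambda x: np.sin(3 * x) + 0.5 * x        # toy stand-in for the RSS objective
xs = np.linspace(0.0, 4.0, 41)
kept = lipschitz_prune(f, xs, L=3.5, best_so_far=f(xs).min())
print(len(kept), "of", len(xs), "grid points still need refinement")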
49.
  • Nettelblad, Carl, 1985- (author)
  • Two Optimization Problems in Genetics : Multi-dimensional QTL Analysis and Haplotype Inference
  • 2012
  • Doctoral thesis (other academic/artistic); abstract:
    • The existence of new technologies, implemented in efficient platforms and workflows, has made massive genotyping available to all fields of biology and medicine. Genetic analyses are no longer dominated by experimental work in laboratories, but rather by the interpretation of the resulting data. When billions of data points representing thousands of individuals are available, efficient computational tools are required. The focus of this thesis is on developing models, methods and implementations for such tools. The first theme of the thesis is multi-dimensional scans for quantitative trait loci (QTL) in experimental crosses. By mating individuals from different lines, it is possible to gather data that can be used to pinpoint the genetic variation that influences specific traits to specific genome loci. However, it is natural to expect multiple genes influencing a single trait to interact. The thesis discusses model structure and model selection, giving new insight regarding under what conditions orthogonal models can be devised. The thesis also presents a new optimization method for efficiently and accurately locating QTL, and for performing the permuted-data searches needed for significance testing. This method has been implemented in a software package that can seamlessly perform the searches on grid computing infrastructures. The other theme in the thesis is the development of adapted optimization schemes for using hidden Markov models in tracing allele inheritance pathways, and specifically inferring haplotypes. The advances presented form the basis for more accurate and non-biased line origin probabilities in experimental crosses, especially multi-generational ones. We show that the new tools are able to reconstruct haplotypes and even genotypes in founder individuals and offspring alike, based on only unordered offspring genotypes. The tools can also handle larger populations than competing methods, resolving inheritance pathways and phase in much larger and more complex populations. Finally, the methods presented are also applicable to datasets where individual relationships are not known, which is frequently the case in human genetics studies. One immediate application for this would be improved accuracy for imputation of SNP markers within genome-wide association studies (GWAS).
50.
  • Nettelblad, Carl (author)
  • Using Markov models and a stochastic Lipschitz condition for genetic analyses
  • 2010
  • Licentiate thesis (other academic/artistic); abstract:
    • A proper understanding of biological processes requires an understanding of genetics and evolutionary mechanisms. The vast amounts of genetic information that can routinely be extracted with modern technology have so far not been accompanied by an equally extended understanding of the corresponding processes. The relationship between a single gene and the resulting properties, the phenotype, of an individual is rarely clear. This thesis addresses several computational challenges regarding identifying and assessing the effects of quantitative trait loci (QTL), genomic positions where variation affects a trait. The genetic information available for each individual is rarely complete, meaning that the unknown variable of the genotype in the modelled loci also needs to be addressed. This thesis presents new tools for employing the information that is available in a way that maximizes the information used, by using hidden Markov models (HMMs), resulting in a change in algorithm runtime complexity from exponential to log-linear in terms of the number of markers. It also proposes the introduction of inferred haplotypes to further increase the power to assess these unknown variables for pedigrees of related, genetically diverse individuals. The consequences of modelling with partial genetic information are also treated. Furthermore, genes do not affect traits directly, but are rather expressed in the environment of, and in concordance with, other genes. Therefore, significant interactions can be expected within genes, where some combination of genetic variation gives a pronounced, or even opposite, effect compared to when occurring separately. This thesis addresses how to perform efficient scans for multiple interacting loci, as well as how to derive highly accurate empirical significance tests in these settings. This is done by analyzing the mathematical properties of the objective function describing the quality of model fits, and reformulating it through a simple transformation. Combined with the presented prototype of a problem-solving environment, these developments can make multi-dimensional searches for QTL routine, allowing the pursuit of new biological insight.
Type of publication
journal article (35)
doctoral thesis (7)
report (6)
conference paper (5)
licentiate thesis (4)
other publication (3)
Type of content
peer-reviewed (39)
other academic/artistic (21)
Author/editor
Nettelblad, Carl (40)
Nettelblad, Carl, 19 ... (14)
Maia, Filipe R. N. C ... (12)
Svenda, Martin (11)
Daurer, Benedikt J. (11)
Bielecki, Johan (10)
Hajdu, Janos (9)
Pietrini, Alberto (9)
Ausmees, Kristiina (8)
Sellberg, Jonas A. (8)
Hantke, Max F. (8)
van der Schot, Gijs (8)
Timneanu, Nicusor (7)
Barty, Anton (7)
Larsson, Daniel S. D ... (7)
Westphal, Daniel (7)
Ekeberg, Tomas (6)
Holmgren, Sverker (6)
Loh, N. Duane (6)
Reddy, Hemanth K. N. (6)
Aquila, Andrew (5)
Mancuso, Adrian P. (5)
Vartanyants, Ivan A. (5)
Hasse, Dirk (5)
Munke, Anna (5)
Okamoto, Kenta (5)
Carlborg, Örjan (4)
Hart, Philip (4)
Seibert, M Marvin (4)
Hantke, Max (4)
Chapman, Henry N. (4)
Andreasson, Jakob, 1 ... (4)
Seibert, Marvin (4)
Kirian, Richard A. (4)
Bostedt, Christoph (4)
Rose, Max (4)
DeMirci, Hasan (4)
Yoon, Chun Hong (4)
Ayyer, Kartik (4)
Toor, Salman (3)
Sierra, Raymond G. (3)
Williams, Garth J. (3)
Ulmer, Anatoli (3)
Andersson, Inger (3)
Fromme, Petra (3)
Kurta, Ruslan P. (3)
Schwander, Peter (3)
Xavier, P. Lourdu (3)
Hogue, Brenda G. (3)
Awel, Salah (3)
Institution
Uppsala universitet (60)
Kungliga Tekniska Högskolan (10)
Chalmers tekniska högskola (4)
Sveriges Lantbruksuniversitet (3)
Umeå universitet (1)
Language
English (60)
Research subject (UKÄ/SCB)
Natural sciences (59)
Agricultural sciences (2)
