SwePub
Search the SwePub database

  Advanced search

Result list for search "WFRF:(Wählby Carolina professor 1974 )"

Search: WFRF:(Wählby Carolina professor 1974 )

  • Results 1-50 of 58
1.
  • Gupta, Ankit (author)
  • Adapting Deep Learning for Microscopy: Interaction, Application, and Validation
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • Microscopy is an integral technique in biology to study the fundamental components of life visually. Digital microscopy and automation have enabled biologists to conduct faster and larger-scale experiments with a sharp increase in the data generated. Microscopy images contain rich but sparse information, as typically, only small regions in the images are relevant for further study. Image analysis is a crucial tool for biologists in the objective interpretation and extraction of quantitative measurements from microscopy data. Recently, deep learning techniques have shown superior performance in various image analysis tasks. The models learn feature representations from the data by optimizing for a task. However, the techniques require a significant amount of annotated data to perform well. Domain experts are required to annotate microscopy data, making it expensive and time-consuming. The models offer no insight into their prediction, and the learned features are not directly interpretable. This poses challenges to the reliable utilization of the technique in high-trust applications such as drug discovery or disease detection. High data variability in microscopy and poor generalization performance of deep learning models further increase the difficulty in general usage of the technique. The work in this thesis presents frameworks and methods to solve the practical challenges of applying deep learning in microscopy. Application-specific evaluation approaches are presented to validate these methods, aiming to increase trust in the system. The major contributions of this work are as follows. Papers I and III present human-in-the-loop frameworks for quick adaptation of deep learning to new data and for improving models' performance based on human input in visual explanations provided by the model, respectively. Paper II proposes a template-matching approach to improve user interactions in the framework proposed in Paper I. Papers III and IV present architectural modifications in the deep learning models proposed for better visual explanation and image-to-image translation, respectively. Papers IV and V present biologically relevant evaluations of approaches, i.e., analysis of the deep learning models in relation to the biological task. This thesis is aimed towards better utilization and adaptation of deep learning methods and techniques to microscopy data. We show that the annotation burden for the user can be significantly reduced by intuitive annotation frameworks and using contemporary deep-learning paradigms. We further propose architectural modifications in the models to adapt to the requirements and demonstrate the utility of application-specific analysis in microscopy.
  •  
2.
  • Harrison, Philip John, 1977- (author)
  • Deep learning approaches for image cytometry: assessing cellular morphological responses to drug perturbations
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • Image cytometry is the analysis of cell properties from microscopy image data and is used ubiquitously in basic cell biology, medical diagnosis and drug development. In recent years deep learning has shown impressive results for many image cytometry tasks, including image processing, segmentation, classification and detection. Deep learning enables a more data-driven and end-to-end approach than was previously possible with conventional methods. This thesis investigates deep learning-based approaches for assessing cellular morphological responses to drug perturbations. In paper I we demonstrated the benefit of combining convolutional neural networks and transfer learning for predicting mechanism of action and nucleus translocation. In paper II we showed, using convolutional and recurrent neural networks applied to time-lapse microscopy data, that it is possible to predict if mRNA delivery via nanoparticles has been effective based on cell morphology changes at time points prior to the protein production evidence of successful delivery. In paper III we used convolutional neural networks, adversarial training and privileged information to faithfully generate fluorescence imaging channels of adipocyte cells from their corresponding z-stack of brightfield images. Our models were both faithful at the fluorescence image level and at the level of the features extracted from these images, features that are commonly used for downstream analysis, including the design of effective drug therapies. In paper IV we showed that convolutional neural networks trained on brightfield image data provide similar, and in some cases superior, performance to models trained on fluorescence image data for predicting mechanism of action, due to the brightfield images possessing additional information not available in the fluorescence images. In paper V we applied deep learning models to brightfield time-lapse image data to explore the evolution of cellular morphological changes after drug administration for a diverse set of compounds, compounds that are often used as positive controls in image-based assays.
  •  
3.
  • Partel, Gabriele, 1988- (author)
  • Image and Data Analysis for Spatially Resolved Transcriptomics : Decrypting fine-scale spatial heterogeneity of tissue's molecular architecture
  • 2020
  • Doctoral thesis (other academic/artistic) abstract
    • Our understanding of the biological complexity in multicellular organisms has progressed at tremendous pace in the last century and even more in the last decades with the advent of sequencing technologies that make it possible to interrogate the genome and transcriptome of individual cells. It is now possible to even spatially profile the transcriptomic landscape of tissue architectures to study the molecular organization of tissue heterogeneity at subcellular resolution. Newly developed spatially resolved transcriptomic techniques are producing large amounts of high-dimensional image data with increasing throughput that need to be processed and analysed for extracting biologically relevant information that has the potential to lead to new knowledge and discoveries. The work included in this thesis aims to provide image and data analysis tools that serve this newly developing field of spatially resolved transcriptomics. First, an image analysis workflow is presented for processing and analysing images acquired with in situ sequencing protocols, aiming to extract and decode molecular features that map the spatial transcriptomic landscape in tissue sections. This thesis also presents computational methods to explore and analyse the decoded spatial gene expression for studying the spatial molecular heterogeneity of tissue architectures at different scales. In one case, it is demonstrated how dimensionality reduction and clustering of the decoded gene expression spatial profiles can be exploited and used to identify reproducible spatial compartments corresponding to known anatomical regions across mouse brain sections from different individuals. Lastly, this thesis presents an unsupervised computational method that leverages advanced deep learning techniques on graphs to model the spatial gene expression at cellular and subcellular resolution. It provides a low-dimensional representation of spatial organization and interaction, finding functional units that in many cases correspond to different cell types in the local tissue environment, without the need for cell segmentation.
  •  
4.
  • Wieslander, Håkan (author)
  • Application, Optimisation and Evaluation of Deep Learning for Biomedical Imaging
  • 2022
  • Doctoral thesis (other academic/artistic) abstract
    • Microscopy imaging is a powerful technique when studying biology at a cellular and sub-cellular level. When combined with digital image analysis it creates an invaluable tool for investigating complex biological processes and phenomena. However, imaging at the cell and sub-cellular level tends to generate large amounts of data which can be difficult to analyse, navigate and store. Despite these difficulties, large data volumes mean more information content which is beneficial for computational methods like machine learning, especially deep learning. The union of microscopy imaging and deep learning thus provides numerous opportunities for advancing our scientific understanding and uncovering interesting and useful biological insights. The work in this thesis explores various means for optimising information extraction from microscopy data utilising image analysis with deep learning. The focus is on three different imaging modalities: bright-field, fluorescence, and transmission electron microscopy. Within these modalities different learning-based image analysis and processing techniques are explored, ranging from image classification and detection to image restoration and translation. The main contributions are: (i) a computational method for diagnosing oral and cervical cancer based on smear samples and bright-field microscopy; (ii) a hierarchical analysis of whole-slide tissue images from fluorescence microscopy, introducing a confidence-based measure for pixel classifications; (iii) an image restoration model for motion-degraded images from transmission electron microscopy with an evaluation of model overfitting on underlying textures; and (iv) an image-to-image translation (virtual staining) of cell images from bright-field to fluorescence microscopy, optimised for biological feature relevance. A common theme underlying all the investigations in this thesis is that the evaluation of the methods used is in relation to the biological question at hand.
  •  
5.
  • Andersson, Axel (author)
  • Computational Methods for Image-Based Spatial Transcriptomics
  • 2024
  • Doctoral thesis (other academic/artistic) abstract
    • Why does cancer develop, spread, grow, and lead to mortality? To answer these questions, one must study the fundamental building blocks of all living organisms — cells. Like a well-calibrated manufacturing unit, cells follow precise instructions by gene expression to initiate the synthesis of proteins, the workforces that drive all living biochemical processes. Recently, researchers have developed techniques for imaging the expression of hundreds of unique genes within tissue samples. This information is extremely valuable for understanding the cellular activities behind cancer-related diseases. These methods, collectively known as image-based spatial transcriptomics (IST) techniques, use fluorescence microscopy to combinatorially label mRNA species (corresponding to expressed genes) in tissue samples. Here, automatic image analysis is required to locate fluorescence signals and decode the combinatorial code. This process results in large quantities of points, marking the location of expressed genes. These new data formats pose several challenges regarding visualization and automated analysis. This thesis presents several computational methods and applications related to data generated from IST methods. Key contributions include: (i) a decoding method that jointly optimizes the detection and decoding of signals, particularly beneficial in scenarios with low signal-to-noise ratios or densely packed signals; (ii) a computational method for automatically delineating regions with similar gene compositions — efficient, interactive, and scalable for exploring patterns across different scales; (iii) a software tool (TissUUmaps) enabling interactive visualization of millions of gene markers atop terapixel-sized images; (iv) a tool utilizing signed-graph partitioning for the automatic identification of cells, independent of the complementary nuclear stain; (v) a fast and analytical expression for a score that quantifies co-localization between spatial points (such as located genes); (vi) a demonstration that gene expression markers can train deep-learning models to classify tissue morphology. In the final contribution (vii), an IST technique features in a clinical study to spatially map the molecular diversity within tumors from patients with colorectal liver metastases, specifically those exhibiting a desmoplastic growth pattern. The study unveils novel molecular patterns characterizing cellular diversity in the transitional region between healthy liver tissue and the tumor. While a direct answer to the initial questions remains elusive, this study sheds light on the growth dynamics of colorectal cancer liver metastases, bringing us closer to understanding the journey from development to mortality in cancer.
  •  
6.
  • Blamey, Ben, et al. (authors)
  • Rapid development of cloud-native intelligent data pipelines for scientific data streams using the HASTE Toolkit
  • 2021
  • In: GigaScience. - : Oxford University Press. - 2047-217X. ; 10:3, pp. 1-14
  • Journal article (peer-reviewed) abstract
    • BACKGROUND: Large streamed datasets, characteristic of life science applications, are often resource-intensive to process, transport and store. We propose a pipeline model, a design pattern for scientific pipelines, where an incoming stream of scientific data is organized into a tiered or ordered "data hierarchy". We introduce the HASTE Toolkit, a proof-of-concept cloud-native software toolkit based on this pipeline model, to partition and prioritize data streams to optimize use of limited computing resources. FINDINGS: In our pipeline model, an "interestingness function" assigns an interestingness score to data objects in the stream, inducing a data hierarchy. From this score, a "policy" guides decisions on how to prioritize computational resource use for a given object. The HASTE Toolkit is a collection of tools to adopt this approach. We evaluate with 2 microscopy imaging case studies. The first is a high content screening experiment, where images are analyzed in an on-premise container cloud to prioritize storage and subsequent computation. The second considers edge processing of images for upload into the public cloud for real-time control of a transmission electron microscope. CONCLUSIONS: Through our evaluation, we created smart data pipelines capable of effective use of storage, compute, and network resources, enabling more efficient data-intensive experiments. We note a beneficial separation between scientific concerns of data priority, and the implementation of this behaviour for different resources in different deployment contexts. The toolkit allows intelligent prioritization to be 'bolted on' to new and existing systems - and is intended for use with a range of technologies in different deployment scenarios.
  •  
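The pipeline model described in the entry above (an "interestingness function" that scores each streamed object and a "policy" that maps the score to a tier in the data hierarchy) can be illustrated with a minimal, self-contained sketch. The function, tier names, and thresholds below are hypothetical and are not part of the actual HASTE Toolkit API.

```python
# Minimal illustration of the pipeline model described above: an
# "interestingness function" scores each streamed data object and a
# "policy" maps that score to a tier in the data hierarchy.
# Names and thresholds are hypothetical, not the HASTE Toolkit API.
from dataclasses import dataclass

import numpy as np


@dataclass
class Tier:
    name: str          # e.g. where/how the object is stored or processed
    min_score: float   # lowest interestingness admitted to this tier


def interestingness(image: np.ndarray) -> float:
    """Toy score: fraction of unusually bright pixels, as a proxy for content."""
    return float((image > image.mean() + 2 * image.std()).mean())


def policy(score: float, tiers: list[Tier]) -> Tier:
    """Assign the object to the highest tier whose threshold it meets."""
    eligible = [t for t in tiers if score >= t.min_score]
    return max(eligible, key=lambda t: t.min_score)


tiers = [
    Tier("discard", 0.00),
    Tier("cold-storage", 0.01),
    Tier("analyse-now", 0.05),
]

rng = np.random.default_rng(0)
for i in range(5):                      # stand-in for a microscopy image stream
    frame = rng.poisson(3.0, (256, 256)).astype(float)
    s = interestingness(frame)
    print(f"frame {i}: score={s:.3f} -> {policy(s, tiers).name}")
```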
7.
  • Gupta, Ankit, et al. (authors)
  • Is brightfield all you need for MoA prediction?
  • 2022
  • Conference paper (peer-reviewed) abstract
    • Fluorescence staining techniques, such as Cell Painting, together with fluorescence microscopy have proven invaluable for visualizing and quantifying the effects that drugs and other perturbations have on cultured cells. However, fluorescence microscopy is expensive, time-consuming, and labor-intensive, and the stains applied can be cytotoxic, interfering with the activity under study. The simplest form of microscopy, brightfield microscopy, lacks these downsides, but the images produced have low contrast and the cellular compartments are difficult to discern. Nevertheless, by harnessing deep learning, these brightfield images may still be sufficient for various predictive purposes. In this study, we compared the predictive performance of models trained on fluorescence images to those trained on brightfield images for predicting the mechanism of action (MoA) of different drugs. We also extracted CellProfiler features from the fluorescence images and used them to benchmark the performance. Overall, we found comparable and correlated predictive performance for the two imaging modalities. This is promising for future studies of MoAs in time-lapse experiments.
  •  
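The study above compares classifiers trained on brightfield versus fluorescence images for mechanism-of-action (MoA) prediction. A minimal sketch of that kind of transfer-learning classifier is shown below; the class count and the random stand-in data are placeholders, and this is not the authors' training code.

```python
# Sketch of a transfer-learning MoA classifier of the kind compared in the
# study above: a pretrained ResNet fine-tuned on brightfield images.
# Data loading is stubbed with random tensors; the class count is a placeholder.
import torch
import torch.nn as nn
from torchvision import models

NUM_MOA_CLASSES = 10                      # placeholder

# Loads ImageNet weights (downloaded on first use) and replaces the classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_MOA_CLASSES)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()


def fake_batches(n_batches=3, batch_size=8):
    """Stand-in for a DataLoader over (brightfield image, MoA label) pairs."""
    for _ in range(n_batches):
        x = torch.randn(batch_size, 3, 224, 224)   # brightfield planes mapped to 3 channels
        y = torch.randint(0, NUM_MOA_CLASSES, (batch_size,))
        yield x, y


model.train()
for x, y in fake_batches():
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"loss: {loss.item():.3f}")
```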
8.
  • Harrison, Philip J., et al. (authors)
  • Deep-learning models for lipid nanoparticle-based drug delivery
  • 2021
  • In: Nanomedicine. - : Future Medicine. - 1743-5889 .- 1748-6963. ; 16:13, pp. 1097-1110
  • Journal article (peer-reviewed) abstract
    • Background: Early prediction of time-lapse microscopy experiments enables intelligent data management and decision-making. Aim: Using time-lapse data of HepG2 cells exposed to lipid nanoparticles loaded with mRNA for expression of GFP, the authors hypothesized that it is possible to predict in advance whether a cell will express GFP. Methods: The first modeling approach used a convolutional neural network extracting per-cell features at early time points. These features were then combined and explored using either a long short-term memory network (approach 2) or time series feature extraction and gradient boosting machines (approach 3). Results: Accounting for the temporal dynamics significantly improved performance. Conclusion: The results highlight the benefit of accounting for temporal dynamics when studying drug delivery using high-content imaging.
  •  
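The second modeling approach in the entry above combines per-cell CNN features from early time points with a recurrent network. The sketch below shows the general shape of such a CNN+LSTM classifier in PyTorch; all layer sizes and the random input are arbitrary placeholders, not the published model.

```python
# Minimal sketch of a CNN+LSTM model of the kind described above: a small CNN
# encodes each early time point of a per-cell image crop, and an LSTM
# aggregates the sequence to predict whether the cell will express GFP.
# All dimensions and the data are placeholders.
import torch
import torch.nn as nn


class CNNLSTMClassifier(nn.Module):
    def __init__(self, feat_dim=64, hidden=32, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(            # per-frame feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)             # last hidden state summarises the sequence
        return self.head(h[-1])


model = CNNLSTMClassifier()
crops = torch.randn(4, 6, 1, 64, 64)             # 4 cells, 6 early time points
print(model(crops).shape)                        # -> torch.Size([4, 2])
```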
9.
  • Harrison, Philip John, et al. (authors)
  • Evaluating the utility of brightfield image data for mechanism of action prediction
  • 2023
  • In: PloS Computational Biology. - : Public Library of Science (PLoS). - 1553-734X .- 1553-7358. ; 19:7
  • Journal article (peer-reviewed) abstract
    • Fluorescence staining techniques, such as Cell Painting, together with fluorescence microscopy have proven invaluable for visualizing and quantifying the effects that drugs and other perturbations have on cultured cells. However, fluorescence microscopy is expensive, time-consuming, labor-intensive, and the stains applied can be cytotoxic, interfering with the activity under study. The simplest form of microscopy, brightfield microscopy, lacks these downsides, but the images produced have low contrast and the cellular compartments are difficult to discern. Nevertheless, by harnessing deep learning, these brightfield images may still be sufficient for various predictive purposes. In this study, we compared the predictive performance of models trained on fluorescence images to those trained on brightfield images for predicting the mechanism of action (MoA) of different drugs. We also extracted CellProfiler features from the fluorescence images and used them to benchmark the performance. Overall, we found comparable and largely correlated predictive performance for the two imaging modalities. This is promising for future studies of MoAs in time-lapse experiments for which using fluorescence images is problematic. Explorations based on explainable AI techniques also provided valuable insights regarding compounds that were better predicted by one modality over the other.
  •  
10.
  • Matuszewski, Damian J., 1988- (author)
  • Image and Data Analysis for Biomedical Quantitative Microscopy
  • 2019
  • Doctoral thesis (other academic/artistic) abstract
    • This thesis presents automatic image and data analysis methods to facilitate and improve microscopy-based research and diagnosis. New technologies and computational tools are necessary for handling the ever-growing amounts of data produced in life science. The thesis presents methods developed in three projects with different biomedical applications. In the first project, we analyzed a large high-content screen aimed at enabling personalized medicine for glioblastoma patients. We focused on capturing drug-induced cell-cycle disruption in fluorescence microscopy images of cancer cell cultures. Our main objectives were to identify drugs affecting the cell-cycle and to increase the understanding of different drugs' mechanisms of action. Here we present tools for automatic cell-cycle analysis and identification of drugs of interest and their effective doses. In the second project, we developed a feature descriptor for image matching. Image matching is a central pre-processing step in many applications. For example, when two or more images must be matched and registered to create a larger field of view or to analyze differences and changes over time. Our descriptor is rotation-, scale-, and illumination-invariant and it has a short feature vector which makes it computationally attractive. The flexibility to combine it with any feature detector and the customization possibility make it a very versatile tool. In the third project, we addressed two general problems for bridging the gap between deep learning method development and their use in practical scenarios. We developed a method for convolutional neural network training using minimally annotated images. In many biomedical applications, the objects of interest cannot be accurately delineated due to their fuzzy shape, ambiguous morphology, image quality, or the expert knowledge and time that annotation requires. The minimal annotations, in this case, consist of center-points or centerlines of target objects of approximately known size. We demonstrated our training method in a challenging application of a multi-class semantic segmentation of viruses in transmission electron microscopy images. We also systematically explored the influence of network architecture hyper-parameters on network size and performance and showed that it is possible to substantially reduce the size of a network without compromising its performance. All methods in this thesis were designed to work with little or no input from biomedical experts but, of course, require fine-tuning for new applications. The usefulness of the tools has been demonstrated by collaborators and other researchers and has inspired further development of related algorithms.
  •  
11.
  • Pereira, Carla, et al. (authors)
  • Comparison of East‐Asia and West‐Europe cohorts explains disparities in survival outcomes and highlights predictive biomarkers of early gastric cancer aggressiveness
  • 2021
  • In: International Journal of Cancer. - : John Wiley & Sons. - 0020-7136 .- 1097-0215. ; 150:5, pp. 868-880
  • Journal article (peer-reviewed) abstract
    • Surgical resection with lymphadenectomy and perioperative chemotherapy is the universal mainstay for curative treatment of gastric cancer (GC) patients with locoregional disease. However, GC survival remains asymmetric in West- and East-world regions. We hypothesize that this asymmetry derives from differential clinical management. Therefore, we collected chemo-naïve GC patients from Portugal and South Korea to explore specific immunophenotypic profiles related to disease aggressiveness and clinicopathological factors potentially explaining associated overall survival (OS) differences. Clinicopathological and survival data were collected from chemo-naïve surgical cohorts from Portugal (West-Europe cohort [WE-C]; n = 170) and South Korea (East-Asia cohort [EA-C]; n = 367) and correlated with immunohistochemical expression profiles of E-cadherin and CD44v6 obtained from consecutive tissue microarray sections. Survival analysis revealed a subset of 12.4% of WE-C patients, whose tumors concomitantly express E-cadherin_abnormal and CD44v6_very high, displaying extremely poor OS, even at TNM stages I and II. These WE-C stage-I and -II patients' tumors were particularly aggressive compared to all others, invading deeper into the gastric wall (P = .032) and more often permeating the vasculature (P = .018) and nerves (P = .009). A similar immunophenotypic profile was found in 11.9% of EA-C patients, but unrelated to survival. Tumours from stage-I and -II EA-C patients that display both biomarkers also permeated more lymphatic vessels (P = .003), promoting lymph node (LN) metastasis (P = .019), being diagnosed on average 8 years earlier and submitted to more extensive LN dissection than WE-C. Concomitant E-cadherin_abnormal/CD44v6_very-high expression predicts aggressiveness and poor survival of stage-I and -II GC submitted to conservative lymphadenectomy.
  •  
12.
  • Pielawski, Nicolas (author)
  • Learning-based prediction, representation, and multimodal registration for bioimage processing
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • Microscopy and imaging are essential to understanding and exploring biology. Modern staining and imaging techniques generate large amounts of data, resulting in the need for automated analysis approaches. Many earlier approaches relied on handcrafted feature extractors, while today's deep-learning-based methods open up new ways to analyze data automatically. Deep learning has become popular in bioimage processing as it can extract high-level features describing image content (Paper III). The work in this thesis explores various aspects and limitations of machine learning and deep learning with applications in biology. Learning-based methods have generalization issues on out-of-distribution data points, and methods such as uncertainty estimation (Paper II) and visual quality control (Paper V) can provide ways to mitigate those issues. Furthermore, deep learning methods often require large amounts of data during training. Here the focus is on optimizing deep learning methods to meet current computational capabilities and handle the increasing volume and size of data (Paper I). Model uncertainty and data augmentation techniques are also explored (Papers II and III). This thesis is split into chapters describing the main components of cell biology, microscopy imaging, and the mathematical and machine-learning theories to give readers an introduction to biomedical image processing. The main contributions of this thesis are deep-learning methods for reconstructing patch-based segmentation (Paper I) and pixel regression of traction force images (Paper II), followed by methods for aligning images from different sensors in a common coordinate system (named multimodal image registration) using representation learning (Paper III) and Bayesian optimization (Paper IV). Finally, the thesis introduces TissUUmaps 3, a tool for visualizing multiplexed spatial transcriptomics data (Paper V). These contributions provide methods and tools detailing how to apply mathematical frameworks and machine-learning theory to biology, giving us concrete tools to improve our understanding of complex biological processes.
  •  
13.
  • Suveer, Amit, 1987- (author)
  • Methods for Processing and Analysis of Biomedical TEM Images
  • 2019
  • Doctoral thesis (other academic/artistic) abstract
    • Transmission Electron Microscopy (TEM) has high resolving capability and high clinical significance; however, the current manual diagnostic procedure using TEM is complicated and time-consuming, requiring rarely available expertise for analyzing TEM images of biological specimens. This thesis addresses the bottlenecks of TEM-based analysis by proposing image analysis methods to automate and improve critical time-consuming steps of currently manual diagnostic procedures. The automation is demonstrated on the computer-assisted diagnosis of Primary Ciliary Dyskinesia (PCD), a genetic condition for which TEM analysis is considered the gold standard. The methods proposed for the automated workflow mimic the manual procedure performed by the pathologists to detect objects of interest – diagnostically relevant cilia instances – followed by a computational step to combine information from multiple detected objects to enhance the important structural details. The workflow includes an approach for efficient search through a sample to identify objects and locate areas with a high density of objects of interest in low-resolution images, to perform high-resolution imaging of the identified areas. Subsequently, high-quality objects in high-resolution images are detected, processed, and the extracted information is combined to enhance structural details. This thesis also addresses the challenges typical for TEM imaging, such as sample drift and deformation, or damage due to high electron dose for long exposure times. Two alternative paths are investigated: (i) different strategies combining short exposure imaging with suitable denoising techniques, including conventional approaches and a proposed deep learning-based method, are explored; (ii) conventional interpolation approaches and a proposed deep learning-based method are analyzed for super-resolution reconstruction using a single image. For both explored directions, in the best-case scenario, the processing time is nearly 20 times shorter than the acquisition time for a single long-exposure, high-illumination image. Moreover, the reconstruction approach (ii) requires nearly 16 times less data (storage space) and overcomes the need for high-resolution image acquisition. Finally, the thesis addresses critical needs to enable objective and reliable evaluation of TEM image denoising approaches. A method for synthesizing realistic noise-free TEM reference images is proposed, and a denoising benchmark dataset is generated and made publicly available. The proposed dataset consists of noise-free references along with masks encompassing the critical diagnostic structures. This enables performance evaluation based on the capability of denoising methods to preserve structural details, instead of merely grading them based on the signal-to-noise ratio improvement and preservation of gross structures.
  •  
14.
  • Wieslander, Håkan, et al. (authors)
  • Deep learning and conformal prediction for hierarchical analysis of large-scale whole-slide tissue images
  • 2021
  • In: IEEE journal of biomedical and health informatics. - : Institute of Electrical and Electronics Engineers (IEEE). - 2168-2194 .- 2168-2208. ; 25:2, pp. 371-380
  • Journal article (peer-reviewed) abstract
    • With the increasing amount of image data collected from biomedical experiments there is an urgent need for smarter and more effective analysis methods. Many scientific questions require analysis of image subregions related to some specific biology. Finding such regions of interest (ROIs) at low resolution and limiting the data subjected to final quantification at high resolution can reduce computational requirements and save time. In this paper we propose a three-step pipeline: First, bounding boxes for ROIs are located at low resolution. Next, ROIs are subjected to semantic segmentation into sub-regions at mid-resolution. We also estimate the confidence of the segmented sub-regions. Finally, quantitative measurements are extracted at high resolution. We use deep learning for the first two steps in the pipeline and conformal prediction for confidence assessment. We show that limiting final quantitative analysis to sub-regions with high confidence reduces noise and increases separability of observed biological effects.
  •  
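The pipeline in the entry above uses conformal prediction to attach confidence to segmented sub-regions. The sketch below shows inductive conformal classification in its basic form (calibration nonconformity scores turned into per-class p-values); the softmax scores are synthetic and this is an illustration of the general technique, not the authors' implementation.

```python
# Illustration of inductive conformal prediction for classification, the
# general technique used above to attach confidence to predictions. A held-out
# calibration set provides nonconformity scores; at test time each class gets
# a p-value and the prediction set keeps classes with p > alpha.
# The softmax scores here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_classes = 500, 3

# Synthetic calibration data: softmax outputs and true labels.
cal_probs = rng.dirichlet(np.ones(n_classes), size=n_cal)
cal_labels = rng.integers(0, n_classes, size=n_cal)

# Nonconformity: one minus the probability assigned to the true class.
cal_scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]


def conformal_set(test_prob, alpha=0.1):
    """Return the set of class indices with conformal p-value > alpha."""
    prediction_set = []
    for c in range(len(test_prob)):
        score_c = 1.0 - test_prob[c]
        # p-value: fraction of calibration scores at least as nonconforming.
        p = (np.sum(cal_scores >= score_c) + 1) / (n_cal + 1)
        if p > alpha:
            prediction_set.append(c)
    return prediction_set


test_prob = np.array([0.7, 0.2, 0.1])
print(conformal_set(test_prob, alpha=0.1))
```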
15.
  • Andersson, Axel, et al. (authors)
  • Cell Segmentation of in situ Transcriptomics Data using Signed Graph Partitioning
  • 2023
  • In: Graph-Based Representations in Pattern Recognition. - Cham : Springer. - 9783031427947 - 9783031427954 ; , pp. 139-148
  • Conference paper (peer-reviewed) abstract
    • The locations of different mRNA molecules can be revealed by multiplexed in situ RNA detection. By assigning detected mRNA molecules to individual cells, it is possible to identify many different cell types in parallel. This in turn enables investigation of the spatial cellular architecture in tissue, which is crucial for furthering our understanding of biological processes and diseases. However, cell typing typically depends on the segmentation of cell nuclei, which is often done based on images of a DNA stain, such as DAPI. Limiting cell definition to a nuclear stain makes it fundamentally difficult to determine accurate cell borders, and thereby also difficult to assign mRNA molecules to the correct cell. As such, we have developed a computational tool that segments cells solely based on the local composition of mRNA molecules. First, a small neural network is trained to compute attractive and repulsive edges between pairs of mRNA molecules. The signed graph is then partitioned by a mutex watershed into components corresponding to different cells. We evaluated our method on two publicly available datasets and compared it against the current state-of-the-art and older baselines. We conclude that combining neural networks with combinatorial optimization is a promising approach for cell segmentation of in situ transcriptomics data. The tool is open-source and publicly available for use at https://github.com/wahlby-lab/IS3G.
  •  
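The entry above describes turning mRNA molecules into nodes of a signed graph (attractive and repulsive edges) and partitioning it with a mutex watershed. The sketch below conveys that idea with a greedy, mutex-constrained union-find over hand-made edge scores; in the published IS3G tool the scores come from a trained neural network, so this is an illustration, not that implementation.

```python
# Sketch of the idea behind the paper above: mRNA points become nodes in a
# signed graph (attractive edges pull molecules into the same cell, repulsive
# edges keep them apart) and a greedy mutex-watershed-style pass partitions
# the graph. The edge scores here are made up.
class MutexUnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.mutex = [set() for _ in range(n)]   # cluster-level exclusion constraints

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def merge(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb or rb in self.mutex[ra]:     # already joined or forbidden
            return
        self.parent[rb] = ra
        self.mutex[ra] |= self.mutex[rb]
        for other in self.mutex[rb]:             # re-point constraints from rb to ra
            self.mutex[other].discard(rb)
            self.mutex[other].add(ra)

    def add_mutex(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.mutex[ra].add(rb)
            self.mutex[rb].add(ra)


def mutex_watershed(n_nodes, edges):
    """edges: list of (i, j, weight); weight > 0 attractive, < 0 repulsive."""
    uf = MutexUnionFind(n_nodes)
    for i, j, w in sorted(edges, key=lambda e: -abs(e[2])):   # strongest evidence first
        if w > 0:
            uf.merge(i, j)
        else:
            uf.add_mutex(i, j)
    return [uf.find(i) for i in range(n_nodes)]


# Two "cells": nodes 0-2 attract each other, 3-4 attract, strong repulsion between.
edges = [(0, 1, 0.9), (1, 2, 0.8), (3, 4, 0.7), (2, 3, -0.95), (0, 4, 0.1)]
print(mutex_watershed(5, edges))   # -> two clusters: [0, 0, 0, 3, 3]
```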
16.
  • Andersson, Axel, et al. (authors)
  • ISTDECO : In Situ Transcriptomics Decoding by Deconvolution
  • Other publication (other academic/artistic) abstract
    • In Situ Transcriptomics (IST) is a set of image-based transcriptomics approaches that enables localisation of gene expression directly in tissue samples. IST techniques produce multiplexed image series in which fluorescent spots are either present or absent across imaging rounds and colour channels. A spot's presence and absence form a type of barcoded pattern that labels a particular type of mRNA. Therefore, the expression of a gene can be determined by localising the fluorescent spots and decoding the barcode that they form. Existing IST algorithms usually do this in two separate steps: spot localisation and barcode decoding. Although these algorithms are efficient, they are limited by strictly separating the localisation and decoding steps. This limitation becomes apparent in regions with low signal-to-noise ratio or high spot densities. We argue that an improved gene expression decoding can be obtained by combining these two steps into a single algorithm. This allows for an efficient decoding that is less sensitive to noise and optical crowding. We present IST Decoding by Deconvolution (ISTDECO), a principled decoding approach combining spectral and spatial deconvolution into a single algorithm. We evaluate ISTDECO on simulated data, as well as on two real IST datasets, and compare with state-of-the-art. ISTDECO achieves state-of-the-art performance despite high spot densities and low signal-to-noise ratios. It is easily implemented and runs efficiently using a GPU. The ISTDECO implementation, datasets and demos are available online at: github.com/axanderssonuu/istdeco
  •  
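To convey the decoding-by-deconvolution idea in the entry above, the toy below explains each pixel's intensities across rounds/channels as a non-negative mixture of codebook barcodes using Richardson-Lucy-style multiplicative updates. It deliberately omits the spatial point-spread function and uses made-up data, so it is a simplification of the concept, not the ISTDECO implementation.

```python
# Toy version of the core idea above: explain the observed intensity of each
# pixel across imaging rounds/channels as a non-negative mixture of codebook
# barcodes, using Richardson-Lucy-style multiplicative updates. Simplified
# (no spatial PSF) and with synthetic data; not the ISTDECO implementation.
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_rounds, n_pixels = 4, 8, 1000

# Binary codebook: which rounds/channels light up for each gene.
codebook = (rng.random((n_genes, n_rounds)) < 0.4).astype(float)
codebook[codebook.sum(axis=1) == 0, 0] = 1.0      # avoid all-zero barcodes

# Simulate ground-truth gene intensities and noisy observations.
x_true = rng.gamma(0.2, 5.0, size=(n_genes, n_pixels))
observed = rng.poisson(codebook.T @ x_true) + 1e-9

# Multiplicative updates for X in  observed ~ codebook.T @ X,  X >= 0.
x = np.ones((n_genes, n_pixels))
for _ in range(200):
    predicted = codebook.T @ x + 1e-9
    x *= (codebook @ (observed / predicted)) / codebook.sum(axis=1, keepdims=True)

# Decode: dominant gene per pixel where total estimated intensity is high.
total = x.sum(axis=0)
calls = np.where(total > 1.0, x.argmax(axis=0), -1)   # -1 = background
print("decoded pixels per gene:", np.bincount(calls[calls >= 0], minlength=n_genes))
```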
17.
  • Andersson, Axel, et al. (authors)
  • Points2Regions : Fast, interactive clustering of imaging-based spatial transcriptomics data
  • Other publication (other academic/artistic) abstract
    • Imaging-based spatial transcriptomics techniques generate image data that, once processed, results in a set of spatial points with categorical labels for different mRNA species. A crucial part of the downstream analysis involves these point patterns. Here, biologically interesting patterns can be explored at different spatial scales. Molecular patterns on a cellular level would correspond to cell types, whereas patterns on a millimeter scale would correspond to tissue-level structures. Often, clustering methods are employed to identify and segment regions with distinct point-patterns. Traditional clustering techniques for such data are constrained by reliance on complementary data or extensive machine learning, limiting their applicability to tasks on a particular scale. This paper introduces 'Points2Regions', a practical tool for clustering spatial points with categorical labels. Its flexible and computationally efficient clustering approach enables pattern discovery across multiple scales, making it a powerful tool for exploratory analysis. Points2Regions has demonstrated efficient performance on various datasets, adeptly defining biologically relevant regions similar to those found by scale-specific methods. As a Python package integrated into TissUUmaps and a Napari plugin, it offers interactive clustering and visualization, significantly enhancing user experience in data exploration. In essence, Points2Regions is a simple, user-friendly tool for exploratory analysis of spatial points with categorical labels.
  •  
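The general recipe behind tools like the one above can be illustrated briefly: bin the labelled points, describe each bin by its gene composition, and cluster the bins into regions. The sketch below does exactly that on synthetic points with scikit-learn; it conveys the idea but is not the Points2Regions code.

```python
# Illustration of the general recipe used by tools like the one above:
# bin labelled points on a grid, describe each bin by its gene composition,
# and cluster the bins into regions. Synthetic data; not the Points2Regions
# implementation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_points, n_genes, bin_size, n_regions = 20_000, 5, 50.0, 2

# Synthetic points: the gene mix differs between left and right half of the tissue.
xy = rng.uniform(0, 1000, size=(n_points, 2))
left = xy[:, 0] < 500
genes = np.where(left,
                 rng.choice(n_genes, n_points, p=[0.6, 0.1, 0.1, 0.1, 0.1]),
                 rng.choice(n_genes, n_points, p=[0.1, 0.1, 0.1, 0.1, 0.6]))

# Composition vector per grid bin (counts of each gene, normalised).
bins = (xy // bin_size).astype(int)
bin_ids, bin_index = np.unique(bins, axis=0, return_inverse=True)
comp = np.zeros((len(bin_ids), n_genes))
np.add.at(comp, (bin_index, genes), 1.0)
comp /= comp.sum(axis=1, keepdims=True)

# Cluster bins into regions; each point inherits its bin's region label.
regions = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit_predict(comp)
point_region = regions[bin_index]
print("points per region:", np.bincount(point_region))
```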
18.
  •  
19.
  • Andersson, Axel, et al. (authors)
  • Transcriptome-Supervised Classification of Tissue Morphology Using Deep Learning
  • 2020
  • In: IEEE 17th International Symposium on Biomedical Imaging (ISBI). - 9781538693308 - 9781538693315 ; , pp. 1630-1633
  • Conference paper (peer-reviewed) abstract
    • Deep learning has proven to successfully learn variations in tissue and cell morphology. Training of such models typically relies on expensive manual annotations. Here we conjecture that spatially resolved gene expression, i.e., the transcriptome, can be used as an alternative to manual annotations. In particular, we trained five convolutional neural networks with patches of different sizes extracted from locations defined by spatially resolved gene expression. The network is trained to classify tissue morphology related to two different genes, general tissue, as well as background, on an image of fluorescence-stained nuclei in a mouse brain coronal section. Performance is evaluated on an independent tissue section from a different mouse brain, reaching an average Dice score of 0.51. Results may indicate that novel techniques for spatially resolved transcriptomics together with deep learning may provide a unique and unbiased way to find genotype-phenotype relationships.
  •  
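The training-data construction described in the entry above (patches cut at gene-expression marker locations and labelled by the marker's gene, so no manual annotation is needed) can be sketched in a few lines. The image and marker table below are synthetic placeholders, not the paper's data or code.

```python
# Sketch of the training-data construction described above: image patches are
# cut out at locations given by spatially resolved gene-expression markers and
# labelled by the marker's gene. The image and marker table are synthetic.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((2048, 2048)).astype(np.float32)   # stand-in for a stained-nuclei image

# Decoded markers: x, y coordinates and a categorical gene label per marker.
n_markers = 500
markers_xy = rng.integers(64, 2048 - 64, size=(n_markers, 2))
markers_gene = rng.integers(0, 2, size=n_markers)     # two genes of interest


def extract_patches(img, xy, labels, size=64):
    """Cut (size x size) patches centred on each marker; keep the gene label."""
    half = size // 2
    patches, kept_labels = [], []
    for (x, y), lab in zip(xy, labels):
        patch = img[y - half:y + half, x - half:x + half]
        if patch.shape == (size, size):               # skip markers too close to the border
            patches.append(patch)
            kept_labels.append(lab)
    return np.stack(patches), np.array(kept_labels)


X, y = extract_patches(image, markers_xy, markers_gene)
print(X.shape, y.shape)     # patches ready to train a CNN classifier on
```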
20.
  • Beháňová, Andrea, et al. (authors)
  • Spatial Statistics for Understanding Tissue Organization
  • 2022
  • In: Frontiers in Physiology. - : Frontiers Media S.A.. - 1664-042X. ; 13
  • Research review (peer-reviewed) abstract
    • Interpreting tissue architecture plays an important role in gaining a better understanding of healthy tissue development and disease. Novel molecular detection and imaging techniques make it possible to locate many different types of objects, such as cells and/or mRNAs, and map their location across the tissue space. In this review, we present several methods that provide quantification and statistical verification of observed patterns in the tissue architecture. We categorize these methods into three main groups: Spatial statistics on a single type of object, two types of objects, and multiple types of objects. We discuss the methods in relation to four hypotheses regarding the methods' capability to distinguish random and non-random distributions of objects across a tissue sample, and present a number of openly available tools where these methods are provided. We also discuss other spatial statistics methods compatible with other types of input data.
  •  
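The review above covers spatial statistics for one, two, and multiple object types. A common two-type test is sketched below: compare the observed mean nearest-neighbour distance from type-A to type-B objects against a null distribution obtained by permuting the type labels (a Monte Carlo test). The coordinates are synthetic and the example is illustrative only.

```python
# Minimal example of a two-object-type spatial statistic of the kind reviewed
# above: the mean nearest-neighbour distance from type-0 objects to type-1
# objects, compared against a label-permutation null (Monte Carlo test).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Synthetic tissue: type-1 objects clustered around type-0 objects (attraction).
a = rng.uniform(0, 1000, size=(200, 2))
b = a[rng.integers(0, 200, size=300)] + rng.normal(0, 10, size=(300, 2))
points = np.vstack([a, b])
labels = np.array([0] * len(a) + [1] * len(b))


def mean_nn_distance(pts, lab):
    """Mean distance from each type-0 point to its nearest type-1 point."""
    tree = cKDTree(pts[lab == 1])
    d, _ = tree.query(pts[lab == 0])
    return d.mean()


observed = mean_nn_distance(points, labels)
null = np.array([mean_nn_distance(points, rng.permutation(labels))
                 for _ in range(999)])
p_value = (np.sum(null <= observed) + 1) / (len(null) + 1)   # small p => attraction
print(f"observed = {observed:.1f}, null mean = {null.mean():.1f}, p = {p_value:.3f}")
```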
21.
  • Bekkhus, Tove, et al. (authors)
  • Automated detection of vascular remodeling in human tumor draining lymph nodes by the deep learning tool HEV-finder
  • 2022
  • In: Journal of Pathology. - : John Wiley & Sons. - 0022-3417 .- 1096-9896. ; 258:1, pp. 4-11
  • Journal article (peer-reviewed) abstract
    • Vascular remodeling is common in human cancer and has potential as a future biomarker for prediction of disease progression and tumor immunity status. It can also affect metastatic sites, including the tumor-draining lymph nodes (TDLNs). Dilation of the high endothelial venules (HEVs) within TDLNs has been observed in several types of cancer. We recently demonstrated that it is a premetastatic effect that can be linked to tumor invasiveness in breast cancer. Manual visual assessment of changes in vascular morphology is a tedious and difficult task, limiting high-throughput analysis. Here we present a fully automated approach for detection and classification of HEV dilation. By using 12,524 manually classified HEVs, we trained a deep-learning model and created a graphical user interface for visualization of the results. The tool, named the HEV-finder, selectively analyses HEV dilation in specific regions of the lymph nodes. We evaluated the HEV-finder's ability to detect and classify HEV dilation in different types of breast cancer compared to manual annotations. Our results constitute a successful example of large-scale, fully automated, and user-independent, image-based quantitative assessment of vascular remodeling in human pathology and lay the groundwork for future exploration of HEV dilation in TDLNs as a biomarker.
  •  
22.
  • Chelebian, Eduard, et al. (authors)
  • DEPICTER : Deep representation clustering for histology annotation
  • 2024
  • In: Computers in Biology and Medicine. - : Elsevier. - 0010-4825 .- 1879-0534. ; 170
  • Journal article (peer-reviewed) abstract
    • Automatic segmentation of histopathology whole-slide images (WSI) usually involves supervised training of deep learning models with pixel-level labels to classify each pixel of the WSI into tissue regions such as benign or cancerous. However, fully supervised segmentation requires large-scale data manually annotated by experts, which can be expensive and time-consuming to obtain. Non-fully supervised methods, ranging from semi-supervised to unsupervised, have been proposed to address this issue and have been successful in WSI segmentation tasks. But these methods have mainly been focused on technical advancements in algorithmic performance rather than on the development of practical tools that could be used by pathologists or researchers in real-world scenarios. In contrast, we present DEPICTER (Deep rEPresentatIon ClusTERing), an interactive segmentation tool for histopathology annotation that produces a patch-wise dense segmentation map at WSI level. The interactive nature of DEPICTER leverages self- and semi-supervised learning approaches to allow the user to participate in the segmentation producing reliable results while reducing the workload. DEPICTER consists of three steps: first, a pretrained model is used to compute embeddings from image patches. Next, the user selects a number of benign and cancerous patches from the multi-resolution image. Finally, guided by the deep representations, label propagation is achieved using our novel seeded iterative clustering method or by directly interacting with the embedding space via feature space gating. We report both real-time interaction results with three pathologists and evaluate the performance on three public cancer classification dataset benchmarks through simulations. The code and demos of DEPICTER are publicly available at https://github.com/eduardchelebian/depicter.
  •  
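The entry above propagates a handful of user-selected benign/cancer seed patches through pretrained embeddings. The sketch below shows a generic seeded nearest-centroid propagation over synthetic embeddings; it conveys the interaction pattern but is not the DEPICTER seeded iterative clustering or feature-space gating itself.

```python
# Generic seeded label propagation over patch embeddings, to convey the idea
# behind interactive tools like the one above: a few user-labelled patches
# seed class centroids, the remaining patches are assigned, and the centroids
# are refined iteratively. Synthetic embeddings; not the DEPICTER algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for embeddings of WSI patches from a pretrained network.
emb = np.vstack([rng.normal(0, 1, (400, 128)),         # "benign"-like patches
                 rng.normal(2, 1, (400, 128))])         # "cancer"-like patches

seed_idx = np.array([0, 1, 2, 400, 401, 402])           # user clicks a few patches
seed_lab = np.array([0, 0, 0, 1, 1, 1])                 # 0 = benign, 1 = cancer


def seeded_propagation(x, seeds, seed_labels, n_iter=10):
    centroids = np.stack([x[seeds[seed_labels == c]].mean(axis=0) for c in (0, 1)])
    for _ in range(n_iter):
        # Assign every patch to the nearest centroid ...
        d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        assign[seeds] = seed_labels                      # ... but never override the seeds
        # ... then refine the centroids from the current assignment.
        centroids = np.stack([x[assign == c].mean(axis=0) for c in (0, 1)])
    return assign


labels = seeded_propagation(emb, seed_idx, seed_lab)
print("patches per class:", np.bincount(labels))
```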
23.
  • Chelebian, Eduard, et al. (authors)
  • Morphological Features Extracted by AI Associated with Spatial Transcriptomics in Prostate Cancer
  • 2021
  • In: Cancers. - : MDPI AG. - 2072-6694. ; 13:19
  • Journal article (peer-reviewed) abstract
    • Simple Summary: Prostate cancer has very varied appearances when examined under the microscope, and it is difficult to distinguish clinically significant cancer from indolent disease. In this study, we use computer analyses inspired by neurons, so-called 'neural networks', to gain new insights into the connection between how tissue looks and underlying genes which program the function of prostate cells. Neural networks are 'trained' to carry out specific tasks, and training requires large numbers of training examples. Here, we show that a network pre-trained on different data can still identify biologically meaningful regions, without the need for additional training. The neural network interpretations matched independent manual assessment by human pathologists, and even resulted in more refined interpretation when considering the relationship with the underlying genes. This is a new way to automatically detect prostate cancer and its genetic characteristics without the need for human supervision, which means it could possibly help in making better treatment decisions. Prostate cancer is a common cancer type in men, yet some of its traits are still under-explored. One reason for this is high molecular and morphological heterogeneity. The purpose of this study was to develop a method to gain new insights into the connection between morphological changes and underlying molecular patterns. We used artificial intelligence (AI) to analyze the morphology of seven hematoxylin and eosin (H & E)-stained prostatectomy slides from a patient with multi-focal prostate cancer. We also paired the slides with spatially resolved expression for thousands of genes obtained by a novel spatial transcriptomics (ST) technique. As both spaces are highly dimensional, we focused on dimensionality reduction before seeking associations between them. Consequently, we extracted morphological features from H & E images using an ensemble of pre-trained convolutional neural networks and proposed a workflow for dimensionality reduction. To summarize the ST data into genetic profiles, we used a previously proposed factor analysis. We found that the automatically defined regions, outlined by unsupervised clustering, were associated with independent manual annotations and, in some cases, revealed further relevant subdivisions. The morphological patterns were also correlated with molecular profiles and could predict the spatial variation of individual genes. This novel approach enables flexible unsupervised studies relating morphological and genetic heterogeneity using AI to be carried out.
  •  
24.
  • Dobson, Ellen T.A., et al. (authors)
  • ImageJ and CellProfiler : Complements in Open‐Source Bioimage Analysis
  • 2021
  • In: Current Protocols in Microbiology. - : John Wiley & Sons. - 1934-8525 .- 1088-7423. ; 1:5
  • Journal article (peer-reviewed) abstract
    • ImageJ and CellProfiler have long been leading open-source platforms in the field of bioimage analysis. ImageJ's traditional strength is in single-image processing and investigation, while CellProfiler is designed for building large-scale, modular analysis pipelines. Although many image analysis problems can be well solved with one or the other, using these two platforms together in a single workflow can be powerful. Here, we share two pipelines demonstrating mechanisms for productively and conveniently integrating ImageJ and CellProfiler for (1) studying cell morphology and migration via tracking, and (2) advanced stitching techniques for handling large, tiled image sets to improve segmentation. No single platform can provide all the key and most efficient functionality needed for all studies. While both programs can be and are often used separately, these pipelines demonstrate the benefits of using them together for image analysis workflows. ImageJ and CellProfiler are both committed to interoperability between their platforms, with ongoing development to improve how each can be leveraged from the other.
  •  
25.
  • Edfeldt, Gabriella, et al. (authors)
  • Regular Use of Depot Medroxyprogesterone Acetate Causes Thinning of the Superficial Lining and Apical Distribution of Human Immunodeficiency Virus Target Cells in the Human Ectocervix
  • 2022
  • In: Journal of Infectious Diseases. - : Oxford University Press. - 0022-1899 .- 1537-6613. ; 225:7, pp. 1151-1161
  • Journal article (peer-reviewed) abstract
    • Background: The hormonal contraceptive depot medroxyprogesterone acetate (DMPA) may be associated with an increased risk of acquiring human immunodeficiency virus (HIV). We hypothesize that DMPA use influences the ectocervical tissue architecture and HIV target cell localization. Methods: Quantitative image analysis workflows were developed to assess ectocervical tissue samples collected from DMPA users and control subjects not using hormonal contraception. Results: Compared to controls, the DMPA group exhibited a significantly thinner apical ectocervical epithelial layer and a higher proportion of CD4+CCR5+ cells with a more superficial location. This localization corresponded to an area with a nonintact E-cadherin net structure. CD4+Langerin+ cells were also more superficially located in the DMPA group, although fewer in number compared to the controls. Natural plasma progesterone levels did not correlate with any of these parameters, whereas estradiol levels were positively correlated with E-cadherin expression and a more basal location for HIV target cells of the control group. Conclusions: DMPA users have a less robust epithelial layer and a more apical distribution of HIV target cells in the human ectocervix, which could confer a higher risk of HIV infection. Our results highlight the importance of assessing intact genital tissue samples to gain insights into HIV susceptibility factors.
  •  
26.
  • Günaydin, Gökce, et al. (authors)
  • Impact of Q-Griffithsin anti-HIV microbicide gel in non-human primates : In situ analyses of epithelial and immune cell markers in rectal mucosa
  • 2019
  • In: Scientific Reports. - : Nature Publishing Group. - 2045-2322. ; 9
  • Journal article (peer-reviewed) abstract
    • Natural-product-derived lectins can function as potent viral inhibitors with minimal toxicity as shown in vitro and in small animal models. We here assessed the effect of rectal application of an anti-HIV lectin-based microbicide Q-Griffithsin (Q-GRFT) in rectal tissue samples from rhesus macaques. E-cadherin(+) cells, CD4(+) cells and total mucosal cells were assessed using in situ staining combined with a novel customized digital image analysis platform. Variations in cell numbers between baseline, placebo and Q-GRFT treated samples were analyzed using random intercept linear mixed effect models. The frequencies of rectal E-cadherin(+) cells remained stable despite multiple tissue samplings and Q-GRFT gel (0.1%, 0.3% and 1%, respectively) treatment. Whereas single dose application of Q-GRFT did not affect the frequencies of rectal CD4(+) cells, multi-dose Q-GRFT caused a small, but significant increase of the frequencies of intra-epithelial CD4(+) cells (placebo: median 4%; 1% Q-GRFT: median 7%) and of the CD4(+) lamina propria cells (placebo: median 30%; 0.1-1% Q-GRFT: median 36-39%). The resting time between sampling points was further associated with minor changes in the total and CD4(+) rectal mucosal cell levels. The results add to general knowledge of in vivo evaluation of anti-HIV microbicide application concerning cellular effects in rectal mucosa.
  •  
27.
  • Gupta, Ankit, et al. (authors)
  • SimSearch : A Human-in-The-Loop Learning Framework for Fast Detection of Regions of Interest in Microscopy Images
  • 2022
  • In: IEEE journal of biomedical and health informatics. - : Institute of Electrical and Electronics Engineers (IEEE). - 2168-2194 .- 2168-2208. ; 26:8, pp. 4079-4089
  • Journal article (peer-reviewed) abstract
    • Objective: Large-scale microscopy-based experiments often result in images with rich but sparse information content. An experienced microscopist can visually identify regions of interest (ROIs), but this becomes a cumbersome task with large datasets. Here we present SimSearch, a framework for quick and easy user-guided training of a deep neural model aimed at fast detection of ROIs in large-scale microscopy experiments. Methods: The user manually selects a small number of patches representing different classes of ROIs. This is followed by feature extraction using a pre-trained deep-learning model, and interactive patch selection pruning, resulting in a smaller set of clean (user approved) and larger set of noisy (unapproved) training patches of ROIs and background. The pre-trained deep-learning model is thereafter first trained on the large set of noisy patches, followed by refined training using the clean patches. Results: The framework is evaluated on fluorescence microscopy images from a large-scale drug screening experiment, brightfield images of immunohistochemistry-stained patient tissue samples, and malaria-infected human blood smears, as well as transmission electron microscopy images of cell sections. Compared to state-of-the-art and manual/visual assessment, the results show similar performance with maximal flexibility and minimal a priori information and user interaction. Conclusions: SimSearch quickly adapts to different data sets, which demonstrates the potential to speed up many microscopy-based experiments based on a small amount of user interaction. Significance: SimSearch can help biologists quickly extract informative regions and perform analyses on large datasets helping increase the throughput in a microscopy experiment.
  •  
28.
  • Gupta, Anindya, et al. (authors)
  • Weakly-supervised prediction of cell migration modes in confocal microscopy images using Bayesian deep learning
  • 2020
  • In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). - 9781538693308 - 9781538693315 ; , pp. 1626-1629
  • Conference paper (peer-reviewed) abstract
    • Cell migration is pivotal for development, physiology, and disease treatment. A single cell on a 2D surface can utilize continuous or discontinuous migration modes. To comprehend cell migration, adequate quantification for single-cell-based analysis is crucial. An automated approach could alleviate tedious manual analysis, facilitating large-scale drug screening. Supervised deep learning has shown promising outcomes in computerized microscopy image analysis. However, its application is limited due to the scarcity of carefully annotated data and the uncertainty of deterministic outputs. We compare three deep learning models to study the problem of learning discriminative morphological representations using weakly annotated data for predicting cell migration modes. We also estimate Bayesian uncertainty to describe the confidence of the probabilistic predictions. Amongst the three compared models, DenseNet yielded the best results with a sensitivity of 87.91%±13.22 at a false negative rate of 1.26%±4.18.
  •  
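The entry above reports Bayesian uncertainty for the probabilistic predictions. A common way to approximate this is Monte Carlo dropout, sketched below on a toy classifier; this shows the general technique, not the exact models compared in the paper.

```python
# Monte Carlo dropout, a common approximation for the Bayesian uncertainty
# mentioned above: dropout is kept active at test time and the spread of the
# sampled predictions is used as an uncertainty estimate. Toy model and data.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.3),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),                     # e.g. continuous vs. discontinuous migration
)


def mc_dropout_predict(net, x, n_samples=50):
    net.train()                           # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(net(x), dim=1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)


cells = torch.randn(4, 1, 64, 64)         # stand-in for single-cell image crops
mean_p, std_p = mc_dropout_predict(model, cells)
print("predictive mean:\n", mean_p)
print("predictive std (uncertainty):\n", std_p)
```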
29.
  • Hallström, Erik, et al. (authors)
  • Label-free deep learning-based species classification of bacteria imaged by phase-contrast microscopy
  • 2023
  • In: PloS Computational Biology. - : Public Library of Science (PLoS). - 1553-734X .- 1553-7358. ; 19:11
  • Journal article (peer-reviewed) abstract
    • Reliable detection and classification of bacteria and other pathogens in the human body, animals, food, and water is crucial for improving and safeguarding public health. For instance, identifying the species and its antibiotic susceptibility is vital for effective bacterial infection treatment. Here we show that phase-contrast time-lapse microscopy combined with deep learning is sufficient to classify four species of bacteria relevant to human health. The classification is performed on living bacteria and does not require fixation or staining, meaning that the bacterial species can be determined as the bacteria reproduce in a microfluidic device, enabling parallel determination of susceptibility to antibiotics. We assess the performance of convolutional neural networks and vision transformers, where the best model attained a class-average accuracy exceeding 98%. Our successful proof-of-principle results suggest that the methods should be challenged with data covering more species and clinically relevant isolates for future clinical use. Bacterial infections are a leading cause of premature death worldwide, and growing antibiotic resistance is making treatment increasingly challenging. To effectively treat a patient with a bacterial infection, it is essential to quickly detect and identify the bacterial species and determine its susceptibility to different antibiotics. Prompt and effective treatment is crucial for the patient's survival. A microfluidic device functions as a miniature "lab-on-chip" for manipulating and analyzing tiny amounts of fluids, such as blood or urine samples from patients. Microfluidic chips with chambers and channels have been designed for quickly testing bacterial susceptibility to different antibiotics by analyzing bacterial growth. Identifying bacterial species has previously relied on killing the bacteria and applying species-specific fluorescent probes. The purpose of the species identification proposed here is to speed up decisions on treatment options by giving an indication of the bacterial species already in the first few imaging frames, without interfering with the ongoing antibiotic susceptibility testing. We introduce deep learning models as a fast and cost-effective method for identifying bacteria species. We envision this method being employed concurrently with antibiotic susceptibility tests in future applications, significantly enhancing bacterial infection treatments.
  •  
30.
  • Kartasalo, Kimmo, et al. (författare)
  • Artificial Intelligence for Diagnosis and Gleason Grading of Prostate Cancer in Biopsies-Current Status and Next Steps
  • 2021
  • Ingår i: European Urology Focus. - : Elsevier. - 2405-4569. ; 7:4, s. 687-691
  • Forskningsöversikt (refereegranskat)abstract
    • Diagnosis and Gleason grading of prostate cancer in biopsies are critical for the clinical management of men with prostate cancer. Despite this, the high grading variability among pathologists leads to the potential for under- and overtreatment. Artificial intelligence (AI) systems have shown promise in assisting pathologists to perform Gleason grading, which could help address this problem. In this mini-review, we highlight studies reporting on the development of AI systems for cancer detection and Gleason grading, and discuss the progress needed for widespread clinical implementation, as well as anticipated future developments. Patient summary: This mini-review summarizes the evidence relating to the validation of artificial intelligence (AI)-assisted cancer detection and Gleason grading of prostate cancer in biopsies, and highlights the remaining steps required prior to its widespread clinical implementation. We found that, although there is strong evidence to show that AI is able to perform Gleason grading on par with experienced uropathologists, more work is needed to ensure the accuracy of results from AI systems in diverse settings across different patient populations, digitization platforms, and pathology laboratories.
  •  
31.
  • Ke, Wenfan, et al. (författare)
  • Genes in human obesity loci are causal obesity genes in C. elegans
  • 2021
  • Ingår i: PLOS Genetics. - : Public Library of Science (PLoS). - 1553-7390 .- 1553-7404. ; 17:9
  • Tidskriftsartikel (refereegranskat)abstract
    • Obesity and its associated metabolic syndrome are a leading cause of morbidity and mortality in the United States. Given the disease's heavy burden on patients and the healthcare system, there has been increased interest in identifying pharmacological targets for the treatment and prevention of obesity. Towards this end, genome-wide association studies (GWAS) have identified hundreds of human genetic variants associated with obesity. The next challenge is to experimentally define which of these variants are causally linked to obesity, and could therefore become targets for the treatment or prevention of obesity. Here we employ high-throughput in vivo RNAi screening to test 293 C. elegans orthologs of human obesity-candidate genes reported in GWAS for causality. We RNAi screened these 293 genes in C. elegans subject to two different feeding regimens: (1) regular diet, and (2) high-fructose diet, which we developed and present here as an invertebrate model of diet-induced obesity (DIO). We report 14 genes that promote obesity and 3 genes that prevent DIO when silenced in C. elegans. Further, we show that knock-down of the 3 DIO genes not only prevents excessive fat accumulation in primary and ectopic fat depots but also improves the health and extends the lifespan of C. elegans overconsuming fructose. Importantly, the direction of the association between expression variants in these loci and obesity in mice and humans matches the phenotypic outcome of the loss-of-function of the C. elegans ortholog genes, supporting the notion that some of these genes would be causally linked to obesity across phylogeny. Therefore, in addition to defining causality for several genes so far merely correlated with obesity, this study demonstrates the value of model systems compatible with in vivo high-throughput genetic screening to causally link GWAS gene candidates to human diseases. Author summary: Human GWAS have identified hundreds of genetic variants associated with human obesity. The genes being regulated by these variants at the protein or expression level represent potential anti-obesity targets. However, for the vast majority of these genes, it is unclear whether they cause obesity or are coincidentally associated with the disease. Here we use a high-throughput genetic screening strategy to test in vivo in Caenorhabditis elegans the potential causal role of human-obesity GWAS hits. Further, we combined the results of the genetic screen with analyses of mouse and human GWAS databases. As a result, we present 17 genes that promote or prevent C. elegans obesity, and the early onset of organismal deterioration and death associated with obesity. Further, the sign of the correlation between the expression levels of the human genes and their associated clinical traits matches, for the most part, the phenotypic effects of knocking down these genes in C. elegans, suggesting conserved causality and pharmacological potential for these genes.
  •  
32.
  • Marco Salas, Sergio, et al. (författare)
  • De novo spatiotemporal modelling of cell-type signatures in the developmental human heart using graph convolutional neural networks
  • 2022
  • Ingår i: PLOS Computational Biology. - : Public Library of Science (PLoS). - 1553-734X .- 1553-7358. ; 18:8
  • Tidskriftsartikel (refereegranskat)abstract
    • With the emergence of high throughput single cell techniques, the understanding of the molecular and cellular diversity of mammalian organs has rapidly increased. In order to understand the spatial organization of this diversity, single cell data is often integrated with spatial data to create probabilistic cell maps. However, targeted cell typing approaches relying on existing single cell data achieve incomplete and biased maps that could mask the true diversity present in a tissue slide. Here we applied a de novo technique to spatially resolve and characterize cellular diversity of in situ sequencing data during human heart development. We obtained and made accessible well-defined spatial cell-type maps of fetal hearts from 4.5 to 9 post-conception weeks, not biased by probabilistic cell typing approaches. With our analysis, we could characterize previously unreported molecular diversity within cardiomyocytes and epicardial cells and identify their characteristic expression signatures, comparing them with specific subpopulations found in single cell RNA sequencing datasets. We further characterized the differentiation trajectories of epicardial cells, identifying a clear spatial component to them. All in all, our study provides a novel technique for conducting de novo spatiotemporal analyses in developmental tissue samples and a useful resource for online exploration of cell-type differentiation during heart development at sub-cellular image resolution.
  •  
33.
  • Matuszewski, Damian J., et al. (författare)
  • Image-Based Detection of Patient-Specific Drug-Induced Cell-Cycle Effects in Glioblastoma
  • 2018
  • Ingår i: SLAS Discovery. - : Elsevier BV. - 2472-5560 .- 2472-5552. ; 23:10, s. 1030-1039
  • Tidskriftsartikel (refereegranskat)abstract
    • Image-based analysis is an increasingly important tool to characterize the effect of drugs in large-scale chemical screens. Herein, we present image and data analysis methods to investigate population cell-cycle dynamics in patient-derived brain tumor cells. Images of glioblastoma cells grown in multiwell plates were used to extract per-cell descriptors, including nuclear DNA content. We reduced the DNA content data from per-cell descriptors to per-well frequency distributions, which were used to identify compounds affecting cell-cycle phase distribution. We analyzed cells from 15 patient cases representing multiple subtypes of glioblastoma and searched for clusters of cell-cycle phase distributions characterizing similarities in response to 249 compounds at 11 doses. We show that this approach applied in a blind analysis with unlabeled substances identified drugs that are commonly used for treating solid tumors as well as other compounds that are well known for inducing cell-cycle arrest. Redistribution of nuclear DNA content signals is thus a robust metric of cell-cycle arrest in patient-derived glioblastoma cells.
  •  
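Entry 33 reduces per-cell DNA content measurements to per-well frequency distributions and then compares wells to detect compound-induced shifts in cell-cycle phase. The sketch below shows only that reduction step on synthetic data; the column names, bin edges, and distance measure are illustrative assumptions rather than the published pipeline.

import numpy as np
import pandas as pd

# Toy per-cell table: one row per segmented nucleus, with integrated
# DNA intensity as a proxy for DNA content (columns are assumptions).
rng = np.random.default_rng(0)
cells = pd.DataFrame({
    "well": rng.choice(["A01", "A02", "B01"], size=3000),
    "dna_content": np.concatenate([
        rng.normal(1.0, 0.08, 2000),   # G1-like population (2N)
        rng.normal(2.0, 0.12, 1000),   # G2/M-like population (4N)
    ]),
})

# Reduce per-cell measurements to a per-well frequency distribution.
bins = np.linspace(0.5, 2.5, 41)
profiles = {
    well: np.histogram(values, bins=bins, density=True)[0]
    for well, values in cells.groupby("well")["dna_content"]
}

# Compare wells (e.g. treated vs. control) with a simple L1 distance;
# clustering such profiles groups compounds with similar cell-cycle effects.
def l1(a, b):
    return np.abs(np.asarray(a) - np.asarray(b)).sum()

print(l1(profiles["A01"], profiles["A02"]))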
34.
  • Partel, Gabriele, 1988-, et al. (författare)
  • Automated identification of the mouse brain’s spatial compartments from in situ sequencing data
  • 2020
  • Ingår i: BMC Biology. - : Springer Nature. - 1741-7007. ; 18:1
  • Tidskriftsartikel (refereegranskat)abstract
    • Background: Neuroanatomical compartments of the mouse brain are identified and outlined mainly based on manual annotations of samples using features related to tissue and cellular morphology, taking advantage of publicly available reference atlases. However, this task is challenging since sliced tissue sections are rarely cut perfectly parallel to, or at the same angle as, sections in the reference atlas, and organs from different individuals may vary in size and shape. With the advent of in situ sequencing technologies, it is now possible to profile the gene expression of targeted genes inside preserved tissue samples and thus spatially map biological processes across anatomical compartments. This also opens up new approaches to identifying tissue compartments. Results: Here, we show how in situ sequencing data combined with dimensionality reduction and clustering can be used to identify spatial compartments that correspond to known anatomical compartments of the brain. We also visualize gradients in gene expression and sharp as well as smooth transitions between different compartments. We apply our method on mouse brain sections and show that computationally defined anatomical compartments are highly reproducible across individuals and have the potential to replace manual annotation based on cell and tissue morphology. Conclusion: Mapping the brain based on molecular information means that we can create detailed atlases independent of sectioning angle or variations between individuals.
  •  
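Entry 34 identifies spatial compartments by combining spatially resolved gene expression with dimensionality reduction and clustering. A generic sketch of that strategy, not the authors' code, could bin in situ sequencing reads into a grid, build per-bin gene-count vectors, and cluster a low-dimensional embedding; the grid size, component count, and cluster count below are arbitrary choices for illustration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Toy in situ sequencing reads: x, y position and a gene label per read.
rng = np.random.default_rng(1)
n_reads, n_genes = 20000, 50
x = rng.uniform(0, 1000, n_reads)
y = rng.uniform(0, 1000, n_reads)
gene = rng.integers(0, n_genes, n_reads)

# Bin reads into a coarse grid and count genes per bin ("pseudo-spots").
grid = 50.0
gx, gy = (x // grid).astype(int), (y // grid).astype(int)
n_x, n_y = gx.max() + 1, gy.max() + 1
counts = np.zeros((n_x * n_y, n_genes))
np.add.at(counts, (gx * n_y + gy, gene), 1)

# Dimensionality reduction followed by clustering assigns each bin to a
# putative spatial compartment; mapping labels back onto the grid would
# reveal contiguous anatomical regions in real data.
embedding = PCA(n_components=10).fit_transform(np.log1p(counts))
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(embedding)
print(labels.reshape(n_x, n_y).shape)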
35.
  • Partel, Gabriele, 1988-, et al. (författare)
  • Graph-based image decoding for multiplexed in situ RNA detection
  • 2021
  • Ingår i: 2020 25th International Conference on Pattern Recognition (ICPR). - : Institute of Electrical and Electronics Engineers (IEEE). - 9781728188089 ; , s. 3783-3790
  • Konferensbidrag (refereegranskat)abstract
    • Image-based multiplexed in situ RNA detection makes it possible to map the spatial gene expression of hundreds to thousands of genes in parallel, and thus discern at the same time a large number of different cell types to better understand tissue development, heterogeneity, and disease. Fluorescent signals are detected over multiple fluorescent channels and imaging rounds and decoded in order to identify RNA molecules in their morphological context. Here we present a graph-based decoding approach that models the decoding process as a network flow problem jointly optimizing observation likelihoods and distances of signal detections, thus achieving robustness with respect to noise and spatial jitter of the fluorescent signals. We evaluated our method on synthetic data generated at different experimental conditions, and on real data of in situ RNA sequencing, comparing results with respect to alternative and gold standard image decoding pipelines.
  •  
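Entry 35 casts signal decoding as a network flow problem that jointly weighs observation likelihoods and spatial distances between detections in consecutive imaging rounds. The toy example below solves a heavily simplified version of such a matching with networkx's minimum-cost flow; the node names and integer costs are made up for illustration, and the real decoder handles many rounds, barcodes, and noise terms.

import networkx as nx

# Two imaging rounds with two candidate signal detections each.
# Matching detections across rounds while minimising total cost
# (here standing in for distance plus detection confidence) is a
# minimum-cost flow problem.
G = nx.DiGraph()
G.add_node("S", demand=-2)   # two units of flow = two decoded molecules
G.add_node("T", demand=2)

for det in ["a1", "a2"]:                 # round 1 candidates
    G.add_edge("S", det, capacity=1, weight=0)
for det in ["b1", "b2"]:                 # round 2 candidates
    G.add_edge(det, "T", capacity=1, weight=0)

# Cross-round edges with made-up integer costs.
G.add_edge("a1", "b1", capacity=1, weight=2)
G.add_edge("a1", "b2", capacity=1, weight=9)
G.add_edge("a2", "b1", capacity=1, weight=8)
G.add_edge("a2", "b2", capacity=1, weight=3)

flow = nx.min_cost_flow(G)
pairs = [(u, v) for u in flow for v, f in flow[u].items()
         if f and u.startswith("a")]
print(pairs)   # [('a1', 'b1'), ('a2', 'b2')]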
36.
  • Partel, Gabriele, et al. (författare)
  • Identification of spatial compartments in tissue from in situ sequencing data
  • 2024
  • Annan publikation (övrigt vetenskapligt/konstnärligt)abstract
    • Spatial organization of tissue characterizes biological function, and spatially resolved gene expression has the power to reveal variations of features with high resolution. Here, we propose a novel graph-based in situ sequencing decoding approach that improves recall, enabling precise spatial gene expression analysis. We apply our method on in situ sequencing data from mouse brain sections, identify spatial compartments that correspond with known brain regions, and relate them with tissue morphology.
  •  
37.
  • Partel, Gabriele, 1988-, et al. (författare)
  • Spage2vec : Unsupervised representation of localized spatial gene expression signatures
  • 2021
  • Ingår i: The FEBS Journal. - : John Wiley & Sons. - 1742-464X .- 1742-4658. ; 288:6, s. 1859-1870
  • Tidskriftsartikel (refereegranskat)abstract
    • Investigations of spatial cellular composition of tissue architectures revealed by multiplexed in situ RNA detection often rely on inaccurate cell segmentation or prior biological knowledge from complementary single cell sequencing experiments. Here we present spage2vec, an unsupervised segmentation-free approach for decrypting the spatial transcriptomic heterogeneity of complex tissues at subcellular resolution. Spage2vec represents the spatial transcriptomic landscape of tissue samples as a graph and leverages a powerful machine learning graph representation technique to create a lower dimensional representation of local spatial gene expression. We apply spage2vec to mouse brain data from three different in situ transcriptomic assays and to a spatial gene expression dataset consisting of hundreds of individual cells. We show that learned representations encode meaningful biological spatial information of recurring localized gene expression signatures involved in cellular and subcellular processes.
  •  
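Spage2vec (entries 37 and 38) represents individual RNA detections as nodes in a spatial graph and learns low-dimensional neighbourhood embeddings with a graph representation model. As a rough approximation of the first steps only, the sketch below links each detection to its spatial neighbours and averages one-hot gene identities over the neighbourhood; the actual method replaces this hand-written aggregation with a trained graph neural network.

import numpy as np
from scipy.spatial import cKDTree

# Toy RNA detections: 2D position and a gene identity per spot.
rng = np.random.default_rng(2)
n_spots, n_genes = 5000, 30
xy = rng.uniform(0, 500, size=(n_spots, 2))
gene = rng.integers(0, n_genes, n_spots)

# Spatial neighbourhood graph: connect spots closer than a radius.
radius = 15.0
tree = cKDTree(xy)
neighbours = tree.query_ball_point(xy, r=radius)

# One-hot gene identity per spot, then mean-aggregate over neighbours.
# This yields a local "gene composition" vector per detection, a crude
# stand-in for the learned spage2vec embedding.
onehot = np.eye(n_genes)[gene]
local = np.stack([onehot[idx].mean(axis=0) for idx in neighbours])
print(local.shape)   # (n_spots, n_genes)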
38.
  • Partel, Gabriele, et al. (författare)
  • Spage2vec: Unsupervised detection of spatial gene expression constellations
  • 2024
  • Annan publikation (övrigt vetenskapligt/konstnärligt)abstract
    • Investigation of spatial cellular composition of tissue architectures revealed by multiplexed in situ RNA detection often relies on inaccurate cell segmentation or prior biological knowledge from complementary single cell sequencing experiments. Here we present spage2vec, an unsupervised segmentation-free approach for decrypting the spatial transcriptomic heterogeneity of complex tissues at subcellular resolution. Spage2vec represents the spatial transcriptomic landscape of tissue samples as a spatial functional network and leverages a powerful machine learning graph representation technique to create a lower dimensional representation of local spatial gene expression. We apply spage2vec to mouse brain data from three different in situ transcriptomic assays, showing that learned representations encode meaningful biological spatial information of recurring gene constellations involved in cellular and subcellular processes.
  •  
39.
  • Pielawski, Nicolas, et al. (författare)
  • CoMIR: Contrastive Multimodal Image Representation for Registration
  • 2020
  • Ingår i: NeurIPS - 34th Conference on Neural Information Processing Systems.
  • Konferensbidrag (refereegranskat)abstract
    • We propose contrastive coding to learn shared, dense image representations, referred to as CoMIRs (Contrastive Multimodal Image Representations). CoMIRs enable the registration of multimodal images where existing registration methods often fail due to a lack of sufficiently similar image structures. CoMIRs reduce the multimodal registration problem to a monomodal one, in which general intensity-based, as well as feature-based, registration algorithms can be applied. The method involves training one neural network per modality on aligned images, using a contrastive loss based on noise-contrastive estimation (InfoNCE). Unlike other contrastive coding methods, used for, e.g., classification, our approach generates image-like representations that contain the information shared between modalities. We introduce a novel, hyperparameter-free modification to InfoNCE, to enforce rotational equivariance of the learnt representations, a property essential to the registration task. We assess the extent of achieved rotational equivariance and the stability of the representations with respect to weight initialization, training set, and hyperparameter settings, on a remote sensing dataset of RGB and near-infrared images. We evaluate the learnt representations through registration of a biomedical dataset of bright-field and second-harmonic generation microscopy images; two modalities with very little apparent correlation. The proposed approach based on CoMIRs significantly outperforms registration of representations created by GAN-based image-to-image translation, as well as a state-of-the-art, application-specific method which takes additional knowledge about the data into account. Code is available at: https://github.com/MIDA-group/CoMIR.
  •  
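Entry 39 trains one network per modality with a contrastive loss based on noise-contrastive estimation (InfoNCE) so that corresponding image patches from the two modalities receive similar representations. Below is a generic, symmetric InfoNCE formulation in PyTorch over a batch of paired embeddings; the temperature value is arbitrary and the paper's rotational-equivariance modification is not included.

import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE over a batch of paired embeddings.
    z_a[i] and z_b[i] come from the same location in the two modalities;
    all other pairs in the batch act as negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature        # (batch, batch) similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Example with random embeddings standing in for per-patch representations.
z_a = torch.randn(32, 128)
z_b = torch.randn(32, 128)
print(info_nce(z_a, z_b).item())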
40.
  • Pielawski, Nicolas, et al. (författare)
  • In Silico Prediction of Cell Traction Forces
  • 2020
  • Ingår i: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). - 9781538693308 - 9781538693315 ; , s. 877-881
  • Konferensbidrag (refereegranskat)abstract
    • Traction Force Microscopy (TFM) is a technique used to determine the tensions that a biological cell conveys to the underlying surface. Typically, TFM requires culturing cells on gels with fluorescent beads, followed by bead displacement calculations. We present a new method that predicts those forces from a regular fluorescent image of the cell. Using Deep Learning, we trained a Bayesian Neural Network adapted for pixel regression of the forces and show that it generalises to different cells of the same strain. The predicted forces are computed along with an approximated uncertainty, which shows whether the prediction is trustworthy or not. The proposed method could help estimate forces when bead displacement calculations are non-trivial and can also free one of the fluorescent channels of the microscope. Code is available at https://github.com/wahlby-lab/InSilicoTFM.
  •  
41.
  • Pielawski, Nicolas, et al. (författare)
  • Introducing Hann windows for reducing edge-effects in patch-based image segmentation
  • 2020
  • Ingår i: PLOS ONE. - : PUBLIC LIBRARY SCIENCE. - 1932-6203. ; 15:3
  • Tidskriftsartikel (refereegranskat)abstract
    • There is a limitation in the size of an image that can be processed using computationally demanding methods such as Convolutional Neural Networks (CNNs). Furthermore, many networks are designed to work with a pre-determined fixed image size. Some imaging modalities, notably biological and medical, can result in images up to a few gigapixels in size, meaning that they have to be divided into smaller parts, or patches, for processing. However, when performing pixel classification, this may lead to undesirable artefacts, such as edge effects in the final re-combined image. We introduce windowing methods from signal processing to effectively reduce such edge effects. With the assumption that the central part of an image patch often holds richer contextual information than its sides and corners, we reconstruct the prediction by overlapping patches that are weighted by 2-dimensional windows. We compare the results of simple averaging and four different windows: Hann, Bartlett-Hann, Triangular, and a recently proposed window by Cui et al., and show that the cosine-based Hann window achieves the best improvement as measured by the Structural Similarity Index (SSIM). We also apply the Dice score to show that classification errors close to patch edges are reduced. The proposed windowing method can be used together with any CNN model for segmentation without any modification and significantly improves network predictions.
  •  
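Entry 41 suppresses edge artefacts by weighting each patch prediction with a 2D window before summing overlapping patches and normalising by the accumulated weights. The NumPy sketch below reconstructs an image from half-overlapping patches using a 2D Hann window; the identity 'predict' function stands in for a segmentation network, and real implementations additionally pad the image so that borders are fully covered.

import numpy as np

def hann2d(size):
    """Separable 2D Hann window."""
    w = np.hanning(size)
    return np.outer(w, w)

def predict_weighted(image, patch=64, predict=lambda p: p):
    """Tile the image with 50% overlapping patches, weight each patch
    prediction with a 2D Hann window, and normalise by the summed weights."""
    step = patch // 2
    window = hann2d(patch)
    out = np.zeros_like(image, dtype=float)
    weight = np.zeros_like(image, dtype=float)
    for i in range(0, image.shape[0] - patch + 1, step):
        for j in range(0, image.shape[1] - patch + 1, step):
            pred = predict(image[i:i + patch, j:j + patch])
            out[i:i + patch, j:j + patch] += window * pred
            weight[i:i + patch, j:j + patch] += window
    # Border pixels receive near-zero weight here; padding the input
    # before tiling avoids that in practice.
    return out / np.maximum(weight, 1e-8)

img = np.random.rand(256, 256)
blended = predict_weighted(img)
print(blended.shape)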
42.
  • Pielawski, Nicolas, et al. (författare)
  • TissUUmaps 3 : Improvements in interactive visualization, exploration, and quality assessment of large-scale spatial omics data
  • Annan publikation (övrigt vetenskapligt/konstnärligt)abstract
    • Background and Objectives: Spatially resolved techniques for exploring the molecular landscape of tissue samples, such as spatial transcriptomics, often result in millions of data points and images too large to view on a regular desktop computer, limiting the possibilities in visual interactive data exploration. TissUUmaps is a free, open-source browser-based tool for GPU-accelerated visualization and interactive exploration of 10^7+ data points overlaying tissue samples. Methods: Herein we describe how TissUUmaps 3 provides instant multiresolution image viewing and can be customized, shared, and also integrated into Jupyter Notebooks. We introduce new modules where users can visualize markers and regions, explore spatial statistics, perform quantitative analyses of tissue morphology, and assess the quality of decoding in situ transcriptomics data. Results: We show that thanks to targeted optimizations the time and cost associated with interactive data exploration were reduced, enabling TissUUmaps 3 to handle the scale of today's spatial transcriptomics methods. Conclusion: TissUUmaps 3 provides significantly improved performance for large multiplex datasets as compared to previous versions. We envision TissUUmaps to contribute to broader dissemination and flexible sharing of large-scale spatial omics data.
  •  
43.
  • Pielawski, Nicolas, et al. (författare)
  • TissUUmaps 3 : Improvements in interactive visualization, exploration, and quality assessment of large-scale spatial omics data
  • 2023
  • Ingår i: Heliyon. - : Elsevier BV. - 2405-8440. ; 9:5
  • Tidskriftsartikel (refereegranskat)abstract
    • Background and objectives: Spatially resolved techniques for exploring the molecular landscape of tissue samples, such as spatial transcriptomics, often result in millions of data points and images too large to view on a regular desktop computer, limiting the possibilities in visual interactive data exploration. TissUUmaps is a free, open-source browser-based tool for GPU-accelerated visualization and interactive exploration of 10^7+ data points overlaying tissue samples. Methods: Herein we describe how TissUUmaps 3 provides instant multiresolution image viewing and can be customized, shared, and also integrated into Jupyter Notebooks. We introduce new modules where users can visualize markers and regions, explore spatial statistics, perform quantitative analyses of tissue morphology, and assess the quality of decoding in situ transcriptomics data. Results: We show that thanks to targeted optimizations the time and cost associated with interactive data exploration were reduced, enabling TissUUmaps 3 to handle the scale of today's spatial transcriptomics methods. Conclusion: TissUUmaps 3 provides significantly improved performance for large multiplex datasets as compared to previous versions. We envision TissUUmaps to contribute to broader dissemination and flexible sharing of large-scale spatial omics data.
  •  
44.
  • Pontén, Olle, et al. (författare)
  • PACMan : A software package for automated single-cell chlorophyll fluorometry
  • 2024
  • Ingår i: Cytometry Part A. - : John Wiley & Sons. - 1552-4922 .- 1552-4930. ; 105:3, s. 203-213
  • Tidskriftsartikel (refereegranskat)abstract
    • Microalgae, small photosynthetic unicells, are of great interest to ecology, ecotoxicology and biotechnology, and there is a growing need to investigate the ability of cells to photosynthesize under variable conditions. Current strategies involve hand-operated pulse-amplitude-modulated (PAM) chlorophyll fluorimeters, which can provide detailed insights into the photophysiology of entire populations or individual cells of microalgae but are typically limited in their throughput. To increase the throughput of a commercially available MICROSCOPY-PAM system, we present the PAM Automation Control Manager (‘PACMan’), an open-source Python software package that automates image acquisition, microscopy stage control and the triggering of external hardware components. PACMan comes with a user-friendly graphical user interface and is released together with a stand-alone tool (PAMalysis) for the automated calculation of per-cell maximum quantum efficiencies (= Fv/Fm). Using these two software packages, we successfully tracked the photophysiology of >1000 individual cells of green algae (Chlamydomonas reinhardtii) and dinoflagellates (genus Symbiodiniaceae) within custom-made microfluidic devices. Compared to the manual operation of MICROSCOPY-PAM systems, this represents a 10-fold increase in throughput. During experiments, PACMan coordinated the movement of the microscope stage and triggered the MICROSCOPY-PAM system to repeatedly capture high-quality image data across multiple positions. Finally, we analyzed single-cell Fv/Fm with the manufacturer-supplied software and PAMalysis, demonstrating a median difference <0.5% between both methods. We foresee that PACMan and its auxiliary software package will help increase the experimental throughput in a range of microalgae studies currently relying on hand-operated MICROSCOPY-PAM technologies.
  •  
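PAMalysis in entry 44 computes the per-cell maximum quantum efficiency Fv/Fm = (Fm - F0) / Fm from pairs of chlorophyll fluorescence images. The snippet below shows only that core calculation over a labelled cell mask; image acquisition, segmentation, and instrument control are assumed to happen elsewhere, and the toy data are random.

import numpy as np

def fv_fm_per_cell(f0_img, fm_img, labels):
    """Maximum quantum efficiency Fv/Fm = (Fm - F0) / Fm per labelled cell.
    f0_img: minimal fluorescence (dark-adapted), fm_img: maximal fluorescence
    after a saturating pulse, labels: integer mask with one id per cell."""
    results = {}
    for cell_id in np.unique(labels):
        if cell_id == 0:            # background
            continue
        mask = labels == cell_id
        f0 = f0_img[mask].mean()
        fm = fm_img[mask].mean()
        results[cell_id] = (fm - f0) / fm if fm > 0 else np.nan
    return results

# Toy example with two square "cells".
labels = np.zeros((64, 64), dtype=int)
labels[5:20, 5:20], labels[30:50, 30:50] = 1, 2
f0 = np.random.uniform(50, 60, labels.shape)
fm = np.random.uniform(200, 220, labels.shape)
print(fv_fm_per_cell(f0, fm, labels))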
45.
  •  
46.
  • Solorzano, Leslie, 1989- (författare)
  • Image Processing, Machine Learning and Visualization for Tissue Analysis
  • 2021
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • Knowledge discovery for understanding mechanisms of disease requires the integration of multiple sources of data collected at various magnifications and by different imaging techniques. Using spatial information, we can build maps of tissue and cells in which it is possible to extract, e.g., measurements of cell morphology, protein expression, and gene expression. These measurements reveal knowledge about cells such as their identity, origin, density, structural organization, activity, and interactions with other cells and cell communities: knowledge that can be correlated with survival and drug effectiveness. This thesis presents multidisciplinary projects that include a variety of methods for image and data analysis applied to images coming from fluorescence and brightfield microscopy. In brightfield images, the number of proteins that can be observed in the same tissue section is limited. To overcome this, we identified protein expression coming from consecutive tissue sections and fused images using registration to quantify protein co-expression. Here, the main challenge was to build a framework handling very large images with a combination of rigid and non-rigid image registration. Using multiplex fluorescence microscopy techniques, many different molecular markers can be used in parallel, and here we approached the challenge of deciphering cell classes based on marker combinations. We used ensembles of machine learning models to perform cell classification, both to increase performance over a single model and to get a measure of confidence of the predictions. We also used the resulting cell classes and locations as input to a graph neural network to learn cell neighborhoods that may be correlated with disease. Finally, the work leading to this thesis included the creation of an interactive visualization tool, TissUUmaps. Whole-slide tissue images are often enormous and can be associated with large numbers of data points, creating challenges which call for advanced methods in processing and visualization. We built TissUUmaps so that it could visualize millions of data points from in situ sequencing experiments and enable contextual study of gene expression directly in the tissue at cellular and sub-cellular resolution. We also used TissUUmaps for interactive image registration, overlay of regions of interest, and visualization of tissue and corresponding cancer grades produced by deep learning methods. The aforementioned methods and tools together provide a framework for analysing and visualizing vast and complex spatial tissue structures. These developments in understanding the spatial information of tissue in different diseases pave the way for new discoveries and improved treatment for patients.
  •  
47.
  • Solorzano, Leslie, 1989-, et al. (författare)
  • Machine learning for cell classification and neighborhood analysis in glioma tissue
  • 2021
  • Ingår i: Cytometry Part A. - : Wiley. - 1552-4922 .- 1552-4930. ; 99:12, s. 1176-1186
  • Tidskriftsartikel (refereegranskat)abstract
    • Multiplexed and spatially resolved single-cell analyses that intend to study tissue heterogeneity and cell organization invariably face as a first step the challenge of cell classification. Accuracy and reproducibility are important for the downstream process of counting cells, quantifying cell-cell interactions, and extracting information on disease-specific localized cell niches. Novel staining techniques make it possible to visualize and quantify large numbers of cell-specific molecular markers in parallel. However, due to variations in sample handling and artifacts from staining and scanning, cells of the same type may present different marker profiles both within and across samples. We address multiplexed immunofluorescence data from tissue microarrays of low-grade gliomas and present a methodology using two different machine learning architectures and features insensitive to illumination to perform cell classification. The fully automated cell classification provides a measure of confidence for the decision and requires a comparably small annotated data set for training, which can be created using freely available tools. Using the proposed method, we reached an accuracy of 83.1% on cell classification without the need for standardization of samples. Using our confidence measure, cells with low-confidence classifications could be excluded, pushing the classification accuracy to 94.5%. Next, we used the cell classification results to search for cell niches with an unsupervised learning approach based on graph neural networks. We show that the approach can re-detect specialized tissue niches in previously published data, and that our proposed cell classification leads to niche definitions that may be relevant for sub-groups of glioma, if applied to larger data sets.
  •  
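Entry 47 uses the agreement within an ensemble of classifiers as a confidence measure and excludes low-confidence cells, which raised classification accuracy from 83.1% to 94.5%. The sketch below illustrates that thresholding idea with a random forest's soft votes on synthetic marker features; the features, classes, and the 0.6 threshold are placeholders, not the published model.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-cell marker intensity features and cell classes.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)          # soft votes across the ensemble
pred = proba.argmax(axis=1)
confidence = proba.max(axis=1)

# Keep only confidently classified cells for downstream neighbourhood analysis.
keep = confidence >= 0.6
acc_all = (pred == y_te).mean()
acc_kept = (pred[keep] == y_te[keep]).mean()
print(f"accuracy all: {acc_all:.3f}, accuracy kept ({keep.mean():.0%}): {acc_kept:.3f}")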
48.
  • Solorzano, Leslie, 1989-, et al. (författare)
  • TissUUmaps : interactive visualization of large-scale spatial gene expression and tissue morphology data
  • 2020
  • Ingår i: Bioinformatics. - : OXFORD UNIV PRESS. - 1367-4803 .- 1367-4811 .- 1460-2059. ; 36:15, s. 4363-4365
  • Tidskriftsartikel (refereegranskat)abstract
    • Motivation: Visual assessment of scanned tissue samples and associated molecular markers, such as gene expression, requires easy interactive inspection at multiple resolutions. This requires smart handling of image pyramids and efficient distribution of different types of data across several levels of detail. Results: We present TissUUmaps, enabling fast visualization and exploration of millions of data points overlaying a tissue sample. TissUUmaps can be used either as a web service or locally on any computer, and regions of interest as well as local statistics can be extracted and shared among users.
  •  
49.
  • Solorzano, Leslie, 1989-, et al. (författare)
  • Towards automatic protein co-expression quantification in immunohistochemical TMA slides
  • 2021
  • Ingår i: IEEE journal of biomedical and health informatics. - : Institute of Electrical and Electronics Engineers (IEEE). - 2168-2194 .- 2168-2208. ; 25:2, s. 393-402
  • Tidskriftsartikel (refereegranskat)abstract
    • Immunohistochemical (IHC) analysis of tissue biopsies is currently used for clinical screening of solid cancers to assess protein expression. The large amount of image data produced from these tissue samples requires specialized computational pathology methods to perform integrative analysis. Even though proteins are traditionally studied independently, the study of protein co-expression may offer new insights relevant to patients' clinical and therapeutic decisions. To explore protein co-expression, we constructed a modular image analysis pipeline to spatially align tissue microarray (TMA) image slides, evaluate alignment quality, define tumor regions, and ultimately quantify protein expression, before and after tumor segmentation. The pipeline was built with open-source tools that can manage gigapixel slides. To evaluate the consensus between pathologist and computer, we characterized a cohort of 142 gastric cancer (GC) cases regarding the extent of E-cadherin and CD44v6 expression. We performed IHC analysis in consecutive TMA slides and compared the automated quantification with the pathologists' manual assessment. Our results show that automated quantification within tumor regions improves agreement with the pathologists' classification. A co-expression map was created to identify the cores co-expressing both proteins. The proposed pipeline provides not only computational tools that extend current pathology practice to the study of co-expression, but also a framework for merging data and transferring information in learning-based approaches to pathology.
  •  
50.
  • Sountoulidis, Alexandros, et al. (författare)
  • A topographic atlas defines developmental origins of cell heterogeneity in the human embryonic lung
  • 2023
  • Ingår i: Nature Cell Biology. - : Springer Nature. - 1465-7392 .- 1476-4679.
  • Tidskriftsartikel (refereegranskat)abstract
    • Sountoulidis et al. provide a spatial gene expression atlas of human embryonic lung during the first trimester of gestation and identify 83 cell identities corresponding to stable cell types or transitional states. The lung contains numerous specialized cell types with distinct roles in tissue function and integrity. To clarify the origins and mechanisms generating cell heterogeneity, we created a comprehensive topographic atlas of early human lung development. Here we report 83 cell states and several spatially resolved developmental trajectories and predict cell interactions within defined tissue niches. We integrated single-cell RNA sequencing and spatially resolved transcriptomics into a web-based, open platform for interactive exploration. We show distinct gene expression programmes, accompanying sequential events of cell differentiation and maturation of the secretory and neuroendocrine cell types in proximal epithelium. We define the origin of airway fibroblasts associated with airway smooth muscle in bronchovascular bundles and describe a trajectory of Schwann cell progenitors to intrinsic parasympathetic neurons controlling bronchoconstriction. Our atlas provides a rich resource for further research and a reference for defining deviations from homeostatic and repair mechanisms leading to pulmonary diseases.
  •  
Typ av publikation
tidskriftsartikel (28)
konferensbidrag (11)
doktorsavhandling (9)
annan publikation (8)
forskningsöversikt (2)
Typ av innehåll
refereegranskat (37)
övrigt vetenskapligt/konstnärligt (20)
Författare/redaktör
Wählby, Carolina, pr ... (58)
Solorzano, Leslie, 1 ... (13)
Sintorn, Ida-Maria, ... (10)
Spjuth, Ola, Profess ... (8)
Andersson, Axel (8)
Wieslander, Håkan (8)
Avenel, Christophe (7)
Partel, Gabriele, 19 ... (7)
Sladoje, Nataša (6)
Klemm, Anna H (6)
Nilsson, Mats (5)
Lindblad, Joakim (5)
Behanova, Andrea (5)
Wetzer, Elisabeth (5)
Hellander, Andreas (4)
Gupta, Ankit (4)
Harrison, Philip J (4)
Chelebian, Eduard (4)
Kartasalo, Kimmo (4)
Egevad, Lars (3)
Eklund, Martin (3)
Malmberg, Filip, 198 ... (3)
Sabirsh, Alan (3)
Lindberg, Johan (3)
Olsson, Henrik (3)
Klemm, Anna (3)
Samaratunga, Hemamal ... (3)
Tsuzuki, Toyonori (3)
Rantalainen, Mattias (3)
Delahunt, Brett (3)
Ruusuvuori, Pekka (3)
Lundeberg, Joakim (2)
Karlsson, Johan (2)
Carneiro, Fatima (2)
Carpenter, Anne E. (2)
Strömblad, Staffan (2)
Varma, Murali (2)
Broliden, Kristina (2)
Nysjö, Fredrik, 1985 ... (2)
Partel, Gabriele (2)
Edfeldt, Gabriella (2)
Tjernlund, Annelie (2)
Carreras-Puigvert, J ... (2)
Rietdijk, Jonne (2)
Zhou, Ming (2)
Matuszewski, Damian ... (2)
Hilscher, Markus M. (2)
Almeida, Raquel (2)
Georgiev, Polina (2)
Oxley, Jon (2)
Lärosäte
Uppsala universitet (58)
Karolinska Institutet (10)
Stockholms universitet (3)
Kungliga Tekniska Högskolan (2)
Göteborgs universitet (1)
Malmö universitet (1)
Språk
Engelska (58)
Forskningsämne (UKÄ/SCB)
Naturvetenskap (33)
Teknik (25)
Medicin och hälsovetenskap (21)
