SwePub

Result list for search "WFRF:(Lindblad Joakim Professor)"

Search: WFRF:(Lindblad Joakim Professor)

  • Results 1-7 of 7
1.
  • Fredin Haslum, Johan (author)
  • Machine Learning Methods for Image-based Phenotypic Profiling in Early Drug Discovery
  • 2024
  • Doctoral thesis (other academic/artistic), abstract:
    • In the search for new therapeutic treatments, strategies to make the drug discovery process more efficient are crucial. Image-based phenotypic profiling, with its millions of pictures of fluorescently stained cells, is a rich and effective means to capture the morphological effects of potential treatments on living systems. Within this complex data await biological insights and new therapeutic opportunities, but computational tools are needed to unlock them. This thesis examines the role of machine learning in improving the utility and analysis of phenotypic screening data. It focuses on challenges specific to this domain, such as the lack of reliable labels that are essential for supervised learning, as well as confounding factors present in the data that are often unavoidable due to experimental variability. We explore transfer learning to boost model generalization and robustness, analyzing the impact of domain distance, initialization, dataset size, and architecture on the effectiveness of applying natural-domain pre-trained weights to biomedical contexts. Building upon this, we delve into self-supervised pretraining for phenotypic image data, but find its direct application inadequate in this context, as it fails to differentiate between various biological effects. To overcome this, we develop new self-supervised learning strategies designed to enable the network to disregard confounding experimental noise, thus enhancing its ability to discern the impacts of various treatments. We further develop a technique that allows a model trained for phenotypic profiling to be adapted to new, unseen data without the need for any labels or supervised learning. Using this approach, a general phenotypic profiling model can be readily adapted to data from different sites. Beyond our technical contributions, we also show that bioactive compounds identified using the approaches outlined in this thesis have been subsequently confirmed in biological assays through replication in an industrial setting. Our findings indicate that while phenotypic data and biomedical imaging present complex challenges, machine learning techniques can play a pivotal role in making early drug discovery more efficient and effective. (A minimal transfer-learning sketch follows this entry.)
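
The thesis abstract above mentions applying natural-domain pre-trained weights to biomedical image data. The sketch below is a minimal, hypothetical illustration of that general transfer-learning idea in PyTorch/torchvision, not code from the thesis; the class count, optimizer settings, and training data are placeholders.

```python
# Minimal transfer-learning sketch (illustrative only, not the thesis code):
# start from ImageNet-pretrained weights and fine-tune on labelled
# microscopy images. Class count and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # hypothetical number of phenotype classes

# Load a ResNet-50 with natural-domain (ImageNet) pre-trained weights.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Replace the classification head for the new task.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of 3-channel images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```
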
2.
  • Karlsson Edlund, Patrick, 1975- (author)
  • Methods and models for 2D and 3D image analysis in microscopy, in particular for the study of muscle cells
  • 2008
  • Doctoral thesis (other academic/artistic), abstract:
    • Many research questions in biological research lead to numerous microscope images that need to be evaluated. Here, digital image cytometry, i.e., quantitative, automated or semi-automated analysis of the images, is an important, rapidly growing discipline. This thesis presents contributions to that field. The work has been carried out in close cooperation with biomedical research partners, successfully solving real-world problems. The world is 3D, and modern imaging methods such as confocal microscopy provide 3D images. Hence, a large part of the work has dealt with the development of new and improved methods for quantitative analysis of 3D images, in particular of fluorescently labeled skeletal muscle cells. A geometrical model for robust segmentation of skeletal muscle fibers was developed. Images of the multinucleated muscle cells were pre-processed using a novel spatially modulated transform, producing images with reduced complexity and facilitating easy nuclei segmentation. Fibers from several mammalian species were modeled and features were computed based on cell nuclei positions. Features such as myonuclear domain size and nearest neighbor distance were shown to correlate with body mass and femur length. Human muscle fibers from young and old males and females were related to fiber type and extracted features, where myonuclear domain size variations were shown to increase with age irrespective of fiber type and gender. A segmentation method for severely clustered point-like signals was developed and applied to images of fluorescent probes, quantifying the amount and location of mitochondrial DNA within cells. A synthetic cell model was developed to provide a controllable gold standard for performance evaluation of both expert manual and fully automated segmentations. The proposed method matches the correctness achieved by manual quantification. An interactive segmentation procedure was successfully applied to treated testicle sections of boar, showing how a common industrial plastic softener significantly affects testosterone concentrations. (A small nearest-neighbor-distance example follows this entry.)
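
The abstract above describes features computed from cell nuclei positions, such as nearest neighbor distance. The following is a small, self-contained illustration of how such a feature can be computed with a k-d tree; the coordinate array is a made-up placeholder and this is not the thesis implementation.

```python
# Illustrative sketch (not the thesis code): nearest-neighbor distances
# between nuclei centroids in 3D, using a k-d tree.
import numpy as np
from scipy.spatial import cKDTree

# Placeholder: (N, 3) array of nuclei centroid coordinates.
nuclei = np.random.default_rng(0).uniform(0.0, 100.0, size=(50, 3))

tree = cKDTree(nuclei)
# k=2 because the closest point to each nucleus is the nucleus itself.
distances, _ = tree.query(nuclei, k=2)
nearest_neighbor_dist = distances[:, 1]

print(f"mean nearest-neighbor distance: {nearest_neighbor_dist.mean():.2f}")
```
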
3.
  • Pielawski, Nicolas, et al. (author)
  • CoMIR: Contrastive Multimodal Image Representation for Registration
  • 2020
  • In: NeurIPS - 34th Conference on Neural Information Processing Systems.
  • Conference paper (peer-reviewed), abstract:
    • We propose contrastive coding to learn shared, dense image representations, referred to as CoMIRs (Contrastive Multimodal Image Representations). CoMIRs enable the registration of multimodal images where existing registration methods often fail due to a lack of sufficiently similar image structures. CoMIRs reduce the multimodal registration problem to a monomodal one, in which general intensity-based, as well as feature-based, registration algorithms can be applied. The method involves training one neural network per modality on aligned images, using a contrastive loss based on noise-contrastive estimation (InfoNCE). Unlike other contrastive coding methods used for, e.g., classification, our approach generates image-like representations that contain the information shared between modalities. We introduce a novel, hyperparameter-free modification to InfoNCE to enforce rotational equivariance of the learnt representations, a property essential to the registration task. We assess the extent of achieved rotational equivariance and the stability of the representations with respect to weight initialization, training set, and hyperparameter settings, on a remote sensing dataset of RGB and near-infrared images. We evaluate the learnt representations through registration of a biomedical dataset of bright-field and second-harmonic generation microscopy images, two modalities with very little apparent correlation. The proposed approach based on CoMIRs significantly outperforms registration of representations created by GAN-based image-to-image translation, as well as a state-of-the-art, application-specific method which takes additional knowledge about the data into account. Code is available at: https://github.com/MIDA-group/CoMIR. (An illustrative InfoNCE sketch follows this entry.)
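
The CoMIR abstract above describes training one network per modality with a contrastive loss based on InfoNCE. The sketch below shows a generic, batch-wise InfoNCE loss for paired embeddings from two modalities; it is illustrative only and does not include the paper's hyperparameter-free, rotation-equivariance modification. The temperature value is a placeholder.

```python
# Generic InfoNCE contrastive loss for paired embeddings from two modalities
# (illustrative sketch; the CoMIR paper modifies this basic formulation).
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings; row i of z1 is paired with row i of z2."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature              # (N, N) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    # Symmetric cross-entropy: each sample must identify its counterpart
    # among all other samples in the batch.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Random embeddings standing in for per-modality network outputs.
loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```
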
4.
  •  
5.
  • Wetzer, Elisabeth, et al. (author)
  • Contrastive Learning for Equivariant Multimodal Image Representations
  • 2021
  • Conference paper (other academic/artistic), abstract:
    • Combining different imaging modalities offers complementary information about the properties of the imaged specimen. Often these modalities need to be captured by different machines, which requires that the resulting images be matched and registered in order to map the corresponding signals to each other. This can be a very challenging task due to the varying appearance of the specimen in different sensors. We have recently developed a method which uses contrastive learning to find representations of both modalities, such that the images of different modalities are mapped into the same representational space. The learnt representations (referred to as CoMIRs) are abstract and very similar with respect to a selected similarity measure. There are requirements which these representations need to fulfil for downstream tasks such as registration, e.g. rotational equivariance or intensity similarity. We present a hyperparameter-free modification of the contrastive loss, which is based on InfoNCE, to produce equivariant, dense, image-like representations. These representations are similar enough to be considered in a common space, in which monomodal methods for registration can be exploited. (An illustrative equivariance check follows this entry.)
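
Rotational equivariance, mentioned in the abstract above, can be probed empirically: encoding a rotated image should give approximately the same result as rotating the encoded representation. The sketch below checks this for 90° rotations with an arbitrary stand-in encoder; it illustrates the property being measured, not the authors' evaluation code.

```python
# Illustrative check of rotational equivariance for 90-degree rotations:
# f(rotate(x)) should be close to rotate(f(x)). The "encoder" here is an
# arbitrary stand-in, not a trained CoMIR network.
import torch
import torch.nn as nn

encoder = nn.Conv2d(3, 1, kernel_size=3, padding=1)  # placeholder network

def equivariance_error(f: nn.Module, x: torch.Tensor, k: int = 1) -> float:
    """Mean absolute difference between f(rot(x)) and rot(f(x)) for a k*90° rotation."""
    with torch.no_grad():
        encode_after_rotation = f(torch.rot90(x, k, dims=(2, 3)))
        rotate_after_encoding = torch.rot90(f(x), k, dims=(2, 3))
    return (encode_after_rotation - rotate_after_encoding).abs().mean().item()

x = torch.randn(1, 3, 64, 64)
print(equivariance_error(encoder, x))  # close to 0 for an equivariant encoder
```
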
6.
  • Wetzer, Elisabeth, et al. (author)
  • Registration of Multimodal Microscopy Images using CoMIR – learned structural image representations
  • 2021
  • Conference paper (other academic/artistic), abstract:
    • Combined information from different imaging modalities enables an integral view of a specimen, offering complementary information about a diverse variety of its properties. To efficiently utilize such heterogeneous information, spatial correspondence between the acquired images has to be established. This process is referred to as image registration and is highly challenging due to the complexity, size, and variety of multimodal biomedical image data. We have recently proposed a method for multimodal image registration based on Contrastive Multimodal Image Representation (CoMIR). It reduces the challenging problem of multimodal registration to a simpler, monomodal one. The idea is to learn image-like representations for the input modalities using a contrastive loss based on InfoNCE. These representations are abstract and very similar for the input modalities; in fact, similar enough to be successfully registered. They are of the same spatial dimensions as the input images, and a transformation aligning the representations can further be applied to the corresponding input images, aligning them in their original modalities. This transformation can be found by common monomodal registration methods (e.g. based on SIFT or alpha-AMD; an illustrative sketch of such a feature-based step follows this entry). We have shown that the method succeeds on a particularly challenging dataset consisting of Bright-Field (BF) and Second-Harmonic Generation (SHG) tissue microarray core images, which have very different appearances and do not share many structures. For this data, alternative learning-based approaches, such as image-to-image translation, did not produce representations usable for registration. Both feature- and intensity-based rigid registration based on CoMIRs outperform even the state-of-the-art registration method specific to BF/SHG images. An appealing property of our proposed method is that it can handle large initial displacements. The method is not limited to BF and SHG images; it is applicable to any combination of input modalities. CoMIR requires very little aligned training data thanks to our data augmentation scheme. From an input image pair, it generates augmented patches as positive and negative samples, needed for the contrastive loss. For modalities which share sufficient structural similarities, the required aligned training data can be as little as one image pair. Further details and the code are available at https://github.com/MIDA-group/CoMIR
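
The abstract above notes that, once the two modalities have been mapped to similar representation images, a standard monomodal feature-based method can recover the transformation, which is then applied to the original images. The sketch below, using OpenCV rather than the authors' pipeline, illustrates such a feature-based rigid/similarity registration step; file names and the ratio-test threshold are placeholders.

```python
# Illustrative monomodal registration of two representation images with
# OpenCV SIFT + RANSAC (not the authors' implementation; file names are
# placeholders). The estimated transform is then applied to the original
# moving-modality image.
import cv2
import numpy as np

rep_fixed = cv2.imread("comir_modality_A.png", cv2.IMREAD_GRAYSCALE)
rep_moving = cv2.imread("comir_modality_B.png", cv2.IMREAD_GRAYSCALE)
original_moving = cv2.imread("modality_B_original.png")

sift = cv2.SIFT_create()
kp_fixed, des_fixed = sift.detectAndCompute(rep_fixed, None)
kp_moving, des_moving = sift.detectAndCompute(rep_moving, None)

# Ratio-test matching of SIFT descriptors (moving -> fixed).
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_moving, des_fixed, k=2)
        if m.distance < 0.75 * n.distance]

src = np.float32([kp_moving[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_fixed[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Similarity transform (rotation, translation, uniform scale) via RANSAC.
M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

# Warp the original moving image into the fixed image's frame.
h, w = rep_fixed.shape
registered = cv2.warpAffine(original_moving, M, (w, h))
```
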
7.
  •  