SwePub
Search the SwePub database


Result list for search "WFRF:(Öfverstedt Johan) srt2:(2021)"


  • Results 1-8 of 8
1.
  • Lu, Jiahao, et al. (author)
  • Image-to-Image Translation in Multimodal Image Registration: How Well Does It Work?
  • 2021
  • Conference paper (other academic/artistic) abstract:
    • Despite current advancements in the field of biomedical image processing, propelled by the deep learning revolution, the registration of multimodal microscopy images, due to its specific challenges, is still often performed manually by specialists. Image-to-image (I2I) translation aims to transform images from one domain so that they take on the style of images from another domain while preserving their content. The recent success of I2I translation in computer vision applications and its growing use in biomedical areas provide a tempting possibility of transforming the multimodal registration problem into a, potentially easier, monomodal one. We have recently conducted an empirical study of the applicability of modern I2I translation methods for the task of multimodal biomedical image registration. We selected four Generative Adversarial Network (GAN)-based methods, which differ in terms of supervision requirement, design concepts, output quality and diversity, popularity, and scalability, and one contrastive representation learning method. The effectiveness of I2I translation for multimodal image registration is judged by comparing the performance of these five methods subsequently combined with two representative monomodal registration methods. We evaluate these method combinations on three publicly available multimodal datasets of increasing difficulty (including both cytological and histological images), and compare with the performance of registration by Mutual Information maximisation and one modern data-specific multimodal registration method. Our results suggest that, although I2I translation may be helpful when the modalities to register are clearly correlated, modalities which express distinctly different properties of the sample are not handled well enough. When less information is shared between the modalities, the I2I translation methods struggle to provide good predictions, which impairs the registration performance. They are all outperformed by the evaluated representation learning method, which aims to find an in-between representation, and also by the Mutual Information maximisation approach. We therefore conclude that current I2I approaches are, at this point, not suitable for multimodal biomedical image registration. Further details, including the code, datasets and the complete experimental setup, can be found at https://github.com/MIDA-group/MultiRegEval.
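
The pipeline studied above can be summarised as: translate the moving image into the fixed modality with a trained I2I model, then register the result monomodally. A minimal sketch, assuming a hypothetical pretrained `translator` generator (not part of the released MultiRegEval code) and using OpenCV's ECC maximisation as the monomodal registration step:

```python
# Minimal sketch of the I2I-then-monomodal-registration pipeline evaluated above.
# `translator` is a hypothetical pretrained generator (e.g. a pix2pix/CycleGAN model)
# mapping the moving modality into the fixed modality; it is an assumption for
# illustration, not part of the cited work's released code.
import numpy as np
import cv2
import torch

def register_via_translation(fixed: np.ndarray, moving: np.ndarray,
                             translator: torch.nn.Module) -> np.ndarray:
    """Translate `moving` into the fixed modality, then estimate a rigid transform monomodally."""
    with torch.no_grad():
        x = torch.from_numpy(moving).float()[None, None]        # (1, 1, H, W)
        fake_fixed = translator(x).squeeze().cpu().numpy()      # moving image rendered in the fixed modality

    # Monomodal intensity-based rigid (Euclidean) alignment via OpenCV's ECC maximisation.
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(fixed.astype(np.float32), fake_fixed.astype(np.float32),
                                   warp, cv2.MOTION_EUCLIDEAN, criteria, None, 5)
    return warp  # 2x3 matrix aligning the (translated) moving image to `fixed`
```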
2.
  • Lu, Jiahao, et al. (author)
  • Is Image-to-Image Translation the Panacea for Multimodal Image Registration? A Comparative Study
  • 2021
  • Other publication (other academic/artistic) abstract:
    • Despite current advancements in the field of biomedical image processing, propelled by the deep learning revolution, multimodal image registration, due to its many challenges, is still often performed manually by specialists. The recent success of image-to-image (I2I) translation in computer vision applications and its growing use in biomedical areas provide a tempting possibility of transforming the multimodal registration problem into a, potentially easier, monomodal one. We conduct an empirical study of the applicability of modern I2I translation methods for the task of multimodal biomedical image registration. We compare the performance of four Generative Adversarial Network (GAN)-based methods and one contrastive representation learning method, subsequently combined with two representative monomodal registration methods, to judge the effectiveness of modality translation for multimodal image registration. We evaluate these method combinations on three publicly available multimodal datasets of increasing difficulty, and compare with the performance of registration by Mutual Information maximisation and one modern data-specific multimodal registration method. Our results suggest that, although I2I translation may be helpful when the modalities to register are clearly correlated, registration of modalities which express distinctly different properties of the sample is not well handled by the I2I translation approach. When less information is shared between the modalities, the I2I translation methods struggle to provide good predictions, which impairs the registration performance. The evaluated representation learning method, which aims to find an in-between representation, manages better, and so does the Mutual Information maximisation approach. We share our complete experimental setup as open source at https://github.com/Noodles-321/Registration.
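
Both this and the previous record use registration by Mutual Information (MI) maximisation as a baseline. As a point of reference, a minimal NumPy-only sketch of the MI similarity itself; a full registration additionally optimises this quantity over transformation parameters:

```python
# Histogram-based estimate of mutual information between two equally shaped images.
# Illustrative sketch of the MI baseline mentioned above, not the studies' code.
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """MI between images `a` and `b`, estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                      # joint probability estimate
    px = pxy.sum(axis=1, keepdims=True)            # marginal of a
    py = pxy.sum(axis=0, keepdims=True)            # marginal of b
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```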
3.
  • Solorzano, Leslie, 1989-, et al. (author)
  • Machine learning for cell classification and neighborhood analysis in glioma tissue
  • 2021
  • In: Cytometry Part A. - Wiley. - ISSN 1552-4922, 1552-4930. ; 99:12, pp. 1176-1186
  • Journal article (peer-reviewed) abstract:
    • Multiplexed and spatially resolved single-cell analyses that intend to study tissue heterogeneity and cell organization invariably face, as a first step, the challenge of cell classification. Accuracy and reproducibility are important for the downstream process of counting cells, quantifying cell-cell interactions, and extracting information on disease-specific localized cell niches. Novel staining techniques make it possible to visualize and quantify large numbers of cell-specific molecular markers in parallel. However, due to variations in sample handling and artifacts from staining and scanning, cells of the same type may present different marker profiles both within and across samples. We address multiplexed immunofluorescence data from tissue microarrays of low-grade gliomas and present a methodology using two different machine learning architectures and features insensitive to illumination to perform cell classification. The fully automated cell classification provides a measure of confidence for the decision and requires a comparatively small annotated data set for training, which can be created using freely available tools. Using the proposed method, we reached an accuracy of 83.1% on cell classification without the need for standardization of samples. Using our confidence measure, cells with low-confidence classifications could be excluded, pushing the classification accuracy to 94.5%. Next, we used the cell classification results to search for cell niches with an unsupervised learning approach based on graph neural networks. We show that the approach can re-detect specialized tissue niches in previously published data, and that our proposed cell classification leads to niche definitions that may be relevant for sub-groups of glioma, if applied to larger data sets.
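
A minimal sketch of the confidence-gating idea described above, assuming the classifier outputs per-cell class probabilities; the function name and threshold are illustrative, not taken from the paper:

```python
# Confidence-gated cell classification: predict per-cell class probabilities,
# then exclude cells whose top probability falls below a threshold before scoring.
import numpy as np

def classify_with_confidence(probs: np.ndarray, labels: np.ndarray, threshold: float = 0.9):
    """probs: (n_cells, n_classes) predicted probabilities; labels: (n_cells,) ground truth."""
    pred = probs.argmax(axis=1)                    # hard class decision per cell
    conf = probs.max(axis=1)                       # confidence of that decision
    keep = conf >= threshold                       # confident cells only
    acc_all = float((pred == labels).mean())
    acc_conf = float((pred[keep] == labels[keep]).mean()) if keep.any() else float("nan")
    return pred, keep, acc_all, acc_conf
```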
4.
  •  
5.
  • Wetzer, Elisabeth, et al. (author)
  • Contrastive Learning for Equivariant Multimodal Image Representations
  • 2021
  • Conference paper (other academic/artistic) abstract:
    • Combining different imaging modalities offers complementary information about the properties of the imaged specimen. Often these modalities need to be captured by different machines, which requires that the resulting images be matched and registered in order to map the corresponding signals to each other. This can be a very challenging task due to the varying appearance of the specimen in different sensors. We have recently developed a method which uses contrastive learning to find representations of both modalities, such that the images of different modalities are mapped into the same representational space. The learnt representations (referred to as CoMIRs) are abstract and very similar with respect to a selected similarity measure. There are requirements which these representations need to fulfil for downstream tasks such as registration, e.g. rotational equivariance or intensity similarity. We present a hyperparameter-free modification of the contrastive loss, which is based on InfoNCE, to produce equivariant, dense, image-like representations. These representations are similar enough to be considered in a common space, in which monomodal methods for registration can be exploited.
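
For orientation, a standard InfoNCE loss over paired patch embeddings from the two modalities could look as follows; this is a simplified sketch, whereas the work above modifies the loss to obtain equivariant, dense, image-like representations:

```python
# Standard symmetric InfoNCE contrastive loss for paired embeddings from two modalities.
# Simplified illustration; not the modified loss proposed in the record above.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of corresponding patches from modality 1 and modality 2."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                          # (N, N); diagonal entries are positive pairs
    targets = torch.arange(z1.shape[0], device=z1.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```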
6.
  • Wetzer, Elisabeth, et al. (author)
  • Registration of Multimodal Microscopy Images using CoMIR – learned structural image representations
  • 2021
  • Conference paper (other academic/artistic) abstract:
    • Combined information from different imaging modalities enables an integral view of a specimen, offering complementary information about a diverse variety of its properties. To efficiently utilize such heterogeneous information, spatial correspondence between acquired images has to be established. The process is referred to as image registration and is highly challenging due to the complexity, size, and variety of multimodal biomedical image data. We have recently proposed a method for multimodal image registration based on Contrastive Multimodal Image Representation (CoMIR). It reduces the challenging problem of multimodal registration to a simpler, monomodal one. The idea is to learn image-like representations for the input modalities using a contrastive loss based on InfoNCE. These representations are abstract, and very similar for the input modalities, in fact, similar enough to be successfully registered. They are of the same spatial dimensions as the input images, and a transformation aligning the representations can further be applied to the corresponding input images, aligning them in their original modalities. This transformation can be found by common monomodal registration methods (e.g. based on SIFT or alpha-AMD). We have shown that the method succeeds on a particularly challenging dataset consisting of Bright-Field (BF) and Second-Harmonic Generation (SHG) tissue microarray core images, which have very different appearances and do not share many structures. For this data, alternative learning-based approaches, such as image-to-image translation, did not produce representations usable for registration. Both feature- and intensity-based rigid registration based on CoMIRs outperform even the state-of-the-art registration method specific for BF/SHG images. An appealing property of our proposed method is that it can handle large initial displacements. The method is not limited to BF and SHG images; it is applicable to any combination of input modalities. CoMIR requires very little aligned training data thanks to our data augmentation scheme. From an input image pair, it generates augmented patches as positive and negative samples, needed for the contrastive loss. For modalities which share sufficient structural similarities, the required aligned training data can be as little as one image pair. Further details and the code are available at https://github.com/MIDA-group/CoMIR
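
A rough sketch of the SIFT-based monomodal registration step mentioned above, applied to two CoMIR-like representation images; this is an illustrative OpenCV version, not the released CoMIR pipeline:

```python
# Feature-based rigid/similarity alignment between two single-channel representation images:
# SIFT keypoints, ratio-test matching, robust partial-affine fit with RANSAC.
import numpy as np
import cv2

def rigid_from_representations(rep_fixed: np.ndarray, rep_moving: np.ndarray) -> np.ndarray:
    """rep_*: single-channel uint8 representation images; returns a 2x3 transform."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(rep_fixed, None)
    k2, d2 = sift.detectAndCompute(rep_moving, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d2, d1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe's ratio test
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    warp, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return warp                                                        # maps moving -> fixed coordinates
```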
7.
  • Öfverstedt, Johan, et al. (author)
  • Cross-Sim-NGF: FFT-Based Global Rigid Multimodal Alignment of Image Volumes using Normalized Gradient Fields
  • 2021
  • Other publication (other academic/artistic) abstract:
    • Multimodal image alignment involves finding spatial correspondences between volumes varying in appearance and structure. Automated alignment methods are often based on local optimization, which can be highly sensitive to initialization. We propose a global optimization method for rigid multimodal 3D image alignment, based on a novel efficient algorithm for computing the similarity of normalized gradient fields (NGF) in the frequency domain. We validate the method experimentally on a dataset comprising 20 brain volumes acquired in four modalities (T1w, Flair, CT, [18F] FDG PET), synthetically displaced with known transformations. The proposed method exhibits excellent performance on all six possible modality combinations, and outperforms all four reference methods by a large margin. The method is fast; a 3.4 Mvoxel global rigid alignment requires approximately 40 seconds of computation, and the proposed algorithm outperforms a direct algorithm for the same task by more than three orders of magnitude. An open-source implementation is provided.
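
For illustration, the normalized gradient field (NGF) similarity that the method maximises can be written in a few lines of NumPy; the FFT-based evaluation over all discrete displacements, which is the paper's contribution, is omitted here:

```python
# Normalized gradient field (NGF) similarity between two 3D volumes (single displacement).
# Illustrative sketch only; the cited method evaluates this efficiently for all
# discrete displacements in the frequency domain.
import numpy as np

def ngf(vol: np.ndarray, eps: float = 1e-2) -> np.ndarray:
    """Normalized gradient field of a 3D volume, shape (3, D, H, W)."""
    g = np.stack(np.gradient(vol.astype(np.float64)))
    return g / np.sqrt((g ** 2).sum(axis=0) + eps ** 2)

def ngf_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of squared pointwise inner products of the two NGFs (higher = better aligned)."""
    na, nb = ngf(a), ngf(b)
    return float((((na * nb).sum(axis=0)) ** 2).sum())
```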
8.
  • Öfverstedt, Johan, et al. (author)
  • Fast Computation of Mutual Information with Application to Global Multimodal Image Alignment of Micrographs
  • 2021
  • Conference paper (other academic/artistic) abstract:
    • Multimodal image alignment is the process of finding spatial correspondences between images formed by different imaging techniques, to facilitate heterogeneous data fusion and correlative analysis. Image alignment methods based on maximization of mutual information (MI) are well established and a part of most general-purpose multimodal image alignment packages. MI maximization is typically used in local optimization frameworks where an initial guess for the transformation parameters is required, and where the local region around this guess is explored, guided by the derivatives of MI. Local optimization often implies substantial robustness and usability challenges: (i) a good initial guess can be hard to find, (ii) the optimizer may fail to converge to the sought solution, and (iii) there tend to be many hyper-parameters to tune. These three challenges are likely to be present when applying MI to multimodal microscopy scenarios, due to sparseness of key structures, indistinct local features, and large displacements. We recently proposed a novel algorithm for computing MI between two images for all discrete displacements. This algorithm, based on cross-correlation computed in the frequency domain, is substantially more efficient than existing algorithms – it is several orders of magnitude faster. Applying the algorithm for a suitable set of rotation angles, we obtain a global optimization method for rigid multimodal image alignment that successfully addresses the three previously listed challenges of local maximization of MI. To evaluate the efficacy of the proposed method, we selected public benchmark datasets comprising aligned multimodal cytological image pairs (fluorescence and quantitative phase imaging (QPI)) and aligned multimodal histological image pairs (brightfield (BF) and second-harmonic generation (SHG) imaging), where each image was subjected to synthetic rigid transformations. We observed excellent performance, in terms of the rate of successful recovery of the known transformation, on both datasets, outperforming a number of existing methods by a wide margin, including local maximization of MI as well as recent deep learning-based approaches. We implemented the method in PyTorch to enable the use of GPU acceleration to speed up the runtime and facilitate practical applicability. The implementation and a reference to our full study are available at github.com/MIDA-group/globalign.
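
The core idea can be sketched as follows (illustrative NumPy, not the authors' PyTorch implementation): for one pair of quantised intensity levels, the joint-histogram count at all integer displacements simultaneously is a circular cross-correlation of two indicator images, computable with FFTs; combining such terms over all level pairs yields MI for every displacement:

```python
# Joint-histogram counts for one (level_a, level_b) pair, for all circular integer
# displacements at once, via FFT-based cross-correlation of binary indicator images.
# Simplified sketch of the trick behind the fast MI computation described above;
# see github.com/MIDA-group/globalign for the authors' full implementation.
import numpy as np

def joint_counts_all_shifts(a_q: np.ndarray, b_q: np.ndarray,
                            level_a: int, level_b: int) -> np.ndarray:
    """a_q, b_q: integer-quantised 2D images of equal shape; returns, for every circular
    displacement d, the number of pixels where a_q == level_a and shifted b_q == level_b."""
    ia = (a_q == level_a).astype(np.float64)
    ib = (b_q == level_b).astype(np.float64)
    # Circular cross-correlation via FFT: counts[d] = sum_x ia(x) * ib(x - d)
    counts = np.fft.ifft2(np.fft.fft2(ia) * np.conj(np.fft.fft2(ib))).real
    return np.rint(counts)
```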


 