SwePub
Search the SwePub database

Result list for the search "WFRF:(Öfverstedt Johan)"

  • Results 1-10 of 31
1.
  • Lu, Jiahao, et al. (author)
  • Image-to-Image Translation in Multimodal Image Registration: How Well Does It Work?
  • 2021
  • Conference paper (other academic/artistic), abstract:
    • Despite current advancements in the field of biomedical image processing, propelled by the deep learning revolution, the registration of multimodal microscopy images, due to its specific challenges, is still often performed manually by specialists. Image-to-image (I2I) translation aims at transforming images from one domain while preserving their contents so that they have the style of images from another domain. The recent success of I2I translation in computer vision applications and its growing use in biomedical areas provide a tempting possibility of transforming the multimodal registration problem into a, potentially easier, monomodal one. We have recently conducted an empirical study of the applicability of modern I2I translation methods for the task of multimodal biomedical image registration. We selected four Generative Adversarial Network (GAN)-based methods, which differ in terms of supervision requirement, design concepts, output quality and diversity, popularity, and scalability, and one contrastive representation learning method. The effectiveness of I2I translation for multimodal image registration is judged by comparing the performance of these five methods subsequently combined with two representative monomodal registration methods. We evaluate these method combinations on three publicly available multimodal datasets of increasing difficulty (including both cytological and histological images), and compare with the performance of registration by Mutual Information maximisation and one modern data-specific multimodal registration method. Our results suggest that, although I2I translation may be helpful when the modalities to register are clearly correlated, modalities which express distinctly different properties of the sample are not handled well enough. When less information is shared between the modalities, the I2I translation methods struggle to provide good predictions, which impairs the registration performance. They are all outperformed by the evaluated representation learning method, which aims to find an in-between representation, and also by the Mutual Information maximisation approach. We therefore conclude that current I2I approaches are, at this point, not suitable for multimodal biomedical image registration. Further details, including the code, datasets, and the complete experimental setup, can be found at https://github.com/MIDA-group/MultiRegEval.
2.
  • Lu, Jiahao, et al. (author)
  • Is image-to-image translation the panacea for multimodal image registration? : A comparative study
  • 2022
  • In: PLOS ONE. - : Public Library of Science (PLoS). - 1932-6203. ; 17:11
  • Journal article (peer-reviewed), abstract:
    • Despite current advancements in the field of biomedical image processing, propelled by the deep learning revolution, multimodal image registration, due to its several challenges, is still often performed manually by specialists. The recent success of image-to-image (I2I) translation in computer vision applications and its growing use in biomedical areas provide a tempting possibility of transforming the multimodal registration problem into a, potentially easier, monomodal one. We conduct an empirical study of the applicability of modern I2I translation methods for the task of rigid registration of multimodal biomedical and medical 2D and 3D images. We compare the performance of four Generative Adversarial Network (GAN)-based I2I translation methods and one contrastive representation learning method, subsequently combined with two representative monomodal registration methods, to judge the effectiveness of modality translation for multimodal image registration. We evaluate these method combinations on four publicly available multimodal (2D and 3D) datasets and compare with the performance of registration achieved by several well-known approaches acting directly on multimodal image data. Our results suggest that, although I2I translation may be helpful when the modalities to register are clearly correlated, registration of modalities which express distinctly different properties of the sample is not well handled by the I2I translation approach. The evaluated representation learning method, which aims to find abstract image-like representations of the information shared between the modalities, manages better, and so does the Mutual Information maximisation approach, acting directly on the original multimodal images. We share our complete experimental setup as open source (https://github.com/MIDA-group/MultiRegEval), including method implementations, evaluation code, and all datasets, for reproduction and further benchmarking.
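Several baselines in the abstracts above act directly on the multimodal images via Mutual Information (MI) maximisation. As a rough illustration of what that criterion measures, MI can be estimated from a joint intensity histogram. This is a minimal sketch with toy data, not the evaluated implementations; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def mutual_information(im_a, im_b, bins=32):
    # Estimate MI from the joint intensity histogram of two images.
    joint, _, _ = np.histogram2d(im_a.ravel(), im_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)        # marginal of image B
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# An MI-maximisation registration scores candidate transforms by this
# quantity and keeps the best; a well-aligned pair scores higher than
# an unrelated one, even when intensities are inversely related.
rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
aligned = 1.0 - fixed            # perfectly (inversely) correlated "modality"
shuffled = rng.random((64, 64))  # statistically independent image
```

The inverse-intensity pair illustrates why MI works across modalities: it rewards any deterministic intensity relationship, not just linear correlation.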
3.
  • Lu, Jiahao, et al. (author)
  • Is Image-to-Image Translation the Panacea for Multimodal Image Registration? A Comparative Study
  • 2021
  • Other publication (other academic/artistic), abstract:
    • Despite current advancements in the field of biomedical image processing, propelled by the deep learning revolution, multimodal image registration, due to its several challenges, is still often performed manually by specialists. The recent success of image-to-image (I2I) translation in computer vision applications and its growing use in biomedical areas provide a tempting possibility of transforming the multimodal registration problem into a, potentially easier, monomodal one. We conduct an empirical study of the applicability of modern I2I translation methods for the task of multimodal biomedical image registration. We compare the performance of four Generative Adversarial Network (GAN)-based methods and one contrastive representation learning method, subsequently combined with two representative monomodal registration methods, to judge the effectiveness of modality translation for multimodal image registration. We evaluate these method combinations on three publicly available multimodal datasets of increasing difficulty, and compare with the performance of registration by Mutual Information maximisation and one modern data-specific multimodal registration method. Our results suggest that, although I2I translation may be helpful when the modalities to register are clearly correlated, registration of modalities which express distinctly different properties of the sample is not well handled by the I2I translation approach. When less information is shared between the modalities, the I2I translation methods struggle to provide good predictions, which impairs the registration performance. The evaluated representation learning method, which aims to find an in-between representation, manages better, and so does the Mutual Information maximisation approach. We share our complete experimental setup as open source at https://github.com/Noodles-321/Registration.
4.
  • Nordling, Love, 1995-, et al. (author)
  • Contrastive Learning of Equivariant Image Representations for Multimodal Deformable Registration
  • 2023
  • In: 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI). - : Institute of Electrical and Electronics Engineers (IEEE). - 9781665473583 - 9781665473590
  • Conference paper (peer-reviewed), abstract:
    • We propose a method for multimodal deformable image registration which combines a powerful deep learning approach to generate CoMIRs, dense image-like representations of multimodal image pairs, with INSPIRE, a robust framework for monomodal deformable image registration. We introduce new equivariance constraints to improve the consistency of CoMIRs under deformation. We evaluate the method on three publicly available multimodal datasets: one remote sensing, one histological, and one cytological. The proposed method demonstrates general applicability and consistently outperforms the state-of-the-art registration tools elastix and VoxelMorph. We share the source code of the proposed method and the complete experimental setup as open source at: https://github.com/MIDA-group/CoMIR_INSPIRE.
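The equivariance constraints mentioned in the abstract above penalise the discrepancy between "represent, then transform" and "transform, then represent". The quantity being driven toward zero can be sketched in numpy as below; the pointwise `representation` stand-in is an illustrative assumption (it is exactly equivariant by construction), whereas a real CoMIR network is only approximately equivariant and is trained to reduce this error.

```python
import numpy as np

def representation(img):
    # Toy per-pixel "representation network": any pointwise map commutes
    # exactly with spatial transforms such as 90-degree rotation.
    return np.tanh(2.0 * img - 1.0)

def equivariance_error(f, img):
    # max |f(rot(img)) - rot(f(img))| -- the discrepancy an equivariance
    # constraint penalises during training.
    return float(np.abs(f(np.rot90(img)) - np.rot90(f(img))).max())

rng = np.random.default_rng(3)
img = rng.random((32, 32))
err = equivariance_error(representation, img)  # exactly 0 for a pointwise map
```

For a CNN with padding and pooling the error is nonzero, which is why it is added as a training constraint rather than guaranteed architecturally.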
5.
  • Nordling, Love, 1995-, et al. (author)
  • Multimodal deformable image registration using contrastive learning of equivariant image representations
  • 2023
  • Conference paper (other academic/artistic), abstract:
    • We propose a method for multimodal deformable image registration which combines a powerful deep learning approach to generate CoMIRs, dense image-like representations of multimodal image pairs, with INSPIRE, a robust framework for monomodal deformable image registration. We introduce new equivariance constraints to improve the consistency of CoMIRs under deformation. We evaluate the method on three publicly available multimodal datasets: one remote sensing, one histological, and one cytological. The proposed method demonstrates general applicability and consistently outperforms the state-of-the-art registration tools elastix and VoxelMorph. We share the source code of the proposed method and the complete experimental setup as open source at: https://github.com/MIDA-group/CoMIR_INSPIRE.
6.
7.
8.
  • Pielawski, Nicolas, et al. (author)
  • CoMIR: Contrastive Multimodal Image Representation for Registration
  • 2020
  • In: NeurIPS - 34th Conference on Neural Information Processing Systems.
  • Conference paper (peer-reviewed), abstract:
    • We propose contrastive coding to learn shared, dense image representations, referred to as CoMIRs (Contrastive Multimodal Image Representations). CoMIRs enable the registration of multimodal images where existing registration methods often fail due to a lack of sufficiently similar image structures. CoMIRs reduce the multimodal registration problem to a monomodal one, in which general intensity-based, as well as feature-based, registration algorithms can be applied. The method involves training one neural network per modality on aligned images, using a contrastive loss based on noise-contrastive estimation (InfoNCE). Unlike other contrastive coding methods, used for, e.g., classification, our approach generates image-like representations that contain the information shared between modalities. We introduce a novel, hyperparameter-free modification to InfoNCE, to enforce rotational equivariance of the learnt representations, a property essential to the registration task. We assess the extent of achieved rotational equivariance and the stability of the representations with respect to weight initialization, training set, and hyperparameter settings, on a remote sensing dataset of RGB and near-infrared images. We evaluate the learnt representations through registration of a biomedical dataset of bright-field and second-harmonic generation microscopy images; two modalities with very little apparent correlation. The proposed approach based on CoMIRs significantly outperforms registration of representations created by GAN-based image-to-image translation, as well as a state-of-the-art, application-specific method which takes additional knowledge about the data into account. Code is available at: https://github.com/MIDA-group/CoMIR.
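The contrastive loss named in the abstract above, InfoNCE, can be sketched in a few lines of numpy: each representation should match its paired counterpart (the positive) and repel the other items in the batch (the negatives). This is a minimal sketch under stated assumptions; the networks, batch construction, and the paper's hyperparameter-free rotational-equivariance modification are omitted, and the names and toy data are illustrative.

```python
import numpy as np

def info_nce(z_a, z_b, tau=0.1):
    # InfoNCE: row i of z_a should match row i of z_b (positive pair)
    # and be dissimilar to all other rows (negatives).
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = (z_a @ z_b.T) / tau                      # scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))        # -log p(positive | batch)

rng = np.random.default_rng(1)
z = rng.normal(size=(8, 16))
matched = info_nce(z, z + 0.01 * rng.normal(size=(8, 16)))  # near-identical pairs
random_pairs = info_nce(z, rng.normal(size=(8, 16)))        # unrelated pairs
```

Minimising this loss over two modality-specific networks is what pushes their outputs toward a shared, image-like representation.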
9.
  • Pielawski, Nicolas, et al. (author)
  • Global Parameter Optimization for Multimodal Biomedical Image Registration
  • Other publication (other academic/artistic), abstract:
    • Multimodal image registration and fusion is a highly relevant problem in a variety of application domains, from biomedical imaging and remote sensing to computer vision. However, combining images originating from different sources (e.g. microscopes or scanners) is challenging, as they are in different coordinate systems and their content may vary greatly. To align underlying structures in multiple modalities and fuse their complementary information, image registration is required. Methods for registration generally rely on a similarity or distance function between images and an optimization algorithm to find the geometric transformation between two images: translation and rotation for rigid registration. Global optimization can be applied to multimodal image registration such that the best transformation is guaranteed to be discovered given a large enough computational budget, eliminating the failure cases of local optimization algorithms converging to a local optimum. Recently, several methods for global multimodal image registration have been developed; however, they rely on a grid or random search to find the best orientation. We propose a framework using Bayesian optimization to find the optimal orientation between images, which combines the favorable properties of global optimization with the sophisticated parameter search of Bayesian optimization to accelerate the convergence rate. This manuscript presents preliminary results on the faster convergence rate of the Bayesian optimizer in comparison to random search on a small set of multimodal image pairs of brains acquired by positron emission tomography and magnetic resonance imaging.
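The grid and random search over orientation that this abstract contrasts with Bayesian optimization can be sketched as follows. The `score` function here is a toy stand-in for an image-similarity measure evaluated at each candidate rotation, and `true_angle` is a hypothetical misalignment; both are illustrative assumptions, not the manuscript's method.

```python
import numpy as np

def score(angle, true_angle=0.61):
    # Toy stand-in for a similarity measure: peaks when the candidate
    # rotation matches the (hypothetical) true misalignment.
    return np.cos(angle - true_angle)

# Grid search: exhaustive; accuracy is fixed by the grid resolution.
grid = np.linspace(-np.pi, np.pi, 721)  # 0.5-degree steps
best_grid = grid[np.argmax([score(a) for a in grid])]

# Random search: the sampling baseline the abstract compares against.
rng = np.random.default_rng(2)
samples = rng.uniform(-np.pi, np.pi, size=721)
best_rand = samples[np.argmax([score(a) for a in samples])]
```

Bayesian optimization replaces these blind evaluations with a surrogate model that proposes the next angle to try, which is where the claimed faster convergence comes from.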
10.
  • Solorzano, Leslie, 1989-, et al. (author)
  • Machine learning for cell classification and neighborhood analysis in glioma tissue
  • 2021
  • In: Cytometry Part A. - : Wiley. - 1552-4922 .- 1552-4930. ; 99:12, pp. 1176-1186
  • Journal article (peer-reviewed), abstract:
    • Multiplexed and spatially resolved single-cell analyses that intend to study tissue heterogeneity and cell organization invariably face as a first step the challenge of cell classification. Accuracy and reproducibility are important for the downstream process of counting cells, quantifying cell-cell interactions, and extracting information on disease-specific localized cell niches. Novel staining techniques make it possible to visualize and quantify large numbers of cell-specific molecular markers in parallel. However, due to variations in sample handling and artifacts from staining and scanning, cells of the same type may present different marker profiles both within and across samples. We address multiplexed immunofluorescence data from tissue microarrays of low-grade gliomas and present a methodology using two different machine learning architectures and features insensitive to illumination to perform cell classification. The fully automated cell classification provides a measure of confidence for the decision and requires a comparably small annotated data set for training, which can be created using freely available tools. Using the proposed method, we reached an accuracy of 83.1% on cell classification without the need for standardization of samples. Using our confidence measure, cells with low-confidence classifications could be excluded, pushing the classification accuracy to 94.5%. Next, we used the cell classification results to search for cell niches with an unsupervised learning approach based on graph neural networks. We show that the approach can re-detect specialized tissue niches in previously published data, and that our proposed cell classification leads to niche definitions that may be relevant for sub-groups of glioma, if applied to larger data sets.
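The niche analysis described above starts from the class composition of each cell's spatial neighbourhood. A toy sketch of such neighbourhood profiles (k-nearest-neighbour composition) follows; the function name and random data are illustrative assumptions, and the paper's actual approach applies unsupervised graph neural networks rather than this direct computation.

```python
import numpy as np

def neighborhood_profiles(coords, labels, n_classes, k=5):
    # For each cell, the class composition of its k nearest neighbours --
    # the kind of local feature a niche-detection step can cluster.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # exclude the cell itself
    knn = np.argsort(d, axis=1)[:, :k]           # indices of k nearest cells
    profiles = np.zeros((len(coords), n_classes))
    for i, nbrs in enumerate(knn):
        for j in nbrs:
            profiles[i, labels[j]] += 1.0 / k    # fraction of each class
    return profiles

rng = np.random.default_rng(4)
coords = rng.random((40, 2))                 # toy cell positions
labels = rng.integers(0, 3, size=40)         # toy cell-class assignments
p = neighborhood_profiles(coords, labels, n_classes=3)
```

Clustering such profiles (or richer graph-learned embeddings of them) groups cells whose surroundings look alike, i.e. candidate tissue niches.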