SwePub

Deep label fusion: A generalizable hybrid multi-atlas and deep convolutional neural network for medical image segmentation

Xie, Long (author)
University of Pennsylvania
Wisse, Laura E.M. (author)
Lund University, Diagnostic Radiology, Section V, Department of Clinical Sciences, Faculty of Medicine, Lund
Wang, Jiancong (author)
University of Pennsylvania
Ravikumar, Sadhana (author)
University of Pennsylvania
Khandelwal, Pulkit (author)
University of Pennsylvania
Glenn, Trevor (author)
University of Pennsylvania
Luther, Anica (author)
Lund University
Lim, Sydney (author)
University of Pennsylvania
Wolk, David A. (author)
University of Pennsylvania
Yushkevich, Paul A. (author)
University of Pennsylvania
Elsevier BV, 2023
English.
In: Medical Image Analysis. - Elsevier BV. - ISSN 1361-8415. - Vol. 83.
  • Journal article (peer-reviewed)
Abstract
Deep convolutional neural networks (DCNN) achieve very high accuracy in segmenting various anatomical structures in medical images but often suffer from relatively poor generalizability. Multi-atlas segmentation (MAS), while less accurate than DCNN in many applications, tends to generalize well to unseen datasets whose characteristics differ from those of the training dataset. Several groups have attempted to integrate the power of DCNN to learn complex data representations with the robustness of MAS to changes in image characteristics. However, these studies primarily focused on replacing individual components of MAS with DCNN models and reported marginal improvements in accuracy. In this study, we describe and evaluate a 3D end-to-end hybrid MAS and DCNN segmentation pipeline, called Deep Label Fusion (DLF). The DLF pipeline consists of two main components with learnable weights: a weighted-voting subnet that mimics the MAS algorithm and a fine-tuning subnet that corrects residual segmentation errors to improve final segmentation accuracy. We evaluate DLF on five datasets that represent a diversity of anatomical structures (medial temporal lobe subregions and lumbar vertebrae) and imaging modalities (multi-modality, multi-field-strength MRI and computed tomography). These experiments show that DLF achieves segmentation accuracy comparable to nnU-Net (Isensee et al., 2020), the state-of-the-art DCNN pipeline, when evaluated on a dataset with characteristics similar to the training datasets, while outperforming nnU-Net on tasks that require generalization to datasets with different characteristics (different MRI field strength or different patient population). DLF also consistently improves upon conventional MAS methods. In addition, we propose a modality augmentation strategy tailored for multimodal imaging and demonstrate that it improves the segmentation accuracy of learning-based methods, including DLF and DCNN, in missing-data scenarios at test time, while also increasing the interpretability of each individual modality's contribution.
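
The abstract outlines a two-stage architecture: a weighted-voting subnet that mimics multi-atlas label fusion, followed by a fine-tuning subnet that corrects residual errors. Below is a minimal illustrative sketch in PyTorch of how such a hybrid pipeline could be wired together. It is not the authors' implementation; the class and function names (DeepLabelFusionSketch, ConvBlock, modality_augment) are hypothetical, and it assumes the atlas images and label maps have already been warped into the target space by conventional deformable registration.

    # Illustrative sketch only, not the authors' code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConvBlock(nn.Module):
        # Two 3x3x3 conv layers with instance norm and ReLU.
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(in_ch, out_ch, 3, padding=1),
                nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
                nn.Conv3d(out_ch, out_ch, 3, padding=1),
                nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True))

        def forward(self, x):
            return self.net(x)

    class DeepLabelFusionSketch(nn.Module):
        # Hypothetical two-stage network: a weighted-voting subnet scores
        # each warped atlas per voxel (mimicking multi-atlas label fusion);
        # a fine-tuning subnet then corrects residual errors.
        def __init__(self, n_labels):
            super().__init__()
            self.n_labels = n_labels
            # Scores target/atlas similarity from concatenated intensities.
            self.vote_net = nn.Sequential(ConvBlock(2, 16),
                                          nn.Conv3d(16, 1, 1))
            # Refines the fused soft label map, given the target image.
            self.refine_net = nn.Sequential(ConvBlock(n_labels + 1, 32),
                                            nn.Conv3d(32, n_labels, 1))

        def forward(self, target, atlas_imgs, atlas_segs):
            # target: (B, 1, D, H, W); atlas_imgs: list of the same;
            # atlas_segs: list of (B, D, H, W) integer label maps, all
            # pre-warped into the target space by deformable registration.
            scores, onehots = [], []
            for img, seg in zip(atlas_imgs, atlas_segs):
                scores.append(self.vote_net(torch.cat([target, img], dim=1)))
                onehots.append(F.one_hot(seg.long(), self.n_labels)
                                .permute(0, 4, 1, 2, 3).float())
            # Softmax over the atlas dimension -> per-voxel vote weights.
            w = torch.softmax(torch.stack(scores), dim=0)  # (A,B,1,D,H,W)
            fused = (w * torch.stack(onehots)).sum(dim=0)  # soft label map
            # Stage two: correct residual errors given image context.
            return self.refine_net(torch.cat([fused, target], dim=1))

The output is a logit map over labels, so the whole pipeline could be trained end-to-end with a standard cross-entropy or Dice loss. The abstract also proposes a modality augmentation strategy for multimodal input. One common way to realize such a strategy (the paper's exact scheme may differ) is to randomly blank one modality channel during training, so the network learns to tolerate missing modalities at test time and the contribution of each modality can be probed individually:

    def modality_augment(x, p=0.5):
        # x: (B, M, D, H, W) stack of M modalities; with probability p,
        # zero out one randomly chosen modality per sample (hypothetical).
        x = x.clone()
        for b in range(x.shape[0]):
            if torch.rand(()).item() < p:
                m = int(torch.randint(x.shape[1], (1,)).item())
                x[b, m] = 0.0
        return x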

Subject headings

TEKNIK OCH TEKNOLOGIER  -- Medicinteknik -- Medicinsk bildbehandling (hsv//swe)
ENGINEERING AND TECHNOLOGY  -- Medical Engineering -- Medical Image Processing (hsv//eng)

Keyword

Deep learning
Generalization
Multi-atlas segmentation
Multimodal image analysis

Publication and Content Type

art (journal article)
ref (peer-reviewed)

