SwePub
Search the SwePub database

  Advanced search

Results list for the search "WFRF:(Iftekharuddin Khan M.)"

Search: WFRF:(Iftekharuddin Khan M.)

  • Results 1-2 of 2
1.
  • Mehta, Raghav, et al. (author)
  • QU-BraTS : MICCAI BraTS 2020 Challenge on Quantifying Uncertainty in Brain Tumor Segmentation - Analysis of Ranking Scores and Benchmarking Results
  • 2022
  • In: Journal of Machine Learning for Biomedical Imaging. - 2766-905X. ; pp. 1-54
  • Journal article (peer-reviewed), abstract:
    • Deep learning (DL) models have provided state-of-the-art performance in a wide variety of medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder the translation of DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way towards clinical translation. Recently, a number of uncertainty estimation methods have been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019-2020 task on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and low confidence in incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent participating teams of QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, and hence highlight the need for uncertainty quantification in medical image analyses. Our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS
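The threshold-based scoring idea the abstract describes (reward confident correct predictions, penalize confident incorrect and under-confident correct ones) can be sketched as below. This is a simplified illustration, not the official QU-BraTS evaluation (which is at the repository linked above); the function names and the penalty formulation are assumptions made for this sketch, and uncertainty maps are assumed to be scaled to [0, 100].

```python
import numpy as np

def filtered_dice(gt, pred, unc, tau):
    """Dice over the voxels whose uncertainty is <= tau (confident voxels only).

    gt, pred: boolean segmentation masks; unc: per-voxel uncertainty in [0, 100]
    (an assumed convention for this sketch)."""
    keep = unc <= tau
    g, p = gt[keep], pred[keep]
    denom = g.sum() + p.sum()
    return 1.0 if denom == 0 else 2.0 * (g & p).sum() / denom

def qu_score(gt, pred, unc, taus=(25, 50, 75, 100)):
    """Simplified QU-BraTS-style score (hypothetical formulation):
    average filtered Dice across uncertainty thresholds, minus the average
    fraction of *correct* voxels filtered out (under-confident correct
    assertions)."""
    dices, penalties = [], []
    correct = (gt == pred)
    for tau in taus:
        dices.append(filtered_dice(gt, pred, unc, tau))
        filtered = unc > tau  # voxels marked too uncertain at this threshold
        penalties.append((filtered & correct).sum() / max(correct.sum(), 1))
    return np.mean(dices) - np.mean(penalties)
```

With this formulation, a model that concentrates its uncertainty on its mistakes scores higher than one that is equally confident everywhere, which matches the ranking behaviour the abstract describes.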
2.
  • Karlsson, Jennie, et al. (author)
  • Classification of point-of-care ultrasound in breast imaging using deep learning
  • 2023
  • In: Medical Imaging 2023 : Computer-Aided Diagnosis. - 1605-7422 .- 2410-9045. - 9781510660359 ; 12465
  • Conference paper (peer-reviewed), abstract:
    • Early detection of breast cancer is important to reduce morbidity and mortality. Access to breast imaging is limited in low- and middle-income countries compared to high-income countries. This contributes to advanced-stage breast cancer presentation with poor survival. Pocket-sized portable ultrasound devices, also known as point-of-care ultrasound (POCUS), aided by decision support using deep learning-based algorithms for lesion classification, could be a cost-effective way to enable access to breast imaging in low-resource settings. A previous study using convolutional neural networks (CNNs) to classify breast cancer in conventional ultrasound (US) images showed promising results. The aim of the present study is to classify POCUS breast images. A POCUS data set containing 1100 breast images was collected. To increase the size of the data set, a Cycle-Consistent Adversarial Network (CycleGAN) was trained on US images to generate synthetic POCUS images. A CNN was implemented, trained, validated and tested on POCUS images. To improve performance, the CNN was trained with different combinations of data consisting of POCUS images, US images, CycleGAN-generated POCUS images and spatial augmentation. The best result was achieved by a CNN trained on a combination of POCUS images, CycleGAN-generated POCUS images and augmentation. This combination achieved a 95% confidence interval for AUC of 93.5%-96.6%.
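A 95% confidence interval for AUC, like the one reported above, is commonly obtained with a percentile bootstrap over the test cases. The sketch below assumes that approach, which the abstract does not specify; `auc` and `bootstrap_auc_ci` are illustrative names, and score ties are ignored in the rank-based AUC.

```python
import numpy as np

def auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney) statistic; ignores ties."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def bootstrap_auc_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for AUC: resample test cases with replacement
    and take the empirical alpha/2 and 1-alpha/2 quantiles."""
    rng = np.random.default_rng(seed)
    stats = []
    n = len(labels)
    while len(stats) < n_boot:
        idx = rng.integers(0, n, n)
        lab = labels[idx]
        if lab.min() == lab.max():  # resample must contain both classes
            continue
        stats.append(auc(lab, scores[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

On a real test set, the width of the interval shrinks with the number of cases; an interval as tight as 93.5%-96.6% reflects a reasonably large test set.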