SwePub
Search the SwePub database


Results list for the search "WFRF:(Sorkhei Moein)"

Search: WFRF:(Sorkhei Moein)

  • Results 1-6 of 6
1.
2.
  • Liu, Yue, et al. (author)
  • Selecting Women for Supplemental Breast Imaging using AI Biomarkers of Cancer Signs, Masking, and Risk
  • 2023
  • Other publication (other academic/artistic), abstract:
    • Background: Traditional mammographic density aids in determining the need for supplemental imaging by MRI or ultrasound. However, AI image analysis, considering more subtle and complex image features, may enable a more effective identification of women requiring supplemental imaging. Purpose: To assess if AISmartDensity, an AI-based score considering cancer signs, masking, and risk, surpasses traditional mammographic density in identifying women for supplemental imaging after negative screening mammography. Methods: This retrospective study included randomly selected breast cancer patients and healthy controls at Karolinska University Hospital between 2008 and 2015. Bootstrapping simulated a 0.2% interval cancer rate. We included previous exams for diagnosed women and all exams for controls. AISmartDensity had been developed using random mammograms from a population non-overlapping with the current study population. We evaluated the ability of AISmartDensity to identify, based on negative screening mammograms, women with interval cancer and next-round screen-detected cancer. It was compared to age and density models, with sensitivity and PPV calculated for women with the top 8% of scores, mimicking the proportion of the BI-RADS "extremely dense" category. Statistical significance was determined using Student's t-test. Results: The study involved 2043 women: 258 with breast cancer diagnosed within 3 years of a negative mammogram and 1785 healthy controls. Diagnosed women had a median age of 57 years (IQR 16) versus 53 years (IQR 15) for controls (p < .001). At the 92nd percentile, AISmartDensity identified 87 (33.67%) future cancers with a PPV of 1.68%, whereas mammographic density identified 34 (13.18%) with a PPV of 0.66% (p < .001). AISmartDensity identified 32% of interval and 36% of next-round cancers, versus mammographic density's 16% and 10%. The combined mammographic density and age model yielded an AUC of 0.60, significantly lower than AISmartDensity's 0.73 (p < .001). Conclusions: AISmartDensity, integrating cancer signs, masking, and risk, more effectively identified women for additional breast imaging than traditional age and density models. (A minimal sketch of the top-8% threshold evaluation follows this entry.)
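The evaluation described above reduces to flagging everyone whose score falls in the top 8% (at or above the 92nd percentile) and computing sensitivity and PPV within the flagged group. Below is a minimal sketch of that computation; the scores and labels are synthetic stand-ins for the study data, and nothing here reproduces the reported numbers.

```python
# Minimal sketch (not the authors' code) of a top-8% threshold evaluation.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort: 1 = cancer within 3 years of a negative screen.
labels = rng.binomial(1, 258 / 2043, size=2043)
scores = rng.normal(loc=0.8 * labels, scale=1.0)  # cases tend to score higher

# Flag the top 8% of scores (the 92nd percentile), mimicking the share
# of the BI-RADS "extremely dense" category.
threshold = np.percentile(scores, 92)
flagged = scores >= threshold

sensitivity = labels[flagged].sum() / labels.sum()  # flagged cases / all cases
ppv = labels[flagged].mean()                        # cases among the flagged
print(f"sensitivity at top 8%: {sensitivity:.1%}, PPV: {ppv:.2%}")
```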
3.
  • Liu, Yue, et al. (author)
  • Use of an AI Score Combining Cancer Signs, Masking, and Risk to Select Patients for Supplemental Breast Cancer Screening
  • 2024
  • In: Radiology. - Radiological Society of North America (RSNA). - ISSN 0033-8419, E-ISSN 1527-1315. ; 311:1
  • Journal article (peer-reviewed), abstract:
    • Background: Mammographic density measurements are used to identify patients who should undergo supplemental imaging for breast cancer detection, but artificial intelligence (AI) image analysis may be more effective. Purpose: To assess whether AISmartDensity, an AI-based score integrating cancer signs, masking, and risk, surpasses measurements of mammographic density in identifying patients for supplemental breast imaging after a negative screening mammogram. Materials and Methods: This retrospective study included randomly selected individuals who underwent screening mammography at Karolinska University Hospital between January 2008 and December 2015. The models in AISmartDensity were trained and validated using nonoverlapping data. The ability of AISmartDensity to identify future cancer in patients with a negative screening mammogram was evaluated and compared with that of mammographic density models. Sensitivity and positive predictive value (PPV) were calculated for the top 8% of scores, mimicking the proportion of patients in the Breast Imaging Reporting and Data System "extremely dense" category. Model performance was evaluated using area under the receiver operating characteristic curve (AUC) and was compared using the DeLong test. Results: The study population included 65 325 examinations (median patient age, 53 years [IQR, 47-62 years]): 64 870 examinations in healthy patients and 455 examinations in patients with breast cancer diagnosed within 3 years of a negative screening mammogram. The AUC for detecting subsequent cancers was 0.72 and 0.61 (P < .001) for AISmartDensity and the best-performing density model (age-adjusted dense area), respectively. For examinations with scores in the top 8%, AISmartDensity identified 152 of 455 (33%) future cancers with a PPV of 2.91%, whereas the best-performing density model (age-adjusted dense area) identified 57 of 455 (13%) future cancers with a PPV of 1.09% (P < .001). AISmartDensity identified 32% (41 of 130) and 34% (111 of 325) of interval and next-round screen-detected cancers, whereas the best-performing density model (dense area) identified 16% (21 of 130) and 9% (30 of 325), respectively. Conclusion: AISmartDensity, integrating cancer signs, masking, and risk, outperformed traditional density models in identifying patients for supplemental imaging after a negative screening mammogram. (An AUC-comparison sketch follows this entry.)
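The paper above ranks models by AUC and compares them with the DeLong test. scikit-learn has no DeLong implementation, so the sketch below substitutes a paired bootstrap on the AUC difference, a common alternative; the data are synthetic and every number is illustrative.

```python
# Sketch (not the study code) of comparing two scores by AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
labels = rng.binomial(1, 0.01, size=n)      # rare positives, as in screening
score_a = rng.normal(1.0 * labels, 1.0)     # stand-in for the stronger model
score_b = rng.normal(0.4 * labels, 1.0)     # stand-in for a density model

print("AUC A:", roc_auc_score(labels, score_a))
print("AUC B:", roc_auc_score(labels, score_b))

# Paired bootstrap over examinations: resample once, score both models on
# the same resample, and collect the AUC difference.
diffs = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)
    if labels[idx].min() == labels[idx].max():
        continue                            # AUC needs both classes present
    diffs.append(roc_auc_score(labels[idx], score_a[idx])
                 - roc_auc_score(labels[idx], score_b[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"95% CI for AUC difference: [{lo:.3f}, {hi:.3f}]")
```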
4.
  • Matsoukas, Christos, et al. (author)
  • What Makes Transfer Learning Work for Medical Images : Feature Reuse & Other Factors
  • 2022
  • In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). - Institute of Electrical and Electronics Engineers (IEEE). ; pp. 9215-9224
  • Conference paper (peer-reviewed), abstract:
    • Transfer learning is a standard technique to transfer knowledge from one domain to another. For applications in medical imaging, transfer from ImageNet has become the de facto approach, despite differences in the tasks and image characteristics between the domains. However, it is unclear what factors determine whether, and to what extent, transfer learning to the medical domain is useful. The long-standing assumption that features from the source domain get reused has recently been called into question. Through a series of experiments on several medical image benchmark datasets, we explore the relationship between transfer learning, data size, the capacity and inductive bias of the model, and the distance between the source and target domains. Our findings suggest that transfer learning is beneficial in most cases, and we characterize the important role feature reuse plays in its success. (A minimal fine-tuning sketch of this transfer recipe follows this entry.)
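The de facto recipe the paper analyzes, ImageNet initialization followed by fine-tuning on the medical task, looks roughly like the sketch below. This is a generic illustration, not the authors' code; the class count and the training data are assumptions.

```python
# Sketch of ImageNet transfer for a medical imaging task (PyTorch).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # hypothetical binary medical task

# ImageNet initialization; pass weights=None for a from-scratch baseline,
# which is the comparison the paper studies.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new task head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, targets: torch.Tensor) -> float:
    """One fine-tuning step; `images` are ImageNet-normalized batches."""
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```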
5.
  • Sorkhei, Moein, et al. (author)
  • CSAW-M : An Ordinal Classification Dataset for Benchmarking Mammographic Masking of Cancer
  • 2021
  • In: Conference on Neural Information Processing Systems (NeurIPS) – Datasets and Benchmarks Proceedings, 2021.
  • Conference paper (peer-reviewed), abstract:
    • Interval and large invasive breast cancers, which are associated with worse prognosis than other cancers, are usually detected at a late stage due to false-negative assessments of screening mammograms. The missed detection at screening time is commonly caused by the tumor being obscured by its surrounding breast tissues, a phenomenon called masking. To study and benchmark mammographic masking of cancer, in this work we introduce CSAW-M, the largest public mammographic dataset, collected from over 10,000 individuals and annotated with potential masking. In contrast to previous approaches, which measure breast image density as a proxy, our dataset directly provides annotations of masking potential assessed by five specialists. We also trained deep learning models on CSAW-M to estimate the masking level and showed that the estimated masking is significantly more predictive of screening participants diagnosed with interval and large invasive cancers, without being explicitly trained for these tasks, than its breast density counterparts. (An ordinal-classification sketch follows this entry.)
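CSAW-M frames masking estimation as ordinal classification. One standard recipe for this, sketched below, encodes level k of K as k-1 cumulative binary targets ("is the level greater than t?"); this is a generic ordinal-regression approach, not necessarily the authors' exact formulation, and the level count K is an assumption for illustration.

```python
# Sketch of cumulative-target ordinal classification (PyTorch).
import torch
import torch.nn as nn

K = 8  # assumed number of ordinal masking levels

class OrdinalHead(nn.Module):
    """Maps backbone features to K-1 binary 'level > t' logits."""
    def __init__(self, in_features: int, num_levels: int = K):
        super().__init__()
        self.fc = nn.Linear(in_features, num_levels - 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.fc(feats)

def ordinal_targets(levels: torch.Tensor, num_levels: int = K) -> torch.Tensor:
    """Level k in 1..K -> [1]*(k-1) + [0]*(K-k), one bit per threshold."""
    thresholds = torch.arange(1, num_levels)            # 1 .. K-1
    return (levels.unsqueeze(1) > thresholds).float()

def predict_level(logits: torch.Tensor) -> torch.Tensor:
    """Count thresholds passed with probability > 0.5, plus one."""
    return (torch.sigmoid(logits) > 0.5).sum(dim=1) + 1

# Train with BCE over the cumulative targets:
loss_fn = nn.BCEWithLogitsLoss()  # loss_fn(head(feats), ordinal_targets(levels))
```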
6.
  • Sorkhei, Mohammad Moein, 1995-, et al. (author)
  • Full-Glow : Fully conditional Glow for more realistic image generation
  • 2021
  • In: Pattern Recognition. - Cham, Switzerland: Springer Nature. ; pp. 697-711
  • Conference paper (peer-reviewed), abstract:
    • Autonomous agents, such as driverless cars, require large amounts of labeled visual data for their training. A viable approach for acquiring such data is to train a generative model on collected real data and then augment the collected real dataset with synthetic images from the model, generated with control over the scene layout and ground-truth labeling. In this paper we propose Full-Glow, a fully conditional Glow-based architecture for generating plausible and realistic images of novel street scenes given a semantic segmentation map indicating the scene layout. Benchmark comparisons show our model to outperform recent works in terms of the semantic segmentation performance of a pretrained PSPNet. This indicates that images from our model are, to a higher degree than those from other models, similar to real images of the same kinds of scenes and objects, making them suitable as training data for a visual semantic segmentation or object recognition system. (A conditional-coupling sketch follows this entry.)
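The core mechanism in a fully conditional flow such as Full-Glow is a coupling layer whose scale and shift networks see the conditioning input (here, features derived from the segmentation map) in addition to half of the activations. The block below is one illustrative conditional affine coupling, not the paper's full architecture; it omits actnorm, the invertible 1x1 convolutions, and the multi-scale structure of Glow.

```python
# Sketch of a conditional affine coupling layer (PyTorch).
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    def __init__(self, channels: int, cond_channels: int, hidden: int = 128):
        super().__init__()
        assert channels % 2 == 0, "even channel count assumed"
        half = channels // 2
        # Predicts log-scale and shift for x2 from (x1, condition features).
        self.net = nn.Sequential(
            nn.Conv2d(half + cond_channels, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor, cond: torch.Tensor):
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([x1, cond], dim=1)).chunk(2, dim=1)
        log_s = torch.tanh(log_s)                 # keep scales well-behaved
        y2 = x2 * torch.exp(log_s) + t
        logdet = log_s.flatten(1).sum(dim=1)      # Jacobian term for the loss
        return torch.cat([x1, y2], dim=1), logdet

    def inverse(self, y: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([y1, cond], dim=1)).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        x2 = (y2 - t) * torch.exp(-log_s)         # exact inversion
        return torch.cat([y1, x2], dim=1)
```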