SwePub
Search the SwePub database


Result list for the search "WFRF:(Sala Evis)"

Search: WFRF:(Sala Evis)

  • Results 1-4 of 4
1.
  • Buddenkotte, Thomas, et al. (author)
  • Calibrating ensembles for scalable uncertainty quantification in deep learning-based medical image segmentation
  • 2023
  • In: Computers in Biology and Medicine. - Elsevier Ltd. - 0010-4825, 1879-0534 ; 163
  • Journal article (peer-reviewed), abstract:
    • Uncertainty quantification in automated image analysis is highly desired in many applications. Typically, machine learning models in classification or segmentation are only developed to provide binary answers; however, quantifying the uncertainty of the models can play a critical role for example in active learning or human–machine interaction. Uncertainty quantification is especially difficult when using deep learning-based models, which are the state-of-the-art in many imaging applications. The current uncertainty quantification approaches do not scale well in high-dimensional real-world problems. Scalable solutions often rely on classical techniques, such as dropout, during inference or training ensembles of identical models with different random seeds to obtain a posterior distribution. In this paper, we present the following contributions. First, we show that the classical approaches fail to approximate the classification probability. Second, we propose a scalable and intuitive framework for uncertainty quantification in medical image segmentation that yields measurements that approximate the classification probability. Third, we suggest the usage of k-fold cross-validation to overcome the need for held-out calibration data. Lastly, we motivate the adoption of our method in active learning, creating pseudo-labels to learn from unlabeled images, and human–machine collaboration.
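The ensemble idea sketched in the abstract above (averaging per-voxel predictions from several models, for example one per cross-validation fold, to obtain probabilities that approximate the classification probability, plus a simple calibration check) can be illustrated with a minimal Python sketch. This is an assumption-laden illustration, not the authors' code; fold_models, the predict_proba helper, and the bin count are hypothetical.

import numpy as np

def ensemble_foreground_probability(member_probs):
    """Average per-voxel foreground probabilities over ensemble members.
    member_probs: list of arrays of identical shape, values in [0, 1]."""
    return np.mean(np.stack(member_probs, axis=0), axis=0)

def expected_calibration_error(probs, labels, n_bins=10):
    """Crude calibration check: compare the mean predicted probability with the
    observed foreground frequency inside equal-width probability bins."""
    probs, labels = probs.ravel(), labels.ravel().astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (probs >= lo) & (probs < hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(probs[in_bin].mean() - labels[in_bin].mean())
    return ece

# Hypothetical usage with one model per cross-validation fold:
# fold_probs = [model.predict_proba(scan) for model in fold_models]
# p = ensemble_foreground_probability(fold_probs)
# segmentation = p >= 0.5
# print(expected_calibration_error(p, ground_truth_mask))
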
2.
  • Buddenkotte, Thomas, et al. (author)
  • Deep learning-based segmentation of multisite disease in ovarian cancer
  • 2023
  • In: European Radiology Experimental. - Springer Nature. - 2509-9280 ; 7:1
  • Journal article (peer-reviewed), abstract:
    • Purpose: To determine if pelvic/ovarian and omental lesions of ovarian cancer can be reliably segmented on computed tomography (CT) using fully automated deep learning-based methods. Methods: A deep learning model for the two most common disease sites of high-grade serous ovarian cancer lesions (pelvis/ovaries and omentum) was developed and compared against the well-established "no-new-Net" framework and unrevised trainee radiologist segmentations. A total of 451 CT scans collected from four different institutions were used for training (n = 276), evaluation (n = 104) and testing (n = 71) of the methods. The performance was evaluated using the Dice similarity coefficient (DSC) and compared using a Wilcoxon test. Results: Our model outperformed no-new-Net for the pelvic/ovarian lesions in cross-validation, on the evaluation and test set by a significant margin (p values being 4 × 10⁻⁷, 3 × 10⁻⁴, and 4 × 10⁻², respectively), and for the omental lesions on the evaluation set (p = 1 × 10⁻³). Our model did not perform significantly differently in segmenting pelvic/ovarian lesions (p = 0.371) compared to a trainee radiologist. On an independent test set, the model achieved a DSC performance of 71 ± 20 (mean ± standard deviation) for pelvic/ovarian and 61 ± 24 for omental lesions. Conclusion: Automated ovarian cancer segmentation on CT scans using deep neural networks is feasible and achieves performance close to a trainee-level radiologist for pelvic/ovarian lesions. Relevance statement: Automated segmentation of ovarian cancer may be used by clinicians for CT-based volumetric assessments and researchers for building complex analysis pipelines. Key points: (1) The first automated approach for pelvic/ovarian and omental ovarian cancer lesion segmentation on CT images has been presented. (2) Automated segmentation of ovarian cancer lesions can be comparable with manual segmentation of trainee radiologists. (3) Careful hyperparameter tuning can provide models significantly outperforming strong state-of-the-art baselines.
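The evaluation described in the abstract above (per-case Dice similarity coefficients, compared between methods with a Wilcoxon test) can be sketched as follows. A minimal sketch under assumed inputs; preds_a, preds_b, and truths are hypothetical lists of binary masks, and this is not the authors' evaluation code.

import numpy as np
from scipy.stats import wilcoxon

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Hypothetical paired comparison of two models on the same test cases:
# dsc_a = [dice(p, t) for p, t in zip(preds_a, truths)]
# dsc_b = [dice(p, t) for p, t in zip(preds_b, truths)]
# stat, p_value = wilcoxon(dsc_a, dsc_b)  # paired, non-parametric test
# print(f"median DSC A={np.median(dsc_a):.3f}, B={np.median(dsc_b):.3f}, p={p_value:.3g}")
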
3.
  • Oprea-Lager, Daniela Elena, et al. (author)
  • European Association of Nuclear Medicine Focus 5 : Consensus on Molecular Imaging and Theranostics in Prostate Cancer
  • 2024
  • In: European Urology. - 0302-2838 ; 85:1, pp. 49-60
  • Journal article (peer-reviewed), abstract:
    • Background: In prostate cancer (PCa), questions remain on indications for prostate-specific membrane antigen (PSMA) positron emission tomography (PET) imaging and PSMA radioligand therapy, integration of advanced imaging in nomogram-based decision-making, dosimetry, and development of new theranostic applications. Objective: We aimed to critically review developments in molecular hybrid imaging and systemic radioligand therapy, to reach a multidisciplinary consensus on the current state of the art in PCa. Design, setting, and participants: The results of a systematic literature search informed a two-round Delphi process with a panel of 28 PCa experts in medical or radiation oncology, urology, radiology, medical physics, and nuclear medicine. The results were discussed and ratified in a consensus meeting. Outcome measurements and statistical analysis: Forty-eight statements were scored on a Likert agreement scale and six as ranking options. Agreement statements were analysed using the RAND appropriateness method. Ranking statements were analysed using weighted summed scores. Results and limitations: After two Delphi rounds, there was consensus on 42/48 (87.5%) of the statements. The expert panel recommends PSMA PET to be used for staging the majority of patients with unfavourable intermediate and high risk, and for restaging of suspected recurrent PCa. There was consensus that oligometastatic disease should be defined as up to five metastases, even using advanced imaging modalities. The group agreed that [177Lu]Lu-PSMA should not be administered only after progression to cabazitaxel and that [223Ra]RaCl2 remains a valid therapeutic option in bone-only metastatic castration-resistant PCa. Uncertainty remains on various topics, including the need for concordant findings on both [18F]FDG and PSMA PET prior to [177Lu]Lu-PSMA therapy. Conclusions: There was a high proportion of agreement among a panel of experts on the use of molecular imaging and theranostics in PCa. Although consensus statements cannot replace high-certainty evidence, these can aid in the interpretation and dissemination of best practice from centres of excellence to the wider clinical community. Patient summary: There are situations when dealing with prostate cancer (PCa) where both the doctors who diagnose and track the disease development and response to treatment, and those who give treatments are unsure about what the best course of action is. Examples include what methods they should use to obtain images of the cancer and what to do when the cancer has returned or spread. We reviewed published research studies and provided a summary to a panel of experts in imaging and treating PCa. We also used the research summary to develop a questionnaire whereby we asked the experts to state whether or not they agreed with a list of statements. We used these results to provide guidance to other health care professionals on how best to image men with PCa and what treatments to give, when, and in what order, based on the information the images provide.
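The analysis described above (Likert-scored statements assessed for agreement with the RAND appropriateness method, and ranking statements summarised by weighted summed scores) could look roughly like the sketch below. The agreement rule and the weighting scheme are simplified assumptions for illustration, not the panel's actual analysis.

import numpy as np

def simplified_agreement(ratings, disagree_fraction=1/3):
    """Very simplified RAND-style summary of one 1-9 Likert statement:
    agreement is declared when few panellists rate in the tertile
    opposite the panel median (thresholds here are assumptions)."""
    ratings = np.asarray(ratings)
    median = np.median(ratings)
    if median >= 7:
        outside = np.mean(ratings <= 3)
    elif median <= 3:
        outside = np.mean(ratings >= 7)
    else:
        return {"median": float(median), "agreement": False}
    return {"median": float(median), "agreement": bool(outside < disagree_fraction)}

def weighted_summed_scores(rankings):
    """Tally ranking statements: first choice earns n points, last earns 1."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] = scores.get(option, 0) + (n - position)
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

# Hypothetical usage:
# print(simplified_agreement([8, 9, 7, 8, 6, 9, 8]))
# print(weighted_summed_scores([["option A", "option B", "option C"],
#                               ["option A", "option C", "option B"]]))
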
4.
  • Sanchez, Lorena Escudero, et al. (author)
  • Integrating Artificial Intelligence Tools in the Clinical Research Setting : The Ovarian Cancer Use Case
  • 2023
  • In: Diagnostics. - MDPI. - 2075-4418 ; 13:17
  • Journal article (peer-reviewed), abstract:
    • Artificial intelligence (AI) methods applied to healthcare problems have shown enormous potential to alleviate the burden of health services worldwide and to improve the accuracy and reproducibility of predictions. In particular, developments in computer vision are creating a paradigm shift in the analysis of radiological images, where AI tools are already capable of automatically detecting and precisely delineating tumours. However, such tools are generally developed in technical departments that remain siloed from the settings where their use would bring real benefit. Significant effort is still needed to make these advancements available, first in academic clinical research and ultimately in the clinical setting. In this paper, we demonstrate a prototype pipeline, based entirely on open-source software and free of cost, that bridges this gap. It simplifies the integration of tools and models developed within the AI community into the clinical research setting and provides an accessible platform with visualisation applications that allow end-users such as radiologists to view and interact with the outcome of these AI tools.
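As an illustration of the kind of integration the abstract above describes, the sketch below loads a CT volume with the open-source SimpleITK library, runs a placeholder segmentation model, and writes the mask back out with the original image geometry so that a radiologist can overlay it in a standard viewer. The run_model function and file names are hypothetical; the paper's actual tool chain is not specified in this abstract.

import numpy as np
import SimpleITK as sitk

def segment_and_export(ct_path, mask_path):
    """Load a CT scan, run a segmentation model, and save the mask
    with the original spacing/origin/direction so overlays line up."""
    image = sitk.ReadImage(ct_path)             # e.g. a NIfTI file
    volume = sitk.GetArrayFromImage(image)      # numpy array (slices, rows, cols)
    mask = run_model(volume)                    # hypothetical model call, returns a binary array
    mask_image = sitk.GetImageFromArray(mask.astype(np.uint8))
    mask_image.CopyInformation(image)           # copy geometry from the source image
    sitk.WriteImage(mask_image, mask_path)

def run_model(volume):
    """Placeholder for an AI segmentation model; a trivial threshold here."""
    return (volume > 0).astype(np.uint8)

# segment_and_export("scan.nii.gz", "scan_tumour_mask.nii.gz")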