SwePub
Search the SwePub database


Results list for the search "WFRF:(Sala Leonardo) srt2:(2023)"


  • Results 1-5 of 5
1.
2.
  • Buddenkotte, Thomas, et al. (author)
  • Calibrating ensembles for scalable uncertainty quantification in deep learning-based medical image segmentation
  • 2023
  • Published in: Computers in Biology and Medicine. - : Elsevier Ltd. - 0010-4825 .- 1879-0534. ; 163
  • Journal article (peer-reviewed). Abstract:
    • Uncertainty quantification in automated image analysis is highly desired in many applications. Typically, machine learning models in classification or segmentation are only developed to provide binary answers; however, quantifying the uncertainty of the models can play a critical role, for example in active learning or human–machine interaction. Uncertainty quantification is especially difficult when using deep learning-based models, which are the state-of-the-art in many imaging applications. Current uncertainty quantification approaches do not scale well to high-dimensional real-world problems. Scalable solutions often rely on classical techniques, such as dropout during inference or training ensembles of identical models with different random seeds, to obtain a posterior distribution. In this paper, we present the following contributions. First, we show that the classical approaches fail to approximate the classification probability. Second, we propose a scalable and intuitive framework for uncertainty quantification in medical image segmentation that yields measurements approximating the classification probability. Third, we suggest using k-fold cross-validation to overcome the need for held-out calibration data. Lastly, we motivate the adoption of our method in active learning, in creating pseudo-labels to learn from unlabeled images, and in human–machine collaboration.
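The ensemble averaging this abstract refers to (averaging per-model probabilities to approximate the per-pixel classification probability) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, the sigmoid activation and the array shapes are assumptions:

```python
import numpy as np

def ensemble_foreground_probability(logit_maps):
    """Average per-model sigmoid outputs to approximate the
    per-pixel foreground probability (illustrative sketch)."""
    # logit_maps: list of (H, W) arrays, one per ensemble member
    probs = [1.0 / (1.0 + np.exp(-lm)) for lm in logit_maps]  # sigmoid
    return np.mean(probs, axis=0)  # per-pixel mean probability

# Example: three hypothetical ensemble members scoring a 2x2 image
rng = np.random.default_rng(0)
maps = [rng.normal(size=(2, 2)) for _ in range(3)]
p = ensemble_foreground_probability(maps)
```

In a real setting each logit map would come from one trained model of the ensemble; calibrating these averaged probabilities (e.g. via cross-validation folds, as the abstract suggests) is a separate step not shown here.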
3.
  • Buddenkotte, Thomas, et al. (author)
  • Deep learning-based segmentation of multisite disease in ovarian cancer
  • 2023
  • Published in: European Radiology Experimental. - : Springer Nature. - 2509-9280. ; 7:1
  • Journal article (peer-reviewed). Abstract:
    • Purpose: To determine whether pelvic/ovarian and omental lesions of ovarian cancer can be reliably segmented on computed tomography (CT) using fully automated deep learning-based methods. Methods: A deep learning model for the two most common disease sites of high-grade serous ovarian cancer lesions (pelvis/ovaries and omentum) was developed and compared against the well-established “no-new-Net” framework and unrevised trainee radiologist segmentations. A total of 451 CT scans collected from four different institutions were used for training (n = 276), evaluation (n = 104) and testing (n = 71) of the methods. Performance was evaluated using the Dice similarity coefficient (DSC) and compared using a Wilcoxon test. Results: Our model outperformed no-new-Net for pelvic/ovarian lesions in cross-validation, on the evaluation set and on the test set by a significant margin (p = 4 × 10⁻⁷, 3 × 10⁻⁴ and 4 × 10⁻², respectively), and for omental lesions on the evaluation set (p = 1 × 10⁻³). Our model did not perform significantly differently from a trainee radiologist in segmenting pelvic/ovarian lesions (p = 0.371). On an independent test set, the model achieved a DSC of 71 ± 20 (mean ± standard deviation) for pelvic/ovarian and 61 ± 24 for omental lesions. Conclusion: Automated ovarian cancer segmentation on CT scans using deep neural networks is feasible and achieves performance close to that of a trainee-level radiologist for pelvic/ovarian lesions. Relevance statement: Automated segmentation of ovarian cancer may be used by clinicians for CT-based volumetric assessments and by researchers for building complex analysis pipelines. Key points: The first automated approach for pelvic/ovarian and omental ovarian cancer lesion segmentation on CT images has been presented. Automated segmentation of ovarian cancer lesions can be comparable with manual segmentation by trainee radiologists. Careful hyperparameter tuning can yield models that significantly outperform strong state-of-the-art baselines.
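The Dice similarity coefficient used for evaluation here has the standard form DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch (not the paper's evaluation code; the function name and the epsilon smoothing term are assumptions):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Identical masks score 1.0; disjoint masks score 0.0
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
score = dice_coefficient(a, b)  # 2*1 / (2+1) ≈ 0.667
```

The 71 ± 20 and 61 ± 24 figures in the abstract are this score reported on a 0-100 scale, averaged over test cases.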
4.
  • Milne, Christopher J., et al. (author)
  • Disentangling the evolution of electrons and holes in photoexcited ZnO nanoparticles
  • 2023
  • Published in: Structural Dynamics. - : American Institute of Physics (AIP). - 2329-7778. ; 10:6
  • Journal article (peer-reviewed). Abstract:
    • The evolution of charge carriers in photoexcited room-temperature ZnO nanoparticles in solution is investigated using ultrafast ultraviolet photoluminescence spectroscopy, ultrafast Zn K-edge absorption spectroscopy, and ab initio molecular dynamics (MD) simulations. The photoluminescence is excited at 4.66 eV, well above the band edge, and shows that electron cooling in the conduction band and exciton formation occur in <500 fs, in excellent agreement with theoretical predictions. The x-ray absorption measurements, obtained upon excitation close to the band edge at 3.49 eV, are sensitive to the migration and trapping of holes. They reveal that the 2 ps transient largely reproduces the previously reported transient obtained at 100 ps time delay in synchrotron studies. In addition, the x-ray absorption signal is found to rise in ~1.4 ps, which we attribute to the diffusion of holes through the lattice prior to their trapping at singly charged oxygen vacancies. Indeed, the MD simulations show that impulsive trapping of holes induces an ultrafast expansion of the cage of Zn atoms in <200 fs, followed by an oscillatory response at a frequency of ~100 cm⁻¹, which corresponds to a phonon mode of the system involving the Zn sub-lattice.
5.
  • Sanchez, Lorena Escudero, et al. (author)
  • Integrating Artificial Intelligence Tools in the Clinical Research Setting : The Ovarian Cancer Use Case
  • 2023
  • Published in: Diagnostics. - : MDPI. - 2075-4418. ; 13:17
  • Journal article (peer-reviewed). Abstract:
    • Artificial intelligence (AI) methods applied to healthcare problems have shown enormous potential to alleviate the burden on health services worldwide and to improve the accuracy and reproducibility of predictions. In particular, developments in computer vision are creating a paradigm shift in the analysis of radiological images, where AI tools are already capable of automatically detecting and precisely delineating tumours. However, such tools are generally developed in technical departments that remain siloed from the clinical settings where they would deliver real benefit. Significant effort is still needed to make these advancements available, first in academic clinical research and ultimately in the clinical setting. In this paper, we demonstrate a prototype pipeline, based entirely on open-source software and free of cost, that bridges this gap by simplifying the integration of tools and models developed within the AI community into the clinical research setting, providing an accessible platform with visualisation applications that allow end-users such as radiologists to view and interact with the outcome of these AI tools.
