SwePub
Search the SwePub database


Result list for search "WFRF:(Jamtheim Gustafsson Christian)"

Search: WFRF:(Jamtheim Gustafsson Christian)

  • Results 1-10 of 21
1.
  • Brynolfsson, Patrik, et al. (author)
  • Tensor-valued diffusion magnetic resonance imaging in a radiotherapy setting
  • 2022
  • In: Physics and Imaging in Radiation Oncology. - : Elsevier BV. - 2405-6316. ; 24, pp. 144-151
  • Journal article (peer-reviewed). Abstract:
    • Background and purpose: Diagnostic information about cell density variations and microscopic tissue anisotropy can be gained from tensor-valued diffusion magnetic resonance imaging (MRI). These properties of tissue microstructure have the potential to become novel imaging biomarkers for radiotherapy response. However, tensor-valued diffusion encoding is more demanding than conventional encoding, and its compatibility with MR scanners that are dedicated to radiotherapy has not been established. Thus, our aim was to investigate the feasibility of tensor-valued diffusion MRI with radiotherapy-dedicated MR equipment. Material and methods: A tensor-valued diffusion protocol was implemented, and five healthy volunteers were scanned at different resolutions using a conventional head coil and a radiotherapy coil setup with fixation masks. Signal-to-noise ratio (SNR) was evaluated to assess the risk of signal bias due to the rectified noise floor. We also evaluated the repeatability and reproducibility of the microstructure parameters. One patient with brain metastasis was scanned to investigate the image quality and the transferability of the setup to diseased tissue. Results: A resolution of 3×3×3 mm³ provided images with SNR > 3 for 93% of the voxels using the radiotherapy coil setup. The parameter maps and repeatability characteristics were comparable to those observed with a conventional head coil. The patient evaluation demonstrated successful parameter analysis also in tumor tissue, with SNR > 3 for 93% of the voxels. Conclusion: We demonstrate that tensor-valued diffusion MRI is compatible with radiotherapy fixation masks and coil setup for investigations of microstructure parameters. The reported reproducibility may be used to plan future investigations of imaging biomarkers in brain cancer radiotherapy.
  •  
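Entry 1 uses the fraction of voxels with SNR > 3 as its noise criterion. A minimal Python sketch of that check, assuming a voxel-wise signal image, a scalar noise estimate, and a region-of-interest mask (all names here are illustrative, not the authors' implementation):

```python
import numpy as np

def fraction_above_snr(signal, noise_sigma, roi_mask, snr_threshold=3.0):
    """Fraction of voxels inside a region of interest whose SNR exceeds a
    threshold (e.g. 0.93 corresponds to the 93% quoted in the abstract).
    The voxel-wise SNR estimate is illustrative: signal divided by a scalar
    noise standard deviation."""
    snr = signal / noise_sigma                      # voxel-wise SNR map
    voxels = snr[roi_mask.astype(bool)]             # restrict to the ROI
    return float(np.mean(voxels > snr_threshold))
```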
2.
  • Gorgisyan, Jenny, et al. (author)
  • Evaluation of two commercial deep learning OAR segmentation models for prostate cancer treatment
  • 2022
  • Conference paper (peer-reviewed). Abstract:
    • Purpose or Objective: To evaluate two commercial, CE-labeled deep learning-based models for automatic organ-at-risk segmentation on planning CT images for prostate cancer radiotherapy. Model evaluation focused on geometric metrics and on estimating a potential time saving. Material and Methods: The evaluated models were RayStation 10B Deep Learning Segmentation (RaySearch Laboratories AB, Stockholm, Sweden) and MVision AI Segmentation Service (MVision, Helsinki, Finland), applied to CT images for a dataset of 54 male pelvis patients. The RaySearch model was re-trained with 44 clinic-specific patients (Skåne University Hospital, Lund, Sweden) for the femoral head structures to adjust the model to our specific delineation guidelines. The model was evaluated on 10 patients from the same clinic. The Dice similarity coefficient (DSC) and the Hausdorff distance (95th percentile) were computed for model evaluation, using an in-house developed Python script. The average time for manual and AI model delineations was recorded. Results: Average DSC scores and Hausdorff distances for all patients and both models are presented in Figure 1 and Table 1, respectively. The femoral head segmentations in the re-trained RaySearch model had increased overlap with our clinical data, with the DSC (mean ± 1 STD) for the right femoral head increasing from 0.55 ± 0.06 (n = 53) to 0.91 ± 0.02 (n = 10) and the mean Hausdorff distance (mm) decreasing from 55 ± 7 (n = 53) to 4 ± 1 (n = 10) (similar results for the left femoral head). The deviation for the femoral heads in the original RaySearch and MVision models occurred due to a difference in the femoral head segmentation guideline in the clinic-specific data, see Figure 2. The recorded manual delineation time was 13 minutes, compared to 0.5 minutes (RaySearch) and 1.4 minutes (MVision) for the AI models, manual correction not included. Conclusion: Both AI models demonstrate good segmentation performance for bladder and rectum. Clinic-specific training data (or data that complies with the clinic-specific delineation guideline) might be necessary to achieve segmentation results in accordance with the clinic-specific standard for some anatomical structures, such as the femoral heads in our case. The time saving was around 90%, not including manual correction.
  •  
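Entry 2 reports the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance computed with an in-house Python script. That script is not reproduced here; the following is an independent minimal sketch of the two metrics using a common mask-based approximation via distance transforms, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff95(a, b, spacing_mm):
    """95th-percentile symmetric Hausdorff distance (mm), approximated with
    Euclidean distance transforms: distances from voxels of one mask to the
    nearest voxel of the other, pooled in both directions."""
    a, b = a.astype(bool), b.astype(bool)
    dt_to_b = distance_transform_edt(~b, sampling=spacing_mm)  # distance to nearest b voxel
    dt_to_a = distance_transform_edt(~a, sampling=spacing_mm)  # distance to nearest a voxel
    d = np.concatenate([dt_to_b[a], dt_to_a[b]])
    return float(np.percentile(d, 95))
```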
3.
  • Gunnlaugsson, Adalsteinn, et al. (author)
  • Target definition in radiotherapy of prostate cancer using magnetic resonance imaging only workflow
  • 2019
  • In: Physics and Imaging in Radiation Oncology. - : Elsevier BV. - 2405-6316. ; 9, pp. 89-91
  • Journal article (peer-reviewed). Abstract:
    • In magnetic resonance (MR)-only radiotherapy, the target delineation needs to be performed without computed tomography (CT). We investigated, in thirteen patients with prostate cancer, how the clinical target volume (CTV) was affected when the target delineation procedure was changed from using both CT and MR images to using MR images only. The mean volume of the CTV(CT/MR) was 61.0 cm³, as compared to 49.9 cm³ from MR-only based target delineation, corresponding to an average decrease of 18%. Our results show that CTV(MR-only) was consistently smaller than CTV(CT/MR), which has to be taken into consideration before clinical commissioning of MR-only radiotherapy.
  •  
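As a quick check of the 18% figure in entry 3, using the two mean volumes quoted in the abstract:

(61.0 − 49.9) / 61.0 ≈ 0.182, i.e. an average decrease of about 18%.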
4.
  • Gustafsson, Christian Jamtheim, et al. (author)
  • Development and evaluation of a deep learning based artificial intelligence for automatic identification of gold fiducial markers in an MRI-only prostate radiotherapy workflow
  • 2020
  • In: Physics in Medicine and Biology. - : IOP Publishing. - 1361-6560 .- 0031-9155. ; 65:22
  • Journal article (peer-reviewed). Abstract:
    • Identification of prostate gold fiducial markers in magnetic resonance imaging (MRI) images is challenging when CT images are not available, due to misclassifications from intra-prostatic calcifications. It is also a time-consuming task, and automated identification methods have been suggested as an improvement on both counts. Multi-echo gradient echo (MEGRE) images have been utilized for manual fiducial identification with 100% detection accuracy. The aim was therefore to develop an automatic deep learning based method for fiducial identification in MRI images intended for MRI-only prostate radiotherapy. MEGRE images from 326 prostate cancer patients with fiducials were acquired on a 3T MRI scanner, post-processed with N4 bias correction, and the fiducial center of mass (CoM) was identified. A 9 mm radius sphere was created around the CoM as ground truth. A deep learning HighRes3DNet model for semantic segmentation was trained using image augmentation. The model was applied to 39 MRI-only patients, and 3D probability maps for fiducial location and segmentation were produced and spatially smoothed. In each of the three largest probability peaks, a 9 mm radius sphere was defined. Detection sensitivity and geometric accuracy were assessed. To raise awareness of potential false findings, a 'BeAware' score was developed, calculated from the total number and quality of the probability peaks. All datasets, annotations and source code used were made publicly available. The detection sensitivity for all fiducials was 97.4%. Thirty-six out of thirty-nine patients had all fiducial markers correctly identified. All three failed patients generated a user notification using the BeAware score. The mean absolute difference between the detected fiducial and ground truth CoM was 0.7 ± 0.9 [0 3.1] mm. A deep learning method for automatic fiducial identification in MRI images was developed and evaluated with state-of-the-art results. The BeAware score has the potential to notify the user regarding patients where the proposed method is uncertain.
  •  
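Entry 4 describes smoothing a 3D probability map and placing 9 mm radius spheres at the three largest probability peaks. A minimal sketch of that kind of post-processing, assuming a numpy probability volume and voxel sizes in mm (function, parameter values, and threshold are illustrative assumptions; the publicly released source code is the authoritative reference):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label, center_of_mass

def top_peak_spheres(prob_map, voxel_size_mm, k=3, radius_mm=9.0, threshold=0.5):
    """Smooth a fiducial probability map, keep the k strongest candidate
    regions, and return a mask with a radius_mm sphere around each region's
    centre of mass."""
    smoothed = gaussian_filter(prob_map, sigma=1.0)       # spatial smoothing
    components, n = label(smoothed > threshold)           # candidate regions
    # rank candidate regions by their peak probability
    peaks = sorted(((smoothed[components == i].max(), i) for i in range(1, n + 1)),
                   reverse=True)
    spheres = np.zeros(prob_map.shape, dtype=bool)
    grid = np.indices(prob_map.shape).astype(float)       # voxel coordinate grids
    for _, idx in peaks[:k]:
        com = center_of_mass(smoothed, components, idx)   # centre of mass (voxel units)
        dist_mm = np.sqrt(sum(((g - c) * s) ** 2
                              for g, c, s in zip(grid, com, voxel_size_mm)))
        spheres |= dist_mm <= radius_mm
    return spheres
```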
5.
  • Jamtheim Gustafsson, Christian, et al. (author)
  • Deep learning-based classification and structure name standardization for organ at risk and target delineations in prostate cancer radiotherapy
  • 2021
  • In: Journal of Applied Clinical Medical Physics. - : John Wiley & Sons. - 1526-9914. ; 22:12, pp. 51-63
  • Journal article (peer-reviewed). Abstract:
    • Radiotherapy (RT) datasets can suffer from variations in the annotation of organ at risk (OAR) and target structures. Annotation standards exist, but their description for prostate targets is limited. This restricts the use of such data for supervised machine learning purposes, as it requires properly annotated data. The aim of this work was to develop a modality-independent deep learning (DL) model for automatic classification and annotation of prostate RT DICOM structures. Delineated prostate OARs, support structures, and target structures (gross tumor volume [GTV]/clinical target volume [CTV]/planning target volume [PTV]), with or without separate vesicles and/or lymph nodes, were extracted as binary masks from 1854 patients. An image modality independent 2D InceptionResNetV2 classification network was trained with varying amounts of training data using four image input channels. Channels 1–3 consisted of orthogonal 2D projections from each individual binary structure. The fourth channel contained a summation of the other available binary structure masks. Structure classification performance was assessed in independent CT (n = 200 patients) and magnetic resonance imaging (MRI) (n = 40 patients) test datasets and in an external CT dataset (n = 99 patients) from another clinic. A weighted classification accuracy of 99.4% was achieved during training. The unweighted classification accuracy and the weighted average F1 score among different structures were 98.8% and 98.4% for the CT test dataset and 98.6% and 98.5% for the MRI test dataset, respectively. The external CT dataset yielded the corresponding results 98.4% and 98.7% when analyzed for trained structures only; results from the full dataset yielded 79.6% and 75.2%. Most misclassifications in the external CT dataset occurred because multiple CTVs and PTVs were fused together, which was not included in the training data. Our proposed DL-based method for automated renaming and standardization of prostate radiotherapy annotations shows great potential. Clinic-specific contouring standards, however, need to be represented in the training data for successful use. Source code is available at https://github.com/jamtheim/DicomRTStructRenamerPublic
  •  
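Entry 5 describes a four-channel 2D input: three orthogonal projections of one binary structure mask plus a channel summarizing the remaining structures. A minimal sketch of how such an input could be assembled, assuming (z, y, x) binary masks and a fixed output grid (the resizing strategy and all names are assumptions, not the published implementation; see the linked repository for the authors' code):

```python
import numpy as np
from scipy.ndimage import zoom

def build_classifier_input(structure_mask, other_masks, out_size=(256, 256)):
    """Assemble a four-channel 2D input from binary (z, y, x) masks:
    channels 1-3 are orthogonal maximum-intensity projections of one
    structure, channel 4 is an axial projection of the summed remaining
    structures."""
    def project(volume, axis):
        proj = volume.max(axis=axis).astype(np.float32)
        factors = (out_size[0] / proj.shape[0], out_size[1] / proj.shape[1])
        return zoom(proj, factors, order=0)   # nearest-neighbour resize to a common grid

    context = np.sum(other_masks, axis=0) if len(other_masks) else np.zeros_like(structure_mask)
    channels = [project(structure_mask, 0),   # axial view
                project(structure_mask, 1),   # coronal view
                project(structure_mask, 2),   # sagittal view
                project(context, 0)]          # summed-structures context channel
    return np.stack(channels, axis=-1)        # (H, W, 4) array for the 2D classifier
```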
6.
  •  
7.
  • Kahraman, Ali Teymur, et al. (author)
  • A Simple End-to-End Computer-Aided Detection Pipeline for Trained Deep Learning Models
  • 2024
  • In: Engineering of Computer-Based Systems : 8th International Conference, ECBS 2023, Proceedings. - 0302-9743 .- 1611-3349. - 9783031492518 ; 14390 LNCS, pp. 259-262
  • Conference paper (peer-reviewed). Abstract:
    • Recently, there has been a significant rise in research and development focused on deep learning (DL) models within healthcare. This trend arises from the availability of extensive medical imaging data and notable advances in graphics processing unit (GPU) computational capabilities. Trained DL models show promise in supporting clinicians with tasks like image segmentation and classification. However, advancement of these models into clinical validation remains limited due to two key factors. Firstly, DL models are trained in off-premises environments by DL experts using Unix-like operating systems (OS). These systems rely on multiple libraries and third-party components, demanding complex installations. Secondly, the absence of a user-friendly graphical interface for model outputs complicates validation by clinicians. Here, we introduce a conceptual Computer-Aided Detection (CAD) pipeline designed to address these two issues and enable non-AI experts, such as clinicians, to use trained DL models offline on Windows OS. The pipeline divides tasks between DL experts and clinicians: the experts handle model development, training, inference mechanisms, creation of Grayscale Softcopy Presentation State (GSPS) objects, and containerization for deployment, while the clinicians execute a simple script to install the necessary software and dependencies and can then use a universal image viewer to analyze results generated by the models. This paper illustrates the pipeline's effectiveness through a case study on pulmonary embolism detection, showcasing successful deployment on a local workstation by an in-house radiologist. By simplifying model deployment and making it accessible to non-AI experts, this CAD pipeline bridges the gap between technical development and practical application, promising broader healthcare applications.
  •  
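Entry 7 outlines a containerized hand-off in which clinicians run a simple script against a trained model. As a rough illustration of that idea only (the container image name, paths, and CLI flags below are invented placeholders, not part of the published pipeline):

```python
"""Hypothetical clinician-side launcher: pulls a prebuilt inference container
and runs it against a local DICOM folder, writing model outputs alongside the
input series. All names and paths are illustrative assumptions."""
import subprocess
from pathlib import Path

IMAGE = "registry.example.org/pe-detection:latest"    # placeholder container image
dicom_dir = Path(r"C:\cases\incoming_series")         # local folder with the CT series

# Mount the DICOM folder into the container and run the packaged inference entry point.
subprocess.run(
    ["docker", "run", "--rm",
     "-v", f"{dicom_dir}:/data",
     IMAGE, "--input", "/data", "--output", "/data/cad_output"],
    check=True,
)
print("Results written to", dicom_dir / "cad_output")
```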
8.
  • Lempart, Michael, et al. (author)
  • A deeply supervised convolutional neural network ensemble for multilabel segmentation of pelvic OARs
  • 2021
  • In: Radiotherapy and Oncology. - 1879-0887. ; 161:Suppl 1, pp. 1417-1418
  • Conference paper (other academic/artistic). Abstract:
    • Accurate delineation of organs at risk (OAR) is a crucial step in radiation therapy (RT) treatment planning, but it is a manual and time-consuming process. Deep learning-based methods have shown promising results for medical image segmentation and can be used to accelerate this task. Nevertheless, they are rarely applied to the complex structures found in the pelvic region, where manual segmentation can be difficult, costly, and not always feasible. The aim of this study was to train and validate a model, based on a modified U-Net architecture, for automated and improved multilabel segmentation of 10 pelvic OAR structures (total bone marrow, lower pelvis bone marrow, iliac bone marrow, lumbosacral bone marrow, bowel cavity, bowel, small bowel, large bowel, rectum, and bladder).
  •  
9.
  • Lempart, Michael, et al. (author)
  • Deep learning-based classification of organs at risk and delineation guideline in pelvic cancer radiation therapy
  • 2023
  • In: Journal of Applied Clinical Medical Physics. - 1526-9914. ; 24:9
  • Journal article (peer-reviewed). Abstract:
    • Deep learning (DL) models for radiation therapy (RT) image segmentation require accurately annotated training data. Multiple organ delineation guidelines exist; however, information on which guideline was used is not provided with the delineation. Extraction of training data with coherent guidelines can therefore be challenging. We present a supervised classification method for pelvis structure delineations, in which bowel cavity, femoral heads, bladder, and rectum data, following two guidelines, were classified. The impact on DL-based segmentation quality of using mixed-guideline training data was also demonstrated. Bowel cavity was manually delineated on CT images for anal cancer patients (n = 170) according to the Devisetty and RTOG guidelines. The DL segmentation quality obtained from training data with coherent or mixed guidelines was investigated. A supervised 3D squeeze-and-excitation SENet-154 model was trained to classify the two bowel cavity delineation guidelines. In addition, a pelvis CT dataset with manual delineations from prostate cancer patients (n = 1854) was used, where data with an alternative guideline for femoral heads, rectum, and bladder were generated using commercial software. The model was evaluated on internal (n = 200) and external test data (n = 99). Using mixed, compared to coherent, delineation guideline training data, the mean Dice score decreased by 3 percentage points, the mean Hausdorff distance (95%) increased by 5 mm, and the mean surface distance (MSD) increased by 1 mm. The classification of bowel cavity test data achieved 99.8% unweighted classification accuracy, 99.9% macro average precision, 97.2% macro average recall, and 98.5% macro average F1. The corresponding metrics for the internal pelvis test data were all 99% or above, and for the external pelvis test data they were 96.3%, 96.6%, 93.3%, and 94.6%. Impaired segmentation performance was observed for training data with mixed guidelines. The DL delineation classification models achieved excellent results on internal and external test data. This can facilitate automated guideline-specific data extraction while avoiding the need for consistent and correct structure labels.
  •  
10.
  • Lempart, Michael, et al. (author)
  • Pelvic U-Net : multi-label semantic segmentation of pelvic organs at risk for radiation therapy anal cancer patients using a deeply supervised shuffle attention convolutional neural network
  • 2022
  • In: Radiation Oncology. - : Springer Science and Business Media LLC. - 1748-717X. ; 17:1
  • Journal article (peer-reviewed). Abstract:
    • Background: Delineation of organs at risk (OAR) for anal cancer radiation therapy treatment planning is a manual and time-consuming process. Deep learning-based methods can accelerate and partially automate this task. The aim of this study was to develop and evaluate a deep learning model for automated and improved segmentation of OAR in the pelvic region. Methods: A 3D, deeply supervised U-Net architecture with shuffle attention, referred to as Pelvic U-Net, was trained on 143 computed tomography (CT) volumes to segment OAR in the pelvic region, such as total bone marrow, rectum, bladder, and bowel structures. Model predictions were evaluated on an independent test dataset (n = 15) using the Dice similarity coefficient (DSC), the 95th percentile of the Hausdorff distance (HD95), and the mean surface distance (MSD). In addition, three experienced radiation oncologists rated model predictions on a scale from 1 to 4 (excellent, good, acceptable, not acceptable). Model performance was also evaluated with respect to segmentation time, by comparing complete manual delineation time against model prediction time without and with manual correction of the predictions. Furthermore, dosimetric implications for treatment plans were evaluated using different dose-volume histogram (DVH) indices. Results: Without any manual corrections, mean DSC values of 97%, 87% and 94% were found for total bone marrow, rectum, and bladder. Mean DSC values for bowel cavity, all bowel, small bowel, and large bowel were 95%, 91%, 87% and 81%, respectively. Total bone marrow, bladder, and bowel cavity segmentations derived from our model were rated excellent (89%, 93%, 42%), good (9%, 5%, 42%), or acceptable (2%, 2%, 16%) on average. For almost all the evaluated DVH indices, no significant difference between model predictions and manual delineations was found. Delineation time per patient could be reduced from 40 to 12 min, including manual corrections of model predictions, and to 4 min without corrections. Conclusions: Our Pelvic U-Net led to credible and clinically applicable OAR segmentations and showed improved performance compared to previous studies. Even though manual adjustments were needed for some predicted structures, segmentation time could be reduced by 70% on average. This allows for an accelerated radiation therapy treatment planning workflow for anal cancer patients.
  •  
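As a quick check of the roughly 70% time reduction quoted in entry 10, using the delineation times given in the abstract (40 min fully manual versus 12 min with corrected model predictions):

(40 − 12) / 40 = 0.70, i.e. a 70% reduction on average.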
Publication type
journal article (15)
conference paper (5)
doctoral thesis (1)
Content type
peer-reviewed (19)
other academic/artistic (2)
Author/editor
Jamtheim Gustafsson, ... (17)
Olsson, Lars E (16)
Scherman, Jonas (8)
Persson, Emilia (8)
Lempart, Michael (6)
Alkner, Sara (6)
Lerner, Minna (6)
Gunnlaugsson, Adalst ... (6)
Medin, Joakim (4)
Gustafsson, Christia ... (4)
Munck af Rosenschöld ... (3)
Ceberg, Sofie (3)
Bäck, Sven (3)
Nilsson, Mikael (3)
Swärd, Johan (2)
Nilsson, Per (2)
Engelholm, Silke (2)
Brynolfsson, Patrik (2)
Nilsson, Martin P. (2)
Af Wetterstedt, Sach ... (2)
Ambolt, Petra (2)
Seceleanu, Cristina (1)
Wieslander, Elinore (1)
Jakobsson, Andreas (1)
Adalbjörnsson, Stefa ... (1)
Nyholm, Tufve (1)
Kjellén, Elisabeth (1)
Adrian, Gabriel (1)
Nilsson, Martin (1)
Szczepankiewicz, Fil ... (1)
Nilsson, Markus (1)
Sjöblom, Tobias (1)
Thellenberg-Karlsson ... (1)
Fridhammar, Adam (1)
Maly Sundgren, Pia (1)
Toumpanakis, Dimitri ... (1)
Benedek, Hunor (1)
Jonsson, Joakim, 198 ... (1)
Fridenfalk, Mikael (1)
Siversson, Carl (1)
Mannerberg, Annika (1)
Gorgisyan, Jenny (1)
Emin, Sevgi (1)
Engleson, Jens (1)
Bengtsson, Ida (1)
Hjalte, Frida (1)
Svanberg, Niklas (1)
Kahraman, Ali Teymur (1)
Fröding, Tomas (1)
Kofroň, Jan (1)
University
Lunds universitet (21)
Umeå universitet (2)
Language
English (20)
Swedish (1)
Research subject (UKÄ/SCB)
Medicine and Health Sciences (17)
Natural Sciences (6)
Engineering and Technology (3)

Year


 