SwePub
Search the SwePub database

  Advanced search

Result list for the search "WFRF:(Kaboteh R.) srt2:(2020-2022)"

Search: WFRF:(Kaboteh R.) > (2020-2022)

  • Results 1-8 of 8
Sort/group the result list
   
Numbering / Reference / Cover image / Find
1.
2.
  • Borrelli, P., et al. (author)
  • AI-based detection of lung lesions in F-18 FDG PET-CT from lung cancer patients
  • 2021
  • In: EJNMMI Physics. - : Springer Science and Business Media LLC. - 2197-7364. ; 8:1
  • Journal article (peer-reviewed) abstract
    • Background: [F-18]-fluorodeoxyglucose (FDG) positron emission tomography with computed tomography (PET-CT) is a well-established modality in the work-up of patients with a suspected or confirmed diagnosis of lung cancer. Recent research efforts have focused on extracting theragnostic and textural information from manually indicated lung lesions. Both semi-automatic and fully automatic use of artificial intelligence (AI) to localise and classify FDG-avid foci has been demonstrated. To fully harness AI's usefulness, we have developed a method which both automatically detects abnormal lung lesions and calculates the total lesion glycolysis (TLG) on FDG PET-CT. Methods: One hundred twelve patients (59 females and 53 males) who underwent FDG PET-CT for suspected or known lung cancer were studied retrospectively. These patients were divided into a training group (59%; n = 66), a validation group (20.5%; n = 23) and a test group (20.5%; n = 23). A nuclear medicine physician manually segmented abnormal lung lesions with increased FDG uptake in all PET-CT studies. The AI-based method was trained to segment the lesions based on the manual segmentations. TLG was then calculated from the manual and AI-based measurements, respectively, and analysed with Bland-Altman plots. Results: The AI tool detected lesions with a sensitivity of 90%. One small lesion was missed in each of two patients, both of whom had a larger lesion that was correctly detected. The positive and negative predictive values were 88% and 100%, respectively. The correlation between manual and AI TLG measurements was strong (R^2 = 0.74). Bias was 42 g and the 95% limits of agreement ranged from -736 to 819 g. Agreement was particularly high for smaller lesions. Conclusions: The AI-based method is suitable for the detection of lung lesions and automatic calculation of TLG in small- to medium-sized tumours. In a clinical setting, it will add value through its capability to sort out negative examinations, allowing prioritised and focused care of patients with potentially malignant lesions. [A minimal sketch of the TLG and Bland-Altman computations follows this entry.]
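The entry above reports total lesion glycolysis (TLG) from manual and AI segmentations and compares them with Bland-Altman analysis. A minimal sketch of both computations, not the authors' code: the array shapes, voxel size and example values below are assumptions for illustration only.

import numpy as np

def total_lesion_glycolysis(suv, mask, voxel_ml):
    # TLG = mean SUV inside the lesion * lesion volume (mL)
    lesion_suv = suv[mask.astype(bool)]
    if lesion_suv.size == 0:
        return 0.0
    return float(lesion_suv.mean() * lesion_suv.size * voxel_ml)

def bland_altman(manual, automated):
    # Bias and 95% limits of agreement between paired measurements
    diff = np.asarray(automated) - np.asarray(manual)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Toy example with random data standing in for a real PET study
rng = np.random.default_rng(0)
suv = rng.gamma(2.0, 1.5, size=(64, 64, 32))
mask = np.zeros(suv.shape, dtype=bool)
mask[20:30, 20:30, 10:15] = True
print(total_lesion_glycolysis(suv, mask, voxel_ml=0.064))
print(bland_altman([100.0, 250.0, 40.0], [110.0, 230.0, 55.0]))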
3.
  • Borrelli, P., et al. (author)
  • Artificial intelligence-aided CT segmentation for body composition analysis: a validation study
  • 2021
  • In: European Radiology Experimental. - : Springer Science and Business Media LLC. - 2509-9280. ; 5:1
  • Journal article (peer-reviewed) abstract
    • Background: Body composition is associated with survival outcome in oncological patients, but it is not routinely calculated. Manual segmentation of subcutaneous adipose tissue (SAT) and muscle is time-consuming and therefore limited to a single CT slice. Our goal was to develop an artificial intelligence (AI)-based method for automated quantification of three-dimensional SAT and muscle volumes from CT images. Methods: Ethical approvals from Gothenburg and Lund Universities were obtained. Convolutional neural networks were trained to segment SAT and muscle using manual segmentations on CT images from a training group of 50 patients. The method was applied to a separate test group of 74 cancer patients, who each had two CT studies with a median interval of 3 days between them. Manual segmentations in a single CT slice were used for comparison. Accuracy was measured as the overlap between the automated and manual segmentations. Results: The accuracy of the AI method was 0.96 for SAT and 0.94 for muscle. The average differences in volumes were significantly lower than the corresponding differences in areas in a single CT slice: 1.8% versus 5.0% (p < 0.001) for SAT and 1.9% versus 3.9% (p < 0.001) for muscle. The 95% confidence intervals for volumes predicted in an individual subject from the corresponding single-CT-slice areas were in the order of 20%. Conclusions: The AI-based tool for quantification of SAT and muscle volumes showed high accuracy and reproducibility and provided a body composition analysis that is more relevant than manual analysis of a single CT slice. [A minimal sketch of the Dice overlap measure follows this entry.]
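The accuracy figures quoted above (0.96 for SAT, 0.94 for muscle) are overlap scores between the automated and manual segmentations. A minimal sketch of the Sørensen-Dice overlap, assuming boolean mask arrays; the example masks are illustrative, not the study data.

import numpy as np

def dice(a, b):
    # Soerensen-Dice index: 2 * |A intersect B| / (|A| + |B|)
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / denom)

auto = np.zeros((50, 50, 50), dtype=bool)
auto[10:30, 10:30, 10:30] = True
manual = np.zeros_like(auto)
manual[12:30, 10:30, 10:30] = True
print(dice(auto, manual))  # close to 1 for near-identical masks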
4.
  • Borrelli, P., et al. (author)
  • Automated classification of PET-CT lesions in lung cancer: An independent validation study
  • 2022
  • In: Clinical Physiology and Functional Imaging. - : Wiley. - 1475-0961 .- 1475-097X. ; 42:5, pp. 327-332
  • Journal article (peer-reviewed) abstract
    • Introduction: Recently, a tool called the positron emission tomography (PET)-assisted reporting system (PARS) was developed and presented to classify lesions in PET/computed tomography (CT) studies in patients with lung cancer or lymphoma. The aim of this study was to validate PARS with an independent group of lung-cancer patients using manual lesion segmentations as a reference standard, as well as to evaluate the association between PARS-based measurements and overall survival (OS). Methods: This study retrospectively included 115 patients who had undergone clinically indicated [18F]-fluorodeoxyglucose (FDG) PET/CT due to suspected or known lung cancer. The patients had a median age of 66 years (interquartile range [IQR]: 61-72 years). Segmentations were made manually by visual inspection in a consensus reading by two nuclear medicine specialists and used as a reference. The research prototype PARS was used to automatically analyse all the PET/CT studies. The PET foci classified as suspicious by PARS were compared with the manual segmentations. No manual corrections were applied. Total lesion glycolysis (TLG) was calculated based on the manual and PARS-based lung-tumour segmentations. Associations between TLG and OS were investigated using Cox analysis. Results: PARS showed sensitivities for lung tumours of 55.6% per lesion and 80.2% per patient. Both manual and PARS TLG were significantly associated with OS. Conclusion: Automatically calculated TLG by PARS contains prognostic information comparable to manually measured TLG in patients with known or suspected lung cancer. The low sensitivity at both the lesion and patient levels makes the present version of PARS less useful for supporting clinical reading, reporting and staging. [A sketch of the per-lesion and per-patient sensitivity definitions follows this entry.]
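The entry above quotes sensitivity both per lesion (55.6%) and per patient (80.2%). A minimal sketch of the two definitions, assuming a hypothetical data structure of one detected/missed flag per reference lesion, grouped by patient; the toy input is made up.

def sensitivities(detected_per_patient):
    # Per-lesion: detected lesions / all reference lesions
    lesions = [flag for patient in detected_per_patient for flag in patient]
    per_lesion = sum(lesions) / len(lesions)
    # Per-patient: patients with at least one detected lesion / all patients with lesions
    per_patient = sum(any(p) for p in detected_per_patient) / len(detected_per_patient)
    return per_lesion, per_patient

print(sensitivities([[True, False], [True], [False]]))  # (0.5, 0.666...)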
5.
  • Borrelli, P., et al. (author)
  • Freely available convolutional neural network-based quantification of PET/CT lesions is associated with survival in patients with lung cancer
  • 2022
  • In: EJNMMI Physics. - : Springer Science and Business Media LLC. - 2197-7364. ; 9:1
  • Journal article (peer-reviewed) abstract
    • Background: Metabolic positron emission tomography/computed tomography (PET/CT) parameters describing tumour activity contain valuable prognostic information, but performing the measurements manually leads to both intra- and inter-reader variability and is too time-consuming in clinical practice. The use of modern artificial intelligence-based methods offers new possibilities for automated and objective image analysis of PET/CT data. Purpose: We aimed to train a convolutional neural network (CNN) to segment and quantify tumour burden in [18F]-fluorodeoxyglucose (FDG) PET/CT images and to evaluate the association between CNN-based measurements and overall survival (OS) in patients with lung cancer. A secondary aim was to make the method available to other researchers. Methods: A total of 320 consecutive patients referred for FDG PET/CT due to suspected lung cancer were retrospectively selected for this study. Two nuclear medicine specialists manually segmented abnormal FDG uptake in all of the PET/CT studies. One-third of the patients were assigned to a test group. Survival data were collected for this group. The CNN was trained to segment lung tumours and thoracic lymph nodes. Total lesion glycolysis (TLG) was calculated from the CNN-based and manual segmentations. Associations between TLG and OS were investigated using a univariate Cox proportional hazards regression model. Results: The test group comprised 106 patients (median age, 76 years (IQR 61–79); n = 59 female). Both CNN-based TLG (hazard ratio 1.64, 95% confidence interval 1.21–2.21; p = 0.001) and manual TLG (hazard ratio 1.54, 95% confidence interval 1.14–2.07; p = 0.004) estimations were significantly associated with OS. Conclusion: Fully automated CNN-based TLG measurements of PET/CT data were significantly associated with OS in patients with lung cancer. This type of measurement may be of value for the management of future patients with lung cancer. The CNN is publicly available for research purposes. © 2022, The Author(s). [A sketch of the univariate Cox analysis of TLG versus OS follows this entry.]
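The survival analysis above is a univariate Cox proportional hazards regression of OS on TLG. A minimal sketch using the third-party lifelines package; the toy DataFrame and its column names are made up and are not the study data.

import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "tlg": [12.0, 85.3, 240.1, 33.7, 410.9, 150.2],   # total lesion glycolysis (placeholder values)
    "os_months": [40, 22, 8, 35, 12, 30],             # follow-up time
    "event": [0, 1, 1, 1, 0, 0],                      # 1 = death observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")
cph.print_summary()  # exp(coef) is the hazard ratio for tlg, with CI and p-value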
6.
  • Polymeri, Erini, et al. (author)
  • Deep learning-based quantification of PET/CT prostate gland uptake : association with overall survival
  • 2020
  • In: Clinical Physiology and Functional Imaging. - Chichester : Blackwell Publishing. - 1475-0961 .- 1475-097X. ; 40:2, pp. 106-113
  • Journal article (peer-reviewed) abstract
    • Aim: To validate a deep-learning (DL) algorithm for automated quantification of prostate cancer on positron emission tomography/computed tomography (PET/CT) and explore the potential of PET/CT measurements as prognostic biomarkers. Material and methods: Training of the DL algorithm regarding prostate volume was performed on manually segmented CT images in 100 patients. Validation of the DL algorithm was carried out in 45 patients with biopsy-proven hormone-naïve prostate cancer. The automated measurements of prostate volume were compared with manual measurements made independently by two observers. PET/CT measurements of tumour burden based on the volume and SUV of abnormal voxels were calculated automatically. Voxels in the co-registered 18F-choline PET images above a standardized uptake value (SUV) of 2.65, and corresponding to the prostate as defined by the automated segmentation in the CT images, were defined as abnormal. Validation of abnormal voxels was performed by manual segmentation of radiotracer uptake. Agreement between the algorithm and the observers regarding prostate volume was analysed by the Sørensen-Dice index (SDI). Associations between the automatically derived PET/CT biomarkers and age, prostate-specific antigen (PSA) and Gleason score, as well as overall survival, were evaluated by a univariate Cox regression model. Results: The SDI between the automated and the manual volume segmentations was 0.78 and 0.79 for the two observers, respectively. Automated PET/CT measures reflecting total lesion uptake and the relation between the volume of abnormal voxels and total prostate volume were significantly associated with overall survival (P = 0.02), whereas age, PSA and Gleason score were not. Conclusion: Automated PET/CT biomarkers showed good agreement with manual measurements and were significantly associated with overall survival. © 2019 The Authors. Clinical Physiology and Functional Imaging published by John Wiley & Sons Ltd on behalf of the Scandinavian Society of Clinical Physiology and Nuclear Medicine. [A sketch of the SUV-threshold burden measures follows this entry.]
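The entry above defines abnormal voxels as those inside the CT-derived prostate segmentation with choline SUV above 2.65, and builds burden measures from their volume and SUV. A minimal sketch of that thresholding, with placeholder arrays and voxel size; this is not the authors' pipeline.

import numpy as np

def prostate_burden(suv, prostate_mask, voxel_ml, threshold=2.65):
    # Abnormal voxels: inside the prostate mask and above the SUV threshold
    prostate = np.asarray(prostate_mask, dtype=bool)
    abnormal = (np.asarray(suv) > threshold) & prostate
    abnormal_ml = abnormal.sum() * voxel_ml
    prostate_ml = prostate.sum() * voxel_ml
    total_uptake = float(np.asarray(suv)[abnormal].sum()) * voxel_ml  # total lesion uptake proxy
    fraction = abnormal_ml / prostate_ml if prostate_ml > 0 else 0.0  # abnormal volume / prostate volume
    return abnormal_ml, fraction, total_uptake

rng = np.random.default_rng(1)
suv = rng.gamma(2.0, 1.0, size=(48, 48, 24))
prostate_mask = np.zeros(suv.shape, dtype=bool)
prostate_mask[18:30, 18:30, 8:16] = True
print(prostate_burden(suv, prostate_mask, voxel_ml=0.1))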
7.
  • Trägårdh, Elin, et al. (author)
  • RECOMIA-a cloud-based platform for artificial intelligence research in nuclear medicine and radiology
  • 2020
  • In: EJNMMI Physics. - : Springer Science and Business Media LLC. - 2197-7364. ; 7:1
  • Journal article (peer-reviewed) abstract
    • Background: Artificial intelligence (AI) is about to transform medical imaging. The Research Consortium for Medical Image Analysis (RECOMIA), a not-for-profit organisation, has developed an online platform to facilitate collaboration between medical researchers and AI researchers. The aim is to minimise the time and effort researchers need to spend on technical aspects, such as transfer, display and annotation of images, as well as legal aspects, such as de-identification. The purpose of this article is to present the RECOMIA platform and its AI-based tools for organ segmentation in computed tomography (CT), which can be used for extraction of standardised uptake values from the corresponding positron emission tomography (PET) image. Results: The RECOMIA platform includes modules for (1) local de-identification of medical images, (2) secure transfer of images to the cloud-based platform, (3) display functions available using a standard web browser, (4) tools for manual annotation of organs or pathology in the images, (5) deep learning-based tools for organ segmentation or other customised analyses, (6) tools for quantification of segmented volumes, and (7) an export function for the quantitative results. The AI-based tool for organ segmentation in CT currently handles 100 organs (77 bones and 23 soft tissue organs). The segmentation is based on two convolutional neural networks (CNNs): one network to handle organs with multiple similar instances, such as vertebrae and ribs, and one network for all other organs. The CNNs have been trained using CT studies from 339 patients. Experienced radiologists annotated the organs in the CT studies. The performance of the segmentation tool, measured as the mean Dice index on a manually annotated test set with 10 representative organs, was 0.93 for all foreground voxels, and the mean Dice index over the organs was 0.86 (0.82 for the soft tissue organs and 0.90 for the bones). Conclusion: The paper presents a platform that provides deep learning-based tools that can perform basic organ segmentations in CT, which can then be used to automatically obtain the corresponding measurements in the PET image. The RECOMIA platform is available on request for research purposes. [A sketch of SUV extraction from an organ mask follows this entry.]
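The platform above uses CT organ masks to extract standardised uptake values (SUV) from the co-registered PET image. A minimal sketch of the body-weight SUV definition and an organ-wise mean; the placeholder image, mask and injection values are assumptions, and decay correction is omitted for brevity.

import numpy as np

def suv_body_weight(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    # SUV = tissue activity concentration / (injected dose / body weight)
    return np.asarray(activity_bq_per_ml) / (injected_dose_bq / body_weight_g)

def organ_suv_mean(suv, organ_mask):
    # Mean SUV over the voxels of one CT-derived organ mask
    return float(np.asarray(suv)[np.asarray(organ_mask, dtype=bool)].mean())

activity = np.full((32, 32, 16), 5000.0)  # Bq/mL, placeholder PET image
liver_mask = np.zeros(activity.shape, dtype=bool)
liver_mask[8:20, 8:20, 4:10] = True
suv = suv_body_weight(activity, injected_dose_bq=3.0e8, body_weight_g=75000.0)
print(organ_suv_mean(suv, liver_mask))  # 5000 / (3e8 / 75000) = 1.25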
8.
  • Ying, T. M., et al. (author)
  • Automated artificial intelligence-based analysis of skeletal muscle volume predicts overall survival after cystectomy for urinary bladder cancer
  • 2021
  • In: European Radiology Experimental. - : Springer Science and Business Media LLC. - 2509-9280. ; 5:1
  • Journal article (peer-reviewed) abstract
    • Background: Radical cystectomy for urinary bladder cancer is a procedure associated with a high risk of complications and poor overall survival (OS), due to both patient and tumour factors. Sarcopenia is one such patient factor. We have developed a fully automated artificial intelligence (AI)-based image analysis tool for segmenting the skeletal muscle of the torso and calculating the muscle volume. Methods: All patients who underwent radical cystectomy for urinary bladder cancer at Sahlgrenska University Hospital between 2011 and 2019 and who had a pre-operative computed tomography (CT) of the abdomen within 90 days of surgery were included in the study. All patients' CT studies were analysed with the automated AI-based image analysis tool. Clinical data for the patients were retrieved from the Swedish National Register for Urinary Bladder Cancer. Muscle volumes dichotomised by the median for each sex were analysed with Cox regression for OS and logistic regression for 90-day high-grade complications. The study was approved by the Swedish Ethical Review Authority (2020-03985). Results: Of the 445 patients who underwent surgery, 299 (67%) had CT studies available for analysis. The automated AI-based tool failed to segment the muscle volume in seven (2%) patients. Cox regression analysis showed an independent significant association with OS (HR 1.62; 95% CI 1.07-2.44; p = 0.022). Logistic regression did not show any association with high-grade complications. Conclusion: The fully automated AI-based CT image analysis provides a low-cost and meaningful clinical measure that is an independent biomarker for OS following radical cystectomy. [A sketch of the sex-specific median dichotomisation follows this entry.]
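The entry above dichotomises muscle volume by the median within each sex before the survival and complication modelling. A minimal pandas sketch; the column names and values are placeholders, not the register data.

import pandas as pd

df = pd.DataFrame({
    "sex": ["F", "F", "F", "M", "M", "M"],
    "muscle_volume_l": [5.1, 6.3, 4.8, 8.9, 7.4, 9.6],  # placeholder AI-derived volumes
})
# Median computed separately per sex, then broadcast back to each row
sex_median = df.groupby("sex")["muscle_volume_l"].transform("median")
df["low_muscle"] = (df["muscle_volume_l"] < sex_median).astype(int)
print(df)  # low_muscle = 1 flags below-median volume within the patient's own sex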