SwePub

Result list for the search "WFRF:(Muehlboeck J. Sebastian) ;lar1:(kth)"

Search: WFRF:(Muehlboeck J. Sebastian) > Kungliga Tekniska Högskolan

  • Results 1-3 of 3
1.
  • Brusini, Irene, et al. (author)
  • Shape Information Improves the Cross-Cohort Performance of Deep Learning-Based Segmentation of the Hippocampus
  • 2020
  • In: Frontiers in Neuroscience. - : Frontiers Media S.A.. - 1662-4548 .- 1662-453X. ; 14
  • Journal article (peer-reviewed)
    • Abstract: Performing an accurate segmentation of the hippocampus from brain magnetic resonance images is a crucial task in neuroimaging research, since its structural integrity is strongly related to several neurodegenerative disorders, including Alzheimer's disease (AD). Some automatic segmentation tools are already in use, but in recent years new deep learning (DL)-based methods have proven to be much more accurate in various medical image segmentation tasks. In this work, we propose a DL-based hippocampus segmentation framework that embeds the statistical shape of the hippocampus as context information into the deep neural network (DNN). The inclusion of shape information is achieved in three main steps: (1) a U-Net-based segmentation, (2) a shape model estimation, and (3) a second U-Net-based segmentation that uses both the original input data and the fitted shape model. The trained DL architectures were tested on image data of three diagnostic groups [AD patients, subjects with mild cognitive impairment (MCI), and controls] from two cohorts (ADNI and AddNeuroMed). Both intra-cohort and cross-cohort validation were performed and compared with the conventional U-Net architecture and some variations with other types of context information (i.e., autocontext and tissue-class context). Our results suggest that adding shape information can improve segmentation accuracy in cross-cohort validation, i.e., when DNNs are trained on one cohort and applied to another. However, no significant benefit is observed in intra-cohort validation, i.e., when DNNs are trained and tested on images from the same cohort. Moreover, compared to other types of context information, shape context was the most successful at increasing accuracy while keeping the computational time on the order of a few minutes.
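The three-step pipeline in the abstract above maps naturally onto a small amount of code. The following is a minimal, hypothetical PyTorch sketch of steps (1)-(3); `TinyUNet` and the PCA-style `fit_shape_model` are placeholder assumptions for illustration, not the paper's published architecture or shape model.

```python
# Minimal sketch (assumptions, not the paper's code) of the three-step
# shape-context pipeline: (1) U-Net segmentation, (2) shape model fit,
# (3) a second U-Net fed the image plus the fitted shape.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Placeholder for a real 3D U-Net: in_ch channels in, 1 probability map out."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

def fit_shape_model(seg, mean_shape, components):
    """Step 2 (assumed PCA formulation): project the initial segmentation onto
    a statistical shape space and reconstruct the nearest plausible shape."""
    flat = seg.flatten(start_dim=1) - mean_shape       # (B, V) deviations from mean
    coeffs = flat @ components.T                       # (B, K) shape coefficients
    fitted = coeffs @ components + mean_shape          # (B, V) reconstruction
    return fitted.view_as(seg)

unet1 = TinyUNet(in_ch=1)   # step (1): segments the raw T1w patch
unet2 = TinyUNet(in_ch=2)   # step (3): sees image + fitted shape as 2 channels

x = torch.randn(2, 1, 16, 16, 16)        # toy batch of MRI patches (B, C, D, H, W)
V = 16 * 16 * 16
mean_shape = torch.zeros(V)              # hypothetical shape-model statistics
components = torch.randn(4, V)           # 4 hypothetical modes of variation

seg1 = unet1(x)                                        # (1) initial segmentation
shape = fit_shape_model(seg1, mean_shape, components)  # (2) shape estimation
seg2 = unet2(torch.cat([x, shape], dim=1))             # (3) shape-aware segmentation
```

The intuition tested in the paper is that the fitted shape channel nudges the second network toward anatomically plausible hippocampi, which, per the abstract, matters most when the test cohort differs from the training cohort.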
2.
  • Dartora, Caroline, et al. (author)
  • A deep learning model for brain age prediction using minimally preprocessed T1w images as input
  • 2023
  • In: Frontiers in Aging Neuroscience. - : Frontiers Media SA. - 1663-4365. ; 15
  • Journal article (peer-reviewed)
    • Abstract: Introduction: In the last few years, several models for estimating biological brain age have been proposed based on structural magnetic resonance imaging scans (T1-weighted MRIs, T1w), using multivariate methods and machine learning. We developed and validated a convolutional neural network (CNN)-based biological brain age prediction model that uses a single T1w MRI preprocessing step when applying the model to external datasets, to simplify implementation and increase accessibility in research settings. Our model only requires rigid image registration to the MNI space, an advantage over previous methods that require more preprocessing steps, such as feature extraction. Methods: We used a multicohort dataset of cognitively healthy individuals (age range = 32.0–95.7 years) comprising 17,296 MRIs for training and evaluation. We compared our model using hold-out (CNN1) and cross-validation (CNN2–4) approaches. To verify generalisability, we evaluated the model on two external datasets with different populations and MRI scan characteristics. To demonstrate its usability, we included the external dataset's images in the cross-validation training (CNN3). To ensure that our model used only the brain signal in the image, we also predicted brain age using skull-stripped images (CNN4). Results: The trained models achieved a mean absolute error (MAE) of 2.99, 2.67, 2.67, and 3.08 years for CNN1–4, respectively. The model's performance on the external dataset was in the typical MAE range found in the literature for testing sets. Adding the external dataset to the training set (CNN3) left the overall MAE unaffected but improved the individual cohort MAE (from 5.63 to 2.25 years). Salience maps of the predictions reveal that periventricular, temporal, and insular regions are the most important for age prediction. Discussion: We provide indicators for using biological (predicted) brain age as a metric for age correction in neuroimaging studies, as an alternative to the traditional chronological age. In conclusion, across different approaches, our CNN-based model showed good performance using one T1w brain MRI preprocessing step. The proposed CNN model is made publicly available for the research community to be easily implemented and used to study ageing and age-related disorders.
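As a rough illustration of the kind of model the abstract describes, here is a minimal PyTorch sketch of a 3D CNN age regressor trained with an L1 objective, which directly optimizes the reported MAE metric. The architecture, sizes, and data are invented for illustration; the one assumption carried over from the abstract is that inputs are T1w volumes already rigidly registered to MNI space.

```python
# Minimal sketch of a CNN brain age regressor (placeholder architecture,
# not the published model). Inputs are assumed to be T1w volumes that have
# already gone through the single preprocessing step: rigid MNI registration.
import torch
import torch.nn as nn

class BrainAgeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                  # global average pooling
        )
        self.head = nn.Linear(16, 1)                  # predicted age in years

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = BrainAgeCNN()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()        # L1 training loss directly targets the reported MAE

x = torch.randn(2, 1, 32, 32, 32)        # toy batch of MNI-registered T1w volumes
age = torch.tensor([54.0, 71.5])         # chronological ages (years)

pred = model(x)
loss = loss_fn(pred, age)                # mean absolute error in years
loss.backward()
optim.step()
```

The gap between predicted and chronological age is then the quantity the abstract proposes using for age correction in neuroimaging studies.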
3.
  • Mårtensson, Gustav, et al. (author)
  • AVRA : Automatic visual ratings of atrophy from MRI images using recurrent convolutional neural networks
  • 2019
  • In: NeuroImage: Clinical. - : ELSEVIER SCI LTD. - 2213-1582. ; 23
  • Journal article (peer-reviewed)
    • Abstract: Quantifying the degree of atrophy is done clinically by neuroradiologists following established visual rating scales. For these assessments to be reliable, the rater requires substantial training and experience, and even then the rating agreement between two radiologists is not perfect. We have developed a model we call AVRA (Automatic Visual Ratings of Atrophy), based on machine learning methods and trained on 2350 visual ratings made by an experienced neuroradiologist. It provides fast and automatic ratings for Scheltens' scale of medial temporal atrophy (MTA), the frontal subscale of Pasquier's Global Cortical Atrophy (GCA-F) scale, and Koedam's scale of Posterior Atrophy (PA). We demonstrate substantial inter-rater agreement between AVRA's and a neuroradiologist's ratings, with Cohen's weighted kappa values of κ_w = 0.74/0.72 (MTA left/right), κ_w = 0.62 (GCA-F), and κ_w = 0.74 (PA). We conclude that automatic visual ratings of atrophy can potentially have great scientific value, and we aim to present AVRA as a freely available toolbox.
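For readers unfamiliar with the agreement metric quoted above, the following minimal sketch computes Cohen's weighted kappa on made-up ordinal ratings using scikit-learn's standard implementation; the abstract does not state which weighting scheme or implementation AVRA's evaluation used.

```python
# Minimal sketch of the inter-rater agreement metric quoted in the abstract:
# Cohen's weighted kappa between a model's and a radiologist's ordinal ratings.
# The ratings below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

radiologist = [0, 1, 2, 2, 3, 1, 0, 4]   # e.g. MTA scores on Scheltens' 0-4 scale
model       = [0, 1, 2, 3, 3, 1, 1, 4]

# Linear weighting penalizes a rating that is off by two steps twice as much
# as one off by a single step, which suits ordinal atrophy scales.
kappa_w = cohen_kappa_score(radiologist, model, weights="linear")
print(f"kappa_w = {kappa_w:.2f}")
```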