SwePub
Search the SwePub database

  Extended search

Hit list for the search "WFRF:(Åström Karl) ;pers:(Heyden Anders)"

Search: WFRF:(Åström Karl) > Heyden Anders

  • Result 1-10 of 38
1.
  • Källén, Hanna, et al. (author)
  • Towards Grading Gleason Score using Generically Trained Deep Convolutional Neural Networks
  • 2016
  • In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). - : Institute of Electrical and Electronics Engineers (IEEE). - 9781479923496 - 9781479923502 ; 2016-June, s. 1163-1167
  • Conference paper (peer-reviewed)abstract
    • We developed an automatic algorithm to assist pathologists in reporting Gleason score on malignant prostatic adenocarcinoma specimens. In order to detect and classify the cancerous tissue, a deep convolutional neural network that had been pre-trained on a large set of photographic images was used. A specific aim was to support intuitive interaction with the result, to let pathologists adjust and correct the output. Therefore, we have designed an algorithm that makes a spatial classification of the whole slide into the same growth patterns as pathologists do. The 22-layer network was cut at an earlier layer, and the output from that layer was used to train both a random forest classifier and a support vector machine classifier. At a specific layer, a small patch of the image was used to calculate a feature vector, and an image is represented by a number of those vectors. We have classified both the individual patches and the entire images. The classification results were compared for different scales of the images and for feature vectors from two different layers of the network. Testing was made on a dataset consisting of 213 images, each containing a single class: benign tissue or Gleason score 3-5. Using 10-fold cross-validation, the accuracy per patch was 81%. For whole images, the accuracy increased to 89%.
  •  
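The patch-to-image pipeline described in the abstract above (feature vectors from an intermediate layer of a pre-trained network, a classifier trained on patches, then image-level aggregation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Gaussian "features" stand in for real CNN activations, and majority voting over patches is one plausible aggregation rule.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in for CNN features: each image yields several
# patch-level feature vectors (here drawn from class-dependent Gaussians).
def make_image(label, n_patches=8, dim=16):
    return rng.normal(loc=label, scale=0.5, size=(n_patches, dim)), label

train = [make_image(lbl) for lbl in [0, 1] * 20]
test = [make_image(lbl) for lbl in [0, 1] * 5]

# Patch-level classifier trained on all patches pooled across images.
X = np.vstack([feats for feats, _ in train])
y = np.concatenate([[lbl] * len(feats) for feats, lbl in train])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Image-level prediction by majority vote over its patch predictions.
def predict_image(feats):
    votes = clf.predict(feats)
    return np.bincount(votes).argmax()

img_preds = [predict_image(feats) for feats, _ in test]
img_acc = float(np.mean([p == lbl for p, (_, lbl) in zip(img_preds, test)]))
```

As in the paper, aggregating patch votes to a whole-image decision smooths out individual patch errors, so image-level accuracy exceeds patch-level accuracy.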
2.
  • Landgren, Matilda, et al. (author)
  • An Automated System for the Detection and Diagnosis of Kidney Lesions in Children from Scintigraphy Images
  • 2011
  • In: Lecture Notes in Computer Science. - Berlin, Heidelberg : Springer Berlin Heidelberg. - 0302-9743 .- 1611-3349. - 9783642212277 - 9783642212260 ; 6688, s. 489-500
  • Conference paper (peer-reviewed)abstract
    • Designing a system for computer-aided diagnosis is a complex procedure requiring an understanding of the biology of the disease, insight into hospital workflow and awareness of available technical solutions. This paper aims to show that a valuable system can be designed for diagnosing kidney lesions in children and adolescents from 99mTc-DMSA scintigraphy images. We present the chain of analysis and provide a discussion of its performance. On a per-lesion basis, the classification reached an ROC-curve area of 0.96 (sensitivity/specificity e.g. 97%/85%) measured on an independent test group consisting of 56 patients with 730 candidate lesions. We conclude that the presented system for diagnostic support has the potential to increase the quality of care regarding this type of examination.
  •  
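The per-lesion ROC evaluation reported above can be illustrated with scikit-learn. The candidate-lesion labels and scores here are simulated, and Youden's J statistic is just one common way to pick an operating point such as the quoted sensitivity/specificity pair; none of this is the paper's actual data or threshold rule.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(3)

# Hypothetical per-lesion classifier scores: label 1 = true lesion,
# label 0 = false candidate; scores are informative but noisy.
labels = rng.integers(0, 2, 730)
scores = labels * 1.5 + rng.normal(0.0, 1.0, 730)

auc = roc_auc_score(labels, scores)

# Sweep thresholds and pick one operating point with Youden's J = TPR - FPR.
fpr, tpr, thresholds = roc_curve(labels, scores)
i = np.argmax(tpr - fpr)
sensitivity, specificity = tpr[i], 1.0 - fpr[i]
```

Reporting the ROC area summarizes performance over all thresholds, while a single sensitivity/specificity pair (as quoted in the abstract) fixes one clinically chosen operating point.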
3.
  •  
4.
  • Ståhl, Daniel, et al. (author)
  • Automatic Compartment Modelling and Segmentation for Dynamical Renal Scintigraphies
  • 2011
  • In: Lecture Notes in Computer Science. - Berlin, Heidelberg : Springer Berlin Heidelberg. - 1611-3349 .- 0302-9743. - 9783642212260 - 9783642212277 ; 6688, s. 557-568
  • Conference paper (peer-reviewed)abstract
    • Time-resolved medical data is important in a large variety of medical applications. In this paper we study automatic analysis of dynamical renal scintigraphies. The traditional analysis pipeline for dynamical renal scintigraphies is to use manual or semi-automatic methods to segment pixels into physical compartments, extract their corresponding time-activity curves and then compute the parameters that are relevant for medical assessment. In this paper we present a fully automatic system that incorporates spatial smoothing constraints, compartment modelling and positivity constraints to produce an interpretation of the full time-resolved data. The method has been tested on dynamical renal scintigraphies with promising results. It is shown that the method indeed produces more compact representations while keeping the residual of the fit low. The parameters of the time-activity curve, such as peak time and time to half activity from peak, are compared between the previous semi-automatic method and the method presented in this paper. It is also shown how to obtain new and clinically relevant features using our novel system.
  •  
5.
  • Arvidsson, Ida, et al. (author)
  • Comparing a pre-defined versus deep learning approach for extracting brain atrophy patterns to predict cognitive decline due to Alzheimer’s disease in patients with mild cognitive symptoms
  • 2024
  • In: Alzheimer's Research and Therapy. - 1758-9193. ; 16:1
  • Journal article (peer-reviewed)abstract
    • Background: Predicting future Alzheimer’s disease (AD)-related cognitive decline among individuals with subjective cognitive decline (SCD) or mild cognitive impairment (MCI) is an important task for healthcare. Structural brain imaging as measured by magnetic resonance imaging (MRI) could potentially contribute when making such predictions. It is unclear if the predictive performance of MRI can be improved using entire brain images in deep learning (DL) models compared to using pre-defined brain regions. Methods: A cohort of 332 individuals with SCD/MCI were included from the Swedish BioFINDER-1 study. The goal was to predict longitudinal SCD/MCI-to-AD dementia progression and change in Mini-Mental State Examination (MMSE) over four years. Four models were evaluated using different predictors: (1) clinical data only, including demographics, cognitive tests and APOE ε4 status, (2) clinical data plus hippocampal volume, (3) clinical data plus all regional MRI gray matter volumes (N = 68) extracted using FreeSurfer software, (4) a DL model trained using multi-task learning with MRI images, Jacobian determinant images and baseline cognition as input. A double cross-validation scheme was used, with five test folds and, for each of those, ten validation folds. External evaluation was performed on part of the ADNI dataset, including 108 patients. The Mann-Whitney U-test was used to determine statistically significant differences in performance, with p-values below 0.05 considered significant. Results: In the BioFINDER cohort, 109 patients (33%) progressed to AD dementia. The clinical data model predicted progression to AD dementia with an area under the curve (AUC) of 0.85 and four-year cognitive decline with R2 = 0.14. The performance was improved for both outcomes when adding hippocampal volume (AUC = 0.86, R2 = 0.16). Adding FreeSurfer brain regions improved prediction of four-year cognitive decline but not progression to AD (AUC = 0.83, R2 = 0.17), while the DL model worsened the performance for both outcomes (AUC = 0.84, R2 = 0.08). A sensitivity analysis showed that the Jacobian determinant image was more informative than the MRI image, but that performance was maximized when both were included. In the external evaluation cohort from ADNI, 23 patients (21%) progressed to AD dementia. The results for predicted progression to AD dementia were similar to the results for the BioFINDER test data, while the performance for cognitive decline deteriorated. Conclusions: The DL model did not significantly improve the prediction of clinical disease progression in AD, compared to regression models with a single pre-defined brain region.
  •  
6.
  • Berthilsson, Rikard, et al. (author)
  • Projective Reconstruction of 3D-curves from its 2D-images using Error Models and Bundle Adjustments
  • 1997
  • In: Proceedings of the 10th Scandinavian Conference on Image Analysis. - 9517641451 ; , s. 581-588
  • Conference paper (other academic/artistic)abstract
    • In this paper, an algorithm for projective reconstruction of general 3D-curves from a number of their 2D-images taken by uncalibrated cameras is proposed. No point correspondences between the images are assumed. The curve and the view points are uniquely reconstructed, modulo projective transformations. The algorithm is divided into two separate algorithms, where the output of the first is used as input to the second. The first algorithm is independent of the choice of coordinates in the images and is based on orthogonal projections and aligning subspaces. The ideas behind the algorithm are based on an extension of affine shape of finite point configurations to curves. The second algorithm uses the well-known technique of bundle adjustment, where an error function is minimised with respect to all free parameters. The errors in the detection of the curve in the images are used in the error function. These errors are obtained from a proposed model of image acquisition and scale-space smoothing, making it possible to analyse the errors in a simple edge detection algorithm. Finally, experiments using real images have been carried out, and it is shown that the results are superior to previous approaches.
  •  
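Bundle adjustment, as used in the second algorithm above, minimises an error function jointly over all free parameters. A toy sketch with SciPy, jointly refining 2D curve points and per-view translations from noisy measurements; the paper's actual cameras are projective, so this only illustrates the joint-minimisation idea, with one view fixed to remove the gauge freedom.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
n_pts, n_views = 30, 3

# Ground-truth curve points and per-view translations (toy model).
true_pts = np.stack([np.linspace(0.0, 1.0, n_pts),
                     np.sin(np.linspace(0.0, np.pi, n_pts))], axis=1)
true_t = np.array([[0.0, 0.0], [0.3, -0.2], [-0.1, 0.4]])

# Noisy observations of every point in every view.
obs = true_pts[None] + true_t[:, None] + rng.normal(0.0, 0.01, (n_views, n_pts, 2))

def residuals(params):
    # Unpack all free parameters: curve points and the free translations.
    pts = params[:n_pts * 2].reshape(n_pts, 2)
    t = np.vstack([[0.0, 0.0],                       # gauge: fix first view
                   params[n_pts * 2:].reshape(n_views - 1, 2)])
    return (pts[None] + t[:, None] - obs).ravel()

# Initialize points from the first view and translations at zero.
x0 = np.concatenate([obs[0].ravel(), np.zeros((n_views - 1) * 2)])
sol = least_squares(residuals, x0)

est_t = sol.x[n_pts * 2:].reshape(n_views - 1, 2)
rms = float(np.sqrt(np.mean(sol.fun ** 2)))
```

As in bundle adjustment proper, every measurement residual contributes to one objective, and all unknowns (here points and translations) are refined simultaneously rather than in alternating stages.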
7.
  •  
8.
  •  
9.
  • Flood, Gabrielle, et al. (author)
  • Estimating Uncertainty in Time-difference and Doppler Estimates
  • 2018
  • In: ICPRAM 2018 - Proceedings of the 7th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM. - : SCITEPRESS - Science and Technology Publications. - 9789897582769 ; , s. 245-253
  • Conference paper (peer-reviewed)abstract
    • Sound and radio can be used to estimate the distance between a transmitter and a receiver by correlating the emitted and received signals. Alternatively, by correlating two received signals it is possible to estimate the distance difference. Such methods can be divided into methods that are robust to noise and reverberation but give limited precision, and sub-sample refinements that are sensitive to noise but give higher precision when initialized close to the true shift. In this paper we develop stochastic models that can explain the limits in the precision of such sub-sample time-difference estimates. Using such models we provide new methods for precise estimates of time-differences as well as Doppler effects. The method is verified on both synthetic and real data.
  •  
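The coarse-plus-refinement scheme discussed above can be illustrated by a cross-correlation delay estimate followed by parabolic interpolation of the correlation peak, which is a common sub-sample refinement technique (not necessarily the paper's exact estimator). The signal, delay, and sample rate below are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 512
sig = rng.normal(size=n)          # emitted signal (white noise, assumed)
true_delay = 5                    # integer delay in samples

# Received signal: delayed copy plus measurement noise.
received = np.roll(sig, true_delay) + rng.normal(0.0, 0.05, n)

# Full cross-correlation; the lag of the maximum is the coarse estimate.
corr = np.correlate(received, sig, mode="full")
lags = np.arange(-n + 1, n)
k = corr.argmax()
coarse = int(lags[k])

# Parabolic interpolation through the peak and its two neighbours
# gives a sub-sample offset of the true maximum.
y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
delay = coarse + offset
```

This mirrors the division described in the abstract: the argmax of the correlation is robust but limited to integer-sample precision, while the parabolic step refines it below one sample yet depends on the peak neighbourhood being clean of noise.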
10.
  •  
