SwePub
Search the SwePub database


Results for the search "WFRF:(Alvén Jennifer 1989)"


  • Results 1-18 of 18
1.
  • Larsson, Måns, 1989, et al. (authors)
  • Max-margin learning of deep structured models for semantic segmentation
  • 2017
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Cham : Springer International Publishing. - 1611-3349 .- 0302-9743. - 9783319591285 ; 10270 LNCS, pp. 28-40
  • Conference paper (peer-reviewed), abstract:
    • During the last few years most work done on the task of image segmentation has been focused on deep learning and Convolutional Neural Networks (CNNs) in particular. CNNs are powerful for modeling complex connections between input and output data but lack the ability to directly model dependent output structures, for instance, enforcing properties such as smoothness and coherence. This drawback motivates the use of Conditional Random Fields (CRFs), widely applied as a post-processing step in semantic segmentation. In this paper, we propose a learning framework that jointly trains the parameters of a CNN paired with a CRF. For this, we develop theoretical tools making it possible to optimize a max-margin objective with back-propagation. The max-margin loss function gives the model good generalization capabilities. Thus, the method is especially suitable for applications where labelled data is limited, for example, medical applications. This generalization capability is reflected in our results where we are able to show good performance on two relatively small medical datasets. The method is also evaluated on a public benchmark (frequently used for semantic segmentation) yielding results competitive to state-of-the-art. Overall, we demonstrate that end-to-end max-margin training is preferred over piecewise training when combining a CNN with a CRF.
  •  
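The max-margin objective described in the abstract above can be illustrated with a toy sketch of loss-augmented inference over a tiny labelling space. The labellings, scores, and use of a Hamming loss below are invented for illustration and are not taken from the paper.

```python
# Toy illustration of a max-margin (structured hinge) objective: the loss is
# the score of the best margin-violating labelling minus the score of the
# ground truth. Scores and the Hamming task loss are synthetic stand-ins.
labellings = ["00", "01", "10", "11"]
scores = {"00": 1.0, "01": 2.5, "10": 0.5, "11": 2.0}  # model scores
y_star = "11"                                          # ground-truth labelling

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

# Loss-augmented inference: argmax over y of score(y) + Delta(y, y*)
aug = {y: scores[y] + hamming(y, y_star) for y in labellings}
y_hat = max(aug, key=aug.get)
loss = max(0.0, aug[y_hat] - scores[y_star])
print(y_hat, loss)  # "01", 1.5
```

During training, the gradient of this hinge loss with respect to the scores is what would be back-propagated into the CNN-CRF model.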
2.
  • Palmquist, Emma, et al. (authors)
  • NoiseNet, a fully automatic noise assessment tool that can identify non-diagnostic CCTA examinations
  • 2024
  • In: International Journal of Cardiovascular Imaging. - 1569-5794 .- 1573-0743 .- 1875-8312. ; 40:7, pp. 1493-1500
  • Journal article (peer-reviewed), abstract:
    • Image noise and vascular attenuation are important factors affecting image quality and diagnostic accuracy of coronary computed tomography angiography (CCTA). The aim of this study was to develop an algorithm that automatically performs noise and attenuation measurements in CCTA and to evaluate the ability of the algorithm to identify non-diagnostic examinations. The algorithm, “NoiseNet”, was trained and tested on 244 CCTA studies from the Swedish CArdioPulmonary BioImage Study. The model is a 3D U-Net that automatically segments the aortic root and measures attenuation (Hounsfield Units, HU), noise (standard deviation of HU, HUsd) and signal-to-noise ratio (SNR, HU/HUsd) in the aortic lumen, close to the left coronary ostium. NoiseNet was then applied to 529 CCTA studies previously categorized into three subgroups: fully diagnostic, diagnostic with excluded parts and non-diagnostic. There was excellent correlation between NoiseNet and manual measurements of noise (r = 0.948; p < 0.001) and SNR (r = 0.948; p < 0.001). There was a significant difference in noise levels between the image quality subgroups: fully diagnostic 33.1 (29.8–37.9); diagnostic with excluded parts 36.1 (31.5–40.3) and non-diagnostic 42.1 (35.2–47.7; p < 0.001). Corresponding values for SNR were 16.1 (14.0–18.0); 14.0 (12.4–16.2) and 11.1 (9.6–14.0; p < 0.001). ROC analysis for prediction of a non-diagnostic study showed an AUC for noise of 0.73 (CI 0.64–0.83) and for SNR of 0.80 (CI 0.71–0.89). In conclusion, NoiseNet can perform noise and SNR measurements with high accuracy. Noise and SNR impact image quality and automatic measurements may be used to identify CCTA studies with low image quality.
  •  
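The attenuation, noise, and SNR measurements described above reduce to simple statistics over a segmented region. A minimal sketch, with a synthetic HU volume and mask standing in for the CCTA data and the aortic-lumen segmentation:

```python
import numpy as np

# Illustration of the reported measurements: attenuation is the mean HU
# inside the segmented lumen, noise is the standard deviation of HU, and
# SNR is their ratio. Volume and mask here are synthetic stand-ins.
rng = np.random.default_rng(0)
volume = rng.normal(400.0, 35.0, size=(16, 16, 16))  # synthetic HU values
mask = np.zeros_like(volume, dtype=bool)
mask[4:12, 4:12, 4:12] = True  # stand-in for the segmented aortic lumen

roi = volume[mask]
attenuation = roi.mean()      # HU
noise = roi.std(ddof=1)       # HUsd
snr = attenuation / noise     # HU / HUsd

print(f"attenuation={attenuation:.1f} HU, noise={noise:.1f} HU, SNR={snr:.1f}")
```

In the paper the mask comes from the 3D U-Net rather than a hand-placed box; the statistics themselves are this simple.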
3.
  • Alvén, Jennifer, 1989, et al. (authors)
  • A Deep Learning Approach to MR-less Spatial Normalization for Tau PET Images
  • 2019
  • In: Medical Image Computing and Computer Assisted Intervention - MICCAI 2019 - 22nd International Conference, Proceedings. - Cham : Springer International Publishing. - 1611-3349 .- 0302-9743. - 9783030322458 - 9783030322441 ; 11765 LNCS, pp. 355-363
  • Conference paper (peer-reviewed), abstract:
    • The procedure of aligning a positron emission tomography (PET) image with a common coordinate system, spatial normalization, typically demands a corresponding structural magnetic resonance (MR) image. However, MR imaging is not always available or feasible for the subject, which calls for spatial normalization without MR: MR-less spatial normalization. In this work, we propose a template-free approach to MR-less spatial normalization for [18F]flortaucipir tau PET images. We use a deep neural network that estimates an aligning transformation from the PET input image, and outputs the spatially normalized image as well as the parameterized transformation. In order to do so, the proposed network iteratively estimates a set of rigid and affine transformations by means of convolutional neural network regressors as well as spatial transformer layers. The network is trained and validated on 199 tau PET volumes with corresponding ground truth transformations, and tested on two different datasets. The proposed method shows competitive performance in terms of registration accuracy as well as speed, and compares favourably to previously published results.
  •  
4.
  • Alvén, Jennifer, 1989, et al. (authors)
  • A deep multi-stream model for robust prediction of left ventricular ejection fraction in 2D echocardiography
  • 2024
  • In: Scientific Reports. - 2045-2322. ; 14:1
  • Journal article (peer-reviewed), abstract:
    • We propose a deep multi-stream model for left ventricular ejection fraction (LVEF) prediction in 2D echocardiographic (2DE) examinations. We use four standard 2DE views as model input, which are automatically selected from the full 2DE examination. The LVEF prediction model processes eight streams of data (images + optical flow) and consists of convolutional neural networks terminated with transformer layers. The model is made robust to missing, misclassified and duplicate views via pre-training, sampling strategies and parameter sharing. The model is trained and evaluated on an existing clinical dataset (12,648 unique examinations) with varying properties in terms of quality, examining physician, and ultrasound system. We report R² = 0.84 and a mean absolute error of 4.0 percentage points for the test set. When evaluated on two public benchmarks, the model performs on par or better than all previous attempts on fully automatic LVEF prediction. Code and trained models are available on a public project repository.
  •  
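The two figures of merit reported above (R² and mean absolute error) are standard regression metrics and can be computed as follows; the true and predicted LVEF values here are made up for illustration.

```python
import numpy as np

# Minimal sketch of the two reported metrics for a regression task such as
# LVEF prediction. The value pairs below are synthetic.
y_true = np.array([55.0, 60.0, 35.0, 48.0, 65.0, 40.0])  # reference LVEF (%)
y_pred = np.array([53.0, 62.0, 38.0, 50.0, 63.0, 42.0])  # model output (%)

ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
r2 = 1.0 - ss_res / ss_tot
mae = np.mean(np.abs(y_true - y_pred))

print(f"R^2={r2:.2f}, MAE={mae:.1f} percentage points")
```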
5.
  • Alvén, Jennifer, 1989 (author)
  • Combining Shape and Learning for Medical Image Analysis
  • 2020
  • Doctoral thesis (other academic/artistic), abstract:
    • Automatic methods with the ability to make accurate, fast and robust assessments of medical images are highly requested in medical research and clinical care. Excellent automatic algorithms are characterized by speed, allowing for scalability, and an accuracy comparable to an expert radiologist. They should produce morphologically and physiologically plausible results while generalizing well to unseen and rare anatomies. Still, there are few, if any, applications where today's automatic methods succeed in meeting these requirements. The focus of this thesis is two tasks essential for enabling automatic medical image assessment: medical image segmentation and medical image registration. Medical image registration, i.e. aligning two separate medical images, is used as an important sub-routine in many image analysis tools as well as in image fusion, disease progress tracking and population statistics. Medical image segmentation, i.e. delineating anatomically or physiologically meaningful boundaries, is used for both diagnostic and visualization purposes in a wide range of applications, e.g. in computer-aided diagnosis and surgery. The thesis comprises five papers addressing medical image registration and/or segmentation for a diverse set of applications and modalities, i.e. pericardium segmentation in cardiac CTA, brain region parcellation in MRI, multi-organ segmentation in CT, heart ventricle segmentation in cardiac ultrasound and tau PET registration. The five papers propose competitive registration and segmentation methods enabled by machine learning techniques, e.g. random decision forests and convolutional neural networks, as well as by shape modelling, e.g. multi-atlas segmentation and conditional random fields.
  •  
6.
  • Alvén, Jennifer, 1989 (author)
  • Improving Multi-Atlas Segmentation Methods for Medical Images
  • 2017
  • Licentiate thesis (other academic/artistic), abstract:
    • Semantic segmentation of organs or tissues, i.e. delineating anatomically or physiologically meaningful boundaries, is an essential task in medical image analysis. One particular class of automatic segmentation algorithms has proved to excel at a diverse set of medical applications, namely multi-atlas segmentation. However, these multi-atlas methods exhibit several issues recognized in the literature. Firstly, multi-atlas segmentation requires several computationally expensive image registrations. In addition, the registration procedure needs to be executed with a high accuracy in order to enable competitive segmentation results. Secondly, up-to-date multi-atlas frameworks require large sets of labelled data to model all possible anatomical variations. Unfortunately, acquisition of manually annotated medical data is time-consuming which needless to say limits the applicability. Finally, standard multi-atlas approaches pose no explicit constraints on the output shape and thus allow for implausibly segmented anatomies. This thesis includes four papers addressing the difficulties associated with multi-atlas segmentation in several ways; by speeding up and increasing the accuracy of feature-based registration methods, by incorporating explicit shape models into the label fusion framework using robust optimization techniques and by refining the solutions with means of machine learning algorithms, such as random decision forests and convolutional neural networks, taking both performance and data-efficiency into account. The proposed improvements are evaluated on three medical segmentation tasks with vastly different characteristics; pericardium segmentation in cardiac CTA images, region parcellation in brain MRI and multi-organ segmentation in whole-body CT images. Extensive experimental comparisons to previously published methods show promising results on par or better than state-of-the-art as of date.
  •  
7.
  • Alvén, Jennifer, 1989, et al. (authors)
  • Shape-aware label fusion for multi-atlas frameworks
  • 2019
  • In: Pattern Recognition Letters. - : Elsevier BV. - 0167-8655. ; 124, pp. 109-117
  • Journal article (peer-reviewed), abstract:
    • Despite having no explicit shape model, multi-atlas approaches to image segmentation have proved to be a top-performer for several diverse datasets and imaging modalities. In this paper, we show how one can directly incorporate shape regularization into the multi-atlas framework. Unlike traditional multi-atlas methods, our proposed approach does not rely on label fusion on the voxel level. Instead, each registered atlas is viewed as an estimate of the position of a shape model. We evaluate and compare our method on two public benchmarks: (i) the VISCERAL Grand Challenge on multi-organ segmentation of whole-body CT images and (ii) the Hammers brain atlas of MR images for segmenting the hippocampus and the amygdala. For this wide spectrum of both easy and hard segmentation tasks, our experimental quantitative results are on par or better than state-of-the-art. More importantly, we obtain qualitatively better segmentation boundaries, for instance, preserving topology and fine structures.
  •  
8.
  • Alvén, Jennifer, 1989, et al. (authors)
  • Shape-aware multi-atlas segmentation
  • 2016
  • In: Proceedings - International Conference on Pattern Recognition. - 1051-4651. ; 0, pp. 1101-1106
  • Conference paper (peer-reviewed), abstract:
    • Despite having no explicit shape model, multi-atlas approaches to image segmentation have proved to be a top-performer for several diverse datasets and imaging modalities. In this paper, we show how one can directly incorporate shape regularization into the multi-atlas framework. Unlike traditional methods, our proposed approach does not rely on label fusion on the voxel level. Instead, each registered atlas is viewed as an estimate of the position of a shape model. We evaluate and compare our method on two public benchmarks: (i) the VISCERAL Grand Challenge on multi-organ segmentation of whole-body CT images and (ii) the Hammers brain atlas of MR images for segmenting the hippocampus and the amygdala. For this wide spectrum of both easy and hard segmentation tasks, our experimental quantitative results are on par or better than state-of-the-art. More importantly, we obtain qualitatively better segmentation boundaries, for instance, preserving fine structures.
  •  
9.
  • Alvén, Jennifer, 1989, et al. (authors)
  • Überatlas: Fast and robust registration for multi-atlas segmentation
  • 2016
  • In: Pattern Recognition Letters. - : Elsevier BV. - 0167-8655. ; 80, pp. 249-255
  • Journal article (peer-reviewed), abstract:
    • Multi-atlas segmentation has become a frequently used tool for medical image segmentation due to its outstanding performance. A computational bottleneck is that all atlas images need to be registered to a new target image. In this paper, we propose an intermediate representation of the whole atlas set – an überatlas – that can be used to speed up the registration process. The representation consists of feature points that are similar and detected consistently throughout the atlas set. A novel feature-based registration method is presented which uses the überatlas to simultaneously and robustly find correspondences and affine transformations to all atlas images. The method is evaluated on 20 CT images of the heart and 30 MR images of the brain with corresponding ground truth. Our approach succeeds in producing better and more robust segmentation results compared to three baseline methods, two intensity-based and one feature-based, and significantly reduces the running times.
  •  
10.
  • Alvén, Jennifer, 1989, et al. (authors)
  • Überatlas: Robust Speed-Up of Feature-Based Registration and Multi-Atlas Segmentation
  • 2015
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Cham : Springer International Publishing. - 1611-3349 .- 0302-9743. - 9783319196640 ; 9127, pp. 92-102
  • Conference paper (peer-reviewed), abstract:
    • Registration is a key component in multi-atlas approaches to medical image segmentation. Current state of the art uses intensity-based registration methods, but such methods tend to be slow, which sets practical limitations on the size of the atlas set. In this paper, a novel feature-based registration method for affine registration is presented. The algorithm constructs an abstract representation of the entire atlas set, an überatlas, through clustering of features that are similar and detected consistently through the atlas set. This is done offline. At runtime only the feature clusters are matched to the target image, simultaneously yielding robust correspondences to all atlases in the atlas set from which the affine transformations can be estimated efficiently. The method is evaluated on 20 CT images of the heart and 30 MR images of the brain with corresponding gold standards. Our approach succeeds in producing better and more robust segmentation results compared to two baseline methods, one intensity-based and one feature-based, and significantly reduces the running times.
  •  
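Both Überatlas papers above end by estimating affine transformations from feature correspondences. A minimal sketch of that estimation step, done by least squares on synthetic noise-free point pairs (the papers' actual RANSAC-based, robust procedure is more involved):

```python
import numpy as np

# Sketch of affine estimation from point correspondences by least squares.
# The points and the "true" transform are synthetic; with no noise, the
# least-squares solution recovers the transform exactly.
rng = np.random.default_rng(3)
A_true = np.array([[1.1, 0.2], [-0.1, 0.9]])  # linear part
t_true = np.array([5.0, -3.0])                # translation

src = rng.uniform(0, 100, size=(10, 2))       # feature points in the target
dst = src @ A_true.T + t_true                 # matched points in an atlas

# Solve dst = src @ A.T + t in homogeneous form: [src | 1] @ P = dst.
X = np.hstack([src, np.ones((len(src), 1))])
params, *_ = np.linalg.lstsq(X, dst, rcond=None)
A_est, t_est = params[:2].T, params[2]
print(np.allclose(A_est, A_true), np.allclose(t_est, t_true))
```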
11.
  • Fagman, Erika, et al. (authors)
  • High-quality annotations for deep learning enabled plaque analysis in SCAPIS cardiac computed tomography angiography
  • 2023
  • In: Heliyon. - : Elsevier BV. - 2405-8440. ; 9:5
  • Journal article (peer-reviewed), abstract:
    • Background: Plaque analysis with coronary computed tomography angiography (CCTA) is a promising tool to identify high risk of future coronary events. The analysis process is time-consuming, and requires highly trained readers. Deep learning models have proved to excel at similar tasks, however, training these models requires large sets of expert-annotated training data. The aims of this study were to generate a large, high-quality annotated CCTA dataset derived from the Swedish CArdioPulmonary BioImage Study (SCAPIS), report the reproducibility of the annotation core lab and describe the plaque characteristics and their association with established risk factors. Methods and results: The coronary artery tree was manually segmented using semi-automatic software by four primary and one senior secondary reader. A randomly selected sample of 469 subjects, all with coronary plaques and stratified for cardiovascular risk using the Systematic Coronary Risk Evaluation (SCORE), were analyzed. The reproducibility study (n = 78) showed an agreement for plaque detection of 0.91 (0.84-0.97). The mean percentage difference for plaque volumes was -0.6% and the mean absolute percentage difference was 19.4% (CV 13.7%, ICC 0.94). There was a positive correlation between SCORE and total plaque volume (rho = 0.30, p < 0.001) and total low attenuation plaque volume (rho = 0.29, p < 0.001). Conclusions: We have generated a CCTA dataset with high-quality plaque annotations showing good reproducibility and an expected correlation between plaque features and cardiovascular risk. The stratified data sampling has enriched high-risk plaques making the data well suited as training, validation and test data for a fully automatic analysis tool based on deep learning.
  •  
12.
  • Fejne, Frida, 1986, et al. (authors)
  • Multiatlas Segmentation Using Robust Feature-Based Registration
  • 2017
  • In: Cloud-Based Benchmarking of Medical Image Analysis. - Cham : Springer International Publishing. - 9783319496429 ; pp. 203-218
  • Book chapter (other academic/artistic), abstract:
    • This paper presents a pipeline which uses a multiatlas approach for multiorgan segmentation in whole-body CT images. In order to obtain accurate registrations between the target and the atlas images, we develop an adapted feature-based method which uses organ-specific features. These features are learnt during an offline preprocessing step, and thus, the algorithm still benefits from the speed of feature-based registration methods. These feature sets are then used to obtain pairwise non-rigid transformations using RANSAC followed by a thin-plate spline refinement or NiftyReg. The fusion of the transferred atlas labels is performed using a random forest classifier, and finally, the segmentation is obtained using graph cuts with a Potts model as interaction term. Our pipeline was evaluated on 20 organs in 10 whole-body CT images at the VISCERAL Anatomy Challenge, in conjunction with the International Symposium on Biomedical Imaging, Brooklyn, New York, in April 2015. It performed best on the majority of the organs, with respect to the Dice index.
  •  
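The Dice index used for evaluation above measures volume overlap between a predicted and a reference mask. A minimal sketch with small synthetic 2D masks:

```python
import numpy as np

# Dice index: 2|A ∩ B| / (|A| + |B|). Masks below are synthetic stand-ins
# for a predicted and a reference organ segmentation.
def dice(a: np.ndarray, b: np.ndarray) -> float:
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:6] = True    # 16 pixels
truth = np.zeros((8, 8), dtype=bool)
truth[3:7, 3:7] = True   # 16 pixels, 9 of them overlapping pred
print(dice(pred, truth)) # 2*9/32 = 0.5625
```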
13.
  • Flehr, Alison, et al. (authors)
  • Development of a novel method to measure bone marrow fat fraction in older women using high-resolution peripheral quantitative computed tomography
  • 2022
  • In: Osteoporosis International. - : Springer Science and Business Media LLC. - 0937-941X .- 1433-2965. ; 33:7, pp. 1545-1556
  • Journal article (peer-reviewed), abstract:
    • Bone marrow adipose tissue (BMAT) has been implicated in a number of conditions associated with bone deterioration and osteoporosis. Several studies have found an inverse relationship between BMAT and bone mineral density (BMD), and higher levels of BMAT in those with prevalent fracture. Magnetic resonance imaging (MRI) is the gold standard for measuring BMAT, but its use is limited by high costs and low availability. We hypothesized that BMAT could also be accurately quantified using high-resolution peripheral quantitative computed tomography (HR-pQCT). Methods: In the present study, a novel method to quantify the tibia bone marrow fat fraction, defined by MRI, using HR-pQCT was developed. In total, 38 postmenopausal women (mean [standard deviation] age 75.9 [3.1] years) were included and measured at the same site at the distal (n = 38) and ultradistal (n = 18) tibia using both MRI and HR-pQCT. To adjust for partial volume effects, the HR-pQCT images underwent 0 to 10 layers of voxel peeling to remove voxels adjacent to the bone. Linear regression equations were then tested for different degrees of voxel peeling, using the MRI-derived fat fractions as the dependent variable and the HR-pQCT-derived radiodensity as the independent variables. Results: The most optimal HR-pQCT derived model, which applied a minimum of 4 layers of peeled voxel and with more than 1% remaining marrow volume, was able to explain 76% of the variation in the ultradistal tibia bone marrow fat fraction, measured with MRI (p < 0.001). Conclusion: The novel HR-pQCT method, developed to estimate BMAT, was able to explain a substantial part of the variation in the bone marrow fat fraction and can be used in future studies investigating the role of BMAT in osteoporosis and fracture prediction.
  •  
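The calibration described above is, at its core, a linear regression of the MRI-derived fat fraction on HR-pQCT radiodensity. A minimal sketch with synthetic paired measurements (the study's voxel-peeling adjustment for partial volume effects is omitted):

```python
import numpy as np

# Linear regression with MRI fat fraction as the dependent variable and
# HR-pQCT radiodensity as the independent variable. The paired values below
# are synthetic, chosen only to show the fitting and R^2 computation.
radiodensity = np.array([-20.0, 0.0, 15.0, 30.0, 45.0, 60.0])   # HU
fat_fraction = np.array([0.80, 0.72, 0.66, 0.58, 0.52, 0.45])   # MRI-derived

slope, intercept = np.polyfit(radiodensity, fat_fraction, 1)
pred = slope * radiodensity + intercept
r2 = 1 - np.sum((fat_fraction - pred) ** 2) / np.sum(
    (fat_fraction - fat_fraction.mean()) ** 2)
print(f"slope={slope:.4f}, intercept={intercept:.3f}, R^2={r2:.3f}")
```

In the study, R² of this kind of fit (76% at the ultradistal tibia) is what quantifies how much of the MRI fat-fraction variation HR-pQCT explains.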
14.
  • Hagberg, Eva, et al. (authors)
  • Semi-supervised learning with natural language processing for right ventricle classification in echocardiography—a scalable approach
  • 2022
  • In: Computers in Biology and Medicine. - : Elsevier BV. - 0010-4825 .- 1879-0534. ; 143
  • Journal article (peer-reviewed), abstract:
    • We created a deep learning model, trained on text classified by natural language processing (NLP), to assess right ventricular (RV) size and function from echocardiographic images. We included 12,684 examinations with corresponding written reports for text classification. After manual annotation of 1489 reports, we trained an NLP model to classify the remaining 10,651 reports. A view classifier was developed to select the 4-chamber or RV-focused view from an echocardiographic examination (n = 539). The final models were two image classification models trained on the predicted labels from the combined manual annotation and NLP models and the corresponding echocardiographic view to assess RV function (training set n = 11,008) and size (training set n = 9951). The text classifier identified impaired RV function with 99% sensitivity and 98% specificity and RV enlargement with 98% sensitivity and 98% specificity. The view classification model identified the 4-chamber view with 92% accuracy and the RV-focused view with 73% accuracy. The image classification models identified impaired RV function with 93% sensitivity and 72% specificity and an enlarged RV with 80% sensitivity and 85% specificity; agreement with the written reports was substantial (both κ = 0.65). Our findings show that models for automatic image assessment can be trained to classify RV size and function by using model-annotated data from written echocardiography reports. This pipeline for auto-annotation of the echocardiographic images, using an NLP model with medical reports as input, can be used to train an image-assessment model without manual annotation of images and enables fast and inexpensive expansion of the training dataset when needed.
  •  
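The sensitivity and specificity figures reported above come from a standard confusion-matrix computation; the label vectors below are synthetic.

```python
# Sensitivity = TP / (TP + FN), specificity = TN / (TN + FP), from binary
# reference labels and predictions. The vectors here are invented stand-ins.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(sensitivity, specificity)  # 0.8 0.8
```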
15.
  • Häggström, Ida, 1982, et al. (authors)
  • Deep learning for [18F]fluorodeoxyglucose-PET-CT classification in patients with lymphoma: a dual-centre retrospective analysis
  • 2024
  • In: The Lancet Digital Health. - 2589-7500. ; 6:2, pp. e114-e125
  • Journal article (peer-reviewed), abstract:
    • Background: The rising global cancer burden has led to an increasing demand for imaging tests such as [18F]fluorodeoxyglucose ([18F]FDG)-PET-CT. To aid imaging specialists in dealing with high scan volumes, we aimed to train a deep learning artificial intelligence algorithm to classify [18F]FDG-PET-CT scans of patients with lymphoma with or without hypermetabolic tumour sites. Methods: In this retrospective analysis we collected 16 583 [18F]FDG-PET-CTs of 5072 patients with lymphoma who had undergone PET-CT before or after treatment at the Memorial Sloan Kettering Cancer Center, New York, NY, USA. Using maximum intensity projection (MIP), three-dimensional (3D) PET, and 3D CT data, our ResNet34-based deep learning model (Lymphoma Artificial Reader System [LARS]) for [18F]FDG-PET-CT binary classification (Deauville 1–3 vs 4–5) was trained on 80% of the dataset, and tested on 20% of this dataset. For external testing, 1000 [18F]FDG-PET-CTs were obtained from a second centre (Medical University of Vienna, Vienna, Austria). Seven model variants were evaluated, including MIP-based LARS-avg (optimised for accuracy) and LARS-max (optimised for sensitivity), and 3D PET-CT-based LARS-ptct. Following expert curation, areas under the curve (AUCs), accuracies, sensitivities, and specificities were calculated. Findings: In the internal test cohort (3325 PET-CTs, 1012 patients), LARS-avg achieved an AUC of 0·949 (95% CI 0·942–0·956), accuracy of 0·890 (0·879–0·901), sensitivity of 0·868 (0·851–0·885), and specificity of 0·913 (0·899–0·925); LARS-max achieved an AUC of 0·949 (0·942–0·956), accuracy of 0·868 (0·858–0·879), sensitivity of 0·909 (0·896–0·924), and specificity of 0·826 (0·808–0·843); and LARS-ptct achieved an AUC of 0·939 (0·930–0·948), accuracy of 0·875 (0·864–0·887), sensitivity of 0·836 (0·817–0·855), and specificity of 0·915 (0·901–0·927).
In the external test cohort (1000 PET-CTs, 503 patients), LARS-avg achieved an AUC of 0·953 (0·938–0·966), accuracy of 0·907 (0·888–0·925), sensitivity of 0·874 (0·843–0·904), and specificity of 0·949 (0·921–0·960); LARS-max achieved an AUC of 0·952 (0·937–0·965), accuracy of 0·898 (0·878–0·916), sensitivity of 0·899 (0·871–0·926), and specificity of 0·897 (0·871–0·922); and LARS-ptct achieved an AUC of 0·932 (0·915–0·948), accuracy of 0·870 (0·850–0·891), sensitivity of 0·827 (0·793–0·863), and specificity of 0·913 (0·889–0·937). Interpretation: Deep learning accurately distinguishes between [18F]FDG-PET-CT scans of lymphoma patients with and without hypermetabolic tumour sites. Deep learning might therefore be potentially useful to rule out the presence of metabolically active disease in such patients, or serve as a second reader or decision support tool. Funding: National Institutes of Health-National Cancer Institute Cancer Center Support Grant.
  •  
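The AUCs reported above can be understood through the rank-based (Mann-Whitney) formulation of ROC AUC: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A sketch with synthetic scores:

```python
# ROC AUC via the rank-sum formulation: the fraction of positive/negative
# pairs in which the positive scores higher (ties count half). The scores
# and labels here are synthetic.
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0, 1, 0]

pos = [s for s, l in zip(scores, labels) if l == 1]
neg = [s for s, l in zip(scores, labels) if l == 0]
wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
auc = wins / (len(pos) * len(neg))
print(auc)  # 0.75
```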
16.
  • Kahl, Fredrik, 1972, et al. (authors)
  • Good Features for Reliable Registration in Multi-Atlas Segmentation
  • 2015
  • In: CEUR Workshop Proceedings. - 1613-0073. ; 1390:January, pp. 12-17
  • Conference paper (peer-reviewed), abstract:
    • This work presents a method for multi-organ segmentation in whole-body CT images based on a multi-atlas approach. A robust and efficient feature-based registration technique is developed which uses sparse organ-specific features that are learnt based on their ability to register different organ types accurately. The best fitted feature points are used in RANSAC to estimate an affine transformation, followed by a thin-plate spline refinement. This yields an accurate and reliable nonrigid transformation for each organ, which is independent of initialization and hence does not suffer from the local minima problem. Further, this is accomplished at a fraction of the time required by intensity-based methods. The technique is embedded into a standard multi-atlas framework using label transfer and fusion, followed by a random forest classifier which produces the data term for the final graph cut segmentation. For a majority of the classes our approach outperforms the competitors at the VISCERAL Anatomy Grand Challenge on segmentation at ISBI 2015.
  •  
17.
  • Liu, Xixi, 1995, et al. (authors)
  • Deep Nearest Neighbors for Anomaly Detection in Chest X-Rays
  • 2024
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - 1611-3349 .- 0302-9743. ; 14349 LNCS, pp. 293-302
  • Conference paper (peer-reviewed), abstract:
    • Identifying medically abnormal images is crucial to the diagnosis procedure in medical imaging. Due to the scarcity of annotated abnormal images, most reconstruction-based approaches for anomaly detection are trained only with normal images. At test time, images with large reconstruction errors are declared abnormal. In this work, we propose a novel feature-based method for anomaly detection in chest x-rays in a setting where only normal images are provided during training. The model consists of lightweight adaptor and predictor networks on top of a pre-trained feature extractor. The parameters of the pre-trained feature extractor are frozen, and training only involves fine-tuning the proposed adaptor and predictor layers using Siamese representation learning. During inference, multiple augmentations are applied to the test image, and our proposed anomaly score is simply the geometric mean of the k-nearest neighbor distances between the augmented test image features and the training image features. Our method achieves state-of-the-art results on two challenging benchmark datasets, the RSNA Pneumonia Detection Challenge dataset, and the VinBigData Chest X-ray Abnormalities Detection dataset. Furthermore, we empirically show that our method is robust to different amounts of anomalies among the normal images in the training dataset. The code is available at: https://github.com/XixiLiu95/deep-kNN-anomaly-detection.
  •  
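The anomaly score described above is the geometric mean of k-nearest-neighbour distances between augmented test-image features and the training features. A minimal sketch with random vectors standing in for the learned representations:

```python
import numpy as np

# Sketch of a kNN-distance anomaly score: for each augmented view of the
# test image, find the k nearest training features, then take the geometric
# mean of all those distances. Feature vectors here are random stand-ins.
rng = np.random.default_rng(1)
train_feats = rng.normal(size=(100, 8))  # features of normal training images
test_feats = rng.normal(size=(5, 8))     # augmented views of one test image
k = 3

dists = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
knn = np.sort(dists, axis=1)[:, :k]      # k nearest training distances per view
score = np.exp(np.mean(np.log(knn)))     # geometric mean over views, neighbours
print(f"anomaly score: {score:.3f}")
```

A larger score indicates that the test features lie far from the normal training distribution, flagging the image as potentially abnormal.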
18.
  • Norlén, Alexander, 1988, et al. (authors)
  • Automatic pericardium segmentation and quantification of epicardial fat from computed tomography angiography
  • 2016
  • In: Journal of Medical Imaging. - 2329-4302 .- 2329-4310. ; 3:3
  • Journal article (peer-reviewed), abstract:
    • Recent findings indicate a strong correlation between the risk of future heart disease and the volume of adipose tissue inside of the pericardium. So far, large-scale studies have been hindered by the fact that manual delineation of the pericardium is extremely time-consuming and that existing methods for automatic delineation struggle with accuracy. In this paper, an efficient and fully automatic approach to pericardium segmentation and epicardial fat volume estimation is presented, based on a variant of multi-atlas segmentation for spatial initialization and a random forest classifier for accurate pericardium detection. Experimental validation on a set of 30 manually delineated Computed Tomography Angiography (CTA) volumes shows a significant improvement on state-of-the-art in terms of EFV estimation (mean absolute epicardial fat volume difference: 3.8 ml (4.7%), Pearson correlation: 0.99) with run-times suitable for large-scale studies (52 s). Further, the results compare favorably to inter-observer variability measured on 10 volumes.
  •  
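Once the pericardium is segmented, epicardial fat volume estimation typically reduces to counting voxels within an adipose HU window and scaling by the voxel volume. The window, volume, and mask below are synthetic stand-ins for illustration, not the paper's actual procedure.

```python
import numpy as np

# Illustrative fat-volume quantification inside a pericardium mask: voxels
# in an adipose HU window (commonly around -190 to -30 HU) are counted and
# scaled by the voxel volume. All data here are synthetic.
rng = np.random.default_rng(2)
volume_hu = rng.integers(-200, 200, size=(20, 20, 20))  # synthetic HU volume
pericardium = np.zeros(volume_hu.shape, dtype=bool)
pericardium[5:15, 5:15, 5:15] = True                    # stand-in mask

fat = pericardium & (volume_hu >= -190) & (volume_hu <= -30)
voxel_volume_ml = 0.5 * 0.5 * 0.5 / 1000.0  # 0.5 mm isotropic voxels -> ml
efv_ml = fat.sum() * voxel_volume_ml
print(f"epicardial fat volume: {efv_ml:.3f} ml")
```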
Publication type
journal article (9)
conference paper (6)
doctoral thesis (1)
book chapter (1)
licentiate thesis (1)
Content type
peer-reviewed (15)
other academic/artistic (3)
Author/editor
Alvén, Jennifer, 198 ... (18)
Kahl, Fredrik, 1972 (9)
Enqvist, Olof, 1981 (5)
Ulen, Johannes (4)
Larsson, Viktor (4)
Hjelmgren, Ola (4)
Landgren, Matilda (4)
Norlén, Alexander, 1 ... (3)
Bergström, Göran, 19 ... (2)
Hagberg, Eva (2)
Hagerman, David, 198 ... (2)
Larsson, Måns, 1989 (2)
Fagman, Erika (2)
Häggström, Ida, 1982 ... (2)
Liu, J. (1)
Kahl, Fredrik (1)
Vandenput, Liesbeth, ... (1)
Lorentzon, Mattias, ... (1)
Enqvist, Olof (1)
Johansson, Richard, ... (1)
Lagerstrand, Kerstin ... (1)
Hansson, Oskar (1)
Engvall, Jan (1)
Molnar, David (1)
Goncalves, Isabel (1)
Ostenfeld, Ellen (1)
Khan, Ali (1)
Schöll, Michael (1)
Heurling, Kerstin (1)
Smith, Ruben (1)
Strandberg, Olof (1)
Shen, Dinggang (1)
Yap, Pew-Thian (1)
Liu, Tianming (1)
Peters, Terry M. (1)
Staib, Lawrence H. (1)
Essert, Caroline (1)
Zhou, Sean (1)
Petersen, Richard, 1 ... (1)
Björnsson, E. (1)
Rossi-Norrlund, Raun ... (1)
Axelsson, Kristian F ... (1)
Markstad, Hanna (1)
Cederlund, K (1)
Zach, Christopher, 1 ... (1)
Westerbergh, Johan (1)
Brandberg, John, 196 ... (1)
Duvernoy, Olov (1)
Salles, Gilles (1)
Palmquist, Emma (1)
Higher education institution
Chalmers tekniska högskola (18)
Göteborgs universitet (7)
Lunds universitet (7)
Uppsala universitet (1)
Linköpings universitet (1)
Karolinska Institutet (1)
Language
English (18)
Research subject (UKÄ/SCB)
Engineering and Technology (16)
Natural Sciences (13)
Medical and Health Sciences (9)

