SwePub
Search the SwePub database


Result list for the search "WFRF:(Wang Chunliang)"


  • Result 1-50 of 100
1.
  • Bernard, Olivier, et al. (author)
  • Standardized evaluation system for left ventricular segmentation algorithms in 3D echocardiography.
  • 2016
  • In: IEEE Transactions on Medical Imaging. - : Institute of Electrical and Electronics Engineers (IEEE). - 0278-0062 .- 1558-254X. ; 35:4, s. 967-977
  • Journal article (peer-reviewed) abstract
    • Real-time 3D Echocardiography (RT3DE) has been proven to be an accurate tool for left ventricular (LV) volume assessment. However, identification of the LV endocardium remains a challenging task, mainly because of the low tissue/blood contrast of the images combined with typical artifacts. Several semi- and fully automatic algorithms have been proposed for segmenting the endocardium in RT3DE data in order to extract relevant clinical indices, but a systematic and fair comparison between such methods has so far been impossible due to the lack of a publicly available common database. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of the algorithms developed to segment the LV border in RT3DE. A database consisting of 45 multivendor cardiac ultrasound recordings acquired at different centers with corresponding reference measurements from 3 experts is made available. The algorithms from nine research groups were quantitatively evaluated and compared using the proposed online platform. The results showed that the best methods produce promising results with respect to the experts' measurements for the extraction of clinical indices, and that they offer good segmentation precision in terms of mean distance error in the context of the experts' variability range. The platform remains open for new submissions.
  •  
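An evaluation platform like the one in the entry above typically reports an overlap score and a contour-distance score. The following is a minimal illustrative sketch of those two common measures, not the benchmark's actual code; the function names are ours:

```python
import math

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks given as collections of voxel indices."""
    sa, sb = set(mask_a), set(mask_b)
    if not sa and not sb:
        return 1.0  # two empty masks agree perfectly
    return 2.0 * len(sa & sb) / (len(sa) + len(sb))

def mean_contour_distance(contour_a, contour_b):
    """Mean distance error: average nearest-neighbour distance from contour A to B."""
    nearest = lambda p: min(math.dist(p, q) for q in contour_b)
    return sum(nearest(p) for p in contour_a) / len(contour_a)
```

For example, `dice_coefficient([1, 2, 3], [2, 3, 4])` yields 2/3; a symmetric surface distance would average the measure over both directions.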
2.
  • Wang, Chunliang, et al. (author)
  • Automatic heart and vessel segmentation using random forests and a local phase guided level set method
  • 2017
  • In: Reconstruction, Segmentation, and Analysis Of Medical Images. - Cham : Springer Verlag. - 9783319522791 ; , s. 159-164
  • Conference paper (peer-reviewed) abstract
    • In this report, a novel automatic heart and vessel segmentation method is proposed. The heart segmentation pipeline consists of three major steps: heart localization using landmark detection, heart isolation using a statistical shape model, and myocardium segmentation using learning-based voxel classification and local phase analysis. In our preliminary tests, the proposed method achieved encouraging results.
  •  
3.
  • Zhuang, Xiahai, et al. (author)
  • Evaluation of algorithms for Multi-Modality Whole Heart Segmentation : An open-access grand challenge.
  • 2019
  • In: Medical Image Analysis. - : Elsevier BV. - 1361-8415 .- 1361-8423. ; 58
  • Journal article (peer-reviewed) abstract
    • Knowledge of whole heart anatomy is a prerequisite for many clinical applications. Whole heart segmentation (WHS), which delineates substructures of the heart, can be very valuable for modeling and analysis of the anatomy and functions of the heart. However, automating this segmentation can be challenging due to the large variation of the heart shape, and different image qualities of the clinical data. To achieve this goal, an initial set of training data is generally needed for constructing priors or for training. Furthermore, it is difficult to perform comparisons between different methods, largely due to differences in the datasets and evaluation metrics used. This manuscript presents the methodologies and evaluation results for the WHS algorithms selected from the submissions to the Multi-Modality Whole Heart Segmentation (MM-WHS) challenge, in conjunction with MICCAI 2017. The challenge provided 120 three-dimensional cardiac images covering the whole heart, including 60 CT and 60 MRI volumes, all acquired in clinical environments with manual delineation. Ten algorithms for CT data and eleven algorithms for MRI data, submitted from twelve groups, have been evaluated. The results showed that the performance of CT WHS was generally better than that of MRI WHS. The segmentation of the substructures for different categories of patients could present different levels of challenge due to the difference in imaging and variations of heart shapes. The deep learning (DL)-based methods demonstrated great potential, though several of them reported poor results in the blinded evaluation. Their performance could vary greatly across different network structures and training strategies. The conventional algorithms, mainly based on multi-atlas segmentation, demonstrated good performance, though the accuracy and computational efficiency could be limited. 
The challenge, including provision of the annotated training data and the blinded evaluation for submitted algorithms on the test data, continues as an ongoing benchmarking resource via its homepage (www.sdspeople.fudan.edu.cn/zhuangxiahai/0/mmwhs/).
  •  
4.
  • Andersson, Malin, et al. (author)
  • How to measure renal artery stenosis - a retrospective comparison of morphological measurement approaches in relation to hemodynamic significance
  • 2015
  • In: BMC Medical Imaging. - : BioMed Central. - 1471-2342. ; 15
  • Journal article (peer-reviewed) abstract
    • Background: Although it is well known that renal artery stenosis may cause renovascular hypertension, it is unclear how the degree of stenosis should best be measured in morphological images. The aim of this study was to determine which morphological measures from Computed Tomography Angiography (CTA) and Magnetic Resonance Angiography (MRA) are best in predicting whether a renal artery stenosis is hemodynamically significant or not. Methods: Forty-seven patients with hypertension and a clinical suspicion of renovascular hypertension were examined with CTA, MRA, captopril-enhanced renography (CER) and captopril test (Ctest). CTA and MRA images of the renal arteries were analyzed by two readers using interactive vessel segmentation software. The measures included minimum diameter, minimum area, diameter reduction and area reduction. In addition, two radiologists visually judged the diameter reduction without automated segmentation. The results were then compared using limits of agreement and intra-class correlation, and correlated with the results from CER combined with Ctest (which were used as standard of reference) using receiver operating characteristics (ROC) analysis. Results: A total of 68 kidneys had all three investigations (CTA, MRA and CER + Ctest), where 11 kidneys (16.2 %) had a positive result on the CER + Ctest. The greatest area under ROC curve (AUROC) was found for the area reduction on MRA, with a value of 0.91 (95 % confidence interval 0.82-0.99), excluding accessory renal arteries. For comparison, the AUROC for the radiologists' visual assessments on CTA and MRA were 0.90 (0.82-0.98) and 0.91 (0.83-0.99) respectively. None of the differences were statistically significant. Conclusions: No significant differences were found between the morphological measures in their ability to predict hemodynamically significant stenosis, though there was a tendency for MRA to have a higher AUROC than CTA.
There was no significant difference between measurements made by the radiologists and measurements made with fuzzy connectedness segmentation. Further studies are required to definitely identify the optimal measurement approach.
  •  
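The morphological measures and the ROC analysis in the study above reduce to simple arithmetic. The sketch below is an illustrative reimplementation, not the study's software; `auroc` uses the Mann-Whitney rank identity (probability that a positive case scores above a negative one, ties counting one half):

```python
def area_reduction(min_area, ref_area):
    """Percent area reduction of the stenotic segment relative to a reference segment."""
    return 100.0 * (1.0 - min_area / ref_area)

def auroc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U identity."""
    wins = sum(
        1.0 if sp > sn else 0.5 if sp == sn else 0.0
        for sp in scores_pos
        for sn in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))
```

For instance, a minimum lumen area of 10 mm² against a 40 mm² reference segment gives a 75 % area reduction.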
5.
  • Astaraki, Mehdi, PhD Student, 1984-, et al. (author)
  • A Comparative Study of Radiomics and Deep-Learning Based Methods for Pulmonary Nodule Malignancy Prediction in Low Dose CT Images
  • 2021
  • In: Frontiers in Oncology. - : Frontiers Media SA. - 2234-943X. ; 11
  • Journal article (peer-reviewed) abstract
    • Objectives: Both radiomics and deep learning methods have shown great promise in predicting lesion malignancy in various image-based oncology studies. However, it is still unclear which method to choose for a specific clinical problem given access to the same amount of training data. In this study, we compare the performance of a series of carefully selected conventional radiomics methods, end-to-end deep learning models, and deep-feature based radiomics pipelines for pulmonary nodule malignancy prediction on an open database that consists of 1297 manually delineated lung nodules. Methods: Conventional radiomics analysis was conducted by extracting standard handcrafted features from target nodule images. Several end-to-end deep classifier networks, including VGG, ResNet, DenseNet, and EfficientNet, were employed to identify lung nodule malignancy as well. In addition to the baseline implementations, we also investigated the importance of feature selection and class balancing, as well as separating the features learned in the nodule target region and the background/context region. By pooling the radiomics and deep features together in a hybrid feature set, we investigated the compatibility of these two sets with respect to malignancy prediction. Results: The best baseline conventional radiomics model, deep learning model, and deep-feature based radiomics model achieved AUROC values (mean ± standard deviations) of 0.792 ± 0.025, 0.801 ± 0.018, and 0.817 ± 0.032, respectively, through 5-fold cross-validation analyses. However, after trying out several optimization techniques, such as feature selection and data balancing, as well as adding context features, the corresponding best radiomics, end-to-end deep learning, and deep-feature based models achieved AUROC values of 0.921 ± 0.010, 0.824 ± 0.021, and 0.936 ± 0.011, respectively.
We achieved the best prediction accuracy from the hybrid feature set (AUROC: 0.938 ± 0.010). Conclusion: The end-to-end deep-learning model outperforms conventional radiomics out of the box without much fine-tuning. On the other hand, fine-tuning the models leads to significant improvements in prediction performance, where the conventional and deep-feature based radiomics models achieved comparable results. The hybrid radiomics method seems to be the most promising model for lung nodule malignancy prediction in this comparative study.
  •  
6.
  • Astaraki, Mehdi, PhD Student, 1984- (author)
  • Advanced Machine Learning Methods for Oncological Image Analysis
  • 2022
  • Doctoral thesis (other academic/artistic) abstract
    • Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into the segmentation networks to improve segmentation accuracy.
The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II aims to address a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach is proposed based on the concept of image inpainting to segment lung and head-neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy. Studies III and IV aim to automatically discriminate the benign from the malignant pulmonary nodules by analyzing the low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account the local intra-nodule heterogeneities and the global contextual information. Study IV seeks to compare the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end Deep Learning (DL) models, and deep-feature-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features into radiomic features for boosting the classification power. Study V focuses on the early assessment of lung tumor response to the applied treatments by proposing a novel feature set that can be interpreted physiologically. This feature set was employed to quantify the changes in the tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of the patients two years after the last session of treatments.
The discriminative power of the introduced imaging biomarkers was compared against the conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine-learning techniques in the methods developed for the quantitative assessment of tumor characteristics and contribute to the essential procedures of cancer diagnosis and prognosis.
  •  
7.
  • Astaraki, Mehdi, PhD Student, 1984-, et al. (author)
  • Benign-malignant pulmonary nodule classification in low-dose CT with convolutional features
  • 2021
  • In: Physica medica (Testo stampato). - : Elsevier BV. - 1120-1797 .- 1724-191X. ; 83, s. 146-153
  • Journal article (peer-reviewed) abstract
    • Purpose: Low-Dose Computed Tomography (LDCT) is the most common imaging modality for lung cancer diagnosis. The presence of nodules in the scans does not necessarily portend lung cancer, as there is an intricate relationship between nodule characteristics and lung cancer. Therefore, benign-malignant pulmonary nodule classification at early detection is a crucial step to improve diagnosis and prolong patient survival. The aim of this study is to propose a method for predicting nodule malignancy based on deep abstract features. Methods: To efficiently capture both intra-nodule heterogeneities and contextual information of the pulmonary nodules, a dual pathway model was developed to integrate the intra-nodule characteristics with contextual attributes. The proposed approach was implemented with both supervised and unsupervised learning schemes. A random forest model was added as a second component on top of the networks to generate the classification results. The discrimination power of the model was evaluated by calculating the Area Under the Receiver Operating Characteristic Curve (AUROC) metric. Results: Experiments on 1297 manually segmented nodules show that the integration of context and target supervised deep features has great potential for accurate prediction, resulting in a discrimination power of 0.936 in terms of AUROC, which outperformed the classification performance of the Kaggle 2017 challenge winner. Conclusion: Empirical results demonstrate that integrating nodule target and context images into a unified network improves the discrimination power, outperforming the conventional single pathway convolutional neural networks.
  •  
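The final stage described above, where features from the two pathways are pooled and passed to a random forest, can be caricatured with a toy stump ensemble. This is a didactic sketch under assumed names (`fuse_pathways`, `stump`), not the published model:

```python
def fuse_pathways(target_feats, context_feats):
    """Concatenate intra-nodule (target) and contextual features into one vector."""
    return list(target_feats) + list(context_feats)

def stump(index, threshold):
    """A depth-1 decision tree: votes malignant (1) when feature[index] > threshold."""
    return lambda feats: 1 if feats[index] > threshold else 0

def ensemble_probability(trees, feats):
    """Malignancy probability as the mean vote of the ensemble, as a forest reports it."""
    return sum(tree(feats) for tree in trees) / len(trees)
```

A real random forest would train many deeper trees on bootstrap samples; the voting step shown here is the part that produces the probability fed into the AUROC analysis.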
8.
  • Astaraki, Mehdi, PhD Student, 1984-, et al. (author)
  • Early survival prediction in non-small cell lung cancer from PET/CT images using an intra-tumor partitioning method
  • 2019
  • In: Physica medica (Testo stampato). - : Elsevier BV. - 1120-1797 .- 1724-191X. ; 60, s. 58-65
  • Journal article (peer-reviewed) abstract
    • Purpose: To explore prognostic and predictive values of a novel quantitative feature set describing intra-tumor heterogeneity in patients with lung cancer treated with concurrent and sequential chemoradiotherapy. Methods: Longitudinal PET-CT images of 30 patients with non-small cell lung cancer were analysed. To describe tumor cell heterogeneity, the tumors were partitioned into one to ten concentric regions depending on their sizes, and, for each region, the change in average intensity between the two scans was calculated for PET and CT images separately to form the proposed feature set. To validate the prognostic value of the proposed method, radiomics analysis was performed and a combination of the proposed novel feature set and the classic radiomic features was evaluated. A feature selection algorithm was utilized to identify the optimal features, and a linear support vector machine was trained for the task of overall survival prediction in terms of area under the receiver operating characteristic curve (AUROC). Results: The proposed novel feature set was found to be prognostic and even outperformed the radiomics approach with a significant difference (AUROC_SALoP = 0.90 vs. AUROC_radiomic = 0.71) when feature selection was not employed, whereas with feature selection, a combination of the novel feature set and radiomics led to the highest prognostic values. Conclusion: A novel feature set designed for capturing intra-tumor heterogeneity was introduced. Judging by their prognostic power, the proposed features have a promising potential for early survival prediction.
  •  
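The intra-tumor partitioning idea above (concentric regions around the tumor center, with the per-region change in mean intensity between two scans forming the features) can be sketched in 2D as follows. This is an illustrative reconstruction, not the authors' code; scans are modelled as dicts mapping voxel coordinates to intensity:

```python
import math

def concentric_shells(voxels, n_shells):
    """Partition tumor voxel coordinates into concentric shells around the centroid."""
    cx = sum(v[0] for v in voxels) / len(voxels)
    cy = sum(v[1] for v in voxels) / len(voxels)
    radii = [math.dist((cx, cy), v) for v in voxels]
    rmax = max(radii) or 1.0  # guard against a single-voxel tumor
    shells = [[] for _ in range(n_shells)]
    for v, r in zip(voxels, radii):
        shells[min(int(n_shells * r / rmax), n_shells - 1)].append(v)
    return shells

def shell_intensity_change(shells, scan_before, scan_after):
    """Per-shell change in mean intensity between two longitudinal scans."""
    changes = []
    for shell in shells:
        if not shell:  # an empty shell contributes no measurable change
            changes.append(0.0)
            continue
        before = sum(scan_before[v] for v in shell) / len(shell)
        after = sum(scan_after[v] for v in shell) / len(shell)
        changes.append(after - before)
    return changes
```

The resulting per-shell change vector (computed separately for PET and CT) is the kind of feature set the abstract describes feeding to the survival classifier.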
9.
  • Astaraki, Mehdi, PhD Student, 1984-, et al. (author)
  • Multimodal brain tumor segmentation with normal appearance autoencoder
  • 2019
  • In: International MICCAI Brainlesion Workshop. - Cham : Springer Nature. ; , s. 316-323
  • Conference paper (peer-reviewed) abstract
    • We propose a hybrid segmentation pipeline based on the autoencoders’ capability of anomaly detection. To this end, we first introduce a new augmentation technique to generate synthetic paired images. Taking advantage of the paired images, we propose a Normal Appearance Autoencoder (NAA) that is able to remove tumors and thus reconstruct realistic-looking, tumor-free images. After estimating the regions where the abnormalities potentially exist, a segmentation network is guided toward the candidate region. We tested the proposed pipeline on the BraTS 2019 database. The preliminary results indicate that the proposed model improved the segmentation accuracy of brain tumor subregions compared to the U-Net model.
  •  
10.
  • Astaraki, Mehdi, PhD Student, 1984-, et al. (author)
  • Normal appearance autoencoder for lung cancer detection and segmentation
  • 2019
  • In: International Conference on Medical Image Computing and Computer-Assisted Intervention. - Cham : Springer Nature. ; , s. 249-256
  • Conference paper (peer-reviewed) abstract
    • One of the major differences between medical doctor training and machine learning is that doctors are trained to recognize normal/healthy anatomy first. Knowing the healthy appearance of anatomical structures helps doctors to make better judgements when some abnormality shows up in an image. In this study, we propose a normal appearance autoencoder (NAA) that removes abnormalities from a diseased image. This autoencoder is semi-automatically trained using another partial convolutional in-paint network that is trained using healthy subjects only. The output of the autoencoder is then fed to a segmentation net in addition to the original input image, i.e. the latter gets both the diseased image and a simulated healthy image where the lesion is artificially removed. By getting access to knowledge of how the abnormal region is supposed to look, we hypothesized that the segmentation network could perform better than just being shown the original slice. We tested the proposed network on the LIDC-IDRI dataset for lung cancer detection and segmentation. The preliminary results show that the NAA approach improved segmentation accuracy substantially in comparison with the conventional U-Net architecture.
  •  
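The core trick in the entry above, handing the segmentation network both the diseased slice and its simulated healthy counterpart, can be sketched in a few lines. The names (`abnormality_prior`, `candidate_region`) are illustrative, and images are flattened to plain lists for brevity:

```python
def abnormality_prior(diseased, pseudo_healthy):
    """Voxel-wise absolute difference between the input and its tumor-free
    reconstruction; large values mark regions the NAA could not explain as healthy."""
    return [abs(d - h) for d, h in zip(diseased, pseudo_healthy)]

def candidate_region(prior, threshold):
    """Binary mask of voxels whose residual exceeds a threshold: lesion candidates."""
    return [value > threshold for value in prior]

def stack_inputs(diseased, pseudo_healthy):
    """Two-channel input for the segmentation network: original plus reconstruction."""
    return [diseased, pseudo_healthy]
```

The hypothesis in the abstract is precisely that the second channel (or the residual derived from it) tells the network where the anatomy stops looking healthy.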
12.
  • Astaraki, Mehdi, PhD Student, 1984-, et al. (author)
  • Prior-aware autoencoders for lung pathology segmentation
  • 2022
  • In: Medical Image Analysis. - : Elsevier BV. - 1361-8415 .- 1361-8423. ; 80, s. 102491-
  • Journal article (peer-reviewed) abstract
    • Segmentation of lung pathology in Computed Tomography (CT) images is of great importance for lung disease screening. However, the presence of different types of lung pathologies with a wide range of heterogeneities in size, shape, location, and texture, on one side, and their visual similarity with respect to surrounding tissues, on the other side, make it challenging to perform reliable automatic lesion segmentation. To improve segmentation performance, we propose a deep learning framework comprising a Normal Appearance Autoencoder (NAA) model to learn the distribution of healthy lung regions and reconstruct pathology-free images from the corresponding pathological inputs by replacing the pathological regions with the characteristics of healthy tissues. Detected regions that represent prior information regarding the shape and location of pathologies are then integrated into a segmentation network to guide the attention of the model into more meaningful delineations. The proposed pipeline was tested on three types of lung pathologies, including pulmonary nodules, Non-Small Cell Lung Cancer (NSCLC), and Covid-19 lesions, on five comprehensive datasets. The results show the superiority of the proposed prior model, which outperformed the baseline segmentation models in all the cases with significant margins. On average, adding the prior model improved the Dice coefficient for the segmentation of lung nodules by 0.038, NSCLCs by 0.101, and Covid-19 lesions by 0.041. We conclude that the proposed NAA model produces reliable prior knowledge regarding the lung pathologies, and integrating such knowledge into a prior-aware segmentation network leads to more accurate delineations.
  •  
14.
  • Bendazzoli, Simone, et al. (author)
  • Automatic rat brain segmentation from MRI using statistical shape models and random forest
  • 2019
  • In: Medical Imaging 2019. - : SPIE-Int Soc Optical Engineering. - 9781510625464 - 9781510625457
  • Conference paper (peer-reviewed) abstract
    • In MRI neuroimaging, the shimming procedure is used before image acquisition to correct for inhomogeneity of the static magnetic field within the brain. To correctly adjust the field, the brain's location and edges must first be identified from quickly-acquired low resolution data. This process is currently carried out manually by an operator, which can be time-consuming and not always accurate. In this work, we implement a quick and automatic technique for brain segmentation to be potentially used during the shimming. Our method is based on two main steps. First, a random forest classifier is used to get a preliminary segmentation from an input MRI image. Subsequently, a statistical shape model of the brain, which was previously generated from ground-truth segmentations, is fitted to the output of the classifier to obtain a model-based segmentation mask. In this way, a-priori knowledge on the brain's shape is included in the segmentation pipeline. The proposed methodology was tested on low resolution images of rat brains and further validated on rabbit brain images of higher resolution. Our results suggest that the present method is promising for the desired purpose in terms of time efficiency, segmentation accuracy and repeatability. Moreover, the use of shape modeling was shown to be particularly useful when handling low-resolution data, which could lead to erroneous classifications when using only machine learning-based methods.
  •  
16.
  • Bilic, Patrick, et al. (author)
  • The Liver Tumor Segmentation Benchmark (LiTS)
  • 2023
  • In: Medical Image Analysis. - : Elsevier BV. - 1361-8415 .- 1361-8423. ; 84, s. 102680-
  • Journal article (peer-reviewed) abstract
    • In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances and various lesion-to-background levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that not a single algorithm performed best for both liver and liver tumors in the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
  •  
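Lesion-wise recall, the detection metric quoted above, counts a reference lesion as found when the prediction covers enough of it. A minimal sketch with an assumed 50 % overlap criterion (the benchmark's exact detection criterion may differ):

```python
def lesion_recall(reference_lesions, predicted_voxels, min_overlap=0.5):
    """Fraction of reference lesions detected: a lesion counts as found when at
    least min_overlap of its voxels fall inside the predicted segmentation."""
    predicted = set(predicted_voxels)
    hits = sum(
        1
        for lesion in reference_lesions
        if len(set(lesion) & predicted) / len(lesion) >= min_overlap
    )
    return hits / len(reference_lesions)
```

This is why a model can score a high Dice overall yet a low lesion-wise recall: missing many small lesions barely moves the voxel overlap but directly lowers this count.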
18.
  • Brusini, Irene, et al. (author)
  • Changes in brain architecture are consistent with altered fear processing in domestic rabbits
  • 2018
  • In: Proceedings of the National Academy of Sciences of the United States of America. - : National Academy of Sciences. - 0027-8424 .- 1091-6490. ; 115:28, s. 7380-7385
  • Journal article (peer-reviewed) abstract
    • The most characteristic feature of domestic animals is their change in behavior associated with selection for tameness. Here we show, using high-resolution brain magnetic resonance imaging in wild and domestic rabbits, that domestication reduced amygdala volume and enlarged medial prefrontal cortex volume, supporting that areas driving fear have lost volume while areas modulating negative affect have gained volume during domestication. In contrast to the localized gray matter alterations, white matter anisotropy was reduced in the corona radiata, corpus callosum, and the subcortical white matter. This suggests a compromised white matter structural integrity in projection and association fibers affecting both afferent and efferent neural flow, consistent with reduced neural processing. We propose that compared with their wild ancestors, domestic rabbits are less fearful and have an attenuated flight response because of these changes in brain architecture.
  •  
19.
  • Brusini, Irene, et al. (author)
  • Fully automatic estimation of the waist of the nerve fiber layer at the optic nerve head angularly resolved
  • 2021
  • In: Progress in Biomedical Optics and Imaging - Proceedings of SPIE. - : SPIE-Intl Soc Optical Eng. ; , s. 1D1-1D8
  • Conference paper (peer-reviewed) abstract
    • The present project aims to develop fully automatic software for estimation of the waist of the nerve fiber layer in the Optic Nerve Head (ONH), angularly resolved in the frontal plane, as a tool for morphometric monitoring of glaucoma. The waist of the nerve fiber layer is here defined as the Pigment epithelium central limit – Inner limit of the retina – Minimal Distance (PIMD). 3D representations of the ONH were collected with high resolution OCT in young non-glaucomatous eyes and in glaucomatous eyes. An improved tool for manual annotation was developed in Python. This tool was found to be user friendly and to provide sufficiently precise manual annotations. PIMD was automatically estimated with software consisting of one AI model for detection of the inner limit of the retina and another AI model for localization of the Optic nerve head Pigment epithelium Central limit (OPCL). In the current project, the AI model for OPCL localization was retrained with new data manually annotated with the improved annotation tool in both non-glaucomatous and glaucomatous eyes. Finally, automatic annotations were compared to annotations made by 3 independent annotators in an independent subset of both the non-glaucomatous and the glaucomatous eyes. The fully automatic estimation of the PIMD-angle profile overlapped with the 3 manual annotations, with small variation among the annotators. Considering interobserver variation, the improved annotation tool provided less variation than our original tool in non-glaucomatous eyes, suggesting that the variation in glaucomatous eyes is due to variable pathological anatomy that is difficult to annotate without error. The small relative variation, in relation to the substantial overall loss of PIMD in glaucomatous compared to non-glaucomatous eyes, suggests that our software for fully automatic estimation of the PIMD-angle profile can now be implemented clinically for monitoring of glaucoma progression.
  •  
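The PIMD measure described above is, at its core, a plain geometric quantity: for each angular position, the minimal distance from the OPCL point to the inner limit of the retina. A sketch under assumed 2D point representations (illustrative, not the project's software):

```python
import math

def pimd_profile(opcl_by_angle, inner_limit_points):
    """Angularly resolved PIMD: for each OPCL point (one per angle), the minimal
    distance to the inner limit of the retina, here modelled as a point set."""
    return [
        min(math.dist(opcl, q) for q in inner_limit_points)
        for opcl in opcl_by_angle
    ]
```

In the actual pipeline, the two AI models provide the OPCL points and the inner-limit surface, and this minimal-distance computation is evaluated around the full circumference of the ONH.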
20.
  • Brusini, Irene (author)
  • Methods for the analysis and characterization of brain morphology from MRI images
  • 2022
  • Doctoral thesis (other academic/artistic) abstract
    • Brain magnetic resonance imaging (MRI) is an imaging modality that produces detailed images of the brain without using any ionizing radiation. From a structural MRI scan, it is possible to extract morphological properties of different brain regions, such as their volume and shape. These measures can both allow a better understanding of how the brain changes due to multiple factors (e.g., environmental and pathological) and contribute to the identification of new imaging biomarkers of neurological and psychiatric diseases. The overall goal of the present thesis is to advance the knowledge on how brain MRI image processing can be effectively used to analyze and characterize brain structure. The first two works presented in this thesis are animal studies that primarily aim to use MRI data for analyzing differences between groups of interest. In Paper I, MRI scans from wild and domestic rabbits were processed to identify structural brain differences between these two groups. Domestication was found to significantly reshape brain structure in terms of both regional gray matter volume and white matter integrity. In Paper II, rat brain MRI scans were used to train a brain age prediction model. This model was then tested on both controls and a group of rats that underwent long-term environmental enrichment and dietary restriction. This healthy lifestyle intervention was shown to significantly affect the predicted brain age trajectories by slowing the rats' aging process compared to controls. Furthermore, brain age predicted on young adult rats was found to have a significant effect on survival. Papers III to V are human studies that propose deep learning-based methods for segmenting brain structures that can be severely affected by neurodegeneration. In particular, Papers III and IV focus on U-Net-based 2D segmentation of the corpus callosum (CC) in multiple sclerosis (MS) patients.
In both studies, good segmentation accuracy was obtained and a significant correlation was found between CC area and the patient's level of cognitive and physical disability. Additionally, in Paper IV, shape analysis of the segmented CC revealed a significant association between disability and both CC thickness and bending angle. Finally, in Paper V, a novel method for automatic segmentation of the hippocampus is proposed, which consists of embedding a statistical shape prior as context information into a U-Net-based framework. The inclusion of shape information was shown to significantly improve segmentation accuracy when testing the method on a new unseen cohort (i.e., different from the one used for training). Furthermore, good performance was observed across three different diagnostic groups (healthy controls, subjects with mild cognitive impairment and Alzheimer's patients) that were characterized by different levels of hippocampal atrophy. In summary, the studies presented in this thesis support the great value of MRI image analysis for the advancement of neuroscientific knowledge, and their contribution is mostly two-fold. First, by applying well-established processing methods to datasets that had not yet been explored in the literature, it was possible to characterize specific brain changes and address relevant problems of a clinical or biological nature. Second, a technical contribution is provided by modifying and extending already-existing brain image processing methods to achieve good performance on new datasets.
  •  
21.
  • Brusini, Irene, et al. (author)
  • MRI-derived brain age as a biomarker of ageing in rats : validation using a healthy lifestyle intervention
  • 2022
  • In: Neurobiology of Aging. - : Elsevier BV. - 0197-4580 .- 1558-1497. ; 109, s. 204-215
  • Journal article (peer-reviewed)abstract
    • The difference between brain age predicted from MRI and chronological age (the so-called BrainAGE) has been proposed as an ageing biomarker. We analyse its cross-species potential by testing it on rats undergoing an ageing modulation intervention. Our rat brain age prediction model combined Gaussian process regression with a classifier and achieved a mean absolute error (MAE) of 4.87 weeks using cross-validation on a longitudinal dataset of 31 normally ageing rats. It was then tested on two groups of 24 rats (MAE = 9.89 weeks, correlation coefficient = 0.86): controls vs. a group under long-term environmental enrichment and dietary restriction (EEDR). Using a linear mixed-effects model, BrainAGE was found to increase more slowly with chronological age in EEDR rats (p = 0.015 for the interaction term). Cox regression showed that older BrainAGE at 5 months was associated with higher mortality risk (p = 0.03). Our findings suggest that lifestyle-related prevention approaches may help to slow down brain ageing in rodents, and support the potential of BrainAGE as a predictor of age-related health outcomes.
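As an illustrative aside, the BrainAGE measure described in this entry is simply predicted brain age minus chronological age, and MAE is the mean absolute value of that gap. A minimal sketch with made-up ages (not data from the study):

```python
# BrainAGE and MAE, as defined in the abstract above.
# All numbers below are hypothetical illustrative values.

def brain_age_gap(predicted_ages, chronological_ages):
    """Per-subject BrainAGE values: predicted minus chronological age."""
    return [p - c for p, c in zip(predicted_ages, chronological_ages)]

def mean_absolute_error(predicted_ages, chronological_ages):
    """MAE between predicted and true ages (in weeks for the rat model)."""
    gaps = brain_age_gap(predicted_ages, chronological_ages)
    return sum(abs(g) for g in gaps) / len(gaps)

predicted = [52.0, 61.5, 70.0]      # hypothetical predicted ages (weeks)
chronological = [50.0, 65.0, 68.0]  # hypothetical true ages (weeks)

print(brain_age_gap(predicted, chronological))       # [2.0, -3.5, 2.0]
print(mean_absolute_error(predicted, chronological)) # 2.5
```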
  •  
22.
  • Brusini, Irene, et al. (author)
  • Shape Information Improves the Cross-Cohort Performance of Deep Learning-Based Segmentation of the Hippocampus
  • 2020
  • In: Frontiers in Neuroscience. - : Frontiers Media S.A.. - 1662-4548 .- 1662-453X. ; 14
  • Journal article (peer-reviewed)abstract
    • Performing an accurate segmentation of the hippocampus from brain magnetic resonance images is a crucial task in neuroimaging research, since its structural integrity is strongly related to several neurodegenerative disorders, including Alzheimer's disease (AD). Some automatic segmentation tools are already being used, but, in recent years, new deep learning (DL)-based methods have been proven to be much more accurate in various medical image segmentation tasks. In this work, we propose a DL-based hippocampus segmentation framework that embeds statistical shape of the hippocampus as context information into the deep neural network (DNN). The inclusion of shape information is achieved with three main steps: (1) a U-Net-based segmentation, (2) a shape model estimation, and (3) a second U-Net-based segmentation which uses both the original input data and the fitted shape model. The trained DL architectures were tested on image data of three diagnostic groups [AD patients, subjects with mild cognitive impairment (MCI) and controls] from two cohorts (ADNI and AddNeuroMed). Both intra-cohort validation and cross-cohort validation were performed and compared with the conventional U-net architecture and some variations with other types of context information (i.e., autocontext and tissue-class context). Our results suggest that adding shape information can improve the segmentation accuracy in cross-cohort validation, i.e., when DNNs are trained on one cohort and applied to another. However, no significant benefit is observed in intra-cohort validation, i.e., training and testing DNNs on images from the same cohort. Moreover, compared to other types of context information, the use of shape context was shown to be the most successful in increasing the accuracy, while keeping the computational time in the order of a few minutes.
  •  
23.
  • Buizza, Giulia, et al. (author)
  • Early tumor response prediction for lung cancer patients using novel longitudinal pattern features from sequential PET/CT image scans
  • 2018
  • In: Physica medica (Testo stampato). - : ELSEVIER SCI LTD. - 1120-1797 .- 1724-191X. ; 54, s. 21-29
  • Journal article (peer-reviewed)abstract
    • Purpose: A new set of quantitative features that capture intensity changes in PET/CT images over time and space is proposed for assessing the tumor response early during chemoradiotherapy. The hypothesis that the new features, combined with machine learning, improve outcome prediction is tested. Methods: The proposed method is based on dividing the tumor volume into successive zones depending on the distance to the tumor border. Mean intensity changes are computed within each zone, for CT and PET scans separately, and used as image features for tumor response assessment. In this way, tumors are described by accounting for temporal and spatial changes at the same time. Using linear support vector machines, the new features were tested on 30 non-small cell lung cancer patients who underwent sequential or concurrent chemoradiotherapy. Prediction of 2-year overall survival was based on two PET/CT scans, acquired before the start and during the first 3 weeks of treatment. The predictive power of the newly proposed longitudinal pattern features was compared to that of previously proposed radiomics features and radiobiological parameters. Results: The highest areas under the receiver operating characteristic curves were 0.98 and 0.93 for patients treated with sequential and concurrent chemoradiotherapy, respectively. Results showed an overall comparable performance with respect to radiomics features and radiobiological parameters. Conclusions: A novel set of quantitative image features, based on underlying tumor physiology, was computed from PET/CT scans and successfully employed to distinguish between early responders and non-responders to chemoradiotherapy.
  •  
24.
  •  
25.
  • Carrizo, Garrizo, et al. (author)
  • Fully automatic estimation of the angular distribution of the waist of the nerve fiber layer in the optic nerve head
  • 2020
  • In: Ophthalmic Technologies XXX. - : SPIE-Intl Soc Optical Eng.
  • Conference paper (peer-reviewed)abstract
    • In this paper, an automatic strategy for measuring the thickness of the nerve fiber layer around the optic nerve head is proposed. The strategy presented uses two independent 2D U-nets that each perform a segmentation task. One network learns to segment the vitreous body in the standard Cartesian image domain and the second learns to segment a disc around a point of interest in a polar image domain. The outputs from the two neural networks are then combined to find the thickness of the waist of the nerve fiber layer as a function of the angle around the center of the optic nerve head in the frontal plane. The two networks are trained on a combined data set that was captured on two separate OCT systems (spectral domain Topcon OCT 2000 and swept source Topcon OCT Triton) and annotated with a semi-automatic algorithm by up to 3 annotators. Initial results show that the automatic algorithm produces results that are comparable to those from the semi-automatic algorithm used for reference, in a fraction of the time and independent of the annotator. The automatic algorithm has the potential to replace the semi-automatic algorithm and opens the possibility of clinical routine estimation of the nerve fiber layer. This would in turn allow loss of the nerve fiber layer to be detected earlier than before, which is anticipated to be important for the detection of glaucoma.
  •  
26.
  • Chen, Heping, et al. (author)
  • Real-Time Cerebral Vessel Segmentation in Laser Speckle Contrast Image Based on Unsupervised Domain Adaptation
  • 2021
  • In: Frontiers in Neuroscience. - : Frontiers Media SA. - 1662-4548 .- 1662-453X. ; 15
  • Journal article (peer-reviewed)abstract
    • Laser speckle contrast imaging (LSCI) is a full-field, high spatiotemporal resolution and low-cost optical technique for measuring blood flow, which has been successfully used for neurovascular imaging. However, due to the low signal-to-noise ratio and the relatively small vessel sizes, segmenting the cerebral vessels in LSCI has always been a technical challenge. Recently, deep learning has shown its advantages in vascular segmentation. Nonetheless, ground truth by manual labeling is usually required for training the network, which makes it difficult to implement in practice. In this manuscript, we propose a deep learning-based method for real-time cerebral vessel segmentation of LSCI without ground truth labels, which could be further integrated into an intraoperative blood vessel imaging system. Synthetic LSCI images were obtained with a synthesis network from LSCI images and the public labeled Digital Retinal Images for Vessel Extraction dataset, and were then used to train the segmentation network. Using matching strategies to reduce the size discrepancy between retinal images and laser speckle contrast images, we could further significantly improve image synthesis and segmentation performance. On the test LSCI images of rodent cerebral vessels, the proposed method achieved a dice similarity coefficient of over 75%.
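The Dice similarity coefficient reported in this entry is a standard overlap measure between a predicted and a reference binary mask; a minimal sketch on toy flat masks (not the paper's evaluation code):

```python
# Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|).
# Masks here are toy 0/1 lists standing in for flattened binary images.

def dice(mask_a, mask_b):
    """DSC between two flat binary masks."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    size_a = sum(mask_a)
    size_b = sum(mask_b)
    if size_a + size_b == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / (size_a + size_b)

pred  = [1, 1, 0, 1, 0, 0]  # hypothetical predicted vessel mask
truth = [1, 0, 0, 1, 1, 0]  # hypothetical reference mask
print(dice(pred, truth))    # 2*2 / (3+3) ≈ 0.667
```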
  •  
27.
  • Chowdhury, Manish, et al. (author)
  • Segmentation of Cortical Bone using Fast Level Sets
  • 2017
  • In: MEDICAL IMAGING 2017. - : SPIE - International Society for Optical Engineering. - 9781510607118
  • Conference paper (peer-reviewed)abstract
    • Cortical bone plays a major role in the mechanical competence of bone, and its analysis requires accurate segmentation methods. Level set methods are among the state of the art for segmenting medical images. However, traditional implementations of this method are computationally expensive. This drawback was recently tackled through the so-called coherent propagation extension of the classical algorithm, which has decreased computation times dramatically. In this study, we assess the potential of this technique for segmenting cortical bone in interactive time in 3D images acquired through High Resolution peripheral Quantitative Computed Tomography (HR-pQCT). The obtained segmentations are used to estimate cortical thickness and cortical porosity of the investigated images, computed using sphere fitting and mathematical morphological operations, respectively. Qualitative comparison between the segmentations of our proposed algorithm and a previously published approach on six image volumes reveals superior smoothness properties of the level set approach. While the proposed method yields similar results to previous approaches in regions where the boundary between trabecular and cortical bone is well defined, it yields more stable segmentations in challenging regions, which results in more stable estimation of cortical bone parameters. The proposed technique takes a few seconds to compute, which makes it suitable for clinical settings.
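As an aside on the porosity measure mentioned in this entry: cortical porosity is commonly expressed as the fraction of pore voxels within the segmented cortical compartment. A minimal sketch with toy masks (the paper's actual morphological pipeline is not reproduced here):

```python
# Cortical porosity as pore-voxel fraction within the cortical mask.
# The 0/1 lists below are toy stand-ins for flattened HR-pQCT masks.

def cortical_porosity(cortical_mask, pore_mask):
    """Fraction of cortical-compartment voxels that are pores.

    Both inputs are flat iterables of 0/1 voxel labels; pore_mask is
    assumed to mark pores inside the region covered by cortical_mask."""
    cortical_voxels = sum(cortical_mask)
    pore_voxels = sum(c and p for c, p in zip(cortical_mask, pore_mask))
    return pore_voxels / cortical_voxels if cortical_voxels else 0.0

cortical = [1, 1, 1, 1, 0, 0, 1, 1]  # hypothetical cortical compartment
pores    = [0, 1, 0, 0, 0, 0, 1, 0]  # hypothetical pores within it
print(cortical_porosity(cortical, pores))  # 2 pores / 6 voxels ≈ 0.333
```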
  •  
28.
  • Commowick, Olivier, et al. (author)
  • Objective Evaluation of Multiple Sclerosis Lesion Segmentation using a Data Management and Processing Infrastructure
  • 2018
  • In: Scientific Reports. - : Nature Publishing Group. - 2045-2322. ; 8
  • Journal article (peer-reviewed)abstract
    • We present a study of multiple sclerosis (MS) segmentation algorithms conducted at the international MICCAI 2016 challenge. This challenge was operated using a new open-science computing infrastructure, which allowed for the automatic and independent evaluation of a large range of algorithms in a fair and completely automatic manner. This computing infrastructure was used to evaluate thirteen methods for MS lesion segmentation, exploring a broad range of state-of-the-art algorithms, against a high-quality database of 53 MS cases coming from four centers following a common definition of the acquisition protocol. Each case was annotated manually by an unprecedented number of seven different experts. The results of the challenge highlighted that automatic algorithms, including recent machine learning methods (random forests, deep learning, etc.), are still trailing human expertise on both detection and delineation criteria. In addition, we demonstrate that computing a statistically robust consensus of the algorithms performs closer to human expertise on one score (segmentation), although still trailing on detection scores.
  •  
29.
  • Fuchs, Alexander, 1985- (author)
  • Assessment of predicting blood flow and atherosclerosis in the aorta and renal arteries
  • 2020
  • Doctoral thesis (other academic/artistic)abstract
    • Cardiovascular diseases (CVD) are the most common cause of death in large parts of the world. Atherosclerosis (AS) has a major part in most CVDs. AS is a slowly developing disease which is dependent on multiple factors such as genetics and life style (food, smoking, and physical activities). AS is primarily a disease of the arterial wall and develops preferentially at certain locations (such as arterial branches and in certain vessels like the coronary arteries). The close relation between AS sites and blood flow has been well established over the years. However, due to multi-factorial causes, there exist no early prognostic tools for identifying individuals that should be treated prophylactically or followed up. The underlying aim of this thesis was to determine whether it is possible to use blood flow simulations of patient-specific cases in order to identify individuals at risk of developing AS. CT scans from patients with renal artery stenosis (RAS) were used to obtain the geometry of the affected vessels. Blood flow in original and "reconstructed" arteries was simulated. Commonly used wall shear stress (WSS) related indicators of AS were studied to assess their use as risk indicators for developing AS. Divergent results indicated an urgent need to assess the impact of simulation-related factors on the results. Altogether, blood flow in the following vessels was studied: the whole aorta with branches from the aortic arch and the abdominal aorta, the abdominal aorta as well as the renal arteries, and separately the thoracic aorta with the three main branching arteries from the aortic arch. The impact of geometrical reconstruction, employed boundary conditions (BCs), effects of flow-rate, heart-rate and models of blood viscosity as a function of local hematocrit (red blood cell, RBC, concentration) and shear-rate were studied in some detail. In addition to common WSS-related indicators, we suggested the use of endothelial activation models as a further risk indicator. The simulation data was used to extract not only the WSS-related data but also the impact of flow-rate on the extent of retrograde flow in the aorta and close to its walls. The formation of helical motion and flow instabilities (which at high flow- and heart-rate lead to turbulence) was also considered. Results: A large number of simulations (more than 100) were carried out. These simulations assessed the use of flow-rate specified BCs, pressure-based BCs or so-called windkessel (WK) outlet BCs that simulate effects of peripheral arterial compliance. The results showed high sensitivity of the flow to BCs. For example, the deceleration phase of the flow-rate is more prone to flow instabilities (as also expressed in terms of multiple inflection points in the streamwise velocity profile) as well as leading to retrograde flow. In contrast, the acceleration phase leads to uni-directional and more stable flow. As WSS unsteadiness was found to be pro-AS, it was important to assess the effect of flow-rate deceleration under physiological and pathological conditions. Peaks of retrograde flow occur at local temporal minima in flow-rate. WK BCs require ad-hoc adjusted parameters and are therefore useful only when fully patient-specific data (i.e. all information is valid for a particular patient at a particular point in time) is available. Helical flows, which are considered atheroprotective, are formed naturally, depending primarily on the geometry (due to the bends in the thoracic aorta). Helical flow was also observed in the major aortic branches. The helical motion is weaker during flow deceleration and diastole, when it may locally also change direction. Most common existing blood viscosity models are based on hematocrit and shear-rate. These models show strong variation of blood (mixture) viscosity. At high shear-rate, blood viscosity is lowest and almost constant. The impact of blood viscosity in terms of dissipation is counterbalanced by the shear-rate: at low shear-rate the blood has larger viscosity and at high shear-rate it is the opposite. Due to this effect and the temporal variations in the local flow conditions, the effect of blood rheology on the WSS indicators is weak. Tracking of blood components and clot models shows that the retrograde motion and the flow near branches may have such strong curvature that the centrifugal force can become important. This effect may lead to the transport of a thrombus from the descending aorta back to the branches of the aortic arch and could cause embolic stroke. The latter results confirm the clinical observation of the risk of stroke due to transport of emboli from the proximal part of the descending aorta upstream to the vessels branching from the aortic arch, which lead blood to the brain. Conclusions: The main reasons for not being able to propose an early predictive tool for future development of AS are four-fold:
i. At present, the mechanisms behind AS are not adequately understood to enable defining a set of parameters that are sensitive and specific enough to be predictive of its development.
ii. The lack of accurate patient-specific data (BCs) over the whole physiological "envelope" allows only a limited number of flow simulations, which may not be adequate for patient-specific predictive purposes.
iii. Current models have shortcomings with respect to material properties of blood and arterial walls (patient-specific space- and time-variations).
iv. There is a need for better simulation data processing, i.e. tools that enable deducing general predictive atherosclerotic parameters from a limited number of simulations, through e.g. extending reduced modeling and/or deep learning.
The results do show, however, that blood flow simulations may produce very useful data that enhances understanding of clinically observed processes, such as explaining helical and retrograde flows and the transport of blood components and emboli in larger arteries.
  •  
30.
  • Jimenez-del-Toro, Oscar, et al. (author)
  • Cloud-Based Evaluation of Anatomical Structure Segmentation and Landmark Detection Algorithms : VISCERAL Anatomy Benchmarks
  • 2016
  • In: IEEE Transactions on Medical Imaging. - : Institute of Electrical and Electronics Engineers (IEEE). - 0278-0062 .- 1558-254X. ; 35:11, s. 2459-2475
  • Journal article (peer-reviewed)abstract
    • Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease. Automatic tools can help automate parts of this manual process. A cloud-based evaluation framework is presented in this paper including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud where participants can only access the training data and can be run privately by the benchmark administrators to objectively compare their performance in an unseen common test set. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed with automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and Silver Corpus generated with the fusion of the participant algorithms on a larger set of non-manually-annotated medical images are available to the research community.
  •  
31.
  • Jörgens, Daniel, 1988- (author)
  • Development and application of rule- and learning-based approaches within the scope of neuroimaging : Tensor voting, tractography and machine learning
  • 2020
  • Doctoral thesis (other academic/artistic)abstract
    • The opportunity to non-invasively probe the structure and function of different parts of the human body makes medical imaging an indispensable tool in clinical diagnostics and related fields of research. Especially neuroscientists rely on modalities like structural or functional Magnetic Resonance Imaging, Computed Tomography or Positron Emission Tomography to study the human brain in vivo. But also in clinical routine, diagnosis, screening or follow-up of different pathological conditions build upon the use of neuroimaging.Computational solutions are essential for the analysis of medical images. While in the case of conventional photography the recorded signal comprises the actual image, most medical imaging devices require the reconstruction of an image from the acquired data. However, not only the image formation, but also further processing tasks to assist doctors or researchers in the interpretation of the data and eventually in subsequent decision making, rely more and more on automation. Typical tasks range from locating and measuring objects in a single patient, e.g. a particular organ, a tumour or a specific region in the brain, to comparing such measurements over time between groups consisting of large numbers of subjects. Automated solutions for these scenarios are required to model complex relations of data in the presence of acquisition noise and subject variability while assuring a tractable computational demand.Traditionally, the development of computational algorithms for medical imaging problems focused on rule-based strategies. Explicitly defined rules that encode the knowledge of the developer are characteristic for such approaches. Within the last decade, this paradigm began to change and learning-based models dramatically gained in popularity. These rely on fitting a complex model to large amounts of data samples, often annotated, which are representative for a particular problem. 
Instead of manually designing the sought-after solution, it is ‘learned from the data’. While these models have shown enormous potential, they also pose important questions for method developers. How can I get hold of enough data? How much data is enough? How can I obtain proper annotations? This thesis comprises six studies covering the development and the application of methods along the whole pipeline of medical image analysis. Studies I and II propose different extensions to the method of tensor voting to make it applicable to specific medical imaging problems. Studies III–V address the use of modern machine learning techniques, in particular neural networks, in the field of tractography. Notably, the challenge of obtaining adequately annotated data samples is a topic of Study V. In Study VI, a prospective neuroimaging study of unilateral ear canal atresia in adults is presented, covering the application of methods from data acquisition to group comparison. Overall, the compiled works contributed, in one way or the other, to the non-invasive extraction of knowledge from the human body through automated processing of medical images.
  •  
32.
  • Kirişli, Hortense, et al. (author)
  • Standardized evaluation framework for evaluating coronary artery stenosis detection, stenosis quantification and lumen segmentation algorithms in computed tomography angiography
  • 2013
  • In: Medical Image Analysis. - : Elsevier. - 1361-8415 .- 1361-8423. ; 17:8, s. 859-876
  • Journal article (peer-reviewed)abstract
    • Though conventional coronary angiography (CCA) has been the standard of reference for diagnosing coronary artery disease in the past decades, computed tomography angiography (CTA) has rapidly emerged, and is nowadays widely used in clinical practice. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of the algorithms devised to detect and quantify coronary artery stenoses, and to segment the coronary artery lumen in CTA data. The objective of this evaluation framework is to demonstrate the feasibility of dedicated algorithms to: (1) (semi-)automatically detect and quantify stenosis on CTA, in comparison with quantitative coronary angiography (QCA) and CTA consensus reading, and (2) (semi-)automatically segment the coronary lumen on CTA, in comparison with experts' manual annotation. A database consisting of 48 multicenter multivendor cardiac CTA datasets with corresponding reference standards is described and made available. The algorithms from 11 research groups were quantitatively evaluated and compared. The results show that (1) some of the current stenosis detection/quantification algorithms may be used for triage or as a second reader in clinical practice, and that (2) automatic lumen segmentation is possible with a precision similar to that obtained by experts. The framework is open for new submissions through the website, at http://coronary.bigr.nl/stenoses/.
  •  
33.
  •  
34.
  • Kisonaite, Konstancija, et al. (author)
  • Automatic estimation of the cross-sectional area of the waist of the nerve fiber layer at the optic nerve head
  • 2023
  • In: Acta Ophthalmologica. - : John Wiley & Sons. - 1755-375X .- 1755-3768.
  • Journal article (peer-reviewed)abstract
    • Purpose: Glaucoma leads to pathological loss of axons in the retinal nerve fibre layer at the optic nerve head (ONH). This study aimed to develop a strategy for estimating the cross-sectional area of the axons in the ONH, and to improve the estimation of the thickness of the nerve fibre layer compared to a method previously published by us. Methods: In the 3D-OCT image of the ONH, the central limit of the pigment epithelium and the inner limit of the retina were identified with deep learning algorithms. The minimal distance between them was estimated at equidistant angles around the circumference of the ONH, and the cross-sectional area was estimated by a computational algorithm, which was applied to 16 non-glaucomatous subjects. Results: The mean cross-sectional area of the waist of the nerve fibre layer in the ONH was 1.97 ± 0.19 mm2. The mean difference in minimal thickness of the waist of the nerve fibre layer between our previous and the current strategies was estimated as CIμ (0.95) 0 ± 1 μm (d.f. = 15). Conclusions: The developed algorithm demonstrated an undulating cross-sectional area of the nerve fibre layer at the ONH. Compared to studies using radial scans, our algorithm resulted in slightly higher values for the cross-sectional area, taking the undulations of the nerve fibre layer at the ONH into account. The new algorithm for estimating the thickness of the waist of the nerve fibre layer in the ONH yielded estimates of the same order as our previous algorithm.
  •  
35.
  • Kisonaite, Konstancija, et al. (author)
  • Estimation of the cross-sectional surface area of the waist of the nerve fiber layer at the optic nerve head
  • 2022
  • In: Progress in Biomedical Optics and Imaging. - : SPIE-Intl Soc Optical Eng.
  • Conference paper (peer-reviewed)abstract
    • Glaucoma is a global disease that leads to blindness due to pathological loss of retinal ganglion cell axons in the optic nerve head (ONH). The presented project aims at improving a computational algorithm for estimating the thickness and surface area of the waist of the nerve fiber layer in the ONH. Our currently developed deep learning algorithm addresses the need for a morphometric parameter that detects glaucomatous change earlier than current clinical follow-up methods. In 3D OCT image volumes, two different AI algorithms identify the Optic nerve head Pigment epithelium Central Limit (OPCL) and the Inner limit of the Retina Closest Point (IRCP) in a 3D grid. Our computational algorithm accounts for the undulating surface area of the waist of the ONH, as well as the waist thickness. In 16 eyes of 16 non-glaucomatous subjects aged [20;30] years, the mean difference in minimal thickness of the waist of the nerve fiber layer between our previous and the current post-processing strategies was estimated as CIμ(0.95) 0 ± 1 μm (d.f. 15). The mean surface area of the waist of the nerve fiber layer in the optic nerve head was 1.97 ± 0.19 mm2. Our computational algorithm results in slightly higher values for surface areas compared to published work but, as expected, this may be due to the surface undulations of the waist being taken into account. The thickness estimates of the waist of the ONH are of the same order as those of our previous computational algorithm.
  •  
36.
  • Kumar, Neeraj, et al. (author)
  • A Multi-Organ Nucleus Segmentation Challenge
  • 2020
  • In: IEEE Transactions on Medical Imaging. - : Institute of Electrical and Electronics Engineers (IEEE). - 0278-0062 .- 1558-254X. ; 39:5, s. 1380-1391
  • Journal article (peer-reviewed)abstract
    • Generalized nucleus segmentation techniques can contribute greatly to reducing the time to develop and validate visual biomarkers for new digital pathology datasets. We summarize the results of the MoNuSeg 2018 Challenge, whose objective was to develop generalizable nuclei segmentation techniques in digital pathology. The challenge was an official satellite event of the MICCAI 2018 conference, in which 32 teams with more than 80 participants from geographically diverse institutes participated. Contestants were given a training set of 30 images from seven organs with annotations of 21,623 individual nuclei. A test dataset of 14 images taken from seven organs, including two organs that did not appear in the training set, was released without annotations. Entries were evaluated based on the average aggregated Jaccard index (AJI) on the test set to prioritize accurate instance segmentation as opposed to mere semantic segmentation. More than half the teams that completed the challenge outperformed a previous baseline. Among the trends observed that contributed to increased accuracy were the use of color normalization as well as heavy data augmentation. Additionally, fully convolutional networks inspired by variants of U-Net, FCN, and Mask-RCNN were popularly used, typically based on ResNet or VGG base architectures. Watershed segmentation on predicted semantic segmentation maps was a popular post-processing strategy. Several of the top techniques compared favorably to an individual human annotator and can be used with confidence for nuclear morphometrics.
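The aggregated Jaccard index used to rank this challenge is an instance-level extension of the plain Jaccard index; the full AJI additionally matches predicted nuclei to ground-truth nuclei and penalizes unmatched objects. A minimal sketch of only the core overlap ratio, on toy masks:

```python
# Plain Jaccard index: |A ∩ B| / |A ∪ B| for flat binary masks.
# This is only the building block of AJI, not the full instance-matching metric.

def jaccard(mask_a, mask_b):
    """Jaccard index between two flat binary masks."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    union = sum(a or b for a, b in zip(mask_a, mask_b))
    return intersection / union if union else 1.0

pred  = [1, 1, 0, 1, 0]  # hypothetical predicted nucleus mask
truth = [1, 0, 1, 1, 0]  # hypothetical reference mask
print(jaccard(pred, truth))  # 2 / 4 = 0.5
```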
  •  
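The aggregated Jaccard index (AJI) used to rank the challenge entries extends the per-object Jaccard index by aggregating intersections and unions over matched reference/prediction pairs and adding unmatched predictions to the union as penalties. A minimal pure-Python sketch of the idea, with nuclei represented as sets of pixel indices; the function names are ours, and this is a simplification of the full AJI definition (in particular, a predicted nucleus may be matched more than once here):

```python
def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two pixel-index sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def aggregated_jaccard(ref_nuclei, pred_nuclei):
    """Simplified AJI: match each reference nucleus to the predicted nucleus
    with the highest Jaccard index, aggregate intersections and unions over
    the matches, and add unmatched predictions to the union as penalties."""
    inter_sum = union_sum = 0
    matched = set()
    for ref in map(set, ref_nuclei):
        # best-matching predicted nucleus by Jaccard index
        i, best = max(enumerate(pred_nuclei), key=lambda kv: jaccard(ref, kv[1]))
        best = set(best)
        inter_sum += len(ref & best)
        union_sum += len(ref | best)
        matched.add(i)
    for i, pred in enumerate(pred_nuclei):
        if i not in matched:
            union_sum += len(set(pred))  # false-positive nuclei count in the union
    return inter_sum / union_sum
```

Because the union accumulates the pixels of every matched pair plus all spurious predictions, over- and under-segmentation both lower the score, which is why the challenge preferred AJI over a plain semantic Jaccard index.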
37.
  • Li, Quan, et al. (author)
  • Diagnostic performance of CT-derived resting distal to aortic pressure ratio (resting Pd/Pa) vs. CT-derived fractional flow reserve (CT-FFR) in coronary lesion severity assessment
  • 2021
  • In: Annals of Translational Medicine. - : AME Publishing Company. - 2305-5839 .- 2305-5847. ; 9:17
  • Journal article (peer-reviewed)abstract
    • Background: Computed tomography-derived fractional flow reserve (CT-FFR) has emerged as a promising non-invasive substitute for invasive fractional flow reserve (FFR) measurement. Normally, CT-FFR estimates the functional significance of coronary artery disease (CAD) using a simplified total coronary resistance index (TCRI) model, yet the error or discrepancy caused by this simplified model remains unclear. Methods: A total of 20 consecutive patients with suspected CAD who underwent CTA and invasive FFR measurement were retrospectively analyzed. CT-FFR and CT-(Pd/Pa)rest values were derived from the coronary CTA images. The diagnostic performance of CT-FFR and CT-(Pd/Pa)rest was evaluated on a per-vessel level using C statistics with invasive FFR<0.80 as the reference standard. Results: Of the 25 vessels eventually analyzed, the prevalence of functionally significant CAD was 64%. The Youden index of the ROC curve indicated that the best cutoff value of invasive resting Pd/Pa for identifying functionally significant lesions was 0.945. Sensitivity, specificity, negative predictive value, positive predictive value and accuracy were 85%, 91%, 92%, 83% and 88% for CT-(Pd/Pa)rest and 85%, 58%, 69%, 78% and 72% for CT-FFR. The areas under the receiver-operating characteristic curve (AUC) for detecting functionally significant stenoses were 0.87 for CT-(Pd/Pa)rest and 0.90 for CT-FFR. Conclusions: The results of this study suggest that CT-derived resting Pd/Pa has a potential advantage over CT-FFR in triaging patients for revascularization.
  •  
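The per-vessel statistics reported above (sensitivity, specificity, negative and positive predictive value, accuracy, and the Youden index used to select the Pd/Pa cutoff) all follow from the 2x2 confusion table against the invasive FFR < 0.80 reference. A short sketch; the function name and the example counts in the usage note are ours, not taken from the study:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic statistics from a 2x2 confusion table
    (tp/fp/tn/fn = true/false positives and negatives)."""
    sens = tp / (tp + fn)            # sensitivity (true positive rate)
    spec = tn / (tn + fp)            # specificity (true negative rate)
    ppv = tp / (tp + fp)             # positive predictive value
    npv = tn / (tn + fn)             # negative predictive value
    acc = (tp + tn) / (tp + fp + tn + fn)
    youden = sens + spec - 1         # Youden's J, maximised to pick a cutoff
    return {"sensitivity": sens, "specificity": spec,
            "ppv": ppv, "npv": npv, "accuracy": acc, "youden_j": youden}
```

For hypothetical counts tp=9, fp=2, tn=8, fn=1 this gives sensitivity 0.90, specificity 0.80, accuracy 0.85 and Youden's J of 0.70; sweeping a threshold and maximising J is how the optimal resting Pd/Pa cutoff of 0.945 would be identified.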
38.
  • Lidayová, Kristína, et al. (author)
  • Coverage segmentation of 3D thin structures
  • 2015
  • In: Image Processing Theory, Tools and Applications (IPTA), 2015 International Conference on. - Piscataway, NJ : IEEE conference proceedings. - 9781479986361 ; , s. 23-28
  • Conference paper (peer-reviewed)abstract
    • We present a coverage segmentation method for extracting thin structures in three-dimensional images. The proposed method is an improved extension of our coverage segmentation method for 2D thin structures. We suggest an implementation with low memory consumption and short processing time, which makes the method applicable to real CTA data. The method needs a reliable crisp segmentation as input and uses information from linear unmixing and the crisp segmentation to create a high-resolution crisp reconstruction of the object, which can then be used as a final result, or down-sampled to a coverage segmentation at the starting image resolution. The quantitative and qualitative analyses performed confirm the excellent performance of the proposed method, both on synthetic and on real data, in particular in terms of robustness to noise.
  •  
39.
  • Lidayová, Kristína, et al. (author)
  • Fast vascular skeleton extraction algorithm
  • 2016
  • In: Pattern Recognition Letters. - : Elsevier. - 0167-8655 .- 1872-7344. ; 76, s. 67-75
  • Journal article (peer-reviewed)abstract
    • Vascular diseases are a common cause of death, particularly in developed countries. Computerized image analysis tools play a potentially important role in diagnosing and quantifying vascular pathologies. Given the size and complexity of modern angiographic data acquisition, fast, automatic and accurate vascular segmentation is a challenging task. In this paper we introduce a fully automatic high-speed vascular skeleton extraction algorithm that is intended as a first step in a complete vascular tree segmentation program. The method takes an unprocessed 3D Computed Tomography Angiography (CTA) scan as input and produces a graph in which the nodes are centrally located artery voxels and the edges represent connections between them. The algorithm works in two passes, where the first pass is designed to extract the skeleton of large arteries and the second pass focuses on smaller vascular structures. Each pass consists of three main steps. The first step sets proper parameters automatically using Gaussian curve fitting. In the second step, different filters are applied to detect voxels - nodes - that are part of arteries. In the last step, the nodes are connected in order to obtain a continuous centerline tree for the entire vasculature. Structures that do not belong to the arteries are removed in a final anatomy-based analysis. The proposed method is computationally efficient, with an average execution time of 29 s, and has been tested on a set of CTA scans of the lower limbs, achieving an average overlap rate of 97% and an average detection rate of 71%.
  •  
40.
  • Lidayová, K., et al. (author)
  • Skeleton-based fast, fully automated generation of vessel tree structure for clinical evaluation of blood vessel systems
  • 2017
  • In: Skeletonization. - : Elsevier. - 9780081012925 - 9780081012918 ; , s. 345-382
  • Book chapter (other academic/artistic)abstract
    • This chapter focuses on skeleton detection for clinical evaluation of blood vessel systems. In clinical evaluation, there is a need for fast and accurate segmentation algorithms that can reliably provide vessel measurements and additional information for clinicians to decide on a diagnosis. Since blood vessels have a characteristic tubular shape, their segmentation can be accelerated and facilitated by first identifying the rough vessel centerlines, which can be seen as a special case of image skeleton extraction. A segmentation algorithm then uses the resulting skeleton as a seed region during the segmentation. The proposed method takes an unprocessed 3D computed tomography angiography (CTA) scan as input and generates a connected graph of centrally located arterial voxels. The method works on two levels, where large arteries are captured in the first level and small arteries are added in the second. Experimental results show that the method can achieve a high overlap rate and an acceptable detection rate. The high computational efficiency of the method opens the possibility of interactive clinical use.
  •  
41.
  • Lidén, Mats, 1976-, et al. (author)
  • Machine learning slice-wise whole-lung CT emphysema score correlates with airway obstruction
  • 2024
  • In: European Radiology. - : Springer. - 0938-7994 .- 1432-1084. ; 34:1, s. 39-49
  • Journal article (peer-reviewed)abstract
    • OBJECTIVES: Quantitative CT imaging is an important emphysema biomarker, especially in smoking cohorts, but does not always correlate with radiologists' visual CT assessments. The objectives were to develop and validate a neural network-based slice-wise whole-lung emphysema score (SWES) for chest CT, to validate SWES on unseen CT data, and to compare SWES with a conventional quantitative CT method. MATERIALS AND METHODS: Separate cohorts were used for algorithm development and validation. For validation, thin-slice CT stacks from 474 participants in the prospective cross-sectional Swedish CArdioPulmonary bioImage Study (SCAPIS) were included, 395 randomly selected and 79 from an emphysema cohort. Spirometry (FEV1/FVC) and radiologists' visual emphysema scores (sum-visual) obtained at inclusion in SCAPIS were used as reference tests. SWES was compared with a commercially available quantitative emphysema scoring method (LAV950) using Pearson's correlation coefficients and receiver operating characteristic (ROC) analysis. RESULTS: SWES correlated more strongly with the visual scores than LAV950 (r = 0.78 vs. r = 0.41, p < 0.001). The area under the ROC curve for the prediction of airway obstruction was larger for SWES than for LAV950 (0.76 vs. 0.61, p = 0.007). SWES correlated more strongly with FEV1/FVC than either LAV950 or sum-visual in the full cohort (r = -0.69 vs. r = -0.49/r = -0.64, p < 0.001/p = 0.007), in the emphysema cohort (r = -0.77 vs. r = -0.69/r = -0.65, p = 0.03/p = 0.002), and in the random sample (r = -0.39 vs. r = -0.26/r = -0.25, p = 0.001/p = 0.007). CONCLUSION: The slice-wise whole-lung emphysema score (SWES) correlates better than LAV950 with radiologists' visual emphysema scores and correlates better with airway obstruction than do LAV950 and radiologists' visual scores. CLINICAL RELEVANCE STATEMENT: The slice-wise whole-lung emphysema score provides quantitative emphysema information for CT imaging that avoids the disadvantages of threshold-based scores and is correlated more strongly with reference tests than LAV950 and reader visual scores. KEY POINTS: • A slice-wise whole-lung emphysema score (SWES) was developed to quantify emphysema in chest CT images. • SWES identified visual emphysema and spirometric airflow limitation significantly better than the threshold-based score (LAV950). • SWES improved emphysema quantification in CT images, which is especially useful in large-scale research.
  •  
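The comparisons in the study above rest on Pearson's correlation coefficient between each emphysema score and the reference tests. As a reminder of what is being computed, a minimal implementation (illustrative only; the helper name is ours):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples:
    covariance of the samples divided by the product of their standard
    deviations (the n or n-1 factors cancel)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A perfectly linear increasing relation gives r = 1, a perfectly linear decreasing one gives r = -1, matching the sign convention of the negative FEV1/FVC correlations quoted above.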
42.
  • Mahbod, A., et al. (author)
  • A Two-Stage U-Net Algorithm for Segmentation of Nuclei in H&E-Stained Tissues
  • 2019
  • In: Digital Pathology. - Cham : Springer Verlag. - 9783030239367 ; , s. 75-82
  • Conference paper (peer-reviewed)abstract
    • Nuclei segmentation is an important but challenging task in the analysis of hematoxylin and eosin (H&E)-stained tissue sections. While various segmentation methods have been proposed, machine learning-based algorithms and in particular deep learning-based models have been shown to deliver better segmentation performance. In this work, we propose a novel approach to segment touching nuclei in H&E-stained microscopic images using U-Net-based models in two sequential stages. In the first stage, we perform semantic segmentation using a classification U-Net that separates nuclei from the background. In the second stage, the distance map of each nucleus is created using a regression U-Net. The final instance segmentation masks are then created using a watershed algorithm based on the distance maps. Evaluated on a publicly available dataset containing images from various human organs, the proposed algorithm achieves an average aggregate Jaccard index of 56.87%, outperforming several state-of-the-art algorithms applied on the same dataset.
  •  
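The regression target in the second stage above, a per-nucleus distance map, assigns each foreground pixel its distance to the nearest background pixel; the watershed then splits touching nuclei at the valleys between distance peaks. A minimal sketch using a 4-connected breadth-first search on a binary mask (a simplification we supply for illustration: the paper computes the distance map of each annotated nucleus, and a Euclidean distance transform would typically be used instead of this city-block one):

```python
from collections import deque

def distance_map(mask):
    """City-block distance from each foreground pixel (1) to the nearest
    background pixel (0), via multi-source BFS. `mask` is a list of rows.
    Foreground pixels unreachable from any background stay at infinity."""
    h, w = len(mask), len(mask[0])
    INF = float("inf")
    dist = [[0 if mask[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    # seed the queue with every background pixel (distance 0)
    q = deque((y, x) for y in range(h) for x in range(w) if mask[y][x] == 0)
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] > dist[y][x] + 1:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist
```

On a 3x3 nucleus surrounded by background, the border pixels receive distance 1 and the centre distance 2; watershed seeded at such maxima separates instances that a plain semantic mask would merge.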
43.
  • Mahbod, Amirreza, et al. (author)
  • Automatic brain segmentation using artificial neural networks with shape context
  • 2018
  • In: Pattern Recognition Letters. - : Elsevier. - 0167-8655 .- 1872-7344. ; 101, s. 74-79
  • Journal article (peer-reviewed)abstract
    • Segmenting brain tissue from MR scans is thought to be highly beneficial for brain abnormality diagnosis, prognosis monitoring, and treatment evaluation. Many automatic or semi-automatic methods have been proposed in the literature in order to reduce the requirement of user intervention, but the level of accuracy in most cases is still inferior to that of manual segmentation. We propose a new brain segmentation method that integrates volumetric shape models into a supervised artificial neural network (ANN) framework. This is done by running a preliminary level-set based statistical shape fitting process guided by the image intensity and then passing the signed distance maps of several key structures to the ANN as feature channels, in addition to the conventional spatial-based and intensity-based image features. The so-called shape context information is expected to help the ANN learn local adaptive classification rules instead of applying universal rules directly on the local appearance features. The proposed method was tested on a public dataset available within the open MICCAI grand challenge (MRBrainS13). The obtained average Dice coefficients were 84.78%, 88.47%, 82.76%, 95.37% and 97.73% for gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), brain (WM + GM) and intracranial volume, respectively. Compared with other methods tested on the same dataset, the proposed method achieved competitive results with a comparatively shorter training time.
  •  
44.
  • Mahbod, Amirreza, et al. (author)
  • Automatic multiple sclerosis lesion segmentation using hybrid artificial neural networks
  • 2016
  • In: MSSEG Challenge Proceedings: Multiple Sclerosis Lesions Segmentation Challenge Using a Data Management and Processing Infrastructure. ; , s. 29-36
  • Conference paper (peer-reviewed)abstract
    • Multiple sclerosis (MS) is a demyelinating disease which can cause severe motor and cognitive deterioration. Segmenting MS lesions can be highly beneficial for diagnosing, analyzing and monitoring treatment efficacy. To do so, manual segmentation performed by experts is the conventional method in hospitals and clinical environments. Although manual segmentation is accurate, it is time-consuming, expensive and might not be reliable. The aim of this work was to propose an automatic method for MS lesion segmentation and evaluate it using brain images available within the MICCAI MS segmentation challenge. The proposed method employs a supervised artificial neural network-based algorithm, exploiting intensity-based and spatial-based features as input to the network. The method achieved relatively accurate results, with acceptable training and testing times, on the training datasets.
  •  
45.
  • Mahbod, A., et al. (author)
  • Breast Cancer Histological Image Classification Using Fine-Tuned Deep Network Fusion
  • 2018
  • In: 15th International Conference on Image Analysis and Recognition, ICIAR 2018. - Cham : Springer. - 9783319929996 ; , s. 754-762
  • Conference paper (peer-reviewed)abstract
    • Breast cancer is the most common cancer type in women worldwide. Histological evaluation of breast biopsies is a challenging task even for experienced pathologists. In this paper, we propose a fully automatic method to classify breast cancer histological images into four classes, namely normal, benign, in situ carcinoma and invasive carcinoma. The proposed method takes normalized hematoxylin and eosin stained images as input and gives the final prediction by fusing the output of two residual neural networks (ResNets) of different depth. These ResNets were first pre-trained on ImageNet images and then fine-tuned on breast histological images. We found that our approach outperformed a previously published method by a large margin when applied to the BioImaging 2015 challenge dataset, yielding an accuracy of 97.22%. Moreover, the same approach provided excellent classification performance, with an accuracy of 88.50%, when applied to the ICIAR 2018 grand challenge dataset using 5-fold cross-validation.
  •  
46.
  • Mahbod, A., et al. (author)
  • Fusing fine-tuned deep features for skin lesion classification
  • 2019
  • In: Computerized Medical Imaging and Graphics. - : Elsevier. - 0895-6111 .- 1879-0771. ; 71, s. 19-29
  • Journal article (peer-reviewed)abstract
    • Malignant melanoma is one of the most aggressive forms of skin cancer. Early detection is important as it significantly improves survival rates. Consequently, accurate discrimination of malignant skin lesions from benign lesions such as seborrheic keratoses or benign nevi is crucial, and accurate computerised classification of skin lesion images is of great interest to support diagnosis. In this paper, we propose a fully automatic computerised method to classify skin lesions from dermoscopic images. Our approach is based on a novel ensemble scheme for convolutional neural networks (CNNs) that combines intra-architecture and inter-architecture network fusion. The proposed method consists of multiple sets of CNNs of different architecture that represent different feature abstraction levels. Each set of CNNs consists of a number of pre-trained networks that have identical architecture but are fine-tuned on dermoscopic skin lesion images with different settings. The deep features of each network were used to train different support vector machine classifiers. Finally, the average prediction probability classification vectors from the different sets are fused to provide the final prediction. Evaluated on the 600 test images of the ISIC 2017 skin lesion classification challenge, the proposed algorithm yields an area under the receiver operating characteristic curve of 87.3% for melanoma classification and of 95.5% for seborrheic keratosis classification, outperforming the top-ranked methods of the challenge while being simpler than them. The obtained results convincingly demonstrate that the proposed approach is a reliable and robust method for feature extraction, model fusion and classification of dermoscopic skin lesion images.
  •  
47.
  • Mahbod, Amirreza, et al. (author)
  • Investigating and Exploiting Image Resolution for Transfer Learning-based Skin Lesion Classification
  • 2021
  • In: 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR). - : IEEE Computer Society. ; , s. 4047-4053
  • Conference paper (peer-reviewed)abstract
    • Skin cancer is among the most common cancer types. Dermoscopic image analysis improves the diagnostic accuracy for detection of malignant melanoma and other pigmented skin lesions when compared to unaided visual inspection. Hence, computer-based methods to support medical experts in the diagnostic procedure are of great interest. Fine-tuning pre-trained convolutional neural networks (CNNs) has been shown to work well for skin lesion classification. Pre-trained CNNs are typically trained with natural images of a fixed image size significantly smaller than captured skin lesion images and consequently dermoscopic images are downsampled for fine-tuning. However, useful medical information may be lost during this transformation. In this paper, we explore the effect of input image size on skin lesion classification performance of fine-tuned CNNs. For this, we resize dermoscopic images to different resolutions, ranging from 64 x 64 to 768 x 768 pixels and investigate the resulting classification performance of three well-established CNNs, namely DenseNet-121, ResNet-18, and ResNet-50. Our results show that using very small images (of size 64 x 64 pixels) degrades the classification performance, while images of size 128 x 128 pixels and above support good performance with larger image sizes leading to slightly improved classification. We further propose a novel fusion approach based on a three-level ensemble strategy that exploits multiple fine-tuned networks trained with dermoscopic images at various sizes. When applied on the ISIC 2017 skin lesion classification challenge, our fusion approach yields an area under the receiver operating characteristic curve of 89.2% and 96.6% for melanoma classification and seborrheic keratosis classification, respectively, outperforming state-of-the-art algorithms.
  •  
48.
  • Mahbod, Amirreza, et al. (author)
  • SKIN LESION CLASSIFICATION USING HYBRID DEEP NEURAL NETWORKS
  • 2019
  • In: 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP). - : IEEE. - 9781479981311 ; , s. 1229-1233
  • Conference paper (peer-reviewed)abstract
    • Skin cancer is one of the major types of cancer, with an increasing incidence over the past decades. Accurately diagnosing skin lesions to discriminate between benign and malignant skin lesions is crucial to ensure appropriate patient treatment. While there are many computerised methods for skin lesion classification, convolutional neural networks (CNNs) have been shown to be superior over classical methods. In this work, we propose a fully automatic computerised method for skin lesion classification which employs optimised deep features from a number of well-established CNNs and from different abstraction levels. We use three pre-trained deep models, namely AlexNet, VGG16 and ResNet-18, as deep feature generators. The extracted features are then used to train support vector machine classifiers. In a final stage, the classifier outputs are fused to obtain the final classification. Evaluated on the 150 validation images from the ISIC 2017 classification challenge, the proposed method is shown to achieve very good classification performance, yielding an area under the receiver operating characteristic curve of 83.83% for melanoma classification and of 97.55% for seborrheic keratosis classification.
  •  
49.
  • Mahbod, A., et al. (author)
  • Transfer learning using a multi-scale and multi-network ensemble for skin lesion classification
  • 2020
  • In: Computer Methods and Programs in Biomedicine. - : Elsevier BV. - 0169-2607 .- 1872-7565. ; 193, s. 105475-
  • Journal article (peer-reviewed)abstract
    • Background and objective: Skin cancer is among the most common cancer types in the white population, and consequently computer-aided methods for skin lesion classification based on dermoscopic images are of great interest. A promising approach uses transfer learning to adapt pre-trained convolutional neural networks (CNNs) for skin lesion diagnosis. Since pre-training commonly occurs with natural images of a fixed resolution that is usually significantly smaller than dermoscopic images, downsampling or cropping of skin lesion images is required. This, however, may result in a loss of useful medical information, while the ideal resizing or cropping factor of dermoscopic images for the fine-tuning process remains unknown. Methods: We investigate the effect of image size on skin lesion classification based on pre-trained CNNs and transfer learning. Dermoscopic images from the International Skin Imaging Collaboration (ISIC) skin lesion classification challenge datasets are either resized to or cropped at six different sizes ranging from 224 × 224 to 450 × 450 pixels. The resulting classification performance of three well-established CNNs, namely EfficientNetB0, EfficientNetB1 and SE-ResNeXt-50, is explored. We also propose and evaluate a multi-scale multi-CNN (MSM-CNN) fusion approach based on a three-level ensemble strategy that utilises the three network architectures trained on cropped dermoscopic images at various scales. Results: Our results show that image cropping is a better strategy than image resizing, delivering superior classification performance at all explored image scales. Moreover, fusing the results of all three fine-tuned networks using cropped images at all six scales in the proposed MSM-CNN approach boosts the classification performance compared to a single network or a single image scale. On the ISIC 2018 skin lesion classification challenge test set, our MSM-CNN algorithm yields a balanced multi-class accuracy of 86.2%, making it the currently second-ranked algorithm on the live leaderboard. Conclusions: We confirm that image size has an effect on skin lesion classification performance when employing transfer learning of CNNs. We also show that image cropping results in better performance than image resizing. Finally, a straightforward ensembling approach that fuses the results from images cropped at six scales and three fine-tuned CNNs is shown to lead to the best classification performance.
  •  
50.
  • Maria Marreiros, Filipe Miguel, 1978-, et al. (author)
  • Non-rigid Deformation Pipeline for Compensation of Superficial Brain Shift
  • 2013
  • In: Medical Image Computing and Computer-Assisted Intervention, MICCAI 2013. - Berlin, Heidelberg : Springer Berlin/Heidelberg. - 9783642407628 - 9783642407635 ; , s. 141-148
  • Conference paper (peer-reviewed)abstract
    • The correct visualization of anatomical structures is a critical component of neurosurgical navigation systems, to guide the surgeon to the areas of interest as well as to avoid brain damage. A major challenge for neuronavigation systems is brain shift, the deformation of the exposed brain in comparison to preoperative Magnetic Resonance (MR) image sets. In this paper, a non-rigid deformation pipeline is proposed for brain shift compensation of preoperative imaging datasets, using superficial blood vessels as landmarks. The input was preoperative and intraoperative 3D image sets of superficial vessel centerlines. The intraoperative vessels (obtained using three near-infrared cameras) were registered and aligned with preoperative Magnetic Resonance Angiography vessel centerlines, using manual interaction for the rigid transformation and the Coherent Point Drift non-rigid point set registration method for the non-rigid transformation. The rigid registration transforms the intraoperative points from the camera coordinate system to the preoperative MR coordinate system, and the non-rigid registration deals with local transformations in the MR coordinate system. Finally, a new deformed volume is generated with the Thin-Plate Spline (TPS) method, using as control points the matches in the MR coordinate system found in the previous step. The method was tested on a rabbit brain exposed via craniotomy, where deformations were produced by a balloon inserted into the brain. There was a good correlation between the real state of the brain and the deformed volume obtained using the pipeline. Maximum displacements were approximately 4.0 mm for the exposed brain alone, and 6.7 mm after balloon inflation.
  •  
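The final warping step above uses thin-plate splines, whose 2D radial basis is U(r) = r² log r; the deformation at a point is an affine part plus a weighted sum of U over the control-point matches. A sketch of the radial part only (the weights here are assumed to be precomputed; solving for them and for the affine term from the control-point correspondences is omitted, and the function names are ours):

```python
import math

def tps_kernel(r):
    """2-D thin-plate spline radial basis U(r) = r^2 log r, defined as 0 at r = 0."""
    return 0.0 if r == 0.0 else r * r * math.log(r)

def tps_displacement(point, controls, weights):
    """Radial part of a TPS warp: displacement component at `point` as a
    weighted sum of U(|point - c|) over the control points c
    (affine term and weight fitting omitted for brevity)."""
    x, y = point
    return sum(w * tps_kernel(math.hypot(x - cx, y - cy))
               for w, (cx, cy) in zip(weights, controls))
```

In the full method one such interpolant is fitted per displacement component, with the weights chosen so that the warp reproduces the matched vessel-centerline displacements exactly at the control points while minimising bending energy elsewhere.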
Type of publication
journal article (52)
conference paper (35)
doctoral thesis (6)
book chapter (3)
other publication (2)
research review (1)
licentiate thesis (1)
Type of content
peer-reviewed (85)
other academic/artistic (15)
Author/Editor
Wang, Chunliang, 198 ... (71)
Smedby, Örjan, Profe ... (27)
Wang, Chunliang (25)
Smedby, Örjan (21)
Smedby, Örjan, 1956- (14)
Brusini, Irene (11)
Astaraki, Mehdi, PhD ... (10)
Toma-Daşu, Iuliana (8)
Bendazzoli, Simone (6)
Frimmel, Hans (5)
Carrizo, Garrizo (5)
Moreno, Rodrigo, 197 ... (5)
Mahbod, Amirreza (5)
Yu, Zhaohua, 1983- (4)
Kisonaite, Konstanci ... (4)
Webster, Mark (4)
Rossitti, Sandro (4)
Ormiston, John (4)
Westman, Eric (3)
Persson, Anders (3)
Wang, Chunliang, Doc ... (3)
Buizza, Giulia (3)
Lazzeroni, Marta (3)
Damberg, Peter (3)
Schaefer, G. (3)
Platten, Michael (3)
Raeme, Faisal (3)
Söderberg, Per, 1956 ... (3)
Smedby, Örjan, Profe ... (3)
Yang, Guang (2)
Piehl, Fredrik (2)
Andersson, Leif (2)
Fransson, Sven Göran (2)
Zakko, Yousuf (2)
Chowdhury, Manish (2)
Connolly, Bryan (2)
Granberg, Tobias (2)
Gustafsson, Torbjorn (2)
Schaefer, Gerald (2)
Schaap, M (2)
Ouellette, Russell (2)
Jörgens, Daniel, 198 ... (2)
Sandberg Melin, Cami ... (2)
Muehlboeck, J-Sebast ... (2)
Kisonaite, Konstanci ... (2)
Carleberg, Per (2)
Metz, C. T. (2)
Kitslaar, P. H. (2)
Orkisz, M. (2)
Krestin, G. P. (2)
University
Royal Institute of Technology (91)
Linköping University (33)
Karolinska Institutet (19)
Uppsala University (10)
Stockholm University (3)
Chalmers University of Technology (2)
University of Gothenburg (1)
Örebro University (1)
Swedish University of Agricultural Sciences (1)
Language
English (100)
Research subject (UKÄ/SCB)
Engineering and Technology (64)
Medical and Health Sciences (44)
Natural sciences (22)
Social Sciences (2)
Agricultural Sciences (1)
