SwePub
Hit list for search "WFRF:(Wang Chunliang)"


  • Results 41-50 of 99
41.
  • Lidén, Mats, 1976-, et al. (author)
  • Machine learning slice-wise whole-lung CT emphysema score correlates with airway obstruction
  • 2024
  • In: European Radiology. - : Springer. - 0938-7994 .- 1432-1084. ; 34:1, pp. 39-49
  • Journal article (peer-reviewed), abstract:
    • OBJECTIVES: Quantitative CT imaging is an important emphysema biomarker, especially in smoking cohorts, but does not always correlate to radiologists' visual CT assessments. The objectives were to develop and validate a neural network-based slice-wise whole-lung emphysema score (SWES) for chest CT, to validate SWES on unseen CT data, and to compare SWES with a conventional quantitative CT method. MATERIALS AND METHODS: Separate cohorts were used for algorithm development and validation. For validation, thin-slice CT stacks from 474 participants in the prospective cross-sectional Swedish CArdioPulmonary bioImage Study (SCAPIS) were included, 395 randomly selected and 79 from an emphysema cohort. Spirometry (FEV1/FVC) and radiologists' visual emphysema scores (sum-visual) obtained at inclusion in SCAPIS were used as reference tests. SWES was compared with a commercially available quantitative emphysema scoring method (LAV950) using Pearson's correlation coefficients and receiver operating characteristics (ROC) analysis. RESULTS: SWES correlated more strongly with the visual scores than LAV950 (r = 0.78 vs. r = 0.41, p < 0.001). The area under the ROC curve for the prediction of airway obstruction was larger for SWES than for LAV950 (0.76 vs. 0.61, p = 0.007). SWES correlated more strongly with FEV1/FVC than either LAV950 or sum-visual in the full cohort (r = -0.69 vs. r = -0.49/r = -0.64, p < 0.001/p = 0.007), in the emphysema cohort (r = -0.77 vs. r = -0.69/r = -0.65, p = 0.03/p = 0.002), and in the random sample (r = -0.39 vs. r = -0.26/r = -0.25, p = 0.001/p = 0.007). CONCLUSION: The slice-wise whole-lung emphysema score (SWES) correlates better than LAV950 with radiologists' visual emphysema scores and correlates better with airway obstruction than do LAV950 and radiologists' visual scores. CLINICAL RELEVANCE STATEMENT: The slice-wise whole-lung emphysema score provides quantitative emphysema information for CT imaging that avoids the disadvantages of threshold-based scores and is correlated more strongly with reference tests than LAV950 and reader visual scores. KEY POINTS: • A slice-wise whole-lung emphysema score (SWES) was developed to quantify emphysema in chest CT images. • SWES identified visual emphysema and spirometric airflow limitation significantly better than the threshold-based score (LAV950). • SWES improved emphysema quantification in CT images, which is especially useful in large-scale research.
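The comparisons in the entry above rest on two standard statistics, Pearson's r and the area under the ROC curve. As a minimal illustration (not the study's code), both can be computed with plain NumPy; the rank-sum identity below is the standard way to obtain AUC without tracing the curve:

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between two score vectors.
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def roc_auc(scores, labels):
    # AUC via the rank-sum (Mann-Whitney U) identity: the probability
    # that a randomly drawn positive case outranks a random negative.
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))
```

For example, `roc_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])` returns 1.0 (perfect separation), and an uninformative score gives 0.5.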
42.
  • Mahbod, A., et al. (author)
  • A Two-Stage U-Net Algorithm for Segmentation of Nuclei in H&E-Stained Tissues
  • 2019
  • In: Digital Pathology. - Cham : Springer Verlag. - 9783030239367 ; , pp. 75-82
  • Conference paper (peer-reviewed), abstract:
    • Nuclei segmentation is an important but challenging task in the analysis of hematoxylin and eosin (H&E)-stained tissue sections. While various segmentation methods have been proposed, machine learning-based algorithms and in particular deep learning-based models have been shown to deliver better segmentation performance. In this work, we propose a novel approach to segment touching nuclei in H&E-stained microscopic images using U-Net-based models in two sequential stages. In the first stage, we perform semantic segmentation using a classification U-Net that separates nuclei from the background. In the second stage, the distance map of each nucleus is created using a regression U-Net. The final instance segmentation masks are then created using a watershed algorithm based on the distance maps. Evaluated on a publicly available dataset containing images from various human organs, the proposed algorithm achieves an average aggregate Jaccard index of 56.87%, outperforming several state-of-the-art algorithms applied on the same dataset.
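The two-stage idea in this entry can be sketched with classical image-processing stand-ins: the paper predicts a distance map with a regression U-Net and floods it with a watershed, whereas the sketch below computes the distance map analytically and replaces the flooding with nearest-marker assignment. `interior_depth` is an assumed tuning parameter, not from the paper:

```python
import numpy as np
from scipy import ndimage

def split_touching(mask, interior_depth=2):
    # Split touching objects in a binary mask using a distance map.
    # Markers are the connected components of each object's "deep
    # interior"; every foreground pixel is then assigned to its
    # nearest marker (a simple stand-in for watershed flooding).
    mask = np.asarray(mask, bool)
    dist = ndimage.distance_transform_edt(mask)
    markers, n = ndimage.label(dist >= interior_depth)
    _, (ri, ci) = ndimage.distance_transform_edt(
        markers == 0, return_indices=True)
    return markers[ri, ci] * mask, n

# Two externally tangent disks: one connected component in the mask,
# but two separable "nuclei".
yy, xx = np.mgrid[0:24, 0:40]
mask = (((yy - 12) ** 2 + (xx - 13) ** 2) <= 36) | \
       (((yy - 12) ** 2 + (xx - 25) ** 2) <= 36)
labels, n = split_touching(mask)
```

Here `n` is 2 and the two disk centres receive distinct labels, even though the plain mask is a single connected blob.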
43.
  • Mahbod, Amirreza, et al. (author)
  • Automatic brain segmentation using artificial neural networks with shape context
  • 2018
  • In: Pattern Recognition Letters. - : Elsevier. - 0167-8655 .- 1872-7344. ; 101, pp. 74-79
  • Journal article (peer-reviewed), abstract:
    • Segmenting brain tissue from MR scans is thought to be highly beneficial for brain abnormality diagnosis, prognosis monitoring, and treatment evaluation. Many automatic or semi-automatic methods have been proposed in the literature in order to reduce the requirement of user intervention, but the level of accuracy in most cases is still inferior to that of manual segmentation. We propose a new brain segmentation method that integrates volumetric shape models into a supervised artificial neural network (ANN) framework. This is done by running a preliminary level-set based statistical shape fitting process guided by the image intensity and then passing the signed distance maps of several key structures to the ANN as feature channels, in addition to the conventional spatial-based and intensity-based image features. The so-called shape context information is expected to help the ANN learn local adaptive classification rules instead of applying universal rules directly on the local appearance features. The proposed method was tested on a public dataset available within the open MICCAI grand challenge (MRBrainS13). The obtained average Dice coefficients were 84.78%, 88.47%, 82.76%, 95.37% and 97.73% for gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), brain (WM + GM) and intracranial volume, respectively. Compared with other methods tested on the same dataset, the proposed method achieved competitive results with comparatively shorter training time.
44.
  • Mahbod, Amirreza, et al. (author)
  • Automatic multiple sclerosis lesion segmentation using hybrid artificial neural networks
  • 2016
  • In: MSSEG Challenge Proceedings: Multiple Sclerosis Lesions Segmentation Challenge Using a Data Management and Processing Infrastructure. ; , pp. 29-36
  • Conference paper (peer-reviewed), abstract:
    • Multiple sclerosis (MS) is a demyelinating disease which can cause severe motor and cognitive deterioration. Segmenting MS lesions can be highly beneficial for diagnosing, analyzing and monitoring treatment efficacy. Manual segmentation, performed by experts, is the conventional method in hospitals and clinical environments. Although manual segmentation is accurate, it is time-consuming, expensive and might not be reliable. The aim of this work was to propose an automatic method for MS lesion segmentation and evaluate it using brain images available within the MICCAI MS segmentation challenge. The proposed method employs a supervised artificial neural network-based algorithm, exploiting intensity-based and spatial-based features as the input of the network. This method achieved relatively accurate results with acceptable training and testing time for the training datasets.
45.
  • Mahbod, A., et al. (author)
  • Breast Cancer Histological Image Classification Using Fine-Tuned Deep Network Fusion
  • 2018
  • In: 15th International Conference on Image Analysis and Recognition, ICIAR 2018. - Cham : Springer. - 9783319929996 ; , pp. 754-762
  • Conference paper (peer-reviewed), abstract:
    • Breast cancer is the most common cancer type in women worldwide. Histological evaluation of breast biopsies is a challenging task even for experienced pathologists. In this paper, we propose a fully automatic method to classify breast cancer histological images into four classes, namely normal, benign, in situ carcinoma and invasive carcinoma. The proposed method takes normalized hematoxylin and eosin stained images as input and gives the final prediction by fusing the output of two residual neural networks (ResNet) of different depth. These ResNets were first pre-trained on ImageNet images, and then fine-tuned on breast histological images. We found that our approach outperformed a previously published method by a large margin when applied to the BioImaging 2015 challenge dataset, yielding an accuracy of 97.22%. Moreover, the same approach provided an excellent classification performance with an accuracy of 88.50% when applied to the ICIAR 2018 grand challenge dataset using 5-fold cross-validation.
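The fusion step described in this entry, averaging the outputs of two networks, reduces to a few lines. The probability vectors below are made up for illustration; the class order (normal, benign, in situ, invasive) follows the abstract:

```python
import numpy as np

def fuse_predictions(prob_a, prob_b):
    # Late fusion: average the per-class probability vectors of two
    # models, then take the argmax as the final class index.
    avg = (np.asarray(prob_a, float) + np.asarray(prob_b, float)) / 2.0
    return avg, int(np.argmax(avg))

# Hypothetical outputs of the two fine-tuned ResNets for one image,
# over the classes (normal, benign, in situ, invasive).
prob_net1 = [0.05, 0.15, 0.60, 0.20]
prob_net2 = [0.10, 0.10, 0.45, 0.35]
avg, cls = fuse_predictions(prob_net1, prob_net2)  # cls -> 2 ("in situ")
```

Averaging probabilities (rather than hard votes) lets a confident model outweigh an uncertain one, which is why this simple scheme works well for ensembles of comparable networks.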
46.
  • Mahbod, A., et al. (author)
  • Fusing fine-tuned deep features for skin lesion classification
  • 2019
  • In: Computerized Medical Imaging and Graphics. - : Elsevier. - 0895-6111 .- 1879-0771. ; 71, pp. 19-29
  • Journal article (peer-reviewed), abstract:
    • Malignant melanoma is one of the most aggressive forms of skin cancer. Early detection is important as it significantly improves survival rates. Consequently, accurate discrimination of malignant skin lesions from benign lesions such as seborrheic keratoses or benign nevi is crucial, while accurate computerised classification of skin lesion images is of great interest to support diagnosis. In this paper, we propose a fully automatic computerised method to classify skin lesions from dermoscopic images. Our approach is based on a novel ensemble scheme for convolutional neural networks (CNNs) that combines intra-architecture and inter-architecture network fusion. The proposed method consists of multiple sets of CNNs of different architecture that represent different feature abstraction levels. Each set of CNNs consists of a number of pre-trained networks that have identical architecture but are fine-tuned on dermoscopic skin lesion images with different settings. The deep features of each network were used to train different support vector machine classifiers. Finally, the average prediction probability classification vectors from different sets are fused to provide the final prediction. Evaluated on the 600 test images of the ISIC 2017 skin lesion classification challenge, the proposed algorithm yields an area under the receiver operating characteristic curve of 87.3% for melanoma classification and of 95.5% for seborrheic keratosis classification, outperforming the top-ranked methods of the challenge while being simpler. The obtained results convincingly demonstrate that our proposed approach represents a reliable and robust method for feature extraction, model fusion and classification of dermoscopic skin lesion images.
47.
  • Mahbod, Amirreza, et al. (author)
  • Investigating and Exploiting Image Resolution for Transfer Learning-based Skin Lesion Classification
  • 2021
  • In: 2020 25th International Conference on Pattern Recognition (ICPR). - : IEEE Computer Society. ; , pp. 4047-4053
  • Conference paper (peer-reviewed), abstract:
    • Skin cancer is among the most common cancer types. Dermoscopic image analysis improves the diagnostic accuracy for detection of malignant melanoma and other pigmented skin lesions when compared to unaided visual inspection. Hence, computer-based methods to support medical experts in the diagnostic procedure are of great interest. Fine-tuning pre-trained convolutional neural networks (CNNs) has been shown to work well for skin lesion classification. Pre-trained CNNs are typically trained with natural images of a fixed image size significantly smaller than captured skin lesion images and consequently dermoscopic images are downsampled for fine-tuning. However, useful medical information may be lost during this transformation. In this paper, we explore the effect of input image size on skin lesion classification performance of fine-tuned CNNs. For this, we resize dermoscopic images to different resolutions, ranging from 64 x 64 to 768 x 768 pixels and investigate the resulting classification performance of three well-established CNNs, namely DenseNet-121, ResNet-18, and ResNet-50. Our results show that using very small images (of size 64 x 64 pixels) degrades the classification performance, while images of size 128 x 128 pixels and above support good performance with larger image sizes leading to slightly improved classification. We further propose a novel fusion approach based on a three-level ensemble strategy that exploits multiple fine-tuned networks trained with dermoscopic images at various sizes. When applied on the ISIC 2017 skin lesion classification challenge, our fusion approach yields an area under the receiver operating characteristic curve of 89.2% and 96.6% for melanoma classification and seborrheic keratosis classification, respectively, outperforming state-of-the-art algorithms.
48.
  • Mahbod, Amirreza, et al. (author)
  • Skin Lesion Classification Using Hybrid Deep Neural Networks
  • 2019
  • In: 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). - : IEEE. - 9781479981311 ; , pp. 1229-1233
  • Conference paper (peer-reviewed), abstract:
    • Skin cancer is one of the major types of cancers with an increasing incidence over the past decades. Accurately diagnosing skin lesions to discriminate between benign and malignant skin lesions is crucial to ensure appropriate patient treatment. While there are many computerised methods for skin lesion classification, convolutional neural networks (CNNs) have been shown to be superior over classical methods. In this work, we propose a fully automatic computerised method for skin lesion classification which employs optimised deep features from a number of well-established CNNs and from different abstraction levels. We use three pre-trained deep models, namely AlexNet, VGG16 and ResNet-18, as deep feature generators. The extracted features are then used to train support vector machine classifiers. In a final stage, the classifier outputs are fused to obtain a classification. Evaluated on the 150 validation images from the ISIC 2017 classification challenge, the proposed method is shown to achieve very good classification performance, yielding an area under receiver operating characteristic curve of 83.83% for melanoma classification and of 97.55% for seborrheic keratosis classification.
49.
  • Mahbod, A., et al. (author)
  • Transfer learning using a multi-scale and multi-network ensemble for skin lesion classification
  • 2020
  • In: Computer Methods and Programs in Biomedicine. - : Elsevier BV. - 0169-2607 .- 1872-7565. ; 193, pp. 105475-
  • Journal article (peer-reviewed), abstract:
    • Background and objective: Skin cancer is among the most common cancer types in the white population and consequently computer aided methods for skin lesion classification based on dermoscopic images are of great interest. A promising approach for this uses transfer learning to adapt pre-trained convolutional neural networks (CNNs) for skin lesion diagnosis. Since pre-training commonly occurs with natural images of a fixed image resolution and these training images are usually significantly smaller than dermoscopic images, downsampling or cropping of skin lesion images is required. This, however, may result in a loss of useful medical information, while the ideal resizing or cropping factor of dermoscopic images for the fine-tuning process remains unknown. Methods: We investigate the effect of image size for skin lesion classification based on pre-trained CNNs and transfer learning. Dermoscopic images from the International Skin Imaging Collaboration (ISIC) skin lesion classification challenge datasets are either resized to or cropped at six different sizes ranging from 224 × 224 to 450 × 450. The resulting classification performance of three well-established CNNs, namely EfficientNetB0, EfficientNetB1 and SeReNeXt-50, is explored. We also propose and evaluate a multi-scale multi-CNN (MSM-CNN) fusion approach based on a three-level ensemble strategy that utilises the three network architectures trained on cropped dermoscopic images of various scales. Results: Our results show that image cropping is a better strategy compared to image resizing, delivering superior classification performance at all explored image scales. Moreover, fusing the results of all three fine-tuned networks using cropped images at all six scales in the proposed MSM-CNN approach boosts the classification performance compared to a single network or a single image scale. On the ISIC 2018 skin lesion classification challenge test set, our MSM-CNN algorithm yields a balanced multi-class accuracy of 86.2%, making it the currently second ranked algorithm on the live leaderboard. Conclusions: We confirm that the image size has an effect on skin lesion classification performance when employing transfer learning of CNNs. We also show that image cropping results in better performance compared to image resizing. Finally, a straightforward ensembling approach that fuses the results from images cropped at six scales and three fine-tuned CNNs is shown to lead to the best classification performance.
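The crop-versus-resize distinction at the heart of this entry is easy to make concrete: cropping keeps native pixels, while resizing interpolates fine detail away. A minimal sketch with toy data, not the paper's pipeline:

```python
import numpy as np

def center_crop(img, size):
    # Take a size x size patch from the image centre at native
    # resolution; unlike resizing, no pixels are resampled.
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

img = np.arange(100).reshape(10, 10)   # toy 10 x 10 "dermoscopic image"
patch = center_crop(img, 4)            # 4 x 4 patch of original pixels
```

In the paper's setting the crop size would match the CNN's expected input (e.g. 224 × 224), so the network sees unaltered lesion texture instead of an interpolated thumbnail.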
50.
  • Maria Marreiros, Filipe Miguel, 1978-, et al. (author)
  • Non-rigid Deformation Pipeline for Compensation of Superficial Brain Shift
  • 2013
  • In: Medical Image Computing and Computer-Assisted Intervention, MICCAI 2013. - Berlin, Heidelberg : Springer Berlin/Heidelberg. - 9783642407628 - 9783642407635 ; , pp. 141-148
  • Conference paper (peer-reviewed), abstract:
    • The correct visualization of anatomical structures is a critical component of neurosurgical navigation systems, to guide the surgeon to the areas of interest as well as to avoid brain damage. A major challenge for neuronavigation systems is the brain shift, or deformation of the exposed brain in comparison to preoperative Magnetic Resonance (MR) image sets. In this paper, a non-rigid deformation pipeline is proposed for brain shift compensation of preoperative imaging datasets using superficial blood vessels as landmarks. The input was preoperative and intraoperative 3D image sets of superficial vessel centerlines. The intraoperative vessels (obtained using 3 Near-Infrared cameras) were registered and aligned with preoperative Magnetic Resonance Angiography vessel centerlines using manual interaction for the rigid transformation and, for the non-rigid transformation, the non-rigid point set registration method Coherent Point Drift. The rigid registration transforms the intraoperative points from the camera coordinate system to the preoperative MR coordinate system, and the non-rigid registration deals with local transformations in the MR coordinate system. Finally, the generation of a new deformed volume is achieved with the Thin-Plate Spline (TPS) method using as control points the matches in the MR coordinate system found in the previous step. The method was tested on a rabbit brain exposed via craniotomy, where deformations were produced by a balloon inserted into the brain. There was a good correlation between the real state of the brain and the deformed volume obtained using the pipeline. Maximum displacements were approximately 4.0 mm for the exposed brain alone, and 6.7 mm after balloon inflation.
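The final TPS warping step described above has an off-the-shelf counterpart in SciPy's `RBFInterpolator` with a thin-plate-spline kernel. The 2D points below are synthetic stand-ins for the paper's 3D vessel-centerline matches, not data from the study:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Matched control points (preoperative -> intraoperative positions).
src = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.], [0.5, 0.5]])
dst = src.copy()
dst[4] += [0.1, 0.1]          # push the central landmark

# Thin-plate-spline interpolant mapping source space to target space;
# with zero smoothing it reproduces the control points exactly and
# deforms everything in between smoothly.
warp = RBFInterpolator(src, dst, kernel='thin_plate_spline')
moved = warp(np.array([[0.5, 0.5], [0.25, 0.25]]))
```

Applying `warp` to every voxel coordinate of a preoperative volume (or, in practice, to the inverse mapping on the target grid) yields the deformed volume; the displacement decays smoothly with distance from the moved landmark.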
Publication type
journal article (51)
conference paper (35)
doctoral thesis (6)
book chapter (3)
other publication (2)
research review (1)
licentiate thesis (1)
Content type
peer-reviewed (84)
other academic/artistic (15)
Author/editor
Wang, Chunliang, 198 ... (70)
Smedby, Örjan, Profe ... (27)
Wang, Chunliang (25)
Smedby, Örjan (21)
Smedby, Örjan, 1956- (14)
Brusini, Irene (11)
Astaraki, Mehdi, PhD ... (10)
Toma-Daşu, Iuliana (8)
Bendazzoli, Simone (6)
Frimmel, Hans (5)
Carrizo, Garrizo (5)
Moreno, Rodrigo, 197 ... (5)
Mahbod, Amirreza (5)
Yu, Zhaohua, 1983- (4)
Kisonaite, Konstanci ... (4)
Webster, Mark (4)
Rossitti, Sandro (4)
Ormiston, John (4)
Westman, Eric (3)
Persson, Anders (3)
Wang, Chunliang, Doc ... (3)
Buizza, Giulia (3)
Lazzeroni, Marta (3)
Damberg, Peter (3)
Schaefer, G. (3)
Platten, Michael (3)
Raeme, Faisal (3)
Söderberg, Per, 1956 ... (3)
Smedby, Örjan, Profe ... (3)
Yang, Guang (2)
Piehl, Fredrik (2)
Andersson, Leif (2)
Fransson, Sven Göran (2)
Zakko, Yousuf (2)
Chowdhury, Manish (2)
Connolly, Bryan (2)
Granberg, Tobias (2)
Gustafsson, Torbjorn (2)
Schaefer, Gerald (2)
Schaap, M (2)
Ouellette, Russell (2)
Jörgens, Daniel, 198 ... (2)
Sandberg Melin, Cami ... (2)
Muehlboeck, J-Sebast ... (2)
Kisonaite, Konstanci ... (2)
Carleberg, Per (2)
Metz, C. T. (2)
Kitslaar, P. H. (2)
Orkisz, M. (2)
Krestin, G. P. (2)
University
Kungliga Tekniska Högskolan (90)
Linköpings universitet (33)
Karolinska Institutet (19)
Uppsala universitet (10)
Stockholms universitet (3)
Chalmers tekniska högskola (2)
Göteborgs universitet (1)
Örebro universitet (1)
Sveriges Lantbruksuniversitet (1)
Language
English (99)
Research subject (UKÄ/SCB)
Engineering and Technology (63)
Medical and Health Sciences (44)
Natural Sciences (22)
Social Sciences (2)
Agricultural Sciences (1)
