SwePub
Search the SwePub database


Result list for the search "WFRF:(Mahbod A.)"

Search: WFRF:(Mahbod A.)

  • Result 1-4 of 4
1.
  • Mahbod, A., et al. (author)
  • Fusing fine-tuned deep features for skin lesion classification
  • 2019
  • In: Computerized Medical Imaging and Graphics. Elsevier. ISSN 0895-6111, E-ISSN 1879-0771. Vol. 71, pp. 19-29
  • Journal article (peer-reviewed). Abstract:
    • Malignant melanoma is one of the most aggressive forms of skin cancer. Early detection is important as it significantly improves survival rates. Consequently, accurate discrimination of malignant skin lesions from benign lesions such as seborrheic keratoses or benign nevi is crucial, while accurate computerised classification of skin lesion images is of great interest to support diagnosis. In this paper, we propose a fully automatic computerised method to classify skin lesions from dermoscopic images. Our approach is based on a novel ensemble scheme for convolutional neural networks (CNNs) that combines intra-architecture and inter-architecture network fusion. The proposed method consists of multiple sets of CNNs of different architecture that represent different feature abstraction levels. Each set of CNNs consists of a number of pre-trained networks that have identical architecture but are fine-tuned on dermoscopic skin lesion images with different settings. The deep features of each network were used to train different support vector machine classifiers. Finally, the average prediction probability classification vectors from different sets are fused to provide the final prediction. Evaluated on the 600 test images of the ISIC 2017 skin lesion classification challenge, the proposed algorithm yields an area under receiver operating characteristic curve of 87.3% for melanoma classification and an area under receiver operating characteristic curve of 95.5% for seborrheic keratosis classification, outperforming the top-ranked methods of the challenge while being simpler. The obtained results convincingly demonstrate that our proposed approach represents a reliable and robust method for feature extraction, model fusion and classification of dermoscopic skin lesion images.
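A minimal, hypothetical sketch of the fusion step described in the abstract above: deep features from several fine-tuned CNNs each train a support vector machine, and the per-classifier class-probability vectors are averaged for the final prediction. The random feature arrays merely stand in for the real deep features, and nothing here is taken from the authors' implementation; only standard NumPy and scikit-learn calls are used.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)

    # Placeholder "deep features": one feature matrix per fine-tuned network.
    n_train, n_test, n_classes = 200, 50, 3
    feature_sets_train = [rng.normal(size=(n_train, 512)) for _ in range(4)]
    feature_sets_test = [rng.normal(size=(n_test, 512)) for _ in range(4)]
    y_train = rng.integers(0, n_classes, size=n_train)

    # One probability-calibrated SVM per feature set.
    probas = []
    for X_tr, X_te in zip(feature_sets_train, feature_sets_test):
        clf = make_pipeline(StandardScaler(), SVC(probability=True))
        clf.fit(X_tr, y_train)
        probas.append(clf.predict_proba(X_te))

    # Fuse by averaging the class-probability vectors across classifiers.
    fused = np.mean(probas, axis=0)
    y_pred = fused.argmax(axis=1)
    print(y_pred[:10])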
2.
  • Mahbod, A., et al. (author)
  • A Two-Stage U-Net Algorithm for Segmentation of Nuclei in H&E-Stained Tissues
  • 2019
  • In: Digital Pathology. Cham: Springer Verlag. ISBN 9783030239367. pp. 75-82
  • Conference paper (peer-reviewed). Abstract:
    • Nuclei segmentation is an important but challenging task in the analysis of hematoxylin and eosin (H&E)-stained tissue sections. While various segmentation methods have been proposed, machine learning-based algorithms and in particular deep learning-based models have been shown to deliver better segmentation performance. In this work, we propose a novel approach to segment touching nuclei in H&E-stained microscopic images using U-Net-based models in two sequential stages. In the first stage, we perform semantic segmentation using a classification U-Net that separates nuclei from the background. In the second stage, the distance map of each nucleus is created using a regression U-Net. The final instance segmentation masks are then created using a watershed algorithm based on the distance maps. Evaluated on a publicly available dataset containing images from various human organs, the proposed algorithm achieves an average aggregate Jaccard index of 56.87%, outperforming several state-of-the-art algorithms applied on the same dataset.
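A rough sketch of the watershed post-processing stage described above, with toy arrays standing in for the outputs of the two U-Nets: in the paper the foreground mask comes from the classification U-Net and the distance map from the regression U-Net, whereas here they are faked with a binary image and SciPy's distance transform purely for illustration.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.segmentation import watershed
    from skimage.feature import peak_local_max

    # Toy stand-ins for the two network outputs.
    foreground = np.zeros((64, 64), dtype=bool)
    foreground[10:30, 10:30] = True
    foreground[25:45, 25:45] = True
    distance = ndi.distance_transform_edt(foreground)  # plays the role of the predicted distance map

    # Seed markers at local maxima of the distance map, then run marker-based watershed.
    components, _ = ndi.label(foreground)
    peak_coords = peak_local_max(distance, labels=components, min_distance=5)
    markers = np.zeros_like(components)
    markers[tuple(peak_coords.T)] = np.arange(1, len(peak_coords) + 1)
    instances = watershed(-distance, markers, mask=foreground)
    print(instances.max(), "nuclei instances found")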
3.
  • Mahbod, A., et al. (author)
  • Breast Cancer Histological Image Classification Using Fine-Tuned Deep Network Fusion
  • 2018
  • In: 15th International Conference on Image Analysis and Recognition, ICIAR 2018. Cham: Springer. ISBN 9783319929996. pp. 754-762
  • Conference paper (peer-reviewed). Abstract:
    • Breast cancer is the most common cancer type in women worldwide. Histological evaluation of breast biopsies is a challenging task even for experienced pathologists. In this paper, we propose a fully automatic method to classify breast cancer histological images into four classes, namely normal, benign, in situ carcinoma and invasive carcinoma. The proposed method takes normalized hematoxylin and eosin stained images as input and gives the final prediction by fusing the output of two residual neural networks (ResNet) of different depth. These ResNets were first pre-trained on ImageNet images, and then fine-tuned on breast histological images. We found that our approach outperformed a previously published method by a large margin when applied on the BioImaging 2015 challenge dataset, yielding an accuracy of 97.22%. Moreover, the same approach provided an excellent classification performance with an accuracy of 88.50% when applied on the ICIAR 2018 grand challenge dataset using 5-fold cross-validation.
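A small illustrative sketch of the network-fusion idea in this abstract: softmax outputs of two ResNets of different depth are averaged to obtain the four-class prediction. The chosen depths (18 and 50), the untrained weights and the random input tensor are assumptions made for brevity; the paper fine-tunes ImageNet-pretrained ResNets on normalized H&E image patches.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    num_classes = 4  # normal, benign, in situ carcinoma, invasive carcinoma

    def build_resnet(constructor):
        net = constructor(weights=None)  # in practice: ImageNet weights, then fine-tuning
        net.fc = torch.nn.Linear(net.fc.in_features, num_classes)
        return net.eval()

    nets = [build_resnet(models.resnet18), build_resnet(models.resnet50)]

    x = torch.randn(1, 3, 224, 224)  # stand-in for one normalized H&E image patch
    with torch.no_grad():
        probs = torch.stack([F.softmax(net(x), dim=1) for net in nets]).mean(dim=0)
    print(probs.argmax(dim=1))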
4.
  • Mahbod, A., et al. (author)
  • Transfer learning using a multi-scale and multi-network ensemble for skin lesion classification
  • 2020
  • In: Computer Methods and Programs in Biomedicine. Elsevier BV. ISSN 0169-2607, E-ISSN 1872-7565. Vol. 193, article 105475
  • Journal article (peer-reviewed). Abstract:
    • Background and objective: Skin cancer is among the most common cancer types in the white population and consequently computer aided methods for skin lesion classification based on dermoscopic images are of great interest. A promising approach for this uses transfer learning to adapt pre-trained convolutional neural networks (CNNs) for skin lesion diagnosis. Since pre-training commonly occurs with natural images of a fixed image resolution and these training images are usually significantly smaller than dermoscopic images, downsampling or cropping of skin lesion images is required. This however may result in a loss of useful medical information, while the ideal resizing or cropping factor of dermoscopic images for the fine-tuning process remains unknown. Methods: We investigate the effect of image size for skin lesion classification based on pre-trained CNNs and transfer learning. Dermoscopic images from the International Skin Imaging Collaboration (ISIC) skin lesion classification challenge datasets are either resized to or cropped at six different sizes ranging from 224 × 224 to 450 × 450. The resulting classification performance of three well established CNNs, namely EfficientNetB0, EfficientNetB1 and SeReNeXt-50 is explored. We also propose and evaluate a multi-scale multi-CNN (MSM-CNN) fusion approach based on a three-level ensemble strategy that utilises the three network architectures trained on cropped dermoscopic images of various scales. Results: Our results show that image cropping is a better strategy compared to image resizing delivering superior classification performance at all explored image scales. Moreover, fusing the results of all three fine-tuned networks using cropped images at all six scales in the proposed MSM-CNN approach boosts the classification performance compared to a single network or a single image scale. On the ISIC 2018 skin lesion classification challenge test set, our MSM-CNN algorithm yields a balanced multi-class accuracy of 86.2% making it the currently second ranked algorithm on the live leaderboard. Conclusions: We confirm that the image size has an effect on skin lesion classification performance when employing transfer learning of CNNs. We also show that image cropping results in better performance compared to image resizing. Finally, a straightforward ensembling approach that fuses the results from images cropped at six scales and three fine-tuned CNNs is shown to lead to the best classification performance.
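A simplified sketch of the fusion logic of the MSM-CNN approach described above: per-image class probabilities from three networks, each evaluated on crops at six scales, are averaged. The probability matrices are random placeholders and the intermediate crop sizes are illustrative (only the 224 and 450 endpoints come from the abstract); the paper's three-level ensemble is reduced here to two plain averaging steps.

    import numpy as np

    rng = np.random.default_rng(0)
    n_images, n_classes = 10, 7
    networks = ["EfficientNetB0", "EfficientNetB1", "SeReNeXt-50"]
    crop_sizes = [224, 260, 300, 340, 400, 450]  # six scales, intermediate values assumed

    # predictions[network][scale] -> (n_images, n_classes) probability matrix
    predictions = {
        net: {s: rng.dirichlet(np.ones(n_classes), size=n_images) for s in crop_sizes}
        for net in networks
    }

    # First average over scales per network, then average over networks.
    per_network = {net: np.mean(list(scales.values()), axis=0) for net, scales in predictions.items()}
    fused = np.mean(list(per_network.values()), axis=0)
    print(fused.argmax(axis=1))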