SwePub
Search the SwePub database


Results list for search "WFRF:(Matuszewski Damian J.) srt2:(2021)"

Search: WFRF:(Matuszewski Damian J.) > (2021)

  • Results 1-2 of 2
1.
  • Matuszewski, Damian J., et al. (author)
  • Learning Cell Nuclei Segmentation Using Labels Generated with Classical Image Analysis Methods
  • 2021
  • In: Proceedings of the WSCG 2021. University of West Bohemia, pp. 335-338
  • Conference paper (peer-reviewed). Abstract:
    • Creating manual annotations in a large number of images is a tedious bottleneck that limits deep learning use in many applications. Here, we present a study in which we used the output of a classical image analysis pipeline as labels when training a convolutional neural network (CNN). This may not only reduce the time experts spend annotating images but may also lead to an improvement of results compared to the output from the classical pipeline used in training. In our application, i.e., cell nuclei segmentation, we generated the annotations using CellProfiler (a tool for developing classical image analysis pipelines for biomedical applications) and trained a U-Net-based CNN model on them. The best model achieved a 0.96 Dice coefficient on the segmented nuclei and a 0.84 object-wise Jaccard index, which was better than the classical method used for generating the annotations by 0.02 and 0.34, respectively. Our experimental results show that in this application, not only is such training feasible, but the deep learning segmentations are also a clear improvement compared to the output from the classical pipeline used for generating the annotations.
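The Dice coefficient reported in the abstract above measures pixel-wise overlap between a predicted and a reference segmentation mask. A minimal illustrative sketch (not the paper's evaluation code), assuming binary masks stored as NumPy arrays:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Pixel-wise Dice coefficient between two binary masks.

    Dice = 2 * |pred AND target| / (|pred| + |target|); eps avoids
    division by zero when both masks are empty.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2x3 masks: 2 overlapping foreground pixels out of 3 in each mask.
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, target), 3))  # → 0.667
```

A Dice of 1.0 means perfect overlap; the 0.96 reported above indicates near-perfect pixel-level agreement with the reference nuclei masks.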
2.
  • Matuszewski, Damian J., et al. (author)
  • TEM virus images: Benchmark dataset and deep learning classification
  • 2021
  • In: Computer Methods and Programs in Biomedicine. Elsevier. ISSN 0169-2607, EISSN 1872-7565. Vol. 209
  • Journal article (peer-reviewed). Abstract:
    • Background and Objective: To achieve the full potential of deep learning (DL) models, such as understanding the interplay between model (size), training strategy, and amount of training data, researchers and developers need access to new dedicated image datasets; i.e., annotated collections of images representing real-world problems with all their variations, complexity, limitations, and noise. Here, we present, describe and make freely available an annotated transmission electron microscopy (TEM) image dataset. It constitutes an interesting challenge for many practical applications in virology and epidemiology; e.g., virus detection, segmentation, classification, and novelty detection. We also present benchmarking results for virus detection and recognition using some of the top-performing (large and small) networks as well as a handcrafted very small network. We compare and evaluate transfer learning and training from scratch, hypothesizing that with a limited dataset, transfer learning is crucial for good performance of a large network whereas our handcrafted small network performs relatively well when training from scratch. This is one step towards understanding how much training data is needed for a given task.
      Methods: The benchmark dataset contains 1245 images of 22 virus classes. We propose a representative data split into training, validation, and test sets for this dataset. Moreover, we compare different established DL networks and present a baseline DL solution for classifying a subset of the 14 most-represented virus classes in the dataset.
      Results: Our best model, DenseNet201 pre-trained on ImageNet and fine-tuned on the training set, achieved a 0.921 F1-score and 93.1% accuracy on the proposed representative test set.
      Conclusions: Public and real biomedical datasets are an important contribution and a necessity to increase the understanding of shortcomings, requirements, and potential improvements for deep learning solutions on biomedical problems or deploying solutions in clinical settings. We compared transfer learning to learning from scratch on this dataset and hypothesize that for limited-sized datasets transfer learning is crucial for achieving good performance for large models. Last but not least, we demonstrate the importance of application knowledge in creating datasets for training DL models and analyzing their results.
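The F1-score reported in the abstract above is commonly macro-averaged over classes for multi-class benchmarks: a per-class F1 is computed from true positives, false positives, and false negatives, then averaged with equal class weight. A minimal illustrative sketch (not the paper's evaluation code), assuming integer class labels:

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    """Macro-averaged F1: per-class F1, averaged with equal class weight."""
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))  # correctly predicted as c
        fp = np.sum((y_pred == c) & (y_true != c))  # predicted c, actually other
        fn = np.sum((y_pred != c) & (y_true == c))  # actually c, predicted other
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return float(np.mean(f1s))

# Toy 3-class example with two misclassified samples.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(round(macro_f1(y_true, y_pred, 3), 3))  # → 0.656
```

Macro-averaging treats rare and common classes equally, which matters for a dataset like this one where the 22 virus classes are not uniformly represented.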
Publication type
conference paper (1)
journal article (1)
Content type
peer-reviewed (2)
Author/editor
Matuszewski, Damian ... (2)
Ranefall, Petter, 19 ... (1)
Sintorn, Ida-Maria, ... (1)
Institution
Uppsala universitet (2)
Language
English (2)
Research subject (UKÄ/SCB)
Natural sciences (2)