SwePub
Search the SwePub database


Results for the search "WFRF:(Koriakina Nadezhda 1991 )"


  • Results 1-6 of 6
1.
  • Andersson, Axel, et al. (author)
  • End-to-end Multiple Instance Learning with Gradient Accumulation
  • 2022
  • In: 2022 IEEE International Conference on Big Data (Big Data). Institute of Electrical and Electronics Engineers (IEEE). ISBN 9781665480451, 9781665480468. pp. 2742-2746
  • Conference paper (peer-reviewed). Abstract:
    • Being able to learn on weakly labeled data and provide interpretability are two of the main reasons why attention-based deep multiple instance learning (ABMIL) methods have become particularly popular for classification of histopathological images. Such image data usually come in the form of gigapixel-sized whole-slide images (WSI) that are cropped into smaller patches (instances). However, the sheer volume of the data poses a practical big data challenge: all the instances from one WSI cannot fit into the GPU memory of conventional deep-learning models. Existing solutions compromise training by relying on pre-trained models, strategic selection of instances, sub-sampling, or self-supervised pre-training. We propose a training strategy based on gradient accumulation that enables direct end-to-end training of ABMIL models without being limited by GPU memory. We conduct experiments on both QMNIST and Imagenette to investigate the performance and training time, and compare with the conventional memory-expensive baseline as well as a recent sampling-based approach. This memory-efficient approach, although slower, reaches performance indistinguishable from the memory-expensive baseline. (A hedged code sketch of such chunked gradient accumulation follows this record.)
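The record above trains attention-based multiple instance learning (ABMIL) end to end under a GPU memory budget by accumulating gradients rather than holding an entire bag in memory at once. The following is a minimal PyTorch sketch of one way such chunked accumulation can work: instance embeddings are first computed chunk by chunk without a graph, the attention head is updated on the cached embeddings, and the cached embedding gradients are then streamed back through the encoder chunk by chunk. The names (InstanceEncoder, AttentionHead, train_step), the chunk size, and this particular two-pass scheme are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn


class InstanceEncoder(nn.Module):
    """Per-instance feature extractor (a small MLP stands in for a CNN backbone)."""
    def __init__(self, in_dim=784, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))

    def forward(self, x):
        return self.net(x)


class AttentionHead(nn.Module):
    """ABMIL-style attention pooling over instance embeddings plus a bag classifier."""
    def __init__(self, emb_dim=128, att_dim=64, n_classes=2):
        super().__init__()
        self.att = nn.Sequential(nn.Linear(emb_dim, att_dim), nn.Tanh(), nn.Linear(att_dim, 1))
        self.cls = nn.Linear(emb_dim, n_classes)

    def forward(self, h):                          # h: (n_instances, emb_dim)
        a = torch.softmax(self.att(h), dim=0)      # attention weights, (n_instances, 1)
        z = (a * h).sum(dim=0)                     # bag embedding, (emb_dim,)
        return self.cls(z), a


def train_step(encoder, head, optimizer, bag_x, bag_y, chunk=256):
    """Train on one bag while keeping at most `chunk` instances in a graph at a time."""
    optimizer.zero_grad()

    # Pass 1: embed all instances chunk by chunk without building a graph.
    with torch.no_grad():
        h = torch.cat([encoder(c) for c in bag_x.split(chunk)])
    h.requires_grad_(True)                         # leaf tensor standing in for encoder output

    # Small graph: attention pooling + classifier over the cached embeddings.
    logits, _ = head(h)
    loss = nn.functional.cross_entropy(logits.unsqueeze(0), bag_y.view(1))
    loss.backward()                                # fills head gradients and h.grad = dL/dh_i

    # Pass 2: stream the chunks through the encoder again, now with a graph, and
    # backpropagate the cached dL/dh_i; encoder gradients accumulate chunk by chunk.
    for xc, gc in zip(bag_x.split(chunk), h.grad.split(chunk)):
        encoder(xc).backward(gc)

    optimizer.step()
    return loss.item()


# Hypothetical usage: one bag of 1000 instances with 784 features and bag label 1.
encoder, head = InstanceEncoder(), AttentionHead()
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
print(train_step(encoder, head, opt, torch.randn(1000, 784), torch.tensor(1)))
```

The design trades extra computation (each instance passes through the encoder twice) for a memory footprint bounded by the chunk size, which is consistent with the abstract's observation that the memory-efficient route is slower but not less accurate.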
2.
  •  
3.
  • Koriakina, Nadezhda, 1991-, et al. (author)
  • Deep multiple instance learning versus conventional deep single instance learning for interpretable oral cancer detection
  • 2024
  • In: PLOS ONE. Public Library of Science (PLoS). ISSN 1932-6203. Vol. 19, no. 4 (April)
  • Journal article (peer-reviewed). Abstract:
    • The current medical standard for setting an oral cancer (OC) diagnosis is histological examination of a tissue sample taken from the oral cavity. This process is time-consuming and more invasive than an alternative approach of acquiring a brush sample followed by cytological analysis. Using a microscope, skilled cytotechnologists are able to detect changes due to malignancy; however, introducing this approach into clinical routine is associated with challenges such as a lack of resources and experts. To design a trustworthy OC detection system that can assist cytotechnologists, we are interested in deep learning based methods that can reliably detect cancer, given only per-patient labels (thereby minimizing annotation bias), and also provide information regarding which cells are most relevant for the diagnosis (thereby enabling supervision and understanding). In this study, we perform a comparison of two approaches suitable for OC detection and interpretation: (i) a conventional single instance learning (SIL) approach and (ii) a modern multiple instance learning (MIL) method. To facilitate systematic evaluation of the considered approaches, we, in addition to a real OC dataset with patient-level ground truth annotations, also introduce a synthetic dataset, PAP-QMNIST. This dataset shares several properties of OC data, such as image size and a large and varied number of instances per bag, and may therefore act as a proxy model of a real OC dataset, while, in contrast to OC data, it offers reliable per-instance ground truth, as defined by design. PAP-QMNIST has the additional advantage of being visually interpretable for non-experts, which simplifies analysis of the behavior of methods. For both OC and PAP-QMNIST data, we evaluate the performance of the methods utilizing three different neural network architectures. Our study indicates, somewhat surprisingly, that on both synthetic and real data, the performance of the SIL approach is better than or equal to the performance of the MIL approach. Visual examination by a cytotechnologist indicates that the methods manage to identify cells which deviate from normality, including malignant cells as well as those suspicious for dysplasia. We share the code as open source. (A hedged sketch contrasting instance-level and bag-level use of the patient labels follows this record.)
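The comparison in the record above hinges on how per-patient labels are used: in single instance learning (SIL) every cell inherits its patient's label and the patient-level decision is formed by aggregating per-cell predictions, while in multiple instance learning the whole bag of cells is classified at once (as in the ABMIL sketch under record 1). The PyTorch fragment below illustrates only the SIL side; the toy classifier, the 80x80 crop size taken from the related records, and the mean-probability aggregation rule are assumptions rather than the study's actual architectures and aggregation choice.

```python
import torch
import torch.nn as nn

# Toy per-cell classifier; the study evaluates several CNN architectures instead.
cell_model = nn.Sequential(nn.Flatten(), nn.Linear(80 * 80, 2))


def sil_targets(cells, patient_label):
    """Single instance learning: every cell crop inherits its patient's label."""
    return torch.full((cells.shape[0],), int(patient_label), dtype=torch.long)


def sil_patient_prediction(cells):
    """Aggregate per-cell class probabilities into one patient-level decision.
    Mean pooling over cells is one plausible rule; the exact rule used in the
    study should be taken from the paper."""
    with torch.no_grad():
        probs = torch.softmax(cell_model(cells), dim=1)
    return probs.mean(dim=0).argmax().item()


# Hypothetical bag: 500 greyscale 80x80 cell crops from one patient labelled abnormal (1).
cells = torch.rand(500, 1, 80, 80)
targets = sil_targets(cells, patient_label=1)     # shape (500,), all ones
decision = sil_patient_prediction(cells)          # patient-level class index
```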
4.
  • Koriakina, Nadezhda, 1991-, et al. (author)
  • The Effect of Within-Bag Sampling on End-to-End Multiple Instance Learning
  • 2021
  • In: Proceedings of the 12th International Symposium on Image and Signal Processing and Analysis (ISPA). Institute of Electrical and Electronics Engineers (IEEE). ISBN 9781665426404, 9781665426398. pp. 183-188
  • Conference paper (peer-reviewed). Abstract:
    • End-to-end multiple instance learning (MIL) is an important concept with a wide range of applications. It is gaining increased popularity in the (bio)medical imaging community since it may provide a possibility to obtain more fine-grained information while relying only on weak labels assigned to large regions. However, processing very large bags in end-to-end MIL is problematic due to computer memory constraints. We propose within-bag sampling as one way of applying end-to-end MIL methods to very large data. We explore how different levels of sampling affect the performance of a well-known high-performing end-to-end attention-based MIL method, to understand the conditions under which sampling can be utilized. We compose two new datasets tailored for the purpose of the study, and propose a strategy for sampling during MIL inference to arrive at reliable bag labels as well as instance-level attention weights. We perform experiments without and with different levels of sampling, on the two publicly available datasets, and for a range of learning settings. We observe that in most situations the proposed bag-level sampling can be applied to end-to-end MIL without performance loss, supporting its confident usage to enable end-to-end MIL also in scenarios with very large bags. We share the code as open source at https://github.com/MIDA-group/SampledABMIL (an illustrative within-bag sampling sketch follows this record).
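The record above proposes within-bag sampling so that only a subset of a very large bag has to pass through an end-to-end MIL model at any one time, together with a sampling strategy at inference that still yields reliable bag labels and instance-level attention. Below is a hedged illustration of that general idea: a tiny stand-in ABMIL module, uniform sampling without replacement, and averaging of bag probabilities and per-instance attention over several inference rounds. The sample size, the number of rounds, and the averaging rule are assumptions; the paper's own inference strategy is defined in the publication and the linked repository.

```python
import torch
import torch.nn as nn


class TinyABMIL(nn.Module):
    """Minimal stand-in for an attention-based MIL model: returns bag logits and
    per-instance attention weights. Only used to demonstrate the sampling loop."""
    def __init__(self, in_dim=64, hidden=32, n_classes=2):
        super().__init__()
        self.emb = nn.Linear(in_dim, hidden)
        self.att = nn.Linear(hidden, 1)
        self.cls = nn.Linear(hidden, n_classes)

    def forward(self, x):                          # x: (n_instances, in_dim)
        h = torch.tanh(self.emb(x))
        a = torch.softmax(self.att(h), dim=0)
        return self.cls((a * h).sum(dim=0)), a


def sample_within_bag(bag, n_samples):
    """Keep a random subset of instances (without replacement)."""
    idx = torch.randperm(bag.shape[0])[:n_samples]
    return bag[idx], idx


def predict_with_sampling(model, bag, n_samples=256, n_repeats=10):
    """Repeat within-bag sampling at inference, average the bag probabilities,
    and average each instance's attention over the rounds in which it was drawn."""
    probs = []
    att_sum = torch.zeros(bag.shape[0])
    counts = torch.zeros(bag.shape[0])
    with torch.no_grad():
        for _ in range(n_repeats):
            sub, idx = sample_within_bag(bag, n_samples)
            logits, att = model(sub)
            probs.append(torch.softmax(logits, dim=-1))
            att_sum[idx] += att.squeeze(1)
            counts[idx] += 1
    return torch.stack(probs).mean(dim=0), att_sum / counts.clamp(min=1)


# Hypothetical very large bag of 5000 instances with 64 features each.
bag = torch.randn(5000, 64)
bag_probs, instance_attention = predict_with_sampling(TinyABMIL(), bag)
```

During training, the same sample_within_bag call would simply be applied to each bag before the forward and backward pass, which is what keeps the memory footprint independent of the full bag size.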
5.
  • Koriakina, Nadezhda, 1991-, et al. (author)
  • Uncovering hidden reasoning of convolutional neural networks in biomedical image classification by using attribution methods
  • 2020
  • In: 4th NEUBIAS Conference, Bordeaux, France.
  • Conference paper (other academic/artistic). Abstract:
    • Convolutional neural networks (CNNs) are very popular in biomedical image processing and analysis, due to their impressive performance on numerous tasks. However, the performance comes at the cost of limited interpretability, which may harm users' trust in the methods and their results. Robust and trustworthy methods are particularly in demand in the medical domain due to the sensitivity of the matter. There is limited understanding of what CNNs base their decisions on and, in particular, how their performance is related to what they are paying attention to. In this study, we utilize popular attribution methods with the aim of exploring relations between properties of a network's attention and its accuracy and certainty in classification. The intuition is that, in order for a network to make good decisions, it has to be consistent in what it draws attention to. We take a step towards understanding CNNs' behavior by identifying a relation between model performance and the variability of the attention maps. We study two biomedical datasets and two commonly used architectures. We train several identical models of the same architecture on the given data; these identical models differ due to the stochasticity of initialization and training. We analyse the variability of the predictions from such collections of networks, where we observe all the network instances and their classifications independently. We utilize Gradient-weighted Class Activation Mapping (Grad-CAM) and Layer-wise Relevance Propagation (LRP), frequently employed attribution methods, for the activation analysis. Given a collection of trained CNNs, we compute, for each image of the test set: (i) the mean and standard deviation (SD) of the accuracy over the networks in the collection; (ii) the mean and SD of the respective attention maps. We plot these measures against each other for the different combinations of network architectures and datasets, in order to expose possible relations between them. Our results reveal that there exists a relation between the variability of accuracy for collections of identical models and the variability of the corresponding attention maps, and that this relation is consistent among the considered combinations of datasets and architectures. We observe that the aggregated standard deviation of attention maps has a quadratic relation to the average accuracy of the sets of models and a linear relation to the standard deviation of accuracy. Motivated by these results, we are also performing subsequent experiments to reveal the relation between the score and the attention, as well as to understand the impact of different images on the prediction, by using the mentioned statistics for each image and clustering techniques. These constitute important steps towards improved explainability and a generally clearer picture of the decision-making process of CNNs for biomedical data. (A hedged sketch of these per-image statistics over a model collection follows this record.)
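The record above relates, per test image, the accuracy statistics of a collection of identically trained CNNs to the variability of their attribution maps. The sketch below shows that bookkeeping for a single image: correctness and an attribution map are computed per model, then their mean and standard deviation over the collection. A plain input-gradient saliency map stands in for Grad-CAM and LRP so the sketch stays dependency-free, and the toy architecture and the way the per-pixel SD map is aggregated into one number are assumptions.

```python
import torch
import torch.nn as nn


def saliency_map(model, image, target_class):
    """Gradient of the target-class score w.r.t. the input pixels; a simple stand-in
    for the Grad-CAM / LRP attributions used in the study. (Parameter gradients also
    accumulate here, which is harmless for this demonstration.)"""
    image = image.clone().requires_grad_(True)
    model(image.unsqueeze(0))[0, target_class].backward()
    return image.grad.abs().sum(dim=0)             # (H, W) map, channels summed


def per_image_statistics(models, image, label):
    """Mean/SD of correctness and of the attribution maps over a model collection,
    computed for a single test image with known label."""
    correct, maps = [], []
    for m in models:
        m.eval()
        with torch.no_grad():
            pred = m(image.unsqueeze(0)).argmax(dim=1).item()
        correct.append(float(pred == label))
        maps.append(saliency_map(m, image, label))
    correct, maps = torch.tensor(correct), torch.stack(maps)   # (n,), (n, H, W)
    return {
        "acc_mean": correct.mean().item(),
        "acc_sd": correct.std().item(),
        "map_mean": maps.mean(dim=0),                          # per-pixel mean map
        "map_sd": maps.std(dim=0).mean().item(),               # aggregated per-pixel SD
    }


# Hypothetical collection of five identically constructed (but independently
# initialized) small CNNs, evaluated on one 28x28 greyscale test image.
models = [nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
          for _ in range(5)]
stats = per_image_statistics(models, torch.rand(1, 28, 28), label=0)
```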
6.
  • Koriakina, Nadezhda, 1991-, et al. (author)
  • Visualization of convolutional neural network class activations in automated oral cancer detection for interpretation of malignancy associated changes
  • 2019
  • In: 3rd NEUBIAS Conference, Luxembourg, 2-8 February 2019.
  • Conference paper (other academic/artistic). Abstract:
    • Introduction: Cancer of the oral cavity is one of the most common malignancies in the world. The incidence of oral cavity and oropharyngeal cancer is increasing among young people. It is noteworthy that the oral cavity can be relatively easily accessed for routine screening tests that could potentially decrease the incidence of oral cancer. Automated deep-learning-based computer-aided methods show promising ability to detect subtle precancerous changes at a very early stage, even when visual examination is less effective. Although the biological nature of these malignancy-associated changes is not fully understood, the consistency of morphology and textural changes within a cell dataset could shed light on the premalignant state. In this study, we aim to increase understanding of this phenomenon by exploring and visualizing which parts of cell images are considered most important when trained deep convolutional neural networks (DCNNs) are used to differentiate cytological images into normal and abnormal classes. Materials and methods: Cell samples are collected with a brush at areas of interest in the oral cavity and stained according to standard PAP procedures. Digital images from the slides are acquired with a 0.32 micron pixel size in greyscale format (570 nm bandpass filter). Cell nuclei are manually selected in the images and a small region is cropped around each nucleus, resulting in images of 80x80 pixels. Medical knowledge is not used for choosing the cells; they are randomly selected from the glass slide, and for the learning process we provide ground truth only on the patient level, not on the cell level. Overall, 10274 images of cell nuclei and the surrounding region are used to train state-of-the-art DCNNs to distinguish between cells from healthy persons and persons with precancerous lesions. Data augmentation through 90-degree rotations and mirroring is applied to the datasets. Different approaches for class activation mapping and related methods are utilized to determine which image regions and feature maps are responsible for the relevant class differentiation. Results and discussion: The best-performing of the observed deep learning architectures reaches a per-cell classification accuracy surpassing 80% on the observed material. Visualizing the class activation maps confirms our expectation that the network is able to learn to focus on specific relevant parts of the sample regions. We compare and evaluate our findings related to the detected discriminative regions with the subjective judgements of a trained cytotechnologist. We believe that this effort to improve understanding of the decision criteria used by machine and human leads to increased understanding of malignancy-associated changes and also improves the robustness and reliability of the automated malignancy detection procedure. (A hedged class activation mapping sketch for such cell crops follows this record.)
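The record above trains DCNNs on 80x80 crops around cell nuclei and uses class activation mapping to see which regions drive the normal/abnormal decision. The sketch below shows the mechanics of plain class activation mapping (CAM) for a small global-average-pooling CNN on such a crop; the architecture, the random crop, and the min-max normalization are illustrative stand-ins for the state-of-the-art networks and visualization methods used in the study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallCellCNN(nn.Module):
    """Toy CNN for 80x80 greyscale cell crops ending in global average pooling,
    the architecture family that plain CAM requires. The study uses state-of-the-art
    DCNNs; this stand-in only illustrates the mechanics."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        f = self.features(x)                         # (N, 32, 20, 20)
        return self.fc(f.mean(dim=(2, 3))), f        # logits and feature maps


def class_activation_map(model, image, target_class):
    """CAM = final feature maps weighted by the classifier weights of the target
    class, upsampled to the input resolution and min-max normalized."""
    with torch.no_grad():
        _, fmap = model(image.unsqueeze(0))          # (1, C, h, w)
        w = model.fc.weight[target_class]            # (C,)
        cam = (w[:, None, None] * fmap[0]).sum(dim=0, keepdim=True)
        cam = F.interpolate(cam[None], size=image.shape[-2:], mode="bilinear",
                            align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)


# Hypothetical 80x80 crop around a nucleus; the eight 90-degree-rotation/mirroring
# augmentation variants mentioned in the abstract can be produced with torch.rot90
# and torch.flip.
crop = torch.rand(1, 80, 80)
model = SmallCellCNN()
heatmap = class_activation_map(model, crop, target_class=1)   # (80, 80) map in [0, 1]
```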