SwePub
Search the SwePub database

Results for the search "WFRF:(Wetzer Elisabeth)"

  • Results 1-10 of 23
1.
2.
3.
4.
  • Gay, Jo, et al. (authors)
  • Texture-based oral cancer detection: A performance analysis of deep learning approaches.
  • 2019
  • In: 3rd NEUBIAS Conference. Luxembourg.
  • Conference paper (other academic/artistic). Abstract:
    • Early-stage cancer detection is essential for reducing cancer mortality. Screening programs such as that for cervical cancer are highly effective in preventing advanced-stage cancers. One obstacle to the introduction of screening for other cancer types is the cost associated with manual inspection of the resulting cell samples. Computer-assisted image analysis of cytology slides may offer a significant reduction of these costs. We are particularly interested in detection of cancer of the oral cavity, one of the most common malignancies in the world, with an increasing incidence among young people. Due to the non-invasive accessibility of the oral cavity, automated detection may enable screening programs leading to early diagnosis and treatment. It is well known that variations in the chromatin texture of the cell nucleus are an important diagnostic feature. With the aim of maximizing the reliability of an automated system for oral cancer detection, we evaluate three state-of-the-art deep convolutional neural network (DCNN) approaches which are specialized for texture analysis. A powerful tool for texture description is local binary patterns (LBPs): they describe the pattern of variations in intensity between a pixel and its neighbours, instead of using the image intensity values directly. A neural network can be trained to recognize the range of patterns found in different types of images. Many methods have been proposed which either use LBPs directly, or are inspired by them, and show promising results on a range of different image classification tasks where texture is an important discriminative feature. We evaluate multiple recently published deep learning-based texture classification approaches: two of them (referred to as Model 1, by Juefei-Xu et al. (CVPR 2017), and Model 2, by Li et al. (2018)) are inspired by LBP texture descriptors, while the third (Model 3, by Marcos et al. (ICCV 2017)), based on Rotation Equivariant Vector Field Networks, aims at preserving fine textural details under rotations, thus enabling a reduced model size. Performance is compared with state-of-the-art results on the same dataset by Wieslander et al. (CVPR 2017), which are based on ResNet and VGG architectures. Furthermore, a fusion of DCNNs with LBP maps, as in Wetzer et al. (Bioimg. Comp. 2018), is evaluated for comparison. Our aim is to explore whether a focus on texture can improve CNN performance. Both of the LBP-based methods exhibit higher performance (F1-score for Model 1: 0.85; Model 2: 0.83) than what is obtained by using CNNs directly on the greyscale data (VGG: 0.78, ResNet: 0.76). This clearly demonstrates the effectiveness of LBPs for this type of image classification task. The approach based on rotation-equivariant networks lags behind in performance (F1-score for Model 3: 0.72), indicating that this method may be less appropriate for classifying single-cell images.
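
As a minimal sketch of the LBP descriptor this abstract builds on, the code below computes classic 8-neighbour LBP codes in plain NumPy; the function name and implementation details are illustrative choices, not taken from any of the cited papers.

    import numpy as np

    def lbp_8(image: np.ndarray) -> np.ndarray:
        """8-neighbour LBP codes for the interior pixels of a 2D greyscale image."""
        center = image[1:-1, 1:-1]
        # Neighbour offsets, ordered clockwise from the top-left pixel.
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = np.zeros_like(center, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(offsets):
            neighbour = image[1 + dy:image.shape[0] - 1 + dy,
                              1 + dx:image.shape[1] - 1 + dx]
            # Set the bit if the neighbour is at least as bright as the centre pixel.
            codes |= (neighbour >= center).astype(np.uint8) << bit
        return codes

A histogram of these codes over a nucleus region, or the LBP map itself used as an extra input channel (as in the DCNN/LBP fusion mentioned above), can then serve as the texture representation.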
5.
  • Koriakina, Nadezhda, 1991-, et al. (authors)
  • Uncovering hidden reasoning of convolutional neural networks in biomedical image classification by using attribution methods
  • 2020
  • In: 4th NEUBIAS Conference, Bordeaux, France.
  • Conference paper (other academic/artistic). Abstract:
    • Convolutional neural networks (CNNs) are very popular in biomedical image processing and analysis, due to their impressive performance on numerous tasks. However, this performance comes at the cost of limited interpretability, which may harm users' trust in the methods and their results. Robust and trustworthy methods are particularly in demand in the medical domain due to the sensitivity of the matter. There is limited understanding of what CNNs base their decisions on and, in particular, how their performance is related to what they are paying attention to. In this study, we utilize popular attribution methods with the aim of exploring the relations between properties of a network's attention and its accuracy and certainty in classification. An intuitive line of reasoning is that, in order for a network to make good decisions, it has to be consistent in what it draws attention to. We take a step towards understanding CNNs' behavior by identifying a relation between model performance and the variability of its attention map. We consider two biomedical datasets and two commonly used architectures. We train several identical models of the same architecture on the given data; these identical models differ due to the stochasticity of initialization and training. We analyse the variability of the predictions from such collections of networks, observing each network instance and its classifications independently. We utilize Gradient-weighted Class Activation Mapping (Grad-CAM) and Layer-wise Relevance Propagation (LRP), two frequently employed attribution methods, for the activation analysis. Given a collection of trained CNNs, we compute, for each image of the test set: (i) the mean and standard deviation (SD) of the accuracy over the networks in the collection; (ii) the mean and SD of the respective attention maps. We plot these measures against each other for the different combinations of network architectures and datasets, in order to expose possible relations between them. Our results reveal that there exists a relation between the variability of accuracy for collections of identical models and the variability of the corresponding attention maps, and that this relation is consistent among the considered combinations of datasets and architectures. We observe that the aggregated standard deviation of the attention maps has a quadratic relation to the average accuracy of the sets of models and a linear relation to the standard deviation of accuracy. Motivated by these results, we are also performing subsequent experiments to reveal the relation between the score and the attention, as well as to understand the impact of individual images on the prediction, by using the mentioned statistics for each image together with clustering techniques. These constitute important steps towards improved explainability and a generally clearer picture of the decision-making process of CNNs for biomedical data.
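
As a rough sketch of the per-image statistics described in this abstract: assuming a collection of trained classifiers and a hypothetical helper grad_cam(model, image) that returns a 2D attribution map (e.g. via an attribution library such as captum), the quantities could be computed as follows. None of this is the authors' code.

    import numpy as np

    def attention_statistics(models, image, label, grad_cam):
        """Mean/SD of accuracy and attention over a collection of identical models."""
        correct, maps = [], []
        for model in models:
            pred = model(image).argmax()          # predicted class of this instance
            correct.append(float(pred == label))  # 1.0 if correct, else 0.0
            maps.append(grad_cam(model, image))   # 2D attribution map (H, W)
        maps = np.stack(maps)                     # shape: (n_models, H, W)
        return {
            "accuracy_mean": float(np.mean(correct)),
            "accuracy_sd": float(np.std(correct)),
            "attention_mean": maps.mean(axis=0),             # per-pixel mean map
            "attention_sd": float(maps.std(axis=0).mean()),  # aggregated per-pixel SD
        }

Plotting attention_sd against accuracy_mean and accuracy_sd over the test set is then what exposes the quadratic and linear relations reported in the abstract.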
6.
  • Pielawski, Nicolas, et al. (authors)
  • CoMIR: Contrastive Multimodal Image Representation for Registration
  • 2020
  • In: NeurIPS - 34th Conference on Neural Information Processing Systems.
  • Conference paper (peer-reviewed). Abstract:
    • We propose contrastive coding to learn shared, dense image representations, referred to as CoMIRs (Contrastive Multimodal Image Representations). CoMIRs enable the registration of multimodal images where existing registration methods often fail due to a lack of sufficiently similar image structures. CoMIRs reduce the multimodal registration problem to a monomodal one, in which general intensity-based, as well as feature-based, registration algorithms can be applied. The method involves training one neural network per modality on aligned images, using a contrastive loss based on noise-contrastive estimation (InfoNCE). Unlike other contrastive coding methods, used for, e.g., classification, our approach generates image-like representations that contain the information shared between modalities. We introduce a novel, hyperparameter-free modification to InfoNCE, to enforce rotational equivariance of the learnt representations, a property essential to the registration task. We assess the extent of achieved rotational equivariance and the stability of the representations with respect to weight initialization, training set, and hyperparameter settings, on a remote sensing dataset of RGB and near-infrared images. We evaluate the learnt representations through registration of a biomedical dataset of bright-field and second-harmonic generation microscopy images, two modalities with very little apparent correlation. The proposed approach based on CoMIRs significantly outperforms registration of representations created by GAN-based image-to-image translation, as well as a state-of-the-art, application-specific method which takes additional knowledge about the data into account. Code is available at: https://github.com/MIDA-group/CoMIR.
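
A minimal PyTorch sketch of the InfoNCE objective that CoMIR training is based on, with cosine similarity as the critic; the paper's actual critic choices and its equivariance-enforcing modification are not reproduced here.

    import torch
    import torch.nn.functional as F

    def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
        """z1, z2: (N, D) embeddings of N aligned pairs from the two modalities."""
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / tau                             # (N, N) cosine similarities
        targets = torch.arange(z1.size(0), device=z1.device)   # the i-th pair is the positive
        # Cross-entropy pulls each matched pair together and pushes
        # the remaining N-1 pairings in the batch apart.
        return F.cross_entropy(logits, targets)

Since CoMIRs are image-like, z1 and z2 would in practice correspond to spatial positions sampled from the two networks' dense outputs rather than whole-image vectors.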
7.
  • Pielawski, Nicolas, et al. (authors)
  • Global Parameter Optimization for Multimodal Biomedical Image Registration
  • Other publication (other academic/artistic). Abstract:
    • Multimodal image registration and fusion is a highly relevant problem in a variety of application domains, from biomedical imaging and remote sensing to computer vision. However, combining images originating from different sources (e.g. microscopes or scanners) is challenging, as they are in different coordinate systems and their content may vary greatly. To align the underlying structures in multiple modalities and fuse their complementary information, image registration is required. Methods for registration generally rely on a similarity or distance function between images and an optimization algorithm to find the geometric transformation between two images (translation and rotation, in the case of rigid registration). Global optimization can be applied to multimodal image registration such that the best transformation is guaranteed to be discovered given a large enough computational budget, eliminating the failure cases of local optimization algorithms that converge to a local optimum. Recently, several methods for global multimodal image registration have been developed; however, they rely on a grid or random search to find the best orientation. We propose a framework using Bayesian optimization to find the optimal orientation between images, which combines the favorable properties of global optimization with the sophisticated parameter search of Bayesian optimization to accelerate the convergence rate. This manuscript presents preliminary results on the faster convergence rate of the Bayesian optimizer in comparison to random search, on a small set of multimodal image pairs of brains acquired by positron emission tomography and magnetic resonance imaging.
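
As an illustrative sketch of the idea (not the authors' implementation): Bayesian optimization over the rotation angle of a rigid transform, using scikit-optimize's gp_minimize as the Gaussian-process optimizer. The distance function here is a hypothetical mean-squared-difference stand-in for whatever multimodal similarity the framework actually uses.

    import numpy as np
    from scipy.ndimage import rotate
    from skopt import gp_minimize

    def registration_distance(angle_deg, fixed, moving):
        """Hypothetical: rotate `moving` by angle_deg and compare it to `fixed`."""
        rotated = rotate(moving, angle_deg, reshape=False, order=1)
        return float(np.mean((fixed - rotated) ** 2))

    def find_rotation(fixed, moving, n_calls=30):
        # A Gaussian process models the distance as a function of the angle;
        # each new evaluation is placed by an acquisition function that
        # balances exploration and exploitation.
        result = gp_minimize(
            lambda p: registration_distance(p[0], fixed, moving),
            dimensions=[(-180.0, 180.0)],   # search the full circle of rotations
            n_calls=n_calls,                # fixed evaluation budget
            random_state=0,
        )
        return result.x[0], result.fun      # best angle and its distance

Compared with a grid or random search over the same budget, the surrogate model concentrates evaluations near promising angles, which is the source of the faster convergence reported above.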
8.
  • Sintorn, Ida-Maria, 1976-, et al. (authors)
  • Facilitating Ultrastructural Pathology through Automated Imaging and Analysis
  • 2019
  • In: Journal of Pathology Informatics. Elsevier. ISSN 2229-5089, 2153-3539; 10:1, pp. 38-39
  • Journal article (other academic/artistic). Abstract:
    • Transmission electron microscopy (TEM) is an important diagnostic tool for analyzing human tissue at the nm scale. It is the only option, or the gold standard, for diagnosing several disorders, e.g. ciliary and renal diseases, rare cancers, etc. However, conventional TEM microscopes are highly manual and technically complex, and a special environment is required to house the bulky and sensitive machines. Interpretation of the information is subjective, time-consuming, and relies on a high level of expertise which, unfortunately, is rare for this specialty within pathology. Here, we present methods and results from an ongoing project with the goal of developing a smart and easy-to-use platform for ultrastructural pathologic diagnoses. The platform is based on the recently developed MiniTEM instrument, a highly automated table-top TEM. In the project, we develop image analysis methods for guided as well as fully automated search and analysis of structures of interest. In addition, we enrich MiniTEM with an integrated database for convenient image handling and traceability. These points were identified by user representatives as crucial for creating a cost-effective diagnostic platform. We will show strategies and results for using image analysis and machine learning for automated search for objects/regions of interest at low magnification, as well as for combining multiple object instances acquired at high magnification to enhance the nm-scale details necessary for a correct diagnosis. This will be exemplified for diagnosing primary ciliary dyskinesia and renal disorders. The automation in imaging and analysis within the platform is a big step towards digital ultrapathology.
9.
10.
  • Wetzer, Elisabeth, et al. (authors)
  • Can Representation Learning for Multimodal Image Registration be Improved by Supervision of Intermediate Layers?
  • 2023
  • In: IbPRIA 2023: Pattern Recognition and Image Analysis. Springer. ISBN 9783031366154, 9783031366161; pp. 261-275
  • Conference paper (peer-reviewed). Abstract:
    • Multimodal imaging and correlative analysis typically require image alignment. Contrastive learning can generate representations of multimodal images, reducing the challenging task of multimodal image registration to a monomodal one. Previously, additional supervision on intermediate layers in contrastive learning has improved biomedical image classification. We evaluate whether a similar approach improves the representations learned for registration, so as to boost registration performance. We explore three approaches to add contrastive supervision to the latent features of the bottleneck layer in the U-Nets encoding the multimodal images, and evaluate three different critic functions. Our results show that representations learned without additional supervision on the latent features perform best in the downstream task of registration on two public biomedical datasets. We investigate the performance drop by exploiting recent insights into contrastive learning for classification and self-supervised learning. We visualize the spatial relations of the learned representations by means of multidimensional scaling, and show that additional supervision on the bottleneck layer can lead to a partial dimensional collapse of the intermediate embedding space.
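
A small sketch of the multidimensional scaling (MDS) visualization step mentioned in this abstract, using scikit-learn; flattening the representations and using Euclidean dissimilarities are my assumptions, not details from the paper.

    import numpy as np
    from sklearn.manifold import MDS

    def mds_embed(features: np.ndarray) -> np.ndarray:
        """features: (n_samples, d) flattened representations -> (n_samples, 2)."""
        mds = MDS(n_components=2, dissimilarity="euclidean", random_state=0)
        # MDS seeks 2D coordinates whose pairwise distances approximate the
        # pairwise distances of the high-dimensional feature vectors.
        return mds.fit_transform(features)

In such a plot, partial dimensional collapse shows up as the embedded points concentrating along a lower-dimensional structure rather than spreading over the plane.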