SwePub
Search the SwePub database


Results for search: WFRF:(Tiziano Portenier)

  • Result 1-7 of 7
1.
  • Gomariz, Alvaro, et al. (author)
  • Modality attention and sampling enables deep learning with heterogeneous marker combinations in fluorescence microscopy
  • 2021
In: Nature Machine Intelligence. - Springer Nature. - ISSN 2522-5839. ; 3:9, pp. 799-811
Journal article (peer-reviewed). Abstract:
    • Fluorescence microscopy allows for a detailed inspection of cells, cellular networks and anatomical landmarks by staining with a variety of carefully selected markers visualized as colour channels. Quantitative characterization of structures in acquired images often relies on automatic image analysis methods. Despite the success of deep learning methods in other vision applications, their potential for fluorescence image analysis remains underexploited. One reason lies in the considerable workload required to train accurate models, which are normally specific for a given combination of markers and therefore applicable to a very restricted number of experimental settings. We herein propose ‘marker sampling and excite’—a neural network approach with a modality sampling strategy and a novel attention module that together enable (1) flexible training with heterogeneous datasets with combinations of markers and (2) successful utility of learned models on arbitrary subsets of markers prospectively. We show that our single neural network solution performs comparably to an upper bound scenario in which an ensemble of many networks is naively trained for each possible marker combination separately. We also demonstrate the feasibility of this framework in high-throughput biological analysis by revising a recent quantitative characterization of bone-marrow vasculature in three-dimensional confocal microscopy datasets and further confirm the validity of our approach on another substantially different dataset of microvessels in foetal liver tissues. Not only can our work substantially ameliorate the use of deep learning in fluorescence microscopy analysis, but it can also be utilized in other fields with incomplete data acquisitions and missing modalities.
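The "marker sampling and excite" recipe lends itself to a compact illustration. Below is a minimal, hypothetical PyTorch sketch, not the authors' code: sample_markers randomly drops input channels (markers) during training, and MarkerExcite is a squeeze-and-excite-style gate that reweights feature maps from a binary mask of available markers. All names, shapes, and the gate architecture are assumptions.

```python
# Hypothetical sketch of marker sampling + attention gating (not the paper's code).
import torch
import torch.nn as nn

class MarkerExcite(nn.Module):
    """Gates feature maps based on which markers are present in the input."""
    def __init__(self, n_markers: int, n_features: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(n_markers, n_features),
            nn.ReLU(),
            nn.Linear(n_features, n_features),
            nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor, marker_mask: torch.Tensor) -> torch.Tensor:
        # feats: (B, n_features, H, W); marker_mask: (B, n_markers) in {0, 1}
        weights = self.gate(marker_mask)          # (B, n_features) channel weights
        return feats * weights[:, :, None, None]

def sample_markers(x: torch.Tensor, p_drop: float = 0.5):
    """Marker sampling: zero a random subset of input channels,
    guaranteeing at least one surviving marker per sample."""
    B, C = x.shape[:2]
    mask = (torch.rand(B, C, device=x.device) > p_drop).float()
    empty = mask.sum(dim=1) == 0
    if empty.any():
        mask[empty, torch.randint(C, (int(empty.sum()),), device=x.device)] = 1.0
    return x * mask[:, :, None, None], mask

# toy usage: a 4-marker image batch, gating 16 encoder feature channels
x = torch.rand(2, 4, 64, 64)
x_sampled, mask = sample_markers(x)
feats = torch.rand(2, 16, 64, 64)                 # features from some encoder stage
gated = MarkerExcite(n_markers=4, n_features=16)(feats, mask)
```

Training on randomly sampled marker subsets is what lets a single network serve arbitrary marker combinations at inference, rather than training one network per combination.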
2.
  • Gomariz, Alvaro, et al. (author)
  • Probabilistic spatial analysis in quantitative microscopy with uncertainty-aware cell detection using deep Bayesian regression
  • 2022
In: Science Advances. - American Association for the Advancement of Science (AAAS). - ISSN 2375-2548. ; 8:5
Journal article (peer-reviewed). Abstract:
    • The investigation of biological systems with three-dimensional microscopy demands automatic cell identification methods that not only are accurate but also can imply the uncertainty in their predictions. The use of deep learning to regress density maps is a popular successful approach for extracting cell coordinates from local peaks in a postprocessing step, which then, however, hinders any meaningful probabilistic output. We propose a framework that can operate on large microscopy images and output probabilistic predictions (i) by integrating deep Bayesian learning for the regression of uncertainty-aware density maps, where peak detection algorithms generate cell proposals, and (ii) by learning a mapping from prediction proposals to a probabilistic space that accurately represents the chances of a successful prediction. Using these calibrated predictions, we propose a probabilistic spatial analysis with Monte Carlo sampling. We demonstrate this in a bone marrow dataset, where our proposed methods reveal spatial patterns that are otherwise undetectable.
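Monte Carlo dropout is one common approximation to the deep Bayesian regression the abstract describes; the paper's exact variational scheme may differ. The sketch below, with an assumed dropout-bearing density-regression network called model, shows how repeated stochastic forward passes yield both a density estimate and a per-pixel uncertainty map; peaks of the mean map would then become cell proposals for a separate calibration step.

```python
# Hedged sketch: uncertainty-aware density regression via MC dropout.
import torch

@torch.no_grad()
def mc_density(model: torch.nn.Module, image: torch.Tensor, n_samples: int = 20):
    """Run n_samples stochastic forward passes and return the mean density
    map plus its per-pixel predictive variance."""
    model.train()  # keep dropout layers stochastic at inference time
    preds = torch.stack([model(image) for _ in range(n_samples)])  # (T, B, 1, H, W)
    return preds.mean(dim=0), preds.var(dim=0)
```

The calibrated probabilities attached to each proposal are what make the downstream Monte Carlo spatial analysis meaningful, since detections can then be resampled according to their estimated chance of being correct.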
3.
4.
  • Gomariz, Alvaro, et al. (author)
  • Utilizing Uncertainty Estimation in Deep Learning Segmentation of Fluorescence Microscopy Images with Missing Markers
  • 2021
Conference paper (peer-reviewed). Abstract:
• Fluorescence microscopy images contain several channels, each indicating a marker staining the sample. Since many different marker combinations are utilized in practice, it has been challenging to apply deep learning based segmentation models, which expect a predefined channel combination for all training samples as well as at inference for future application. Recent work circumvents this problem using a modality attention approach to be effective across any possible marker combination. However, for combinations that do not exist in a labeled training dataset, one cannot have any estimation of potential segmentation quality if that combination is encountered during inference. Without this, not only does one lack quality assurance but one also does not know where to put any additional imaging and labeling effort. We herein propose a method to estimate segmentation quality on unlabeled images by (i) estimating both aleatoric and epistemic uncertainties of convolutional neural networks for image segmentation, and (ii) training a Random Forest model for the interpretation of uncertainty features via regression to their corresponding segmentation metrics. Additionally, we demonstrate that including these uncertainty measures during training can provide an improvement on segmentation performance.
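Step (ii) of this abstract, regressing a segmentation metric from uncertainty features, is simple to illustrate. The following scikit-learn sketch uses toy random maps and assumed summary statistics (mean, standard deviation, 95th percentile); the paper's actual feature set and metric are not specified here.

```python
# Illustrative sketch: predict segmentation quality (e.g. Dice) from
# uncertainty-map features with a Random Forest. Toy data throughout.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def uncertainty_features(aleatoric: np.ndarray, epistemic: np.ndarray) -> np.ndarray:
    """Summarize per-pixel uncertainty maps into a fixed-length feature vector."""
    feats = []
    for m in (aleatoric, epistemic):
        feats += [m.mean(), m.std(), np.percentile(m, 95)]
    return np.asarray(feats)

rng = np.random.default_rng(0)
# toy stand-ins: uncertainty maps for 50 labeled images and their Dice scores
X = np.stack([uncertainty_features(rng.random((64, 64)), rng.random((64, 64)))
              for _ in range(50)])
y = rng.random(50)  # measured Dice per labeled image (toy values)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# estimated segmentation quality for a new, unlabeled image
est = rf.predict(uncertainty_features(rng.random((64, 64)),
                                      rng.random((64, 64)))[None, :])
```

In practice the regressor is fit on images where ground-truth segmentations exist, then applied to unlabeled marker combinations to decide where extra imaging and labeling effort would pay off.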
5.
6.
  • Tomar, Devavrat, et al. (author)
  • Content-Preserving Unpaired Translation from Simulated to Realistic Ultrasound Images
  • 2021
In: Medical Image Computing and Computer Assisted Intervention. - Cham : Springer International Publishing. - ISBN 9783030872366, 9783030872373. ; pp. 659-669
Conference paper (peer-reviewed). Abstract:
    • Interactive simulation of ultrasound imaging greatly facilitates sonography training. Although ray-tracing based methods have shown promising results, obtaining realistic images requires substantial modeling effort and manual parameter tuning. In addition, current techniques still result in a significant appearance gap between simulated images and real clinical scans. Herein we introduce a novel content-preserving image translation framework (ConPres) to bridge this appearance gap, while maintaining the simulated anatomical layout. We achieve this goal by leveraging both simulated images with semantic segmentations and unpaired in-vivo ultrasound scans. Our framework is based on recent contrastive unpaired translation techniques and we propose a regularization approach by learning an auxiliary segmentation-to-real image translation task, which encourages the disentanglement of content and style. In addition, we extend the generator to be class-conditional, which enables the incorporation of additional losses, in particular a cyclic consistency loss, to further improve the translation quality. Qualitative and quantitative comparisons against state-of-the-art unpaired translation methods demonstrate the superiority of our proposed framework.
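The abstract names several loss terms: an adversarial term, a contrastive (PatchNCE-style) content term, an auxiliary segmentation-to-real-image task, and a cyclic consistency term enabled by the class-conditional generator. The sketch below shows one plausible way these might compose in a generator update; the loss weights, the exact NCE formulation, and the cycle direction are all assumptions, and G, D, and nce stand for unspecified generator, discriminator, and contrastive-loss modules.

```python
# Hypothetical composition of the generator losses described in the abstract.
import torch
import torch.nn.functional as F

def generator_losses(G, D, nce, sim_img, seg_map, real_img, cls_sim, cls_real):
    """One generator update's loss terms (discriminator update omitted)."""
    fake = G(sim_img, cls_real)                  # simulated -> realistic style
    logits = D(fake)
    adv = F.binary_cross_entropy_with_logits(    # fool the discriminator
        logits, torch.ones_like(logits))
    content = nce(sim_img, fake)                 # contrastive content preservation
    aux = nce(seg_map, G(seg_map, cls_real))     # auxiliary seg -> real task
    cyc = F.l1_loss(G(G(real_img, cls_sim), cls_real), real_img)  # cycle term
    return adv + content + aux + cyc
```

The auxiliary segmentation-to-image task is what encourages the disentanglement of content and style, while class conditioning is what makes the cycle term expressible with a single generator.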
7.
  • Zhang, Lin, et al. (author)
  • Learning ultrasound rendering from cross-sectional model slices for simulated training
  • 2021
In: International Journal of Computer Assisted Radiology and Surgery. - Springer. - ISSN 1861-6410, 1861-6429. ; 16:5, pp. 721-730
Journal article (peer-reviewed). Abstract:
• Purpose: Given the high level of expertise required for navigation and interpretation of ultrasound images, computational simulations can facilitate the training of such skills in virtual reality. With ray-tracing based simulations, realistic ultrasound images can be generated. However, due to computational constraints for interactivity, image quality typically needs to be compromised. Methods: We propose herein to bypass any rendering and simulation process at interactive time, by conducting such simulations during a non-time-critical offline stage and then learning image translation from cross-sectional model slices to such simulated frames. We use a generative adversarial framework with a dedicated generator architecture and input feeding scheme, which both substantially improve image quality without an increase in network parameters. Integral attenuation maps derived from cross-sectional model slices, texture-friendly strided convolutions, and providing stochastic noise and input maps to intermediate layers in order to preserve locality are all shown herein to greatly facilitate such a translation task. Results: Given several quality metrics, the proposed method with only tissue maps as input is shown to provide comparable or superior results to a state-of-the-art method that uses additional images of low-quality ultrasound renderings. An extensive ablation study shows the need for and benefits from the individual contributions utilized in this work, based on qualitative examples and quantitative ultrasound similarity metrics. To that end, a local histogram statistics based error metric is proposed and demonstrated for visualization of local dissimilarities between ultrasound images. Conclusion: A deep-learning based direct transformation from interactive tissue slices to the likeness of high-quality renderings allows one to obviate any complex rendering process in real time, which could enable extremely realistic ultrasound simulations on consumer hardware by moving the time-intensive processes to a one-time, offline, preprocessing data preparation stage that can be performed on dedicated high-end hardware.
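One concrete ingredient named in this abstract, the integral attenuation map, has a natural one-liner interpretation: accumulate per-tissue attenuation along each scanline of the cross-sectional slice, from the transducer downward. The sketch below is an assumed formulation with a placeholder per-tissue lookup table, not the paper's exact preprocessing.

```python
# Assumed sketch of an integral attenuation map as an extra generator input.
import torch

def integral_attenuation(tissue_slice: torch.Tensor,
                         atten_per_label: torch.Tensor) -> torch.Tensor:
    # tissue_slice: (H, W) integer tissue labels, row 0 nearest the transducer
    # atten_per_label: (n_labels,) attenuation coefficient per tissue type
    local = atten_per_label[tissue_slice]    # (H, W) map of local attenuation
    return torch.cumsum(local, dim=0)        # accumulate along the depth axis

# toy usage: 3 tissue types in a 128x128 slice
labels = torch.randint(0, 3, (128, 128))
atten = torch.tensor([0.1, 0.5, 0.8])       # placeholder coefficients
ia_map = integral_attenuation(labels, atten)
```

Feeding such a map alongside the raw tissue-label slice gives the generator depth-dependent shadowing cues that a purely local label map cannot convey, which is consistent with the abstract's claim that tissue maps alone suffice as input.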