SwePub
Search the SwePub database


Hit list for the search "WFRF:(Goksel Orcun)"

Search: WFRF:(Goksel Orcun)

  • Results 1-6 of 6
1.
  • Anklin, Valentin, et al. (author)
  • Learning Whole-Slide Segmentation from Inexact and Incomplete Labels Using Tissue Graphs
  • 2021
  • In: Medical Image Computing and Computer Assisted Intervention. Cham: Springer Nature. ISBN 9783030871956, 9783030871963; pp. 636-646
  • Conference paper (peer-reviewed), abstract:
    • Segmenting histology images into diagnostically relevant regions is imperative to support timely and reliable decisions by pathologists. To this end, computer-aided techniques have been proposed to delineate relevant regions in scanned histology slides. However, these techniques necessitate large task-specific datasets of annotated pixels, which are tedious, time-consuming, and expensive to acquire, and infeasible for many histology tasks. Thus, weakly-supervised semantic segmentation techniques have been proposed to leverage weak supervision, which is cheaper and quicker to acquire. In this paper, we propose SEGGINI, a weakly-supervised segmentation method using graphs that can utilize weak multiplex annotations, i.e., inexact and incomplete annotations, to segment arbitrary and large images, scaling from tissue microarray (TMA) to whole slide image (WSI). Formally, SEGGINI constructs a tissue-graph representation for an input image, where the graph nodes depict tissue regions. Then, it performs weakly-supervised segmentation via node classification by using inexact image-level labels, incomplete scribbles, or both. We evaluated SEGGINI on two public prostate cancer datasets containing TMAs and WSIs. Our method achieved state-of-the-art segmentation performance on both datasets for various annotation settings while being comparable to a pathologist baseline. Code and models are available at: https://github.com/histocartography/seg-gini
2.
3.
  • Chintada, Bhaskara R., et al. (author)
  • Phase-Aberration Correction in Shear-Wave Elastography Imaging Using Local Speed-of-Sound Adaptive Beamforming
  • 2021
  • In: Frontiers in Physics. Frontiers Media S.A. ISSN 2296-424X; 9
  • Journal article (peer-reviewed), abstract:
    • Shear wave elasticity imaging (SWEI) is a non-invasive imaging modality that provides tissue elasticity information by measuring the travelling speed of an induced shear-wave. It is commercially available on clinical ultrasound scanners and popularly used in the diagnosis and staging of liver disease and breast cancer. In conventional SWEI methods, a sequence of acoustic radiation force (ARF) pushes is used for inducing a shear-wave, which is tracked using high frame-rate multi-angle plane wave imaging (MA-PWI) to estimate the shear-wave speed (SWS). Conventionally, these plane waves are beamformed using a constant speed-of-sound (SoS), assuming an a-priori known and homogeneous tissue medium. However, soft tissues are inhomogeneous, with intrinsic SoS variations. In this work, we study the effects of SoS inhomogeneities on SWS estimation, using simulations and phantom experiments with porcine muscle as an aberrator, and show how these aberrations can be corrected using local speed-of-sound adaptive beamforming. For shear-wave tracking, we compare standard beamforming with spatially constant SoS values to software beamforming with locally varying SoS maps. We show that, given SoS aberrations, traditional beamforming using a constant SoS, regardless of the utilized SoS value, introduces a substantial bias in the resulting SWS estimations. The average SWS estimation disparity for the same material was over 4.3 times worse when a constant SoS value was used for beamforming than when a known SoS map was used. Such biases are shown to be corrected by using a local SoS map in beamforming, indicating the importance of and the need for local SoS reconstruction techniques.
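The core SWS estimation that the abstract builds on can be sketched as time-of-flight tracking: the shear wave's displacement is observed at two lateral positions, the lag maximizing their cross-correlation gives the travel time, and speed is distance over time. The pulse shape, frame rate, and distances below are made-up toy values, not the paper's setup.

```python
# Illustrative sketch: time-of-flight shear-wave speed (SWS) estimation.
# A displacement pulse is tracked at two lateral positions; the lag that
# maximizes their cross-correlation is the travel time in frames, and
# SWS = lateral distance / travel time. All numbers are toy values.

def best_lag(sig_a, sig_b, max_lag):
    """Integer lag (in samples) maximizing correlation of sig_b against sig_a."""
    def corr_at(lag):
        return sum(sig_a[i] * sig_b[i + lag]
                   for i in range(len(sig_a) - lag))
    return max(range(max_lag + 1), key=corr_at)

frame_rate_hz = 10_000.0            # tracking frame rate
dx_m = 0.004                        # 4 mm between tracking positions
pulse = [0.0, 0.5, 1.0, 0.5, 0.0]   # toy displacement pulse
a = pulse + [0.0] * 20              # wave arrives early at position A
b = [0.0] * 8 + pulse + [0.0] * 12  # ... and 8 frames later at position B

lag = best_lag(a, b, max_lag=15)
sws = dx_m / (lag / frame_rate_hz)
print(f"lag={lag} frames, SWS={sws:.2f} m/s")  # lag=8 frames, SWS=5.00 m/s
```

The paper's point is that the tracked displacements themselves are distorted when the beamformer assumes a wrong SoS, which biases exactly this kind of lag estimate.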
4.
  • Gomariz, Alvaro, et al. (author)
  • Utilizing Uncertainty Estimation in Deep Learning Segmentation of Fluorescence Microscopy Images with Missing Markers
  • 2021
  • Conference paper (peer-reviewed), abstract:
    • Fluorescence microscopy images contain several channels, each indicating a marker staining the sample. Since many different marker combinations are utilized in practice, it has been challenging to apply deep learning based segmentation models, which expect a predefined channel combination for all training samples as well as at inference for future application. Recent work circumvents this problem using a modality attention approach to be effective across any possible marker combination. However, for combinations that do not exist in a labeled training dataset, one cannot have any estimation of potential segmentation quality if that combination is encountered during inference. Without this, not only does one lack quality assurance, but one also does not know where to put any additional imaging and labeling effort. We herein propose a method to estimate segmentation quality on unlabeled images by (i) estimating both aleatoric and epistemic uncertainties of convolutional neural networks for image segmentation, and (ii) training a Random Forest model for the interpretation of uncertainty features via regression to their corresponding segmentation metrics. Additionally, we demonstrate that including these uncertainty measures during training can provide an improvement on segmentation performance.
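The aleatoric/epistemic split that the abstract refers to is commonly computed from multiple stochastic forward passes (e.g. MC dropout): predictive entropy of the mean is the total uncertainty, the mean per-pass entropy is the aleatoric part, and their difference (the mutual information) is the epistemic part. A minimal per-pixel sketch under that standard decomposition, with made-up probabilities:

```python
import math

# Illustrative sketch of a standard uncertainty decomposition: with T
# stochastic forward passes, predictive entropy splits into an aleatoric
# part (mean per-pass entropy) and an epistemic part (total - aleatoric).
# The class probabilities below are toy values, not from the paper.

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

def decompose(samples):
    """samples: list of per-pass class-probability vectors for one pixel."""
    n_classes = len(samples[0])
    mean_p = [sum(s[c] for s in samples) / len(samples)
              for c in range(n_classes)]
    total = entropy(mean_p)                                # predictive entropy
    aleatoric = sum(entropy(s) for s in samples) / len(samples)
    return total, aleatoric, total - aleatoric             # epistemic >= 0

# Two passes that disagree strongly -> large epistemic uncertainty.
disagreeing = [[0.9, 0.1], [0.1, 0.9]]
total, aleatoric, epistemic = decompose(disagreeing)
print(f"total={total:.3f} aleatoric={aleatoric:.3f} epistemic={epistemic:.3f}")
```

In the paper's pipeline, image-level aggregates of such uncertainty features are then regressed (via a Random Forest) to segmentation metrics, so quality can be predicted without labels.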
5.
  • Jimenez-del-Toro, Oscar, et al. (author)
  • Cloud-Based Evaluation of Anatomical Structure Segmentation and Landmark Detection Algorithms : VISCERAL Anatomy Benchmarks
  • 2016
  • In: IEEE Transactions on Medical Imaging. Institute of Electrical and Electronics Engineers (IEEE). ISSN 0278-0062, E-ISSN 1558-254X; 35:11, pp. 2459-2475
  • Journal article (peer-reviewed), abstract:
    • Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease, and automatic tools can help automate parts of their otherwise manual assessment. A cloud-based evaluation framework is presented in this paper, including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud, where participants can access only the training data; the algorithms are then run privately by the benchmark administrators to objectively compare their performance on an unseen common test set. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results, and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus, generated by fusing the participant algorithms' outputs on a larger set of non-manually-annotated medical images, are available to the research community.
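Segmentation benchmarks like the one above typically score submissions against the gold corpus with overlap metrics such as the Dice coefficient. A minimal sketch of that metric, with toy voxel sets that are not from the VISCERAL data:

```python
# Illustrative sketch: the Dice coefficient, a standard overlap metric for
# comparing a predicted segmentation against a reference annotation.
# The voxel index sets below are toy data, not from the benchmarks.

def dice(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) over sets of foreground voxel indices."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

predicted = {(0, 0), (0, 1), (1, 0), (1, 1)}
reference = {(0, 1), (1, 0), (1, 1), (2, 1)}
score = dice(predicted, reference)
print(score)  # 2*3 / (4+4) = 0.75
```

Running the metric centrally on a hidden test set, as the benchmark does, prevents participants from tuning against the evaluation data.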
6.
  • Tomar, Devavrat, et al. (author)
  • Content-Preserving Unpaired Translation from Simulated to Realistic Ultrasound Images
  • 2021
  • In: Medical Image Computing and Computer Assisted Intervention. Cham: Springer International Publishing. ISBN 9783030872366, 9783030872373; pp. 659-669
  • Conference paper (peer-reviewed), abstract:
    • Interactive simulation of ultrasound imaging greatly facilitates sonography training. Although ray-tracing based methods have shown promising results, obtaining realistic images requires substantial modeling effort and manual parameter tuning. In addition, current techniques still result in a significant appearance gap between simulated images and real clinical scans. Herein we introduce a novel content-preserving image translation framework (ConPres) to bridge this appearance gap while maintaining the simulated anatomical layout. We achieve this goal by leveraging both simulated images with semantic segmentations and unpaired in-vivo ultrasound scans. Our framework builds on recent contrastive unpaired translation techniques, and we propose a regularization approach that learns an auxiliary segmentation-to-real image translation task, encouraging the disentanglement of content and style. In addition, we extend the generator to be class-conditional, which enables the incorporation of additional losses, in particular a cyclic consistency loss, to further improve the translation quality. Qualitative and quantitative comparisons against state-of-the-art unpaired translation methods demonstrate the superiority of our proposed framework.
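The cyclic consistency loss the abstract mentions penalizes the difference between an image and its round trip through both translation directions. A minimal sketch of that idea, where the two "generators" are hypothetical invertible stand-ins for learned networks:

```python
# Illustrative sketch of a cyclic consistency loss: translating a simulated
# image to the real domain and back should reproduce the original, so the
# L1 distance between the original and the round trip is penalized.
# The affine "generators" below are toy stand-ins for learned networks.

def l1_loss(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def g_sim_to_real(img):   # hypothetical simulated -> real translator
    return [0.8 * v + 0.1 for v in img]

def f_real_to_sim(img):   # hypothetical real -> simulated translator
    return [(v - 0.1) / 0.8 for v in img]

sim_img = [0.2, 0.5, 0.9]                       # toy pixel intensities
cycled = f_real_to_sim(g_sim_to_real(sim_img))
loss = l1_loss(sim_img, cycled)
print(loss)  # near 0: this toy cycle is consistent
```

During training the loss is minimized jointly with the adversarial and contrastive objectives, pushing the learned generators toward exactly this kind of content-preserving round trip.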
