SwePub
Search the SwePub database


Hit list for the search "WFRF:(Ciompi Francesco)"

Search: WFRF:(Ciompi Francesco)

  • Results 1-20 of 20
1.
  • Balkenhol, Maschenka C. A., et al. (author)
  • Deep learning assisted mitotic counting for breast cancer
  • 2019
  • In: Laboratory Investigation. - : Nature Publishing Group. - 0023-6837 .- 1530-0307. ; 99:11, pp. 1596-1606
  • Journal article (peer-reviewed) abstract
    • As part of routine histological grading, for every invasive breast cancer the mitotic count is assessed by counting mitoses in the (visually selected) region with the highest proliferative activity. Because this procedure is prone to subjectivity, the present study compares visual mitotic counting with deep learning based automated mitotic counting and fully automated hotspot selection. Two cohorts were used in this study. Cohort A comprised 90 prospectively included tumors which were selected based on the mitotic frequency scores given during routine glass slide diagnostics by one pathologist. This pathologist additionally assessed the mitotic count in these tumors in whole slide images (WSI) within a preselected hotspot. A second observer performed the same procedures on this cohort. The preselected hotspot was generated by a convolutional neural network (CNN) trained to detect all mitotic figures in digitized hematoxylin and eosin (H&E) sections. The second cohort comprised a multicenter, retrospective TNBC cohort (n = 298), of which the mitotic count was assessed by three independent observers on glass slides. The same CNN was applied on this cohort and the absolute number of mitotic figures in the hotspot was compared to the averaged mitotic count of the observers. Baseline interobserver agreement for glass slide assessment in cohort A was good (kappa 0.689; 95% CI 0.580-0.799). Using the CNN generated hotspot in WSI, the agreement score increased to 0.814 (95% CI 0.719-0.909). Automated counting by the CNN in comparison with observers counting in the predefined hotspot region yielded an average kappa of 0.724. We conclude that manual mitotic counting is not affected by assessment modality (glass slides, WSI) and that counting mitotic figures in WSI is feasible. Using a predefined hotspot area considerably improves reproducibility. Also, fully automated assessment of mitotic score appears to be feasible without introducing additional bias or variability.
  •  
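The kappa statistics quoted in the entry above measure chance-corrected agreement between observers. A minimal sketch of such a computation, assuming scikit-learn is available and using hypothetical score categories rather than the study's data:

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical mitotic score categories (1-3) assigned to the same tumors
    # by two observers; the study's real data are not reproduced here.
    observer_a = [1, 2, 3, 1, 2, 2, 3, 1]
    observer_b = [1, 2, 2, 1, 2, 3, 3, 1]

    kappa = cohen_kappa_score(observer_a, observer_b)
    print(f"Cohen's kappa: {kappa:.3f}")
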
2.
  • Balkenhol, Maschenka C. A., et al. (author)
  • Optimized tumour infiltrating lymphocyte assessment for triple negative breast cancer prognostics
  • 2021
  • In: Breast. - : Elsevier. - 0960-9776 .- 1532-3080. ; 56, pp. 78-87
  • Journal article (peer-reviewed) abstract
    • The tumour microenvironment has been shown to be a valuable source of prognostic information for different cancer types. This holds in particular for triple negative breast cancer (TNBC), a breast cancer subtype for which currently no prognostic biomarkers are established. Although different methods to assess tumour infiltrating lymphocytes (TILs) have been published, it remains unclear which method (marker, region) yields the most optimal prognostic information. In addition, to date, no objective TILs assessment methods are available. For this proof of concept study, a subset of our previously described TNBC cohort (n = 94) was stained for CD3, CD8 and FOXP3 using multiplex immunohistochemistry and subsequently imaged by a multispectral imaging system. Advanced whole-slide image analysis algorithms, including convolutional neural networks (CNN) were used to register unmixed multispectral images and corresponding H&E sections, to segment the different tissue compartments (tumour, stroma) and to detect all individual positive lymphocytes. Densities of positive lymphocytes were analysed in different regions within the tumour and its neighbouring environment and correlated to relapse free survival (RFS) and overall survival (OS). We found that for all TILs markers the presence of a high density of positive cells correlated with an improved survival. None of the TILs markers was superior to the others. The results of TILs assessment in the various regions did not show marked differences between each other. The negative correlation between TILs and survival in our cohort is in line with previous studies. Our results provide directions for optimizing TILs assessment methodology. © 2021 The Author(s). Published by Elsevier Ltd.
  •  
3.
  • Bokhorst, John-Melle, et al. (author)
  • Deep learning for multi-class semantic segmentation enables colorectal cancer detection and classification in digital pathology images
  • 2023
  • In: Scientific Reports. - : Nature Portfolio. - 2045-2322. ; 13:1
  • Journal article (peer-reviewed) abstract
    • In colorectal cancer (CRC), artificial intelligence (AI) can alleviate the laborious task of characterization and reporting on resected biopsies, including polyps, the numbers of which are increasing as a result of CRC population screening programs ongoing in many countries all around the globe. Here, we present an approach to address two major challenges in the automated assessment of CRC histopathology whole-slide images. We present an AI-based method to segment multiple (n = 14) tissue compartments in the H&E-stained whole-slide image, which provides a different, more perceptible picture of tissue morphology and composition. We test and compare a panel of state-of-the-art loss functions available for segmentation models, and provide indications about their use in histopathology image segmentation, based on the analysis of (a) a multi-centric cohort of CRC cases from five medical centers in the Netherlands and Germany, and (b) two publicly available datasets on segmentation in CRC. We used the best performing AI model as the basis for a computer-aided diagnosis system that classifies colon biopsies into four main categories that are relevant pathologically. We report the performance of this system on an independent cohort of more than 1000 patients. The results show that with a good segmentation network as a base, a tool can be developed which can support pathologists in the risk stratification of colorectal cancer patients, among other possible uses. We have made the segmentation model available for research use on .
  •  
4.
  • Bokhorst, John-Melle, et al. (author)
  • Fully Automated Tumor Bud Assessment in Hematoxylin and Eosin-Stained Whole Slide Images of Colorectal Cancer
  • 2023
  • In: Modern Pathology. - : Elsevier Science Inc. - 0893-3952 .- 1530-0285. ; 36:9
  • Journal article (peer-reviewed) abstract
    • Tumor budding (TB), the presence of single cells or small clusters of up to 4 tumor cells at the invasive front of colorectal cancer (CRC), is a proven risk factor for adverse outcomes. International definitions are necessary to reduce interobserver variability. According to the current international guidelines, hotspots at the invasive front should be counted in hematoxylin and eosin (H&E)-stained slides. This is time-consuming and prone to interobserver variability; therefore, there is a need for computer-aided diagnosis solutions. In this study, we report an artificial intelligence-based method for detecting TB in H&E-stained whole slide images. We propose a fully automated pipeline to identify the tumor border, detect tumor buds, characterize them based on the number of tumor cells, and produce a TB density map to identify the TB hotspot. The method outputs the TB count in the hotspot as a computational biomarker. We show that the proposed automated TB detection workflow performs on par with a panel of 5 pathologists at detecting tumor buds and that the hotspot-based TB count is an independent prognosticator in both the univariate and the multivariate analysis, validated on a cohort of n = 981 patients with CRC. Computer-aided detection of tumor buds based on deep learning can perform on par with expert pathologists for the detection and quantification of tumor buds in H&E-stained CRC histopathology slides, strongly facilitating the introduction of budding as an independent prognosticator in clinical routine and clinical trials. © 2023 The Authors. Published by Elsevier Inc. on behalf of the United States & Canadian Academy of Pathology. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
  •  
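A rough illustration of the bud-counting logic summarized in the entry above: keep clusters of 1-4 tumor cells as buds and report the maximum count inside a circular hotspot. The 0.785 mm^2 field size is an assumption based on common budding guidelines, and the detections are invented; this is not the authors' pipeline.

    import numpy as np

    # Hypothetical detections along the invasive front: (x_mm, y_mm, n_cells) per cluster.
    clusters = np.array([[0.10, 0.20, 3], [0.15, 0.22, 1], [0.90, 1.10, 2], [0.92, 1.08, 7]])

    # A tumor bud is a single cell or a cluster of up to 4 tumor cells.
    buds = clusters[(clusters[:, 2] >= 1) & (clusters[:, 2] <= 4)]

    # Circular hotspot whose area is roughly 0.785 mm^2 (assumed field size).
    radius = np.sqrt(0.785 / np.pi)

    def bud_count_at(cx, cy):
        d = np.hypot(buds[:, 0] - cx, buds[:, 1] - cy)
        return int((d <= radius).sum())

    # Center the candidate hotspot on each bud and keep the densest one.
    hotspot_count = max(bud_count_at(x, y) for x, y, _ in buds)
    print("tumor buds in hotspot:", hotspot_count)
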
5.
  • Bokhorst, John-Melle, et al. (author)
  • Semi-Supervised Learning to Automate Tumor Bud Detection in Cytokeratin-Stained Whole-Slide Images of Colorectal Cancer
  • 2023
  • In: Cancers. - : MDPI. - 2072-6694. ; 15:7
  • Journal article (peer-reviewed) abstract
    • Tumor budding is a histopathological biomarker associated with metastases and adverse survival outcomes in colorectal carcinoma (CRC) patients. It is characterized by the presence of single tumor cells or small clusters of cells within the tumor or at the tumor-invasion front. In order to obtain a tumor budding score for a patient, the region with the highest tumor bud density must first be visually identified by a pathologist, after which buds will be counted in the chosen hotspot field. The automation of this process will expectedly increase efficiency and reproducibility. Here, we present a deep learning convolutional neural network model that automates the above procedure. For model training, we used a semi-supervised learning method, to maximize the detection performance despite the limited amount of labeled training data. The model was tested on an independent dataset in which human- and machine-selected hotspots were mapped in relation to each other and manual and machine detected tumor bud numbers in the manually selected fields were compared. We report the results of the proposed method in comparison with visual assessment by pathologists. We show that the automated tumor bud count achieves a prognostic value comparable with visual estimation, while based on an objective and reproducible quantification. We also explore novel metrics to quantify buds such as density and dispersion and report their prognostic value. We have made the model available for research use on the grand-challenge platform.
  •  
6.
  • Chelebian, Eduard, et al. (author)
  • DEPICTER : Deep representation clustering for histology annotation
  • 2024
  • In: Computers in Biology and Medicine. - : Elsevier. - 0010-4825 .- 1879-0534. ; 170
  • Journal article (peer-reviewed) abstract
    • Automatic segmentation of histopathology whole-slide images (WSI) usually involves supervised training of deep learning models with pixel-level labels to classify each pixel of the WSI into tissue regions such as benign or cancerous. However, fully supervised segmentation requires large-scale data manually annotated by experts, which can be expensive and time-consuming to obtain. Non-fully supervised methods, ranging from semi-supervised to unsupervised, have been proposed to address this issue and have been successful in WSI segmentation tasks. But these methods have mainly been focused on technical advancements in algorithmic performance rather than on the development of practical tools that could be used by pathologists or researchers in real-world scenarios. In contrast, we present DEPICTER (Deep rEPresentatIon ClusTERing), an interactive segmentation tool for histopathology annotation that produces a patch-wise dense segmentation map at WSI level. The interactive nature of DEPICTER leverages self- and semi-supervised learning approaches to allow the user to participate in the segmentation producing reliable results while reducing the workload. DEPICTER consists of three steps: first, a pretrained model is used to compute embeddings from image patches. Next, the user selects a number of benign and cancerous patches from the multi-resolution image. Finally, guided by the deep representations, label propagation is achieved using our novel seeded iterative clustering method or by directly interacting with the embedding space via feature space gating. We report both real-time interaction results with three pathologists and evaluate the performance on three public cancer classification dataset benchmarks through simulations. The code and demos of DEPICTER are publicly available at https://github.com/eduardchelebian/depicter.
  •  
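As a rough sketch of the interaction loop described in the entry above (pretrained embeddings, a handful of user-labeled patches, then propagation of those labels through the embedding space), the snippet below uses scikit-learn's LabelSpreading as a stand-in; it is not the paper's seeded iterative clustering algorithm, and all data are synthetic.

    import numpy as np
    from sklearn.semi_supervised import LabelSpreading

    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(200, 64))   # one feature vector per image patch (synthetic)

    labels = np.full(200, -1)                 # -1 marks unlabeled patches
    labels[:3] = 0                            # a few patches the user marked as benign
    labels[100:103] = 1                       # a few patches the user marked as cancerous

    prop = LabelSpreading(kernel="knn", n_neighbors=10).fit(embeddings, labels)
    patch_classes = prop.transduction_        # patch-wise labels -> coarse segmentation map
    print(patch_classes[:10])
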
7.
  • Geessink, Oscar G. F., et al. (author)
  • Computer aided quantification of intratumoral stroma yields an independent prognosticator in rectal cancer
  • 2019
  • In: Cellular Oncology. - : Springer. - 2211-3428 .- 2211-3436. ; 42:3, pp. 331-341
  • Journal article (peer-reviewed) abstract
    • Purpose: Tumor-stroma ratio (TSR) serves as an independent prognostic factor in colorectal cancer and other solid malignancies. The recent introduction of digital pathology in routine tissue diagnostics holds opportunities for automated TSR analysis. We investigated the potential of computer-aided quantification of intratumoral stroma in rectal cancer whole-slide images. Methods: Histological slides from 129 rectal adenocarcinoma patients were analyzed by two experts who selected a suitable stroma hot-spot and visually assessed TSR. A semi-automatic method based on deep learning was trained to segment all relevant tissue types in rectal cancer histology and subsequently applied to the hot-spots provided by the experts. Patients were assigned to a stroma-high or stroma-low group by both TSR methods (visual and automated). This allowed for prognostic comparison between the two methods in terms of disease-specific and disease-free survival times. Results: With stroma-low as baseline, automated TSR was found to be prognostic independent of age, gender, pT-stage, lymph node status, tumor grade, and whether adjuvant therapy was given, both for disease-specific survival (hazard ratio = 2.48 (95% confidence interval 1.29-4.78)) and for disease-free survival (hazard ratio = 2.05 (95% confidence interval 1.11-3.78)). Visually assessed TSR did not serve as an independent prognostic factor in multivariate analysis. Conclusions: This work shows that TSR is an independent prognosticator in rectal cancer when assessed automatically in user-provided stroma hot-spots. The deep learning-based technology presented here may be a significant aid to pathologists in routine diagnostics.
  •  
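The hazard ratios in the entry above come from multivariable Cox regression. A minimal sketch of that kind of model with the lifelines package, using invented columns and numbers rather than the study's data:

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical survival table: follow-up time, event indicator, and covariates.
    df = pd.DataFrame({
        "months":      [34, 60, 12, 48, 25, 40, 18, 55],
        "event":       [1, 0, 1, 0, 1, 1, 0, 0],
        "stroma_high": [1, 0, 1, 0, 1, 0, 1, 0],   # automated TSR group (stroma-low = baseline)
        "age":         [66, 58, 72, 61, 69, 55, 74, 60],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="months", event_col="event")
    cph.print_summary()   # the exp(coef) column gives the hazard ratio per covariate
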
8.
  • Hermsen, Meyke, et al. (author)
  • Convolutional Neural Networks for the Evaluation of Chronic and Inflammatory Lesions in Kidney Transplant Biopsies
  • 2022
  • In: American Journal of Pathology. - : Elsevier Science Inc. - 0002-9440 .- 1525-2191. ; 192:10, pp. 1418-1432
  • Journal article (peer-reviewed) abstract
    • In kidney transplant biopsies, both inflammation and chronic changes are important features that predict long-term graft survival. Quantitative scoring of these features is important for transplant diagnostics and kidney research. However, visual scoring is poorly reproducible and labor intensive. The goal of this study was to investigate the potential of convolutional neural networks (CNNs) to quantify inflammation and chronic features in kidney transplant biopsies. A structure segmentation CNN and a lymphocyte detection CNN were applied to 125 whole-slide image pairs of periodic acid-Schiff- and CD3-stained slides. The CNN results were used to quantify healthy and sclerotic glomeruli, interstitial fibrosis, tubular atrophy, and inflammation within both nonatrophic and atrophic tubuli, and in areas of interstitial fibrosis. The computed tissue features showed high correlation with Banff lesion scores of five pathologists (A.A., A.Dend., J.H.B., J.K., and T.N.). Analyses on a small subset showed a moderate correlation: higher CD3+ cell density within scarred regions and higher CD3+ cell counts inside atrophic tubuli correlated with long-term change of estimated glomerular filtration rate. The presented CNNs are valid tools to yield objective quantitative information on glomeruli number, fibrotic tissue, and inflammation within scarred and non-scarred kidney parenchyma in a reproducible manner. CNNs have the potential to improve kidney transplant diagnostics and will benefit the community as a novel method to generate surrogate end points for large-scale clinical studies. (Am J Pathol 2022, 192: 1418-1432; https://doi.org/10.1016/j.ajpath.2022.06.009)
  •  
9.
  • Jiao, Yiping, et al. (author)
  • LYSTO: The Lymphocyte Assessment Hackathon and Benchmark Dataset
  • 2024
  • In: IEEE Journal of Biomedical and Health Informatics. - : IEEE. - 2168-2194 .- 2168-2208. ; 28:3, pp. 1161-1172
  • Journal article (peer-reviewed) abstract
    • We introduce LYSTO, the Lymphocyte Assessment Hackathon, which was held in conjunction with the MICCAI 2019 Conference in Shenzhen (China). The competition required participants to automatically assess the number of lymphocytes, in particular T-cells, in images of colon, breast, and prostate cancer stained with CD3 and CD8 immunohistochemistry. Unlike other challenges set up in medical image analysis, LYSTO participants were given only a few hours to address this problem. In this paper, we describe the goal and the multi-phase organization of the hackathon; we describe the proposed methods and the on-site results. Additionally, we present post-competition results where we show how the presented methods perform on an independent set of lung cancer slides, which was not part of the initial competition, as well as a comparison on lymphocyte assessment between presented methods and a panel of pathologists. We show that some of the participants were able to achieve pathologist-level performance at lymphocyte assessment. After the hackathon, LYSTO was left as a lightweight plug-and-play benchmark dataset on the grand-challenge website, together with an automatic evaluation platform.
  •  
10.
  • Leon-Ferre, Roberto A., et al. (author)
  • Automated mitotic spindle hotspot counts are highly associated with clinical outcomes in systemically untreated early-stage triple-negative breast cancer
  • 2024
  • In: npj Breast Cancer. - : Nature Portfolio. - 2374-4677. ; 10:1
  • Journal article (peer-reviewed) abstract
    • Operable triple-negative breast cancer (TNBC) has a higher risk of recurrence and death compared to other subtypes. Tumor size and nodal status are the primary clinical factors used to guide systemic treatment, while biomarkers of proliferation have not demonstrated value. Recent studies suggest that subsets of TNBC have a favorable prognosis, even without systemic therapy. We evaluated the association of fully automated mitotic spindle hotspot (AMSH) counts with recurrence-free (RFS) and overall survival (OS) in two separate cohorts of patients with early-stage TNBC who did not receive systemic therapy. AMSH counts were obtained from areas with the highest mitotic density in digitized whole slide images processed with a convolutional neural network trained to detect mitoses. In 140 patients from the Mayo Clinic TNBC cohort, AMSH counts were significantly associated with RFS and OS in a multivariable model controlling for nodal status, tumor size, and tumor-infiltrating lymphocytes (TILs) (p < 0.0001). For every 10-point increase in AMSH counts, there was a 16% increase in the risk of an RFS event (HR 1.16, 95% CI 1.08-1.25), and a 7% increase in the risk of death (HR 1.07, 95% CI 1.00-1.14). We corroborated these findings in a separate cohort of systemically untreated TNBC patients from Radboud UMC in the Netherlands. Our findings suggest that AMSH counts offer valuable prognostic information in patients with early-stage TNBC who did not receive systemic therapy, independent of tumor size, nodal status, and TILs. If further validated, AMSH counts could help inform future systemic therapy de-escalation strategies.
  •  
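A short arithmetic note on the effect sizes reported in the entry above: when a count enters a Cox model as a continuous covariate, the hazard ratio for a 10-point increase is exp(10 * beta), so the per-point coefficient implied by HR 1.16 can be recovered and rescaled. Purely illustrative:

    import math

    hr_per_10 = 1.16                      # reported hazard ratio per 10-point AMSH increase
    beta = math.log(hr_per_10) / 10       # implied per-point log-hazard coefficient

    print(round(math.exp(10 * beta), 2))  # 1.16 (sanity check)
    print(round(math.exp(20 * beta), 2))  # about 1.35 for a 20-point increase
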
11.
  • Litjens, Geert, et al. (author)
  • A Decade of GigaScience : The Challenges of Gigapixel Pathology Images
  • 2022
  • In: GigaScience. - : Oxford University Press. - 2047-217X. ; 11
  • Journal article (peer-reviewed) abstract
    • In the last decade, the field of computational pathology has advanced at a rapid pace because of the availability of deep neural networks, which achieved their first successes in computer vision tasks in 2012. An important driver of progress in the field has been public competitions, so-called Grand Challenges, in which increasingly large data sets were offered to the public to solve clinically relevant tasks. Going from the first pathology challenges, which had data obtained from 23 patients, to current challenges sharing data of thousands of patients, the performance of developed deep learning solutions has reached (and sometimes surpassed) the level of experienced pathologists for specific tasks. We expect future challenges to broaden the horizon, for instance by combining data from radiology, pathology and tumor genetics, and to extract prognostic and predictive information independent of currently used grading schemes.
  •  
12.
  • Marini, Niccolo, et al. (author)
  • Unleashing the potential of digital pathology data by training computer-aided diagnosis models without human annotations
  • 2022
  • In: npj Digital Medicine. - : Nature Portfolio. - 2398-6352. ; 5:1
  • Journal article (peer-reviewed) abstract
    • The digitalization of clinical workflows and the increasing performance of deep learning algorithms are paving the way towards new methods for tackling cancer diagnosis. However, the availability of medical specialists to annotate digitized images and free-text diagnostic reports does not scale with the need for large datasets required to train robust computer-aided diagnosis methods that can target the high variability of clinical cases and data produced. This work proposes and evaluates an approach to eliminate the need for manual annotations to train computer-aided diagnosis tools in digital pathology. The approach includes two components: the automatic extraction of semantically meaningful concepts from diagnostic reports, which are used as weak labels, and the training of convolutional neural networks (CNNs) for histopathology diagnosis. The approach is trained (through 10-fold cross-validation) on 3769 clinical images and reports, provided by two hospitals, and tested on over 11,000 images from private and publicly available datasets. The CNN, trained with automatically generated labels, is compared with the same architecture trained with manual labels. Results show that combining text analysis and end-to-end deep neural networks allows building computer-aided diagnosis tools that reach solid performance (micro-accuracy = 0.908 at image-level) based only on existing clinical data without the need for manual annotations.
  •  
13.
  • Mercan, Caner, et al. (author)
  • Deep learning for fully-automated nuclear pleomorphism scoring in breast cancer
  • 2022
  • In: npj Breast Cancer. - : Nature Portfolio. - 2374-4677. ; 8:1
  • Journal article (peer-reviewed) abstract
    • To guide the choice of treatment, every new breast cancer is assessed for aggressiveness (i.e., graded) by an experienced histopathologist. Typically, this tumor grade consists of three components, one of which is the nuclear pleomorphism score (the extent of abnormalities in the overall appearance of tumor nuclei). The degree of nuclear pleomorphism is subjectively classified from 1 to 3, where a score of 1 most closely resembles epithelial cells of normal breast epithelium and 3 shows the greatest abnormalities. Establishing numerical criteria for grading nuclear pleomorphism is challenging, and inter-observer agreement is poor. Therefore, we studied the use of deep learning to develop fully automated nuclear pleomorphism scoring in breast cancer. The reference standard used for training the algorithm consisted of the collective knowledge of an international panel of 10 pathologists on a curated set of regions of interest covering the entire spectrum of tumor morphology in breast cancer. To fully exploit the information provided by the pathologists, a first-of-its-kind deep regression model was trained to yield a continuous scoring rather than limiting the pleomorphism scoring to the standard three-tiered system. Our approach preserves the continuum of nuclear pleomorphism without necessitating a large data set with explicit annotations of tumor nuclei. Once translated to the traditional system, our approach achieves top pathologist-level performance in multiple experiments on regions of interest and whole-slide images, compared to a panel of 10 and 4 pathologists, respectively.
  •  
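One simple way to translate a continuous pleomorphism score back to the traditional three-tiered system, as the entry above describes doing for evaluation, is thresholding. The cut-offs below are hypothetical and not taken from the paper:

    def to_three_tier(score, cuts=(1.5, 2.5)):
        """Map a continuous pleomorphism score onto grades 1-3 (illustrative cut-offs)."""
        if score < cuts[0]:
            return 1
        if score < cuts[1]:
            return 2
        return 3

    print([to_three_tier(s) for s in (1.2, 1.9, 2.7)])   # [1, 2, 3]
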
14.
  • Smit, Marloes A., et al. (author)
  • Deep learning based tumor–stroma ratio scoring in colon cancer correlates with microscopic assessment
  • 2023
  • In: Journal of Pathology Informatics. - : Elsevier B.V. - 2229-5089 .- 2153-3539. ; 14
  • Journal article (peer-reviewed) abstract
    • Background: The amount of stroma within the primary tumor is a prognostic parameter for colon cancer patients. This phenomenon can be assessed using the tumor–stroma ratio (TSR), which classifies tumors into stroma-low (≤50% stroma) and stroma-high (>50% stroma). Although the reproducibility for TSR determination is good, improvement might be expected from automation. The aim of this study was to investigate whether the scoring of the TSR in a semi- and fully automated method using deep learning algorithms is feasible. Methods: A series of 75 colon cancer slides were selected from a trial series of the UNITED study. For the standard determination of the TSR, 3 observers scored the histological slides. Next, the slides were digitized, color normalized, and the stroma percentages were scored using semi- and fully automated deep learning algorithms. Correlations were determined using intraclass correlation coefficients (ICCs) and Spearman rank correlations. Results: 37 (49%) cases were classified as stroma-low and 38 (51%) as stroma-high by visual estimation. A high level of concordance between the 3 observers was reached, with ICCs of 0.91, 0.89, and 0.94 (all P < .001). Between visual and semi-automated assessment the ICC was 0.78 (95% CI 0.23–0.91, P-value 0.005), with a Spearman correlation of 0.88 (P < .001). Spearman correlation coefficients above 0.70 (N=3) were observed for visual estimation versus the fully automated scoring procedures. Conclusion: Good correlations were observed between standard visual TSR determination and semi- and fully automated TSR scores. At this point, visual examination has the highest observer agreement, but semi-automated scoring could be helpful to support pathologists. © 2023 The Authors
  •  
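A small sketch of the statistics used in the entry above: Spearman rank correlation between visual and automated stroma percentages, plus the 50% cut-off that assigns slides to stroma-low or stroma-high. Data are invented and SciPy is assumed:

    from scipy.stats import spearmanr

    visual    = [30, 60, 45, 80, 20, 55, 70, 40]   # hypothetical stroma % per slide (observer)
    automated = [35, 58, 50, 75, 25, 60, 65, 38]   # hypothetical stroma % (deep learning)

    rho, p = spearmanr(visual, automated)
    groups = ["stroma-high" if s > 50 else "stroma-low" for s in automated]
    print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
    print(groups)
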
15.
  • Swiderska-Chadaj, Zaneta, et al. (author)
  • Learning to detect lymphocytes in immunohistochemistry with deep learning
  • 2019
  • In: Medical Image Analysis. - : Elsevier. - 1361-8415 .- 1361-8423. ; 58
  • Journal article (peer-reviewed) abstract
    • The immune system is of critical importance in the development of cancer. The evasion of destruction by the immune system is one of the emerging hallmarks of cancer. We have built a dataset of 171,166 manually annotated CD3+ and CD8+ cells, which we used to train deep learning algorithms for automatic detection of lymphocytes in histopathology images to better quantify immune response. Moreover, we investigate the effectiveness of four deep learning based methods when different subcompartments of the whole-slide image are considered: normal tissue areas, areas with immune cell clusters, and areas containing artifacts. We have compared the proposed methods in breast, colon and prostate cancer tissue slides collected from nine different medical centers. Finally, we report the results of an observer study on lymphocyte quantification, which involved four pathologists from different medical centers, and compare their performance with the automatic detection. The results give insights on the applicability of the proposed methods for clinical use. U-Net obtained the highest performance with an F1-score of 0.78 and the highest agreement with manual evaluation (kappa = 0.72), whereas the pathologists' average agreement with the reference standard was kappa = 0.64. The test set and the automatic evaluation procedure are publicly available at lyon19.grand-challenge.org. © 2019 Elsevier B.V. All rights reserved.
  •  
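The F1-score reported in the entry above is a detection metric: each predicted lymphocyte is matched to at most one reference cell within some distance before precision and recall are computed. The matching rule and distance below are assumptions for illustration, not the paper's evaluation code:

    import numpy as np

    def detection_f1(pred, ref, max_dist=8.0):
        """Greedy one-to-one matching of predicted to reference cell centers."""
        pred, ref = np.asarray(pred, float), np.asarray(ref, float)
        matched = np.zeros(len(ref), dtype=bool)
        tp = 0
        for p in pred:
            d = np.hypot(ref[:, 0] - p[0], ref[:, 1] - p[1])
            d[matched] = np.inf
            j = int(np.argmin(d))
            if d[j] <= max_dist:
                matched[j] = True
                tp += 1
        precision, recall = tp / len(pred), tp / len(ref)
        return 2 * precision * recall / (precision + recall) if tp else 0.0

    print(detection_f1([(10, 10), (40, 42), (90, 90)], [(11, 9), (41, 41), (200, 200)]))
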
16.
  • Tellez, David, et al. (author)
  • Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology
  • 2019
  • In: Medical Image Analysis. - : Elsevier. - 1361-8415 .- 1361-8423. ; 58
  • Journal article (peer-reviewed) abstract
    • Stain variation is a phenomenon observed when distinct pathology laboratories stain tissue slides that exhibit similar but not identical color appearance. Due to this color shift between laboratories, convolutional neural networks (CNNs) trained with images from one lab often underperform on unseen images from another lab. Several techniques have been proposed to reduce the generalization error, mainly grouped into two categories: stain color augmentation and stain color normalization. The former simulates a wide variety of realistic stain variations during training, producing stain-invariant CNNs. The latter aims to match training and test color distributions in order to reduce stain variation. For the first time, we compared some of these techniques and quantified their effect on CNN classification performance using a heterogeneous dataset of hematoxylin and eosin histopathology images from 4 organs and 9 pathology laboratories. Additionally, we propose a novel unsupervised method to perform stain color normalization using a neural network. Based on our experimental results, we provide practical guidelines on how to use stain color augmentation and stain color normalization in future computational pathology applications. © 2019 Elsevier B.V. All rights reserved.
  •  
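As a minimal, generic illustration of stain color augmentation (one of the two technique families compared in the entry above), the snippet below jitters the color of an H&E patch during training with torchvision's ColorJitter; the paper's specific transforms differ, and the jitter ranges here are arbitrary:

    import numpy as np
    from PIL import Image
    from torchvision import transforms

    augment = transforms.Compose([
        transforms.ColorJitter(brightness=0.15, contrast=0.15, saturation=0.2, hue=0.05),
        transforms.ToTensor(),
    ])

    # Stand-in for an H&E patch; every call yields a slightly different stain appearance.
    patch = Image.fromarray(np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8))
    x = augment(patch)   # tensor of shape (3, 256, 256)
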
17.
  • van der Laak, Jeroen, et al. (author)
  • Deep learning in histopathology : the path to the clinic
  • 2021
  • In: Nature Medicine. - : Nature Research. - 1078-8956 .- 1546-170X. ; 27:5, pp. 775-784
  • Research review (peer-reviewed) abstract
    • Recent advances in machine learning techniques have created opportunities to improve medical diagnostics, but implementing these advances in the clinic will not be without challenge. Machine learning techniques have great potential to improve medical diagnostics, offering ways to improve accuracy, reproducibility and speed, and to ease workloads for clinicians. In the field of histopathology, deep learning algorithms have been developed that perform similarly to trained pathologists for tasks such as tumor detection and grading. However, despite these promising results, very few algorithms have reached clinical implementation, challenging the balance between hope and hype for these new techniques. This Review provides an overview of the current state of the field, as well as describing the challenges that still need to be addressed before artificial intelligence in histopathology can achieve clinical value.
  •  
18.
  • van der Laak, Jeroen, et al. (author)
  • No pixel-level annotations needed
  • 2019
  • In: Nature Biomedical Engineering. - : Nature Publishing Group. - 2157-846X. ; 3:11, pp. 855-856
  • Journal article (other academic/artistic) abstract
    • A deep-learning model for cancer detection trained on a large number of scanned pathology slides and associated diagnosis labels enables model development without the need for pixel-level annotations.
  •  
19.
  • van Rijthoven, Mart, et al. (author)
  • Few-shot weakly supervised detection and retrieval in histopathology whole-slide images
  • 2021
  • In: Medical Imaging 2021 - Digital Pathology. - : SPIE. - 9781510640368
  • Conference paper (peer-reviewed) abstract
    • In this work, we propose a deep learning system for weakly supervised object detection in digital pathology whole slide images. We designed the system to be organ- and object-agnostic, and to be adapted on-the-fly to detect novel objects based on a few examples provided by the user. We tested our method on detection of healthy glands in colon biopsies and ductal carcinoma in situ (DCIS) of the breast, showing that (1) the same system is capable of adapting to detect requested objects with high accuracy, namely 87% accuracy assessed on 582 detections in colon tissue, and 93% accuracy assessed on 163 DCIS detections in breast tissue; (2) in some settings, the system is capable of retrieving similar cases with few to no false positives (i.e., precision equal to 1.00); (3) the performance of the system can benefit from previously detected objects with high confidence that can be reused in new searches in an iterative fashion.
  •  
20.
  • van Rijthoven, Mart, et al. (author)
  • HookNet: Multi-resolution convolutional neural networks for semantic segmentation in histopathology whole-slide images
  • 2021
  • In: Medical Image Analysis. - : Elsevier. - 1361-8415 .- 1361-8423. ; 68
  • Journal article (peer-reviewed) abstract
    • We propose HookNet, a semantic segmentation model for histopathology whole-slide images, which combines context and details via multiple branches of encoder-decoder convolutional neural networks. Concentric patches at multiple resolutions with different fields of view feed different branches of HookNet, and intermediate representations are combined via a hooking mechanism. We describe a framework to design and train HookNet for achieving high-resolution semantic segmentation and introduce constraints to guarantee pixel-wise alignment in feature maps during hooking. We show the advantages of using HookNet in two histopathology image segmentation tasks where tissue type prediction accuracy strongly depends on contextual information, namely (1) multi-class tissue segmentation in breast cancer and (2) segmentation of tertiary lymphoid structures and germinal centers in lung cancer. We show the superiority of HookNet when compared with single-resolution U-Net models working at different resolutions as well as with a recently published multi-resolution model for histopathology image segmentation. We have made HookNet publicly available by releasing the source code as well as in the form of web-based applications based on the grand-challenge.org platform. © 2020 The Authors. Published by Elsevier B.V.
  •  
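A toy two-branch sketch of the multi-resolution idea summarized in the entry above: a high-resolution target branch and a wide-field context branch whose feature map is upsampled, center-cropped, and concatenated ("hooked") into the target branch. This illustrates the concept only; it is not the published HookNet architecture, and all layer sizes are arbitrary.

    import torch
    import torch.nn as nn

    def center_crop(t, size):
        _, _, h, w = t.shape
        top, left = (h - size) // 2, (w - size) // 2
        return t[:, :, top:top + size, left:left + size]

    class TwoBranchSeg(nn.Module):
        def __init__(self, n_classes=4):
            super().__init__()
            self.target_enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
            self.context_enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                             nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
            self.head = nn.Conv2d(32, n_classes, 1)

        def forward(self, target_patch, context_patch):
            f_t = self.target_enc(target_patch)       # high-resolution details
            f_c = self.context_enc(context_patch)     # wide field of view at lower resolution
            f_c = nn.functional.interpolate(f_c, scale_factor=4, mode="bilinear",
                                            align_corners=False)
            f_c = center_crop(f_c, f_t.shape[-1])     # "hook" context onto the target branch
            return self.head(torch.cat([f_t, f_c], dim=1))

    model = TwoBranchSeg()
    out = model(torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128))
    print(out.shape)   # (1, 4, 128, 128)
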