SwePub record: swepub:oai:DiVA.org:liu-189163

Learning Representations with Contrastive Self-Supervised Learning for Histopathology Applications

Stacke, Karin, 1990- (author)
Linköping University, Media and Information Technology, Faculty of Science and Engineering, Sectra AB, Sweden
Unger, Jonas, 1978- (author)
Linköping University, Media and Information Technology, Faculty of Science and Engineering, Center for Medical Image Science and Visualization (CMIV)
Lundström, Claes, 1973- (author)
Linköping University, Media and Information Technology, Faculty of Science and Engineering, Center for Medical Image Science and Visualization (CMIV), Sectra AB, Sweden
Eilertsen, Gabriel, 1984- (author)
Linköping University, Media and Information Technology, Faculty of Science and Engineering, Center for Medical Image Science and Visualization (CMIV)
Melba (The Journal of Machine Learning for Biomedical Imaging), 2022
English.
In: The Journal of Machine Learning for Biomedical Imaging. - : Melba (The Journal of Machine Learning for Biomedical Imaging). - 2766-905X. ; 1
  • Journal article (other academic/artistic)
Abstract
Unsupervised learning has made substantial progress over the last few years, especially by means of contrastive self-supervised learning. The dominating dataset for benchmarking self-supervised learning has been ImageNet, for which recent methods are approaching the performance achieved by fully supervised training. The ImageNet dataset is, however, largely object-centric, and it is not yet clear what potential those methods have on widely different datasets and tasks that are not object-centric, such as in digital pathology. While self-supervised learning has started to be explored within this area with encouraging results, there is reason to look closer at how this setting differs from natural images and ImageNet. In this paper we make an in-depth analysis of contrastive learning for histopathology, pinpointing how the contrastive objective will behave differently due to the characteristics of histopathology data. Using SimCLR and H&E stained images as a representative setting for contrastive self-supervised learning in histopathology, we bring forward a number of considerations, such as view generation for the contrastive objective and hyper-parameter tuning. In a large battery of experiments, we analyze how the downstream performance in tissue classification will be affected by these considerations. The results point to how contrastive learning can reduce the annotation effort within digital pathology, but that the specific dataset characteristics need to be considered. To take full advantage of the contrastive learning objective, different calibrations of view generation and hyper-parameters are required. Our results pave the way for realizing the full potential of self-supervised learning for histopathology applications. Code and trained models are available at https://github.com/k-stacke/ssl-pathology.
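For context on the contrastive objective the abstract refers to: SimCLR trains by pulling the embeddings of two augmented views of the same image together while pushing apart views of different images, via the NT-Xent (normalized temperature-scaled cross-entropy) loss. Below is a minimal NumPy sketch of that loss for illustration only; it is not the paper's implementation (see the linked repository for the authors' code), and the function name and batch shapes are assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over a batch of paired view embeddings.

    z1, z2: arrays of shape (n, d), where row i of z1 and row i of z2
    are embeddings of two augmented views of the same image.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # The positive partner of index i is i+n (and of i+n is i).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy of each row against its positive index.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

With perfectly aligned views (identical embeddings) the positive pairs dominate the softmax and the loss is low; with unrelated views the loss is higher, which is what drives representation learning.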

Subject terms

ENGINEERING AND TECHNOLOGY  -- Medical Engineering -- Medical Image Processing (hsv//eng)

Publication and content type

vet (subject category)
art (subject category)
