SwePub
Search the SwePub database


Search results for "WFRF:(Matsoukas Christos)"


  • Results 1-10 of 12
1.
  • Fredin Haslum, Johan, et al. (author)
  • Bridging Generalization Gaps in High Content Imaging Through Online Self-Supervised Domain Adaptation
  • 2024
  • In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 2024, pp. 7723-7732
  • Conference paper (peer-reviewed) abstract
    • High Content Imaging (HCI) plays a vital role in modern drug discovery and development pipelines, facilitating various stages from hit identification to candidate drug characterization. Applying machine learning models to these datasets can prove challenging as they typically consist of multiple batches, affected by experimental variation, especially if different imaging equipment has been used. Moreover, as new data arrive, it is preferable that they are analyzed in an online fashion. To overcome this, we propose CODA, an online self-supervised domain adaptation approach. CODA divides the classifier’s role into a generic feature extractor and a task-specific model. We adapt the feature extractor’s weights to the new domain using cross-batch self-supervision while keeping the task-specific model unchanged. Our results demonstrate that this strategy significantly reduces the generalization gap, achieving up to a 300% improvement when applied to data from different labs utilizing different microscopes. CODA can be applied to new, unlabeled out-of-domain data sources of different sizes, from a single plate to multiple experimental batches.
  •  
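The split described in the CODA abstract above, a generic feature extractor adapted on new unlabeled data while the task-specific head stays frozen, can be sketched with a toy linear model. The linear extractor, the two-view consistency objective, and all shapes below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W_feat = rng.normal(size=(8, 4))   # adaptable feature extractor
W_head = rng.normal(size=(4, 3))   # frozen task-specific head

def features(x, W):
    return np.tanh(x @ W)

def adapt_step(x1, x2, W, lr=0.01):
    """One self-supervised update: pull features of two views together."""
    f1, f2 = features(x1, W), features(x2, W)
    diff = f1 - f2  # gradient of 0.5 * ||f1 - f2||^2 w.r.t. the features
    grad = x1.T @ (diff * (1 - f1**2)) - x2.T @ (diff * (1 - f2**2))
    return W - lr * grad

x = rng.normal(size=(16, 8))  # unlabeled batch from a new domain
view1 = x + 0.1 * rng.normal(size=x.shape)
view2 = x + 0.1 * rng.normal(size=x.shape)

before = np.mean((features(view1, W_feat) - features(view2, W_feat)) ** 2)
for _ in range(50):
    W_feat = adapt_step(view1, view2, W_feat)
after = np.mean((features(view1, W_feat) - features(view2, W_feat)) ** 2)

logits = features(x, W_feat) @ W_head  # head is reused unchanged
```

Only `W_feat` changes during adaptation; predictions still flow through the untouched `W_head`, mirroring the division of roles the abstract describes.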
2.
  • Fredin Haslum, Johan, et al. (author)
  • Metadata-guided Consistency Learning for High Content Images
  • 2023
  • In: PMLR Volume 227: Medical Imaging with Deep Learning, 10-12 July 2023, Nashville, TN, USA.
  • Conference paper (peer-reviewed) abstract
    • High content imaging assays can capture rich phenotypic response data for large sets of compound treatments, aiding in the characterization and discovery of novel drugs. However, extracting representative features from high content images that can capture subtle nuances in phenotypes remains challenging. The lack of high-quality labels makes it difficult to achieve satisfactory results with supervised deep learning. Self-supervised learning methods have shown great success on natural images, and offer an attractive alternative for microscopy images as well. However, we find that self-supervised learning techniques underperform on high content imaging assays. One challenge is the undesirable domain shifts present in the data, known as batch effects, which are caused by biological noise or uncontrolled experimental conditions. To this end, we introduce Cross-Domain Consistency Learning (CDCL), a self-supervised approach that is able to learn in the presence of batch effects. CDCL enforces the learning of biological similarities while disregarding undesirable batch-specific signals, leading to more useful and versatile representations. These features are organised according to their morphological changes and are more useful for downstream tasks, such as distinguishing treatments and mechanism of action.
  •  
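The pairing rule at the heart of the metadata-guided consistency idea above can be sketched directly: images of the same compound treatment taken from different experimental batches are treated as positive pairs, so features that agree on these pairs cannot rely on batch-specific signal. The metadata values and placeholder features below are invented for illustration; the real method trains a deep network with this kind of cross-domain objective.

```python
import numpy as np

def cross_batch_pairs(compounds, batches):
    """Indices (i, j) sharing a compound but coming from different batches."""
    pairs = []
    for i in range(len(compounds)):
        for j in range(i + 1, len(compounds)):
            if compounds[i] == compounds[j] and batches[i] != batches[j]:
                pairs.append((i, j))
    return pairs

compounds = ["dmso", "drugA", "drugA", "dmso", "drugB", "drugA"]
batches   = [1,      1,       2,       2,      1,       1]
pairs = cross_batch_pairs(compounds, batches)
print(pairs)  # [(0, 3), (1, 2), (2, 5)]

# Toy consistency loss: mean squared distance between paired features.
feats = np.arange(12, dtype=float).reshape(6, 2)
loss = np.mean([np.sum((feats[i] - feats[j]) ** 2) for i, j in pairs])
```

Note that the drugA replicates within the same batch (indices 1 and 5) are never paired, which is what removes the incentive to encode batch identity.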
3.
  • Fredin Haslum, Johan, et al. (author)
  • Metadata-guided Consistency Learning for High Content Images
  • 2023
  • In: Medical Imaging with Deep Learning 2023, MIDL 2023. ML Research Press, pp. 918-936
  • Conference paper (peer-reviewed) abstract
    • High content imaging assays can capture rich phenotypic response data for large sets of compound treatments, aiding in the characterization and discovery of novel drugs. However, extracting representative features from high content images that can capture subtle nuances in phenotypes remains challenging. The lack of high-quality labels makes it difficult to achieve satisfactory results with supervised deep learning. Self-supervised learning methods have shown great success on natural images, and offer an attractive alternative for microscopy images as well. However, we find that self-supervised learning techniques underperform on high content imaging assays. One challenge is the undesirable domain shifts present in the data, known as batch effects, which are caused by biological noise or uncontrolled experimental conditions. To this end, we introduce Cross-Domain Consistency Learning (CDCL), a self-supervised approach that is able to learn in the presence of batch effects. CDCL enforces the learning of biological similarities while disregarding undesirable batch-specific signals, leading to more useful and versatile representations. These features are organised according to their morphological changes and are more useful for downstream tasks, such as distinguishing treatments and mechanism of action.
  •  
4.
  • Huix, Joana Palés, et al. (author)
  • Are Natural Domain Foundation Models Useful for Medical Image Classification?
  • 2024
  • In: Proceedings of the 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024. IEEE, pp. 7619-7628
  • Conference paper (peer-reviewed) abstract
    • The deep learning field is converging towards the use of general foundation models that can be easily adapted for diverse tasks. While this paradigm shift has become common practice within the field of natural language processing, progress has been slower in computer vision. In this paper we attempt to address this issue by investigating the transferability of various state-of-the-art foundation models to medical image classification tasks. Specifically, we evaluate the performance of five foundation models, namely SAM, SEEM, DINOv2, BLIP, and OpenCLIP, across four well-established medical imaging datasets. We explore different training settings to fully harness the potential of these models. Our study shows mixed results. DINOv2 consistently outperforms the standard practice of ImageNet pretraining. However, other foundation models failed to consistently beat this established baseline, indicating limitations in their transferability to medical image classification tasks.
  •  
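The evaluation protocol the abstract above describes, several frozen pretrained backbones each scored with a lightweight classifier, might be sketched as below. The "backbones" here are stand-in random projections (the real checkpoints would be DINOv2, OpenCLIP, and so on), and nearest-centroid classification is an assumption chosen for brevity, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
W_a = rng.normal(size=(16, 8))
W_b = rng.normal(size=(16, 8))

# Frozen feature extractors standing in for pretrained foundation models.
backbones = {
    "backbone_a": lambda x: np.tanh(x @ W_a),
    "backbone_b": lambda x: np.maximum(0.0, x @ W_b),
}

def nearest_centroid_accuracy(feats, y):
    """Fit per-class centroids on frozen features and score accuracy."""
    classes = np.unique(y)
    centroids = np.stack([feats[y == c].mean(axis=0) for c in classes])
    d = ((feats[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return float(np.mean(classes[d.argmin(axis=1)] == y))

X = rng.normal(size=(100, 16))   # stand-in "medical images"
y = (X[:, 0] > 0).astype(int)    # stand-in binary diagnosis labels

results = {name: nearest_centroid_accuracy(f(X), y)
           for name, f in backbones.items()}
```

Repeating this loop over several datasets yields the kind of per-backbone comparison table the study reports.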
5.
  • Liu, Yue, et al. (author)
  • PatchDropout : Economizing Vision Transformers Using Patch Dropout
  • 2023
  • In: 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, pp. 3942-3951
  • Conference paper (peer-reviewed) abstract
    • Vision transformers have demonstrated the potential to outperform CNNs in a variety of vision tasks. But the computational and memory requirements of these models prohibit their use in many applications, especially those that depend on high-resolution images, such as medical image classification. Efforts to train ViTs more efficiently are overly complicated, necessitating architectural changes or intricate training schemes. In this work, we show that standard ViT models can be efficiently trained at high resolution by randomly dropping input image patches. This simple approach, PatchDropout, reduces FLOPs and memory by at least 50% in standard natural image datasets such as ImageNet, and those savings only increase with image size. On CSAW, a high-resolution medical dataset, we observe a 5× savings in computation and memory using PatchDropout, along with a boost in performance. For practitioners with a fixed computational or memory budget, PatchDropout makes it possible to choose image resolution, hyperparameters, or model size to get the most performance out of their model.
  •  
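The mechanism in the PatchDropout abstract above is simple enough to sketch directly: before the transformer encoder, only a random subset of patch tokens is kept, so compute and memory shrink roughly with the keep ratio. Shapes and names below are illustrative, not the paper's code.

```python
import numpy as np

def patch_dropout(patch_tokens, keep_ratio=0.5, rng=None):
    """Randomly keep a fraction of patch tokens (training-time only)."""
    rng = np.random.default_rng() if rng is None else rng
    n_patches = patch_tokens.shape[0]
    n_keep = max(1, int(n_patches * keep_ratio))
    keep_idx = rng.choice(n_patches, size=n_keep, replace=False)
    return patch_tokens[np.sort(keep_idx)]  # preserve original patch order

# 196 patches (a 14x14 grid) of 768-dim tokens, as for a ViT-B/16 at 224 px.
tokens = np.zeros((196, 768))
kept = patch_dropout(tokens, keep_ratio=0.5, rng=np.random.default_rng(0))
print(kept.shape)  # (98, 768)
```

Because self-attention cost grows quadratically with sequence length, halving the tokens cuts attention FLOPs by about 4× and the token-wise linear layers by 2×, consistent with the "at least 50%" savings quoted above.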
6.
  • Matsoukas, Christos, et al. (author)
  • Adding seemingly uninformative labels helps in low data regimes
  • 2020
  • In: 37th International Conference on Machine Learning, ICML 2020. International Machine Learning Society (IMLS), pp. 6731-6740
  • Conference paper (peer-reviewed) abstract
    • Evidence suggests that networks trained on large datasets generalize well not solely because of the numerous training examples, but also because of class diversity, which encourages learning of enriched features. This raises the question of whether this remains true when data is scarce: is there an advantage to learning with additional labels in low-data regimes? In this work, we consider a task that requires difficult-to-obtain expert annotations: tumor segmentation in mammography images. We show that, in low-data settings, performance can be improved by complementing the expert annotations with seemingly uninformative labels from non-expert annotators, turning the task into a multi-class problem. We reveal that these gains increase when less expert data is available, and uncover several interesting properties through further studies. We demonstrate our findings on CSAW-S, a new dataset that we introduce here, and confirm them on two public datasets.
  •  
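The multi-class construction the abstract above describes, expert tumor annotations complemented with cheap labels for other structures, might look like the toy sketch below. The class names and the rule that the expert mask takes priority on overlaps are assumptions for illustration.

```python
import numpy as np

BACKGROUND, TUMOR, PECTORAL, TEXT = 0, 1, 2, 3

def build_multiclass_target(tumor_mask, complementary_masks):
    """Merge binary masks into one label map; the expert mask wins overlaps."""
    target = np.full(tumor_mask.shape, BACKGROUND, dtype=int)
    for cls, mask in complementary_masks.items():
        target[mask] = cls
    target[tumor_mask] = TUMOR  # expert annotation takes priority
    return target

# Tiny 4x4 "mammogram": an expert tumor mask plus two non-expert masks.
tumor = np.zeros((4, 4), dtype=bool)
tumor[1:3, 1:3] = True
pectoral = np.zeros((4, 4), dtype=bool)
pectoral[:, 0] = True
text = np.zeros((4, 4), dtype=bool)
text[0, 2:] = True

y = build_multiclass_target(tumor, {PECTORAL: pectoral, TEXT: text})
```

Training a segmentation model against `y` instead of the binary tumor mask is what turns the task into the multi-class problem described above.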
7.
  • Matsoukas, Christos (author)
  • Artificial Intelligence for Medical Image Analysis with Limited Data
  • 2024
  • Doctoral thesis (other academic/artistic) abstract
    • Artificial intelligence (AI) is progressively influencing business, science, and society, leading to major socioeconomic changes. However, its application in real-world problems varies significantly across different sectors. One of the primary challenges limiting the widespread adoption of AI in certain areas is data availability. Medical image analysis is one of these domains, where the process of gathering data and labels is often challenging or even infeasible due to legal and privacy concerns, or due to the specific characteristics of diseases. Logistical obstacles, expensive diagnostic methods and the necessity for invasive procedures add to the difficulty of data collection. Even when ample data exists, the substantial cost and logistical hurdles in acquiring expert annotations pose considerable challenges. Thus, there is a pressing need for the development of AI models that can operate in low-data settings. In this thesis, we explore methods that improve the generalization and robustness of models when data availability is limited. We highlight the importance of model architecture and initialization, considering their associated assumptions and biases, to determine their effectiveness in such settings. We find that models with fewer built-in assumptions in their architecture need to be initialized with pre-trained weights via transfer learning. This prompts us to explore how well transfer learning performs when models are initially trained in the natural domains, where data is abundant, before being used for medical image analysis where data is limited. We identify key factors responsible for transfer learning’s efficacy, and explore its relationship with data size, model architecture, and the distance between the target domain and the one used for pretraining. In cases where expert labels are scarce, we introduce the concept of complementary labels as a means to expand the labeling set. By providing information about other objects in the image, these labels help develop richer representations, leading to improved performance in low-data regimes. We showcase the utility of these methods by streamlining the histopathology-based assessment of chronic kidney disease in an industrial pharmaceutical setting, reducing the turnaround time of study evaluations by 97%. Our results demonstrate that AI models developed for low data regimes are capable of delivering industrial-level performance, proving their practical use in drug discovery and healthcare.
  •  
8.
  • Matsoukas, Christos (author)
  • Pretrained ViTs yield versatile representations for medical images
  • Other publication (other academic/artistic) abstract
    • Convolutional Neural Networks (CNNs) have reigned for a decade as the de facto approach to automated medical image diagnosis, pushing the state-of-the-art in classification, detection and segmentation tasks. Over recent years, vision transformers (ViTs) have appeared as a competitive alternative to CNNs, yielding impressive levels of performance in the natural image domain, while possessing several interesting properties that could prove beneficial for medical imaging tasks. In this work, we explore the benefits and drawbacks of transformer-based models for medical image classification. We conduct a series of experiments on several standard 2D medical image benchmark datasets and tasks. Our findings show that, while CNNs perform better if trained from scratch, off-the-shelf vision transformers can perform on par with CNNs when pretrained on ImageNet, both in a supervised and self-supervised setting, rendering them a viable alternative to CNNs.
  •  
9.
  • Matsoukas, Christos (author)
  • Streamlining the Histopathological Workflow in Chronic Kidney Disease with AI
  • Other publication (other academic/artistic) abstract
    • Pathology assessment and scoring are essential steps for the evaluation of tissue changes in clinical and preclinical studies of chronic kidney disease, but are often costly and inefficient. Moreover, inconsistencies in manual scoring make comparisons across different studies difficult. In this work, we identify areas where AI assistance can streamline and improve the pathology workflow and demonstrate the efficiency of our process in an industrial setting. We show that repetitive and time-consuming tasks such as identifying and annotating glomeruli can be fully automated using AI without loss of quality. By providing a streamlined interface that facilitates rapid pathologist scoring, additional savings can be achieved, reducing the time spent per slide by 92%. We also present a fully automated scoring process, where the pathologist’s role is limited to general overview and quality control, which further increases time savings up to 98.7% compared to traditional manual scoring. Finally, we show that AI models trained using our method provide highly accurate scoring of studies they were not trained on in a routine discovery pipeline (R value of 0.964 between the AI predictions and the pathologists’ scores). The models can also effectively translate from mouse models to human biopsies, even without pre-training on human tissue.
  •  
10.
  • Matsoukas, Christos, et al. (author)
  • What Makes Transfer Learning Work for Medical Images : Feature Reuse & Other Factors
  • 2022
  • In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp. 9215-9224
  • Conference paper (peer-reviewed) abstract
    • Transfer learning is a standard technique to transfer knowledge from one domain to another. For applications in medical imaging, transfer from ImageNet has become the de facto approach, despite differences in the tasks and image characteristics between the domains. However, it is unclear what factors determine whether, and to what extent, transfer learning to the medical domain is useful. The longstanding assumption that features from the source domain get reused has recently been called into question. Through a series of experiments on several medical image benchmark datasets, we explore the relationship between transfer learning, data size, the capacity and inductive bias of the model, as well as the distance between the source and target domain. Our findings suggest that transfer learning is beneficial in most cases, and we characterize the important role feature reuse plays in its success.
  •  
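The feature-reuse extreme of the transfer setup the abstract above studies can be illustrated directly: freeze a "pretrained" extractor and train only a new linear head on the target task. The synthetic data, the random pretrained projection, and the softmax head are all placeholders, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_head(feats, y, n_classes, lr=0.1, steps=200):
    """Fit a linear softmax head on frozen features by gradient descent."""
    W = np.zeros((feats.shape[1], n_classes))
    onehot = np.eye(n_classes)[y]
    for _ in range(steps):
        logits = feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * feats.T @ (p - onehot) / len(y)
    return W

W_pre = rng.normal(size=(10, 6))         # frozen "pretrained" extractor
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy 2-class target task

feats = np.tanh(X @ W_pre)               # features are reused, never updated
W_head = train_head(feats, y, n_classes=2)
acc = float(np.mean((feats @ W_head).argmax(axis=1) == y))
```

In this regime all benefit comes from reusing the fixed features; comparing it against full fine-tuning and random initialization is the kind of contrast through which the paper characterizes the role of feature reuse.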