SwePub
Search the SwePub database

  Extended search

Result list for search "WFRF:(Smith Kevin Associate Professor 1975 )"

Search: WFRF:(Smith Kevin Associate Professor 1975 )

  • Results 1-4 of 4
1.
  • Liu, Yue (author)
  • Breast cancer risk assessment and detection in mammograms with artificial intelligence
  • 2024
  • Doctoral thesis (other academic/artistic) abstract
    • Breast cancer, the most common type of cancer among women worldwide, necessitates reliable early detection methods. Although mammography serves as a cost-effective screening technique, its limitations in sensitivity emphasize the need for more advanced detection approaches. Previous studies have relied on breast density, extracted directly from the mammograms, as a primary metric for cancer risk assessment, given its correlation with increased cancer risk and the masking potential of cancer. However, such a singular metric overlooks image details and spatial relationships critical for cancer diagnosis. To address these limitations, this thesis integrates artificial intelligence (AI) models into mammography, with the goal of enhancing both cancer detection and risk estimation. In this thesis, we aim to establish a new benchmark for breast cancer prediction using neural networks. Utilizing the Cohort of Screen-Aged Women (CSAW) dataset, which includes mammography images from 2008 to 2015 in Stockholm, Sweden, we develop three AI models to predict inherent risk, cancer signs, and masking potential of cancer. Combined, these models can effectively identify women in need of supplemental screening, even after a clean exam, paving the way for better early detection of cancer. Individually, important progress has been made on each of these component tasks as well. The risk prediction model, developed and tested on a large population-based cohort, establishes a new state-of-the-art at identifying women at elevated risk of developing breast cancer, outperforming traditional density measures. The risk model is carefully designed to avoid conflating image patterns related to early cancer signs with those related to long-term risk. We also propose a method that allows vision transformers to be efficiently trained on and make use of high-resolution images, an essential property for models analyzing mammograms.
We also develop an approach to predict the masking potential in a mammogram – the likelihood that a cancer may be obscured by neighboring tissue and consequently misdiagnosed. High masking potential can complicate early detection and delay timely interventions. Along with the model, we curate and release a new public dataset which can help speed up progress on this important task. Through our research, we demonstrate the transformative potential of AI in mammographic analysis. By capturing subtle image cues, AI models consistently exceed the traditional baselines. These advancements not only highlight both the individual and combined advantages of the models, but also signal a transition to an era of AI-enhanced personalized healthcare, promising more efficient resource allocation and better patient outcomes.
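The abstract mentions a method that lets vision transformers train efficiently on high-resolution images. The thesis's actual method is not specified here; the sketch below illustrates one common family of tricks under that heading: tokenizing an image into patches and randomly subsampling tokens during training, so self-attention cost shrinks quadratically with the kept fraction. All names and numbers are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def extract_patches(image, patch=16):
    """Split a 2-D image into non-overlapping (patch x patch) tokens."""
    H, W = image.shape
    H2, W2 = H - H % patch, W - W % patch  # drop ragged borders
    return (image[:H2, :W2]
            .reshape(H2 // patch, patch, W2 // patch, patch)
            .transpose(0, 2, 1, 3)
            .reshape(-1, patch * patch))

def subsample_tokens(tokens, keep_frac=0.25, rng=None):
    """Keep a random subset of tokens; attention over n tokens costs
    O(n^2), so keeping 25% makes it roughly 16x cheaper (hypothetical
    training-time trick, not the thesis's specific method)."""
    rng = rng or np.random.default_rng(0)
    n_keep = max(1, int(len(tokens) * keep_frac))
    idx = rng.choice(len(tokens), size=n_keep, replace=False)
    return tokens[idx], idx

image = np.zeros((512, 384))          # stand-in for a high-res mammogram
tokens = extract_patches(image)       # 32 * 24 = 768 tokens of 256 pixels
kept, idx = subsample_tokens(tokens)  # 192 tokens survive
print(tokens.shape, kept.shape)       # (768, 256) (192, 256)
```

At full mammogram resolution (thousands of pixels per side) the token count, and hence the quadratic attention cost, is what makes such subsampling or an equivalent efficiency mechanism necessary.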
  •  
2.
  • Baldassarre, Federico (author)
  • Structured Representations for Explainable Deep Learning
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • Deep learning has revolutionized scientific research and is being used to take decisions in increasingly complex scenarios. With growing power comes a growing demand for transparency and interpretability. The field of Explainable AI aims to provide explanations for the predictions of AI systems. The state of the art of AI explainability, however, is far from satisfactory. For example, in Computer Vision, the most prominent post-hoc explanation methods produce pixel-wise heatmaps over the input domain, which are meant to visualize the importance of individual pixels of an image or video. We argue that such dense attribution maps are poorly interpretable to non-expert users because of the domain in which explanations are formed - we may recognize shapes in a heatmap but they are just blobs of pixels. In fact, the input domain is closer to the raw data of digital cameras than to the interpretable structures that humans use to communicate, e.g. objects or concepts. In this thesis, we propose to move beyond dense feature attributions by adopting structured internal representations as a more interpretable explanation domain. Conceptually, our approach splits a Deep Learning model in two: the perception step that takes as input dense representations and the reasoning step that learns to perform the task at hand. At the interface between the two are structured representations that correspond to well-defined objects, entities, and concepts. These representations serve as the interpretable domain for explaining the predictions of the model, allowing us to move towards more meaningful and informative explanations. The proposed approach introduces several challenges, such as how to obtain structured representations, how to use them for downstream tasks, and how to evaluate the resulting explanations. The works included in this thesis address these questions, validating the approach and providing concrete contributions to the field. 
For the perception step, we investigate how to obtain structured representations from dense representations, whether by manually designing them using domain knowledge or by learning them from data without supervision. For the reasoning step, we investigate how to use structured representations for downstream tasks, from Biology to Computer Vision, and how to evaluate the learned representations. For the explanation step, we investigate how to explain the predictions of models that operate in a structured domain, and how to evaluate the resulting explanations. Overall, we hope that this work inspires further research in Explainable AI and helps bridge the gap between high-performing Deep Learning models and the need for transparency and interpretability in real-world applications.
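The perception/reasoning split described above can be made concrete with a toy example. Assume (hypothetically) that perception has already produced one named feature vector per detected object; the reasoning step scores them, and the explanation is formed over objects rather than pixels. The object names, features, and linear scorer below are all illustrative stand-ins, not the thesis's models.

```python
import numpy as np

# Hypothetical structured representation: one feature vector per detected
# object, with human-readable names (the interpretable explanation domain).
object_names = ["dog", "ball", "grass"]
object_feats = np.array([[0.90, 0.10],
                         [0.20, 0.80],
                         [0.05, 0.05]])

# Reasoning step: a linear scorer over object features (a stand-in for a
# learned prediction head; weights are illustrative).
w = np.array([1.5, -0.5])
contrib = object_feats @ w  # one scalar contribution per object
score = contrib.sum()       # the model's prediction

# Object-level explanation: rank whole objects by their contribution,
# instead of producing a dense pixel heatmap over the input.
ranking = [object_names[i] for i in np.argsort(-contrib)]
print(dict(zip(object_names, contrib.round(2))), ranking)
```

Because each contribution is attached to a named object, the explanation ("the prediction is mostly driven by the dog") is stated in the same vocabulary humans use, which is the point of moving the explanation domain from pixels to structured representations.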
  •  
3.
  • Fredin Haslum, Johan (author)
  • Machine Learning Methods for Image-based Phenotypic Profiling in Early Drug Discovery
  • 2024
  • Doctoral thesis (other academic/artistic) abstract
    • In the search for new therapeutic treatments, strategies to make the drug discovery process more efficient are crucial. Image-based phenotypic profiling, with its millions of pictures of fluorescent stained cells, is a rich and effective means to capture the morphological effects of potential treatments on living systems. Within this complex data await biological insights and new therapeutic opportunities – but computational tools are needed to unlock them. This thesis examines the role of machine learning in improving the utility and analysis of phenotypic screening data. It focuses on challenges specific to this domain, such as the lack of reliable labels that are essential for supervised learning, as well as confounding factors present in the data that are often unavoidable due to experimental variability. We explore transfer learning to boost model generalization and robustness, analyzing the impact of domain distance, initialization, dataset size, and architecture on the effectiveness of applying natural domain pre-trained weights to biomedical contexts. Building upon this, we delve into self-supervised pretraining for phenotypic image data, but find its direct application is inadequate in this context as it fails to differentiate between various biological effects. To overcome this, we develop new self-supervised learning strategies designed to enable the network to disregard confounding experimental noise, thus enhancing its ability to discern the impacts of various treatments. We further develop a technique that allows a model trained for phenotypic profiling to be adapted to new, unseen data without any labels or supervised learning, so that a general phenotypic profiling model can be readily adapted to data from different sites.
Beyond our technical contributions, we also show that bioactive compounds identified using the approaches outlined in this thesis have been subsequently confirmed in biological assays through replication in an industrial setting. Our findings indicate that while phenotypic data and biomedical imaging present complex challenges, machine learning techniques can play a pivotal role in making early drug discovery more efficient and effective.
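The confounding experimental noise mentioned above (often called batch effects in phenotypic profiling) can be illustrated with a much simpler correction than the self-supervised strategies the thesis develops: subtracting each experimental batch's mean control profile so that treatment profiles become comparable across batches. This is a minimal sketch of the problem being solved, assuming hypothetical batch IDs and control-well flags, not the thesis's method.

```python
import numpy as np

def center_on_controls(feats, batch_ids, is_control):
    """Remove per-batch offsets by subtracting each batch's mean control
    profile -- a simple classical baseline for handling batch effects,
    which confounder-aware self-supervised training aims to beat."""
    feats = feats.astype(float).copy()
    for b in np.unique(batch_ids):
        mask = batch_ids == b
        feats[mask] -= feats[mask & is_control].mean(axis=0)
    return feats

# Two batches whose raw features are shifted by different offsets.
feats = np.array([[1.0], [2.0],    # batch 0: control, treated
                  [5.0], [6.0]])   # batch 1: control, treated
batch_ids = np.array([0, 0, 1, 1])
is_control = np.array([True, False, True, False])

out = center_on_controls(feats, batch_ids, is_control)
print(out.ravel())  # [0. 1. 0. 1.] -- treated wells now match across batches
```

Before correction the two treated wells (2.0 and 6.0) look unrelated; after centering they produce identical profiles, showing why removing experimental variability is a precondition for comparing treatment effects across sites and runs.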
  •  
4.
  • Matsoukas, Christos (author)
  • Artificial Intelligence for Medical Image Analysis with Limited Data
  • 2024
  • Doctoral thesis (other academic/artistic) abstract
    • Artificial intelligence (AI) is progressively influencing business, science, and society, leading to major socioeconomic changes. However, its application in real-world problems varies significantly across different sectors. One of the primary challenges limiting the widespread adoption of AI in certain areas is data availability. Medical image analysis is one of these domains, where the process of gathering data and labels is often challenging or even infeasible due to legal and privacy concerns, or due to the specific characteristics of diseases. Logistical obstacles, expensive diagnostic methods and the necessity for invasive procedures add to the difficulty of data collection. Even when ample data exists, the substantial cost and logistical hurdles in acquiring expert annotations pose considerable challenges. Thus, there is a pressing need for the development of AI models that can operate in low-data settings. In this thesis, we explore methods that improve the generalization and robustness of models when data availability is limited. We highlight the importance of model architecture and initialization, considering their associated assumptions and biases, to determine their effectiveness in such settings. We find that models with fewer built-in assumptions in their architecture need to be initialized with pre-trained weights, typically via transfer learning. This prompts us to explore how well transfer learning performs when models are initially trained in the natural domains, where data is abundant, before being used for medical image analysis where data is limited. We identify key factors responsible for transfer learning’s efficacy, and explore its relationship with data size, model architecture, and the distance between the target domain and the one used for pretraining. In cases where expert labels are scarce, we introduce the concept of complementary labels as the means to expand the labeling set.
By providing information about other objects in the image, these labels help develop richer representations, leading to improved performance in low-data regimes. We showcase the utility of these methods by streamlining the histopathology-based assessment of chronic kidney disease in an industrial pharmaceutical setting, reducing the turnaround time of study evaluations by 97%. Our results demonstrate that AI models developed for low data regimes are capable of delivering industrial-level performance, proving their practical use in drug discovery and healthcare.
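The transfer-learning setup the abstract describes (pretrain on an abundant natural domain, then adapt to a small medical dataset) can be sketched in a few lines. Here a fixed random projection stands in for a frozen pretrained backbone, and a least-squares fit of the final layer stands in for fine-tuning on a few dozen labelled examples; every component is an illustrative assumption, not the thesis's architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrained_features(x):
    """Stand-in for a frozen backbone pretrained on natural images:
    a fixed random projection with a ReLU (illustrative only)."""
    W = np.random.default_rng(42).normal(size=(x.shape[1], 32))
    return np.maximum(x @ W, 0)

# Tiny labelled "medical" dataset: 40 examples, the low-data regime.
x = rng.normal(size=(40, 8))
y = (x[:, 0] + 0.1 * rng.normal(size=40) > 0).astype(float)

# Adapt only the final linear layer on top of the frozen features
# (least squares as a stand-in for supervised fine-tuning of the head).
f = pretrained_features(x)
w, *_ = np.linalg.lstsq(f, y - 0.5, rcond=None)
train_acc = float(((f @ w > 0) == (y > 0.5)).mean())
print(train_acc)
```

The point of the sketch is the division of labor: the backbone's weights come from a data-rich domain and are reused as-is, so only a small head must be learned from the scarce medical labels, which is what makes models with few built-in architectural assumptions viable in low-data settings.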
  •  
