SwePub
Search the SwePub database

  Advanced search

Results list for the search "WFRF:(Smith Kevin 1975)"

Search: WFRF:(Smith Kevin 1975)

  • Results 1-42 of 42
1.
  • Wuttke, Matthias, et al. (author)
  • A catalog of genetic loci associated with kidney function from analyses of a million individuals
  • 2019
  • In: Nature Genetics. - : Nature Publishing Group. - 1061-4036 .- 1546-1718. ; 51:6, pp. 957-972
  • Journal article (peer-reviewed) abstract
    • Chronic kidney disease (CKD) is responsible for a public health burden with multi-systemic complications. Through transancestry meta-analysis of genome-wide association studies of estimated glomerular filtration rate (eGFR) and independent replication (n = 1,046,070), we identified 264 associated loci (166 new). Of these, 147 were likely to be relevant for kidney function on the basis of associations with the alternative kidney function marker blood urea nitrogen (n = 416,178). Pathway and enrichment analyses, including mouse models with renal phenotypes, support the kidney as the main target organ. A genetic risk score for lower eGFR was associated with clinically diagnosed CKD in 452,264 independent individuals. Colocalization analyses of associations with eGFR among 783,978 European-ancestry individuals and gene expression across 46 human tissues, including tubulo-interstitial and glomerular kidney compartments, identified 17 genes differentially expressed in kidney. Fine-mapping highlighted missense driver variants in 11 genes and kidney-specific regulatory variants. These results provide a comprehensive priority list of molecular targets for translational research.
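A genetic risk score like the one used in this study is, at its simplest, a weighted sum of risk-allele dosages. A minimal sketch, where the variant names and effect sizes are illustrative placeholders rather than values from the paper:

```python
# Minimal genetic risk score (GRS): weighted sum of risk-allele dosages.
# Variant IDs and effect sizes below are hypothetical, not study values.
def genetic_risk_score(dosages, weights):
    """dosages: dict variant -> 0/1/2 copies of the risk allele;
    weights: dict variant -> per-allele effect size (e.g., a GWAS beta)."""
    return sum(weights[v] * dosages.get(v, 0) for v in weights)

weights = {"rs1": 0.021, "rs2": 0.015, "rs3": 0.008}   # hypothetical betas
person = {"rs1": 2, "rs2": 0, "rs3": 1}                # allele counts
print(round(genetic_risk_score(person, weights), 3))   # 0.05
```

Individuals are then ranked or stratified by this score and the score is tested for association with the clinical outcome (here, diagnosed CKD).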
2.
  • Yala, Adam, et al. (author)
  • Toward robust mammography-based models for breast cancer risk
  • 2021
  • In: Science Translational Medicine. - : American Association for the Advancement of Science. - 1946-6234 .- 1946-6242. ; 13:578
  • Journal article (peer-reviewed) abstract
    • Improved breast cancer risk models enable targeted screening strategies that achieve earlier detection and less screening harm than existing guidelines. To bring deep learning risk models to clinical practice, we need to further refine their accuracy, validate them across diverse populations, and demonstrate their potential to improve clinical workflows. We developed Mirai, a mammography-based deep learning model designed to predict risk at multiple timepoints, leverage potentially missing risk factor information, and produce predictions that are consistent across mammography machines. Mirai was trained on a large dataset from Massachusetts General Hospital (MGH) in the United States and tested on held-out test sets from MGH, Karolinska University Hospital in Sweden, and Chang Gung Memorial Hospital (CGMH) in Taiwan, obtaining C-indices of 0.76 (95% confidence interval, 0.74 to 0.80), 0.81 (0.79 to 0.82), and 0.79 (0.79 to 0.83), respectively. Mirai obtained significantly higher 5-year ROC AUCs than the Tyrer-Cuzick model (P < 0.001) and prior deep learning models Hybrid DL (P < 0.001) and Image-Only DL (P < 0.001), trained on the same dataset. Mirai more accurately identified high-risk patients than prior methods across all datasets. On the MGH test set, 41.5% (34.4 to 48.5) of patients who would develop cancer within 5 years were identified as high risk, compared with 36.1% (29.1 to 42.9) by Hybrid DL (P = 0.02) and 22.9% (15.9 to 29.6) by the Tyrer-Cuzick model (P < 0.001).
3.
  • Augustsson, Andreas, 1975- (author)
  • Soft X-ray Emission Spectroscopy of Liquids and Lithium Battery Materials
  • 2004
  • Doctoral thesis (other academic/artistic) abstract
    • Lithium ion insertion into electrode materials is commonly used in rechargeable battery technology. The insertion implies changes in both the crystal structure and the electronic structure of the electrode material. Side-reactions may occur on the surface of the electrode, which is exposed to the electrolyte and forms a solid electrolyte interface (SEI). Understanding these processes is of great importance for improving battery performance. The chemical and physical properties of water and alcohols are complicated by the presence of strong hydrogen bonding. Various experimental techniques have been used to study geometrical structures, and different models have been proposed to describe how these liquids are geometrically organized by hydrogen bonding. However, very little is known about the electronic structure of these liquids, mainly due to the lack of suitable experimental tools. This thesis addresses the electronic structure of liquids and lithium battery materials using resonant inelastic X-ray scattering (RIXS) at high-brightness synchrotron radiation sources. The electronic structure of battery electrodes has been probed before and after lithiation, studying the doping of electrons into the host material. The chemical composition of the SEI on cycled graphite electrodes was determined. The local electronic structure of water, methanol, and mixtures of the two has been examined using a special liquid cell. Results from the study of liquid water showed a strong influence on the 3a1 molecular orbital and orbital mixing between molecules upon hydrogen bonding. The study of methanol showed the existence of ring and chain formations in the liquid phase, with the dominant structures formed of 6 and 8 molecules. Upon mixing of the two liquids, segregation at the molecular level was found, along with the formation of new structures, which could explain the unexpectedly low increase in entropy.
4.
  • Baldassarre, Federico, et al. (author)
  • Explanation-Based Weakly-Supervised Learning of Visual Relations with Graph Networks
  • 2020
  • In: Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXVIII. - Cham : Springer Nature. ; pp. 612-630
  • Conference paper (peer-reviewed) abstract
    • Visual relationship detection is fundamental for holistic image understanding. However, the localization and classification of (subject, predicate, object) triplets remain challenging tasks, due to the combinatorial explosion of possible relationships, their long-tailed distribution in natural images, and an expensive annotation process. This paper introduces a novel weakly-supervised method for visual relationship detection that relies on minimal image-level predicate labels. A graph neural network is trained to classify predicates in images from a graph representation of detected objects, implicitly encoding an inductive bias for pairwise relations. We then frame relationship detection as the explanation of such a predicate classifier, i.e. we obtain a complete relation by recovering the subject and object of a predicted predicate. We present results comparable to recent fully- and weakly-supervised methods on three diverse and challenging datasets: HICO-DET for human-object interaction, Visual Relationship Detection for generic object-to-object relations, and UnRel for unusual triplets; demonstrating robustness to non-comprehensive annotations and good few-shot generalization.
5.
  • Baldassarre, Federico (author)
  • Structured Representations for Explainable Deep Learning
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • Deep learning has revolutionized scientific research and is being used to take decisions in increasingly complex scenarios. With growing power comes a growing demand for transparency and interpretability. The field of Explainable AI aims to provide explanations for the predictions of AI systems. The state of the art of AI explainability, however, is far from satisfactory. For example, in Computer Vision, the most prominent post-hoc explanation methods produce pixel-wise heatmaps over the input domain, which are meant to visualize the importance of individual pixels of an image or video. We argue that such dense attribution maps are poorly interpretable to non-expert users because of the domain in which explanations are formed - we may recognize shapes in a heatmap but they are just blobs of pixels. In fact, the input domain is closer to the raw data of digital cameras than to the interpretable structures that humans use to communicate, e.g. objects or concepts. In this thesis, we propose to move beyond dense feature attributions by adopting structured internal representations as a more interpretable explanation domain. Conceptually, our approach splits a Deep Learning model in two: the perception step that takes as input dense representations and the reasoning step that learns to perform the task at hand. At the interface between the two are structured representations that correspond to well-defined objects, entities, and concepts. These representations serve as the interpretable domain for explaining the predictions of the model, allowing us to move towards more meaningful and informative explanations. The proposed approach introduces several challenges, such as how to obtain structured representations, how to use them for downstream tasks, and how to evaluate the resulting explanations. The works included in this thesis address these questions, validating the approach and providing concrete contributions to the field. 
For the perception step, we investigate how to obtain structured representations from dense representations, whether by manually designing them using domain knowledge or by learning them from data without supervision. For the reasoning step, we investigate how to use structured representations for downstream tasks, from Biology to Computer Vision, and how to evaluate the learned representations. For the explanation step, we investigate how to explain the predictions of models that operate in a structured domain, and how to evaluate the resulting explanations. Overall, we hope that this work inspires further research in Explainable AI and helps bridge the gap between high-performing Deep Learning models and the need for transparency and interpretability in real-world applications.
6.
  • Berndt, Sonja I., et al. (author)
  • Genome-wide meta-analysis identifies 11 new loci for anthropometric traits and provides insights into genetic architecture
  • 2013
  • In: Nature Genetics. - : Springer Science and Business Media LLC. - 1061-4036 .- 1546-1718. ; 45:5, pp. 501-U69
  • Journal article (peer-reviewed) abstract
    • Approaches exploiting trait distribution extremes may be used to identify loci associated with common traits, but it is unknown whether these loci are generalizable to the broader population. In a genome-wide search for loci associated with the upper versus the lower 5th percentiles of body mass index, height and waist-to-hip ratio, as well as clinical classes of obesity, including up to 263,407 individuals of European ancestry, we identified 4 new loci (IGFBP4, H6PD, RSRC1 and PPP2R2A) influencing height detected in the distribution tails and 7 new loci (HNF4G, RPTOR, GNAT2, MRPS33P4, ADCY9, HS6ST3 and ZZZ3) for clinical classes of obesity. Further, we find a large overlap in genetic structure and the distribution of variants between traits based on extremes and the general population and little etiological heterogeneity between obesity subgroups.
7.
  • Brasko, Csilla, et al. (author)
  • Intelligent image-based in situ single-cell isolation
  • 2018
  • In: Nature Communications. - : Nature Publishing Group. - 2041-1723. ; 9
  • Journal article (peer-reviewed) abstract
    • Quantifying heterogeneities within cell populations is important for many fields including cancer research and neurobiology; however, techniques to isolate individual cells are limited. Here, we describe a high-throughput, non-disruptive, and cost-effective isolation method that is capable of capturing individually targeted cells using widely available techniques. Using high-resolution microscopy, laser microcapture microscopy, image analysis, and machine learning, our technology enables scalable molecular genetic analysis of single cells, targetable by morphology or location within the sample.
8.
  • Carlsson, Stefan, et al. (author)
  • The Preimage of Rectifier Network Activities
  • 2017
  • In: International Conference on Learning Representations (ICLR). - : International Conference on Learning Representations, ICLR.
  • Conference paper (peer-reviewed) abstract
    • The preimage of the activity at a certain level of a deep network is the set of inputs that result in the same node activity. For fully connected multilayer rectifier networks, we demonstrate how to compute the preimages of activities at arbitrary levels from knowledge of the parameters in a deep rectifying network. If the preimage set of a certain activity in the network contains elements from more than one class, it means that these classes are irreversibly mixed. This implies that preimage sets, which are piecewise linear manifolds, are building blocks for describing the input manifolds of specific classes, i.e., all preimages should ideally be from the same class. We believe that the knowledge of how to compute preimages will be valuable in understanding the efficiency displayed by deep learning networks and could potentially be used in designing more efficient training algorithms.
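The preimage idea can be made concrete for a single fully connected ReLU layer: two inputs produce the same activity exactly when they agree on the affine outputs of the active units and both land in the non-positive half-spaces of the inactive units. A toy membership check, with illustrative weights (not from the paper):

```python
# Check whether two inputs lie in the same ReLU-layer preimage, i.e.
# produce identical activities a = max(Wx + b, 0). 2D toy example with
# made-up weights; the paper treats arbitrary multilayer networks.
W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, -1.0]

def relu_activity(x):
    return [max(sum(w_i * x_i for w_i, x_i in zip(row, x)) + b_j, 0.0)
            for row, b_j in zip(W, b)]

def same_preimage(x, y, tol=1e-9):
    ax, ay = relu_activity(x), relu_activity(y)
    return all(abs(p - q) < tol for p, q in zip(ax, ay))

# Unit 1 is inactive for both points, so they share an activity as long
# as the active unit's affine output x0 - x1 agrees:
print(same_preimage([1.0, 0.0], [1.2, 0.2]))  # True
```

Geometrically, the set of all such equivalent inputs is the piecewise linear manifold the abstract describes.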
9.
  • Christiansen, F., et al. (author)
  • Ultrasound image analysis using deep neural networks for discriminating between benign and malignant ovarian tumors : comparison with expert subjective assessment
  • 2021
  • In: Ultrasound in Obstetrics and Gynecology. - : Wiley. - 0960-7692 .- 1469-0705. ; 57:1, pp. 155-163
  • Journal article (peer-reviewed) abstract
    • Objectives: To develop and test the performance of computerized ultrasound image analysis using deep neural networks (DNNs) in discriminating between benign and malignant ovarian tumors and to compare its diagnostic accuracy with that of subjective assessment (SA) by an ultrasound expert. Methods: We included 3077 (grayscale, n = 1927; power Doppler, n = 1150) ultrasound images from 758 women with ovarian tumors, who were classified prospectively by expert ultrasound examiners according to IOTA (International Ovarian Tumor Analysis) terms and definitions. Histological outcome from surgery (n = 634) or long-term (>= 3 years) follow-up (n = 124) served as the gold standard. The dataset was split into a training set (n = 508; 314 benign and 194 malignant), a validation set (n = 100; 60 benign and 40 malignant) and a test set (n = 150; 75 benign and 75 malignant). We used transfer learning on three pre-trained DNNs: VGG16, ResNet50 and MobileNet. Each model was trained, and the outputs calibrated, using temperature scaling. An ensemble of the three models was then used to estimate the probability of malignancy based on all images from a given case. The DNN ensemble classified the tumors as benign or malignant (Ovry-Dx1 model); or as benign, inconclusive or malignant (Ovry-Dx2 model). The diagnostic performance of the DNN models, in terms of sensitivity and specificity, was compared to that of SA for classifying ovarian tumors in the test set. Results: At a sensitivity of 96.0%, Ovry-Dx1 had a specificity similar to that of SA (86.7% vs 88.0%; P = 1.0). Ovry-Dx2 had a sensitivity of 97.1% and a specificity of 93.7%, when designating 12.7% of the lesions as inconclusive. By complementing Ovry-Dx2 with SA in inconclusive cases, the overall sensitivity (96.0%) and specificity (89.3%) were not significantly different from using SA in all cases (P = 1.0). 
Conclusion: Ultrasound image analysis using DNNs can predict ovarian malignancy with a diagnostic accuracy comparable to that of human expert examiners, indicating that these models may have a role in the triage of women with an ovarian tumor.
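The calibration step mentioned in the abstract, temperature scaling, divides a model's logit by a temperature T fitted on a validation set before applying the sigmoid; the ensemble then averages the calibrated probabilities across models. A minimal sketch, where the temperatures and logits are made-up values:

```python
import math

# Temperature scaling: divide each model's logit by its fitted
# temperature T before the sigmoid, then average the calibrated
# probabilities across the ensemble. All numbers here are illustrative.
def calibrated_prob(logit, T):
    return 1.0 / (1.0 + math.exp(-logit / T))

def ensemble_malignancy_prob(logits_per_model, temperatures):
    probs = [calibrated_prob(lg, T)
             for lg, T in zip(logits_per_model, temperatures)]
    return sum(probs) / len(probs)

# Three hypothetical backbones (e.g., VGG16 / ResNet50 / MobileNet):
p = ensemble_malignancy_prob([2.0, 1.0, 3.0], [1.5, 2.0, 1.2])
print(0.0 < p < 1.0)  # True
```

T > 1 softens overconfident predictions without changing the model's ranking of cases, which is why calibration can be applied after training.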
10.
  • Cossío, Fernando, et al. (author)
  • VAI-B: A multicenter platform for the external validation of artificial intelligence algorithms in breast imaging
  • 2023
  • In: Journal of Medical Imaging. - : SPIE-Intl Soc Optical Eng. - 2329-4302 .- 2329-4310. ; 10:6
  • Journal article (peer-reviewed) abstract
    • Purpose: Multiple vendors are currently offering artificial intelligence (AI) computer-aided systems for triage detection, diagnosis, and risk prediction of breast cancer based on screening mammography. There is an imminent need to establish validation platforms that enable fair and transparent testing of these systems against external data. Approach: We developed validation of artificial intelligence for breast imaging (VAI-B), a platform for independent validation of AI algorithms in breast imaging. The platform is a hybrid solution, with one part implemented in the cloud and another in an on-premises environment at Karolinska Institute. Cloud services provide the flexibility of scaling the computing power during inference time, while secure on-premises clinical data storage preserves their privacy. A MongoDB database and a Python package were developed to store and manage the data on-premises. VAI-B requires four data components: radiological images, AI inferences, radiologist assessments, and cancer outcomes. Results: To pilot test VAI-B, we defined a case-control population based on 8080 patients diagnosed with breast cancer and 36,339 healthy women based on the Swedish national quality registry for breast cancer. Images and radiological assessments from more than 100,000 mammography examinations were extracted from hospitals in three regions of Sweden. The images were processed by AI systems from three vendors in a virtual private cloud to produce abnormality scores related to signs of cancer in the images. A total of 105,706 examinations have been processed and stored in the database. Conclusions: We have created a platform that will allow downstream evaluation of AI systems for breast cancer detection, which enables faster development cycles for participating vendors and safer AI adoption for participating hospitals. 
The platform was designed to be scalable and ready to be expanded should a new vendor want to evaluate their system or should a new hospital wish to obtain an evaluation of different AI systems on their images.
11.
  • Dembrower, K., et al. (author)
  • Comparison of a deep learning risk score and standard mammographic density score for breast cancer risk prediction
  • 2020
  • In: Radiology. - : Radiological Society of North America Inc. - 0033-8419 .- 1527-1315. ; 294:2, pp. 265-272
  • Journal article (peer-reviewed) abstract
    • Background: Most risk prediction models for breast cancer are based on questionnaires and mammographic density assessments. By training a deep neural network, further information in the mammographic images can be considered. Purpose: To develop a risk score that is associated with future breast cancer and compare it with density-based models. Materials and Methods: In this retrospective study, all women aged 40-74 years within the Karolinska University Hospital uptake area in whom breast cancer was diagnosed in 2013-2014 were included along with healthy control subjects. Network development was based on cases diagnosed from 2008 to 2012. The deep learning (DL) risk score, dense area, and percentage density were calculated for the earliest available digital mammographic examination for each woman. Logistic regression models were fitted to determine the association with subsequent breast cancer. False-negative rates were obtained for the DL risk score, age-adjusted dense area, and age-adjusted percentage density. Results: A total of 2283 women, 278 of whom were later diagnosed with breast cancer, were evaluated. The age at mammography (mean, 55.7 years vs 54.6 years; P< .001), the dense area (mean, 38.2 cm2 vs 34.2 cm2; P< .001), and the percentage density (mean, 25.6% vs 24.0%; P< .001) were higher among women diagnosed with breast cancer than in those without a breast cancer diagnosis. The odds ratios and areas under the receiver operating characteristic curve (AUCs) were higher for age-adjusted DL risk score than for dense area and percentage density: 1.56 (95% confidence interval [CI]: 1.48, 1.64; AUC, 0.65), 1.31 (95% CI: 1.24, 1.38; AUC, 0.60), and 1.18 (95% CI: 1.11, 1.25; AUC, 0.57), respectively (P< .001 for AUC). The false-negative rate was lower: 31% (95% CI: 29%, 34%), 36% (95% CI: 33%, 39%; P = .006), and 39% (95% CI: 37%, 42%; P< .001); this difference was most pronounced for more aggressive cancers. 
Conclusion: Compared with density-based models, a deep neural network can more accurately predict which women are at risk for future breast cancer, with a lower false-negative rate for more aggressive cancers.
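The AUC values compared in this abstract can be computed directly as the probability that a randomly chosen case receives a higher risk score than a randomly chosen control (ties counting one half). A minimal rank-based sketch with toy scores, not study data:

```python
# AUC as the probability that a random positive outscores a random
# negative (ties count 1/2). Scores below are toy values, not data
# from the study.
def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(round(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]), 3))  # 0.889
```

An AUC of 0.5 means the score carries no discriminating information; the study's reported 0.65 vs 0.60 vs 0.57 are modest but meaningfully ordered differences on this scale.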
12.
  • Dembrower, Karin, et al. (author)
  • Effect of artificial intelligence-based triaging of breast cancer screening mammograms on cancer detection and radiologist workload : a retrospective simulation study
  • 2020
  • In: The Lancet Digital Health. - : Elsevier. - 2589-7500. ; 2:9, pp. E468-E474
  • Journal article (peer-reviewed) abstract
    • Background We examined the potential change in cancer detection when using an artificial intelligence (AI) cancer-detection software to triage certain screening examinations into a no radiologist work stream, and then after regular radiologist assessment of the remainder, triage certain screening examinations into an enhanced assessment work stream. The purpose of enhanced assessment was to simulate selection of women for more sensitive screening promoting early detection of cancers that would otherwise be diagnosed as interval cancers or as next-round screen-detected cancers. The aim of the study was to examine how AI could reduce radiologist workload and increase cancer detection. Methods In this retrospective simulation study, all women diagnosed with breast cancer who attended two consecutive screening rounds were included. Healthy women were randomly sampled from the same cohort; their observations were given elevated weight to mimic a frequency of 0.7% incident cancer per screening interval. Based on the prediction score from a commercially available AI cancer detector, various cutoff points for the decision to channel women to the two new work streams were examined in terms of missed and additionally detected cancer. Findings 7364 women were included in the study sample: 547 were diagnosed with breast cancer and 6817 were healthy controls. When including 60%, 70%, or 80% of women with the lowest AI scores in the no radiologist stream, the proportion of screen-detected cancers that would have been missed were 0, 0.3% (95% CI 0.0-4.3), or 2.6% (1.1-5.4), respectively. When including 1% or 5% of women with the highest AI scores in the enhanced assessment stream, the potential additional cancer detection was 24 (12%) or 53 (27%) of 200 subsequent interval cancers, respectively, and 48 (14%) or 121 (35%) of 347 next-round screen-detected cancers, respectively. 
Interpretation Using a commercial AI cancer detector to triage mammograms into no radiologist assessment and enhanced assessment could potentially reduce radiologist workload by more than half, and pre-emptively detect a substantial proportion of cancers otherwise diagnosed later.
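The triage logic described in the abstract amounts to two cutoffs on the AI score: the lowest-scoring fraction of exams bypasses radiologist review, and the highest-scoring fraction is routed to enhanced assessment. A minimal sketch with made-up scores and stream names (the study used a commercial detector and evaluated several cutoffs):

```python
# Route exams by AI-score rank: bottom `low_frac` to a no-radiologist
# stream, top `high_frac` to enhanced assessment, the rest to regular
# review. Scores and fractions below are illustrative only.
def triage(scores, low_frac=0.6, high_frac=0.05):
    ranked = sorted(range(len(scores)), key=lambda i: scores[i])
    n_low = int(len(scores) * low_frac)
    n_high = int(len(scores) * high_frac)
    no_rad = set(ranked[:n_low])
    enhanced = set(ranked[len(scores) - n_high:]) if n_high else set()
    return {i: ("no_radiologist" if i in no_rad
                else "enhanced" if i in enhanced else "regular")
            for i in range(len(scores))}

scores = [0.01, 0.2, 0.05, 0.9, 0.4, 0.03, 0.6, 0.08, 0.7, 0.99]
streams = triage(scores, low_frac=0.6, high_frac=0.1)
print(streams[9], streams[0])  # enhanced no_radiologist
```

The simulation question is then what fraction of cancers falls into each stream at each pair of cutoffs.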
13.
  • Fredin Haslum, Johan, et al. (author)
  • Bridging Generalization Gaps in High Content Imaging Through Online Self-Supervised Domain Adaptation
  • 2024
  • In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 2024. ; pp. 7723-7732
  • Conference paper (peer-reviewed) abstract
    • High Content Imaging (HCI) plays a vital role in modern drug discovery and development pipelines, facilitating various stages from hit identification to candidate drug characterization. Applying machine learning models to these datasets can prove challenging, as they typically consist of multiple batches affected by experimental variation, especially if different imaging equipment has been used. Moreover, as new data arrive, it is preferable that they are analyzed in an online fashion. To overcome this, we propose CODA, an online self-supervised domain adaptation approach. CODA divides the classifier’s role into a generic feature extractor and a task-specific model. We adapt the feature extractor’s weights to the new domain using cross-batch self-supervision while keeping the task-specific model unchanged. Our results demonstrate that this strategy significantly reduces the generalization gap, achieving up to a 300% improvement when applied to data from different labs utilizing different microscopes. CODA can be applied to new, unlabeled out-of-domain data sources of different sizes, from a single plate to multiple experimental batches.
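The split described above, adapting the feature extractor while freezing the task-specific head, can be illustrated with a toy gradient step that updates only the extractor's parameter on a self-supervised objective. Everything here is a schematic one-dimensional stand-in for the actual CODA method, including the loss:

```python
# Schematic online adaptation: update only the feature extractor's
# parameter on a self-supervised objective; the task head stays frozen.
# One-dimensional toy with a made-up objective, not the CODA loss.
extractor_w = 2.0   # feature = extractor_w * x   (adapted)
head_w = 0.7        # prediction = head_w * feature (frozen)

def ssl_loss(w, batch):
    # toy self-supervised objective: keep the batch feature mean near 1.0
    mean_feat = sum(w * x for x in batch) / len(batch)
    return (mean_feat - 1.0) ** 2

batch = [0.2, 0.4, 0.6]          # unlabeled out-of-domain data
lr, eps = 0.5, 1e-6
for _ in range(200):             # finite-difference gradient descent
    g = (ssl_loss(extractor_w + eps, batch)
         - ssl_loss(extractor_w - eps, batch)) / (2 * eps)
    extractor_w -= lr * g

# Adapted extractor re-centers the features; the head is untouched.
print(round(extractor_w * sum(batch) / len(batch), 3), head_w)  # 1.0 0.7
```

Keeping the head fixed is what lets adaptation proceed without any labels from the new domain: only the statistics of the features are pulled back toward what the head was trained on.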
14.
  • Fredin Haslum, Johan, et al. (author)
  • Cell Painting-based bioactivity prediction boosts high-throughput screening hit-rates and compound diversity
  • 2024
  • In: Nature Communications. - : Springer Nature. - 2041-1723. ; 15:1
  • Journal article (peer-reviewed) abstract
    • Identifying active compounds for a target is a time- and resource-intensive task in early drug discovery. Accurate bioactivity prediction using morphological profiles could streamline the process, enabling smaller, more focused compound screens. We investigate the potential of deep learning on unrefined single-concentration activity readouts and Cell Painting data, to predict compound activity across 140 diverse assays. We observe an average ROC-AUC of 0.744 ± 0.108 with 62% of assays achieving ≥0.7, 30% ≥0.8, and 7% ≥0.9. In many cases, the high prediction performance can be achieved using only brightfield images instead of multichannel fluorescence images. A comprehensive analysis shows that Cell Painting-based bioactivity prediction is robust across assay types, technologies, and target classes, with cell-based assays and kinase targets being particularly well-suited for prediction. Experimental validation confirms the enrichment of active compounds. Our findings indicate that models trained on Cell Painting data, combined with a small set of single-concentration data points, can reliably predict the activity of a compound library across diverse targets and assays while maintaining high hit rates and scaffold diversity. This approach has the potential to reduce the size of screening campaigns, saving time and resources, and enabling primary screening with more complex assays.
15.
  • Fredin Haslum, Johan (author)
  • Machine Learning Methods for Image-based Phenotypic Profiling in Early Drug Discovery
  • 2024
  • Doctoral thesis (other academic/artistic) abstract
    • In the search for new therapeutic treatments, strategies to make the drug discovery process more efficient are crucial. Image-based phenotypic profiling, with its millions of pictures of fluorescently stained cells, is a rich and effective means to capture the morphological effects of potential treatments on living systems. Within this complex data await biological insights and new therapeutic opportunities, but computational tools are needed to unlock them. This thesis examines the role of machine learning in improving the utility and analysis of phenotypic screening data. It focuses on challenges specific to this domain, such as the lack of reliable labels that are essential for supervised learning, as well as confounding factors present in the data that are often unavoidable due to experimental variability. We explore transfer learning to boost model generalization and robustness, analyzing the impact of domain distance, initialization, dataset size, and architecture on the effectiveness of applying natural domain pre-trained weights to biomedical contexts. Building upon this, we delve into self-supervised pretraining for phenotypic image data, but find its direct application is inadequate in this context as it fails to differentiate between various biological effects. To overcome this, we develop new self-supervised learning strategies designed to enable the network to disregard confounding experimental noise, thus enhancing its ability to discern the impacts of various treatments. We further develop a technique that allows a model trained for phenotypic profiling to be adapted to new, unseen data without the need for any labels or supervised learning. Using this approach, a general phenotypic profiling model can be readily adapted to data from different sites without the need for any labels. 
Beyond our technical contributions, we also show that bioactive compounds identified using the approaches outlined in this thesis have been subsequently confirmed in biological assays through replication in an industrial setting. Our findings indicate that while phenotypic data and biomedical imaging present complex challenges, machine learning techniques can play a pivotal role in making early drug discovery more efficient and effective.
16.
  • Fredin Haslum, Johan, et al. (author)
  • Metadata-guided Consistency Learning for High Content Images
  • 2023
  • In: PMLR Volume 227: Medical Imaging with Deep Learning, 10-12 July 2023, Nashville, TN, USA.
  • Conference paper (peer-reviewed) abstract
    • High content imaging assays can capture rich phenotypic response data for large sets of compound treatments, aiding in the characterization and discovery of novel drugs. However, extracting representative features from high content images that can capture subtle nuances in phenotypes remains challenging. The lack of high-quality labels makes it difficult to achieve satisfactory results with supervised deep learning. Self-supervised learning methods have shown great success on natural images and also offer an attractive alternative for microscopy images. However, we find that self-supervised learning techniques underperform on high content imaging assays. One challenge is the undesirable domain shifts present in the data, known as batch effects, which are caused by biological noise or uncontrolled experimental conditions. To this end, we introduce Cross-Domain Consistency Learning (CDCL), a self-supervised approach that is able to learn in the presence of batch effects. CDCL enforces the learning of biological similarities while disregarding undesirable batch-specific signals, leading to more useful and versatile representations. These features are organised according to their morphological changes and are more useful for downstream tasks, such as distinguishing treatments and mechanisms of action.
17.
  • Fredin Haslum, Johan, et al. (author)
  • Metadata-guided Consistency Learning for High Content Images
  • 2023
  • In: Medical Imaging with Deep Learning 2023, MIDL 2023. - : ML Research Press. ; pp. 918-936
  • Conference paper (peer-reviewed) abstract
    • High content imaging assays can capture rich phenotypic response data for large sets of compound treatments, aiding in the characterization and discovery of novel drugs. However, extracting representative features from high content images that can capture subtle nuances in phenotypes remains challenging. The lack of high-quality labels makes it difficult to achieve satisfactory results with supervised deep learning. Self-supervised learning methods have shown great success on natural images and also offer an attractive alternative for microscopy images. However, we find that self-supervised learning techniques underperform on high content imaging assays. One challenge is the undesirable domain shifts present in the data, known as batch effects, which are caused by biological noise or uncontrolled experimental conditions. To this end, we introduce Cross-Domain Consistency Learning (CDCL), a self-supervised approach that is able to learn in the presence of batch effects. CDCL enforces the learning of biological similarities while disregarding undesirable batch-specific signals, leading to more useful and versatile representations. These features are organised according to their morphological changes and are more useful for downstream tasks, such as distinguishing treatments and mechanisms of action.
  •  
18.
  • Fusco, Ludovico, et al. (författare)
  • Computer vision profiling of neurite outgrowth dynamics reveals spatio-temporal modularity of Rho GTPase signaling
  • 2016
  • Ingår i: Journal of Cell Biology. - : Rockefeller University Press. - 0021-9525 .- 1540-8140. ; 212:1, s. 91-111
  • Tidskriftsartikel (refereegranskat)abstract
    • Rho guanosine triphosphatases (GTPases) control the cytoskeletal dynamics that power neurite outgrowth. This process consists of dynamic neurite initiation, elongation, retraction, and branching cycles that are likely to be regulated by specific spatiotemporal signaling networks, which cannot be resolved with static, steady-state assays. We present Neurite-Tracker, a computer-vision approach to automatically segment and track neuronal morphodynamics in time-lapse datasets. Feature extraction then quantifies dynamic neurite outgrowth phenotypes. We identify a set of stereotypic neurite outgrowth morphodynamic behaviors in a cultured neuronal cell system. Systematic RNA interference perturbation of a Rho GTPase interactome consisting of 219 proteins reveals a limited set of morphodynamic phenotypes. As proof of concept, we show that loss of function of two distinct RhoA-specific GTPase-activating proteins (GAPs) leads to opposite neurite outgrowth phenotypes. Imaging of RhoA activation dynamics indicates that both GAPs regulate different spatiotemporal Rho GTPase pools, with distinct functions. Our results provide a starting point to dissect spatiotemporal Rho GTPase signaling networks that regulate neurite outgrowth.
  •  
19.
  • Hollandi, R., et al. (författare)
  • nucleAIzer : A Parameter-free Deep Learning Framework for Nucleus Segmentation Using Image Style Transfer
  • 2020
  • Ingår i: Cell Systems. - : Elsevier BV. - 2405-4712. ; 10:5, s. 453-458.e6
  • Tidskriftsartikel (refereegranskat)abstract
    • Single-cell segmentation is typically a crucial task of image-based cellular analysis. We present nucleAIzer, a deep-learning approach aiming toward a truly general method for localizing 2D cell nuclei across a diverse range of assays and light microscopy modalities. We outperform the 739 methods submitted to the 2018 Data Science Bowl on images representing a variety of realistic conditions, some of which were not represented in the training data. The key to our approach is that during training nucleAIzer automatically adapts its nucleus-style model to unseen and unlabeled data using image style transfer to automatically generate augmented training samples. This allows the model to recognize nuclei in new and different experiments efficiently without requiring expert annotations, making deep learning for nucleus segmentation fairly simple and labor-free for most biological light microscopy experiments. It can also be used online, integrated into CellProfiler and freely downloaded at www.nucleaizer.org. A record of this paper's transparent peer review process is included in the Supplemental Information. Microscopy image analysis of single cells can be challenging, but it can also be eased and improved. We developed a deep learning method to segment cell nuclei. Our strategy adapts to unexpected circumstances automatically by synthesizing artificial microscopy images in the new domain as training samples.
  •  
20.
  • Huix, Joana Palés, et al. (författare)
  • Are Natural Domain Foundation Models Useful for Medical Image Classification?
  • 2024
  • Ingår i: Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024. - : Institute of Electrical and Electronics Engineers (IEEE). ; , s. 7619-7628
  • Konferensbidrag (refereegranskat)abstract
    • The deep learning field is converging towards the use of general foundation models that can be easily adapted for diverse tasks. While this paradigm shift has become common practice within the field of natural language processing, progress has been slower in computer vision. In this paper, we attempt to address this issue by investigating the transferability of various state-of-the-art foundation models to medical image classification tasks. Specifically, we evaluate the performance of five foundation models, namely SAM, SEEM, DINOv2, BLIP, and OpenCLIP, across four well-established medical imaging datasets. We explore different training settings to fully harness the potential of these models. Our study shows mixed results. DINOv2 consistently outperforms the standard practice of ImageNet pretraining. However, other foundation models failed to consistently beat this established baseline, indicating limitations in their transferability to medical image classification tasks.
  •  
21.
  • Konuk, Emir, et al. (författare)
  • An empirical study of the relation between network architecture and complexity
  • 2019
  • Ingår i: Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019. - : Institute of Electrical and Electronics Engineers Inc.. - 9781728150239 ; , s. 4597-4599
  • Konferensbidrag (refereegranskat)abstract
    • In this preregistration submission, we propose an empirical study of how networks handle changes in complexity of the data. We investigate the effect of network capacity on generalization performance in the face of increasing data complexity. For this, we measure the generalization error for an image classification task where the number of classes steadily increases. We compare a number of modern architectures at different scales in this setting. The methodology, setup, and hypotheses described in this proposal were evaluated by peer review before experiments were conducted.
  •  
22.
  • Liu, Yue (författare)
  • Breast cancer risk assessment and detection in mammograms with artificial intelligence
  • 2024
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • Breast cancer, the most common type of cancer among women worldwide, necessitates reliable early detection methods. Although mammography serves as a cost-effective screening technique, its limitations in sensitivity emphasize the need for more advanced detection approaches. Previous studies have relied on breast density, extracted directly from the mammograms, as a primary metric for cancer risk assessment, given its correlation with increased cancer risk and the masking potential of cancer. However, such a singular metric overlooks image details and spatial relationships critical for cancer diagnosis. To address these limitations, this thesis integrates artificial intelligence (AI) models into mammography, with the goal of enhancing both cancer detection and risk estimation. In this thesis, we aim to establish a new benchmark for breast cancer prediction using neural networks. Utilizing the Cohort of Screen-Aged Women (CSAW) dataset, which includes mammography images from 2008 to 2015 in Stockholm, Sweden, we develop three AI models to predict inherent risk, cancer signs, and masking potential of cancer. Combined, these models can effectively identify women in need of supplemental screening, even after a clean exam, paving the way for better early detection of cancer. Individually, important progress has been made on each of these component tasks as well. The risk prediction model, developed and tested on a large population-based cohort, establishes a new state-of-the-art at identifying women at elevated risk of developing breast cancer, outperforming traditional density measures. The risk model is carefully designed to avoid conflating image patterns related to early cancer signs with those related to long-term risk. We also propose a method that allows vision transformers to be efficiently trained on and make use of high-resolution images, an essential property for models analyzing mammograms.
We also develop an approach to predict the masking potential in a mammogram – the likelihood that a cancer may be obscured by neighboring tissue and consequently misdiagnosed. High masking potential can complicate early detection and delay timely interventions. Along with the model, we curate and release a new public dataset which can help speed up progress on this important task. Through our research, we demonstrate the transformative potential of AI in mammographic analysis. By capturing subtle image cues, AI models consistently exceed the traditional baselines. These advancements not only highlight both the individual and combined advantages of the models, but also signal a transition to an era of AI-enhanced personalized healthcare, promising more efficient resource allocation and better patient outcomes.
  •  
23.
  • Liu, Yue, et al. (författare)
  • Decoupling Inherent Risk and Early Cancer Signs in Image-Based Breast Cancer Risk Models
  • 2020
  • Ingår i: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. - Cham : Springer Nature. ; , s. 230-240
  • Konferensbidrag (refereegranskat)abstract
    • The ability to accurately estimate risk of developing breast cancer would be invaluable for clinical decision-making. One promising new approach is to integrate image-based risk models based on deep neural networks. However, one must take care when using such models, as selection of training data influences the patterns the network will learn to identify. With this in mind, we trained networks using three different criteria to select the positive training data (i.e. images from patients that will develop cancer): an inherent risk model trained on images with no visible signs of cancer, a cancer signs model trained on images containing cancer or early signs of cancer, and a conflated model trained on all images from patients with a cancer diagnosis. We find that these three models learn distinctive features that focus on different patterns, which translates to contrasts in performance. Short-term risk is best estimated by the cancer signs model, whilst long-term risk is best estimated by the inherent risk model. Carelessly training with all images conflates inherent risk with early cancer signs, and yields sub-optimal estimates in both regimes. As a consequence, conflated models may lead physicians to recommend preventative action when early cancer signs are already visible.
  •  
24.
  • Liu, Yue, et al. (författare)
  • PatchDropout : Economizing Vision Transformers Using Patch Dropout
  • 2023
  • Ingår i: 2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV). - : Institute of Electrical and Electronics Engineers (IEEE). ; , s. 3942-3951
  • Konferensbidrag (refereegranskat)abstract
    • Vision transformers have demonstrated the potential to outperform CNNs in a variety of vision tasks. But the computational and memory requirements of these models prohibit their use in many applications, especially those that depend on high-resolution images, such as medical image classification. Efforts to train ViTs more efficiently are overly complicated, necessitating architectural changes or intricate training schemes. In this work, we show that standard ViT models can be efficiently trained at high resolution by randomly dropping input image patches. This simple approach, PatchDropout, reduces FLOPs and memory by at least 50% in standard natural image datasets such as IMAGENET, and those savings only increase with image size. On CSAW, a high-resolution medical dataset, we observe a 5× savings in computation and memory using PatchDropout, along with a boost in performance. For practitioners with a fixed computational or memory budget, PatchDropout makes it possible to choose image resolution, hyperparameters, or model size to get the most performance out of their model.
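The patch-dropping idea described in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation; the function name and token layout are invented here. Before the transformer encoder runs, a random subset of patch tokens is kept and the rest are discarded, so the encoder's cost scales with the kept fraction.

```python
import random

def patch_dropout(patch_tokens, keep_ratio=0.5, rng=None):
    """Keep a random subset of patch tokens before the transformer encoder.
    A sketch of the random patch-dropping idea: sampling without
    replacement, preserving the original token order."""
    rng = rng or random.Random()
    n_keep = max(1, int(len(patch_tokens) * keep_ratio))
    kept_idx = sorted(rng.sample(range(len(patch_tokens)), n_keep))
    return [patch_tokens[i] for i in kept_idx]

# A 224x224 image with 16x16 patches yields 196 tokens; at keep_ratio=0.5
# only 98 tokens enter the encoder, cutting FLOPs and memory roughly in half.
tokens = [[0.0] * 768 for _ in range(196)]
kept = patch_dropout(tokens, keep_ratio=0.5, rng=random.Random(0))
```

Because self-attention cost grows quadratically with the number of tokens, halving the tokens saves more than half the attention FLOPs, which is consistent with the savings growing as image resolution increases.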
  •  
25.
  • Liu, Yue, et al. (författare)
  • Selecting Women for Supplemental Breast Imaging using AI Biomarkers of Cancer Signs, Masking, and Risk
  • 2023
  • Annan publikation (övrigt vetenskapligt/konstnärligt)abstract
    • Background: Traditional mammographic density aids in determining the need for supplemental imaging by MRI or ultrasound. However, AI image analysis, considering more subtle and complex image features, may enable a more effective identification of women requiring supplemental imaging. Purpose: To assess if AISmartDensity, an AI-based score considering cancer signs, masking, and risk, surpasses traditional mammographic density in identifying women for supplemental imaging after negative screening mammography. Methods: This retrospective study included randomly selected breast cancer patients and healthy controls at Karolinska University Hospital between 2008 and 2015. Bootstrapping simulated a 0.2% interval cancer rate. We included previous exams for diagnosed women and all exams for controls. AISmartDensity had been developed using random mammograms from a population non-overlapping with the current study population. We evaluated the ability of AISmartDensity to identify, based on negative screening mammograms, women with interval cancer and next-round screen-detected cancer. It was compared to age and density models, with sensitivity and PPV calculated for women with the top 8% of scores, mimicking the proportion of the BIRADS “extremely dense” category. Statistical significance was determined using the Student’s t-test. Results: The study involved 2043 women, 258 with breast cancer diagnosed within 3 years of a negative mammogram, and 1785 healthy controls. Diagnosed women had a median age of 57 years (IQR 16) versus 53 years (IQR 15) for controls (p < .001). At the 92nd percentile, AISmartDensity identified 87 (33.67%) future cancers with a PPV of 1.68%, whereas mammographic density identified 34 (13.18%) with a PPV of 0.66% (p < .001). AISmartDensity identified 32% of interval and 36% of next-round cancers, versus mammographic density’s 16% and 10%.
The combined mammographic density and age model yielded an AUC of 0.60, significantly lower than AISmartDensity’s 0.73 (p < .001). Conclusions: AISmartDensity, integrating cancer signs, masking, and risk, more effectively identified women for additional breast imaging than traditional age and density models.
  •  
26.
  • Liu, Yue, et al. (författare)
  • Use of an AI Score Combining Cancer Signs, Masking, and Risk to Select Patients for Supplemental Breast Cancer Screening
  • 2024
  • Ingår i: Radiology. - : Radiological Society of North America (RSNA). - 0033-8419 .- 1527-1315. ; 311:1
  • Tidskriftsartikel (refereegranskat)abstract
    • Background: Mammographic density measurements are used to identify patients who should undergo supplemental imaging for breast cancer detection, but artificial intelligence (AI) image analysis may be more effective. Purpose: To assess whether AISmartDensity, an AI-based score integrating cancer signs, masking, and risk, surpasses measurements of mammographic density in identifying patients for supplemental breast imaging after a negative screening mammogram. Materials and Methods: This retrospective study included randomly selected individuals who underwent screening mammography at Karolinska University Hospital between January 2008 and December 2015. The models in AISmartDensity were trained and validated using nonoverlapping data. The ability of AISmartDensity to identify future cancer in patients with a negative screening mammogram was evaluated and compared with that of mammographic density models. Sensitivity and positive predictive value (PPV) were calculated for the top 8% of scores, mimicking the proportion of patients in the Breast Imaging Reporting and Data System "extremely dense" category. Model performance was evaluated using area under the receiver operating characteristic curve (AUC) and was compared using the DeLong test. Results: The study population included 65 325 examinations (median patient age, 53 years [IQR, 47-62 years]): 64 870 examinations in healthy patients and 455 examinations in patients with breast cancer diagnosed within 3 years of a negative screening mammogram. The AUC for detecting subsequent cancers was 0.72 and 0.61 (P < .001) for AISmartDensity and the best-performing density model (age-adjusted dense area), respectively. For examinations with scores in the top 8%, AISmartDensity identified 152 of 455 (33%) future cancers with a PPV of 2.91%, whereas the best-performing density model (age-adjusted dense area) identified 57 of 455 (13%) future cancers with a PPV of 1.09% (P < .001).
AISmartDensity identified 32% (41 of 130) and 34% (111 of 325) of interval and next-round screen-detected cancers, whereas the best-performing density model (dense area) identified 16% (21 of 130) and 9% (30 of 325), respectively. Conclusion: AISmartDensity, integrating cancer signs, masking, and risk, outperformed traditional density models in identifying patients for supplemental imaging after a negative screening mammogram.
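The sensitivity and PPV figures reported in this abstract can be reproduced from the stated counts (a small worked check; the top 8% of 65 325 examinations corresponds to 5 226 flagged exams):

```python
def sensitivity_and_ppv(cancers_flagged, total_cancers, total_flagged):
    """Sensitivity = flagged cancers / all cancers;
    PPV = flagged cancers / all flagged examinations."""
    return cancers_flagged / total_cancers, cancers_flagged / total_flagged

total_exams = 65325
flagged = round(0.08 * total_exams)   # top 8% of scores: 5226 examinations
# AISmartDensity: 152 of 455 future cancers found among the flagged exams
sens, ppv = sensitivity_and_ppv(152, 455, flagged)
# sens is about 0.33 (33%) and ppv about 0.0291 (2.91%), as reported
```

The same arithmetic on the density model's counts (57 of 455 cancers over the same 5 226 flagged exams) gives the reported 13% sensitivity and 1.09% PPV.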
  •  
27.
  • Mahajan, Anubha, et al. (författare)
  • Multi-ancestry genetic study of type 2 diabetes highlights the power of diverse populations for discovery and translation
  • 2022
  • Ingår i: Nature Genetics. - : Springer Nature. - 1061-4036 .- 1546-1718. ; 54:5, s. 560-572
  • Tidskriftsartikel (refereegranskat)abstract
    • We assembled an ancestrally diverse collection of genome-wide association studies (GWAS) of type 2 diabetes (T2D) in 180,834 affected individuals and 1,159,055 controls (48.9% non-European descent) through the Diabetes Meta-Analysis of Trans-Ethnic association studies (DIAMANTE) Consortium. Multi-ancestry GWAS meta-analysis identified 237 loci attaining stringent genome-wide significance (P < 5 × 10⁻⁹), which were delineated to 338 distinct association signals. Fine-mapping of these signals was enhanced by the increased sample size and expanded population diversity of the multi-ancestry meta-analysis, which localized 54.4% of T2D associations to a single variant with >50% posterior probability. This improved fine-mapping enabled systematic assessment of candidate causal genes and molecular mechanisms through which T2D associations are mediated, laying the foundations for functional investigations. Multi-ancestry genetic risk scores enhanced transferability of T2D prediction across diverse populations. Our study provides a step toward more effective clinical translation of T2D GWAS to improve global health for all, irrespective of genetic background. Genome-wide association and fine-mapping analyses in ancestrally diverse populations implicate candidate causal genes and mechanisms underlying type 2 diabetes. Trans-ancestry genetic risk scores enhance transferability across populations.
  •  
28.
  • Matsoukas, Christos (författare)
  • Artificial Intelligence for Medical Image Analysis with Limited Data
  • 2024
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • Artificial intelligence (AI) is progressively influencing business, science, and society, leading to major socioeconomic changes. However, its application in real-world problems varies significantly across different sectors. One of the primary challenges limiting the widespread adoption of AI in certain areas is data availability. Medical image analysis is one of these domains, where the process of gathering data and labels is often challenging or even infeasible due to legal and privacy concerns, or due to the specific characteristics of diseases. Logistical obstacles, expensive diagnostic methods and the necessity for invasive procedures add to the difficulty of data collection. Even when ample data exists, the substantial cost and logistical hurdles in acquiring expert annotations pose considerable challenges. Thus, there is a pressing need for the development of AI models that can operate in low-data settings. In this thesis, we explore methods that improve the generalization and robustness of models when data availability is limited. We highlight the importance of model architecture and initialization, considering their associated assumptions and biases, to determine their effectiveness in such settings. We find that models with fewer built-in assumptions in their architecture need to be initialized with pre-trained weights, executed via transfer learning. This prompts us to explore how well transfer learning performs when models are initially trained in the natural domains, where data is abundant, before being used for medical image analysis where data is limited. We identify key factors responsible for transfer learning’s efficacy, and explore its relationship with data size, model architecture, and the distance between the target domain and the one used for pretraining. In cases where expert labels are scarce, we introduce the concept of complementary labels as the means to expand the labeling set. 
By providing information about other objects in the image, these labels help develop richer representations, leading to improved performance in low-data regimes. We showcase the utility of these methods by streamlining the histopathology-based assessment of chronic kidney disease in an industrial pharmaceutical setting, reducing the turnaround time of study evaluations by 97%. Our results demonstrate that AI models developed for low data regimes are capable of delivering industrial-level performance, proving their practical use in drug discovery and healthcare.
  •  
29.
  • Matsoukas, Christos, et al. (författare)
  • What Makes Transfer Learning Work for Medical Images : Feature Reuse & Other Factors
  • 2022
  • Ingår i: 2022 IEEE/CVF conference on computer vision and pattern recognition (CVPR). - : Institute of Electrical and Electronics Engineers (IEEE). ; , s. 9215-9224
  • Konferensbidrag (refereegranskat)abstract
    • Transfer learning is a standard technique to transfer knowledge from one domain to another. For applications in medical imaging, transfer from ImageNet has become the de-facto approach, despite differences in the tasks and image characteristics between the domains. However, it is unclear what factors determine whether, and to what extent, transfer learning to the medical domain is useful. The longstanding assumption that features from the source domain get reused has recently been called into question. Through a series of experiments on several medical image benchmark datasets, we explore the relationship between transfer learning, data size, the capacity and inductive bias of the model, as well as the distance between the source and target domain. Our findings suggest that transfer learning is beneficial in most cases, and we characterize the important role feature reuse plays in its success.
  •  
30.
  • Modin, Anders, 1975- (författare)
  • Resonant Soft X-ray Spectroscopic Studies of Light Actinides and Copper Systems
  • 2009
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • Light actinides and copper systems were studied using resonant soft X-ray spectroscopy. An instrumental and experimental setup for soft X-ray spectroscopy meeting the requirements of a closed source for radioactivity was developed and described in detail. The setup was used for studies of single-crystal PuO2 oxidation. The existence of a higher oxidation state than Pu(IV) in some surface areas of the single crystal was found from O 1s X-ray absorption measurements. Furthermore, from comparison with first principles calculations it was indicated that plutonium oxide with a Pu fraction in a higher oxidation state than Pu(IV) consists of inequivalent sites with Pu(IV)O2 and Pu(V)O2 rather than a system where the Pu oxidation state is constantly fluctuating between Pu(IV) and Pu(V). It was shown that a combination of resonant O Kα X-ray emission and O 1s X-ray absorption spectroscopies can be used to study electron correlation effects in light-actinide dioxides. The electronic structure of copper systems was studied using resonant inelastic soft X-ray scattering and absorption spectroscopy. It was found that X-ray absorption can be used to monitor changes in the oxidation state, but as differences between systems with the same oxidation state are in many cases small, speciation is uncertain. Therefore, a method utilizing resonant inelastic X-ray scattering as a fingerprint to characterize complex copper systems was developed. The data recorded at certain excitation energies revealed unambiguous spectral fingerprints for different divalent copper systems. These specific spectral fingerprints were then used to study copper films exposed to different solutions. In particular, it was shown that resonant inelastic X-ray scattering can be used in situ to distinguish between CuO and Cu(OH)2, which is difficult with other techniques.
  •  
31.
  • Piccinini, Filippo, et al. (författare)
  • Advanced Cell Classifier : User-Friendly Machine-Learning-Based Software for Discovering Phenotypes in High-Content Imaging Data
  • 2017
  • Ingår i: CELL SYSTEMS. - : CELL PRESS. - 2405-4712. ; 4:6, s. 651-
  • Tidskriftsartikel (refereegranskat)abstract
    • High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org.
  •  
32.
  • Robertson, Stephanie, et al. (författare)
  • Digital image analysis in breast pathology-from image processing techniques to artificial intelligence
  • 2018
  • Ingår i: Translational Research. - : ELSEVIER SCIENCE INC. - 1931-5244 .- 1878-1810. ; 194, s. 19-35
  • Forskningsöversikt (refereegranskat)abstract
    • Breast cancer is the most common malignant disease in women worldwide. In recent decades, earlier diagnosis and better adjuvant therapy have substantially improved patient outcome. Diagnosis by histopathology has proven to be instrumental to guide breast cancer treatment, but new challenges have emerged as our increasing understanding of cancer over the years has revealed its complex nature. As patient demand for personalized breast cancer therapy grows, we face an urgent need for more precise biomarker assessment and more accurate histopathologic breast cancer diagnosis to make better therapy decisions. The digitization of pathology data has opened the door to faster, more reproducible, and more precise diagnoses through computerized image analysis. Software to assist diagnostic breast pathology through image processing techniques has been around for years. But recent breakthroughs in artificial intelligence (AI) promise to fundamentally change the way we detect and treat breast cancer in the near future. Machine learning, a subfield of AI that applies statistical methods to learn from data, has seen an explosion of interest in recent years because of its ability to recognize patterns in data with less need for human instruction. One technique in particular, known as deep learning, has produced groundbreaking results in many important problems including image classification and speech recognition. In this review, we will cover the use of AI and deep learning in diagnostic breast pathology, and other recent developments in digital image analysis.
  •  
33.
  • Salim, Mattie, et al. (författare)
  • Differences and similarities in false interpretations by AI CAD and radiologists in screening mammography
  • 2023
  • Ingår i: British Journal of Radiology. - : Oxford University Press (OUP). - 0007-1285 .- 1748-880X. ; 96:1151
  • Tidskriftsartikel (refereegranskat)abstract
    • OBJECTIVE: We aimed to evaluate the false interpretations between artificial intelligence (AI) and radiologists in screening mammography to get a better understanding of how the distribution of diagnostic mistakes might change when moving from entirely radiologist-driven to AI-integrated breast cancer screening. METHODS AND MATERIALS: This retrospective case-control study was based on a mammography screening cohort from 2008 to 2015. The final study population included screening examinations for 714 women diagnosed with breast cancer and 8029 randomly selected healthy controls. Oversampling of controls was applied to attain a similar cancer proportion as in the source screening cohort. We examined how false-positive (FP) and false-negative (FN) assessments by AI, the first reader (RAD 1) and the second reader (RAD 2), were associated with age, density, tumor histology and cancer invasiveness in a single- and double-reader setting. RESULTS: For each reader, the FN assessments were distributed between low- and high-density females with 53 (42%) and 72 (58%) for AI; 59 (36%) and 104 (64%) for RAD 1 and 47 (36%) and 84 (64%) for RAD 2. The corresponding numbers for FP assessments were 1820 (47%) and 2016 (53%) for AI; 1568 (46%) and 1834 (54%) for RAD 1 and 1190 (43%) and 1610 (58%) for RAD 2. For ductal cancer, the FN assessments were 79 (77%) for AI CAD, 120 (83%) for RAD 1, and 96 (16%) for RAD 2. For the double-reading simulation, the FP assessments were distributed between younger and older females with 2828 (2.5%) and 1554 (1.4%) for RAD 1 + RAD 2; 3850 (3.4%) and 2940 (2.6%) for AI+RAD 1 and 3430 (3%) and 2772 (2.5%) for AI+RAD 2. CONCLUSION: The most pronounced decrease in FN assessments was noted for females over the age of 55 and for high-density women. In conclusion, AI could have an important complementary role when combined with radiologists to increase sensitivity for high-density and older females. 
ADVANCES IN KNOWLEDGE: Our results highlight the potential impact of integrating AI in breast cancer screening, particularly to improve interpretation accuracy. The use of AI could enhance screening outcomes for high-density and older females.
  •  
34.
  • Salim, Mattie, et al. (författare)
  • External Evaluation of 3 Commercial Artificial Intelligence Algorithms for Independent Assessment of Screening Mammograms
  • 2020
  • Ingår i: JAMA Oncology. - : American Medical Association (AMA). - 2374-2437 .- 2374-2445. ; 6:10, s. 1581-
  • Tidskriftsartikel (refereegranskat)abstract
    • Importance: A computer algorithm that performs at or above the level of radiologists in mammography screening assessment could improve the effectiveness of breast cancer screening. Objective: To perform an external evaluation of 3 commercially available artificial intelligence (AI) computer-aided detection algorithms as independent mammography readers and to assess the screening performance when combined with radiologists. Design, Setting, and Participants: This retrospective case-control study was based on a double-reader population-based mammography screening cohort of women screened at an academic hospital in Stockholm, Sweden, from 2008 to 2015. The study included 8805 women aged 40 to 74 years who underwent mammography screening and who did not have implants or prior breast cancer. The study sample included 739 women who were diagnosed as having breast cancer (positive) and a random sample of 8066 healthy controls (negative for breast cancer). Main Outcomes and Measures: Positive follow-up findings were determined by pathology-verified diagnosis at screening or within 12 months thereafter. Negative follow-up findings were determined by a 2-year cancer-free follow-up. Three AI computer-aided detection algorithms (AI-1, AI-2, and AI-3), sourced from different vendors, yielded a continuous score for the suspicion of cancer in each mammography examination. For a decision of normal or abnormal, the cut point was defined by the mean specificity of the first-reader radiologists (96.6%). Results: The median age of study participants was 60 years (interquartile range, 50-66 years) for 739 women who received a diagnosis of breast cancer and 54 years (interquartile range, 47-63 years) for 8066 healthy controls. The cases positive for cancer comprised 618 (84%) screen detected and 121 (16%) clinically detected within 12 months of the screening examination. 
The area under the receiver operating characteristic curve for cancer detection was 0.956 (95% CI, 0.948-0.965) for AI-1, 0.922 (95% CI, 0.910-0.934) for AI-2, and 0.920 (95% CI, 0.909-0.931) for AI-3. At the specificity of the radiologists, the sensitivities were 81.9% for AI-1, 67.0% for AI-2, 67.4% for AI-3, 77.4% for first-reader radiologists, and 80.1% for second-reader radiologists. Combining AI-1 with first-reader radiologists achieved 88.6% sensitivity at 93.0% specificity (abnormal defined by either of the 2 making an abnormal assessment). No other examined combination of AI algorithms and radiologists surpassed this sensitivity level. Conclusions and Relevance: To our knowledge, this study is the first independent evaluation of several AI computer-aided detection algorithms for screening mammography. The results of this study indicated that a commercially available AI computer-aided detection algorithm can assess screening mammograms with a sufficient diagnostic performance to be further evaluated as an independent reader in prospective clinical trials. Combining the first readers with the best algorithm identified more cases positive for cancer than combining the first readers with second readers.
  •  
35.
  • Sirmacek, B., et al. (författare)
  • The Potential of Artificial Intelligence for Achieving Healthy and Sustainable Societies
  • 2023
  • Ingår i: The Ethics of Artificial Intelligence for the Sustainable Development Goals. - : Springer Nature. ; , s. 65-96
  • Bokkapitel (övrigt vetenskapligt/konstnärligt)abstract
    • In this chapter we extend earlier work (Vinuesa et al., Nat Commun 11, 2020) on the potential of artificial intelligence (AI) to achieve the 17 Sustainable Development Goals (SDGs) proposed by the United Nations (UN) for the 2030 Agenda. The present contribution focuses on three SDGs related to healthy and sustainable societies, i.e., SDG 3 (on good health), SDG 11 (on sustainable cities), and SDG 13 (on climate action). This chapter extends the previous study within those three goals and goes beyond the 2030 targets. These SDGs are selected because they are closely related to the coronavirus disease 2019 (COVID-19) pandemic and also to crises like climate change, which constitute important challenges to our society.
  •  
36.
  • Smith, Kevin, 1975-, et al. (författare)
  • Phenotypic Image Analysis Software Tools for Exploring and Understanding Big Image Data from Cell-Based Assays
  • 2018
  • Ingår i: Cell Systems. - : Elsevier. - 2405-4712. ; 6:6, s. 636-653
  • Forskningsöversikt (refereegranskat)abstract
    • Phenotypic image analysis is the task of recognizing variations in cell properties using microscopic image data. These variations, produced through a complex web of interactions between genes and the environment, may hold the key to uncover important biological phenomena or to understand the response to a drug candidate. Today, phenotypic analysis is rarely performed completely by hand. The abundance of high-dimensional image data produced by modern high-throughput microscopes necessitates computational solutions. Over the past decade, a number of software tools have been developed to address this need. They use statistical learning methods to infer relationships between a cell's phenotype and data from the image. In this review, we examine the strengths and weaknesses of non-commercial phenotypic image analysis software, cover recent developments in the field, identify challenges, and give a perspective on future possibilities.
  •  
37.
  • Sorkhei, Moein, et al. (författare)
  • CSAW-M : An Ordinal Classification Dataset for Benchmarking Mammographic Masking of Cancer
  • 2021
  • Ingår i: Conference on Neural Information Processing Systems (NeurIPS) – Datasets and Benchmarks Proceedings, 2021.
  • Konferensbidrag (refereegranskat)abstract
    • Interval and large invasive breast cancers, which are associated with a worse prognosis than other cancers, are usually detected at a late stage due to false-negative assessments of screening mammograms. The missed screening-time detection is commonly caused by the tumor being obscured by its surrounding breast tissues, a phenomenon called masking. To study and benchmark mammographic masking of cancer, in this work we introduce CSAW-M, the largest public mammographic dataset, collected from over 10,000 individuals and annotated with potential masking. In contrast to previous approaches, which measure breast image density as a proxy, our dataset directly provides annotations of masking potential assessments from five specialists. We also trained deep learning models on CSAW-M to estimate the masking level and showed that the estimated masking is significantly more predictive of screening participants diagnosed with interval and large invasive cancers – without being explicitly trained for these tasks – than its breast density counterparts.
  •  
38.
  • Sullivan, Devin P., et al. (författare)
  • Deep learning is combined with massive-scale citizen science to improve large-scale image classification
  • 2018
  • Ingår i: Nature Biotechnology. - : NATURE PUBLISHING GROUP. - 1087-0156 .- 1546-1696. ; 36:9, s. 820-
  • Tidskriftsartikel (refereegranskat)abstract
    • Pattern recognition and classification of images are key challenges throughout the life sciences. We combined two approaches for large-scale classification of fluorescence microscopy images. First, using the publicly available data set from the Cell Atlas of the Human Protein Atlas (HPA), we integrated an image-classification task into a mainstream video game (EVE Online) as a mini-game, named Project Discovery. Participation by 322,006 gamers over 1 year provided nearly 33 million classifications of subcellular localization patterns, including patterns that were not previously annotated by the HPA. Second, we used deep learning to build an automated Localization Cellular Annotation Tool (Loc-CAT). This tool classifies proteins into 29 subcellular localization patterns and can deal efficiently with multi-localization proteins, performing robustly across different cell types. Combining the annotations of gamers and deep learning, we applied transfer learning to create a boosted learner that can characterize subcellular protein distribution with an F1 score of 0.72. We found that engaging players of commercial computer games provided data that augmented deep learning and enabled scalable and readily improved image classification.
  •  
39.
  • Teye, Mattias, et al. (författare)
  • Bayesian Uncertainty Estimation for Batch Normalized Deep Networks
  • 2018
  • Ingår i: 35th International Conference on Machine Learning, ICML 2018. - : International Machine Learning Society (IMLS).
  • Konferensbidrag (refereegranskat)abstract
    • We show that training a deep network using batch normalization is equivalent to approximate inference in Bayesian models. We further demonstrate that this finding allows us to make meaningful estimates of the model uncertainty using conventional architectures, without modifications to the network or the training procedure. Our approach is thoroughly validated by measuring the quality of uncertainty in a series of empirical experiments on different tasks. It outperforms baselines with strong statistical significance, and displays competitive performance with recent Bayesian approaches.
  •  
40.
  • Van der Goten, Lennart Alexander, et al. (författare)
  • Conditional De-Identification of 3D Magnetic Resonance Images
  • 2021
  • Ingår i: 32nd British Machine Vision Conference, BMVC 2021. - : British Machine Vision Association, BMVA.
  • Konferensbidrag (refereegranskat)abstract
    • Privacy protection of medical image data is challenging. Even if metadata is removed, brain scans are vulnerable to attacks that match renderings of the face to facial image databases. Solutions have been developed to de-identify diagnostic scans by obfuscating or removing parts of the face. However, these solutions either fail to reliably hide the patient's identity or are so aggressive that they impair further analyses. We propose a new class of de-identification techniques that, instead of removing facial features, remodels them. Our solution relies on a conditional multi-scale GAN architecture. It takes a patient's MRI scan as input and generates a 3D volume conditioned on the patient's brain, which is preserved exactly, but where the face has been de-identified through remodeling. We demonstrate that our approach preserves privacy far better than existing techniques, without compromising downstream medical analyses. Analyses were run on the OASIS-3 and ADNI corpora.
  •  
41.
  • Van der Goten, Lennart Alexander, et al. (författare)
  • Wide-Range MRI Artifact Removal with Transformers
  • 2022
  • Ingår i: BMVC 2022 - 33rd British Machine Vision Conference Proceedings. - : British Machine Vision Association, BMVA.
  • Konferensbidrag (refereegranskat)abstract
    • Artifacts on magnetic resonance scans are a serious challenge for both radiologists and computer-aided diagnosis systems. Most commonly, artifacts are caused by motion of the patients, but can also arise from device-specific abnormalities such as noise patterns. Irrespective of the source, artifacts can not only render a scan useless, but can potentially induce misdiagnoses if left unnoticed. For instance, an artifact may masquerade as a tumor or other abnormality. Retrospective artifact correction (RAC) is concerned with removing artifacts after the scan has already been taken. In this work, we propose a method capable of retrospectively removing eight common artifacts found in native-resolution MR imagery. Knowledge of the presence or location of a specific artifact is not assumed and the system is, by design, capable of undoing interactions of multiple artifacts. Our method is realized through the design of a novel volumetric transformer-based neural network that generalizes a window-centered approach popularized by the Swin transformer. Unlike Swin, our method is (i) natively volumetric, (ii) geared towards dense prediction tasks instead of classification, and (iii) uses a novel and more global mechanism to enable information exchange between windows. Our experiments show that our reconstructions are considerably better than those attained by ResNet, V-Net, MobileNet-v2, DenseNet, CycleGAN and BicycleGAN. Moreover, we show that the reconstructed images from our model improve the accuracy of FSL BET, a standard skull-stripping method typically applied in diagnostic workflows.
  •  
42.
  • Yang, Jian, et al. (författare)
  • FTO genotype is associated with phenotypic variability of body mass index
  • 2012
  • Ingår i: Nature. - : Springer Science and Business Media LLC. - 0028-0836 .- 1476-4687. ; 490:7419, s. 267-272
  • Tidskriftsartikel (refereegranskat)abstract
    • There is evidence across several species for genetic control of phenotypic variation of complex traits (1-4), such that the variance among phenotypes is genotype dependent. Understanding genetic control of variability is important in evolutionary biology, agricultural selection programmes and human medicine, yet for complex traits, no individual genetic variants associated with variance, as opposed to the mean, have been identified. Here we perform a meta-analysis of genome-wide association studies of phenotypic variation using ~170,000 samples on height and body mass index (BMI) in human populations. We report evidence that the single nucleotide polymorphism (SNP) rs7202116 at the FTO gene locus, which is known to be associated with obesity (as measured by mean BMI for each rs7202116 genotype) (5-7), is also associated with phenotypic variability. We show that the results are not due to scale effects or other artefacts, and find no other experiment-wise significant evidence for effects on variability, either at loci other than FTO for BMI or at any locus for height. The difference in variance for BMI among individuals with opposite homozygous genotypes at the FTO locus is approximately 7%, corresponding to a difference of ~0.5 kilograms in the standard deviation of weight. Our results indicate that genetic variants can be discovered that are associated with variability, and that between-person variability in obesity can partly be explained by the genotype at the FTO locus. The results are consistent with reported FTO by environment interactions for BMI (8), possibly mediated by DNA methylation (9,10). Our BMI results for other SNPs and our height results for all SNPs suggest that most genetic variants, including those that influence mean height or mean BMI, are not associated with phenotypic variance, or that their effects on variability are too small to detect even with sample sizes greater than 100,000.
  •  