SwePub
Search the SwePub database

  Advanced search

Hit list for search "WFRF:(Kiani NA)"

Search: WFRF:(Kiani NA)

  • Results 1-37 of 37
7.
  • Hernández-Orozco, S, et al. (authors)
  • Algorithmic Probability-Guided Machine Learning on Non-Differentiable Spaces
  • 2020
  • In: Frontiers in Artificial Intelligence. - : Frontiers Media SA. - 2624-8212. ; 3, p. 567356-
  • Journal article (peer-reviewed), abstract:
    • We show how complexity theory can be introduced in machine learning to help bring together apparently disparate areas of current research. We show that this model-driven approach may require less training data and can potentially be more generalizable, as it shows greater resilience to random attacks. In an algorithmic space, the order of its elements is given by their algorithmic probability, which arises naturally from computable processes. We investigate the shape of a discrete algorithmic space when performing regression or classification using a loss function parametrized by algorithmic complexity, demonstrating that differentiability is not required to achieve results similar to those obtained using differentiable programming approaches such as deep learning. In doing so we use examples that allow the two approaches to be compared (small ones, given the computational power required for estimations of algorithmic complexity). We find and report that (1) machine learning can successfully be performed on a non-smooth surface using algorithmic complexity; (2) solutions can be found using an algorithmic-probability classifier, establishing a bridge between a fundamentally discrete theory of computability and a fundamentally continuous mathematical theory of optimization methods; (3) an algorithmically directed search technique in non-smooth manifolds can be defined and conducted; and (4) exploitation techniques and numerical methods for algorithmic search can be used to navigate these discrete non-differentiable spaces, with applications to (a) identification of generative rules from data observations; (b) solutions to image classification problems that are more resilient against pixel attacks than neural networks; (c) identification of equation parameters from a small data set in the presence of noise in a continuous ODE system; and (d) classification of Boolean NK networks by (i) network topology, (ii) underlying Boolean function, and (iii) number of incoming edges.
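The loss described in the abstract above can be sketched minimally. This is a hypothetical illustration, not the authors' implementation (which uses CTM/BDM estimates of algorithmic complexity); here zlib-compressed length stands in as a crude, computable upper bound, and a gradient-free search flips bits on a discrete space.

```python
import zlib

# Hedged sketch: compressed length as a stand-in for algorithmic complexity.
def complexity(bits: str) -> int:
    """Upper-bound the algorithmic complexity of a bit string by its
    losslessly compressed length."""
    return len(zlib.compress(bits.encode()))

def algorithmic_loss(candidate: str, target: str) -> int:
    """Conditional-complexity-style loss: extra description length the
    (candidate, target) pair needs beyond the candidate alone."""
    return complexity(candidate + target) - complexity(candidate)

def discrete_search(start: str, target: str, max_steps: int = 100) -> str:
    """Gradient-free hill descent over single bit flips: no differentiability
    is required, only a computable loss on a discrete space."""
    current = start
    for _ in range(max_steps):
        neighbors = [current[:i] + ('1' if current[i] == '0' else '0') + current[i + 1:]
                     for i in range(len(current))]
        best = min(neighbors, key=lambda s: algorithmic_loss(s, target))
        if algorithmic_loss(best, target) >= algorithmic_loss(current, target):
            break  # no neighbor improves the loss; stop
        current = best
    return current
```

On short strings the compressor's fixed overhead dominates, so in practice this sketch only separates candidates at larger sizes; the paper's CTM/BDM estimates are designed precisely to behave better at small scales.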
8.
  • Hernandez-Orozco, S, et al. (authors)
  • Algorithmically probable mutations reproduce aspects of evolution, such as convergence rate, genetic memory and modularity
  • 2018
  • In: Royal Society Open Science. - : The Royal Society. - 2054-5703. ; 5:8, p. 180399-
  • Journal article (peer-reviewed), abstract:
    • Natural selection explains how life has evolved over millions of years from more primitive forms. The speed at which this happens, however, has sometimes defied formal explanations based on random (uniformly distributed) mutations. Here, we investigate the application of a simplicity bias based on a natural but algorithmic distribution of mutations (no recombination) in various examples, particularly binary matrices, in order to compare evolutionary convergence rates. Results on both synthetic and small biological examples indicate an accelerated rate when mutations are not statistically uniform but algorithmically uniform. We show that algorithmic distributions can evolve modularity and genetic memory by preserving structures when they first occur, sometimes leading to an accelerated production of diversity but also to population extinctions, possibly explaining naturally occurring phenomena such as diversity explosions (e.g. the Cambrian) and massive extinctions (e.g. the End Triassic) whose causes are still debated. The natural approach introduced here appears to be a better approximation to biological evolution than models based exclusively on random uniform mutations, and it also approaches a formal version of open-ended evolution based on previous formal results. These results support the suggestion that computation may be an equally important driver of evolution. We also show that applying the method to optimization problems, such as genetic algorithms, has the potential to accelerate the convergence of artificial evolutionary algorithms.
13.
  • Kannan, V, et al. (authors)
  • Conditional Disease Development extracted from Longitudinal Health Care Cohort Data using Layered Network Construction
  • 2016
  • In: Scientific Reports. - : Springer Science and Business Media LLC. - 2045-2322. ; 6, p. 26170-
  • Journal article (peer-reviewed), abstract:
    • Health care data hold great promise for use in clinical decision support systems. However, frequent near-synonymous diagnoses recorded separately, as well as the sheer magnitude and complexity of the disease data, make it challenging to extract non-trivial conclusions beyond confirmatory associations from such a web of interactions. Here we present a systematic methodology to derive statistically valid conditional development of diseases. To this end we utilize a cohort of 5,512,469 individuals followed over 13 years of inpatient care, including data on disability pension and cause of death. By introducing a causal information fraction measure and taking advantage of the composite structure of the ICD codes, we extract an effective directed lower-dimensional network representation (100 nodes and 130 edges) of our cohort. Unpacking composite nodes into bipartite graphs retrieves, for example, that individuals with behavioral disorders are more likely to be followed by prescription drug poisoning episodes, whereas women with leiomyoma were more likely to subsequently experience endometriosis. The conditional disease developments represent putative causal relations, indicating possible novel clinical relationships and pathophysiological associations that have not yet been explored.
15.
  • Khan, SA, et al. (authors)
  • scAEGAN: Unification of single-cell genomics data by adversarial learning of latent space correspondences
  • 2023
  • In: PLoS ONE. - : Public Library of Science (PLoS). - 1932-6203. ; 18:2, p. e0281315-
  • Journal article (peer-reviewed), abstract:
    • Recent progress in single-cell genomics has produced different library protocols and techniques for molecular profiling. We formulate a unifying, data-driven, integrative, and predictive methodology for different libraries, samples, and paired-unpaired data modalities. Our design of scAEGAN combines an autoencoder (AE) network with adversarial learning by a cycleGAN (cGAN) network. The AE learns a low-dimensional embedding of each condition, whereas the cGAN learns a non-linear mapping between the AE representations. We evaluate scAEGAN using simulated data and real scRNA-seq datasets, different library preparations (Fluidigm C1, CelSeq, CelSeq2, SmartSeq), and several data modalities, such as paired scRNA-seq and scATAC-seq. scAEGAN outperforms Seurat 3 in library integration, is more robust against data sparsity, and beats Seurat 4 in integrating paired data from the same cell. Furthermore, in predicting one data modality from another, scAEGAN outperforms Babel. We conclude that scAEGAN surpasses current state-of-the-art methods and unifies integration and prediction challenges.
18.
  • Kiani, NA, et al. (authors)
  • Predictive Systems Toxicology
  • 2018
  • In: Methods in Molecular Biology (Clifton, N.J.). - New York, NY : Springer New York. - 1940-6029. ; 1800, p. 535-557
  • Journal article (peer-reviewed)
22.
  • Kotelnikova, E, et al. (authors)
  • Signaling networks in MS: a systems-based approach to developing new pharmacological therapies
  • 2015
  • In: Multiple Sclerosis (Houndmills, Basingstoke, England). - : SAGE Publications. - 1477-0970 .- 1352-4585. ; 21:2, p. 138-146
  • Journal article (peer-reviewed), abstract:
    • The pathogenesis of multiple sclerosis (MS) involves alterations to multiple pathways and processes, which represents a significant challenge for developing more effective therapies. Systems biology approaches that study pathway dysregulation should offer benefits by integrating molecular networks and dynamic models with current biological knowledge for understanding disease heterogeneity and response to therapy. In MS, abnormalities have been identified in several cytokine-signaling pathways, as well as those of other immune receptors. Among the downstream molecules implicated are Jak/Stat, NF-kB, ERK1/3, p38 or Jun/Fos. Together, these data suggest that MS is likely to be associated with abnormalities in apoptosis/cell death, microglia activation, blood-brain barrier functioning, immune responses, cytokine production, and/or oxidative stress, although which pathways contribute to the cascade of damage and can be modulated remains an open question. While current MS drugs target some of these pathways, others remain untouched. Here, we propose a pragmatic systems analysis approach that involves the large-scale extraction of processes and pathways relevant to MS. These data serve as a scaffold on which computational modeling can be performed to identify disease subgroups based on the contribution of different processes. Such an analysis, targeting the relevant MS-signaling pathways, offers the opportunity to accelerate the development of novel individual or combination therapies.
24.
  • Menden, MP, et al. (authors)
  • Community assessment to advance computational prediction of cancer drug combinations in a pharmacogenomic screen
  • 2019
  • In: Nature Communications. - : Springer Science and Business Media LLC. - 2041-1723. ; 10:1, p. 2674-
  • Journal article (peer-reviewed), abstract:
    • The effectiveness of most cancer targeted therapies is short-lived. Tumors often develop resistance that might be overcome with drug combinations. However, the number of possible combinations is vast, necessitating data-driven approaches to find optimal patient-specific treatments. Here we report AstraZeneca's large drug combination dataset, consisting of 11,576 experiments from 910 combinations across 85 molecularly characterized cancer cell lines, and the results of a DREAM Challenge to evaluate computational strategies for predicting synergistic drug pairs and biomarkers. 160 teams participated, providing comprehensive methodological development and benchmarking. Winning methods incorporate prior knowledge of drug-target interactions. Synergy is predicted with an accuracy matching biological replicates for >60% of combinations. However, 20% of drug combinations are poorly predicted by all methods. Genomic rationales for synergy predictions are identified, including ADAM17 inhibitor antagonism when combined with PIK3CB/D inhibition, in contrast to synergy when combined with other PI3K-pathway inhibitors in PIK3CA-mutant cells.
27.
  • Ruffini, G, et al. (authors)
  • LSD-induced increase of Ising temperature and algorithmic complexity of brain dynamics
  • 2023
  • In: PLoS Computational Biology. - : Public Library of Science (PLoS). - 1553-7358. ; 19:2, p. e1010811-
  • Journal article (peer-reviewed), abstract:
    • A topic of growing interest in computational neuroscience is the discovery of fundamental principles underlying global dynamics and the self-organization of the brain. In particular, the notion that the brain operates near criticality has gained considerable support, and recent work has shown that the dynamics of different brain states may be modeled by pairwise maximum-entropy Ising models at various distances from a phase transition, i.e., from criticality. Here we aim to characterize two brain states (psychedelic-induced and placebo) as captured by functional magnetic resonance imaging (fMRI), with features derived from the Ising spin model formalism (system temperature, critical point, susceptibility) and from algorithmic complexity. We hypothesized, along the lines of the entropic brain hypothesis, that psychedelics drive brain dynamics into a more disordered state at a higher Ising temperature and increased complexity. We analyze resting-state blood-oxygen-level-dependent (BOLD) fMRI data collected in an earlier study from fifteen subjects in a control condition (placebo) and during ingestion of lysergic acid diethylamide (LSD). Working with the automated anatomical labeling (AAL) brain parcellation, we first create "archetype" Ising models representative of the entire dataset (global) and of the data in each condition. Remarkably, we find that such archetypes exhibit a strong correlation with an average structural connectome template obtained from dMRI (r = 0.6). We compare the archetypes from the two conditions and find that the Ising connectivity in the LSD condition is lower than in the placebo one, especially in homotopic links (interhemispheric connectivity), reflecting a significant decrease of homotopic functional connectivity in the LSD condition. The global archetype is then personalized for each individual and condition by adjusting the system temperature. The resulting temperatures are all near but above the critical point of the model, in the paramagnetic (disordered) phase. The individualized Ising temperatures are higher in the LSD condition than in the placebo condition (p = 9 × 10⁻⁵). Next, we estimate the Lempel-Ziv-Welch (LZW) complexity of the binarized BOLD data and of the synthetic data generated with the individualized model using the Metropolis algorithm for each participant and condition. The LZW complexity computed from experimental data reveals a weak statistical relationship with condition (p = 0.04, one-tailed Wilcoxon test) and none with Ising temperature (r(13) = 0.13, p = 0.65), presumably because of the limited length of the BOLD time series. Similarly, we explore complexity using the block decomposition method (BDM), a more advanced method for estimating algorithmic complexity. The BDM complexity of the experimental data displays a significant correlation with Ising temperature (r(13) = 0.56, p = 0.03) and a weak but significant correlation with condition (p = 0.04, one-tailed Wilcoxon test). This study suggests that LSD increases the complexity of brain dynamics by loosening interhemispheric connectivity, especially homotopic links. In agreement with earlier work using the Ising formalism with BOLD data, we find that the brain state in the placebo condition is already above the critical point, with LSD resulting in a shift further away from criticality into a more disordered state.
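The LZW complexity used in the abstract above can be sketched as a phrase count over an LZW parse of a binarized signal. This is a simplified illustration of the general technique, not the authors' exact pipeline; the median-threshold `binarize` helper is an assumption about how a BOLD series might be turned into bits.

```python
def lzw_complexity(s: str) -> int:
    """Count the phrases emitted by an LZW parse of s. Richer, more
    disordered signals parse into more distinct phrases, so the count
    serves as a simple computable complexity proxy."""
    dictionary = {ch for ch in s}  # seed with the observed alphabet
    phrases = 0
    w = ""
    for ch in s:
        if w + ch in dictionary:
            w += ch                # keep extending the current phrase
        else:
            phrases += 1           # emit a code for w
            dictionary.add(w + ch)  # learn the new phrase
            w = ch
    return phrases + (1 if w else 0)

def binarize(signal) -> str:
    """Hypothetical helper: threshold a numeric series at its median,
    yielding the bit string whose complexity is estimated."""
    vals = sorted(signal)
    threshold = vals[len(vals) // 2]
    return "".join("1" if x > threshold else "0" for x in signal)
```

For example, a constant or strictly periodic series yields far fewer phrases than a noisy one of the same length, which is the contrast the complexity comparison between conditions relies on.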
29.
  • Zenil, H, et al. (authors)
  • A Decomposition Method for Global Evaluation of Shannon Entropy and Local Estimations of Algorithmic Complexity
  • 2018
  • In: Entropy (Basel, Switzerland). - : MDPI AG. - 1099-4300. ; 20:8
  • Journal article (peer-reviewed), abstract:
    • We investigate the properties of a Block Decomposition Method (BDM), which extends the power of a Coding Theorem Method (CTM) that approximates local estimations of algorithmic complexity based on Solomonoff–Levin's theory of algorithmic probability, providing a closer connection to algorithmic complexity than previous attempts based on statistical regularities such as popular lossless compression schemes. The strategy behind BDM is to find small computer programs that produce the components of a larger, decomposed object. The set of short computer programs can then be artfully arranged in sequence so as to produce the original object. We show that the method provides efficient estimations of algorithmic complexity but that it performs like Shannon entropy when it loses accuracy. We estimate errors and study the behaviour of BDM for different boundary conditions, all of which are compared and assessed in detail. The measure may be adapted for use with multi-dimensional objects beyond strings, such as arrays and tensors. To test the measure we demonstrate the power of CTM on low-algorithmic-randomness objects that are assigned maximal entropy (e.g., π) but whose numerical approximations are closer to the theoretical low-algorithmic-randomness expectation. We also test the measure on larger objects, including dual, isomorphic and cospectral graphs, for which we know that algorithmic randomness is low. We also release implementations of the methods in most major programming languages (Wolfram Language/Mathematica, MATLAB, R, Perl, Python, Pascal, C++ and Haskell) and an online algorithmic complexity calculator.
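The BDM aggregation described in the abstract above can be sketched in a few lines. The aggregation formula (sum over unique blocks of CTM(block) plus log2 of the block's multiplicity) follows the method's published form; the `ctm_estimate` function below is a placeholder invented for illustration, since real BDM looks small blocks up in precomputed CTM tables rather than computing a score like this.

```python
from collections import Counter
from math import log2

def ctm_estimate(block: str) -> float:
    """Placeholder for a CTM table lookup: real BDM uses precomputed
    algorithmic-probability estimates for small blocks; this toy score
    (distinct symbols times log-length) only illustrates the plumbing."""
    return len(set(block)) * log2(len(block) + 1)

def bdm(s: str, block_size: int = 4) -> float:
    """BDM aggregation: partition s into blocks, then sum, over unique
    blocks, CTM(block) + log2(multiplicity of that block)."""
    counts = Counter(s[i:i + block_size] for i in range(0, len(s), block_size))
    return sum(ctm_estimate(block) + log2(n) for block, n in counts.items())
```

The log-of-multiplicity term is what makes repeated blocks cheap: a string of one repeated block pays its block cost once plus a logarithmic repetition cost, whereas a string of all-distinct blocks pays the full cost for each.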
30.
  • Zenil, H, et al. (authors)
  • A Review of Graph and Network Complexity from an Algorithmic Information Perspective
  • 2018
  • In: Entropy (Basel, Switzerland). - : MDPI AG. - 1099-4300. ; 20:8
  • Journal article (peer-reviewed), abstract:
    • Information-theoretic measures have been useful in quantifying network complexity. Here we briefly survey and contrast (algorithmic) information-theoretic methods that have been used to characterize graphs and networks. We illustrate the strengths and limitations of Shannon's entropy, lossless compressibility and algorithmic complexity when used to identify aspects and properties of complex networks. We review the fragility of computable measures on the one hand and the invariant properties of algorithmic measures on the other, demonstrating how current approaches to algorithmic complexity are misguided and suffer from limitations similar to those of traditional statistical approaches such as Shannon entropy. Finally, we review some current definitions of algorithmic complexity that are used in analyzing labelled and unlabelled graphs. This analysis opens up several new opportunities to advance beyond traditional measures.
36.
  • Zenil, H, et al. (authors)
  • Symmetry and Correspondence of Algorithmic Complexity over Geometric, Spatial and Topological Representations
  • 2018
  • In: Entropy (Basel, Switzerland). - : MDPI AG. - 1099-4300. ; 20:7
  • Journal article (peer-reviewed), abstract:
    • We introduce a definition of algorithmic symmetry in the context of geometric and spatial complexity, able to capture mathematical aspects of different objects, using polyominoes and polyhedral graphs as case studies. We review, study and apply a method for approximating the algorithmic complexity (also known as Kolmogorov–Chaitin complexity) of graphs and networks based on the concept of Algorithmic Probability (AP). AP is a concept (and method) capable of recursively enumerating all properties of a computable (causal) nature beyond statistical regularities. We explore the connections of algorithmic complexity, both theoretical and numerical, with geometric properties, mainly symmetry and topology, from an (algorithmic) information-theoretic perspective. We show that approximations to algorithmic complexity by lossless compression and an Algorithmic-Probability-based method can characterize spatial, geometric, symmetric and topological properties of mathematical objects and graphs.
37.
  • Zenil, H, et al. (authors)
  • The Thermodynamics of Network Coding, and an Algorithmic Refinement of the Principle of Maximum Entropy
  • 2019
  • In: Entropy (Basel, Switzerland). - : MDPI AG. - 1099-4300. ; 21:6
  • Journal article (peer-reviewed), abstract:
    • The principle of maximum entropy (Maxent) is often used to obtain prior probability distributions, as a method to obtain a Gibbs measure under some restriction, giving the probability that a system will be in a certain state compared to the rest of the elements in the distribution. Because classical entropy-based Maxent collapses cases, confounding all distinct degrees of randomness and pseudo-randomness, here we take into consideration the generative mechanism of the systems considered in the ensemble. This separates objects that may comply with the principle under some restriction and whose entropy is maximal, but that may be generated recursively, from those that are actually algorithmically random, offering a refinement of classical Maxent. We take advantage of a causal algorithmic calculus to derive a thermodynamic-like result based on how difficult it is to reprogram a computer code. Using the distinction between computable and algorithmic randomness, we quantify the cost in information loss associated with reprogramming. To illustrate this, we apply the algorithmic refinement of Maxent to graphs and introduce a Maximal Algorithmic Randomness Preferential Attachment (MARPA) algorithm, a generalisation of previous approaches. We discuss practical implications of the evaluation of network randomness. Our analysis suggests that the reprogrammability asymmetry appears to originate from a non-monotonic relationship to algorithmic probability. Our analysis motivates further study of the origin and consequences of these asymmetries, of reprogrammability, and of computation.