SwePub
Search the SwePub database


Results list for the search "WFRF:(Zenil H)"


  • Result 1-25 of 38
1.
  • Menden, MP, et al. (author)
  • Community assessment to advance computational prediction of cancer drug combinations in a pharmacogenomic screen
  • 2019
  • In: Nature communications. - : Springer Science and Business Media LLC. - 2041-1723. ; 10:1, s. 2674-
  • Journal article (peer-reviewed); abstract:
    • The effectiveness of most cancer targeted therapies is short-lived. Tumors often develop resistance that might be overcome with drug combinations. However, the number of possible combinations is vast, necessitating data-driven approaches to find optimal patient-specific treatments. Here we report AstraZeneca’s large drug combination dataset, consisting of 11,576 experiments from 910 combinations across 85 molecularly characterized cancer cell lines, and results of a DREAM Challenge to evaluate computational strategies for predicting synergistic drug pairs and biomarkers. 160 teams participated, providing comprehensive methodological development and benchmarking. Winning methods incorporate prior knowledge of drug-target interactions. Synergy is predicted with an accuracy matching biological replicates for >60% of combinations. However, 20% of drug combinations are poorly predicted by all methods. Genomic rationales for synergy predictions are identified, including ADAM17 inhibitor antagonism when combined with PIK3CB/D inhibition, in contrast to synergy when combined with other PI3K-pathway inhibitors in PIK3CA mutant cells.
  •  
2.
  •  
3.
  • Abrahao, FS, et al. (author)
  • Algorithmic Information Distortions in Node-Aligned and Node-Unaligned Multidimensional Networks
  • 2021
  • In: Entropy (Basel, Switzerland). - : MDPI AG. - 1099-4300. ; 23:7
  • Journal article (peer-reviewed); abstract:
    • In this article, we investigate limitations of importing methods based on algorithmic information theory from monoplex networks into multidimensional networks (such as multilayer networks) that have a large number of extra dimensions (i.e., aspects). In the worst-case scenario, it has been previously shown that node-aligned multidimensional networks with non-uniform multidimensional spaces can display exponentially larger algorithmic information (or lossless compressibility) distortions with respect to their isomorphic monoplex networks, so that these distortions grow at least linearly with the number of extra dimensions. In the present article, we demonstrate that node-unaligned multidimensional networks, either with uniform or non-uniform multidimensional spaces, can also display exponentially larger algorithmic information distortions with respect to their isomorphic monoplex networks. However, unlike the node-aligned non-uniform case studied in previous work, these distortions in the node-unaligned case grow at least exponentially with the number of extra dimensions. On the other hand, for node-aligned multidimensional networks with uniform multidimensional spaces, we demonstrate that any distortion can only grow up to a logarithmic order of the number of extra dimensions. Thus, these results establish that isomorphisms between finite multidimensional networks and finite monoplex networks do not preserve algorithmic information in general and highlight that the algorithmic information of the multidimensional space itself needs to be taken into account in multidimensional network complexity analysis.
  •  
4.
  • Adams, A, et al. (author)
  • Formal Definitions of Unbounded Evolution and Innovation Reveal Universal Mechanisms for Open-Ended Evolution in Dynamical Systems
  • 2017
  • In: Scientific reports. - : Springer Science and Business Media LLC. - 2045-2322. ; 7:1, s. 997-
  • Journal article (peer-reviewed); abstract:
    • Open-ended evolution (OEE) is relevant to a variety of biological, artificial and technological systems, but has been challenging to reproduce in silico. Most theoretical efforts focus on key aspects of open-ended evolution as it appears in biology. We recast the problem as a more general one in dynamical systems theory, providing simple criteria for open-ended evolution based on two hallmark features: unbounded evolution and innovation. We define unbounded evolution as patterns that are non-repeating within the expected Poincaré recurrence time of an isolated system, and innovation as trajectories not observed in isolated systems. As a case study, we implement novel variants of cellular automata (CA) where the update rules are allowed to vary with time in three alternative ways. Each is capable of generating conditions for open-ended evolution, but they vary in their ability to do so. We find that state-dependent dynamics, regarded as a hallmark of life, statistically outperforms other candidate mechanisms, and is the only mechanism to produce open-ended evolution in a scalable manner, essential to the notion of ongoing evolution. This analysis suggests a new framework for unifying mechanisms for generating OEE with features distinctive to life and its artifacts, with broad applicability to biological and artificial systems.
  •  
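The construction this abstract hinges on, a cellular automaton whose update rule is allowed to change over time, is concrete enough to sketch. The exact rule-switching schemes used in the paper are not spelled out in the abstract, so the Python sketch below only illustrates the idea: the rule set, lattice width, step count and state-to-rule mapping are illustrative assumptions, not the authors' choices.

```python
# Minimal sketch (not the paper's construction): an elementary cellular automaton
# whose update rule may change over time, either at random or as a function of the
# current state ("state-dependent" dynamics, the mechanism the abstract highlights).
# The rule set, width, step count and state-to-rule map are illustrative choices.
import random

def step(state, rule):
    """One update of an elementary CA (circular boundary) under Wolfram rule `rule`."""
    n = len(state)
    return [(rule >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
            for i in range(n)]

def run(width=64, steps=200, mode="state", rules=(110, 30, 90, 184), seed=0):
    random.seed(seed)
    state = [random.randint(0, 1) for _ in range(width)]
    history, rule = [], rules[0]
    for _ in range(steps):
        if mode == "random":        # rule redrawn independently at every step
            rule = random.choice(rules)
        elif mode == "state":       # rule chosen as a function of the state itself
            rule = rules[sum(state) % len(rules)]
        # mode == "fixed": keep the initial rule throughout
        state = step(state, rule)
        history.append(state)
    return history

if __name__ == "__main__":
    for mode in ("fixed", "random", "state"):
        history = run(mode=mode)
        print(mode, "distinct configurations:", len({tuple(s) for s in history}))
```

Counting distinct configurations under each mode is only a crude stand-in for the paper's criteria, which are defined in terms of Poincaré recurrence times and of trajectories unreachable by any isolated, fixed-rule system.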
5.
  •  
6.
  •  
7.
  •  
8.
  • Hernández-Orozco, S, et al. (author)
  • Algorithmic Probability-Guided Machine Learning on Non-Differentiable Spaces
  • 2020
  • In: Frontiers in artificial intelligence. - : Frontiers Media SA. - 2624-8212. ; 3, s. 567356-
  • Journal article (peer-reviewed); abstract:
    • We show how complexity theory can be introduced in machine learning to help bring together apparently disparate areas of current research. We show that this model-driven approach may require less training data and can potentially be more generalizable as it shows greater resilience to random attacks. In an algorithmic space, the ordering of its elements is given by their algorithmic probability, which arises naturally from computable processes. We investigate the shape of a discrete algorithmic space when performing regression or classification using a loss function parametrized by algorithmic complexity, demonstrating that the property of differentiation is not required to achieve results similar to those obtained using differentiable programming approaches such as deep learning. In doing so we use examples which enable the two approaches to be compared (small, given the computational power required for estimations of algorithmic complexity). We find and report that 1) machine learning can successfully be performed on a non-smooth surface using algorithmic complexity; 2) solutions can be found using an algorithmic-probability classifier, establishing a bridge between a fundamentally discrete theory of computability and a fundamentally continuous mathematical theory of optimization methods; 3) a formulation of an algorithmically directed search technique in non-smooth manifolds can be defined and conducted; 4) exploitation techniques and numerical methods for algorithmic search to navigate these discrete non-differentiable spaces can be performed; with applications to (a) the identification of generative rules from data observations; (b) solutions to image classification problems that are more resilient to pixel attacks than neural networks; (c) the identification of equation parameters from a small data set in the presence of noise in a continuous ODE system; and (d) the classification of Boolean NK networks by (1) network topology, (2) underlying Boolean function, and (3) number of incoming edges.
  •  
9.
  • Hernandez-Orozco, S, et al. (author)
  • Algorithmically probable mutations reproduce aspects of evolution, such as convergence rate, genetic memory and modularity
  • 2018
  • In: Royal Society open science. - : The Royal Society. - 2054-5703. ; 5:8, s. 180399-
  • Journal article (peer-reviewed); abstract:
    • Natural selection explains how life has evolved over millions of years from more primitive forms. The speed at which this happens, however, has sometimes defied formal explanations when based on random (uniformly distributed) mutations. Here, we investigate the application of a simplicity bias based on a natural but algorithmic distribution of mutations (no recombination) in various examples, particularly binary matrices, in order to compare evolutionary convergence rates. Results both on synthetic and on small biological examples indicate an accelerated rate when mutations are not statistically uniform but algorithmically uniform. We show that algorithmic distributions can evolve modularity and genetic memory by preservation of structures when they first occur, sometimes leading to an accelerated production of diversity but also to population extinctions, possibly explaining naturally occurring phenomena such as diversity explosions (e.g. the Cambrian) and massive extinctions (e.g. the End Triassic) whose causes are still debated. The natural approach introduced here appears to be a better approximation to biological evolution than models based exclusively upon random uniform mutations, and it also approaches a formal version of open-ended evolution based on previous formal results. These results lend support to suggestions that computation may be an equally important driver of evolution. We also show that applying the method to optimization problems, such as genetic algorithms, has the potential to accelerate convergence of artificial evolutionary algorithms.
  •  
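The core idea, replacing uniformly random mutations with mutations weighted by a simplicity bias, can be sketched in a few lines. The sketch below is not the authors' implementation: the complexity term is a zlib-based stand-in for the algorithmic-probability estimates used in the paper, and the matrix size, candidate count, acceptance rule and generation budget are arbitrary illustrative choices.

```python
# Hedged sketch of the idea, not the authors' implementation: evolve a binary matrix
# toward a target, drawing each mutation with weight 2**(-C(candidate)) instead of
# uniformly. C is a crude stand-in (zlib compressed length); the paper uses
# algorithmic-probability estimates. Sizes, budgets and the acceptance rule are arbitrary.
import random
import zlib

def complexity(matrix):
    """Crude complexity proxy: compressed length of the flattened bit string."""
    bits = "".join("".join(map(str, row)) for row in matrix)
    return len(zlib.compress(bits.encode()))

def mutate(matrix, candidates=32):
    """Pick one single-bit flip among random candidates, biased toward simpler results."""
    n, m = len(matrix), len(matrix[0])
    flips, weights = [], []
    for _ in range(candidates):
        i, j = random.randrange(n), random.randrange(m)
        child = [row[:] for row in matrix]
        child[i][j] ^= 1
        flips.append(child)
        weights.append(2.0 ** (-complexity(child)))
    return random.choices(flips, weights=weights, k=1)[0]

def evolve(target, generations=2000, seed=1):
    random.seed(seed)
    dist = lambda a: sum(x != y for ra, rt in zip(a, target) for x, y in zip(ra, rt))
    n, m = len(target), len(target[0])
    current = [[random.randint(0, 1) for _ in range(m)] for _ in range(n)]
    for g in range(1, generations + 1):
        child = mutate(current)
        if dist(child) <= dist(current):   # plain hill climb on Hamming distance
            current = child
        if dist(current) == 0:
            return g, 0
    return generations, dist(current)

if __name__ == "__main__":
    target = [[(i + j) % 2 for j in range(8)] for i in range(8)]   # 8x8 checkerboard
    gens, remaining = evolve(target)
    print(f"after {gens} generations, {remaining} bits still differ from the target")
```

Replacing the 2**(-complexity) weights with equal weights recovers the uniform-mutation baseline that the paper compares against.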
10.
  •  
11.
  • Joosten, JJ, et al. (author)
  • Fractal Dimension versus Process Complexity
  • 2016
  • In: ADVANCES IN MATHEMATICAL PHYSICS. - : Hindawi Limited. - 1687-9120 .- 1687-9139. ; 2016
  • Journal article (other academic/artistic); abstract:
    • We look at small Turing machines (TMs) that work with just two colors (alphabet symbols) and either two or three states. For any particular such machine τ and any particular input x, we consider what we call the space-time diagram, which is basically the collection of consecutive tape configurations of the computation τ(x). In our setting, it makes sense to define a fractal dimension for a Turing machine as the limiting fractal dimension for the corresponding space-time diagrams. It turns out that there is a very strong relation between the fractal dimension of a Turing machine of the above-specified type and its runtime complexity. In particular, a TM with three states and two colors runs in at most linear time if and only if its dimension is 2, and its dimension is 1 if and only if it runs in superpolynomial time and uses polynomial space. If a TM runs in time O(x^n), we have empirically verified that the corresponding dimension is (n+1)/n, a result that we can only partially prove. We find the results presented here remarkable because they relate two completely different complexity measures: the geometrical fractal dimension on one side versus the time complexity of a computation on the other side.
  •  
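Both objects the abstract relies on, the space-time diagram of a small machine and its fractal (box-counting) dimension, are straightforward to compute for bounded runs. The sketch below is purely illustrative: the transition table is a trivial right-moving one-state machine rather than any of the (2,2) or (3,2) machines surveyed in the paper, and the step count and box sizes are arbitrary. For this machine the diagram is a filled triangle, so the estimated dimension comes out close to 2.

```python
# Illustrative sketch of the two objects the abstract uses: simulate a small two-colour
# Turing machine, stack successive tape configurations into a space-time diagram, and
# estimate its box-counting dimension. The transition table is a deliberately trivial
# right-moving machine, not one of the machines studied in the paper.
import math

# delta[(state, symbol)] = (write, move, next_state); move is -1 (left) or +1 (right)
DELTA = {("A", 0): (1, +1, "A"),
         ("A", 1): (1, +1, "A")}

def space_time_diagram(delta, steps=256, width=512):
    tape, head, state = [0] * width, width // 2, "A"
    diagram = [tape[:]]
    for _ in range(steps):
        write, move, state = delta[(state, tape[head])]
        tape[head] = write
        head += move
        diagram.append(tape[:])
    return diagram

def box_counting_dimension(diagram, sizes=(1, 2, 4, 8, 16, 32)):
    """Least-squares slope of log N(s) versus log(1/s) over boxes containing a 1."""
    points = {(r, c) for r, row in enumerate(diagram) for c, v in enumerate(row) if v}
    xs, ys = [], []
    for s in sizes:
        boxes = {(r // s, c // s) for r, c in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

if __name__ == "__main__":
    d = space_time_diagram(DELTA)
    print("estimated box-counting dimension:", round(box_counting_dimension(d), 2))
```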
12.
  •  
13.
  • Kiani, NA, et al. (author)
  • Predictive Systems Toxicology
  • 2018
  • In: Methods in molecular biology (Clifton, N.J.). - New York, NY : Springer New York. - 1940-6029. ; 1800, s. 535-557
  • Journal article (peer-reviewed)
  •  
14.
  •  
15.
  •  
16.
  • Masoudi-Nejad, A, et al. (author)
  • Flow of Information in Biological Systems
  • 2016
  • In: Seminars in cell & developmental biology. - : Elsevier BV. - 1096-3634 .- 1084-9521. ; 51, s. 1-2
  • Journal article (other academic/artistic)
  •  
17.
  •  
18.
  •  
19.
  •  
20.
  • Soler-Toscano, F, et al. (author)
  • A Computable Measure of Algorithmic Probability by Finite Approximations with an Application to Integer Sequences
  • 2017
  • In: COMPLEXITY. - : Hindawi Limited. - 1076-2787 .- 1099-0526.
  • Journal article (other academic/artistic); abstract:
    • Given the widespread use of lossless compression algorithms to approximate algorithmic (Kolmogorov-Chaitin) complexity and that, usually, generic lossless compression algorithms fall short at characterizing features other than statistical ones, making them no different from entropy evaluations, here we explore an alternative and complementary approach. We study formal properties of a Levin-inspired measure m calculated from the output distribution of small Turing machines. We introduce and justify finite approximations m_k that have been used in some applications as an alternative to lossless compression algorithms for approximating algorithmic (Kolmogorov-Chaitin) complexity. We provide proofs of the relevant properties of both m and m_k and compare them to Levin’s Universal Distribution. We provide error estimations of m_k with respect to m. Finally, we present an application to integer sequences from the On-Line Encyclopedia of Integer Sequences, which suggests that our AP-based measures may characterize nonstatistical patterns, and we report interesting correlations with textual, function, and program description lengths of the said sequences.
  •  
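Because the measure m is defined from the output distribution of small Turing machines, a toy version of the enumeration can be sketched directly. The following is only in the spirit of the Coding Theorem Method: the machine formalism (halting encoded as an extra target state), the step bound and the output convention (the visited stretch of tape) are simplifying assumptions, so the values it prints will not match the published distributions.

```python
# Toy sketch in the spirit of the Coding Theorem Method: enumerate every 2-state,
# 2-colour Turing machine (halting encoded as an extra target state), run each on a
# blank tape, and turn output frequencies into m(s) and the estimate -log2 m(s).
# The formalism, step bound and output convention are simplifying assumptions, so the
# numbers will not match the published distributions.
import itertools
import math
from collections import Counter

MAX_STEPS = 50                     # generous; halting (2,2) machines stop after a few steps
STATES, SYMBOLS, HALT = (0, 1), (0, 1), -1
# one transition entry: (write, move, next_state), where next_state may be HALT
ENTRIES = [(w, mv, nxt) for w in SYMBOLS for mv in (-1, +1) for nxt in (0, 1, HALT)]

def run(table):
    """Run a machine given as {(state, symbol): (write, move, next)}; return its output."""
    tape, head, state = {}, 0, 0
    lo = hi = 0
    for _ in range(MAX_STEPS):
        write, move, state = table[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        lo, hi = min(lo, head), max(hi, head)
        if state == HALT:          # output: the visited stretch of tape
            return "".join(str(tape.get(i, 0)) for i in range(lo, hi + 1))
    return None                    # treated as non-halting

counts = Counter()
keys = [(s, c) for s in STATES for c in SYMBOLS]
for combo in itertools.product(ENTRIES, repeat=len(keys)):
    out = run(dict(zip(keys, combo)))
    if out is not None:
        counts[out] += 1

total = sum(counts.values())
for s, c in counts.most_common(5):
    print(f"m({s!r}) = {c / total:.4f}   -log2 m = {-math.log2(c / total):.2f} bits")
```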
21.
  •  
22.
  • Uthamacumaran, A, et al. (author)
  • A Review of Mathematical and Computational Methods in Cancer Dynamics
  • 2022
  • In: Frontiers in oncology. - : Frontiers Media SA. - 2234-943X. ; 12, s. 850731-
  • Journal article (peer-reviewed); abstract:
    • Cancers are complex adaptive diseases regulated by the nonlinear feedback systems between genetic instabilities, environmental signals, cellular protein flows, and gene regulatory networks. Understanding the cybernetics of cancer requires the integration of information dynamics across multidimensional spatiotemporal scales, including genetic, transcriptional, metabolic, proteomic, epigenetic, and multi-cellular networks. However, the time-series analysis of these complex networks remains vastly absent in cancer research. With longitudinal screening and time-series analysis of cellular dynamics, universally observed causal patterns pertaining to dynamical systems may self-organize in the signaling or gene expression state-space of cancer-triggering processes. A class of these patterns, strange attractors, may be mathematical biomarkers of cancer progression. The emergence of intracellular chaos and chaotic cell population dynamics remains a new paradigm in systems medicine. As such, chaotic and complex dynamics are discussed as mathematical hallmarks of cancer cell fate dynamics herein. Given the assumption that time-resolved single-cell datasets are made available, a survey of interdisciplinary tools and algorithms from complexity theory is presented to investigate critical phenomena and chaotic dynamics in cancer ecosystems. To conclude, the perspective cultivates an intuition for computational systems oncology in terms of nonlinear dynamics, information theory, inverse problems, and complexity. We highlight the limitations we see in the area of statistical machine learning, but also the opportunity of combining it with the symbolic computational power offered by the mathematical tools explored.
  •  
23.
  •  
24.
  •  
25.
  • Zenil, H, et al. (author)
  • A Decomposition Method for Global Evaluation of Shannon Entropy and Local Estimations of Algorithmic Complexity
  • 2018
  • In: Entropy (Basel, Switzerland). - : MDPI AG. - 1099-4300. ; 20:8
  • Journal article (peer-reviewed); abstract:
    • We investigate the properties of a Block Decomposition Method (BDM), which extends the power of a Coding Theorem Method (CTM) that approximates local estimations of algorithmic complexity based on Solomonoff–Levin’s theory of algorithmic probability, providing a closer connection to algorithmic complexity than previous attempts based on statistical regularities such as popular lossless compression schemes. The strategy behind BDM is to find small computer programs that produce the components of a larger, decomposed object. The set of short computer programs can then be artfully arranged in sequence so as to produce the original object. We show that the method provides efficient estimations of algorithmic complexity but that it performs like Shannon entropy when it loses accuracy. We estimate errors and study the behaviour of BDM for different boundary conditions, all of which are compared and assessed in detail. The measure may be adapted for use with objects more multi-dimensional than strings, such as arrays and tensors. To test the measure we demonstrate the power of CTM on low algorithmic-randomness objects that are assigned maximal entropy (e.g., π) but whose numerical approximations are closer to the theoretical low algorithmic-randomness expectation. We also test the measure on larger objects, including dual, isomorphic and cospectral graphs, for which we know that algorithmic randomness is low. We also release implementations of the methods in most major programming languages, including Wolfram Language (Mathematica), Matlab, R, Perl, Python, Pascal, C++, and Haskell, as well as an online algorithmic complexity calculator.
  •  
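The abstract spells out the strategy behind BDM, decompose an object into small blocks, estimate each block via CTM, and aggregate, closely enough to sketch. The sketch below is not the released implementation: the CTM_TABLE values are made-up placeholders and the 4-bit block size is arbitrary; real use depends on the precomputed CTM distributions behind the implementations and online calculator the abstract mentions. The aggregation, CTM(block) + log2(multiplicity) summed over distinct blocks, follows the BDM formula described in the paper.

```python
# Minimal sketch of the Block Decomposition Method as summarised above: partition a
# string into small blocks, look up a CTM value for each distinct block, and aggregate
# as CTM(block) + log2(multiplicity). The CTM_TABLE values are made-up placeholders;
# real applications use the precomputed distributions behind the implementations and
# online calculator mentioned in the abstract.
import math
from collections import Counter

CTM_TABLE = {   # hypothetical CTM estimates (in bits) for a few 4-bit blocks
    "0000": 3.0, "1111": 3.0, "0101": 4.5, "1010": 4.5,
    "0011": 5.0, "1100": 5.0, "0110": 5.5, "1001": 5.5,
}
DEFAULT = 8.0   # fallback for blocks missing from the toy table

def bdm(string, block_size=4):
    """BDM estimate: sum over distinct blocks of CTM(block) + log2(#occurrences)."""
    blocks = [string[i:i + block_size] for i in range(0, len(string), block_size)]
    counts = Counter(b for b in blocks if len(b) == block_size)   # drop any ragged tail
    return sum(CTM_TABLE.get(b, DEFAULT) + math.log2(n) for b, n in counts.items())

if __name__ == "__main__":
    print("repetitive string:", bdm("0000" * 8))
    print("mixed string     :", bdm("0000111101011001" * 2))
```

On the toy table the repeated block scores far lower than a mixed string of the same length, which is the qualitative behaviour the method is designed to capture, while the log2 multiplicity term keeps repetition from being charged at full CTM cost each time it occurs.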
  • Result 1-25 of 38
