SwePub
Search the SwePub database


Hit list for the search "L773:0004 5411"

Search: L773:0004 5411

  • Results 1-10 of 14
1.
  • Abdulla, Parosh, 1961-, et al. (author)
  • Source Sets: A Foundation for Optimal Dynamic Partial Order Reduction
  • 2017
  • In: Journal of the ACM. - : Association for Computing Machinery (ACM). - 0004-5411 .- 1557-735X. ; 64:4
  • Journal article (peer-reviewed), abstract:
    • Stateless model checking is a powerful method for program verification that, however, suffers from an exponential growth in the number of explored executions. A successful technique for reducing this number, while still maintaining complete coverage, is Dynamic Partial Order Reduction (DPOR), an algorithm originally introduced by Flanagan and Godefroid in 2005 and since then not only used as a point of reference but also extended by various researchers. In this article, we present a new DPOR algorithm, which is the first to be provably optimal in that it always explores the minimal number of executions. It is based on a novel class of sets, called source sets, that replace the role of persistent sets in previous algorithms. We begin by showing how to modify the original DPOR algorithm to work with source sets, resulting in an efficient and simple-to-implement algorithm, called source-DPOR. Subsequently, we enhance this algorithm with a novel mechanism, called wakeup trees, that allows the resulting algorithm, called optimal-DPOR, to achieve optimality. Both algorithms are then extended to computational models where processes may disable each other, for example, via locks. Finally, we discuss tradeoffs of the source- and optimal-DPOR algorithms and present programs that illustrate significant time and space performance differences between them. We have implemented both algorithms in a publicly available stateless model checking tool for Erlang programs, while the source-DPOR algorithm is at the core of a publicly available stateless model checking tool for C/pthread programs running on machines with relaxed memory models. Experiments show that source sets significantly increase the performance of stateless model checking compared to using the original DPOR algorithm and that wakeup trees incur only a small overhead in both time and space in practice.
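The gap this optimality result closes is between raw interleavings and Mazurkiewicz traces: an optimal DPOR explores exactly one execution per trace. Below is a minimal Python sketch of that counting (an illustration of the equivalence being exploited, not of the source-DPOR algorithm itself); the two thread programs are invented for the example.

# Events are (thread, op, var). Two events are dependent iff they belong
# to the same thread (program order) or touch the same variable with at
# least one write; independent adjacent events can be swapped without
# changing program behavior.
def dependent(e, f):
    same_thread = e[0] == f[0]
    conflict = e[2] == f[2] and 'w' in (e[1], f[1])
    return same_thread or conflict

def interleavings(a, b):
    """All shuffles of the two thread programs a and b."""
    if not a: yield tuple(b); return
    if not b: yield tuple(a); return
    for rest in interleavings(a[1:], b): yield (a[0],) + rest
    for rest in interleavings(a, b[1:]): yield (b[0],) + rest

def canonical(seq):
    """A canonical representative of seq's Mazurkiewicz trace: the greedy
    minimum-label linearization of its dependence partial order."""
    remaining, out = list(range(len(seq))), []
    while remaining:
        ready = [i for i in remaining
                 if not any(j < i and dependent(seq[j], seq[i])
                            for j in remaining)]
        pick = min(ready, key=lambda i: seq[i])
        out.append(seq[pick])
        remaining.remove(pick)
    return tuple(out)

t1 = [('t1', 'w', 'x'), ('t1', 'r', 'flag')]   # write x, then read flag
t2 = [('t2', 'w', 'y'), ('t2', 'w', 'flag')]   # write y, then write flag
execs = set(interleavings(t1, t2))
traces = {canonical(e) for e in execs}
print(len(execs), "interleavings,", len(traces), "traces")  # 6 vs 2

Here six interleavings collapse to two traces, one per order of the conflicting accesses to flag; an optimal DPOR would explore exactly those two executions.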
2.
  • Andersson, Arne, et al. (author)
  • Dynamic Ordered Sets with Exponential Search Trees
  • 2007
  • In: Journal of the ACM. - : Association for Computing Machinery (ACM). - 0004-5411 .- 1557-735X. ; 54:3, s. 1236460-
  • Journal article (peer-reviewed), abstract:
    • We introduce exponential search trees as a novel technique for converting static polynomial space search structures for ordered sets into fully-dynamic linear space data structures. This leads to an optimal bound of O(√(log n / log log n)) for searching and updating a dynamic set X of n integer keys in linear space. Searching X for an integer y means finding the maximum key in X which is smaller than or equal to y. This problem is equivalent to the standard textbook problem of maintaining an ordered set. The best previous deterministic linear space bound was O(log n / log log n), due to Fredman and Willard from STOC 1990. No better deterministic search bound was known using polynomial space. We also get the following worst-case linear space trade-offs between the number n, the word length W, and the maximal key U < 2^W: O(min{log log n + log n / log W, log log n · log log U / log log log U}). These trade-offs are, however, not likely to be optimal. Our results are generalized to finger searching and string searching, providing optimal results for both in terms of n.
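The operation defined in the abstract, finding the maximum key in X that is at most y, is predecessor search. Below is a minimal sketch of the textbook baseline, binary search over a sorted array in O(log n) time, against which the paper's O(√(log n / log log n)) word-RAM bound should be read; the keys are illustrative.

import bisect

def predecessor(sorted_keys, y):
    """Largest key <= y, or None if every key exceeds y."""
    i = bisect.bisect_right(sorted_keys, y)
    return sorted_keys[i - 1] if i else None

X = [3, 7, 19, 42, 88]
assert predecessor(X, 41) == 19
assert predecessor(X, 42) == 42
assert predecessor(X, 2) is None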
3.
  • Arvestad, Lars, et al. (author)
  • The Gene Evolution Model and Computing Its Associated Probabilities
  • 2009
  • In: Journal of the ACM. - : Association for Computing Machinery (ACM). - 0004-5411 .- 1557-735X. ; 56:2
  • Journal article (peer-reviewed), abstract:
    • Phylogeny is both a fundamental tool in biology and a rich source of fascinating modeling and algorithmic problems. Today's wealth of sequenced genomes makes it increasingly important to understand evolutionary events such as duplications, losses, transpositions, inversions, lateral transfers, and domain shuffling. We focus on the gene duplication event, which constitutes a major force in the creation of genes with new function [Ohno 1970; Lynch and Force 2000] and thereby also of biodiversity. We introduce the probabilistic gene evolution model, which describes how a gene tree evolves within a given species tree with respect to speciation, gene duplication, and gene loss. The actual relation between gene tree and species tree is captured by a reconciliation, a concept which we generalize for more expressiveness. The model is a canonical generalization of the classical linear birth-death process, obtained by replacing the interval where the process takes place by a tree. For the gene evolution model, we derive efficient algorithms for some associated probability distributions: the probability of a reconciled tree, the probability of a gene tree, the maximum probability reconciliation, the posterior probability of a reconciliation, and sampling reconciliations with respect to the posterior probability. These algorithms provide the basis for several applications, including species tree construction, reconciliation analysis, orthology analysis, biogeography, and host-parasite co-evolution.
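Since the model is described as a canonical generalization of the linear birth-death process from an interval to a tree, the interval case shows the basic mechanism of duplication and loss. Below is a minimal Gillespie-style simulation sketch; the rates and seeds are illustrative choices, not parameters from the paper.

import random

def birth_death(t_end, lam, mu, rng, n0=1):
    """Simulate a linear birth-death process on [0, t_end]: each of the
    n current lineages duplicates at rate lam and is lost at rate mu."""
    t, n = 0.0, n0
    while n > 0:
        t += rng.expovariate(n * (lam + mu))   # waiting time to next event
        if t >= t_end:
            break
        n += 1 if rng.random() < lam / (lam + mu) else -1
    return n                                   # lineages surviving at t_end

print([birth_death(5.0, 0.3, 0.1, random.Random(s)) for s in range(8)])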
4.
  • Atserias, Albert, et al. (author)
  • Clique Is Hard on Average for Regular Resolution
  • 2021
  • In: Journal of the ACM. - : Association for Computing Machinery (ACM). - 0004-5411 .- 1557-735X. ; 68:4, s. 1-26
  • Journal article (peer-reviewed), abstract:
    • We prove that for k ≪ n^{1/4}, regular resolution requires length n^{Ω(k)} to establish that an Erdős-Rényi graph with appropriately chosen edge density does not contain a k-clique. This lower bound is optimal up to the multiplicative constant in the exponent and also implies unconditional n^{Ω(k)} lower bounds on the running time of several state-of-the-art algorithms for finding maximum cliques in graphs.
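The n^{Ω(k)} lower bound is read against the trivial upper bound: testing every k-subset of vertices for clique-ness takes roughly O(n^k) time. Below is a minimal sketch on an Erdős-Rényi graph; n = 20, k = 4, and edge density p = 0.5 are illustrative choices.

import random
from itertools import combinations

def erdos_renyi(n, p, rng):
    """Sample G(n, p): include each of the C(n, 2) edges independently."""
    return {frozenset(e) for e in combinations(range(n), 2)
            if rng.random() < p}

def has_k_clique(n, edges, k):
    """Exhaustive search: check all C(n, k) vertex subsets."""
    return any(all(frozenset(pair) in edges for pair in combinations(S, 2))
               for S in combinations(range(n), k))

edges = erdos_renyi(20, 0.5, random.Random(0))
print(has_k_clique(20, edges, 4))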
5.
  • Berkholz, Christoph, et al. (author)
  • Near-optimal Lower Bounds on Quantifier Depth and Weisfeiler-Leman Refinement Steps
  • 2023
  • In: Journal of the ACM. - : Association for Computing Machinery (ACM). - 0004-5411. ; 70:5
  • Journal article (peer-reviewed), abstract:
    • We prove near-optimal tradeoffs for quantifier depth (also called quantifier rank) versus number of variables in first-order logic by exhibiting pairs of n-element structures that can be distinguished by a k-variable first-order sentence but where every such sentence requires quantifier depth at least n^{Ω(k/log k)}. Our tradeoffs also apply to first-order counting logic and, by the known connection to the k-dimensional Weisfeiler-Leman algorithm, imply near-optimal lower bounds on the number of refinement iterations. A key component in our proof is the hardness condensation technique introduced by Razborov in the context of proof complexity. We apply this method to reduce the domain size of relational structures while maintaining the minimal quantifier depth needed to distinguish them in finite variable logics.
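The refinement iterations being lower-bounded are those of the k-dimensional Weisfeiler-Leman algorithm; its simplest instance, 1-dimensional color refinement, makes the iteration count concrete. Below is a minimal sketch on an arbitrary example graph.

def color_refinement(adj):
    """Iteratively re-color each vertex by its current color plus the
    multiset of its neighbors' colors, until the coloring stabilizes.
    Returns the stable coloring and the number of refinement rounds."""
    color = {v: 0 for v in adj}
    rounds = 0
    while True:
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        if new == color:
            return color, rounds
        color, rounds = new, rounds + 1

# A path on 5 vertices: endpoints, their neighbors, and the center separate.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
coloring, iters = color_refinement(path)
print(coloring, "stable after", iters, "refinements")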
6.
  • Bol, R, et al. (author)
  • The meaning of negative premises in transition system specifications
  • 1996
  • In: Journal of the ACM. - : Association for Computing Machinery (ACM). - 0004-5411. ; 43:5, s. 863-914
  • Journal article (other academic/artistic), abstract:
    • We present a general theory for the use of negative premises in the rules of Transition System Specifications (TSSs). We formulate a criterion that should be satisfied by a TSS in order to be meaningful, that is, to unequivocally define a transition relation.
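Why such a criterion is needed can be seen in a toy example (constructed here for illustration, not taken from the paper): two rules whose negative premises refer to each other admit two different "supported" transition relations, so the specification alone does not unequivocally define one.

# Rules over a single state p:
#   r1: p --a--> p  provided p cannot do b
#   r2: p --b--> p  provided p cannot do a
# A candidate relation T (a set of enabled actions) is a supported model
# if every rule whose premise holds has its conclusion in T, and every
# member of T is the conclusion of some rule whose premise holds.
RULES = [({'neg': 'b'}, 'a'),
         ({'neg': 'a'}, 'b')]

def supported_model(T):
    holds = lambda prem: prem['neg'] not in T
    closed = all(concl in T for prem, concl in RULES if holds(prem))
    supported = all(any(concl == t and holds(prem) for prem, concl in RULES)
                    for t in T)
    return closed and supported

for T in [set(), {'a'}, {'b'}, {'a', 'b'}]:
    print(sorted(T), supported_model(T))

Running this shows that both {a} and {b} pass while the empty set and {a, b} fail: the TSS has two equally defensible meanings, exactly the ambiguity a meaningfulness criterion must exclude.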
7.
  • Chatzipanagiotou, Marita, et al. (author)
  • Automated recognition of geographical named entities in titles of Ukiyo-e prints
  • 2021
  • In: Journal of the ACM / Association for Computing Machinery. - New York, NY, USA : ACM. - 0004-5411. ; s. 70-77
  • Journal article (peer-reviewed), abstract:
    • This paper investigates the application of Natural Language Processing as a means to study the relationship between topography and its visual renderings in early modern Japanese ukiyo-e landscape prints. We introduce a new dataset with titles of landscape prints that have been annotated by an art historian for any included place-names. The prints are hosted by the digital database of the Art Research Center at the Ritsumeikan University, Kyoto, one of the hubs of Digital Humanities in Japan. By applying, calibrating and assessing a Named Entity Recognition (NER) tool, we argue that ‘distant viewing’ or macroanalysis of visual datasets can be facilitated, which is needed to assist art historical studies of this rich, complex and diverse research material. Experimental results indicated that the performance of NER can be improved by 30% and reach 50% precision, by using part of the introduced dataset.
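For a concrete picture of the task, here is a minimal NER pass over a landscape-print title using spaCy. The library, model, and example title are assumptions made for illustration; the paper does not name its tool here, and reproducing its calibrated precision would require the annotated dataset it introduces.

# requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Evening Snow at Kanbara, from the Fifty-three Stations of the Tokaido")
for ent in doc.ents:
    if ent.label_ in ("GPE", "LOC", "FAC"):   # geographic entity types
        print(ent.text, ent.label_)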
8.
  • Cohen, D, et al. (author)
  • Building tractable disjunctive constraints
  • 2000
  • In: Journal of the ACM. - : Association for Computing Machinery (ACM). - 0004-5411 .- 1557-735X. ; 47:5, s. 826-853
  • Journal article (peer-reviewed), abstract:
    • Many combinatorial search problems can be expressed as 'constraint satisfaction problems'. This class of problems is known to be NP-hard in general, but a number of restricted constraint classes have been identified which ensure tractability. This paper presents the first general results on combining tractable constraint classes to obtain larger, more general, tractable classes. We give examples to show that many known examples of tractable constraint classes, from a wide variety of different contexts, can be constructed from simpler tractable classes using a general method. We also construct several new tractable classes that have not previously been identified.
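Below is a minimal sketch of the objects involved: a constraint as a relation of allowed tuples over a scope of variables, a disjunctive constraint as the union of two simpler relations, and brute-force search as the exponential baseline. The domain and constraints are illustrative; the paper's contribution is identifying disjunctive combinations that remain solvable in polynomial time.

from itertools import product

DOMAIN = [0, 1, 2]

def solve(variables, constraints):
    """Return one satisfying assignment by exhaustive search, else None.
    constraints: list of (scope, relation) with relation a set of tuples."""
    for values in product(DOMAIN, repeat=len(variables)):
        a = dict(zip(variables, values))
        if all(tuple(a[v] for v in scope) in rel
               for scope, rel in constraints):
            return a
    return None

eq = {(d, d) for d in DOMAIN}                             # x = y
lt = {(d, e) for d in DOMAIN for e in DOMAIN if d < e}    # x < y
eq_or_lt = eq | lt                                        # disjunction: x <= y

print(solve(['x', 'y', 'z'], [(('x', 'y'), eq_or_lt), (('y', 'z'), lt)]))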
9.
  • Henzinger, Monika, et al. (author)
  • Decremental Single-Source Shortest Paths on Undirected Graphs in Near-Linear Total Update Time
  • 2018
  • In: Journal of the ACM. - : Association for Computing Machinery (ACM). - 0004-5411 .- 1557-735X. ; 65:6
  • Journal article (peer-reviewed), abstract:
    • In the decremental single-source shortest paths (SSSP) problem, we want to maintain the distances between a given source node s and every other node in an n-node m-edge graph G undergoing edge deletions. While its static counterpart can be solved in near-linear time, this decremental problem is much more challenging even in the undirected unweighted case. In this case, the classic O(mn) total update time of Even and Shiloach [16] has been the fastest known algorithm for three decades. At the cost of a (1 + ε)-approximation factor, the running time was recently improved to n^{2+o(1)} by Bernstein and Roditty [9]. In this article, we bring the running time down to near-linear: We give a (1 + ε)-approximation algorithm with m^{1+o(1)} expected total update time, thus obtaining near-linear time. Moreover, we obtain m^{1+o(1)} log W time for the weighted case, where the edge weights are integers from 1 to W. The only prior work on weighted graphs in o(mn) time is the mn^{0.9+o(1)}-time algorithm by Henzinger et al. [18, 19], which works for directed graphs with quasi-polynomial edge weights. The expected running time bound of our algorithm holds against an oblivious adversary. In contrast to the previous results, which rely on maintaining a sparse emulator, our algorithm relies on maintaining a so-called sparse (h, ε)-hop set introduced by Cohen [12] in the PRAM literature. An (h, ε)-hop set of a graph G = (V, E) is a set F of weighted edges such that the distance between any pair of nodes in G can be (1 + ε)-approximated by their h-hop distance (given by a path containing at most h edges) on G' = (V, E ∪ F). Our algorithm can maintain an (n^{o(1)}, ε)-hop set of near-linear size in near-linear time under edge deletions. It is the first of its kind to the best of our knowledge. To maintain approximate distances using this hop set, we extend the monotone Even-Shiloach tree of Henzinger et al. [20] and combine it with the bounded-hop SSSP technique of Bernstein [4, 5] and Madry [27]. These two new tools might be of independent interest.
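The (h, ε)-hop set definition rests on h-hop distances: shortest-path distances restricted to paths of at most h edges. These are computable with h synchronous rounds of Bellman-Ford relaxation, as in this minimal sketch (the graph and weights are invented for the example).

import math

def h_hop_distances(n, edges, source, h):
    """edges: list of undirected (u, v, w). Distance using <= h edges:
    one synchronous relaxation round per allowed hop."""
    dist = [math.inf] * n
    dist[source] = 0.0
    for _ in range(h):
        nxt = dist[:]                      # relax against last round only
        for u, v, w in edges:
            nxt[v] = min(nxt[v], dist[u] + w)
            nxt[u] = min(nxt[u], dist[v] + w)
        dist = nxt
    return dist

E = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 3.5)]
print(h_hop_distances(4, E, 0, 1))   # 1 hop: only direct edges count
print(h_hop_distances(4, E, 0, 3))   # 3 hops: path 0-1-2-3 costs 3.0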
10.
  • Håstad, Johan, 1960-, et al. (author)
  • An average-case depth hierarchy theorem for boolean circuits
  • 2017
  • In: Journal of the ACM. - : Association for Computing Machinery (ACM). - 0004-5411 .- 1557-735X. ; 64:5
  • Journal article (peer-reviewed), abstract:
    • We prove an average-case depth hierarchy theorem for Boolean circuits over the standard basis of AND, OR, and NOT gates. Our hierarchy theorem says that for every d ≥ 2, there is an explicit n-variable Boolean function f, computed by a linear-size depth-d formula, which is such that any depth-(d−1) circuit that agrees with f on a (1/2 + o_n(1)) fraction of all inputs must have size exp(n^{Ω(1/d)}). This answers an open question posed by Håstad in his Ph.D. thesis (Håstad 1986b). Our average-case depth hierarchy theorem implies that the polynomial hierarchy is infinite relative to a random oracle with probability 1, confirming a conjecture of Håstad (1986a), Cai (1986), and Babai (1987). We also use our result to show that there is no "approximate converse" to the results of Linial, Mansour, Nisan (Linial et al. 1993) and (Boppana 1997) on the total influence of bounded-depth circuits. A key ingredient in our proof is a notion of random projections which generalize random restrictions.
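The random projections in the proof generalize random restrictions, which fix a random subset of a Boolean function's variables and leave the rest free. Below is a minimal sketch of a plain random restriction, together with the agreement fraction in which the average-case bound is stated; the parameters and example functions are illustrative.

import random
from itertools import product

def random_restriction(f, n, q, rng):
    """Fix each variable to 0 or 1 with probability q each (so it stays
    free with probability 1 - 2q). Returns the restricted function, which
    takes values for the free variables only, and the free-variable list."""
    fixed = {i: rng.randint(0, 1) for i in range(n) if rng.random() < 2 * q}
    free = [i for i in range(n) if i not in fixed]
    def g(bits):
        x = [fixed.get(i, 0) for i in range(n)]
        for i, b in zip(free, bits):
            x[i] = b
        return f(x)
    return g, free

def agreement(f, g, n):
    """Fraction of the 2^n inputs on which f and g agree."""
    inputs = list(product([0, 1], repeat=n))
    return sum(f(list(x)) == g(list(x)) for x in inputs) / len(inputs)

parity = lambda x: sum(x) % 2
majority = lambda x: int(sum(x) > len(x) / 2)

print("parity vs majority agreement on 5 bits:",
      agreement(parity, majority, 5))
g, free = random_restriction(parity, 6, 0.4, random.Random(0))
print(len(free), "variables left free; restricted parity stays (negated) parity")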