SwePub
Search the SwePub database


Search: WFRF:(Rümmer Philipp)

  • Result 1-50 of 110
1.
  • Abdulla, Parosh Aziz, Professor, 1961-, et al. (author)
  • Boosting Constrained Horn Solving by Unsat Core Learning
  • 2024
  • In: Verification, Model Checking, and Abstract Interpretation. - : Springer Nature. - 9783031505232 - 9783031505249 ; , s. 280-302
  • Conference paper (peer-reviewed)
    • The Relational Hyper-Graph Neural Network (R-HyGNN) was introduced in [1] to learn domain-specific knowledge from program verification problems encoded in Constrained Horn Clauses (CHCs). It exhibits high accuracy in predicting the occurrence of CHCs in counterexamples. In this research, we present an R-HyGNN-based framework called MUSHyperNet. The goal is to predict the Minimal Unsatisfiable Subsets (MUSes) (i.e., unsat core) of a set of CHCs to guide an abstract symbolic model checking algorithm. In MUSHyperNet, we can predict the MUSes once and use them in different instances of the abstract symbolic model checking algorithm. We demonstrate the efficacy of MUSHyperNet using two instances of the abstract symbolic model checking algorithm: Counter-Example Guided Abstraction Refinement (CEGAR) and symbolic model-checking-based (SymEx) algorithms. Our framework enhances performance on a uniform selection of benchmarks across all categories from CHC-COMP, solving more problems (6.1% increase for SymEx, 4.1% for CEGAR) and reducing average solving time (13.3% for SymEx, 7.1% for CEGAR).
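
For readers unfamiliar with unsat cores: the minimal sketch below, using Z3's Python API (assuming the z3-solver package is installed), shows what an unsat core of a constraint set is. It only illustrates the concept; the paper's contribution is predicting such cores with a graph neural network.

    from z3 import Solver, Int, unsat

    x = Int('x')
    s = Solver()
    s.set(unsat_core=True)
    s.assert_and_track(x > 10, 'c1')  # conflicts with c2
    s.assert_and_track(x < 5, 'c2')
    s.assert_and_track(x != 0, 'c3')  # irrelevant to the conflict
    assert s.check() == unsat
    print(s.unsat_core())  # [c1, c2]; Z3 cores are small but not guaranteed minimal
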
  •  
2.
  • Abdulla, Parosh Aziz, 1961-, et al. (author)
  • Flatten and Conquer : A Framework for Efficient Analysis of String Constraints
  • 2017
  • In: SIGPLAN notices. - New York, NY, USA : ACM. - 0362-1340 .- 1558-1160. - 9781450349888 ; 52:6, s. 602-617
  • Journal article (peer-reviewed)
    • We describe a uniform and efficient framework for checking the satisfiability of a large class of string constraints. The framework is based on the observation that both satisfiability and unsatisfiability of common constraints can be demonstrated through witnesses with simple patterns. These patterns are captured using flat automata, each of which consists of a sequence of simple loops. We build a Counter-Example Guided Abstraction Refinement (CEGAR) framework which contains both an under- and an over-approximation module. The flow of information between the modules allows the precision to be increased in an automatic manner. We have implemented the framework as a tool and performed extensive experimentation that demonstrates both the generality and efficiency of our method.
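
For illustration, the class of constraints in question (word equations, length constraints, regular membership) can be written down in Z3's Python API as follows; this is a hedged sketch of the input language only, not of the paper's flat-automata framework.

    from z3 import String, Length, Concat, InRe, Re, Star, Solver, sat

    x, y = String('x'), String('y')
    s = Solver()
    s.add(Concat(x, y) == Concat(y, x))  # word equation
    s.add(Length(x) > 2, Length(y) > 2)  # length constraints
    s.add(InRe(x, Star(Re("ab"))))       # regular membership
    if s.check() == sat:
        print(s.model())  # witnesses here follow simple repeating patterns
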
  •  
3.
  • Abdulla, Parosh Aziz, et al. (author)
  • Norn: An SMT Solver for String Constraints
  • 2015
  • In: COMPUTER AIDED VERIFICATION, PT I. - Cham : SPRINGER-VERLAG BERLIN. - 9783319216904 - 9783319216898 ; , s. 462-469
  • Conference paper (peer-reviewed)
    • We present version 1.0 of the Norn SMT solver for string constraints. Norn is a solver for an expressive constraint language, including word equations, length constraints, and regular membership queries. A feature distinguishing Norn from other SMT solvers is that it is a decision procedure under the assumption of a set of acyclicity conditions on word equations, without any restrictions on the use of regular membership.
  •  
4.
  • Abdulla, Parosh Aziz, et al. (author)
  • String Constraints for Verification
  • 2014
  • In: 26th International Conference on Computer Aided Verification (CAV 2014), Vienna, Austria, Jul. 9-12, 2014. - Berlin : Springer. - 9783319088662 - 9783319088679 ; , s. 150-166
  • Conference paper (peer-reviewed)
    • We present a decision procedure for a logic that combines (i) word equations over string variables denoting words of arbitrary lengths, together with (ii) constraints on the length of words, and on (iii) the regular languages to which words belong. Decidability of this general logic is still open. Our procedure is sound for the general logic, and a decision procedure for a particularly rich fragment that restricts the form in which word equations are written. In contrast to many existing procedures, our method does not make assumptions about the maximum length of words. We have developed a prototypical implementation of our decision procedure, and integrated it into a CEGAR-based model checker for the analysis of programs encoded as Horn clauses. Our tool is able to automatically establish the correctness of several programs that are beyond the reach of existing methods.
  •  
5.
  • Abdulla, Parosh Aziz, et al. (author)
  • String Constraints for Verification
  • 2014
  • In: Computer Aided Verification - 26th International Conference, CAV 2014, Held as Part of the Vienna Summer of Logic, VSL 2014, Vienna, Austria, July 18-22, 2014. Proceedings. - Cham : Springer. - 9783319088662 ; , s. 150-166
  • Conference paper (peer-reviewed)
    • We present a decision procedure for a logic that combines (i) word equations over string variables denoting words of arbitrary lengths, together with (ii) constraints on the length of words, and on (iii) the regular languages to which words belong. Decidability of this general logic is still open. Our procedure is sound for the general logic, and a decision procedure for a particularly rich fragment that restricts the form in which word equations are written. In contrast to many existing procedures, our method does not make assumptions about the maximum length of words. We have developed a prototypical implementation of our decision procedure, and integrated it into a CEGAR-based model checker for the analysis of programs encoded as Horn clauses. Our tool is able to automatically establish the correctness of several programs that are beyond the reach of existing methods.
  •  
6.
  • Abdulla, Parosh Aziz, Professor, et al. (author)
  • TRAU : SMT solver for string constraints
  • 2018
  • In: PROCEEDINGS OF THE 2018 18TH CONFERENCE ON FORMAL METHODS IN COMPUTER AIDED DESIGN (FMCAD). - : IEEE. - 9780983567882 ; , s. 165-169
  • Conference paper (peer-reviewed)
    • We introduce TRAU, an SMT solver for an expressive constraint language, including word equations, length constraints, context-free membership queries, and transducer constraints. The satisfiability problem for such a class of constraints is in general undecidable. The key idea behind TRAU is a technique called flattening, which searches for satisfying assignments that follow simple patterns. TRAU implements a Counter-Example Guided Abstraction Refinement (CEGAR) framework which contains both an under- and an over-approximation module. The approximations are refined in an automatic manner by information flow between the two modules. The technique implemented by TRAU can handle a rich class of string constraints and has better performance than state-of-the-art string solvers.
  •  
7.
  • Ahrendt, Wolfgang, 1967, et al. (author)
  • Practical Aspects of Automated Deduction for Program Verification
  • 2010
  • In: KI - Künstliche Intelligenz. - : Springer Science and Business Media LLC. - 0933-1875 .- 1610-1987. ; 24:1, s. 43-49
  • Journal article (peer-reviewed)
    • Software is vital for modern society. It is used in many safety- or security-critical applications, where a high degree of correctness is desirable. Over the last years, technologies for the formal specification and verification of software -- using logic-based specification languages and automated deduction -- have matured and can be expected to complement and partly replace traditional software engineering methods in the future. Program verification is an increasingly important application area for automated deduction. The field has outgrown the area of academic case studies, and industry is showing serious interest. This article describes the aspects of automated deduction that are important for program verification in practice, and it gives an overview of the reasoning mechanisms, the methodology, and the architecture of modern program verification systems.
  •  
8.
  • Ahrendt, Wolfgang, 1967, et al. (author)
  • TriCo—Triple Co-piloting of Implementation, Specification and Tests
  • 2022
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Cham : Springer International Publishing. - 1611-3349 .- 0302-9743. ; 13701 LNCS, s. 174-187
  • Conference paper (peer-reviewed)
    • This white paper presents the vision of a novel methodology for developing safety-critical software, which is inspired by late developments in learning based co-piloting of implementations. The methodology, called TriCo, integrates formal methods with learning based approaches to co-pilot the agile, simultaneous development of three artefacts: implementation, specification, and tests. Whenever the user changes any of these, a TriCo empowered IDE would suggest changes to the other two artefacts in such a way that the three are kept consistent. The user has the final word on whether the changes are accepted, rejected, or modified. In the latter case, consistency will be checked again and re-established. We discuss the emerging trends which put the community in a good position to realise this vision, describe the methodology and workflow, as well as challenges and possible solutions for the realisation of TriCo.
  •  
9.
  • Ahrendt, Wolfgang, 1967, et al. (author)
  • Verifying Object-Oriented Programs with KeY: A Tutorial
  • 2007
  • In: Formal Methods for Components and Objects, eds. de Boer, Bonsangue, Graf, de Roever. - 9783540747918 ; LNCS 4709
  • Conference paper (peer-reviewed)
    • This paper is a tutorial on performing formal specification and semi-automatic verification of Java programs with the formal software development tool KeY. This tutorial aims to fill the gap between elementary introductions using toy examples and state-of-art case studies by going through a self-contained, yet non-trivial, example. It is hoped that this contributes to explain the problems encountered in verification of imperative, object-oriented programs to a readership outside the limited community of active researchers.
  •  
10.
  •  
11.
  •  
12.
  • Amilon, Jesper, et al. (author)
  • Automatic Program Instrumentation for Automatic Verification
  • 2023
  • In: Computer Aided Verification. - Cham : Springer Nature. ; , s. 281-304
  • Conference paper (peer-reviewed)
    • In deductive verification and software model checking, dealing with certain specification language constructs can be problematic when the back-end solver is not sufficiently powerful or lacks the required theories. One way to deal with this is to transform, for verification purposes, the program to an equivalent one not using the problematic constructs, and to reason about its correctness instead. In this paper, we propose instrumentation as a unifying verification paradigm that subsumes various existing ad-hoc approaches, has a clear formal correctness criterion, can be applied automatically, and can transfer back witnesses and counterexamples. We illustrate our approach on the automated verification of programs that involve quantification and aggregation operations over arrays, such as the maximum value or sum of the elements in a given segment of the array, which are known to be difficult to reason about automatically. We implement our approach in the MonoCera tool, which is tailored to the verification of programs with aggregation, and evaluate it on example programs, including SV-COMP programs.
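
The instrumentation idea can be illustrated in a few lines of Python (a hedged sketch, not MonoCera's actual transformation): the aggregation operator max over an array prefix is replaced by a scalar ghost variable updated inside the loop, so the back-end never has to reason about max directly.

    def find_max_instrumented(a):
        ghost = a[0]                        # ghost variable tracking max(a[:i+1])
        for i in range(len(a)):
            if a[i] > ghost:
                ghost = a[i]
            assert ghost == max(a[:i + 1])  # reference check; a verifier would
                                            # discharge this statically
        return ghost

    assert find_max_instrumented([3, 1, 4, 1, 5]) == 5
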
  •  
13.
  • Arlt, Stephan, et al. (author)
  • A theory for control-flow graph exploration
  • 2013
  • In: Automated Technology for Verification and Analysis. - Cham : Springer Berlin/Heidelberg. - 9783319024431 ; , s. 506-515
  • Conference paper (peer-reviewed)
  •  
14.
  • Arlt, Stephan, et al. (author)
  • The Gradual Verifier
  • 2014
  • In: NASA Formal Methods. - Switzerland : Springer International Publishing. - 9783319061993 ; , s. 313-327
  • Conference paper (peer-reviewed)
  •  
15.
  • Backeman, Peter, et al. (author)
  • Bit-Vector Interpolation and Quantifier Elimination by Lazy Reduction
  • 2018
  • In: Formal Methods in Computer-Aided Design. - : IEEE. - 9780983567882 ; , s. 50-59
  • Conference paper (peer-reviewed)
    • The inference of program invariants over machine arithmetic, commonly called bit-vector arithmetic, is an important problem in verification. Techniques that have been successful for unbounded arithmetic, in particular Craig interpolation, have turned out to be difficult to generalise to machine arithmetic: existing bit-vector interpolation approaches are based either on eager translation from bit-vectors to unbounded arithmetic, resulting in complicated constraints that are hard to solve and interpolate, or on bit-blasting to propositional logic, in the process losing all arithmetic structure. We present a new approach to bit-vector interpolation, as well as bit-vector quantifier elimination (QE), that works by lazy translation of bit-vector constraints to unbounded arithmetic. Laziness enables us to fully utilise the information available during proof search (implied by decisions and propagation) in the encoding, and this way produce constraints that can be handled relatively easily by existing interpolation and QE procedures for Presburger arithmetic. The lazy encoding is complemented with a set of native proof rules for bit-vector equations and non-linear (polynomial) constraints, this way minimising the number of cases a solver has to consider.
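
The translation target can be sanity-checked with Z3's Python API: 8-bit machine addition coincides with unbounded integer (Presburger-style) addition modulo 2^8. This minimal sketch shows the correspondence that such an encoding relies on, not the paper's lazy encoding itself.

    from z3 import BitVec, BV2Int, Solver, unsat

    x, y = BitVec('x', 8), BitVec('y', 8)
    s = Solver()
    # negate the claim bv2int(x + y) == (bv2int(x) + bv2int(y)) mod 256
    s.add(BV2Int(x + y) != (BV2Int(x) + BV2Int(y)) % 256)
    assert s.check() == unsat  # no counterexample: machine addition is
                               # exactly integer addition modulo 2^8
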
  •  
16.
  •  
17.
  •  
18.
  • Backeman, Peter, et al. (author)
  • Interpolating bit-vector formulas using uninterpreted predicates and Presburger arithmetic
  • 2021
  • In: Formal methods in system design. - : SPRINGER. - 0925-9856 .- 1572-8102. ; 57:2, s. 121-156
  • Journal article (peer-reviewed)
    • The inference of program invariants over machine arithmetic, commonly called bit-vector arithmetic, is an important problem in verification. Techniques that have been successful for unbounded arithmetic, in particular Craig interpolation, have turned out to be difficult to generalise to machine arithmetic: existing bit-vector interpolation approaches are based either on eager translation from bit-vectors to unbounded arithmetic, resulting in complicated constraints that are hard to solve and interpolate, or on bit-blasting to propositional logic, in the process losing all arithmetic structure. We present a new approach to bit-vector interpolation, as well as bit-vector quantifier elimination (QE), that works by lazy translation of bit-vector constraints to unbounded arithmetic. Laziness enables us to fully utilise the information available during proof search (implied by decisions and propagation) in the encoding, and this way produce constraints that can be handled relatively easily by existing interpolation and QE procedures for Presburger arithmetic. The lazy encoding is complemented with a set of native proof rules for bit-vector equations and non-linear (polynomial) constraints, this way minimising the number of cases a solver has to consider. We also incorporate a method for handling concatenations and extractions of bit-vectors efficiently.
  •  
19.
  • Backeman, Peter (author)
  • New techniques for handling quantifiers in Boolean and first-order logic
  • 2016
  • Licentiate thesis (other academic/artistic)
    • The automation of reasoning has been an aim of research for a long time. Already in the 17th century, the famous mathematician Leibniz invented a mechanical calculator capable of performing all four basic arithmetic operations. Although automatic reasoning can be done in different fields, many of the procedures for automated reasoning handle formulas of first-order logic. Examples of use cases include hardware verification, program analysis and knowledge representation. One of the fundamental challenges in first-order logic is handling quantifiers and the equality predicate. On the one hand, SMT solvers (Satisfiability Modulo Theories) are quite efficient at dealing with theory reasoning; on the other hand, they have limited support for complete and efficient reasoning with quantifiers. Sequent, tableau and resolution calculi are methods which are used to construct proofs for first-order formulas and can use more efficient techniques to handle quantifiers. Unfortunately, in contrast to SMT, handling theories is more difficult. In this thesis we investigate methods to handle quantifiers by restricting search spaces to finite domains which can be explored in a systematic manner. We present this approach in two different contexts. First we introduce a function synthesis based on template-based quantifier elimination, which is applied to gene interaction computation. The function synthesis is shown to be capable of generating smaller representations of solutions than previous solvers, and by restricting the constructed functions to certain forms we can produce formulas which can more easily be interpreted by a biologist. Secondly we introduce the concept of Bounded Rigid E-Unification (BREU), a finite form of unification that can be used to define a complete and sound sequent calculus for first-order logic with equality. We show how to solve this bounded form of unification in an efficient manner, yielding a first-order theorem prover utilizing BREU that is competitive with other state-of-the-art tableau theorem provers.
  •  
20.
  • Backeman, Peter (author)
  • Quantifiers and Theories : A Lazy Approach
  • 2019
  • Doctoral thesis (other academic/artistic)
    • In this thesis we study Automated Theorem Proving (ATP) as well as Satisfiability Modulo Theories (SMT) and present lazy strategies for improving reasoning within these areas. A lazy strategy works by simplifying a problem, and gradually refines the abstraction only when necessary. Often, a solution can be found without solving the full problem, saving valuable resources. We formulate our contributions within two key challenges in ATP and SMT: theory and quantifier reasoning. Many problems need first-order reasoning modulo a theory, i.e., reasoning where symbols in formulas are interpreted according to some background theory. In software verification, which often involves conditions over machine arithmetic, bit-vectors as well as floating-point numbers play an important role. Finding methods for how to reason with these theories in an efficient manner is therefore an important task. In this thesis we present a lazy method for handling bit-vector constraints as well as bit-vector interpolation, which improves performance and produces simpler interpolants. Moreover, a modular approximation framework is described, which allows for high-level description of lazy strategies applicable to a multitude of theories. Such a strategy is combined with a back-end, creating an approximating SMT solver. We use floating-point arithmetic as an illustrating use case, showing how the lazy strategy can improve the overall efficiency. The quantifier is a language construct which allows for making statements about one or all objects of some universe. However, with great power comes great cost - reasoning with quantifiers is very hard. Many decision problems involving quantifiers are not decidable, e.g., validity of first-order logic. Intricate strategies are needed to handle formulas with quantifiers, especially in combination with theory reasoning. We present a new restricted form of unification, by lazy expansion of the domain of substitution, as well as efficient procedures to solve it. This is incorporated into a complete and sound sequent calculus for the combination of the theory of equality and quantifiers.
  •  
21.
  •  
22.
  • Beckert, Bernhard, et al. (author)
  • Taclets: a new paradigm for writing theorem provers
  • 2004
  • In: REVISTA DE LA REAL ACADEMIA DE CIENCIAS, Serie A: Matemáticas. - 1578-7303. ; 98:1, s. 17-53
  • Journal article (peer-reviewed)
    • Frameworks for interactive theorem proving give the user explicit control over the construction of proofs based on meta languages that contain dedicated control structures for describing proof construction. Such languages are not easy to master and thus contribute to the already long list of skills required by prospective users of interactive theorem provers. Most users, however, only need a convenient formalism that allows new rules to be introduced with minimal overhead. On the other hand, rules of calculi have not only purely logical content, but contain restrictions on the expected context of rule applications and heuristic information. We suggest a new and minimalist concept for implementing interactive theorem provers called taclets. Their usage can be mastered in a matter of hours, and they are efficiently compiled into the GUI of a prover. We implemented the KeY system, an interactive theorem prover for the full JavaCard language based on taclets.
  •  
23.
  • Beckert, Bernhard, et al. (author)
  • Taclets: a new paradigm for writing theorem provers
  • 2004
  • In: Revista de la Real Academia de Ciencias Exactas, Fisicas y Naturales - Serie A: Matematicas. - 1578-7303 .- 1579-1505. ; 98:1, s. 17-53
  • Journal article (peer-reviewed)
    • Frameworks for interactive theorem proving give the user explicit control over the construction of proofs based on meta languages that contain dedicated control structures for describing proof construction. Such languages are not easy to master and thus contribute to the already long list of skills required by prospective users of interactive theorem provers. Most users, however, only need a convenient formalism that allows new rules to be introduced with minimal overhead. On the other hand, rules of calculi have not only purely logical content, but contain restrictions on the expected context of rule applications and heuristic information. We suggest a new and minimalist concept for implementing interactive theorem provers called taclets. Their usage can be mastered in a matter of hours, and they are efficiently compiled into the GUI of a prover. We implemented the KeY system, an interactive theorem prover for the full JavaCard language based on taclets.
  •  
24.
  • Beckert, Bernhard, et al. (author)
  • The KeY system 1.0 (Deduction Component)
  • 2007
  • In: Automated Deduction - CADE-21, 21st International Conference on Automated Deduction, Bremen, Germany, July 17-20, 2007, Proceedings, Springer-Verlag, LNCS. - Berlin, Heidelberg : Springer Berlin Heidelberg. - 9783540735946 ; 4603, s. 379-384
  • Conference paper (peer-reviewed)
  •  
25.
  • Brain, Martin, et al. (author)
  • An automatable formal semantics for IEEE-754 floating-point arithmetic
  • 2015
  • In: Proc. 22nd Symposium on Computer Arithmetic. - : IEEE Computer Society. - 9781479986637 ; , s. 160-167
  • Conference paper (peer-reviewed)
    • Automated reasoning tools often provide little or no support to reason accurately and efficiently about floating-point arithmetic. As a consequence, software verification systems that use these tools are unable to reason reliably about programs containing floating-point calculations or may give unsound results. These deficiencies are in stark contrast to the increasing awareness that the improper use of floating-point arithmetic in programs can lead to unintuitive and harmful defects in software. To promote coordinated efforts towards building efficient and accurate floating-point reasoning engines, this paper presents a formalization of the IEEE-754 standard for floating-point arithmetic as a theory in many-sorted first-order logic. Benefits include a standardized syntax and unambiguous semantics, allowing tool interoperability and sharing of benchmarks, and providing a basis for automated, formal analysis of programs that process floating-point data.
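
A two-line reminder of why such a formalization is needed (plain Python, whose floats are IEEE-754 binary64): rounding makes addition non-associative, so algebraic reasoning that is sound for the reals is unsound for floating-point programs.

    print(0.1 + 0.2 == 0.3)                        # False
    print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False: 0.6000000000000001 vs 0.6
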
  •  
26.
  • Brillout, Angelo, et al. (author)
  • An interpolating sequent calculus for quantifier-free Presburger arithmetic
  • 2011
  • In: Journal of automated reasoning. - : Springer Science and Business Media LLC. - 0168-7433 .- 1573-0670. ; 47:4, s. 341-367
  • Journal article (peer-reviewed)
    • Craig interpolation has become a versatile tool in formal verification, used for instance to generate program assertions that serve as candidates for loop invariants. In this paper, we consider Craig interpolation for quantifier-free Presburger arithmetic (QFPA). Until recently, quantifier elimination was the only available interpolation method for this theory, which is, however, known to be potentially costly and inflexible. We introduce an interpolation approach based on a sequent calculus for QFPA that determines interpolants by annotating the steps of an unsatisfiability proof with partial interpolants. We prove our calculus to be sound and complete. We have extended the Princess theorem prover to generate interpolating proofs, and applied it to a large number of publicly available Presburger arithmetic benchmarks. The results document the robustness and efficiency of our interpolation procedure. Finally, we compare the procedure against alternative interpolation methods, both for QFPA and linear rational arithmetic.
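
A standard textbook example of a QFPA interpolant (not taken from the paper): for the jointly unsatisfiable formulas below, the interpolant I follows from A, contradicts B, and mentions only the shared variable x. Note that I needs a divisibility atom, which is why interpolation for QFPA goes beyond pure linear arithmetic.

    \[
    A = (x = 2y), \qquad
    B = (x = 2z + 1), \qquad
    I = (2 \mid x).
    \]
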
  •  
27.
  •  
28.
  • Bubel, R., et al. (author)
  • Ensuring the Correctness of Lightweight Tactics for JavaCard Dynamic Logic
  • 2008
  • In: Electronic Notes in Theoretical Computer Science. - : Elsevier BV. - 1571-0661. ; 199, s. 107-128
  • Journal article (peer-reviewed)
    • The interactive theorem prover developed in the KeY project, which implements a sequent calculus for JavaCard Dynamic Logic (JavaCardDL), is based on taclets. Taclets are lightweight tactics with easy-to-master syntax and semantics. Adding new taclets to the calculus is quite simple, but poses correctness problems. We present an approach by which derived (non-axiomatic) taclets for JavaCardDL can be proven sound in JavaCardDL itself. Together with proof management facilities, our concept allows the safe introduction of new derived taclets while preserving the soundness of the calculus.
  •  
29.
  • Chen, Taolue, et al. (author)
  • A Decision Procedure for Path Feasibility of String Manipulating Programs with Integer Data Type
  • 2020
  • In: ATVA 2020. - Cham : Springer Nature. - 9783030591519 - 9783030591526 ; , s. 325-342
  • Conference paper (peer-reviewed)
    • In this paper, we propose a decision procedure for a class of string-manipulating programs which includes not only a wide range of string operations such as concatenation, replaceAll, reverse, and finite transducers, but also those involving the integer data-type such as length, indexof, and substring. To the best of our knowledge, this represents one of the most expressive string constraint languages that is currently known to be decidable. Our decision procedure is based on a variant of cost register automata. We implement the decision procedure, giving rise to a new solver OSTRICH+. We evaluate the performance of OSTRICH+ on a wide range of existing and new benchmarks. The experimental results show that OSTRICH+ is the first string decision procedure capable of tackling finite transducers and integer constraints, whilst its overall performance is comparable with the state-of-the-art string constraint solvers.
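
A hedged sketch of the constraint class, written in Z3's Python API rather than with the paper's cost-register-automata procedure: string operations such as indexof and substring mixed with integer arithmetic.

    from z3 import String, Int, Length, IndexOf, SubString, StringVal, Solver, sat

    u, i = String('u'), Int('i')
    s = Solver()
    s.add(i == IndexOf(u, StringVal("://"), 0))      # indexof, integer-valued
    s.add(i > 0)                                     # scheme part is non-empty
    s.add(SubString(u, 0, i) == StringVal("https"))  # substring
    s.add(Length(u) <= 20)                           # length constraint
    if s.check() == sat:
        print(s.model())  # e.g. u = "https://"
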
  •  
30.
  • Chen, Taolue, et al. (author)
  • Decision procedures for path feasibility of string-manipulating programs with complex operations
  • 2019
  • In: Proceedings of the ACM on Programming Languages. - : Association for Computing Machinery (ACM). - 2475-1421. ; 3:49
  • Journal article (peer-reviewed)
    • The design and implementation of decision procedures for checking path feasibility in string-manipulating programs is an important problem, with such applications as symbolic execution of programs with strings and automated detection of cross-site scripting (XSS) vulnerabilities in web applications. A (symbolic) path is given as a finite sequence of assignments and assertions (i.e. without loops), and checking its feasibility amounts to determining the existence of inputs that yield a successful execution. Modern programming languages (e.g. JavaScript, PHP, and Python) support many complex string operations, and strings are also often implicitly modified during a computation in some intricate fashion (e.g. by some autoescaping mechanisms). In this paper we provide two general semantic conditions which together ensure the decidability of path feasibility: (1) each assertion admits regular monadic decomposition (i.e. is an effectively recognisable relation), and (2) each assignment uses a (possibly nondeterministic) function whose inverse relation preserves regularity. We show that the semantic conditions are expressive since they are satisfied by a multitude of string operations including concatenation, one-way and two-way finite-state transducers, replaceall functions (where the replacement string could contain variables), string-reverse functions, regular-expression matching, and some (restricted) forms of letter-counting/length functions. The semantic conditions also strictly subsume existing decidable string theories (e.g. straight-line fragments, and acyclic logics), and most existing benchmarks (e.g. most of Kaluza’s, and all of SLOG’s, Stranger’s, and SLOTH’s benchmarks). Our semantic conditions also yield a conceptually simple decision procedure, as well as an extensible architecture of a string solver in that a user may easily incorporate his/her own string functions into the solver by simply providing code for the pre-image computation without worrying about other parts of the solver. Despite these, the semantic conditions are unfortunately too general to provide a fast and complete decision procedure. We provide strong theoretical evidence for this in the form of complexity results. To rectify this problem, we propose two solutions. Our main solution is to allow only partial string functions (i.e., prohibit nondeterminism) in condition (2). This restriction is satisfied in many cases in practice, and yields decision procedures that are effective in both theory and practice. Whenever nondeterministic functions are still needed (e.g. the string function split), our second solution is to provide a syntactic fragment that provides a support of nondeterministic functions, and operations like one-way transducers, replaceall (with constant replacement string), the string-reverse function, concatenation, and regular-expression matching. We show that this fragment can be reduced to an existing solver SLOTH that exploits fast model checking algorithms like IC3. We provide an efficient implementation of our decision procedure (assuming our first solution above, i.e., deterministic partial string functions) in a new string solver OSTRICH. Our implementation provides built-in support for concatenation, reverse, functional transducers (FFT), and replaceall and provides a framework for extensibility to support further string functions. We demonstrate the efficacy of our new solver against other competitive solvers.
  •  
31.
  • Chen, Taolue, et al. (author)
  • Solving String Constraints with Regex-Dependent Functions through Transducers with Priorities and Variables
  • 2022
  • In: Proceedings of the ACM on Programming Languages. - : Association for Computing Machinery (ACM). - 2475-1421. ; 6
  • Journal article (peer-reviewed)
    • Regular expressions are a classical concept in formal language theory. Regular expressions in programming languages (RegEx) such as JavaScript, feature non-standard semantics of operators (e.g. greedy/lazy Kleene star), as well as additional features such as capturing groups and references. While symbolic execution of programs containing RegExes appeals to string solvers natively supporting important features of RegEx, such a string solver is hitherto missing. In this paper, we propose the first string theory and string solver that natively provides such support. The key idea of our string solver is to introduce a new automata model, called prioritized streaming string transducers (PSST), to formalize the semantics of RegEx-dependent string functions. PSSTs combine priorities, which have previously been introduced in prioritized finite-state automata to capture greedy/lazy semantics, with string variables as in streaming string transducers to model capturing groups. We validate the consistency of the formal semantics with the actual JavaScript semantics by extensive experiments. Furthermore, to solve the string constraints, we show that PSSTs enjoy nice closure and algorithmic properties, in particular, the regularity-preserving property (i.e., pre-images of regular constraints under PSSTs are regular), and introduce a sound sequent calculus that exploits these properties and performs propagation of regular constraints by means of taking post-images or pre-images. Although the satisfiability of the string constraint language is generally undecidable, we show that our approach is complete for the so-called straightline fragment. We evaluate the performance of our string solver on over 195 000 string constraints generated from an open-source RegEx library. The experimental results show the efficacy of our approach, drastically improving the existing methods (via symbolic execution) in both precision and efficiency.
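
The greedy/lazy semantics and capturing groups that PSSTs model can be observed directly in Python's re module (shown here as a semantics illustration only; Python's regexes are close to, but not identical with, JavaScript's):

    import re

    print(re.match(r'(a*)(a*)', 'aaa').groups())   # ('aaa', ''): greedy star
    print(re.match(r'(a*?)(a*)', 'aaa').groups())  # ('', 'aaa'): lazy star
    # A string solver must predict exactly these captures; priorities and
    # string variables in PSSTs are what make that behaviour expressible.
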
  •  
32.
  • Chen, Yu-Fang, et al. (author)
  • A Theory of Cartesian Arrays (with Applications in Quantum Circuit Verification)
  • 2023
  • In: AUTOMATED DEDUCTION, CADE 29. - : Springer Nature. - 9783031384998 - 9783031384981 ; , s. 170-189
  • Conference paper (peer-reviewed)
    • We present a theory of Cartesian arrays, which are multi-dimensional arrays with support for the projection of arrays to subarrays, as well as for updating sub-arrays. The resulting logic is an extension of Combinatorial Array Logic (CAL) and is motivated by the analysis of quantum circuits: using projection, we can succinctly encode the semantics of quantum gates as quantifier-free formulas and verify the end-to-end correctness of quantum circuits. Since the logic is expressive enough to represent quantum circuits succinctly, it necessarily has a high complexity; as we show, it suffices to encode the k-color problem of a graph under a succinct circuit representation, an NEXPTIME-complete problem. We present an NEXPTIME decision procedure for the logic and report on preliminary experiments with the analysis of quantum circuits using this decision procedure.
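
The two operations the theory axiomatizes, projecting a multi-dimensional array to a sub-array and updating a sub-array, correspond to familiar slice reads and writes. The numpy analogy below is purely illustrative, since the paper reasons about such arrays symbolically, not numerically.

    import numpy as np

    state = np.zeros((2, 2, 2))        # a three-dimensional Cartesian array
    proj = state[1, :, :]              # projection: fix the first index
    state[1, :, :] = np.ones((2, 2))   # update of the projected sub-array
    print(state[1].sum(), state[0].sum())  # 4.0 0.0: only the slice changed
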
  •  
33.
  •  
34.
  • Cook, Byron, et al. (author)
  • Ranking function synthesis for bit-vector relations
  • 2013
  • In: Formal methods in system design. - : Springer Science and Business Media LLC. - 0925-9856 .- 1572-8102. ; 43:1, s. 93-120
  • Journal article (peer-reviewed)
    • Ranking function synthesis is a key component of modern termination provers for imperative programs. While it is well-known how to generate linear ranking functions for relations over (mathematical) integers or rationals, efficient synthesis of ranking functions for machine-level integers (bit-vectors) is an open problem. This is particularly relevant for the verification of low-level code. We propose several novel algorithms to generate ranking functions for relations over machine integers: a complete method based on a reduction to Presburger arithmetic, and a template-matching approach for predefined classes of ranking functions based on reduction to SAT- and QBF-solving. The utility of our algorithms is demonstrated on examples drawn from Windows device drivers.
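
Why machine integers need their own synthesis methods, in a small simulation (not one of the paper's algorithms): over mathematical integers the loop below diverges for x = 4, but under 8-bit wraparound it terminates, so termination arguments must be wraparound-aware.

    x, steps = 4, 0
    while x != 0:
        x = (x + 2) % 256  # 8-bit machine addition with wraparound
        steps += 1
    print(steps)           # 126: x cycles through the even residues to 0
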
  •  
35.
  •  
36.
  • Donaldson, Alastair F., et al. (author)
  • Automatic analysis of DMA races using model checking and k-induction
  • 2011
  • In: Formal methods in system design. - : Springer Science and Business Media LLC. - 0925-9856 .- 1572-8102. ; 39:1, s. 83-113
  • Journal article (peer-reviewed)
    • Modern multicore processors, such as the Cell Broadband Engine, achieve high performance by equipping accelerator cores with small "scratch-pad" memories. The price for increased performance is higher programming complexity - the programmer must manually orchestrate data movement using direct memory access (DMA) operations. Programming using asynchronous DMA operations is error-prone, and DMA races can lead to nondeterministic bugs which are hard to reproduce and fix. We present a method for DMA race analysis in C programs. Our method works by automatically instrumenting a program with assertions modeling the semantics of a memory flow controller. The instrumented program can then be analyzed using state-of-the-art software model checkers. We show that bounded model checking is effective for detecting DMA races in buggy programs. To enable automatic verification of the correctness of instrumented programs, we present a new formulation of k-induction geared towards software, as a proof rule operating on loops. Our techniques are implemented as a tool, Scratch, which we apply to a large set of programs supplied with the IBM Cell SDK, in which we discover a previously unknown bug. Our experimental results indicate that our k-induction method performs extremely well on this problem class. To our knowledge, this marks both the first application of k-induction to software verification, and the first example of software model checking in the context of heterogeneous multicore processors.
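
A minimal sketch of the k-induction proof rule described above, for a toy transition system and using Z3's Python API (assuming z3-solver; this is not the Scratch tool): the base case checks the first k+1 states, the step case checks that k consecutive P-states force P in the next state.

    from z3 import Ints, And, Not, Solver, unsat

    def k_induction(k, init, trans, prop):
        xs = Ints(' '.join('x%d' % i for i in range(k + 1)))
        chain = And([trans(xs[i], xs[i + 1]) for i in range(k)])
        base = Solver()  # does a state among the first k+1 violate P?
        base.add(init(xs[0]), chain, Not(And([prop(x) for x in xs])))
        step = Solver()  # can k consecutive P-states reach a non-P state?
        step.add(And([prop(xs[i]) for i in range(k)]), chain, Not(prop(xs[k])))
        return base.check() == unsat and step.check() == unsat

    # toy system: a counter starting at 0 and incrementing; P: counter >= 0
    print(k_induction(1,
                      init=lambda x: x == 0,
                      trans=lambda x, y: y == x + 1,
                      prop=lambda x: x >= 0))  # True
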
  •  
37.
  • Donaldson, Alastair F., et al. (author)
  • SCRATCH : a tool for automatic analysis of DMA races
  • 2011
  • In: Proc. 16th ACM Symposium on Principles and Practice of Parallel Programming. - New York : ACM Press. - 9781450301190 ; , s. 311-312
  • Conference paper (peer-reviewed)
  •  
38.
  • Donaldson, Alastair F., et al. (author)
  • Software verification using k-induction
  • 2011
  • In: Static Analysis. - Berlin, Heidelberg : Springer Berlin/Heidelberg. - 9783642237010 ; , s. 351-368
  • Conference paper (peer-reviewed)
  •  
39.
  •  
40.
  • El Yaacoub, Ahmed, 1996-, et al. (author)
  • NeRTA : Enabling Dynamic Software Updates in Mobile Robotics
  • 2022
  • Conference paper (peer-reviewed)
    • We present NeRTA (Next Release Time Analysis), a technique to schedule dynamic software updates of the low-level control loops of mobile robots. Dynamic software updates enable software correction and evolution during system operation. In mobile robotics, they are crucial to resolve software defects without interrupting system operation or to enable on-the-fly extensions. Low-level control loops of mobile robots, however, are time sensitive and run on resource-constrained hardware with no operating system support. To minimize the impact of the update process, NeRTA safely schedules updates during times when the computing unit would otherwise be idle. It does so by utilizing information from the existing scheduling algorithm without impacting its operation. As such, NeRTA works orthogonally to the existing scheduler, retaining the existing platform-specific optimizations and fine-tuning, and may simply operate as a plug-in component. Our experimental evaluation shows that NeRTA estimates are within 15% of the actual idle times in more than three-quarters of the cases. We also show that the processing overhead of NeRTA is essentially negligible.
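
The core scheduling idea can be sketched in a few lines of Python (a hypothetical task model for illustration, not the actual NeRTA analysis): the earliest next release time among all tasks bounds the idle window in which an update can safely run.

    def idle_window(now, tasks):
        """tasks: list of (next_release_time, wcet) pairs (hypothetical model)."""
        next_release = min(release for release, _ in tasks)
        return max(0.0, next_release - now)

    tasks = [(10.0, 2.0), (14.0, 1.0)]  # hypothetical task set
    update_wcet = 3.0                   # worst-case execution time of the update
    if idle_window(6.0, tasks) >= update_wcet:
        print("safe to apply the update now")  # window is 4.0 >= 3.0
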
  •  
41.
  • El Yaacoub, Ahmed, et al. (author)
  • Poster Abstract: Scheduling Dynamic Software Updates in Safety-critical Embedded Systems : the Case of Aerial Drones
  • 2022
  • In: 2022 ACM/IEEE 13th International Conference on Cyber-Physical Systems (ICCPS). - : Institute of Electrical and Electronics Engineers (IEEE). - 9781665409674 - 9781665409681 ; , s. 284-285
  • Conference paper (peer-reviewed)
    • Dynamic software updates enable software evolution and bug fixes to embedded systems without disrupting their run-time operation. Scheduling dynamic updates for safety-critical embedded systems, such as aerial drones, must be done with great care. Otherwise, the system's control loop will be delayed, leading to a partial or even complete loss of control, ultimately impacting the dependable operation. We propose an update scheduling algorithm called NeRTA, which schedules updates during the short times when the processor would have been idle. NeRTA consequently avoids the loss of control that would occur if an update delayed the execution of the control loop. The algorithm computes conservative estimations of idle times to determine if an update is possible, but is also sufficiently accurate that the estimated idle time is typically within 15% of the actual idle time.
  •  
42.
  •  
43.
  • El Yaacoub, Ahmed, 1996-, et al. (author)
  • Scheduling Dynamic Software Updates in Mobile Robots
  • 2023
  • In: ACM Transactions on Embedded Computing Systems. - : Association for Computing Machinery (ACM). - 1539-9087 .- 1558-3465. ; 22:6, s. 1-27
  • Journal article (peer-reviewed)
    • We present NeRTA (Next Release Time Analysis), a technique to enable dynamic software updates for low-level control software of mobile robots. Dynamic software updates enable software correction and evolution during system operation. In mobile robotics, they are crucial to resolve software defects without interrupting system operation or to enable on-the-fly extensions. Low-level control software for mobile robots, however, is time sensitive and runs on resource-constrained hardware with no operating system support. To minimize the impact of the update process, NeRTA safely schedules updates during times when the computing unit would otherwise be idle. It does so by utilizing information from the existing scheduling algorithm without impacting its operation. As such, NeRTA works orthogonally to the existing scheduler, retaining the existing platform-specific optimizations and fine-tuning, and may simply operate as a plug-in component. To enable larger dynamic updates, we further conceive an additional mechanism called bounded reactive control and apply mixed-criticality concepts. The former cautiously reduces the overall control frequency, whereas the latter excludes less critical tasks from NeRTA processing. Their use increases the available idle times. We combine real-world experiments on embedded hardware with simulations to evaluate NeRTA. Our experimental evaluation shows that the difference between NeRTA’s estimated idle times and the measured idle times is less than 15% in more than three-quarters of the samples. The combined effect of bounded reactive control and mixed-criticality concepts results in a 150+% increase in available idle times. We also show that the processing overhead of NeRTA and of the additional mechanisms is essentially negligible.
  •  
44.
  •  
45.
  • Engel, Christian, et al. (author)
  • Integrating Verification and Testing of Object-Oriented Software
  • 2008
  • In: Tests and Proofs, Second International Conference, TAP 2008, Prato, Italy. Springer, LNCS. - 9783540791232 ; 4966, s. 182-191
  • Conference paper (peer-reviewed)
    • Formal methods can only gain widespread use in industrial software development if they are integrated into software development techniques, tools, and languages used in practice. A symbiosis of software testing and verification techniques is a highly desired goal, but at the current state of the art most available tools are dedicated to just one of the two tasks: verification or testing. We use the KeY verification system (developed by the tutorial presenters) to demonstrate our approach in combining both.
  •  
46.
  • Esen, Zafer, et al. (author)
  • TRICERA : Verifying C Programs Using the Theory of Heaps
  • 2022
  • In: Proceedings of the 22nd Conference on Formal Methods in Computer-Aided Design – FMCAD 2022. - : TU Wien Academic Press. - 9781665480406 - 9783854480532 ; , s. 380-391
  • Conference paper (peer-reviewed)
    • TriCera is an automated, open-source verification tool for C programs based on the concept of Constrained Horn Clauses (CHCs). In order to handle programs operating on the heap, TriCera applies a novel theory of heaps, which enables the tool to hand off most of the required heap reasoning directly to the underlying CHC solver. This leads to a cleaner interface between the language-specific verification front-end and the language-independent CHC back-end, and enables verification tools for different programming languages to share a common heap back-end. The paper introduces TriCera, gives an overview of the theory of heaps, and presents preliminary experimental results using SV-COMP benchmarks.
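
The CHC format targeted by such a back-end can be illustrated with Z3's Horn engine (a hedged sketch of the clause format only; TriCera's front-end and its theory of heaps are not shown): the loop x = 0; while (x < 10) x++; assert(x == 10); becomes three Horn clauses over an unknown invariant.

    from z3 import Int, IntSort, BoolSort, Function, ForAll, Implies, And, SolverFor

    inv = Function('inv', IntSort(), BoolSort())  # unknown loop invariant
    x = Int('x')
    s = SolverFor("HORN")
    s.add(ForAll(x, Implies(x == 0, inv(x))))                   # loop entry
    s.add(ForAll(x, Implies(And(inv(x), x < 10), inv(x + 1))))  # loop body
    s.add(ForAll(x, Implies(And(inv(x), x >= 10), x == 10)))    # assertion
    print(s.check())  # sat: an invariant exists, so the assertion holds
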
  •  
47.
  • Fedyukovich, Grigory, et al. (author)
  • Competition Report : CHC-COMP-21
  • 2021
  • In: Electronic Proceedings in Theoretical Computer Science. - : Open Publishing Association. ; , s. 91-108
  • Conference paper (peer-reviewed)
    • CHC-COMP-21 is the fourth competition of solvers for Constrained Horn Clauses. This year, 7 solvers participated in the competition and were evaluated in 7 separate tracks on problems in linear integer arithmetic, linear real arithmetic, arrays, and algebraic data-types. The competition was run in March 2021 using the StarExec computing cluster. This report gives an overview of the competition design, explains the organisation of the competition, and presents the competition results.
  •  
48.
  • Felsing, Dennis, et al. (author)
  • Automating regression verification
  • 2015
  • In: Software Engineering & Management 2015. - Germany : Gesellschaft für Informatik. - 9783885796336 ; , s. 75-76
  • Conference paper (peer-reviewed)
  •  
49.
  • Felsing, Dennis, et al. (author)
  • Automating regression verification
  • 2014
  • In: ASE '14 Proceedings of the 29th ACM/IEEE international conference on Automated software engineering. - New York : ACM Press. - 9781450330138 ; , s. 349-360
  • Conference paper (peer-reviewed)
  •  
50.
  •  
Type of publication
conference paper (75)
journal article (21)
book chapter (5)
doctoral thesis (3)
licentiate thesis (3)
reports (2)
editorial proceedings (1)
Type of content
peer-reviewed (96)
other academic/artistic (14)
Author/Editor
Rümmer, Philipp, 197 ... (71)
Rümmer, Philipp (36)
Lin, Anthony W. (11)
Hojjat, Hossein (11)
Backeman, Peter (8)
Kroening, Daniel (8)
Chen, Yu-Fang (7)
Hähnle, Reiner, 1962 (7)
Zeljic, Aleksandar (7)
Holík, Lukás (6)
Schäf, Martin (6)
Klebanov, Vladimir (6)
Voigt, Thiemo (5)
Rezine, Ahmed (5)
Atig, Mohamed Faouzi (5)
Mottola, Luca, 1980- (5)
Beckert, Bernhard (5)
Wintersteiger, Chris ... (5)
Schlager, Steffen (4)
Hague, Matthew (4)
Wu, Zhilin (4)
Kuncak, Viktor (4)
Ahrendt, Wolfgang, 1 ... (3)
Abdulla, Parosh Aziz (3)
Schmitt, Peter H. (3)
Ulbrich, Mattias (3)
Giese, Martin, 1970 (3)
Gurov, Dilian, 1964- (3)
Lidström, Christian (3)
Esen, Zafer (3)
Wahl, Thomas (3)
Kahsai, Temesghen (3)
Subotić, Pavle (3)
Chen, Taolue (3)
Stenman, Jari (2)
Hong, Chih-Duo (2)
Liang, Chencheng (2)
Abdulla, Parosh Aziz ... (2)
Bui, Phi Diep (2)
Hu, Denghang (2)
Roth, Andreas (2)
Grebing, Sarah (2)
Amilon, Jesper (2)
Arlt, Stephan (2)
Rümmer, Philipp, Doc ... (2)
Majumdar, Rupak (2)
Habermalz, Elmar (2)
Brillout, Angelo (2)
Lin, Anthony Widjaja (2)
Kan, Shuanglong (2)
University
Uppsala University (87)
University of Gothenburg (18)
Chalmers University of Technology (11)
Linköping University (4)
Royal Institute of Technology (3)
Mälardalen University (1)
Language
English (110)
Research subject (UKÄ/SCB)
Natural sciences (104)
Engineering and Technology (11)
Social Sciences (1)



 