ICMS 2014 Session: Software for Quantified Reasoning

Aim and Scope

Deciding the satisfiability of quantified formulas is one of the fundamental problems in computer science, with applications including program verification and synthesis, the synthesis of inductive invariants, and optimization. This session will bring together experts in quantified reasoning and facilitate the dissemination of new and ongoing results in the field.

Talks/Abstracts

  1. Real Quantifier Elimination in the RegularChains Library

    Changbo Chen (CIGIT, Chinese Academy of Sciences) and Marc Moreno Maza (University of Western Ontario)

    Abstract: Quantifier elimination (QE) over real closed fields has found applications in numerous areas. Cylindrical algebraic decomposition (CAD) is one of the main tools for performing quantifier elimination on nonlinear input formulas. Despite its worst-case doubly exponential running time, CAD-based quantifier elimination remains attractive for handling general quantified formulas and producing simple quantifier-free formulas.

    In another talk at ICMS 2014, we explain how a CAD based on the theory of regular chains is implemented in the RegularChains library. In this talk, we discuss the implementation details of a QE procedure based on the CAD implementations in the RegularChains library. Several aspects of this implementation are presented, including the data structures for organizing the computation, the design of the user interface, the simplification of output formulas, and important optimizations.

    The function for performing quantifier elimination in the RegularChains library is called QuantifierElimination. Its input is a prenex formula built from polynomial constraints, logical connectives, and existential and universal quantifiers. The command supports both standard quantifier-free formulas and extended Tarski formulas as output. The use of the QE procedure is illustrated by solving examples from different applications; in particular, we explain in detail how QuantifierElimination is applied to the automatic generation of parametrized parallel programs.
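
    As a minimal illustration of what real QE computes (a textbook example, not taken from the talk), eliminating the quantifier from the solvability condition of a monic quadratic yields an equivalent quantifier-free condition on the parameters:

        \exists x \, (x^2 + b\,x + c = 0) \;\Longleftrightarrow\; b^2 - 4c \ge 0

    A CAD-based procedure such as QuantifierElimination returns quantifier-free descriptions of this kind for arbitrary prenex polynomial input.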

  2. Skolemization Modulo Theories

    Konstantin Korovin (The University of Manchester) and Margus Veanes (Microsoft Research)

    Abstract: In classical automated theorem proving, Skolemization is a technique used to transform formulas into equisatisfiable form by replacing existentially quantified variables by Skolem terms. In resolution-based methods using clausal normal form, this is a necessary preprocessing step. An important optimization in Skolemization is to minimize Skolem terms by reducing false dependencies on variables; one classical technique for this is miniscoping. Essentially, miniscoping ensures that if y is an existentially quantified variable, then the Skolem term for y contains only those universally quantified variables that y truly depends on with respect to the Boolean structure of the formula.
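
    To illustrate with a standard example (ours, not drawn from the talk): in \forall x\,\exists y\,(P(x) \wedge Q(y)), naive Skolemization replaces y by a term f(x), yielding \forall x\,(P(x) \wedge Q(f(x))). Miniscoping first pushes the existential quantifier inward, giving \forall x\,P(x) \wedge \exists y\,Q(y); since y no longer depends on x, it can be replaced by a Skolem constant c, producing \forall x\,P(x) \wedge Q(c).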

    Combining classical automated theorem proving techniques with theory-based reasoning, such as satisfiability modulo theories, is a new approach to first-order reasoning modulo theories. Here we investigate how a technique called monadic decomposition can be used to improve Skolemization in first-order reasoning modulo theories. Monadic decomposition is a recently developed technique that transforms a certain class of theory-dependent formulas over multiple variables into an equivalent Boolean combination of unary formulas, where a unary formula is a formula that depends on a single variable. Monadic decomposition can be used as an equivalence-preserving preprocessing step on a first-order formula modulo theories: it breaks theory-specific atoms into Boolean combinations of unary formulas and thus refines miniscoping in places where miniscoping was not applicable to the original formula.
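
    A small example of a monadic decomposition (our illustration, not taken from the talk): over the integers or reals, the binary atom x \cdot y > 0 is equivalent to the Boolean combination of unary formulas (x > 0 \wedge y > 0) \vee (x < 0 \wedge y < 0). After this rewriting, miniscoping can separate quantifiers over x and y that the original atom tied together.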

  3. Incremental QBF Solving

    Florian Lonsing and Uwe Egly (Vienna University of Technology, Austria)

    Abstract: Propositional satisfiability (SAT) solving has been widely applied to encode problems from model checking, formal verification, and synthesis. SAT solvers used in these applications often have to solve a sequence of closely related formulae. Incremental solving aims at reusing information learned from one formula when solving subsequent ones.

    Motivated by the success of incremental SAT solving, we consider the problem of incrementally solving a sequence of quantified Boolean formulae (QBF). We adopt ideas from incremental SAT solving and present an approach which is application-independent and hence applicable to QBF encodings of arbitrary problems. We implemented this approach in our incremental search-based QBF solver DepQBF. Modern search-based QBF solvers are based on an extension of the DPLL algorithm for SAT and apply techniques which are inspired by the success of SAT solvers.

    We report on challenges in the context of incremental QBF solving, such as maintaining the information that was learned from a particular input formula across different calls of the solver. Search-based QBF solvers implement the Q-resolution calculus to infer new clauses and cubes (i.e., conjunctions of literals) from the input formula. These clauses and cubes are added to the formula as learned constraints. The purpose of these constraints is to guide the solver away from parts of the search space which have been explored already or which have been found irrelevant to the solution of the current input formula. If the input formula is solved, then the Q-resolution derivations of the learned constraints give rise to Q-resolution proofs of (un)satisfiability. In the context of incremental solving, constraints that were learned when solving the previous input formulae might no longer be derivable from the current or future ones by Q-resolution. Hence these constraints must be discarded in order to prevent the solver from making unsound inferences by Q-resolution. Whereas known approaches from incremental SAT solving can be applied to maintain the set of learned clauses across different solver calls, the handling of the learned cubes requires techniques which are specific to QBF.

    Regarding possible applications of incremental QBF solving, we sketch application scenarios by means of examples. The API of our incremental QBF solver DepQBF provides the user with functions to manipulate the input formula by incrementally adding and deleting clauses and variables. As an additional API feature, the user can add and delete sets of clauses by means of push and pop operations. This way, the set of clauses of the input formula is organized as a stack. From the user's perspective, we argue that this design increases the usability of an incremental QBF solver considerably.
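
    The sketch below illustrates this push/pop workflow in C. It is a rough, hedged sketch: the qdpll_* function names follow DepQBF's public header qdpll.h as we understand it, but signatures, required configuration, and usage details may differ across solver versions, so treat it as illustrative rather than authoritative.

      /* Hedged sketch: incremental QBF solving via DepQBF's C API.
         Function names follow qdpll.h; exact signatures and options
         may vary between DepQBF versions. */
      #include <stdio.h>
      #include "qdpll.h"

      int main (void)
      {
        QDPLL *solver = qdpll_create ();
        /* Push/pop support must be enabled explicitly (assumption:
           this is the relevant configuration string). */
        qdpll_configure (solver, "--incremental-use");

        /* Prefix: forall u (variable 1) . exists x, y (variables 2, 3).
           A literal of 0 closes a scope or a clause. */
        qdpll_new_scope (solver, QDPLL_QTYPE_FORALL);
        qdpll_add (solver, 1);
        qdpll_add (solver, 0);
        qdpll_new_scope (solver, QDPLL_QTYPE_EXISTS);
        qdpll_add (solver, 2);
        qdpll_add (solver, 3);
        qdpll_add (solver, 0);

        /* Clauses shared by every query: (u | x) and (~u | y). */
        qdpll_add (solver, 1);  qdpll_add (solver, 2);  qdpll_add (solver, 0);
        qdpll_add (solver, -1); qdpll_add (solver, 3);  qdpll_add (solver, 0);

        /* First variant: push a frame, assert the unit clause (~x). */
        qdpll_push (solver);
        qdpll_add (solver, -2); qdpll_add (solver, 0);
        printf ("variant 1: %d\n", qdpll_sat (solver)); /* 10 = SAT, 20 = UNSAT */
        qdpll_reset (solver);   /* reset solver state, keep the formula */

        /* Second variant: discard (~x), assert (~y) instead.  Learned
           constraints that remain derivable survive the pop; the solver
           discards the rest to stay sound. */
        qdpll_pop (solver);
        qdpll_push (solver);
        qdpll_add (solver, -3); qdpll_add (solver, 0);
        printf ("variant 2: %d\n", qdpll_sat (solver));

        qdpll_delete (solver);
        return 0;
      }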

    Based on first experimental results, we conclude that incremental QBF solving has the potential to improve QBF-based workflows in many application domains.

  4. Bit-precise Quantifier Elimination

    Ajith K. John and Supratik Chakraborty (IIT Bombay, India)

    Abstract: Quantifier elimination is a key operation in word-level formal verification and analysis of hardware and software systems. For simplicity, words in hardware and software systems are often abstracted as unbounded integers, and quantifier elimination techniques for integers are used by formal verification and analysis tools. However, the results of an analysis using quantifier elimination for unbounded integers may not be sound if the underlying implementation uses fixed-width bit-vectors. Bit-precise quantifier elimination for fixed-width bit-vector constraints is therefore an important problem. Boolean combinations of linear equalities, disequalities, and inequalities over fixed-width bit-vectors, collectively called linear modular constraints, form an important fragment of the theory of fixed-width bit-vectors. The dominant technique for eliminating quantifiers from these constraints is bit-blasting, followed by bit-level quantifier elimination. Since linear modular constraints can be expressed as formulae in Presburger Arithmetic, quantifier elimination for Presburger Arithmetic can also be used. However, both approaches destroy the word-level structure of the problem and do not scale well for large bit-widths.
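
    A concrete illustration of the soundness gap (our example, not from the talk): the sentence \forall y\,\exists x\,(x > y) is valid over the unbounded integers, but false over fixed-width bit-vectors, since no value exceeds the maximum representable one. An analysis that eliminates quantifiers over the integers can therefore certify properties that fail under the actual machine semantics.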

    In our work, we present an efficient and bit-precise algorithm for quantifier elimination from conjunctions of linear modular constraints that overcomes the above drawbacks in practice. In contrast to bit-blasting and Presburger Arithmetic-based techniques, our algorithm keeps the quantifier-eliminated formulae in linear modular arithmetic. We use a layered approach: sound but relatively less complete, cheaper layers are invoked first, and expensive but more complete layers are called only when required. The cheaper layers involve simplification of the given conjunction of constraints using linear modular equalities. We also apply an efficient combinatorial heuristic to identify unconstraining inequalities and disequalities that can be dropped from the conjunction without changing the set of satisfying solutions. Finally, the cases that are not solved by these computationally cheap techniques are handled by a variant of the classical Fourier-Motzkin quantifier elimination algorithm for reals, adapted to linear modular arithmetic. We also extend this layered algorithm to work with Boolean combinations of linear modular constraints, using three approaches: a decision-diagram-based approach, an SMT-solving-based approach, and a hybrid approach that combines the strengths of the other two. Experiments on an extensive set of public-domain benchmarks demonstrate that our techniques significantly outperform alternative quantifier elimination techniques based on bit-level reasoning and Presburger Arithmetic. Our experiments also indicate that the cheaper layers eliminate almost all quantifiers in the benchmark problems, and the expensive layers are invoked only in a very small fraction of cases.
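
    To sketch the cheapest layer on a toy instance (our illustration, not an example from the paper): consider the conjunction below, where all arithmetic is modulo 2^k and \le is the unsigned bit-vector order:

        \exists x\,\big(x = y + 1 \;\wedge\; x \le z\big) \;\equiv\; y + 1 \le z

    The equality determines x, so substituting it into the inequality yields a quantifier-free linear modular constraint. The result stays at the word level; no bit-blasting is required.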

    Significant parts of this work have been presented at the International Conference on Computer Aided Verification (CAV) 2011 and the International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS) 2013.

  5. Software for Quantifier Elimination in Propositional Logic

    Eugene Goldberg and Panagiotis (Pete) Manolios (Northeastern University, USA)

    Abstract: We consider the following problem of Quantifier Elimination (QE). Given a Boolean CNF formula F in which some variables are existentially quantified, find a logically equivalent CNF formula that is free of quantifiers. Solving this problem comes down to finding a set of clauses, depending only on non-quantified variables, with the following property: adding the clauses of this set to F makes all the clauses of F containing quantified variables redundant. To solve the QE problem we developed a tool that handles a more general problem called partial QE. This tool builds a set of clauses whose addition to F renders a specified subset of the clauses with quantified variables redundant. In particular, if the specified subset contains all the clauses with quantified variables, our tool performs full QE. Importantly, in many cases partial QE is "equivalent" to QE: by adding clauses that make a small subset of the clauses of F with quantified variables redundant, one guarantees that the rest of the clauses of F with quantified variables are redundant too.
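
    A toy instance of this view of QE (a standard example, not taken from the talk): let F = \exists x\,[(x \vee a) \wedge (\neg x \vee b)]. Resolving the two clauses on x yields (a \vee b), which mentions only non-quantified variables; once (a \vee b) is added, both clauses containing x become redundant, so \exists x\,F \equiv a \vee b.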

  6. Quantified Reasoning over the Reals

    Sicun (Sean) Gao, Soonho Kong, and Edmund M. Clarke (Carnegie Mellon University, USA)

    Abstract: Exists-forall sentences over the real numbers correspond to general non-smooth optimization problems, which are bottleneck problems in numerous areas such as control, robotics, and verification. Successful algorithms for these formulas should exploit the full power of both logical decision procedures and numerical optimization algorithms. Such a combination of symbolic and numerical algorithms can be rigorously developed in the framework of delta-complete decision procedures. We suggest two concrete directions. The first can be named "satisfiability modulo optimizations": it uses optimization algorithms to handle partially universally quantified theory atoms. The second is recursive CEGAR-based pruning of the search space, which repeatedly finds overapproximations of assignments for the existentially quantified variables and underapproximations for the universally quantified ones. The two approaches complement each other, and their unification would lead to an interesting convergence of numerical optimization and decision procedures.
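
    To make the correspondence concrete (our illustration): the sentence \exists x\,\forall y\,\big(f(x) \le f(y)\big), with x and y ranging over a box, asserts that x is a global minimizer of f; deciding such exists-forall sentences, up to the numerical perturbations allowed by delta-decisions, is exactly the task of global non-convex optimization.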

  7. NLCertify: A Tool for Formal Nonlinear Optimization

    Victor Magron (LAAS-CNRS, France)

    Abstract: NLCertify is a software package for the formal certification of nonlinear inequalities involving transcendental multivariate functions. The tool combines sparse semialgebraic optimization techniques with approximation methods for transcendental functions, as well as formal features. Given a box K and a function f as input, NLCertify provides OCaml libraries that produce nonnegativity certificates for f over K, which can ultimately be proved correct inside the Coq proof assistant. One specific challenge in the field of formal nonlinear reasoning is to develop adaptive techniques that produce certificates of reduced complexity.

    The software first builds the abstract syntax tree t of f. The leaves of t are semialgebraic functions obtained by composing polynomials with basic operations (including square root, sup, inf, +, ×, −, /). The other nodes are either univariate transcendental functions (arctan, cos, exp, etc.) or basic operations. NLCertify approximates t by means of semialgebraic estimators and provides lower and upper bounds of t over K. When t represents a polynomial, the tool computes lower and upper bounds of t using a hierarchy of semidefinite programming (SDP) relaxations, via an interface with the external SDPA solver. The extension to the semialgebraic case is straightforward through the implementation of the Lasserre-Putinar lifting strategy. The user can choose to approximate transcendental functions with best uniform (minimax) polynomials as well as with maxplus estimators. Univariate minimax polynomials are obtained through an interface with the Sollya environment, which implements Remez's iterative algorithm. Alternatively, the maxplus approach builds lower (resp. upper) estimators as maxima of concave quadratic forms (resp. infima of convex quadratic forms). In this way, NLCertify computes certified global estimators from approximations of primitive functions by induction over the syntax tree t.
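
    A schematic instance of the maxplus idea (our gloss, not code or notation from the tool): if |f''| \le C on K, then for each sample point x_i, Taylor's theorem gives f(x) \ge f(x_i) + f'(x_i)(x - x_i) - \tfrac{C}{2}(x - x_i)^2 for all x \in K, so the maximum of finitely many such concave parabolas is a semialgebraic lower estimator of f that tightens as sample points are added.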

    These various approximation and optimization algorithms are placed in a unified framework comprising about 15000 lines of OCaml code and 3600 lines of Coq code. The NLCertify package successfully solves non-trivial inequalities from the Flyspeck project (essentially tight inequalities involving both semialgebraic and transcendental expressions of 6 to 12 variables) as well as significant global optimization benchmarks.