Solvable Polynomial Ideals: The Ideal Reflection for Program Analysis

This paper presents a program analysis method that generates program summaries involving polynomial arithmetic. Our approach builds on prior techniques that use solvable polynomial maps for summarizing loops. These techniques can generate all polynomial invariants for a restricted class of programs, but they cannot be applied outside of this class---for instance, to programs with nested loops, conditional branching, or unstructured control flow. No existing approach extends these prior methods to general programs. This paper bridges that gap. Instead of restricting the kinds of programs we can handle, our method abstracts every loop into a model that can be solved with prior techniques, bringing prior work on solvable polynomial maps to bear on general programs. While no method can generate all polynomial invariants for arbitrary programs, our method establishes its merit through a monotonicity result. We have implemented our techniques and tested them on a suite of benchmarks from the literature. Our experiments indicate that our techniques show promise on challenging verification tasks requiring non-linear reasoning.


INTRODUCTION
There is a long history of prior work on automatically generating polynomial invariants. One line of work in this direction seeks to generate all possible polynomial invariants for a restricted class of programs [Hrushovski et al. 2018, 2023; Humenberger et al. 2018; Kovács 2008; Rodríguez-Carbonell and Kapur 2004]. These complete methods give strong, predictable results; however, there is no obvious way to use such techniques for general programs, which may contain nested loops, branching, and unstructured control flow. Another line of research into the automatic generation of polynomial invariants aims to apply to general programs; however, such techniques are often heuristic in nature [Farzan and Kincaid 2015; Kincaid et al. 2018] or are limited in the kinds of invariants they can produce [Cachera et al. 2012; Müller-Olm and Seidl 2004; Oliveira et al. 2016; Sankaranarayanan et al. 2004], e.g., returning only polynomials up to some fixed degree.
It is impossible to fully bridge the gap between these two lines of research. No method can generate all polynomial invariants for general programs. No method can even generate all linear invariants for general programs [Müller-Olm and Seidl 2004]. However, the two lines of research raise the question: can a method that generates polynomial invariants and works for general programs provide some guarantee of predictability?
In this paper we present techniques that give a positive answer to this question. Our method builds on the algebraic program analysis framework [Kincaid et al. 2021]. Within this framework, summaries are created for larger and larger subprograms in a bottom-up manner. The essential challenge is the summarization of loops. In short, summarizing a loop amounts to over-approximating the reflexive transitive closure of a transition formula that describes the loop body. Once an appropriate loop summarization technique is constructed, the algebraic framework can employ the technique as a subroutine in the analysis of whole programs.
The technique for summarizing loops described in this paper works by abstracting a transition formula describing an arbitrary loop body to an object that we call a transition ideal. Informally, a transition ideal is a set of polynomial equations describing the transition relation of a loop body. Checking whether a non-linear transition formula (with an integrality predicate) implies a polynomial equation is undecidable in the standard model; however, Kincaid et al. [2023] developed a theory, LIRR, for which it is possible to compute all implied polynomial equations. Our work utilizes LIRR to extract transition ideals from loop-body transition formulas. The extraction of transition ideals is complete for LIRR and sound for the standard interpretation. A transition ideal can be generated as a summary of an inner loop, a summary of a program with branches, etc. To summarize the loop, we would like to compute the transitive closure of the extracted transition ideal; however, the dynamics of a transition ideal can be chaotic and difficult to capture. Thus, our key insight is to again abstract the transition ideal to some other object for which we know how to compute invariants. In Section 4 we show how, given an arbitrary transition ideal, one can compute a best abstraction as a solvable transition ideal, which we call its solvable reflection. A solvable transition ideal is a transition ideal that contains all of the defining polynomials of at least one solvable polynomial map, a class of polynomial maps that has been utilized in prior work on complete polynomial invariant generation [Amrollahi et al. 2022; Humenberger et al. 2018; Kovács 2008; Rodríguez-Carbonell and Kapur 2004]. In Section 5 we show that the method of Kauers and Zimmermann [2008] can be generalized to compute the polynomial invariants of a solvable transition ideal. The resulting polynomial invariants can be translated back to a transition formula, which gives a method for summarizing arbitrary loops. Hence, via the algebraic framework, we obtain a method to generate polynomial invariants for programs with arbitrary control flow.
While our method is not complete for arbitrary programs, we can guarantee that it is monotone. The exact definition of monotonicity is given in Section 6; informally, a program analysis is monotone if "more information in yields more information out." That is, improving the precision of a code fragment, e.g., by strengthening the precondition or adding assumptions, necessarily improves the overall analysis result. Our method is monotone because it does not extract just some solvable transition ideal from a loop body; it extracts the best solvable transition ideal. Another way to understand this result is that while our method is not complete for general programs, it is complete, in a sense, at every loop. Given a summary for a loop body, we always compute the solvable transition ideal that most closely approximates it. Thus, in the restricted case of a simple loop whose body is described by a solvable polynomial map, our method is complete; and in general, our method is monotone.
Summarizing, this paper presents a program analyzer that (1) produces non-linear summaries, (2) works for polynomial programs with arbitrary control flow, and (3) is monotone. We have implemented our method, and our experiments show that it performs comparably to the top performers on the c/ReachSafety-Loops subcategory of the Software Verification Competition. Our paper makes the following contributions:
(1) We introduce transition ideals and solvable transition ideals. We further generalize solvable transition ideals to ultimately solvable transition ideals.
(2) We show that transition ideals admit (ultimately) solvable reflections.
• We present algorithms for computing solvable linear reflections (Section 4.1) and ultimately solvable reflections (Section 4.2) of transition ideals. Linear reflections correspond to best abstractions with respect to linear simulations.
• We generalize this method to compute best abstractions with respect to polynomial simulations of bounded degree (Section 4.3).
(3) We present a complete algorithm for computing all the polynomial invariants of (ultimately) solvable transition ideals.
• Our summarization algorithm utilizes a sub-algorithm that may be of independent interest. It generalizes the technique of Kauers and Zimmermann [2008], which computes the algebraic relations of c-finite sequences of rational numbers, to compute algebraic relations of c-finite sequences over an arbitrary Q-algebra (Problem 5.1).
(4) The combination of the abstraction and transitive-closure results yields a monotone program analysis that produces polynomial invariants for polynomial programs.
[Fig. 1(b): overview of the method, processing a loop body formula through Step 1 (Section 6), Step 2 (Section 4), Step 3 (Section 5), and Step 4.]
The rest of the paper is organized as follows. Section 2 illustrates the main features of our method on a challenging example. Section 3 gives background on commutative algebra, polynomial ideals, and solvable polynomial maps. Section 4 describes the method of extracting (ultimately) solvable transition ideals from arbitrary transition ideals. Section 5 describes the method of summarizing (ultimately) solvable transition ideals. Section 6 connects these ideas to transition formulas, and shows how the methods can be integrated into a program analyzer. Section 7 presents the experimental evaluation. Section 8 discusses related work.

OVERVIEW
In this section, we illustrate our technique for program verification on the motivating example found in Fig. 1a. Two relevant features of this verification task are that (1) the program has a nested loop; and (2) verifying the assertions at the end of the program requires an invariant involving non-linear arithmetic. This combination of nested loops and non-linear arithmetic presents a significant challenge for existing methods.
Our approach to analyzing programs builds on the algebraic program analysis framework [Kincaid et al. 2021]. Within this framework, analysis proceeds by producing transition formulas for each program substructure. A transition formula, F(X, X'), is a formula over the program variables X as well as their primed counterparts X'. Such a formula represents a relation over program states, where the unprimed variables correspond to the pre-state and the primed variables correspond to the post-state. Summaries for the sequencing and branching of program substructures correspond to the transition-formula operations F(X, X') • G(X, X') ≜ ∃X''. F(X, X'') ∧ G(X'', X') and F(X, X') ⊕ G(X, X') ≜ F(X, X') ∨ G(X, X'), respectively. From these two operations, one can accurately summarize non-looping code; for example, these operations yield a transition formula for the body of the inner loop of Fig. 1a. Of course, we are interested in analyzing programs that do have loops, so an algebraic analysis for looping code must have an iteration operator, F^⊛. The benefit is that once an iteration operator is created, the analysis works for any loop, regardless of the underlying program structure.
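On finite toy state spaces, the sequencing and choice operators can be illustrated extensionally, modeling a transition formula as a set of (pre, post) pairs. This is only an illustrative sketch (the paper works with symbolic formulas, not enumerated relations), and the relations `inc` and `dbl` are hypothetical:

```python
# Toy model: a "transition formula" over one variable is modeled extensionally
# as a set of (pre, post) pairs. Sequencing (•) is relational composition
# (the existential quantifier ranges over the intermediate state);
# choice (⊕) is union.

def seq(F, G):
    """F • G: there exists an intermediate state linking F and G."""
    return {(x, x2) for (x, x1) in F for (y1, x2) in G if x1 == y1}

def choice(F, G):
    """F ⊕ G: either F or G."""
    return F | G

# two hypothetical transitions over a small universe of integers
inc = {(n, n + 1) for n in range(10)}   # x' = x + 1
dbl = {(n, 2 * n) for n in range(10)}   # x' = 2x

inc_then_dbl = seq(inc, dbl)            # x' = 2(x + 1)
either = choice(inc, dbl)

assert (3, 8) in inc_then_dbl           # 3 -> 4 -> 8
assert (3, 4) in either and (3, 6) in either
```

The iteration operator F^⊛ is precisely the operation that cannot be computed by finite enumeration in general, which is why the rest of the paper is devoted to it.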
Figure 1b gives an overview of our iteration operator. We illustrate the method by discussing the analysis of the inner loop of Fig. 1a. The goal of Step 1 (discussed in Section 6) is to extract a transition ideal from the loop body. Informally, a transition ideal I corresponds to a transition formula that can be expressed as a conjunction of polynomial equations. That is, the transition ideal of a transition formula F(X, X') is the set of polynomials that vanish on all models of F. The objective of Step 2 (presented in Section 4) of our method is to extract a solvable transition ideal from the transition ideal I_body of the inner-loop body. We say that a transition ideal I is solvable if it contains a solvable polynomial map p (a homomorphism p : Q[X] → Q[X] of a particular form, defined in Section 3.3), in the sense that x' − p(x) belongs to I for all variables x. Such a p is called a solvability witness for I. For the transition ideal I_body, the extraction step is trivial because I_body itself is solvable: the function mapping each variable to its update in the loop body (which is affine, a special case of solvable) is a witness. Therefore, the result of the second step of our method is the solvable transition ideal I_body itself.
The task of Step 3 (presented in Section 5) of our method is to "summarize" the ideal, computing the transition ideal I* = ⋂_{k=0}^∞ I^k. Thinking of I^k as the set of polynomial constraints that hold after k iterations, I* represents the constraints that hold after any number of iterations. The process of computing I* from a solvable transition ideal I is the subject of Section 5. The basic idea is that we can "solve" the solvability witness p by deriving a closed form for p^k(x) for each x ∈ X; this solution represents the values of the program variables after k iterations of the loop. We can then obtain polynomial invariants by eliminating k. This is essentially the same process as Kauers and Zimmermann [2008] and Kovács [2008]'s complete invariant generation for solvable polynomial maps; Section 5 shows how these ideas can be extended to solvable transition ideals in general.
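To make Step 3 concrete, the following sketch solves a hypothetical affine (hence solvable) map x ↦ x − y, y ↦ y + 1 (the variable names x and y are ours, not the program's), checks the closed form against iteration, and checks the polynomial invariant obtained by eliminating k (using k = y − y0):

```python
from fractions import Fraction

def step(x, y):
    # one iteration of the hypothetical affine map x -> x - y, y -> y + 1
    return x - y, y + 1

def closed_form(x0, y0, k):
    # "solving" the map: y evolves linearly, and x accumulates -(y0 + i)
    y = y0 + k
    x = x0 - k * y0 - Fraction(k * (k - 1), 2)
    return x, y

x, y = Fraction(7), Fraction(2)
for k in range(25):
    assert (x, y) == closed_form(Fraction(7), Fraction(2), k)
    # eliminating k (here k = y - y0) yields an invariant relating x and y only
    kk = y - 2
    assert x == 7 - kk * 2 - kk * (kk - 1) / 2
    x, y = step(x, y)
```

The assertion inside the loop is exactly a polynomial relation between pre-state constants and current values that holds after any number of iterations, i.e., a member of I*.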
The final step of our iteration operator (Step 4) is to translate I* back to a transition formula, F(X, X')^⊛, which serves as our summary for the loop. The example analysis of the inner loop of Fig. 1a gives the basic outline of how our method analyzes a loop. However, because the body of the inner loop implements a solvable polynomial map, Step 2 of Fig. 1b was trivial. To understand the general case, when a loop's body is not described by a solvable polynomial map, consider the outer loop of Fig. 1a. Let F_outer be a transition formula describing the outer-loop body, and let I_outer be a transition ideal obtained from F_outer. I_outer contains many polynomials that do not represent solvable assignments. For example, because the result F^⊛ of the analysis of the inner loop is an approximation of the inner loop, some variables are updated non-deterministically in I_outer; i.e., for such a variable x there is no x' − q ∈ I_outer for any polynomial q. Furthermore, another variable has a non-linear self-dependence, which is not solvable. These complications mean that we cannot capture the dynamics of the variables of the outer loop using solvable polynomial maps.
However, we can find some terms that evolve predictably. For example, two of the variables are always equal in the post-state, and the sign of another pair of terms' difference flips between the pre-state and the post-state. The evolution of these terms over a single loop iteration can be represented by a solvable transition ideal, as Fig. 2a illustrates. While this pair of a simulation and a solvable transition ideal is an abstraction of I_outer, there could be other abstractions of I_outer that are better: other polynomial terms also behave predictably but are not captured by it. However, the techniques of Section 4 do not just extract a sound abstraction; they extract a best abstraction with respect to a class of simulations. We call such a best abstraction a solvable reflection. Informally, a solvable reflection is best in that any other abstraction also abstracts the solvable reflection. In Section 4.1 we give an algorithm for producing a solvable reflection with respect to linear simulations; for this example, capturing the dynamics of the predictable linear terms is the best among all possible abstractions of linear terms with solvable transition ideals.
In Section 4.3, we extend our algorithm for finding linear simulations and give a method for producing solvable reflections with respect to polynomial simulations of bounded degree. The simulation from Fig. 2 is an example of a degree-2 simulation; i.e., the mapping for one of its variables is a degree-2 polynomial. Our extended method is able to produce the solvable reflection of I_outer with respect to degree-2 simulations. That reflection is too big to be presented here; however, it necessarily captures more of the dynamics of the outer loop than the reflection with respect to linear simulations. Furthermore, for this example, its closure, when combined with the program's initial conditions, is strong enough to prove the two assertions at the end of the program, verifying the program in Fig. 1a.
The key that makes the overall process monotone is the combination of best abstractions with complete invariant generation for solvable transition ideals. In other words, at every loop we find the strongest loop-body invariant that we know how to completely solve. This leads to the result that our iteration operator is monotone (Section 6). Moreover, in the case when the loop body is described by a solvable polynomial map, as in the analysis of the inner loop, our method essentially reduces to prior methods. Consequently, our method is complete in such a case.

Polynomials, Ideals, and Gröbner Bases
We use Q[x_1, ..., x_n] and Q[X] to denote the ring of polynomials with rational coefficients over the variables X = {x_1, ..., x_n}. A polynomial homomorphism is a ring homomorphism f : Q[X] → Q[Y] between two polynomial rings. Provided that X is finite, a polynomial homomorphism can be represented by its action on the variables X. We say that f is linear if for each x ∈ X, f(x) is either 0 or a homogeneous polynomial of degree 1. We say that f is a polynomial endomorphism if X = Y. In this paper, every polynomial ring we consider is over a finite set of variables.
Next, we highlight standard definitions for polynomial ideals; for a more in-depth presentation of these topics, Cox et al. [2015] provides a good introduction. A polynomial ideal I ⊆ Q[X] is a set that contains 0, is closed under addition, and satisfies qp ∈ I for any p ∈ I and q ∈ Q[X]. Intuitively, one can consider an ideal I a collection of polynomial equations {p = 0 : p ∈ I}. The conditions of an ideal can be read as inference rules: 0 = 0; if p = 0 and q = 0 then p + q = 0; and if p = 0 then qp = 0. For any collection of polynomials P, we use ⟨P⟩ to denote the ideal generated by P.
A monomial m is a product of variables of the form m = x_1^{a_1} ⋯ x_n^{a_n}. The total degree of m is a_1 + ⋯ + a_n. A monomial order, ≪, is a total ordering on monomials such that for any monomials m, u, v, we have 1 ≪ m, and if u ≪ v then mu ≪ mv. The leading monomial, LM(p), with respect to a given monomial order, of a polynomial p = c_1 m_1 + ⋯ + c_k m_k is the greatest monomial among m_1, ..., m_k. In this paper, we make use of two different types of monomial orders: graded orders and elimination orders. Graded orders first compare monomials by total degree, with larger degree corresponding to a larger monomial; ties in total degree are broken by some other monomial order. For example, a graded order that breaks ties using a lexicographic ordering on monomials is the graded lexicographic order. Let Y ∪ Z be a partition of the variables X. Let u = u_Y u_Z and v = v_Y v_Z be monomials with u_Y and v_Y containing only Y variables, and u_Z and v_Z containing only Z variables. Let ≪ be some monomial order. The elimination order ≪_Y defines u ≪_Y v as either (1) u_Y ≪ v_Y, or (2) u_Y = v_Y and u_Z ≪ v_Z.
Example 3.1. Consider monomials over the variables x, y, and z, with x lexicographically greater than y and y lexicographically greater than z. Let ≪_grlex be the graded lexicographic order, and let ≪_{x} be the elimination order that eliminates x and uses ≪_grlex for the remaining comparisons.
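Both orders from Example 3.1 can be sketched as comparison keys on exponent tuples (a minimal illustration; monomials over x > y > z are represented as tuples (ax, ay, az)):

```python
# Monomials over x > y > z, represented by exponent tuples (ax, ay, az).

def grlex_key(m):
    # graded lex: compare total degree first, break ties lexicographically
    return (sum(m), m)

def elim_x_key(m):
    # elimination order <<_{x}: compare the x-part first, then the
    # remaining y,z-part by grlex
    x_part = (m[0],)
    yz_part = m[1:]
    return (grlex_key(x_part), grlex_key(yz_part))

def leading_monomial(monomials, key):
    return max(monomials, key=key)

# grlex: y*z (degree 2) exceeds x (degree 1); x^2 exceeds x*y (tie broken lex)
assert grlex_key((0, 1, 1)) > grlex_key((1, 0, 0))
assert grlex_key((2, 0, 0)) > grlex_key((1, 1, 0))

# elimination: any monomial containing x dominates every x-free monomial
assert elim_x_key((1, 0, 0)) > elim_x_key((0, 5, 5))
assert leading_monomial([(1, 0, 0), (0, 5, 5)], key=elim_x_key) == (1, 0, 0)
```

The last two assertions show why elimination orders eliminate: under ≪_{x}, every monomial mentioning x is larger than every monomial in Q[y, z] alone.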
Given a finite set of polynomials P, there are algorithms [Buchberger 1976; Faugère 1999] for computing a Gröbner basis of ⟨P⟩. Furthermore, given a Gröbner basis G, there are algorithms for rewriting a polynomial by G to a normal form.
Example 3.2. Consider the graded lexicographic order over the variables x, y, and z. The set G = {x² − 1, y − 2} is a Gröbner basis for the ideal it generates (its leading monomials x² and y are coprime). Rewriting the polynomial x²y + z with respect to G yields the normal form z + 2. The combination of Gröbner bases and elimination orders results in the key property of elimination orders: let Y and Z be disjoint sets of variables, and let G ⊆ Q[Y, Z] be a Gröbner basis for ⟨G⟩ with respect to an elimination order ≪_Y; then G ∩ Q[Z] is a Gröbner basis for ⟨G⟩ ∩ Q[Z].
This key property is critical to many of our algorithms and arguments.
Let X and Y be finite sets of variables, let P ⊆ Q[X] be a set of polynomials, and let f : Q[Y] → Q[X] be a polynomial homomorphism. The inverse image of ⟨P⟩ under f is the ideal f^{-1}[⟨P⟩] ⊆ Q[Y], and it can be computed as follows. Without loss of generality, we assume X and Y are disjoint. Let G be a Gröbner basis for the ideal generated by P ∪ {y − f(y) : y ∈ Y}, with respect to an elimination order ≪_X. Define inv.image(f, P) ≜ G ∩ Q[Y].
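The inv.image recipe can be sketched with sympy (assuming sympy is available). As a hypothetical instance, take P = {0} in Q[x] and f : Q[y, z] → Q[x] with f(y) = x², f(z) = x³; the inverse image is then the ideal of all relations between x² and x³:

```python
# Sketch of inv.image via Groebner bases (assumes sympy is installed).
# Recipe: compute a Groebner basis of <P ∪ {y - f(y), z - f(z)}> for an
# order eliminating x (lex with x greatest), then intersect with Q[y, z].
from sympy import symbols, groebner

x, y, z = symbols('x y z')
G = groebner([y - x**2, z - x**3], x, y, z, order='lex')

# inv.image(f, P) = G ∩ Q[y, z]: the generators not mentioning x
inv_image = [g for g in G.exprs if x not in g.free_symbols]
assert inv_image == [y**3 - z**2]   # the only relation between x^2 and x^3
```

The surviving generator y³ − z² is exactly the algebraic relation (x²)³ = (x³)², illustrating how elimination orders compute inverse images.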
Lemma 3.1 (Inverse image). Let X and Y be finite sets of variables, let P ⊆ Q[X] be a set of polynomials, and let f : Q[Y] → Q[X] be a polynomial homomorphism. Then inv.image(f, P) is a basis for f^{-1}[⟨P⟩].
Proof. Suppose that G is a Gröbner basis for the ideal generated by P ∪ {y − f(y) : y ∈ Y} with respect to an elimination order ≪_X. One direction of the equality can be verified by observing that each element of G ∩ Q[Y] belongs to f^{-1}[⟨P⟩]; the other direction follows from the key property of elimination orders. □
For ideals I, J ⊆ Q[X], a basis for I ∩ J can be computed from bases of I and J using Gröbner-basis techniques. I + J ⊆ Q[X] is an ideal and represents the set {p + q : p ∈ I, q ∈ J}; I + J is the smallest ideal containing I and J, and I + J = ⟨I ∪ J⟩. For any ideal I ⊆ Q[X] and polynomial p ∈ Q[X], we denote the set {q : q − p ∈ I} (equivalently, {p + q : q ∈ I}) as p + I. We use Q[X]/I to denote the ring with carrier {p + I : p ∈ Q[X]}, with addition and multiplication lifted to sets.

Commutative Algebra
Define a Q-algebra to be a commutative algebra over Q; that is, an algebraic structure that is both a commutative ring and a linear space over Q. Examples of Q-algebras include Q itself, the field of algebraic numbers, Q[X], and Q[X]/I for any set of variables X and ideal I ⊆ Q[X]. For any set of variables X and Q-algebra A, a point a ∈ A^X defines an algebra homomorphism (−)_a : Q[X] → A, where x_a = a(x). For any set S ⊆ A^X, define the vanishing ideal of S to be the set of polynomials p ∈ Q[X] such that p_a = 0 for all a ∈ S. Note that for any Q-algebra A, the set of infinite sequences over A is also a Q-algebra. The multiplication and addition operations on sequences are defined pointwise. If 0_A and 1_A are the additive and multiplicative units of A, then the constant sequences of 0_A and 1_A are the additive and multiplicative units of the sequence algebra. The scalar multiplication operation on sequences applies the scalar multiplication of A to each element of the sequence.
For a Q-algebra A and a set S ⊆ A, we use span(S) to denote the smallest subspace of A that contains S, and alg(S) to denote the smallest Q-subalgebra of A that contains S.

C-finite Recurrences
A c-finite recurrence is a recurrence of the form
f(k) = c_1 f(k−1) + c_2 f(k−2) + ⋯ + c_d f(k−d)    (1)
for constants c_1, ..., c_d ∈ Q, for all k ≥ d. Given a recurrence of the form of Eq. (1), the order of the recurrence is d. The characteristic polynomial of a c-finite recurrence of the form of Eq. (1) is
x^d − c_1 x^{d−1} − c_2 x^{d−2} − ⋯ − c_d.
Every c-finite recurrence admits a closed form as a polynomial-exponential [Everest et al. 2003]. More specifically, given a recurrence of the form of Eq. (1) with c_i ∈ Q and d initial values f(0), ..., f(d−1) from some Q-algebra, the solution can be written
f(k) = p_1(k) Θ_1^k + ⋯ + p_s(k) Θ_s^k + r(k)    (2)
where each Θ_i is a complex root of the characteristic polynomial of the recurrence, each p_i is a polynomial in k with algebraic-number coefficients, and r(k) = 0 for any k ≥ d. More specifically, r(k) = 0 for any k greater than or equal to the multiplicity of 0 as a root of the characteristic polynomial; if 0 is not a root of the characteristic polynomial, then r(k) = 0 for all k ∈ N and r can be omitted from the closed form. Determining such a closed form from a recurrence is referred to as "solving" the recurrence. For example, the Fibonacci sequence satisfies the c-finite recurrence f(k) = f(k−1) + f(k−2), whose characteristic polynomial x² − x − 1 has roots φ = (1+√5)/2 and ψ = (1−√5)/2. Assuming f(0) = 0 and f(1) = 1, a solution to the Fibonacci recurrence in the form of Eq. (2) is Binet's formula, f(k) = (φ^k − ψ^k)/√5.
A solvable polynomial map is a polynomial endomorphism p : Q[X] → Q[X] for which there is a partition X = X_1 ∪ ⋯ ∪ X_m such that for each X_i and each x ∈ X_i, p(x) can be written as ℓ(X_i) + h(X_1, ..., X_{i−1}), where ℓ is a linear polynomial in the variables X_i and h is a polynomial (of arbitrary degree) in the variables X_1, ..., X_{i−1}. C-finite recurrences are equivalent to solvable polynomial maps in the sense that the sequence of values produced by iterating a solvable polynomial map satisfies a c-finite recurrence over a suitable Q-algebra. Due to this equivalence, solvable polynomial maps can effectively be "solved" in the form of Eq. (2) in the same way as c-finite recurrences. Hence the name solvable polynomial map.
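The Fibonacci example can be checked numerically: the following sketch compares iterating the recurrence against Binet's closed form (floating-point, so we round):

```python
import math

def fib_rec(k):
    # iterate the c-finite recurrence f(k) = f(k-1) + f(k-2), f(0)=0, f(1)=1
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def binet(k):
    # closed form built from the roots of the characteristic polynomial
    # x^2 - x - 1 (Eq. (2) with p_1 = 1/sqrt(5), p_2 = -1/sqrt(5))
    phi = (1 + math.sqrt(5)) / 2
    psi = (1 - math.sqrt(5)) / 2
    return (phi**k - psi**k) / math.sqrt(5)

for k in range(30):
    assert round(binet(k)) == fib_rec(k)
```

Since 0 is not a root of x² − x − 1, the correction term r(k) of Eq. (2) is absent, exactly as the text predicts.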

Transition Formulas and Linear Integer/Real Rings
Fix a set of program variables X. We use X' = {x' : x ∈ X} to denote a set of "primed" copies of the variables in X (presumed disjoint from X), and we use (−)' to denote the homomorphism mapping each variable to its primed copy. A transition formula is a formula F with free variables in X and X' (in some first-order language), with the unprimed variables representing the pre-state of some computation and the primed variables representing the post-state. For transition formulas F_1 and F_2, we use F_1 • F_2 to denote the sequential composition of F_1 and F_2. For any transition formula F and natural number k, we use F^k to denote the k-fold sequential composition of F with itself. Within this paper, we assume that transition formulas are expressed in the existential fragment of the language of non-linear mixed integer/real arithmetic (that is, the language of rational constants, addition, multiplication, an order relation, and an integrality predicate). Although this language is undecidable over the standard model, Kincaid et al. [2023] showed that ground satisfiability is decidable if we allow more general interpretations, namely over linear integer/real rings (LIRR). For our purposes, we may think of linear integer/real rings as Q-algebras that satisfy some additional axioms concerning the order relation and integrality predicate, which are not relevant to this paper. We assume LIRR as a background theory in the remainder of the paper, and use F |=_LIRR G to denote that the formula F entails the formula G modulo LIRR.
In addition to satisfiability being decidable, there is a procedure [Kincaid et al. 2023] for computing the vanishing ideal I_LIRR(F) of a formula F in the existential fragment of the language: the ideal of all polynomials p such that F |=_LIRR p = 0. See Example 6.1 for an example of I_LIRR(F) for a formula F. For any ideal I generated by polynomials p_1, ..., p_k, we use F(I) to denote the formula p_1 = 0 ∧ ⋯ ∧ p_k = 0. The choice of generators for I is irrelevant in the sense that if two sets of polynomials P and Q generate the same ideal, then ⋀_{p ∈ P} p = 0 and ⋀_{q ∈ Q} q = 0 are equivalent modulo LIRR. Note that I_LIRR and F form a Galois connection: for any transition formula F and ideal I over the free variables of F, we have F |=_LIRR F(I) if and only if I_LIRR(F) ⊇ I. This implies that (1) for formulas F and G, if F |=_LIRR G, then I_LIRR(F) ⊇ I_LIRR(G), and (2) for ideals I and J, if I ⊇ J, then F(I) |=_LIRR F(J).

Transition Ideals
The main results of this paper are concerned with transition ideals. A transition ideal is an ideal in the ring Q[X, X'] for some set of variables X. Transition ideals are not tied to the theory LIRR, but can be seen as the vanishing ideals of transition formulas, and their operations can be understood in terms of corresponding operations on transition formulas.
For transition ideals I_1 and I_2, define their sequential composition I_1 • I_2 by introducing an intermediate copy X'' of the variables and eliminating it, mirroring the • operation on transition formulas. For any transition ideal I and natural number k, define I^k to be the k-fold sequential composition of I with itself, and define I* ≜ ⋂_{k=0}^∞ I^k. Note that, since the polynomials in a transition ideal are interpreted as constraints, I* can be interpreted as the set of constraints that are common to all I^k. In particular, if F is a transition formula, then for any k ∈ N, we have I_LIRR(F)^k ⊆ I_LIRR(F^k). Define the domain of I to be dom(I) ≜ I ∩ Q[X], and the invariant domain of I to be dom*(I) = ⋃_{k=0}^∞ dom(I^k). Informally, if we think of a transition ideal as a set of polynomial equations constraining the transition between a pre-state X and post-state X', then the domain of I is the set of constraints that must be satisfied by a pre-state in order to have a successor. Pre-states that satisfy the invariant domain of I are those states which have arbitrarily long computations described by I. Given a transition ideal I, dom*(I) can be calculated as follows. By inspection of the definition of I^k • I, we see dom(I^k) ⊆ dom(I^{k+1}) for any k ≥ 1. Therefore, we have the ascending chain of ideals dom(I) ⊆ dom(I²) ⊆ dom(I³) ⊆ ⋯. By Hilbert's basis theorem this chain must stabilize at some N. That is, there exists N ≥ 1 such that dom(I^N) = dom(I^{N+1}) = dom*(I).
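The intuition that I* collects the k-independent constraints can be illustrated on a hypothetical toy map x' = x + 1, y' = y + 1 (variable names ours): constraints such as x' − x − k mention k and so vary across the powers I^k, while the combination (x' − y') − (x − y) holds for every k and hence belongs to I*:

```python
# Toy illustration of I^k and I*: the map x' = x + 1, y' = y + 1.
# After exactly k steps, the constraints x' - x - k and y' - y - k hold;
# these depend on k, so they are *not* in the closure I* = ∩_k I^k.
# The k-free combination (x' - y') - (x - y) holds for every k, so it is.

def iterate(x, y, k):
    for _ in range(k):
        x, y = x + 1, y + 1
    return x, y

x0, y0 = 3, 10
for k in range(20):
    x1, y1 = iterate(x0, y0, k)
    assert x1 - x0 - k == 0            # a constraint of I^k, mentions k
    assert (x1 - y1) - (x0 - y0) == 0  # common to all I^k: a member of I*

# a constraint of I^1 that fails for other powers:
x2, y2 = iterate(x0, y0, 2)
assert x2 - x0 - 1 != 0
```

Section 5 shows how to compute such k-free consequences symbolically, by solving the witness and eliminating k.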
We say that I is solvable if there is a solvable polynomial map p such that x' − p(x) ∈ I for each x ∈ X; such a p is a solvability witness for I. For the preceding example, the invariant domain stabilizes at N = 2, and so dom(I²) = dom*(I). Not every transition ideal is solvable: if the dynamics of some variable cannot be captured by a solvable polynomial map, then the transition ideal is not solvable. Note that a witness may have a non-linear dependence on variables of an earlier stratum and still be solvable, using a suitable variable partition. Finally, a transition ideal that is not solvable, but whose dynamics become solvable after accounting for its invariant domain, is an example of an ultimately solvable transition ideal.
For a polynomial homomorphism s : Q[Y] → Q[X], we write s̄ for the extension of s to the "doubled" vocabulary, which maps each y ∈ Y to s(y) and each y' ∈ Y' to s(y)'. We say that s is a simulation from a transition ideal I ⊆ Q[X, X'] to a transition ideal J ⊆ Q[Y, Y'] if s̄(p) ∈ I for every p ∈ J. We say that s is a linear simulation if it is linear and a simulation.
If I ⊆ Q[X, X'], J ⊆ Q[Y, Y'], and K ⊆ Q[Z, Z'] are transition ideals and s : I → J and t : J → K are simulations, then their composition s; t ≜ s ∘ t is a simulation from I to K (again noting that simulations go in the opposite direction of polynomial homomorphisms).

SOLVABLE REFLECTIONS
In this section, we show that every transition ideal I ⊆ Q[X, X'] admits a solvable reflection: there is a solvable transition ideal S and a linear simulation s : I → S that is a closer approximation of I than any other simulating solvable transition ideal.
Example 4.1. Figure 3 illustrates a transition ideal I that will be used as a running example throughout this section. One may think of I as a polynomial map (corresponding to the first four generators of I) restricted to a domain constraint (corresponding to the fifth). The map is not solvable, since one of its variables, y, exhibits a non-linear self-dependence (in fact, its update y ↦ 4y(1 − y) is a logistic map, a famous example in chaos theory that Ulam and von Neumann [1947] suggested as the basis of a pseudo-random number generator). If we restrict the transition ideal to a variable x that is simply decremented, the resulting transition ideal I_x = ⟨x' − x + 1⟩ is solvable, and we can compute a closed form for its k-th iterate: I_x^k = ⟨x' − x + k⟩. This is an instance of a solvable abstraction of I: I_x is a solvable transition ideal that approximates the dynamics of I, and the nature of the approximation is given by the inclusion homomorphism.
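The contrast between the two components can be seen by iterating a hypothetical two-variable fragment in the spirit of Example 4.1, x' = x − 1 and y' = 4y(1 − y): the x-component obeys the closed form x − k, while the logistic orbit of y shows no such regular pattern (exact rationals are used so the orbit is not an artifact of rounding):

```python
# x -> x - 1 is solvable with closed form x - k; y -> 4y(1-y) is a logistic
# map. (Hypothetical two-variable map illustrating Example 4.1.)
from fractions import Fraction

def step(x, y):
    return x - 1, 4 * y * (1 - y)

x, y = Fraction(100), Fraction(1, 3)
ys = []
for k in range(12):
    assert x == 100 - k        # the solvable restriction: I_x^k = <x' - x + k>
    ys.append(y)
    x, y = step(x, y)

# The successive differences of the logistic orbit never repeat here,
# a weak numerical hint that no simple linear rule governs y:
diffs = [ys[i + 1] - ys[i] for i in range(len(ys) - 1)]
assert len(set(diffs)) == len(diffs)
```

The point of solvable abstractions is exactly to keep the x-like directions and discard (or approximate) the y-like ones.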

Fig. 3. A solvable reflection of a transition ideal
There are other solvable abstractions of I which capture different aspects of I's dynamics. Observe that while it is challenging to understand the dynamics of two of the variables, call them a and b, individually, their difference behaves predictably: I contains the polynomial (a' − b') − (a − b) + 1, indicating that the value of (a − b) decreases by 1 at each step. Coincidentally, the dynamics of (a − b) are identical to those of x (both decrease by 1), so this information can be represented as the solvable abstraction ⟨{x ↦ a − b}, I_x⟩.
Yet another solvable abstraction ⟨s, S⟩ is pictured in Fig. 3. This abstraction is more desirable than either ⟨{x ↦ x}, I_x⟩ or ⟨{x ↦ a − b}, I_x⟩, in the sense that ⟨s, S⟩ captures the dynamics of not only (a − b) and x, but also a third term. In fact, ⟨s, S⟩ is a solvable reflection of I, in the sense that any other solvable abstraction of I factors through ⟨s, S⟩.
More precisely, a solvable reflection of a transition ideal I is a pair ⟨s, S⟩ such that:
(1) s is a linear simulation from I to S;
(2) S is solvable;
(3) for any other pair ⟨t, T⟩ satisfying (1) and (2), there exists a unique linear simulation v : S → T such that t = s; v.
Section 4.1 describes a procedure for computing solvable reflections. We then extend this result in two ways: (1) Section 4.2 generalizes from solvable to ultimately solvable reflections, and (2) Section 4.3 generalizes from linear simulations to polynomial simulations.

Computing Solvable Reflections
To begin, we give an alternate characterization of solvable transition ideals, which will serve as the basis of our algorithm for computing solvable reflections. For a transition ideal I and a set V of polynomials, define Det(I, V) to be the set of linear functionals of the variables that are "determined up to V"; i.e., f ∈ Det(I, V) exactly when I constrains the post-state value of f to be equal to some g ∈ V. Observe that I is solvable exactly when there is an ascending chain of subspaces, derived from a solvability witness, in which each stratum is determined up to the span of that stratum plus the algebra generated by the strata below it. For example, a solvability witness with ordered partition {X_1, X_2} yields the stratification X_1 ⊂ X_1 ∪ X_2 (i.e., the i-th stratum is the union of the first i cells of the ordered partition), for which Det(I, span(X_1) + alg(∅)) = span(X_1) and Det(I, span(X_1 ∪ X_2) + alg(X_1)) = span(X_1 ∪ X_2).
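In the purely affine case, the determined functionals can be computed with plain linear algebra: each affine constraint c·X' + d·X + e = 0 witnesses that the post-state functional c is determined by the pre-state, so the determined functionals (up to affine pre-state expressions) form the projection of the constraint row space onto the post-state coordinates. The following is a hedged sketch of that special case only, not the full Det computation over polynomial ideals:

```python
from fractions import Fraction

def row_space_basis(rows):
    """Gaussian elimination over Q; returns a basis of the row space."""
    rows = [list(map(Fraction, r)) for r in rows]
    basis, pivot_cols = [], []
    for r in rows:
        for b, c in zip(basis, pivot_cols):
            if r[c] != 0:
                f = r[c] / b[c]
                r = [ri - f * bi for ri, bi in zip(r, b)]
        piv = next((j for j, v in enumerate(r) if v != 0), None)
        if piv is not None:
            basis.append(r)
            pivot_cols.append(piv)
    return basis

def determined_post_functionals(constraints, n_post):
    """Project affine constraints (rows over [post | pre | 1]) onto the
    post-state block: every row c_post·X' + c_pre·X + c0 = 0 witnesses that
    the functional c_post is determined by the pre-state."""
    return row_space_basis([r[:n_post] for r in constraints])

# hypothetical transition over (x, y): only the difference x - y is
# constrained, via (x' - y') - (x - y) + 1 = 0; columns are [x', y', x, y, 1]
constraints = [[1, -1, -1, 1, 1]]
det = determined_post_functionals(constraints, 2)
assert det == [[1, -1]]          # exactly the functional x - y is determined

# adding the constraint x' - x = 0 determines x, and hence also y
constraints.append([1, 0, -1, 0, 0])
det = determined_post_functionals(constraints, 2)
assert len(det) == 2             # all of span{x, y} is now determined
```

The general algorithm replaces this projection with ideal-theoretic computations (Gröbner bases) so that the "up to V" part may contain non-linear terms from lower strata.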
Theorem 4.1.Every transition ideal has a solvable reflection.
Proof. Let I ⊆ Q[X, X'] be a transition ideal. We may calculate the solvable reflection of I as follows. For each natural number i, we define a linear subspace of polynomials S_i(I) ⊆ span(X) as follows:
• S_0(I) ≜ {0};
• S_{i+1}(I) is the largest subspace V ⊆ span(X) such that V ⊆ Det(I, V + alg(S_i(I))), i.e., the functionals determined up to themselves and the algebra generated by the stratum below.
Since span(X) is finite-dimensional, the ascending chain S_0(I) ⊆ S_1(I) ⊆ ⋯ stabilizes at a limit S_*(I).
For any i ∈ N, let d_i be the dimension of S_i(I). Choose an ordered basis f_1, ..., f_d for S_*(I) such that for each i, f_1, ..., f_{d_i} spans S_i(I) (such a basis may be obtained by choosing an arbitrary basis for S_0(I), extending it to a basis of S_1(I), then extending that basis to S_2(I), and so on). Let Y = {y_1, ..., y_d} be a set of variables disjoint from X, and let s : Q[Y] → Q[X] be the linear homomorphism with s(y_j) = f_j. First, we show that s^{-1}[I] is solvable. We construct a solvability witness as follows. For any j ≤ d, let σ(j) be the least number i such that j ≤ d_i. For each 1 ≤ i ≤ m, let Y_i = {y_j : σ(j) = i}; the sets Y_1, ..., Y_m then form the ordered partition of a solvability witness for s^{-1}[I].

Ultimately Solvable Reflections
The algorithm presented in Section 5 reveals that a condition weaker than solvability suffices for computing the Kleene closure of a transition ideal: the transition ideal need only be ultimately solvable. This raises the question of whether it is possible to compute ultimately solvable reflections of arbitrary transition ideals, and thereby obtain a more powerful algorithm for generating polynomial invariants for loops. In this section, we answer that question in the affirmative.
Let T be a transition ideal. Define a sequence ⟨s_0, T_0⟩, ⟨s_1, T_1⟩, … where s_0 is the identity function, T_0 = T, and for each i ≥ 0, s_{i+1} is the simulation component of the solvable reflection of T_i + dom*(T_i). Since, for each i, if s_i is not invertible then the dimension of T_i is strictly smaller than that of T_{i-1}, there must be some first index N such that s_N is invertible. Define u*(T) ≜ ⟨s*, s*^{-1}[T]⟩, where s* is the composition of the simulations s_0, …, s_N.
Lemma 4.3. Let T be a transition ideal. Then u*(T) is an ultimately solvable reflection of T.
Proof. Observe that s*^{-1}[T] is ultimately solvable, since (by the definition of s*) each T_{i+1} is obtained as a solvable reflection. Towards universality, we show by induction on i that (1) a compatibility property of the T_i holds and (2) for any ultimately solvable S and simulation t : T → S, there is a unique simulation t_i factoring t through the s_j. The base case i = 0 is trivial. For the induction step, suppose that (1) and (2) hold for i. By definition, s_{i+1} is the simulation component of a solvable reflection of T_i + dom*(T_i), and thus s_{i+1} is a simulation from T_{i+1} to T_i. For uniqueness, suppose t′_{i+1} is another such simulation; since t_i is the unique simulation at level i, we have t_i = s_{i+1} ∘ t′_{i+1}. Since t_{i+1} is unique such that t_i = s_{i+1} ∘ t_{i+1}, we have t_{i+1} = t′_{i+1}. Finally, we show that ⟨s*, s*^{-1}[T]⟩ is universal. Let S be ultimately solvable, and let t : T → S be a simulation. By (2), there is a unique simulation factoring t through s*, and we have the result. □

Polynomial Simulations
Here, we consider a generalization of our definition of (ultimately) solvable reflections, in which the simulation from a transition ideal to its reflection is a polynomial map rather than a linear map. Let X be a set of variables and let d ∈ N be a fixed degree bound. Let M_{≤d} be the set of monomials over X of degree at most d (excluding 1), let Y be a set of variables of cardinality equal to that of M_{≤d}, and let L_{X,d} : Y → M_{≤d} be a bijection. Observe that if Z is a set of variables and g is a polynomial homomorphism of degree at most d, then there is a unique linear polynomial homomorphism ĝ such that g = ĝ ∘ L_{X,d}. As a result, we can reduce the problem of computing reflections with respect to bounded-degree polynomial simulations to the problem of computing reflections with respect to linear simulations:
Lemma 4.4. Let T ⊆ Q[X, X′] be a transition ideal, and let d ∈ N be a fixed degree bound. Suppose that ⟨S, s⟩ is an (ultimately) solvable reflection of L_{X,d}^{-1}[T]. Then ⟨s ∘ L_{X,d}, S⟩ is an (ultimately) solvable reflection of T with respect to degree-d simulations, in the sense that (1) s ∘ L_{X,d} has degree at most d, (2) S is solvable, and (3) for any solvable transition ideal S′ and simulation t from T to S′ of degree at most d, there is a unique linear simulation v such that t = v ∘ s ∘ L_{X,d}.
Proof. Since s is linear, the polynomial homomorphism s ∘ L_{X,d} has degree d, and so ⟨s ∘ L_{X,d}, S⟩ is a degree-d solvable abstraction of T. It remains only to show that it satisfies the desired universal property.
Suppose S′ is an (ultimately) solvable transition ideal, and that t : T → S′ is a degree-d simulation. Then there exists a unique linear simulation t̂ such that t = t̂ ∘ L_{X,d}; since ⟨S, s⟩ is a reflection of L_{X,d}^{-1}[T] and S′ is (ultimately) solvable, there is a unique linear simulation v such that t̂ = v ∘ s, as required. □
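The degree-bound reduction can be seen in a small sketch: a degree-2 simulation over {x} becomes linear once we introduce a fresh variable standing for the monomial x² (playing the role of the map L_{X,d}; the variable names here are our own).

```python
# Linearization of a degree-2 simulation: introduce a fresh variable x2
# standing for the monomial x^2. Under the transition x' = x + 1, the
# post-state of every monomial is a *linear* combination of monomials.
import sympy as sp

x, x2 = sp.symbols('x x2')
xp = x + 1                      # post-state of x
x2p = sp.expand(xp**2)          # post-state of x^2 is x^2 + 2*x + 1
lin = x2p.subs(x**2, x2)        # rewrite in the monomial variables
assert lin == x2 + 2*x + 1      # linear in {x2, x, 1}
```

The same device extends to any fixed degree bound d: one variable per monomial of degree at most d, after which the linear machinery of Section 4.1 applies unchanged.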

KLEENE CLOSURE OF SOLVABLE TRANSITION IDEALS
In this section we describe how to compute T* = ⋂_{i=0}^∞ T^i when T is either a solvable or an ultimately solvable transition ideal. In doing so we introduce a sub-problem of independent interest: find the set of rational polynomials that evaluate to 0 at every position of a sequence over a Q-algebra defined by a solvable polynomial map.

Finding the Relations of a Solvable Map over a Q-algebra
Problem 5.1. Let A be a Q-algebra, let X be a finite set of variables, and let a ∈ A^X. Given a solvable map f : Q[X] → Q[X] and a basis B such that ⟨B⟩ = I_A({a}), find a basis for the ideal of polynomials that vanish at every position of the induced sequence. Intuitively, the solvable map f in Problem 5.1 defines a sequence (a, f¹_A(a), f²_A(a), …). The goal of the problem is to find a basis for the set of polynomials p ∈ Q[X] such that (p_A(a), p_A(f¹_A(a)), p_A(f²_A(a)), …) = (0, 0, 0, …). The purpose of the ideal ⟨B⟩ = I_A({a}) is to give the set of polynomial relations of the first element of the sequence, a. In Problem 5.1 we take I_A({a}) as a given, encoding the relevant information of A and a.
For example, let f be the polynomial homomorphism defined by f(x) = 2x and f(y) = 2y. Then f is a solvable map that defines a sequence over A^X. We can also view Problem 5.1 as defining |X| c-finite sequences of A elements, where each sequence is the trajectory of a particular variable. In the special case A = Q, the goal is to find the algebraic relations [Kauers and Zimmermann 2008] among those sequences.
Definition (Kauers and Zimmermann [2008]). Let K be a field and A a (commutative) K-algebra. An algebraic relation over K among a_1, …, a_n ∈ A is an element of the kernel of the K-algebra homomorphism h : K[z_1, …, z_n] → A that maps z_i to a_i.
Kauers and Zimmermann [2008, Algorithm 2] present a method to find the set of algebraic relations over Q for a set of c-finite sequences over the Q-algebra Q^ℕ; consequently, their method solves Problem 5.1 for the case A = Q. In this section we show how the method of Kauers and Zimmermann [2008] can be utilized to solve Problem 5.1 for an arbitrary Q-algebra A. First we briefly review their method.
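As a concrete, hypothetical instance of Problem 5.1 over A = Q (our own toy example, not drawn from the text): the solvable map f(x) = 2x, f(y) = 4y started at a = (1, 1) yields the sequences x_n = 2^n and y_n = 4^n, and the polynomial y − x² vanishes at every position of the induced sequence.

```python
# A toy instance of Problem 5.1 with A = Q (our own illustration):
#   f(x) = 2*x, f(y) = 4*y, initial valuation a = (1, 1).
# The polynomial y - x^2 is an algebraic relation of the whole sequence.
from sympy import symbols

x, y = symbols('x y')
p = y - x**2
a = {x: 1, y: 1}
for _ in range(12):
    assert p.subs(a) == 0                # p vanishes at this position
    a = {x: 2 * a[x], y: 4 * a[y]}       # apply the solvable map
```

A solution to Problem 5.1 for this instance would return a basis generating exactly the ideal ⟨y − x²⟩ of all such relations.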
(2) Using the algorithm of Ge [1993], compute a basis B ⊆ Q[z_0, z_1, …, z_n] for the ideal of algebraic relations over Q among the exponential sequences determined by the recurrences' eigenvalues. The resulting ideal ⟨B⟩ is the ideal of algebraic relations over Q among the input sequences.
Remark. The method presented above, as well as that of Kauers and Zimmermann [2008], only works when the characteristic polynomials of the input recurrences do not have 0 as a root. Kauers and Zimmermann [2008] note this, and correctly state that such a situation can be handled with a pre-processing step. In this paper, we are more explicit about how to handle 0 roots.
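For positive rational eigenvalues, the multiplicative relations that Ge's algorithm computes can be illustrated with linear algebra on prime-exponent vectors; this sketch is our own simplification and handles only positive rationals, whereas Ge [1993] treats general algebraic numbers.

```python
# Multiplicative relations among positive rational eigenvalues, via the
# integer kernel of their prime-exponent matrix (a special case of what
# Ge [1993]'s algorithm provides for algebraic numbers).
from sympy import Matrix, factorint

lams = [2, 4, 8]
primes = sorted({p for n in lams for p in factorint(n)})
E = Matrix([[factorint(n).get(p, 0) for p in primes] for n in lams])
# integer vectors c with prod(lam_i**c_i) = 1 form the kernel of E^T
kernel = E.T.nullspace()
for v in kernel:
    prod = 1
    for lam, c in zip(lams, v):
        prod *= lam**c
    assert prod == 1                 # each kernel vector is a relation
```

Each kernel vector, e.g. (−2, 1, 0) for 2⁻² · 4 = 1, corresponds to a binomial relation among the exponential sequences, which is how step (2) obtains generators of the relation ideal.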
The main observation that leads to our method for a general Q-algebra A is that the only "new" algebraic relations over Q for the sequences must come from the relations of the initial values of the sequence (the ideal ⟨B⟩ = I_A({a}) in the statement of Problem 5.1). This observation leads to our method for solving Problem 5.1, which we present as Algorithm 3. The recurrences arising here might have 0 as a root of their characteristic polynomials, hence the presence of the extra correction terms in Algorithm 3. These terms mean that the method of Kauers and Zimmermann [2008] cannot be directly applied. The following lemma shows how we can handle this general case.
Furthermore, suppose that there exists some N ∈ N beyond which the required property holds for all of the sequences. We use the next two lemmas to establish the needed property of the return value of Algorithm 3, and thus show that Algorithm 3 is correct. The desired result of Algorithm 3 is the ideal of algebraic relations over Q of the sequences defined in Line 2 of Algorithm 3. These sequences are defined as a sum of products of rational sequences and constant sequences of valuations. The next lemma (Lemma 5.3) shows that if we want to find the algebraic relations over Q among arbitrary rational sequences lifted to A and constant valuations, it is sufficient to consider the ideals of algebraic relations over Q among the constant valuations and among the lifted rational sequences separately. This is what makes Algorithm 3 possible: we are given the algebraic relations over Q among the constant valuations as input, and we can calculate the algebraic relations over Q among the c-finite sequences defined in Line 2 using the algorithm of Kauers and Zimmermann [2008]. The second lemma (Lemma 5.4) then applies Lemma 5.3 to the specific form of the sequences defined in Algorithm 3 to establish the correctness of the algorithm. Lemma 5.3.
Let A be a Q-algebra with additive unit 0_A and multiplicative unit 1_A. Let X = {x_1, …, x_n} and let a ∈ A^X. Combining the previous paragraphs, because the map is a homomorphism, the claim reduces to a statement about reduced forms. Let r be the given polynomial reduced by a Gröbner basis for the ideal ⟨B ∪ C⟩ under the elimination order; that is, r decomposes into components from each part, with r minimal among such representatives. We can write r by collecting monomials with respect to the order, where each m_j is a distinct monomial and m_1 is greater than m_j for j = 2, …, k. Observe that there exists some position at which the leading coefficient does not vanish: if the coefficient of m_1 were in ⟨B⟩, then there would exist a better representative r′ that does not contain the monomial m_1, contradicting the property that r is reduced. Therefore, there is some i such that the coefficient of m_1 evaluated at position i is non-zero; fix such an i. We have h(r) = (0_A)_{j=0}^∞ by assumption, and so h(r)_i = 0_A, where h(r)_i denotes the ith element of h(r). Because h(r)_i = 0_A, we may denote the polynomial in (3) as g and rewrite r using g and some r′ containing only the monomials m_2, …, m_k. However, this (nearly) contradicts the property that r is reduced: r′ does not contain the monomial m_1, and m_1 is greater than m_j for j = 2, …, k. The only way to avoid the contradiction is for m_1 to be a constant. Therefore, because r is reduced with respect to an order that eliminates the auxiliary variables, r lies in the base ring, and therefore r ∈ ⟨B⟩. However, because r must also be reduced with respect to B, we have r = 0, which establishes the claim. □ Combining Lemmas 3.1, 5.2, and 5.4 establishes the correctness of Algorithm 3.

Closing Transition Ideals
Algorithm 3 produces a polynomial ideal that summarizes the algebraic relations over Q of a solvable polynomial map. In this subsection, we show how the result of Algorithm 3 can be used to compute T* for an (ultimately) solvable transition ideal T. Recall that every solvable transition ideal T comes with a solvability witness f. The basic idea for computing T* is to use Algorithm 3 to summarize the algebraic relations over Q among the sequences defined by f. However, Problem 5.1, which is solved by Algorithm 3, is defined over a Q-algebra A. Hence, in this subsection we motivate and explain that, in order to calculate T*, the Q-algebra with which we want to instantiate Problem 5.1 is A = Q[X, X′]/dom*(T).
Before we speak of Q-algebras, we make a brief observation about the structure of solvable transition ideals. Intuitively, a solvable transition ideal can be broken into a domain part, containing only unprimed variables, and a transition part. The next lemma formalizes this point.
Lemma 5.6. Let T ⊆ Q[X, X′] be a solvable transition ideal with solvability witness f. We prove the lemma by induction on i. Let i = 1. Because T is solvable, x′ − f(x) ∈ T for each x ∈ X. Thus, a Gröbner basis for T with respect to ≪_{X′} has the stated form. Now suppose the lemma holds for i. We wish to show the lemma holds for i + 1.
From Lemma 5.6, we see that the iterated behavior of the transition ideal is mostly captured by the iterated behavior of the polynomial witness; what is missing is the dom(T^i) part. Note that dom(T^i) is an ideal for each i. If we let A be the Q-algebra Q[X, X′]/J for some ideal J, we can define a sequence like the one in Problem 5.1 that uses a solvability witness f to transition not only variables but sets of polynomials with respect to J. This can be formalized in the language of Problem 5.1 for a solvable transition ideal T ⊆ Q[X, X′] with solvability witness f. The question then is what we should take for J. We want J to contain the polynomials not captured by iterating the solvable witness; by Lemma 5.6, these are the polynomials in dom(T^i). For Problem 5.1, J needs to be fixed, so we have two obvious options: the domain of T, dom(T), or the invariant domain of T, dom*(T). The next example shows what happens if we use the domain of T, and why that gives a mismatch for computing T*.
Taking the ideal of relations at each position of the above sequence nearly yields T, T², T³, …, but not exactly. The essential problem with the previous example is that the domain of T is not stable under higher iterations of T. If, instead of J = dom(T), we had used J = dom*(T) in the previous example, then we would have equality with T^i for i ≥ 2. This observation, that equality can be recovered for some i by using the invariant domain, is our key insight for computing T*. The issue of Example 5.2 is fixed using the reasoning in the following lemma.
The right-hand side of the equation in Theorem 5.8 is computable. The term ⟨I_A(f^i(a)) : i ∈ N⟩ can be computed by Algorithm 3 with I_A({a}) instantiated to dom*(T) plus the given relations. The term ⋂_{i=0}^{N−1} T^i is a finite intersection of polynomial ideals, which can be computed via Gröbner-basis techniques. Asymptotically, the Gröbner-basis calculations for computing intersections, as well as those in Algorithm 3, dominate the running time, making the overall computation exponential.
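The ideal intersections mentioned above can be computed with Gröbner bases via the standard elimination ("t-trick") encoding: I ∩ J is obtained by eliminating a fresh variable t from t·I + (1 − t)·J. A minimal sympy sketch with the toy ideals I = ⟨x⟩ and J = ⟨y⟩ (our own example, not taken from the paper):

```python
# Intersection of polynomial ideals by Groebner-basis elimination:
#   I ∩ J = (t*I + (1-t)*J) ∩ Q[x, y], eliminating t via lex order t > x > y.
from sympy import symbols, groebner

t, x, y = symbols('t x y')
G = groebner([t * x, (1 - t) * y], t, x, y, order='lex')
intersection = [g for g in G.exprs if t not in g.free_symbols]
assert set(intersection) == {x * y}   # <x> ∩ <y> = <x*y>
```

Each pairwise intersection costs one Gröbner-basis computation under an elimination order, which is the source of the exponential worst-case bound noted above.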
LOOP SUMMARIZATION
Loop summarization is the problem of computing, for a given transition formula F representing the body of some loop, an over-approximation of the reflexive transitive closure of F. This section describes how to combine the components introduced in the previous two sections to accomplish this task. We prove the key property that our loop summarization procedure is monotone. Finally, we discuss how this procedure can be combined with other summarization techniques to enhance the ability of an algebraic program analyzer to generate non-linear loop summaries.
Our iteration operator takes a four-step approach (pictured in Fig. 1b). Given an input transition formula F: (1) compute the transition ideal I_LIRR(F) of F (using the algorithm of Kincaid et al. [2023]); (2) compute a solvable reflection ⟨S, s⟩ of I_LIRR(F) (Section 4); (3) compute S* (Section 5); (4) calculate the formula corresponding to the image of S* under s. More succinctly, we define an operator (−)^⊛ : TF → TF to be the formula corresponding to the image of S* under s, where ⟨S, s⟩ is a solvable reflection of I_LIRR(F). Naturally, one may repeat this recipe for defining a loop summarization operator by using ultimately solvable transition ideals (Section 4.2) and/or polynomial simulations (Section 4.3), and the soundness and monotonicity results that we prove below hold also for these variants.
Example 6.1. Consider the transition formula F below, and its associated transition ideal. Notice that, while F employs a rich logical language involving disjunction, negation, and inequalities, its ideal I_LIRR(F) is defined by the set of polynomials p such that F entails p = 0. A solvable reflection of I_LIRR(F) is ⟨S, s⟩, where s is the map that sends each generator either to itself or to the combination (x + y), and S is the resulting ideal.

Finally, we obtain the summary F^⊛.
Theorem 6.1 (Soundness). Let F be a transition formula. For any n ∈ N, we have F^n |=_LIRR F^⊛.
Proof. Let ⟨S, s⟩ be a solvable reflection of I_LIRR(F). We proceed by induction on n. The base case n = 0 is trivial. The induction step follows from the facts that (1) the sequential composition operator for ideals over-approximates the sequential composition operator for transition formulas, and (2) the reflection over-approximates each composition. □

The loop summarization operator defined in this paper is designed to compute polynomial invariants. Such invariants are often just one component of a correctness argument for a program; for example, a correctness argument may rely upon reasoning about inequalities, or may require a disjunctive invariant. Our loop summarization operator can be incorporated into a broader invariant generation scheme by using combinators to combine summarization operators. For instance, the simplest such combinator is a product, which combines two loop summarization operators ⊛₁ and ⊛₂ into one, ⊛₁ × ⊛₂, by taking their conjunction: F^{⊛₁×⊛₂} ≜ F^{⊛₁} ∧ F^{⊛₂}. Provided that both ⊛₁ and ⊛₂ are monotone, (1) so is their product, and (2) the resulting analysis is at least as precise as either component analysis. Another kind of summarization combinator is the refinement technique proposed by Cyphert et al. [2019]. This combinator exposes phase structure in loops, and in particular enables a "base" summarization operator that may only generate conjunctive invariants to produce disjunctive invariants. In addition to monotonicity, this combinator requires four additional axioms in order to guarantee that it improves analysis precision. The following proposition states that our summarization operator indeed satisfies these conditions.
Proposition 6.3. Let F be a transition formula. Then the following hold: for any natural number n, (F^n)^⊛ |=_LIRR F^⊛.
Let n be a natural number. Let ⟨S, s⟩ be a solvable reflection of I_LIRR(F^n), and let ⟨S′, s′⟩ be a solvable reflection of I_LIRR(F). Since I_LIRR(F)^n ⊆ I_LIRR(F^n), ⟨S, s⟩ is a solvable reflection of I_LIRR(F)^n as well, and since S′ is solvable, there is a (unique) simulation relating the two reflections.

EVALUATION
We consider two experimental questions concerning our methods for synthesizing loop invariants for general programs: (1) (Section 7.2) How do our techniques apply to the task of verifying general programs?
(2) (Section 7.3) How do our techniques for generating polynomial invariants perform on programs for which other tools guarantee completeness?
In relation to each of these questions we also want to understand the performance, both in terms of accuracy and running time, of using linear simulations as well as polynomial simulations of bounded degree for extracting solvable transition ideals from transition ideals.

Experimental Setup
Implementation. We implemented the techniques described in this paper in a tool called Abstractionator. Our implementation relies on: • Chilon and ChilonInv [Kincaid et al. 2023], for LIRR operations and generating invariant inequalities, respectively. • The FGb library [Faugère 2010] for an implementation of the F4 algorithm [Faugère 1999], which we use for computing Gröbner bases.
• Flint [The FLINT team 2023] for integer lattice computations and Arb [Johansson 2017] for numerical polynomial root finding. These operations are required to implement the algorithm of Ge [1993] used in Algorithm 3.
Abstractionator can be configured to use either linear or quadratic simulations, and either solvable or ultimately solvable transition ideals. Our testing revealed that (1) the difference between using solvable and ultimately solvable transition ideals is negligible (both in success rate and in runtime), and (2) the cost of naïvely computing the full inverse image L_{X,2}^{-1}[−] for quadratic simulations is prohibitively high. In the following, we report on two configurations of Abstractionator: USP-Lin is the product of the ChilonInv domain and the iteration operator induced by ultimately solvable linear reflections; USP-Quad is the product of USP-Lin and the iteration operator induced by solvable quadratic simulations with a single stratum (which requires computing only the affine polynomials in L_{X,2}^{-1}[−], and is therefore more tractable).
Environment. We ran all experiments on a virtual machine (using Oracle VirtualBox) with a guest OS of Ubuntu 22.04, allocated 8 GB of RAM, on a 4-core Intel Core i7-4790K CPU @ 4.00 GHz. All tools were run with the BenchExec [Wendler and Beyer 2023] tool using a time limit of 300 seconds on all benchmarks.
Benchmarks. Our 202 benchmark programs are sourced from the set of safe benchmarks in the c/ReachSafety-Loops subcategory of the Software Verification Competition (SV-COMP) [Beyer 2023]. We divided the 202 benchmarks into a loops category consisting of 176 programs and an NLA category consisting of 26 benchmarks. The NLA benchmarks are modified versions of the programs in the nla-digbench set from SV-COMP, intended to evaluate the strength of Abstractionator's ability to generate non-linear invariants. The nla-digbench programs from SV-COMP have "proposed invariants" at each loop header, as well as assertions at the end of the programs as post-conditions; we obtained the NLA suite by removing these "proposed invariants". As a result, non-linear invariants must be synthesized in order to prove the post-condition (rather than simply verifying that the proposed invariant is an invariant and implies the post-condition). The program from Fig. 1a is an example of a program in the NLA suite.
Comparison Tools. We have compared our techniques with ChilonInv [Kincaid et al. 2023], CRA [Kincaid et al. 2018], VeriAbs 1.5.1-2 [Afzal et al. 2019], and ULTIMATE Automizer 0.2.3 [Heizmann et al. 2009]. ChilonInv and CRA use a similar verification strategy of extracting implied solvable invariants of loop bodies to generate invariants of loops. VeriAbs and ULTIMATE Automizer are high performers at SV-COMP and provide context for the overall results. The strategies of ChilonInv, USP-Lin, and USP-Quad are all monotone algebraic analyses, so the refinement technique of Cyphert et al. [2019] applies. Refinement is guaranteed to improve the precision of these three techniques, and so we have employed refinement in the comparison of these three strategies.

How do our Techniques Perform on a Suite of General Verification Tasks?
Table 1 gives the results of running each tool on the program verification benchmarks. Theoretically, in terms of precision, ChilonInv ⪯ USP-Lin ⪯ USP-Quad. However, this ordering does not account for timeouts. Due to the increased power of USP-Lin and USP-Quad, we would expect the same ordering in terms of time taken, ChilonInv ⪯ USP-Lin ⪯ USP-Quad, and this is what we see reflected in Table 1. In our experiments we found USP-Lin to outperform ChilonInv in both the loops category and the NLA category in terms of programs verified, at the expense of additional running time. Theoretically, USP-Quad is stronger than ChilonInv and USP-Lin; however, the extra power comes at a price in running time. As can be seen from Table 1, USP-Quad performed worse on the loops category than ChilonInv and USP-Lin because of its number of timeouts. However, due to its strong non-linear reasoning capability, USP-Quad outperformed all the other tools on the difficult NLA benchmarks.
Theoretically, USP-Lin and USP-Quad are incomparable with the other tools. On one hand, CRA's recurrence extraction procedure is weaker than the methods in this paper. On the other hand, CRA is able to produce invariants involving exponential as well as polynomial terms, whereas the techniques in this paper can only produce invariants involving polynomial terms. VeriAbs is a portfolio of many different techniques, such as bounded model checking and k-induction. ULTIMATE Automizer implements a trace abstraction algorithm. We note that while USP-Lin outperformed VeriAbs and ULTIMATE Automizer on the loops category, VeriAbs and ULTIMATE Automizer have additional capabilities, such as the ability to produce counterexamples when an assertion does not hold. This capability is outside the scope of USP-Lin and USP-Quad. Nevertheless, we find USP-Lin to be quite competitive on our benchmark suite. It outperformed all other tools on the loops category except CRA, which it trails by only 5 examples. Moreover, because of the success of USP-Quad on the NLA suite, we find that powerful techniques for generating polynomial invariants are required to verify interesting programs found in the literature.

How do our Techniques Compare with Prior Methods for Complete Generation of Polynomial Invariants?
Table 2. USP-Lin and USP-Quad on the multi-path Aligator benchmarks.

In this subsection, we consider how our method for generating polynomial invariants (which works on general programs) compares with the method presented by Humenberger et al. [2018] (which is complete, but applies to a more limited class of programs). The method of Humenberger et al. [2018] is implemented in a tool called Aligator. Both our method of linear simulations and our method of polynomial simulations are complete for loops whose bodies are described by a solvable polynomial map.
Aligator is also complete for such loops. However, the completeness result of Humenberger et al. [2018] also extends to multi-path loops, where each branch is described by a solvable polynomial map (e.g., a loop of the form while(*){ if (*) A else B }, where A and B are described by solvable polynomial maps). On such an example, Aligator will produce all polynomial invariants of the loop, but Abstractionator cannot make the same guarantee. At the level of a loop, we abstract the loop body to a solvable transition ideal. In the case of while(*){ if (*) A else B }, we create a solvable transition ideal that abstracts both A and B, which is strictly weaker than considering A and B separately as in Humenberger et al. [2018].
We investigated how USP-Lin and USP-Quad perform on multi-path loops for which Aligator is complete but our techniques are not. A direct practical comparison between USP-Lin, USP-Quad, and Aligator is challenging because they take different input formats. However, a subset of 6 programs from the multi-path benchmark suite of Aligator are applicable to our tool. All 6 of these programs are found in the NLA suite discussed in Section 7.2. More detailed results of running USP-Lin and USP-Quad on these six examples can be found in Table 2. The completeness result of Humenberger et al. [2018] applies to these 6 programs, so given enough time Aligator would be able to verify all 6 of them. As can be seen from Table 2, USP-Lin is unable to verify any of the 6 programs; however, USP-Quad is able to verify 4 of the 6. For the other 2 programs, the reason USP-Quad is unable to succeed is that those examples perform integer division in a situation in which no round-off occurs. In these programs this property is essentially encoded with an exponential invariant, which is outside the capabilities of USP-Quad. From the results of Table 2 we conclude that, while the class of loops for which our technique is complete is a subset of Aligator's, we can still generate most of the invariants needed to prove correctness.

RELATED WORK
Polynomial abstractions of loops. The algorithm in Section 4 for computing the solvable reflection of a transition ideal can be seen both as a refinement of Kincaid et al. [2018]'s algorithm for extracting a solvable polynomial map from a transition formula, and as a generalization of Zhu and Kincaid [2021a]'s algorithm for computing deterministic affine reflections. Contrasting with Kincaid et al. [2018], our algorithm is guaranteed to find a best abstraction as a solvable transition ideal, which is essential to prove monotonicity of our analysis. Contrasting with Zhu and Kincaid [2021a], our algorithm consumes and produces transition ideals, which generalize affine relations. Amrollahi et al. [2022] consider the problem of abstracting polynomial endomorphisms by solvable polynomial maps. The technique presented in Section 4 is more general in the sense that it operates on transition ideals rather than polynomial endomorphisms. A polynomial endomorphism f : Q[X] → Q[X] can be encoded as a transition ideal generated by the polynomials {x′ − f(x) : x ∈ X}, in which case the algorithm in Section 4.1 computes a solvable transition ideal (from which we may recover a solvable polynomial map); that is, our procedure serves the same purpose as that of Amrollahi et al. [2022] for the inputs considered in that work. Moreover, our procedure provides a precision guarantee: it finds solvable reflections of transition ideals.
For example, consider the loop below (left) along with its solvable reflection (right). While the technique of Amrollahi et al. [2022] is able to identify the first polynomial in the reflection (corresponding to the update (x + y) := (x + y) + 1), it cannot find the second (w′ := w + (x + y)²), since there is a non-linear dependence of w upon the "defective" variables x and y, whose dynamics cannot be described by a solvable polynomial map. Frohn et al. [2020] consider another related problem: given a polynomial endomorphism f, is there a polynomial automorphism h such that h^{-1} ∘ f ∘ h is solvable? The procedure in Section 4 can also be used to solve this problem: if ⟨S, s⟩ is the solvable reflection of f, then such an h exists (namely, s) exactly when the ambient dimension of S is equal to that of f (and f has real eigenvalues). Section 4 generalizes this result in the sense that (1) we operate on transition ideals rather than polynomial endomorphisms and (2) should the answer to the decision problem be "no", we may still compute an abstraction of f.
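The point about defective variables can be simulated directly: below, x and y are individually non-deterministic (the concrete updates are hypothetical, chosen by us to realize the behavior), yet x + y always increments by 1, so the accumulator w is determined by the solvable combination alone.

```python
# x and y are "defective": their individual updates depend on an arbitrary
# value r. Still, s = x + y satisfies s' = s + 1, and w' = w + s^2 depends
# on the defective variables only through s, so w has a closed form.
import random

random.seed(0)
x, y, w = 3, 4, 0
s0 = x + y
for n in range(20):
    assert x + y == s0 + n                            # s is solvable
    assert w == sum((s0 + k) ** 2 for k in range(n))  # w determined by s
    r = random.randint(-5, 5)                         # havoc'd quantity
    x, y, w = x + r, y + 1 - r, w + (x + y) ** 2
```

Working over transition ideals lets the reflection retain the combination x + y (and hence w) even though no solvable polynomial map describes x and y individually.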
Complete polynomial invariant generation. Hrushovski et al. [2018, 2023], Humenberger et al. [2018], Kovács [2008], and Rodríguez-Carbonell and Kapur [2004] present complete methods for generating polynomial invariants on limited program structures. Our method matches the completeness results of these works on single loops whose bodies are described by solvable polynomial maps; however, the completeness result of each of these works covers additional situations. Hrushovski et al. [2018, 2023] present a method that is complete for generating polynomial invariants for affine programs where all branching represents non-deterministic choice. Our methods have no issue analyzing such programs. Moreover, our method can also reason about programs with polynomial assignments as well as branching with conditionals. However, even though our method can reason about general affine programs, we can only guarantee completeness in the case of a loop whose body is described by a solvable polynomial map. Kovács [2008] presents complete polynomial invariant generation for P-solvable loops. These are loops, with no branching, whose bodies have either Gosper-summable or c-finite assignments. As stated in Section 3.3, c-finite sequences are equivalent to solvable polynomial maps, and so our technique matches Kovács [2008] in that regard. However, while we always extract a solvable transition ideal from a loop, solvable transition ideals are not powerful enough to capture certain Gosper-summable examples. Thus, while our method is monotone on such examples, it does not guarantee completeness. Humenberger et al. [2018] extend Kovács [2008] to the case of multi-path loops where each branch has a body with Gosper-summable or c-finite recurrence assignments. In the Gosper-summable case the comparison is the same as for Kovács [2008]. In the multi-path c-finite case we are also not complete; however, we experimentally compare with Humenberger et al.
[2018] in Section 7.3. In neither the case of Kovács [2008] nor that of Humenberger et al. [2018] can a completeness guarantee be made for programs having branching with conditionals or programs with arbitrary loop nesting. Rodríguez-Carbonell and Kapur [2004, 2007] present a complete method for the case of a single multi-path loop where each branch has a body described by a c-finite recurrence. This matches the c-finite case of Humenberger et al. [2018], except that Rodríguez-Carbonell and Kapur [2004, 2007] have an additional restriction on the eigenvalues of the c-finite recurrences (corresponding to the Θ variables of Eq. (2)): the eigenvalues are required to be positive and rational. We have no such restriction, and so our method generalizes Rodríguez-Carbonell and Kapur [2004, 2007] in the case of a simple loop whose body is described by a c-finite recurrence. However, their completeness result goes beyond our capability in the case of a multi-path loop with positive rational eigenvalues.
Template-based methods. Another method for generating polynomial invariants is to reduce the problem to constraint solving by supposing that the invariant takes the form of some parameterized template, and solving for the parameters [Cachera et al. 2012; Chatterjee et al. 2020; Goharshady et al. 2023; Kojima et al. 2018; Müller-Olm and Seidl 2004; Oliveira et al. 2016; Sankaranarayanan et al. 2004]. These methods have the benefit of being able to handle problems with arbitrary control flow. Furthermore, they are often complete for generating invariants that fit the given template. Many template methods consider all polynomials up to some bounded degree. In such cases, when the desired polynomial is within the degree bound, template-based methods have the potential to generate invariants for general programs that our method would theoretically miss. In contrast, our method does not require a degree bound. Even for linear simulations, there is no bound on the degree of the invariants our method calculates.
Monotone algebraic program analysis. A recent line of work has used the framework of algebraic program analysis to develop program analyses with monotonicity guarantees [Kincaid et al. 2023; Silverman and Kincaid 2019; Zhu and Kincaid 2021a,b]. In particular, Kincaid et al. [2023] propose a monotone loop summarization algorithm based on the theory of linear integer/real rings. Our technique is complementary in the sense that our method computes stronger invariant polynomial equations than Kincaid et al. [2023], but cannot synthesize invariant polynomial inequalities.

DATA-AVAILABILITY STATEMENT
An implementation of Abstractionator and experimental scripts are available on Zenodo [Cyphert and Kincaid 2023].

Fig. 2. An abstraction of the outer-loop transition ideal.

Table 1. Comparison of tools on the loops and NLA benchmarks. T represents the amount of time, in seconds, taken by each tool, excluding timeouts and out-of-memory exceptions. The number of timeouts is reported in parentheses. We also experienced out-of-memory exceptions with VeriAbs, which are noted in parentheses. #P represents the number of benchmarks proved correct. The best results in each category are bolded.