Open Access

Strong-separation Logic

Published: 15 July 2022


Abstract

Most automated verifiers for separation logic are based on the symbolic-heap fragment, which disallows both the magic-wand operator and the application of classical Boolean operators to spatial formulas. This is not surprising, as support for the magic wand quickly leads to undecidability, especially when combined with inductive predicates for reasoning about data structures. To circumvent these undecidability results, we propose assigning a more restrictive semantics to the separating conjunction. We argue that the resulting logic, strong-separation logic, can be used for symbolic execution and abductive reasoning just like “standard” separation logic, while remaining decidable even in the presence of both the magic wand and inductive predicates (we consider a list-segment predicate and a tree predicate)—a combination of features that leads to undecidability for the standard semantics.


1 INTRODUCTION

Separation logic [Reynolds 2002] is one of the most successful formalisms for the analysis and verification of programs making use of dynamic resources such as heap memory and access permissions [Bornat et al. 2005; O’Hearn 2007; Calcagno et al. 2011; Berdine et al. 2011; Dudka et al. 2011; Jacobs et al. 2011; Calcagno et al. 2015]. At the heart of the success of separation logic (SL) is the separating conjunction, \( * \), which supports concise statements about the disjointness of resources. In this article, we will focus on separation logic for describing the heap in single-threaded heap-manipulating programs. In this setting, the formula \( \varphi *\psi \) can be read as “the heap can be split into two disjoint parts, such that \( \varphi \) holds for one part and \( \psi \) for the other.”

Our article starts from the following observation: The standard semantics of \( * \) allows splitting a heap into two arbitrary sub-heaps. The magic-wand operator \( {-\!\!\ast } \), which is the adjoint of \( * \), then allows adding arbitrary heaps. This arbitrary splitting and adding of heaps makes reasoning about SL formulas difficult, and quickly renders separation logic undecidable when inductive predicates for data structures are considered. For example, Demri et al. [2018] recently showed that adding only the singly linked list-segment predicate to propositional separation logic (i.e., with \( *,{-\!\!\ast } \) and classical connectives \( \wedge ,\vee ,\lnot \)) leads to undecidability.

Most SL specifications used in automated verification do not, however, make use of arbitrary heap compositions. For example, the widely used symbolic-heap fragments of separation logic considered, e.g., in Berdine et al. [2004, 2005]; Cook et al. [2011]; Iosif et al. [2013, 2014], have the following property: a symbolic heap satisfies a separating conjunction if and only if one can split the model at locations that are the values of some program variables.

Motivated by this observation, we propose a more restrictive separating conjunction that allows splitting the heap only at locations that are the values of some program variables. We call the resulting logic strong-separation logic. Strong-separation logic (SSL) shares many properties with standard separation-logic semantics; for example, the models of our logic form a separation algebra. Because the frame rule and other standard SL inference rules continue to hold for SSL, SSL is suitable for deductive Hoare-style verification à la Ishtiaq and O’Hearn [2001a]; Reynolds [2002], symbolic execution [Berdine et al. 2005], as well as abductive reasoning [Calcagno et al. 2011, 2015]. At the same time, SSL has the advantage of being decidable (in PSPACE) for a logic that combines the singly linked list-segment predicate, classical negation, and the magic wand, which is undecidable over the standard semantics [Demri et al. 2018]; moreover, the PSPACE complexity matches the complexity of the same fragment without the singly linked list-segment predicate over the standard semantics [Calcagno et al. 2001].

We now give a more detailed introduction to the contributions of this article.

The standard semantics of the separating conjunction. To be able to justify our changed semantics of \( * \), we need to introduce a bit of terminology. As standard in separation logic, we interpret SL formulas over stack–heap pairs. A stack is a mapping of the program variables to memory locations. A heap is a finite partial function between memory locations; if a memory location \( l \) is mapped to location \( l^{\prime } \), we say the heap contains a pointer from \( l \) to \( l^{\prime } \). A memory location \( l \) is allocated if the heap contains a pointer from \( l \) to some location \( l^{\prime } \). We call a location dangling if it is the target of a pointer but not allocated; a pointer is dangling if its target location is dangling.
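As an illustration, the stack–heap terminology above can be mirrored with finite dictionaries. This is a hypothetical encoding for exposition only; the article itself works with abstract partial functions.

```python
# Illustrative encoding (not from the article): a stack maps variable
# names to locations; a heap maps locations to locations, so each
# entry l -> l' of the dictionary is a pointer from l to l'.

def allocated(heap):
    """Locations that are sources of pointers, i.e., dom(h)."""
    return set(heap.keys())

def dangling(heap):
    """Locations that are pointer targets but not allocated."""
    return set(heap.values()) - set(heap.keys())

stack = {"x": 1, "y": 3, "nil": 0}
heap = {1: 2, 2: 3}   # pointers 1 -> 2 -> 3; location 3 is not allocated

assert allocated(heap) == {1, 2}
assert dangling(heap) == {3}   # 3 is the target of 2 -> 3 but unallocated
```

In this encoding, the heap above is exactly the kind of sub-heap discussed next: a list segment whose last pointer dangles at a stack-labeled location.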

Dangling pointers arise naturally in compositional specifications, i.e., in formulas that employ the separating conjunction \( * \): In the standard semantics of separation logic, a stack–heap pair \( (s,h) \) satisfies a formula \( \varphi *\psi \), if it is possible to split the heap \( h \) into two disjoint parts \( h_1 \) and \( h_2 \) such that \( (s,h_1) \) satisfies \( \varphi \) and \( (s,h_2) \) satisfies \( \psi \). Here, disjoint means that the allocated locations of \( h_1 \) and \( h_2 \) are disjoint; however, the targets of the pointers of \( h_1 \) and \( h_2 \) do not have to be disjoint.

We illustrate this in Figure 1(a), where we show a graphical representation of a stack–heap pair \( (s,h) \) that satisfies the formula \( \mathtt {ls}(x,y)*\mathtt {ls}(y,\mathsf {nil}) \). Here, \( \mathtt {ls} \) denotes the list-segment predicate. As shown in Figure 1(a), \( h \) can be split into two disjoint parts \( h_1 \) and \( h_2 \) such that \( (s,h_1) \) is a model of \( \mathtt {ls}(x,y) \) and \( (s,h_2) \) is a model of \( \mathtt {ls}(y,\mathsf {nil}) \). Now, \( h_1 \) has a dangling pointer with target \( s(y) \) (displayed by the arrow to \( y \)), while no pointer is dangling in the heap \( h \).

Fig. 1.

Fig. 1. Two models and their decomposition into disjoint submodels. The dangling arrows represent dangling pointers.

In what sense is the standard semantics too permissive? The standard semantics of \( * \) allows splitting a heap into two arbitrary sub-heaps, which may result in the introduction of arbitrary dangling pointers into the sub-heaps. We note, however, that the introduction of dangling pointers is not arbitrary when splitting the models of \( \mathtt {ls}(x,y)*\mathtt {ls}(y,\mathsf {nil}) \); there is only one way of splitting the models of this formula, namely, at the location of program variable \( y \). The formula \( \mathtt {ls}(x,y)*\mathtt {ls}(y,\mathsf {nil}) \) belongs to a certain variant of the symbolic-heap fragment of separation logic, and all formulas of this fragment have the property that their models can only be split at locations that are the values of some variables.

Standard SL semantics also allows the introduction of dangling pointers without the use of variables. Figure 1(b) shows a model of \( \mathtt {ls}(x,\mathsf {nil}) * \mathsf {t} \)—assuming the standard semantics. Here, the formula \( \mathsf {t} \) (for true) stands for any arbitrary heap. In particular, this includes heaps with arbitrary dangling pointers into the list segment \( \mathtt {ls}(x,\mathsf {nil}) \). This power of introducing arbitrary dangling pointers is what is used by Demri et al. [2018] for their undecidability proof of propositional separation logic with the singly linked list-segment predicate.

Strong-separation logic. In this article, we want to explicitly disallow the implicit sharing of dangling locations when composing heaps. We propose to parameterize the separating conjunction by the stack and exclusively allow the union of heaps that only share locations that are pointed to by the stack. For example, the model in Figure 1(b) is not a model of \( \mathtt {ls}(x,\mathsf {nil})*\mathsf {t} \) in our semantics because of the dangling pointers in the sub-heap that satisfies \( \mathsf {t} \). SSL is the logic resulting from this restricted definition of the separating conjunction.

Why should I care? We argue that SSL is a promising proposal for automated program verification:

(1) We show that the memory models of strong-separation logic form a separation algebra [Calcagno et al. 2007], which guarantees the soundness of the standard frame rule of SL [Reynolds 2002]. Consequently, SSL can potentially be used instead of standard SL in a wide variety of (semi-)automated analyzers and verifiers, including Hoare-style verification [Reynolds 2002], symbolic execution [Berdine et al. 2005], and bi-abductive shape analysis [Calcagno et al. 2011].

(2) To date, most automated reasoners for separation logic have been developed for symbolic-heap separation logic [Berdine et al. 2004, 2005; Calcagno et al. 2011; Iosif et al. 2013, 2014; Katelaan et al. 2019; Pagel et al. 2020; Katelaan and Zuleger 2020]. In these fragments of separation logic, assertions about the heap can exclusively be combined via \( * \); neither the magic wand \( {-\!\!\ast } \) nor classical Boolean connectives are permitted. We show that the strong semantics agrees with the standard semantics on symbolic heaps. For this reason, symbolic-heap SL specifications remain unchanged when switching to strong-separation logic.

(3) We establish that the satisfiability and entailment problem for full propositional separation logic with a singly linked list-segment predicate and a tree predicate is decidable in our semantics (in \( {\rm PS}{\rm\small{PACE}} \))—in stark contrast to the aforementioned undecidability result obtained by Demri et al. [2018] assuming the standard semantics.

(4) The standard Hoare-style approach to verification requires discharging verification conditions (VCs), which amounts to proving for loop-free pieces of code that a pre-condition implies some post-condition. Discharging VCs can be automated by calculi that symbolically execute the pre-condition forward, respectively, the post-condition backward, and then use an entailment checker to prove the implication. For SL, symbolic execution calculi can be formulated using the magic wand, respectively, the septraction operator. However, these operators have proven to be difficult for automated procedures: “VC-generators do not work especially well with separation logic, as they introduce magic-wand \( {-\!\!\ast } \) operators, which are difficult to eliminate” [Appel 2014, p. 131]. In contrast, we demonstrate that SSL can overcome the described difficulties. We formulate a forward symbolic execution calculus for a simple heap-manipulating programming language using SSL. In conjunction with our entailment checker, see (3), our calculus gives rise to a fully automated procedure for discharging VCs of loop-free code segments.

(5) Computing solutions to the abduction problem is an integral building block of Facebook’s Infer analyzer [Calcagno et al. 2015], required for a scalable and fully automated shape analysis [Calcagno et al. 2011]. We show how to compute explicit representations of optimal, i.e., logically weakest and spatially minimal, solutions to the abduction problem for the separation logic considered in this article. The result is of theoretical interest, as explicit representations for optimal solutions to the abduction problem are hard to obtain [Calcagno et al. 2011; Gorogiannis et al. 2011].

Contributions. Our main contributions are as follows:

(1)

We propose and motivate strong-separation logic (SSL), a new semantics for separation logic.

(2)

We present a \( {\rm PS}{\rm\small{PACE}} \) decision procedure for strong-separation logic with points-to assertions, a list-segment predicate, a tree predicate, as well as spatial and Boolean operators, i.e., \( *,{-\!\!\ast },\wedge ,\vee ,\lnot \)—a logic that is undecidable when assuming the standard semantics [Demri et al. 2018].

(3)

We present symbolic execution rules for SSL, which allow us to discharge verification conditions fully automatically.

(4)

We show how to compute explicit representations of optimal solutions to the abduction problem for the SSL considered in (2).

We strongly believe that these results motivate further research on SSL (e.g., going beyond the singly linked list-segment predicate, implementing our decision procedure and integrating it into fully automated analyzers).

Journal version. This journal version substantially extends the conference version of this article [Pagel and Zuleger 2021] in several regards:

(1)

We have added a tree predicate to the considered separation logic, while the only data-structure predicate in the conference version of this article was the list-segment predicate. We show that all our decidability and complexity results continue to hold for the extended logic. For didactic reasons, we still first introduce a separation logic that only has the list-segment predicate and develop our decision procedure for this restricted logic. After that, we extend our results to trees in a separate section (Section 3.8).

(2)

We present an operational semantics for the program statements considered in the program verification section (Section 4) and prove the correctness of our symbolic execution calculus with regard to this semantics. The operational semantics and the proof of correctness were left out in the conference version for space reasons.

(3)

We improve the exposition of the section on normal forms and abduction (Section 5) by adding the missing proofs, improving the formula that characterizes abstract memory states, and adding the result that the normal form transformation is a closure operator (see Theorem 5.4).

(4)

For the result on the closure operator to hold, we had to adapt and improve the definition of the chunk size of a formula (see Section 3.5). The new definition gives strictly smaller bounds on the number of chunks than the definition from the conference version. This improvement is not only helpful for the result on the closure operator, but will also have practical impact in future implementations of our decision procedure.

(5)

We give all proofs that were left out in the conference version due to space reasons.

Related work. The undecidability of separation logic was established already in Calcagno et al. [2001]. Since then, decision problems for a large number of fragments and variants of separation logic have been studied. Most of this work has been on symbolic-heap separation logic or other variants of the logic that neither support the magic wand nor the use of negation below the \( * \) operator. While entailment in the symbolic-heap fragment with inductive definitions is undecidable in general [Antonopoulos et al. 2014], there are decision procedures for variants with built-in lists and/or trees [Berdine et al. 2004; Cook et al. 2011; Pérez and Rybalchenko 2013; Piskac et al. 2013, 2014], support for defining variants of linear structures [Gu et al. 2016] or tree structures [Tatsuta and Kimura 2015; Iosif et al. 2014], or graphs of bounded tree width [Iosif et al. 2013; Katelaan et al. 2019]. The expressive heap logics Strand [Madhusudan et al. 2011] and Dryad [Qiu et al. 2013] also have decidable fragments, as have some other separation logics that allow combining shape and data constraints. Besides the already mentioned work [Piskac et al. 2013, 2014], these include Le et al. [2017]; Katelaan et al. [2018].

Among the aforementioned works, the graph-based decision procedures of Cook et al. [2011] and Katelaan et al. [2018] are most closely related to our approach. Note, however, that neither of these works supports reasoning about magic wands or negation below the separating conjunction.

In contrast to symbolic-heap SL, separation logics with the magic wand quickly become undecidable. Propositional separation logic with the magic wand, but without inductive data structures, was shown to be decidable in \( {\rm PS}{\rm\small{PACE}} \) in the early days of SL research [Calcagno et al. 2001]. Support for this fragment was added to CVC4 a few years ago [Reynolds et al. 2016]. Some tools have “lightweight” support for the magic wand involving heuristics and user annotations, in part motivated by the lack of decision procedures [Blom and Huisman 2015; Schwerhoff and Summers 2015].

There is a significant body of work studying first-order SL with the magic wand and unary points-to assertions, but without a list predicate. This logic was first shown to be undecidable in Brochenin et al. [2012]; a result that has since been refined, showing e.g. that while satisfiability is still in \( {\rm PS}{\rm\small{PACE}} \) if we allow one quantified variable [Demri et al. 2014], two variables already lead to undecidability, even without the separating conjunction [Demri and Deters 2014]. Echenim et al. [2019] have recently addressed the satisfiability problem of SL with \( \exists ^{*}\forall ^{*} \) quantifier prefix, separating conjunction, magic wand, and full Boolean closure, but no inductive definitions. The logic was shown to be undecidable in general (contradicting an earlier claim [Reynolds et al. 2017]), but decidable in \( {\rm PS}{\rm\small{PACE}} \) under certain restrictions.

Above, we have focused on the related work with regard to automated decision procedures. Here, we also mention several projects that target general and powerful frameworks rather than automation. Iris [Jung et al. 2018], FCSL [Sergey et al. 2015], and TaDA [da Rocha Pinto et al. 2014] provide frameworks for the verification of fine-grained concurrent programs, supporting higher-order functions, concurrency, ownership, and rely-guarantee reasoning. The separation logics employed in these frameworks are parameterized by the underlying separation algebras, respectively, resource monoids, which can be specified by the user. Iris [Jung et al. 2018] and FCSL [Sergey et al. 2015] have been formalized in the Coq proof assistant, ensuring the soundness of the meta-theory. Because the cited approaches provide versatile and expressive frameworks, the involved logics are typically not decidable and proofs need to be done manually (respectively, interactively, making use of Coq proof tactics), whereas we propose in this article a specific separation logic and establish decision procedures and complexity results for the considered logic. We further mention that our logic is a classical separation logic, allowing one to prove the absence of memory leaks. In contrast, Iris uses an intuitionistic semantics, which does not allow proving the absence of resources; this design choice has been made for principled reasons, because the later modality supported by Iris does not allow incorporating the law of excluded middle [Jung et al. 2018]. Relatedly, TaDA [da Rocha Pinto et al. 2014] employs predicates that are upwards-closed sets of worlds, i.e., an intuitionistic semantics. At the present stage, it is difficult to determine whether the TaDA framework could be adapted to classical semantics. We finally mention the flow framework [Krishna et al. 2018, 2020], which identifies separation algebras that can be used for reasoning about global graph properties such as reachability, acyclicity, and so on, in a modular way. The goal of the flow framework was to identify the mathematical foundations for such reasoning while leaving the (promising) automation of flow-based proofs for future work. With regard to automation, we remark that the general framework will likely not admit decidability results without putting further restrictions on the considered separation algebras.

Outline. In Section 2, we introduce two semantics of propositional separation logic, the standard semantics and our new strong-separation semantics. We show the decidability of the satisfiability and entailment problems of SSL with lists and trees in Section 3 (we first show the decidability for SSL with lists but without trees, and then extend our results to trees in Section 3.8). We present symbolic execution rules for SSL in Section 4. We show how to compute explicit representations of optimal solutions to the abduction problem in Section 5. We conclude in Section 6.


2 STRONG- AND WEAK-SEPARATION LOGIC

2.1 Preliminaries

We denote by \( \left|X\right| \) the cardinality of the set \( X \). Let \( f \) be a (partial) function. Then, \( \operatorname{dom}(f) \) and \( \operatorname{img}(f) \) denote the domain and image of \( f \), respectively. We write \( \left|f\right| := \left|\operatorname{dom}(f)\right| \) and \( f(x) = \bot \) for \( x \not\in \operatorname{dom}(f) \). We frequently use set notation to define and reason about partial functions: \( f := \left\lbrace x_1 \mapsto y_1, \ldots , x_k \mapsto y_k\right\rbrace \) is the partial function that maps \( x_i \) to \( y_i \), \( 1 \le i \le k \), and is undefined on all other values; \( f^{-1}(b) \) is the set of all elements \( a \) with \( f(a) = b \); we write \( f \cup g \), respectively, \( f \cap g \) for the union, respectively, intersection of partial functions \( f \) and \( g \), provided that \( f(a)=g(a) \) for all \( a \in \operatorname{dom}(f) \cap \operatorname{dom}(g) \); similarly, \( f \subseteq g \) holds if \( \operatorname{dom}(f)\subseteq \operatorname{dom}(g) \). Given a partial function \( f \), we denote by \( f[x/v] \) the updated partial function in which \( x \) maps to \( v \), i.e., \( \begin{equation*} f[x/v](y) = {\left\lbrace \begin{array}{ll} v, & \text{if } y=x, \\ f(y) & \text{otherwise}, \end{array}\right.} \end{equation*} \) where we use \( v = \bot \) to express that the updated function \( f[x/v] \) is undefined for \( x \).
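The update notation \( f[x/v] \) can be mirrored with dictionaries, using None as a stand-in for \( \bot \). This is an illustrative sketch of the notation only, not part of the article's formal development.

```python
def update(f, x, v):
    """f[x/v]: the partial function that maps x to v and agrees with f
    elsewhere; v = None (our stand-in for bottom) removes x from the domain."""
    g = dict(f)  # copy, so f itself stays unchanged
    if v is None:
        g.pop(x, None)
    else:
        g[x] = v
    return g

f = {"x": 1, "y": 2}
assert update(f, "x", 5) == {"x": 5, "y": 2}          # overwrite x
assert update(f, "z", 7) == {"x": 1, "y": 2, "z": 7}  # extend the domain
assert update(f, "y", None) == {"x": 1}               # make f undefined on y
assert f == {"x": 1, "y": 2}                          # original untouched
```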

Sets and ordered sequences are denoted in boldface, e.g., \( \mathbf {x} \). To list the elements of a sequence, we write \( \langle x_1,\ldots ,x_k\rangle \).

We assume a linearly ordered infinite set of variables \( \mathbf {Var} \) with \( \mathsf {nil}\in \mathbf {Var} \) and denote by \( \max (\mathbf {v}) \) the maximal variable among a set of variables \( \mathbf {v} \) according to this order. In Figure 2, we define the syntax of the separation-logic fragment we study in this article. The atomic formulas of our logic are the empty-heap predicate \( \mathbf {emp} \), points-to assertions \( x \mapsto y \), the list-segment predicate \( \mathtt {ls}(x,y) \), equalities \( x = y \), and disequalities \( x \ne y \); in all these cases, \( x,y \in \mathbf {Var} \). (We note that for the moment our separation logic does not include a tree predicate. We defer this extension to Section 3.8.) Formulas are closed under the classical Boolean operators \( \wedge ,\vee ,\lnot \) as well as under the separating conjunction \( * \) and the existential magic wand, also called septraction, \( {-\!\!\circledast } \) (see, e.g., Brochenin et al. [2012]). We collect the set of all SL formulas in \( \mathbf {SL} \). We also consider derived operators and formulas, in particular the separating implication (or magic wand), \( {-\!\!\ast } \), defined by \( \varphi {-\!\!\ast }\psi := \lnot (\varphi {-\!\!\circledast }\lnot \psi) \). We also use true, defined as \( \mathsf {t}:= \mathbf {emp}\vee \lnot \mathbf {emp} \). Finally, for \( \Phi =\left\lbrace \varphi _1,\ldots ,\varphi _n\right\rbrace \), we set \( \mathop {{\ast }}\Phi := \varphi _1 * \varphi _2 * \cdots * \varphi _n \), if \( n \ge 1 \), and \( \mathop {{\ast }}\Phi := \mathbf {emp} \), if \( n = 0 \). By \( \mathsf {fvs}(\varphi) \), we denote the set of (free) variables of \( \varphi \).
We define the size of the formula \( \varphi \) as \( \left|\varphi \right| = 1 \) for atomic formulas \( \varphi \), \( \left|\varphi _1 \times \varphi _2\right| := \left|\varphi _1\right|+\left|\varphi _2\right|+1 \) for \( \times \in \left\lbrace \wedge ,\vee ,*,{-\!\!\circledast }\right\rbrace \) and \( \left|\lnot \varphi _1\right|:=\left|\varphi _1\right|+1 \).

Fig. 2.

Fig. 2. The syntax of separation logic with list segments.

2.2 Two Semantics of Separation Logic

Memory model. \( \mathbf {Loc} \) is an infinite set of heap locations. A stack is a partial function \( s:\mathbf {Var}\rightharpoonup \mathbf {Loc} \). A heap is a partial function \( h:\mathbf {Loc}\rightharpoonup \mathbf {Loc} \). A model is a stack–heap pair \( (s,h) \) with \( \mathsf {nil}\in \operatorname{dom}(s) \) and \( s(\mathsf {nil}) \notin \operatorname{dom}(h) \). We let \( \mathsf {locs}(h) := \operatorname{dom}(h) \cup \operatorname{img}(h) \). A location \( \ell \) is dangling if \( \ell \in \operatorname{img}(h)\setminus \operatorname{dom}(h) \). We write \( \mathbf {S} \) for the set of all stacks and \( \mathbf {H} \) for the set of all heaps.

Two notions of disjoint union of heaps. We write \( h_1 +h_2 \) for the union of disjoint heaps, i.e., \( \begin{equation*} h_1 +h_2 := {\left\lbrace \begin{array}{ll} h_1 \cup h_2, & \text{if } \operatorname{dom}(h_1)\cap \operatorname{dom}(h_2)=\emptyset ,\\ \bot ,& \text{otherwise.} \end{array}\right.} \end{equation*} \) This standard notion of disjoint union is commonly used to assign semantics to the separating conjunction and magic wand. It requires that \( h_1 \) and \( h_2 \) are domain-disjoint, but does not impose any restrictions on the images of the heaps. In particular, the dangling pointers of \( h_1 \) may alias arbitrarily with the domain of \( h_2 \) and vice-versa.

Let \( s \) be a stack. We write \( h_1 \uplus ^{s}h_2 \) for the disjoint union of \( h_1 \) and \( h_2 \) that restricts aliasing of dangling pointers to the locations in stack \( s \). This yields an infinite family of union operators: one for each stack. Formally, \( \begin{equation*} h_1 \uplus ^{s}h_2 := {\left\lbrace \begin{array}{ll} h_1 +h_2,& \text{if } (\operatorname{dom}(h_1)\cap \operatorname{img}(h_2)) \cup (\operatorname{dom}(h_2)\cap \operatorname{img}(h_1)) \subseteq \operatorname{img}(s), \\ \bot , & \text{otherwise.} \end{array}\right.} \end{equation*} \) Intuitively, \( h_1\uplus ^{s}h_2 \) is the (disjoint) union of heaps whose dangling pointers may only point to the domain of the other heap in case the targets of these dangling pointers are in the image of \( s \). Note that if \( h_1\uplus ^{s}h_2 \) is defined then \( h_1+h_2 \) is defined, but not vice-versa.
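The contrast between \( + \) and \( \uplus ^{s} \) can be made concrete in a dictionary encoding of stacks and heaps. This is an illustrative sketch under that assumed encoding, with None standing in for \( \bot \); it is not the article's formal definition.

```python
def union(h1, h2):
    """Standard disjoint union h1 + h2: defined iff the domains are disjoint."""
    if set(h1) & set(h2):
        return None  # undefined (bottom)
    return {**h1, **h2}

def strong_union(s, h1, h2):
    """h1 union^s h2: additionally, locations shared between the domain of
    one heap and the image of the other must lie in the image of the stack s."""
    crossing = (set(h1) & set(h2.values())) | (set(h2) & set(h1.values()))
    if not crossing <= set(s.values()):
        return None  # undefined: heaps meet at a location not named by s
    return union(h1, h2)

s = {"x": 1, "y": 3, "nil": 0}
h1 = {1: 2, 2: 3}   # list segment from 1 to 3; 3 is a dangling target
h2 = {3: 0}         # pointer from 3 to nil

# The heaps meet only at location 3 = s(y), so both unions are defined.
assert union(h1, h2) == {1: 2, 2: 3, 3: 0}
assert strong_union(s, h1, h2) == {1: 2, 2: 3, 3: 0}

# Without y on the stack, location 3 is no longer a stack value:
# + is still defined, but union^s is not.
s2 = {"x": 1, "nil": 0}
assert union(h1, h2) is not None
assert strong_union(s2, h1, h2) is None
```

The example matches the discussion of Figure 1(a): the splitting point of the two list segments must carry a stack variable for the strong composition to be defined.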

Just like the standard disjoint union \( + \), the operator \( \uplus ^{s} \) gives rise to a separation algebra, i.e., a cancellative, commutative partial monoid [Calcagno et al. 2007]:

Lemma 2.1.

Let \( s \) be a stack and let \( u \) be the empty heap (i.e., \( \operatorname{dom}(u)=\emptyset \)). The triple \( (\mathbf {H},\uplus ^{s},u) \) is a separation algebra.

Proof.

Trivially, the operation \( \uplus ^{s} \) is commutative and associative with unit \( u \). Let \( h\in \mathbf {H} \). Consider \( h_1, h_2 \in \mathbf {H} \) such that \( h\uplus ^{s}h_1 = h\uplus ^{s}h_2 \ne \bot \). Since the domain of \( h \) is disjoint from the domains of \( h_1 \) and \( h_2 \), it follows that for all \( x \), \( h_1(x)=h_2(x) \) and thus \( h_1=h_2 \). As \( h_1 \) and \( h_2 \) were chosen arbitrarily, we obtain that the function \( h\uplus ^{s}(\cdot) \) is injective. Consequently, the monoid is cancellative.□

Weak- and strong-separation logic. Both \( + \) and \( \uplus ^{s} \) can be used to give a semantics to the separating conjunction and septraction. We denote the corresponding model relations \( {\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \) and \( {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \) and define them in Figure 3. Where the two semantics agree, we simply write \( \models \).

Fig. 3.

Fig. 3. The standard, “weak” semantics of separation logic, \( {\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \) , and the “strong” semantics, \( {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \) . We write \( \models \) when there is no difference between \( {\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \) and \( {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \) .

In both semantics, \( \mathbf {emp} \) only holds for the empty heap, and \( x=y \) holds for the empty heap when \( x \) and \( y \) are interpreted by the same location. Points-to assertions \( x\mapsto y \) are precise, i.e., they only hold in singleton heaps. (Following Ishtiaq and O’Hearn [2001b], it is, of course, possible to express intuitionistic points-to assertions by \( x \mapsto y * \mathsf {t} \).) The list-segment predicate \( \mathtt {ls}(x,y) \) holds in possibly empty list segments of pointers from \( s(x) \) to \( s(y) \). The semantics of the Boolean connectives are standard. The semantics of the separating conjunction, \( * \), and septraction, \( {-\!\!\circledast } \), differ based on the choice of \( + \) vs. \( \uplus ^{s} \) for combining disjoint heaps. In the former case, denoted \( {\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \), we get the standard semantics of separation logic (cf. Reynolds [2002]). In the latter case, denoted \( {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \), we get a semantics that imposes stronger requirements on sub-heap composition: Sub-heaps may only overlap at locations that are stored in the stack.

Because the semantics \( {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \) imposes stronger constraints, we will refer to the standard semantics \( {\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \) as the weak semantics of separation logic and to the semantics \( {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \) as the strong semantics of separation logic. Moreover, we use the terms weak-separation logic (WSL) and SSL to distinguish between SL with the semantics \( {\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \) and \( {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \).

Example 2.2.

Let \( \varphi := a \ne b * (\mathtt {ls}(a,\mathsf {nil}) * \mathsf {t}) \wedge (\mathtt {ls}(b,\mathsf {nil}) * \mathsf {t}) \). In Figure 4, we show two models of \( \varphi \). On the left, we assume that \( a,b \) are the only program variables, whereas on the right, we assume that there is a third program variable \( c \).

Fig. 4.

Fig. 4. Two models of \( (\mathtt {ls}(a,\mathsf {nil}) * \mathsf {t}) \wedge (\mathtt {ls}(b,\mathsf {nil}) * \mathsf {t}) \) for a stack with domain \( a,b \) and a stack with domain \( a,b,c \).

Note that the latter model, where the two lists overlap, is possible in SSL only because the lists come together at the location labeled by \( c \). If we removed variable \( c \) from the stack, then the model would no longer satisfy \( \varphi \) according to the strong semantics, because \( \uplus ^{s} \) would no longer allow splitting the heap at that location. Conversely, the model would still satisfy \( \varphi \) with standard semantics.

This is a feature rather than a bug of SSL: Without having a variable \( c \), the stack–heap pair on the right of Figure 4 is not a model of \( \varphi \). However, an SSL user is able to explicitly allow such models by adding a (ghost) variable \( c \) to the set of program variables.

Isomorphism. For later use, we state that SL formulas cannot distinguish isomorphic models:

Definition 2.3.

Let \( (s,h),(s^{\prime },h^{\prime }) \) be models. \( (s,h) \) and \( (s^{\prime },h^{\prime }) \) are isomorphic, \( (s,h)\cong (s^{\prime },h^{\prime }) \), if there exists a bijection \( \sigma :(\mathsf {locs}(h) \cup \operatorname{img}(s)) \rightarrow (\mathsf {locs}(h^{\prime }) \cup \operatorname{img}(s^{\prime })) \) such that (1) for all \( x \), \( s^{\prime }(x) = \sigma (s(x)) \) and (2) \( h^{\prime } = \lbrace \sigma (l) \mapsto \sigma (h(l)) \mid l \in \operatorname{dom}(h)\rbrace \).
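Definition 2.3 can be checked directly for a candidate bijection \( \sigma \). The following is a hypothetical sketch over a dictionary encoding of models (stacks and heaps as dicts, \( \sigma \) as a dict); none of these names come from the article.

```python
def locs(h):
    """locs(h) = dom(h) union img(h)."""
    return set(h) | set(h.values())

def is_isomorphism(sigma, s1, h1, s2, h2):
    """Check that sigma witnesses (s1,h1) isomorphic to (s2,h2) per Def. 2.3."""
    dom = locs(h1) | set(s1.values())
    rng = locs(h2) | set(s2.values())
    # sigma must be a bijection between the two relevant location sets
    if set(sigma) != dom or set(sigma.values()) != rng:
        return False
    if len(set(sigma.values())) != len(sigma):
        return False  # not injective
    # (1) the stacks agree up to sigma
    if set(s1) != set(s2) or any(s2[x] != sigma[s1[x]] for x in s1):
        return False
    # (2) the heaps agree up to sigma
    return h2 == {sigma[l]: sigma[h1[l]] for l in h1}

# Two renamings of the same two-pointer list ending in nil.
s1, h1 = {"nil": 0, "x": 1}, {1: 2, 2: 0}
s2, h2 = {"nil": 9, "x": 7}, {7: 8, 8: 9}
assert is_isomorphism({0: 9, 1: 7, 2: 8}, s1, h1, s2, h2)
assert not is_isomorphism({0: 9, 1: 8, 2: 7}, s1, h1, s2, h2)
```

By Lemma 2.4, a successful check certifies that the two models satisfy exactly the same SSL formulas.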

Lemma 2.4.

Let \( (s,h), (s^{\prime },h^{\prime }) \) be models with \( (s,h)\cong (s^{\prime },h^{\prime }) \) and let \( \varphi \in \mathbf {SL} \). Then \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \) iff \( (s^{\prime },h^{\prime }) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \).

Proof.

See Appendix.□

Satisfiability and Semantic Consequence. We define the notions of satisfiability and semantic consequence parameterized by a finite set of variables \( \mathbf {x} \subseteq \mathbf {Var} \). For a formula \( \varphi \) with \( \mathsf {fvs}(\varphi) \subseteq \mathbf {x} \), we say that \( \varphi \) is satisfiable w.r.t. \( \mathbf {x} \) if there is a model \( (s,h) \) with \( \operatorname{dom}(s) = \mathbf {x} \) such that \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \). We say that \( \varphi \) entails \( \psi \) w.r.t. \( \mathbf {x} \), in signs \( \varphi {\vert\!\mathop{=}\limits^{ \mathrm {st}}_{\mathbf {x}}} \psi \), if \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \) implies \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \psi \) for all models \( (s,h) \) with \( \operatorname{dom}(s) = \mathbf {x} \).

2.3 Correspondence of Strong and Weak Semantics on Positive Formulas

We call an SL formula \( \varphi \) positive if it does not contain \( \lnot \). Note that, in particular, this implies that \( \varphi \) does not contain the magic wand \( {-\!\!\ast } \) or the atom \( \mathsf {t} \).

In models of positive formulas, all dangling locations are labeled by variables:

Lemma 2.5.

Let \( \varphi \) be positive and \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \varphi \). Then, \( (\operatorname{img}(h) \setminus \operatorname{dom}(h)) \subseteq \operatorname{img}(s) \).

Proof.

We prove the following stronger statement by structural induction on \( \varphi \): For every model \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \varphi \), we have that

(1)

\( (\operatorname{img}(h) \setminus \operatorname{dom}(h)) \subseteq \operatorname{img}(s) \),

(2)

every join point is labelled by a variable, i.e., \( \left|h^{-1}(\ell)\right| \ge 2 \) implies that \( \ell \in \operatorname{img}(s) \), and

(3)

every source is labelled by a variable, i.e., \( (\operatorname{dom}(h) \setminus \operatorname{img}(h)) \subseteq \operatorname{img}(s) \).

The proof is straightforward except for the \( {-\!\!\circledast } \) case: Assume \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \varphi _1 {-\!\!\circledast }\varphi _2 \). Then there is a \( h_0 \) with \( (s,h_0) {\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \varphi _1 \) and \( (s,h_0 +h) {\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \varphi _2 \). By induction assumption the claim holds for \( (s,h_0) \) and \( (s,h_0 +h) \). We note that every join point of \( h \) is also a join point of \( h_0 +h \) and hence labelled by a variable. We now verify that every pointer that is dangling in \( h \) is either also dangling in \( h_0 +h \) or is a join point in \( h_0 +h \) or is pointing to a source of \( h_0 \); in all cases the target of the dangling pointer is labelled by a variable. Finally, a source of \( h \) is either also a source of \( h_0 +h \) or is pointed to by a dangling pointer of \( h_0 \); in both cases the source is labelled by a variable.□
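The three invariants established in the proof can be phrased as a small executable check. This is our own encoding (stacks and heaps as dicts over integer locations), intended only to make the statements concrete:

```python
from collections import Counter

def positive_model_invariants(s, h):
    """Check the three invariants from the proof of Lemma 2.5:
    (1) dangling locations, (2) join points, and (3) sources
    are all labeled by stack variables."""
    labeled = set(s.values())
    preds = Counter(h.values())  # number of pointers into each location
    dangling_ok = (set(h.values()) - set(h)) <= labeled            # (1)
    joins_ok = all(l in labeled for l, k in preds.items() if k >= 2)  # (2)
    sources_ok = (set(h) - set(h.values())) <= labeled             # (3)
    return dangling_ok and joins_ok and sources_ok
```

For instance, the list model \( s = \lbrace x \mapsto 1, \mathsf {nil} \mapsto 0\rbrace \), \( h = \lbrace 1 \mapsto 2, 2 \mapsto 0\rbrace \) passes the check, while replacing the last pointer target with an unlabeled location 3 violates invariant (1).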

As every location shared by heaps \( h_1 \) and \( h_2 \) in \( h_1 +h_2 \) is either dangling in \( h_1 \) or in \( h_2 \) (or in both), the operations \( + \) and \( \uplus ^{s} \) coincide on models of positive formulas:

Lemma 2.6.

Let \( (s,h_{1}){\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \varphi _1 \) and \( (s,h_{2}){\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \varphi _2 \) for positive formulas \( \varphi _1,\varphi _2 \). Then \( h_1+h_2\ne \bot \) iff \( h_1 \uplus ^{s}h_2\ne \bot \).

Proof.

If \( h_1\uplus ^{s}h_2\ne \bot \), then \( h_1 +h_2 \ne \bot \) by definition.

Conversely, assume \( h_1+h_2\ne \bot \). We need to show that \( \mathsf {locs}(h_1)\cap \mathsf {locs}(h_2) \subseteq \operatorname{img}(s) \). To this end, let \( \ell \in \mathsf {locs}(h_1)\cap \mathsf {locs}(h_2) \). Then there exists an \( i \in \left\lbrace 1,2\right\rbrace \) such that \( \ell \in \operatorname{img}(h_i)\setminus \operatorname{dom}(h_i) \)—otherwise, \( \ell \) would be in \( \operatorname{dom}(h_1)\cap \operatorname{dom}(h_2) \) and \( h_1+h_2=\bot \). By Lemma 2.5, we thus have \( \ell \in \operatorname{img}(s) \).□

Since the semantics coincide on atomic formulas by definition, and since \( + \) and \( \uplus ^{s} \) coincide on models of positive formulas (Lemma 2.6), we can easily show that they coincide on all positive formulas:

Lemma 2.7.

Let \( \varphi \) be a positive formula and let \( (s,h) \) be a model. Then, \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \varphi \) iff \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \).

Proof.

We proceed by structural induction on \( \varphi \). If \( \varphi \) is atomic, then there is nothing to show. For \( \varphi = \varphi _1*\varphi _2 \) and \( \varphi = \varphi _1{-\!\!\circledast }\varphi _2 \), the claim follows from the induction hypotheses and Lemma 2.6. For \( \varphi =\varphi _1\wedge \varphi _2 \) and \( \varphi =\varphi _1\vee \varphi _2 \), the claim follows immediately from the induction hypotheses and the semantics of \( \wedge \), \( \vee \).□

Lemma 2.7 implies that the two semantics coincide on the popular symbolic-heap fragment of separation logic. Further, by contraposition of Lemma 2.7, \( \left\lbrace (s,h)\mid (s,h){\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \varphi \right\rbrace \ne \lbrace (s,h)\mid (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \rbrace \) implies that \( \varphi \) contains negation, either explicitly or in the form of a magic wand or \( \mathsf {t} \).

We remark that formula \( \varphi \) in Example 2.2 uses only \( \mathsf {t} \) but neither \( \lnot \) nor \( {-\!\!\ast } \). Hence, adding \( \mathsf {t} \) to the positive fragment is already sufficient to invalidate Lemma 2.7; because \( \mathsf {t} \) can be defined from \( \lnot \), respectively, \( {-\!\!\ast } \), we cannot add either operator to the positive fragment without invalidating Lemma 2.7. Moreover, Lemma 2.7 does not hold under intuitionistic semantics: Recall that the meaning of a formula \( \zeta \) under intuitionistic semantics is equivalent to the meaning of \( \zeta * \mathsf {t} \) under classical semantics [Reynolds 2002]. Hence, the meaning of the formula \( \psi := a \ne b * (\mathtt {ls}(a,\mathsf {nil}) \wedge \mathtt {ls}(b,\mathsf {nil})) \) under intuitionistic semantics is equivalent to formula \( \varphi \) in Example 2.2 under classical semantics. As \( \psi \) is from the positive fragment, Lemma 2.7 does not hold under intuitionistic semantics.


3 DECIDING THE SSL SATISFIABILITY PROBLEM

The goal of this section is to develop a decision procedure for SSL:

Theorem 3.1.

Let \( \varphi \in \mathbf {SL} \) and let \( \mathbf {x} \subseteq \mathbf {Var} \) be a finite set of variables with \( \mathsf {fvs}(\varphi) \subseteq \mathbf {x} \). It is decidable in \( {\rm PS}{\rm\small{PACE}} \) (in \( \left|\varphi \right| \) and \( \left|\mathbf {x}\right| \)) whether there exists a model \( (s,h) \) with \( \operatorname{dom}(s) = \mathbf {x} \) and \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \).

Our approach is based on abstracting stack–heap models by abstract memory states (AMSs), which have two key properties that together imply Theorem 3.1:

  • Refinement (Theorem 3.19). If \( (s_{1},h_{1}) \) and \( (s_{2},h_{2}) \) abstract to the same AMS, then they satisfy the same formulas. That is, the AMS abstraction refines the satisfaction relation of SSL.

  • Computability (Theorem 3.42, Lemmas 3.44 and 3.46). For every formula \( \varphi \), we can compute (in \( {\rm PS}{\rm\small{PACE}} \)) the set of all AMSs of all models of \( \varphi \); then, \( \varphi \) is satisfiable if and only if this set is nonempty.

The AMS abstraction is motivated by the following insights:

(1)

The operator \( \uplus ^{s} \) induces a unique decomposition of the heap into at most \( \left|s\right| \) minimal chunks of memory that cannot be further decomposed.

(2)

To decide whether \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \) holds, it is sufficient to know for each chunk of \( (s,h) \) (a) which atomic formulas the chunk satisfies and (b) which variables (if any) are allocated in the chunk.

(3)

We equip the AMS abstract domain with a composition operator \( \bullet \) such that AMS abstraction is a homomorphism with regard to \( \uplus ^{s} \) and \( \bullet \) (see Lemma 3.28); moreover, given a model \( (s,h) \) that abstracts to the composition of two AMSs \( \mathcal {A}_1 \bullet \mathcal {A}_2 \), we can always find a decomposition \( h=h_1\uplus ^{s}h_2 \) such that \( (s,h_{i}) \) abstracts to \( \mathcal {A}_i \) (see Lemma 3.29). These two properties are key to proving the Refinement Theorem. We remark that the homomorphism and decomposition properties also were essential for proving the decidability and complexity results for the separation logics considered in Katelaan et al. [2019]; Katelaan and Zuleger [2020]; Pagel et al. [2020]. Interestingly, the homomorphism and decomposition properties have been identified in concurrent work as natural properties for reasoning about framing and parallel composition in separation logic [Farka et al. 2021].

We proceed as follows. In Section 3.1, we make precise the notion of memory chunks. In Section 3.2, we define AMSs, an abstraction of models that retains for every chunk precisely the information from point (2) above. We prove the refinement theorem in Section 3.3. We then show in Sections 3.4 to 3.6 that we can compute the AMSs of the models of a given formula \( \varphi \), which allows us to decide satisfiability and entailment problems for SSL. Finally, we prove the \( {\rm PS}{\rm\small{PACE}} \)-completeness result in Section 3.7.

3.1 Memory Chunks

We will abstract a model \( (s,h) \) by abstracting every chunk of \( h \), which is a minimal nonempty sub-heap of \( (s,h) \) that can be split off of \( h \) according to the strong-separation semantics.

Definition 3.2

(Sub-heap).

Let \( (s,h) \) be a model. We say that \( h_1 \) is a sub-heap of \( h \), in signs \( h_1 \sqsubseteq h \), if there is some heap \( h_2 \) such that \( h= h_1 \uplus ^{s}h_2 \). We collect all sub-heaps in the set \( \mathsf {subHeaps}(s,h) \).

Sub-heaps are closed under intersections and unions:

Proposition 3.3.

Let \( (s,h) \) be a model and let \( h_1,h_2 \) be sub-heaps of \( h \). Then, \( h_1 \cap h_2 \) and \( h_1 \cup h_2 \) are also sub-heaps of \( h \).

Proof.

By definition of sub-heaps, there are some heaps \( h_1^{\prime },h_2^{\prime } \) such that \( h= h_1 \uplus ^{s}h_1^{\prime } \) and \( h= h_2 \uplus ^{s}h_2^{\prime } \). We prove the claim for \( (h_1 \cap h_2) \). The proof for \( (h_1 \cup h_2) \) is analogous. We will now argue that \( (h_1 \cap h_2) \uplus ^{s}(h_1^{\prime } \cup h_2^{\prime }) = h \). Let us consider some \( \ell \in \operatorname{dom}(h_1 \cap h_2) \cap \operatorname{img}(h_1^{\prime } \cup h_2^{\prime }) \). Because of \( \ell \in \operatorname{img}(h_1^{\prime } \cup h_2^{\prime }) \), we have \( \ell \in \operatorname{img}(h_i^{\prime }) \) for some \( i \in \lbrace 1,2\rbrace \). Then, because of \( \ell \in \operatorname{dom}(h_1 \cap h_2) \), we also have \( \ell \in \operatorname{dom}(h_i) \). Because of \( h= h_i \uplus ^{s}h_i^{\prime } \), we get that \( \ell \in \operatorname{img}(s) \) from the definition of \( \uplus ^{s} \). The proof that \( \ell \in \operatorname{img}(h_1 \cap h_2) \cap \operatorname{dom}(h_1^{\prime } \cup h_2^{\prime }) \) implies \( \ell \in \operatorname{img}(s) \) is analogous.□

The following proposition is an immediate consequence of Proposition 3.3:

Proposition 3.4.

Let \( (s,h) \) be a model. Then, \( (\mathsf {subHeaps}(s,h),\sqsubseteq ,\sqcup ,\sqcap ,\lnot) \) is a Boolean algebra with greatest element \( h \) and smallest element \( \emptyset \), where

  • \( h_1 \sqcup h_2 := h_1 \cup h_2 \),

  • \( h_1 \sqcap h_2 := h_1 \cap h_2 \), and

  • \( \lnot h_1 := h_1^{\prime } \), where \( h_1^{\prime } \in \mathsf {subHeaps}(s,h) \) is the unique sub-heap with \( h= h_1 \uplus ^{s}h_1^{\prime } \).

The fact that the sub-heaps form a Boolean algebra allows us to make the following definition:

Definition 3.5

(Chunk).

Let \( (s,h) \) be a model. A chunk of \( (s,h) \) is an atom of the Boolean algebra \( (\mathsf {subHeaps}(s,h),\sqsubseteq ,\sqcup ,\sqcap ,\lnot) \). We collect all chunks of \( (s,h) \) in the set \( \mathsf {chunks}(s,h) \).

Because every element of a Boolean algebra can be uniquely decomposed into atoms, we obtain that every heap can be fully decomposed into its chunks:

Proposition 3.6.

Let \( (s,h) \) be a model and let \( \mathsf {chunks}(s,h)=\left\lbrace h_1,\ldots ,h_n\right\rbrace \) be its chunks. Then, \( h= h_1 \uplus ^{s}h_2 \uplus ^{s}\cdots \uplus ^{s}h_n \).

Example 3.7.

Let \( s= \lbrace x \mapsto 1, y \mapsto 3, u \mapsto 5, z \mapsto 3, w \mapsto 7, v \mapsto 9\rbrace \) and \( h= \lbrace 1 \mapsto 2, 2\mapsto 3, 3 \mapsto 8, 4 \mapsto 6, 5 \mapsto 6, 6 \mapsto 3, 7 \mapsto 6, 9 \mapsto 9, 10\mapsto 11, 11\mapsto 10\rbrace \). The model \( (s,h) \) is illustrated in Figure 5. This time, we include the identities of the locations in the graphical representation; e.g., \( 3:y,z \) represents location 3, \( s(y)=3 \), \( s(z)=3 \). The model consists of five chunks, \( h_1 := \lbrace 1 \mapsto 2, 2 \mapsto 3\rbrace \), \( h_2 := \lbrace 9 \mapsto 9\rbrace \), \( h_3 := \lbrace 4\mapsto 6, 5\mapsto 6, 6\mapsto 3, 7\mapsto 6\rbrace \), \( h_4 := \lbrace 3 \mapsto 8\rbrace \), and \( h_5 := \lbrace 10 \mapsto 11, 11 \mapsto 10\rbrace \).

Fig. 5.

Fig. 5. Graphical representation of a model consisting of five chunks (left, see Example 3.7) and its induced AMS (right, see Example 3.13).
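The chunk decomposition can be computed directly: two heap cells must end up in the same chunk whenever they share a location that is not labeled by a stack variable (the defining condition of \( \uplus ^{s} \)), so the chunks are the connected components of that sharing relation. The following sketch uses our own dict encoding of heaps and reproduces the decomposition of Example 3.7:

```python
def chunks(s, h):
    """Split h into its chunks: connected components (via union-find)
    of the relation 'two cells share a location not in img(s)'."""
    labeled = set(s.values())
    cells = list(h.items())
    parent = list(range(len(cells)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # index cells by each unlabeled location they touch
    touching = {}
    for i, (src, tgt) in enumerate(cells):
        for loc in (src, tgt):
            if loc not in labeled:
                touching.setdefault(loc, []).append(i)
    for group in touching.values():
        for i in group[1:]:
            union(group[0], i)

    out = {}
    for i, (src, tgt) in enumerate(cells):
        out.setdefault(find(i), {})[src] = tgt
    return list(out.values())
```

Running this on the stack and heap of Example 3.7 yields exactly the five chunks \( h_1,\ldots ,h_5 \) listed there.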

We distinguish two types of chunks: those that satisfy SSL atoms and those that do not.

Definition 3.8

(Positive and Negative Chunk).

Let \( h_c \subseteq h \) be a chunk of \( (s,h) \). \( h_c \) is a positive chunk if there exists an atomic formula \( \tau \) such that \( (s,h_c){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \tau \). Otherwise, \( h_c \) is a negative chunk. We collect the respective chunks in \( \mathsf {chunks}^{+}(s,h) \) and \( \mathsf {chunks}^{-}(s,h) \).

Example 3.9.

Recall the chunks \( h_1 \) through \( h_5 \) from Example 3.7. \( h_1 \) and \( h_2 \) are positive chunks (blue in Figure 5), \( h_3 \) to \( h_5 \) are negative chunks (orange).
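Whether a chunk is positive can also be tested algorithmically. A nonempty chunk can only satisfy the atoms \( x \mapsto y \) and \( \mathtt {ls}(x,y) \) (the atoms \( \mathbf {emp} \), \( x = y \), \( x \ne y \) require an empty heap), and both require the chunk to be exactly a pointer path from one stack-labeled location to another. The following sketch is one concrete reading of this condition in our dict encoding:

```python
def is_positive_chunk(s, chunk):
    """True iff the (nonempty) chunk satisfies a points-to or
    list-segment atom: it must be exactly a pointer path from a
    stack-labeled location a to a stack-labeled location b."""
    labeled = set(s.values())
    for a in labeled:
        for b in labeled:
            cur, used = a, set()
            # follow pointers from a until leaving the chunk or looping
            while cur in chunk and cur not in used:
                used.add(cur)
                cur = chunk[cur]
            # the walk must end at b and consume the whole chunk
            if used and cur == b and used == set(chunk):
                return True
    return False
```

On the chunks of Example 3.7 this matches Example 3.9: \( h_1 \) (a list segment from \( s(x) \) to \( s(y) \)) and \( h_2 \) (the points-to \( v \mapsto v \)) are positive, while \( h_3 \), \( h_4 \), and \( h_5 \) are negative.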

Negative chunks fall into three (not mutually exclusive) categories:

  • Garbage: chunks with locations that are inaccessible via stack variables.

  • Unlabeled dangling pointers: chunks with an unlabeled sink, i.e., a dangling location that is not in \( \operatorname{img}(s) \) and thus cannot be “made non-dangling” via composition using \( \uplus ^{s} \).

  • Overlaid list segments: list segments that cannot be separated via \( \uplus ^{s} \), because they are joined at locations that are not in \( \operatorname{img}(s) \).

Example 3.10

(Negative Chunks).

The chunk \( h_3 \) from Example 3.7 contains garbage, namely, the location 4 that cannot be reached via stack variables, and two overlaid list segments (from 5 to 3 and 7 to 3). The chunk \( h_4 \) has an unlabeled dangling pointer. The chunk \( h_5 \) contains only garbage.

3.2 Abstract Memory States

In AMSs, we retain for every chunk enough information to (1) determine which atomic formulas the chunk satisfies and (2) keep track of which variables are allocated within each chunk.

Definition 3.11.

A quadruple \( \mathcal {A}= \langle V,E,\rho ,\gamma \rangle \) is an abstract memory state, if

(1)

\( V \) is a partition of some finite set of variables, i.e., \( V= \lbrace \mathbf {v}_1, \ldots ,\mathbf {v}_n\rbrace \) for some non-empty disjoint finite sets \( \mathbf {v}_i \subseteq \mathbf {Var} \),

(2)

\( E:V\rightharpoonup V\times \left\lbrace =\!1,\ge \!2\right\rbrace \) is a partial function such that there is no \( \mathbf {v} \in \operatorname{dom}(E) \) with \( \mathsf {nil}\in \mathbf {v} \),

(3)

\( \rho \) consists of disjoint subsets of \( V \) such that every \( R \in \rho \) is disjoint from \( \operatorname{dom}(E) \) and there is no \( \mathbf {v} \in R \) with \( \mathsf {nil}\in \mathbf {v} \),

(4)

\( \gamma \) is a natural number, i.e., \( \gamma \in \mathbb {N} \).

We call \( V \) the nodes, \( E \) the edges, \( \rho \) the negative-allocation constraint and \( \gamma \) the garbage-chunk count of \( \mathcal {A} \). We call the AMS \( \mathcal {A}=\langle V,E,\rho ,\gamma \rangle \) garbage-free if \( \rho =\emptyset \) and \( \gamma =0 \).

We collect the set of all AMSs in \( \mathbf {AMS} \). The size of \( \mathcal {A} \) is given by \( \left|\mathcal {A}\right|:=\left|V\right| + \gamma \). Finally, the allocated variables of an AMS are given by \( \mathbf {alloc}(\mathcal {A}) := \operatorname{dom}(E) \cup \bigcup \rho \).

Every model induces an AMS, defined in terms of the following auxiliary definitions. The equivalence class of variable \( x \) w.r.t. stack \( s \) is \( {[x]}^{s}_{=} := \left\lbrace y \mid s(y)=s(x)\right\rbrace \); the set of all equivalence classes of \( s \) is \( \mathsf {cls}_{=}(s):= \left\lbrace {[x]}^{s}_{=} \mid x \in \operatorname{dom}(s)\right\rbrace \). We now define the edges induced by a model \( (s,h) \): For every equivalence class \( {[x]}^{s}_{=} \in \mathsf {cls}_{=}(s) \), we set \( \begin{equation*} \mathsf {edges}(s,h)({[x]}^{s}_{=}) := {\left\lbrace \begin{array}{ll} \langle {[y]}^{s}_{=}, =\!1\rangle & \text{there are } y \in \operatorname{dom}(s) \text{ and } h_c \in \mathsf {chunks}^{+}(s,h)\\ & \text{ with } (s,h_c) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} x \mapsto y, \\ \langle {[y]}^{s}_{=}, \ge \!2\rangle & \text{there are } y \in \operatorname{dom}(s) \text{ and } h_c \in \mathsf {chunks}^{+}(s,h)\\ & \text{ with } (s,h_c) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \mathtt {ls}(x,y) \wedge \lnot x \mapsto y, \\ \bot , & \text{otherwise}. \end{array}\right.} \end{equation*} \) Finally, we denote the sets of variables allocated in negative chunks by \( \begin{equation*} \mathsf {alloc}^{-}(s,h):= \lbrace \lbrace {[x]}^{s}_{=} \mid s(x) \in \operatorname{dom}(h_c) \rbrace \mid h_c \in \mathsf {chunks}^{-}(s,h)\rbrace \setminus \lbrace \emptyset \rbrace , \end{equation*} \) where (equivalence classes of) variables that are allocated in the same negative chunk are grouped together in a set.

Now, we are ready to define the induced AMS of a model.

Definition 3.12.

Let \( (s,h) \) be a model. Let \( V:= \mathsf {cls}_{=}(s) \), \( E:= \mathsf {edges}(s,h) \), \( \rho :=\mathsf {alloc}^{-}(s,h) \) and \( \gamma := \left|\mathsf {chunks}^{-}(s,h)\right| \). Then \( \mathsf {ams}(s,h):= \langle V,E,\rho ,\gamma \rangle \) is the induced AMS of \( (s,h) \).

Example 3.13.

The induced AMS of the model \( (s,h) \) from Example 3.7 is illustrated on the right-hand side of Figure 5. The blue box depicts the graph \( (V,E) \) induced by the positive chunks \( h_1,h_2 \); the negative chunks that allocate variables are abstracted to the set \( \rho = \left\lbrace \left\lbrace \left\lbrace w\right\rbrace ,\left\lbrace u\right\rbrace \right\rbrace , \left\lbrace \left\lbrace y,z\right\rbrace \right\rbrace \right\rbrace \) (note that the variables \( w \) and \( u \) are allocated in the chunk \( h_3 \) and the aliasing variables \( y,z \) are allocated in \( h_4 \)); and the garbage-chunk count is 3.

Observe that the induced AMS is indeed an AMS:

Proposition 3.14.

Let \( (s,h) \) be a model. Then \( \mathsf {ams}(s,h)\in \mathbf {AMS} \).

The converse also holds: Every AMS is the induced AMS of at least one model; in fact, even of a model of linear size.

Lemma 3.15 (Realizability of AMS).

Let \( \mathcal {A}= \langle V,E,\rho ,\gamma \rangle \) be an AMS. There exists a model \( (s,h) \) with \( \mathsf {ams}(s,h)=\mathcal {A} \) whose size is linear in the size of \( \mathcal {A} \).

Proof.

For simplicity, we assume \( \mathbf {Loc}= \mathbb {N} \); this allows us to add locations.

Let \( n := \left|V\right| \). We fix some injective function \( t:V\rightarrow \left\lbrace 1,\ldots ,n\right\rbrace \) from nodes to natural numbers. We set \( s:= \bigcup _{x \in v, v \in V} \left\lbrace x \mapsto t(v)\right\rbrace \) and define \( h \) as the (disjoint) union of

  • \( \bigcup _{E(v)= \langle v^{\prime },=\!1\rangle } \left\lbrace t(v) \mapsto t(v^{\prime }) \right\rbrace , \)

  • \( \bigcup _{E(v)=\langle v^{\prime },\ge \!2\rangle } \left\lbrace t(v) \mapsto n + t(v),n + t(v)\mapsto t(v^{\prime }) \right\rbrace , \)

  • \( \left(\bigcup _{v \in \mathbf {r},\mathbf {r} \in \rho } \left\lbrace t(v) \mapsto 2n + t(\max (\mathbf {r}))\right\rbrace \right) \cup \left(\bigcup _{\mathbf {r} \in \rho } \left\lbrace 2n + t(\max (\mathbf {r})) \mapsto 2n + t(\max (\mathbf {r}))\right\rbrace \right), \)

  • \( \bigcup _{l \in \left\lbrace 3n+1, \ldots , 3n+\gamma \right\rbrace } \left\lbrace l \mapsto l\right\rbrace . \)

It is easy to verify that \( \mathsf {ams}(s,h)=\mathcal {A} \) and that \( \left|h\right| \in \mathcal {O}(\left|\mathcal {A}\right|) \).□
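The construction in the proof can be replayed in code. This sketch is ours: it fixes \( \mathbf {Loc}= \mathbb {N} \), chooses one concrete ordering for \( t \), and reads the sink location for each \( \mathbf {r} \in \rho \) as \( 2n \) plus the maximal \( t \)-value occurring in \( \mathbf {r} \) (one concrete reading of \( t(\max (\mathbf {r})) \)). Nodes are frozensets of variable names; \( E \) is a dict from node to (node, label).

```python
def realize(V, E, rho, gamma):
    """Build a model (s, h) realizing the AMS <V, E, rho, gamma>,
    following the construction in the proof of Lemma 3.15."""
    nodes = sorted(V, key=lambda v: sorted(v))  # fix an order to define t
    t = {v: i + 1 for i, v in enumerate(nodes)}
    n = len(nodes)
    s = {x: t[v] for v in V for x in v}
    h = {}
    for v, (w, label) in E.items():
        if label == '=1':                 # a single points-to
            h[t[v]] = t[w]
        else:                             # '>=2': a two-pointer list segment
            h[t[v]] = n + t[v]
            h[n + t[v]] = t[w]
    for R in rho:                         # one negative chunk per R in rho
        sink = 2 * n + max(t[v] for v in R)
        for v in R:
            h[t[v]] = sink
        h[sink] = sink                    # unlabeled self-loop: negative
    for l in range(3 * n + 1, 3 * n + gamma + 1):
        h[l] = l                          # one garbage self-loop per count
    return s, h
```

For \( V= \lbrace \lbrace x\rbrace ,\lbrace y,z\rbrace \rbrace \), a single \( =\!1 \) edge, \( \rho =\emptyset \), and \( \gamma =1 \), this produces the two-variable stack plus the heap \( \lbrace 1 \mapsto 2, 7 \mapsto 7\rbrace \), whose size is indeed linear in the AMS.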

The following lemma demonstrates that we only need the \( \rho \) and \( \gamma \) components to be able to deal with negation and/or the magic wand:

Lemma 3.16 (Models of Positive Formulas Abstract to Garbage-free AMS).

Let \( (s,h) \) be a model. If \( (s,h) \models \varphi \) for a positive formula \( \varphi \), then \( \mathsf {ams}{(s,h)} \) is garbage-free.

Proof.

The lemma can be proved by a straightforward induction on \( \varphi \), using that every heap fully decomposes into its chunks.□

We abstract SL formulas by the set of AMS of their models:

Definition 3.17.

Let \( s \) be a stack. The \( \mathbf {SL} \) abstraction w.r.t. \( s \), \( \alpha _s:\mathbf {SL}\rightarrow 2^{\mathbf {AMS}} \), is given by \( \begin{align*} &\alpha _{s}(\varphi) := \lbrace \mathsf {ams}(s,h)\mid h\in \mathbf {H}, \text{ and } (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \rbrace .\qquad \qquad {{\triangle }} \end{align*} \)

Because AMSs do not retain any information about heap locations, just about aliasing, abstractions do not differ for stacks with the same equivalence classes:

Lemma 3.18.

Let \( s,s^{\prime } \) be stacks with \( \mathsf {cls}_{=}(s)=\mathsf {cls}_{=}(s^{\prime }) \). Then \( \alpha _{s}(\varphi)=\alpha _{s^{\prime }}(\varphi) \) for all formulas \( \varphi \).

Proof.

Let \( \mathcal {A}\in \alpha _{s}(\varphi) \). Then there exists a heap \( h \) such that \( \mathsf {ams}(s,h)=\mathcal {A} \) and \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \). Let \( h^{\prime } \) be such that \( (s,h)\cong (s^{\prime },h^{\prime }) \); such an \( h^{\prime } \) exists because \( \mathsf {cls}_{=}(s)=\mathsf {cls}_{=}(s^{\prime }) \) guarantees that \( s(x) \mapsto s^{\prime }(x) \) is a well-defined bijection between \( \operatorname{img}(s) \) and \( \operatorname{img}(s^{\prime }) \), which can be extended to a bijection on the locations of \( h \). By Lemma 2.4, \( (s^{\prime },h^{\prime }){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \). Moreover, \( \mathsf {ams}(s^{\prime },h^{\prime })=\mathcal {A} \). Consequently, \( \mathcal {A}\in \alpha _{s^{\prime }}(\varphi) \). The other direction is proved analogously.□

3.3 The Refinement Theorem for SSL

The main goal of this section is to show the following refinement theorem:

Theorem 3.19 (Refinement Theorem).

Let \( \varphi \) be a formula and let \( (s,h_{1}) \), \( (s,h_{2}) \) be models with \( \mathsf {ams}{(s,h_{1})} = \mathsf {ams}{(s,h_{2})} \). Then \( (s,h_{1}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \) iff \( (s,h_{2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \).

We will prove this theorem step by step, characterizing the AMS abstraction of all atomic formulas and of the composed models before proving the refinement theorem. In the remainder of this section, we fix some model \( (s,h) \).

Abstract Memory States of Atomic Formulas. The empty-heap predicate \( \mathbf {emp} \) is only satisfied by the empty heap, i.e., by a heap that consists of zero chunks:

Lemma 3.20.

\( (s,h)\models \mathbf {emp} \) iff \( \mathsf {ams}{(s,h)} = \langle \mathsf {cls}_{=}(s), \emptyset , \emptyset , 0\rangle . \)

Proof.

\( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \mathbf {emp} \) iff \( h= \emptyset \) iff \( \mathsf {chunks}{(s,h)}=\emptyset \) iff \( \mathsf {ams}{(s,h)}=\langle \mathsf {cls}_{=}(s), \emptyset , \emptyset , 0\rangle \).□

Lemma 3.21.

(1)

\( (s,h)\models x=y \) iff \( \mathsf {ams}{(s,h)}=\langle \mathsf {cls}_{=}(s), \emptyset , \emptyset , 0\rangle \) and \( {[x]}^{s}_{=}={[y]}^{s}_{=} \).

(2)

\( (s,h)\models x \ne y \) iff \( \mathsf {ams}{(s,h)}=\langle \mathsf {cls}_{=}(s), \emptyset , \emptyset , 0\rangle \) and \( {[x]}^{s}_{=}\ne {[y]}^{s}_{=} \).

Proof.

We only show the first claim, as the proof of the second claim is completely analogous. \( (s,h)\models x=y \) iff (\( s(x)=s(y) \) and \( h= \emptyset \)) iff (\( {[x]}^{s}_{=}={[y]}^{s}_{=} \) and \( (s,h)\models \mathbf {emp} \)) iff, by Lemma 3.20, (\( {[x]}^{s}_{=}={[y]}^{s}_{=} \) and \( \mathsf {ams}{(s,h)}=\langle \mathsf {cls}_{=}(s), \emptyset , \emptyset , 0\rangle \)).□

Models of points-to assertions consist of a single positive chunk of size 1:

Lemma 3.22.

Let \( E= \lbrace {[x]}^{s}_{=} \mapsto \langle {[y]}^{s}_{=}, =\!1\rangle \rbrace \). \( (s,h)\models x \mapsto y \) iff \( \mathsf {ams}{(s,h)} = \langle \mathsf {cls}_{=}(s), E, \emptyset , 0 \rangle \).

Proof.

If \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} x \mapsto y, \) then \( h= \left\lbrace s(x) \mapsto s(y)\right\rbrace \). In particular, it then holds that \( h \) is a positive chunk. Consequently, \( \mathsf {edges}(s,h)= E \). It follows that \( \mathsf {ams}{(s,h)}=\langle \mathsf {cls}_{=}(s), E,\emptyset , 0 \rangle \).

Conversely, assume \( \mathsf {ams}{(s,h)}=\langle \mathsf {cls}_{=}(s), E,\emptyset , 0 \rangle \). Then, \( (s,h) \) consists of a single positive chunk and no negative chunks. Further, by the definition of \( \mathsf {edges}(s,h) \), we have that this single positive chunk satisfies \( (s,h)\models x \mapsto y \).□

Intuitively, the list segment \( \mathtt {ls}(x,y) \) is satisfied by models \( (s,h) \) that consist of zero or more positive chunks, corresponding to a (possibly empty) list from some equivalence class \( {[x]}^{s}_{=} \) to \( {[y]}^{s}_{=} \) via (zero or more) intermediate equivalence classes \( {[x_1]}^{s}_{=},\ldots ,{[x_n]}^{s}_{=} \). We will use this intuition to define abstract lists; this notion allows us to characterize the AMSs arising from abstracting lists.

Definition 3.23.

Let \( \mathcal {A}=\langle V,E,\rho ,\gamma \rangle \in \mathbf {AMS} \) and \( x,y \in \mathbf {Var} \). We say \( \mathcal {A} \) is an abstract list w.r.t. \( x \) and \( y \), in signs \( \mathcal {A}\in \mathbf {AbstLists}(x,y) \), iff

(1)

\( \rho =\emptyset \) and \( \gamma =0 \), and

(2)

we can pick nodes \( \mathbf {v}_1,\ldots , \mathbf {v}_n \in V \) and labels \( \iota _1,\ldots ,\iota _{n-1} \in \left\lbrace =\!1,\ge \!2\right\rbrace \) such that \( x \in \mathbf {v}_1 \), \( y \in \mathbf {v}_n \) and \( E= \lbrace \mathbf {v}_i \mapsto \langle \mathbf {v}_{i+1},\iota _i\rangle \mid 1 \le i \lt n \rbrace \).
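Definition 3.23 is easy to check algorithmically: \( \rho \) and \( \gamma \) must be empty, and \( E \) must consist exactly of a path of edges from the node containing \( x \) to the node containing \( y \). A sketch over our tuple encoding \( \langle V,E,\rho ,\gamma \rangle \) of AMSs (nodes as frozensets, \( E \) as a dict):

```python
def is_abstract_list(ams, x, y):
    """True iff ams is an abstract list w.r.t. x and y (Definition 3.23):
    rho and gamma empty, and E exactly a path from x's node to y's node."""
    V, E, rho, gamma = ams
    if rho or gamma != 0:
        return False
    cur = next((v for v in V if x in v), None)
    end = next((v for v in V if y in v), None)
    if cur is None or end is None:
        return False
    remaining = dict(E)  # path edges still to be traversed
    while cur != end or remaining:
        if cur not in remaining:
            return False  # stuck before consuming all edges
        cur, _label = remaining.pop(cur)
    return True
```

Note that the empty list (no edges, \( x \) and \( y \) in the same node) is accepted, and that any edge not lying on the walk from \( x \)'s node to \( y \)'s node causes rejection.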

Lemma 3.24.

\( (s,h)\models \mathtt {ls}(x,y) \) iff \( \mathsf {ams}{(s,h)} \in \mathbf {AbstLists}(x,y) \).

Proof.

Assume \( (s,h)\models \mathtt {ls}(x,y) \). By the semantics, there exist locations \( \ell _0,\ldots ,\ell _n \), \( n \ge 1 \), with \( s(x)=\ell _0 \), \( s(y)=\ell _n \) and \( h= \lbrace \ell _0 \mapsto \ell _1,\ldots ,\ell _{n-1} \mapsto \ell _n \rbrace \). Let \( j_1 \lt \cdots \lt j_k \) be those indices among \( 0,\ldots ,n \) with \( \ell _{j_i} \in \operatorname{img}(s) \). (In particular, \( j_1=0 \) and \( j_k=n \).) Then for each \( j_i \), the restriction of \( h \) to \( \ell _{j_i},\ell _{j_i+1},\ldots ,\ell _{j_{i+1}-1} \) is a positive chunk that either satisfies a points-to assertion or a list-segment predicate. Hence, \( \mathsf {edges}(s,h)(s^{-1}(\ell _{j_i})) = \langle s^{-1}(\ell _{j_{i+1}}),\iota _i\rangle \) for all \( 1 \le i \lt k \), for some \( \iota _i \in \left\lbrace =\!1,\ge \!2\right\rbrace \). Thus, \( \mathsf {ams}{(s,h)} \in \mathbf {AbstLists}(x,y) \).

Assume \( \mathsf {ams}{(s,h)} \in \mathbf {AbstLists}(x,y) \). Then, there are equivalence classes \( {[x_1]}^{s}_{=},\ldots ,{[x_n]}^{s}_{=} \in \mathsf {cls}_{=}(s) \) and labels \( \iota _1,\ldots ,\iota _{n-1} \in \left\lbrace =\!1,\ge \!2\right\rbrace \) such that \( x \in {[x_1]}^{s}_{=} \), \( y \in {[x_n]}^{s}_{=} \) and \( \mathsf {edges}(s,h)= \lbrace {[x_i]}^{s}_{=} \mapsto \langle {[x_{i+1}]}^{s}_{=},\iota _i\rangle \mid 1 \le i \lt n \rbrace \). By the definition of \( \mathsf {edges}(s,h) \), we have that there are positive chunks \( h_i \) of \( h \) such that \( (s,h_{i}) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} x_i \mapsto x_{i+1} \) or \( (s,h_{i}) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \mathtt {ls}(x_i,x_{i+1}) \). In particular, we have \( h_i = \left\lbrace \ell _{i,1} \mapsto \ell _{i,2}, \ldots , \ell _{i,k_i-1}\mapsto \ell _{i,k_i}\right\rbrace , s(x_i)=\ell _{i,1} \text{ and } s(x_{i+1})=\ell _{i,k_i} \) for some locations \( \ell _{i,j} \). Because \( h \) does not have negative chunks, we get that \( h \) fully decomposes into its positive chunks. Hence, the locations \( \ell _{i,j} \) witness that \( (s,h)\models \mathtt {ls}(x,y) \).□

Abstract Memory States of Models Composed by the Union Operator. Our next goal is to lift the union operator \( \uplus ^{s} \) to the abstract domain \( \mathbf {AMS} \). We will define an operator \( \bullet \) with the following property: \( \begin{align*} \text{if } h_1\uplus ^{s}h_2\ne \bot \text{~~then~~} \mathsf {ams}{(s,h_1 \uplus ^{s}h_2)} = \mathsf {ams}{(s,h_{1})} \bullet \mathsf {ams}{(s,h_{2})}. \end{align*} \)

AMS composition is a partial operation defined only on compatible AMSs. Compatibility enforces (1) that the AMSs were obtained for equivalent stacks (i.e., for stacks \( s,s^{\prime } \) with \( \mathsf {cls}_{=}(s)=\mathsf {cls}_{=}(s^{\prime }) \)), and (2) that there is no double allocation.

Definition 3.25

(Compatibility of AMSs).

AMSs \( \mathcal {A}_1=\langle V_{1},E_{1},\rho _{1},\gamma _{1}\rangle \) and \( \mathcal {A}_2=\langle V_{2},E_{2},\rho _{2},\gamma _{2}\rangle \) are compatible iff (1) \( V_1=V_2 \) and (2) \( \mathbf {alloc}(\mathcal {A}_1)\cap \mathbf {alloc}(\mathcal {A}_2)=\emptyset \).

Note that if \( h_1 \uplus ^{s}h_2 \) is defined, then \( \mathsf {ams}{(s,h_{1})} \) and \( \mathsf {ams}{(s,h_{2})} \) are compatible. The converse is not true, because \( \mathsf {ams}{(s,h_{1})} \) and \( \mathsf {ams}{(s,h_{2})} \) may be compatible even if \( \operatorname{dom}(h_1)\cap \operatorname{dom}(h_2)\ne \emptyset \).

AMS composition is defined in a point-wise manner on compatible AMSs and undefined otherwise.

Definition 3.26

(AMS Composition).

Let \( \mathcal {A}_1=\langle V_{1},E_{1},\rho _{1},\gamma _{1}\rangle \) and \( \mathcal {A}_2=\langle V_{2},E_{2},\rho _{2},\gamma _{2}\rangle \) be two AMSs. The composition of \( \mathcal {A}_1,\mathcal {A}_2 \) is then given by \( \begin{equation*} \mathcal {A}_1 \bullet \mathcal {A}_2 := {\left\lbrace \begin{array}{ll} \langle V_1,E_1\cup E_2,\rho _1\cup \rho _2, \gamma _1+\gamma _2\rangle , & \text{if } \mathcal {A}_1,\mathcal {A}_2 \text{ compatible,}\\ \bot , &\text{otherwise.} \end{array}\right.} \end{equation*} \)
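AMS composition is directly implementable on our tuple encoding \( \langle V,E,\rho ,\gamma \rangle \) from the earlier sketches, with `None` standing for \( \bot \):

```python
def alloc(ams):
    """Allocated variables of an AMS: dom(E) together with the union of rho."""
    V, E, rho, gamma = ams
    return set(E) | {v for R in rho for v in R}

def compose(a1, a2):
    """AMS composition (Definition 3.26): pointwise union on compatible
    AMSs (same nodes, disjoint allocation), None otherwise."""
    V1, E1, r1, g1 = a1
    V2, E2, r2, g2 = a2
    if V1 != V2 or (alloc(a1) & alloc(a2)):
        return None  # incompatible
    return (V1, {**E1, **E2}, r1 | r2, g1 + g2)
```

Composing an AMS with one edge and an AMS with one negative-allocation set succeeds when the allocated nodes are disjoint; composing two AMSs that both allocate the same node yields `None`, reflecting the no-double-allocation requirement.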

Lemma 3.27.

Let \( s \) be a stack and let \( h_1,h_2 \) be heaps. If \( h_1\uplus ^{s}h_2\ne \bot \), then \( \mathsf {ams}{(s,h_{1})}\bullet \mathsf {ams}{(s,h_{2})}\ne \bot \).

Proof.

Let \( \mathcal {A}_i := \mathsf {ams}{(s,h_{i})} = \langle V_{i},E_{i},\rho _{i},\gamma _{i}\rangle \) for \( i \in \lbrace 1,2\rbrace \). Since the same stack \( s \) underlies both abstractions, we have \( V_1=V_2 \). Furthermore, \( \operatorname{dom}(h_1)\cap \operatorname{dom}(h_2)=\emptyset \) implies that \( \mathbf {alloc}(\mathcal {A}_1)\cap \mathbf {alloc}(\mathcal {A}_2)=\emptyset \).□

We next show that \( \mathsf {ams}{(s,h_1\uplus ^{s}h_2)} = \mathsf {ams}{(s,h_{1})}\bullet \mathsf {ams}{(s,h_{2})} \) whenever \( h_1\uplus ^{s}h_2 \) is defined:

Lemma 3.28 (Homomorphism of composition).

Let \( (s,h_{1}),(s,h_{2}) \) be models with \( h_1\uplus ^{s}h_2\ne \bot \). Then, \( \mathsf {ams}{(s,h_1\uplus ^{s}h_2)}=\mathsf {ams}{(s,h_{1})}\bullet \mathsf {ams}{(s,h_{2})} \).

Proof.

The result follows easily from the observation that \( \begin{equation*} \mathsf {chunks}{(s,h_1\uplus ^{s}h_2)} = \mathsf {chunks}{(s,h_{1})} \cup \mathsf {chunks}{(s,h_{2})}, \end{equation*} \) which, in turn, is an immediate consequence of Proposition 3.6.□

To show the refinement theorem, we need one additional property of AMS composition. If an AMS \( \mathcal {A} \) of a model \( (s,h) \) can be decomposed into two smaller AMSs \( \mathcal {A}=\mathcal {A}_1\bullet \mathcal {A}_2 \), then it is also possible to decompose the heap \( h \) into smaller heaps \( h_1,h_2 \) with \( \mathsf {ams}{(s,h_{i})}=\mathcal {A}_i \):

Lemma 3.29 (Decomposability of AMS).

Let \( \mathsf {ams}{(s,h)}=\mathcal {A}_1\bullet \mathcal {A}_2 \). Then there exist \( h_1,h_2 \) with \( h=h_1\uplus ^{s}h_2 \), \( \mathsf {ams}{(s,h_{1})}=\mathcal {A}_1 \) and \( \mathsf {ams}{(s,h_{2})}=\mathcal {A}_2 \).

Proof.

It can be verified from the definition of AMS and the definition of the composition operator \( \bullet \) that the following property holds: Let \( h_c \in \mathsf {chunks}(s,h) \) be a chunk. Then, either there exists an \( \mathcal {A}_1^{\prime } \) such that \( \mathcal {A}_1 = \mathsf {ams}{(s,h_c)} \bullet \mathcal {A}_1^{\prime } \) or there exists an \( \mathcal {A}_2^{\prime } \) such that \( \mathcal {A}_2 = \mathsf {ams}{(s,h_c)} \bullet \mathcal {A}_2^{\prime } \).

The claim then follows by induction on the number of chunks \( \left|\mathsf {chunks}(s,h)\right| \).□

These results suffice to prove the Refinement Theorem stated at the beginning of this section; see the Appendix for a proof.

Corollary 3.30.

Let \( (s,h) \) be a model and \( \varphi \) be a formula. \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \) iff \( \mathsf {ams}{(s,h)} \in \alpha _{s}(\varphi) \).

Proof.

Assume \( \mathcal {A}:= \mathsf {ams}{(s,h)} \in \alpha _{s}(\varphi) \). By definition of \( \alpha _s \), there is a model \( (s,h^{\prime }) \) with \( (s,h^{\prime }){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \) and \( \mathsf {ams}{(s,h^{\prime })} = \mathcal {A} \). By applying Theorem 3.19 to \( \varphi \), \( (s,h) \) and \( (s,h^{\prime }) \), we then get that \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \). The converse direction is immediate: if \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \), then \( \mathsf {ams}{(s,h)} \in \alpha _{s}(\varphi) \) by the definition of \( \alpha _{s} \).□

3.4 Recursive Equations for Abstract Memory States

In this section, we derive recursive equations that reduce the set of AMS \( \alpha _{s}(\varphi) \) for arbitrary compound formulas to the set of AMS of the constituent formulas of \( \varphi \). In the next sections, we will show that we can actually evaluate these equations, thus obtaining an algorithm for computing the abstraction of arbitrary formulas.

Lemma 3.31.

\( \alpha _{s}(\varphi _1\wedge \varphi _2) = \alpha _{s}(\varphi _1)\cap \alpha _{s}(\varphi _2) \).

Proof.

Let \( (s,h) \) be a model. \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \wedge \varphi _2 \) iff \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \) and \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \) iff \( \mathsf {ams}(s,h)\in \alpha _{s}(\varphi _1) \) and \( \mathsf {ams}(s,h)\in \alpha _{s}(\varphi _2) \) iff \( \mathsf {ams}(s,h)\in \alpha _{s}(\varphi _1) \cap \alpha _{s}(\varphi _2) \).□

Lemma 3.32.

\( \alpha _{s}(\varphi _1\vee \varphi _2) = \alpha _{s}(\varphi _1)\cup \alpha _{s}(\varphi _2) \).

Proof.

Let \( (s,h) \) be a model. \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \vee \varphi _2 \) iff (\( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \) or \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \)) iff (\( \mathsf {ams}(s,h)\in \alpha _{s}(\varphi _1) \) or \( \mathsf {ams}(s,h)\in \alpha _{s}(\varphi _2) \)) iff \( \mathsf {ams}(s,h)\in \alpha _{s}(\varphi _1) \cup \alpha _{s}(\varphi _2) \).□

Lemma 3.33.

\( \alpha _{s}(\lnot \varphi _1) = \left\lbrace \mathsf {ams}(s,h)\mid h\in \mathbf {H}\right\rbrace \setminus \alpha _{s}(\varphi _1) \).

Proof.

Let \( (s,h) \) be a model. \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \lnot \varphi _1 \) iff \( (s,h) \) does not satisfy \( \varphi _1 \) iff \( \mathsf {ams}(s,h)\notin \alpha _{s}(\varphi _1) \) iff \( \mathsf {ams}(s,h)\in \left\lbrace \mathsf {ams}(s,h^{\prime })\mid h^{\prime }\in \mathbf {H}\right\rbrace \setminus \alpha _{s}(\varphi _1) \).□

The Separating Conjunction. In Section 3.3, we defined the composition operation, \( \bullet \), on pairs of AMS. We now lift this operation to sets of AMS \( \mathbf {A}_1, \mathbf {A}_2 \): \( \begin{equation*} \mathbf {A}_1 \bullet \mathbf {A}_2 := \left\lbrace \mathcal {A}_1 \bullet \mathcal {A}_2 \mid \mathcal {A}_1 \in \mathbf {A}_1, \mathcal {A}_2 \in \mathbf {A}_2, \mathcal {A}_1 \bullet \mathcal {A}_2 \ne \bot \right\rbrace . \end{equation*} \)
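As a sketch, the set-level lifting keeps exactly the defined compositions. Here an AMS is treated as an opaque value and the binary composition (returning `None` where the article writes \( \bot \)) is passed in as a parameter; this interface is an assumption of the sketch.

```python
# Set-level lifting of AMS composition: keep exactly the defined results.
def compose_sets(A1, A2, compose):
    return {c for a1 in A1 for a2 in A2
            if (c := compose(a1, a2)) is not None}
```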

Lemma 3.28 implies that \( \alpha _s \) is a homomorphism from formulas and \( * \) to sets of AMS and \( \bullet \):

Lemma 3.34.

For all \( \varphi _1,\varphi _2 \), \( \alpha _{s}(\varphi _1 * \varphi _2) = \alpha _{s}(\varphi _1) \bullet \alpha _{s}(\varphi _2) \).

Proof.

See Appendix.□

The septraction operator. We next define an abstract septraction operator \( {-\!\!\bullet } \) that relates to \( \bullet \) in the same way that \( {-\!\!\circledast } \) relates to \( * \). For two sets of AMS \( \mathbf {A}_1,\mathbf {A}_2 \), we set: \( \begin{equation*} \mathbf {A}_1 {-\!\!\bullet }\mathbf {A}_2 := \lbrace \mathcal {A}\in \mathbf {AMS}\mid \text{ there exists } \mathcal {A}_1 \in \mathbf {A}_1 \text{ s.t. } \mathcal {A}\bullet \mathcal {A}_1 \in \mathbf {A}_2\rbrace . \end{equation*} \)
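Since the definition quantifies over the infinite set \( \mathbf {AMS} \), any implementation must restrict the quantification to a finite candidate universe (in the article, the bounded AMS sets introduced in Section 3.5). A hedged sketch, again with the binary composition passed in as a parameter:

```python
# Abstract septraction on sets of AMSs, restricted to a finite `universe`.
# `compose` returns None where the article writes "bottom".
def septract_sets(A1, A2, universe, compose):
    return {a for a in universe
            if any(compose(a, a1) in A2 for a1 in A1)}
```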

Then, \( \alpha _s \) is a homomorphism from formulas and \( {-\!\!\circledast } \) to sets of AMS and \( {-\!\!\bullet } \):

Lemma 3.35.

For all \( \varphi _1,\varphi _2 \), \( \alpha _{s}(\varphi _1 {-\!\!\circledast }\varphi _2) = \alpha _{s}(\varphi _1) {-\!\!\bullet }\alpha _{s}(\varphi _2) \).

Proof.

See Appendix.□

3.5 Refining the Refinement Theorem: Bounding Garbage

Even though we have now characterized the set \( \alpha _{s}(\varphi) \) for every formula \( \varphi \), we do not yet have a way to implement AMS computation: While \( \alpha _{s}(\varphi) \) is finite if \( \varphi \) is a spatial atom, the set is infinite in general; see the cases \( \alpha _{s}(\lnot \varphi) \) and \( \alpha _{s}(\varphi _1{-\!\!\circledast }\varphi _2) \). However, we note that for a fixed stack \( s \) only the garbage-chunk count \( \gamma \) of an AMS \( \langle V,E,\rho ,\gamma \rangle \in \alpha _{s}(\varphi) \) can be of arbitrary size, while the size of the nodes \( V \), the edges \( E \) and the negative-allocation constraint \( \rho \) is bounded by \( \left|s\right| \). Fortunately, to decide the satisfiability of any fixed formula \( \varphi \), it is not necessary to keep track of arbitrarily large garbage-chunk counts.

We introduce the chunk size \( \lceil \varphi \rceil \) of a formula \( \varphi \), which provides an upper bound on the number of negative chunks that may be necessary to satisfy and/or falsify the formula; \( \lceil \varphi \rceil \) is defined as follows:

  • \( \lceil \mathbf {emp}\rceil =\lceil x \mapsto y\rceil =\lceil \mathtt {ls}(x,y)\rceil = \lceil x=y\rceil = \lceil x\ne y\rceil := 0, \)

  • \( \lceil \varphi * \psi \rceil := \lceil \varphi \rceil + \lceil \psi \rceil , \)

  • \( \lceil \varphi {-\!\!\circledast }\psi \rceil := \lceil \psi \rceil , \)

  • \( \lceil \varphi \vee \psi \rceil := \max (\lceil \varphi \rceil ,\lceil \psi \rceil), \)

  • \( \lceil \varphi \wedge \psi \rceil := {\left\lbrace \begin{array}{ll} 0, & \mbox{if } \lceil \varphi \rceil = 0 \mbox{ or } \lceil \psi \rceil = 0,\\ \max (\lceil \varphi \rceil ,\lceil \psi \rceil), & \mbox{otherwise}, \end{array}\right.} \)

  • \( \lceil \lnot \varphi \rceil := \max \lbrace 1,\lceil \varphi \rceil \rbrace \).
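The clauses above can be transcribed directly into a recursive function. The tuple-based formula representation used here is a hypothetical AST chosen for the sketch, not the article's syntax.

```python
# Chunk size of a formula, transcribed clause by clause. Formulas are
# tuples: ('atom',), ('*', f, g), ('-o', f, g) for septraction,
# ('or', f, g), ('and', f, g), ('not', f).
def chunk_size(phi):
    op = phi[0]
    if op == 'atom':                 # emp, x |-> y, ls, =, !=
        return 0
    if op == '*':
        return chunk_size(phi[1]) + chunk_size(phi[2])
    if op == '-o':                   # septraction: only the right operand counts
        return chunk_size(phi[2])
    if op == 'or':
        return max(chunk_size(phi[1]), chunk_size(phi[2]))
    if op == 'and':
        l, r = chunk_size(phi[1]), chunk_size(phi[2])
        return 0 if l == 0 or r == 0 else max(l, r)
    if op == 'not':
        return max(1, chunk_size(phi[1]))
    raise ValueError(f"unknown connective: {op}")
```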

Observe that \( \lceil \varphi \rceil \le \left|\varphi \right| \) for all \( \varphi \). Intuitively, the chunk bound \( \lceil \varphi \rceil \) of a formula \( \varphi \) establishes two pieces of information: (1) If \( \lceil \varphi \rceil =0 \), then no model of \( \varphi \) contains a negative chunk. (2) If \( \lceil \varphi \rceil \ge 1 \), then, whenever \( \varphi \) has a model, it also has a model with at most \( \lceil \varphi \rceil \) negative chunks; moreover, to every model with at least \( \lceil \varphi \rceil \) negative chunks, we can add an arbitrary number of negative chunks (without allocated variables) and still satisfy \( \varphi \). We now formally state these two facts:

Lemma 3.36.

Let \( \varphi \) be a formula with \( \lceil \varphi \rceil = 0 \). Then the AMS of every model of \( \varphi \) has a garbage-chunk count of 0.

Proof.

See Appendix.□

For stating the second fact, we generalize the refinement theorem, Theorem 3.19, to models whose AMS differ in their garbage-chunk count, provided both garbage-chunk counts exceed the non-zero chunk size of the formula:

Theorem 3.37 (Refined Refinement Theorem).

Let \( \varphi \) be a formula with \( \lceil \varphi \rceil = k \ge 1 \). Let \( m\ge k \), \( n \ge k \) and let \( (s,h_{1}),(s,h_{2}) \) be models with \( \mathsf {ams}(s,h_{1})=\langle V,E,\rho ,m\rangle \), \( \mathsf {ams}(s,h_{2})=\langle V,E,\rho ,n\rangle \). Then, \( (s,h_{1}) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \) iff \( (s,h_{2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \).

Proof.

See Appendix.□

This implies that \( \varphi \) is satisfiable over stack \( s \) iff \( \varphi \) is satisfiable by a heap that contains at most \( \lceil \varphi \rceil \) negative chunks:

Corollary 3.38.

Let \( \varphi \) be a formula with \( \lceil \varphi \rceil =k \). Then \( \varphi \) is satisfiable over stack \( s \) iff there exists a heap \( h \) such that (1) \( \mathsf {ams}(s,h)=\langle V,E,\rho ,\gamma \rangle \) for some \( \gamma \le k \) and (2) \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \).

Proof.

Assume \( \varphi \) is satisfiable and let \( (s,h) \) be a model with \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \). Let \( \mathcal {A}= \langle V,E,\rho ,\gamma \rangle := \mathsf {ams}(s,h) \). If \( \gamma \le k \), then there is nothing to show. Otherwise, let \( \mathcal {A}^{\prime } := \langle V,E,\rho ,k\rangle \). By Lemma 3.15, we can choose a heap \( h^{\prime } \) with \( \mathsf {ams}(s,h^{\prime })=\mathcal {A}^{\prime } \). By Theorem 3.37, \( (s,h^{\prime }){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \).□

3.6 Deciding SSL by AMS Computation

In light of Corollary 3.38, we can decide the SSL satisfiability problem by means of a function \( \mathsf {abst}_{s}(\varphi) \) that computes the (finite) intersection of the (possibly infinite) set \( \alpha _{s}(\varphi) \) and the (finite) set \( \mathbf {AMS}_{k,s}:= \left\lbrace \langle V,E,\rho ,\gamma \rangle \in \mathbf {AMS}\mid V=\mathsf {cls}_{=}(s) \text{ and } \gamma \le k \right\rbrace \) for \( k = \lceil \varphi \rceil \). We define \( \mathsf {abst}_{s}(\varphi) \) in Figure 6. For atomic predicates, we only need to consider a garbage-chunk count of 0, whereas the cases \( * \), \( {-\!\!\circledast } \), \( \wedge \) and \( \vee \) require lifting the bound on the garbage-chunk count from \( m \) to \( n\ge m \).

Fig. 6.

Fig. 6. Computing the abstract memory states of the models of \( \varphi \) with stack \( s \) .

Definition 3.39.

Let \( m,n\in \mathbb {N} \) with \( m \le n \) and let \( \mathcal {A}= \langle V,E,\rho ,\gamma \rangle \in \mathbf {AMS} \). The bound-lifting of \( \mathcal {A} \) from \( m \) to \( n \) is \( \begin{equation*} \mathsf {lift}_{m \nearrow n}(\mathcal {A}) := {\left\lbrace \begin{array}{ll} \left\lbrace \mathcal {A}\right\rbrace & \text{if } m=0 \text{ or } \gamma \lt m,\\ \left\lbrace \langle V,E,\rho ,k\rangle \mid m \le k \le n \right\rbrace & \text{if } m \ne 0 \text{ and } \gamma = m. \\ \end{array}\right.} \end{equation*} \) We generalize bound-lifting to sets of AMS: \( \mathsf {lift}_{m \nearrow n}(\mathbf {A}) := \bigcup _{\mathcal {A}\in \mathbf {A}}\mathsf {lift}_{m \nearrow n}(\mathcal {A}) \).
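A direct transcription of the bound-lifting, representing an AMS as a tuple \( (V,E,\rho ,\gamma) \). The assertion makes explicit the assumption, implicit in the definition's case split, that lifting is only applied to AMSs with \( \gamma \le m \).

```python
# Bound-lifting of Definition 3.39 on 4-tuples (V, E, rho, gamma).
def lift(ams, m, n):
    V, E, rho, gamma = ams
    if m == 0 or gamma < m:
        return {ams}
    # remaining case of the definition: m != 0 and gamma == m
    assert gamma == m
    return {(V, E, rho, k) for k in range(m, n + 1)}

def lift_set(A, m, n):
    # pointwise union over a set of AMSs
    out = set()
    for a in A:
        out |= lift(a, m, n)
    return out
```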

As a consequence of Lemma 3.36 and Theorem 3.37, bound-lifting is sound for all \( n \ge \lceil \varphi \rceil \), i.e., \( \begin{equation*} \mathsf {lift}_{\lceil \varphi \rceil \nearrow n}(\alpha _{s}(\varphi) \cap \mathbf {AMS}_{\lceil \varphi \rceil ,s}) = \alpha _{s}(\varphi) \cap \mathbf {AMS}_{n,s}. \end{equation*} \) By combining this observation with the lemmas characterizing \( \alpha _s \) (Lemmas 3.20, 3.21, 3.22, 3.24, 3.31, 3.32, 3.33, 3.34, and 3.35), we obtain the correctness of \( \mathsf {abst}_{s}(\varphi) \):

Theorem 3.40.

Let \( s \) be a stack and \( \varphi \) be a formula. Then, \( \mathsf {abst}_{s}(\varphi)=\alpha _{s}(\varphi)\cap \mathbf {AMS}_{\lceil \varphi \rceil ,s} \).

Proof.

See Appendix.□

Computability of \( \mathsf {abst}_{s}(\varphi) \). We note that the operators \( \bullet ,{-\!\!\bullet },\cap ,\cup \) and \( \setminus \) are all computable, as the sets that occur in the definition of \( \mathsf {abst}_{s}(\varphi) \) are all finite. It remains to argue that we can compute the set of AMS for all atomic formulas. This is trivial for \( \mathbf {emp} \), (dis-)equalities, and points-to assertions. For the list-segment predicate, we note that the set \( \mathsf {abst}_{s}(\mathtt {ls}(x,y)) = \mathbf {AbstLists}(x,y) \cap \mathbf {AMS}_{0,s} \) can easily be computed, as there are only finitely many abstract lists w.r.t. the set of nodes \( V= \mathsf {cls}_{=}(s) \). We obtain the following results:

Corollary 3.41.

Let \( s \) be a (finite) stack. Then \( \mathsf {abst}_{s}(\varphi) \) is computable for all formulas \( \varphi \).

Theorem 3.42.

Let \( \varphi \in \mathbf {SL} \) and let \( \mathbf {x} \subseteq \mathbf {Var} \) be a finite set of variables with \( \mathsf {fvs}(\varphi) \subseteq \mathbf {x} \). It is decidable whether there exists a model \( (s,h) \) with \( \operatorname{dom}(s) = \mathbf {x} \) and \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \).

Proof.

We consider stacks \( s \) with \( \operatorname{dom}(s) = \mathbf {x} \). We observe that the set \( \mathbf {C} := \left\lbrace \mathsf {cls}_{=}(s) \mid \operatorname{dom}(s) = \mathbf {x}\right\rbrace \) is finite, and that all stacks \( s,s^{\prime } \) with \( \mathsf {cls}_{=}(s)=\mathsf {cls}_{=}(s^{\prime }) \) have the same abstractions by Lemma 3.18. Consequently, we can compute the set \( \left\lbrace \mathsf {abst}_{s}(\varphi) \mid \operatorname{dom}(s) = \mathbf {x}\right\rbrace \) by picking for each element \( V\in \mathbf {C} \) one stack \( s \) with \( \mathsf {cls}_{=}(s)=V \), and calculating \( \mathsf {abst}_{s}(\varphi) \) for this stack. By Corollary 3.41, \( \mathsf {abst}_{s}(\varphi) \) is computable for every such stack. By Theorem 3.40 and Corollary 3.38, \( \varphi \) is satisfiable over stack \( s \) iff \( \mathsf {abst}_{s}(\varphi) \) is nonempty. Putting all this together, we obtain that \( \varphi \) is satisfiable in a model with \( \operatorname{dom}(s) = \mathbf {x} \) if and only if one of these finitely many computable sets \( \mathsf {abst}_{s}(\varphi) \) is nonempty.□

Corollary 3.43.

\( \varphi \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\psi \) is decidable for all finite sets of variables \( \mathbf {x} \subseteq \mathbf {Var} \) and \( \varphi ,\psi \in \mathbf {SL} \) with \( \mathsf {fvs}(\varphi) \subseteq \mathbf {x} \) and \( \mathsf {fvs}(\psi) \subseteq \mathbf {x} \).

Proof.

\( \varphi \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\psi \) iff \( \varphi \wedge \lnot \psi \) is unsatisfiable w.r.t. \( \mathbf {x} \), which is decidable by Theorem 3.42.□

3.7 Complexity of the SSL Satisfiability Problem

It is easy to see that the algorithm \( \mathsf {abst}_{s}(\varphi) \) runs in exponential time. We conclude this section with a proof that SSL satisfiability and entailment are actually \( {\rm PS}{\rm\small{PACE}} \)-complete.

\( {\rm PS}{\rm\small{PACE}} \)-hardness. An easy reduction from quantified Boolean formulas (QBF) shows that the SSL satisfiability problem is \( {\rm PS}{\rm\small{PACE}} \)-hard. The reduction is presented in Figure 7. We encode positive literals \( x \) by \( (x \mapsto \mathsf {nil}) * \mathsf {t} \) (the heap contains the pointer \( x \mapsto \mathsf {nil} \)) and negative literals by \( \lnot ((x \mapsto \mathsf {nil}) * \mathsf {t}) \) (the heap does not contain the pointer \( x\mapsto \mathsf {nil} \)). The magic wand is used to simulate universals (i.e., to enforce that we consider both the case \( x \mapsto \mathsf {nil} \) and the case \( \mathbf {emp} \), setting \( x \) both to true and to false). Analogously, septraction is used to simulate existentials. Similar reductions can be found (for standard SL) in Calcagno et al. [2001].

Fig. 7.

Fig. 7. Translation \( \mathsf {qbf\_to\_sl}(F) \) from closed QBF formula \( F \) (in negation normal form) to a formula that is satisfiable iff \( F \) is true.
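Figure 7 itself is not reproduced in this text-only rendering. The following string-level sketch is therefore hypothetical: the literal and Boolean cases follow the encodings stated in the text verbatim, while the quantifier cases fill in a plausible wand/septraction pattern over the two-element heap family \( \lbrace \mathbf {emp}, x \mapsto \mathsf {nil}\rbrace \); the exact quantifier clauses of Figure 7 may differ.

```python
# Hypothetical rendering of qbf_to_sl over tuple-shaped QBF formulas.
def qbf_to_sl(F):
    op = F[0]
    if op == 'var':        # positive literal x: heap contains x |-> nil
        return f"(({F[1]} |-> nil) * t)"
    if op == 'negvar':     # negative literal: heap does not contain it
        return f"~(({F[1]} |-> nil) * t)"
    if op == 'and':
        return f"({qbf_to_sl(F[1])} /\\ {qbf_to_sl(F[2])})"
    if op == 'or':
        return f"({qbf_to_sl(F[1])} \\/ {qbf_to_sl(F[2])})"
    if op == 'forall':     # hypothetical: magic wand adds either heap
        return f"((({F[1]} |-> nil) \\/ emp) -* {qbf_to_sl(F[2])})"
    if op == 'exists':     # hypothetical: septraction picks one of them
        return f"((({F[1]} |-> nil) \\/ emp) -o {qbf_to_sl(F[2])})"
    raise ValueError(f"unknown constructor: {op}")
```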

Lemma 3.44.

The SSL satisfiability problem is \( {\rm PS}{\rm\small{PACE}} \)-hard (even without the \( \mathtt {ls} \) predicate).

Note that this reduction simultaneously proves the \( {\rm PS}{\rm\small{PACE}} \)-hardness of SSL model checking: If \( F \) is a QBF formula over variables \( x_1,\ldots ,x_k \), then \( \mathsf {qbf\_to\_sl}(F) \) is satisfiable iff \( (\left\lbrace x_i \mapsto \ell _i \mid 1 \le i \le k\right\rbrace ,\emptyset) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \mathsf {qbf\_to\_sl}(F) \) for some locations \( \ell _i \) with \( \ell _i \ne \ell _j \) for \( i \ne j \).

\( {\rm PS}{\rm\small{PACE}} \)-membership. For every stack \( s \) and every bound on the garbage-chunk count of the AMS we consider, it is possible to encode every AMS by a string of polynomial length.

Lemma 3.45.

Let \( k\in \mathbb {N} \), let \( s \) be a stack and \( n:=k+\left|s\right| \). There exists an injective function \( \mathsf {encode}:\mathbf {AMS}_{k,s}\rightarrow {\left\lbrace 0,1\right\rbrace }^{*} \) such that \( \begin{equation*} \left|\mathsf {encode}(\mathcal {A})\right| \in \mathcal {O}(n \log (n)) \quad \text{ for all } \mathcal {A}\in \mathbf {AMS}_{k,s}. \end{equation*} \)

Proof.

(Lemma 3.45.) Let \( \mathcal {A}= \langle V,E,\rho ,\gamma \rangle \in \mathbf {AMS}_{k,s} \). Each of the \( \left|s\right| \le n \) variables that occur in \( \mathcal {A} \) can be encoded by a logarithmic number of bits. Observe that \( \left|V\right| \le \left|s\right| \), so \( V \) can be encoded by at most \( \mathcal {O}(n \log (n) + n) \) symbols (using a constant-length delimiter between the nodes). Each of the at most \( \left|V\right| \) edges can be encoded by \( \mathcal {O}(\log (n)) \) bits, encoding the position of the source and target nodes in the encoding of \( V \) by \( \mathcal {O}(\log (n)) \) bits each and expending another bit to differentiate between \( =\!1 \) and \( \ge \!2 \) edges. \( \rho \) can be encoded like \( V \). Since \( \gamma \le k \le n \), \( \gamma \) can be encoded by at most \( \log (n) \) bits. In total, we thus have an encoding of length \( \mathcal {O}(n \log (n)) \).□
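The counting argument can be made concrete with a toy serializer. This is an illustrative encoding in the spirit of the proof, not the article's exact scheme: here \( \rho \) is flattened to a set of node indices (an assumption of the sketch), and every index is written with \( \lceil \log _2(n+1)\rceil \) bits.

```python
import math

# Toy serializer: every node index, edge endpoint, rho entry, and gamma
# is written as a fixed-width binary string, giving total length O(n log n).
def encode(V, E, rho, gamma, n):
    w = max(1, math.ceil(math.log2(n + 1)))   # bits per index
    bits = lambda i: format(i, f'0{w}b')
    parts = [bits(v) for v in sorted(V)]      # nodes
    for (src, tgt, kind) in sorted(E):        # kind: 0 for =1, 1 for >=2 edges
        parts += [bits(src), bits(tgt), str(kind)]
    for v in sorted(rho):                     # negative-allocation entries
        parts.append(bits(v))
    parts.append(bits(gamma))                 # garbage-chunk count
    return ''.join(parts)
```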

An enumeration-based implementation of the algorithm in Figure 6 (that has to keep in memory at most one AMS per subformula at any point in the computation) therefore runs in \( {\rm PS}{\rm\small{PACE}} \):

Lemma 3.46.

Let \( \varphi \in \mathbf {SL} \) and let \( \mathbf {x} \subseteq \mathbf {Var} \) be a finite set of variables with \( \mathsf {fvs}(\varphi) \subseteq \mathbf {x} \). It is decidable in \( {\rm PS}{\rm\small{PACE}} \) (in \( \left|\varphi \right| \) and \( \left|\mathbf {x}\right| \)) whether there exists a model \( (s,h) \) with \( \operatorname{dom}(s) = \mathbf {x} \) and \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \).

Proof.

A simple induction on the structure of \( \varphi \) shows that it is possible to enumerate the set \( \mathsf {abst}_{s}(\varphi) \) using at most \( \left|\varphi \right| \) registers (each storing an AMS). The most interesting case is \( \varphi _1{-\!\!\circledast }\varphi _2 \). Assume we can enumerate the sets \( \mathsf {abst}_{s}(\varphi _1) = \left\lbrace \mathcal {A}_1,\ldots ,\mathcal {A}_m\right\rbrace \) and \( \mathsf {abst}_{s}(\varphi _2)=\left\lbrace \mathcal {B}_1,\ldots ,\mathcal {B}_n\right\rbrace \) in polynomial space. We then use a new register in which we successively enumerate all \( \mathcal {A}\in \mathbf {AMS}_{\lceil \varphi _1{-\!\!\circledast }\varphi _2\rceil ,s} \). For each such \( \mathcal {A} \), we enumerate all pairs of AMS \( (\mathcal {A}_i,\mathcal {B}_j) \), \( 1 \le i \le m \), \( 1 \le j \le n \), and recognize that \( \mathcal {A}\in \mathsf {abst}_{s}(\varphi _1{-\!\!\circledast }\varphi _2) \) iff \( \mathcal {B}_j = \mathcal {A}_i\bullet \mathcal {A} \) for one of these pairs \( (\mathcal {A}_i,\mathcal {B}_j) \).□

The \( {\rm PS}{\rm\small{PACE}} \)-completeness result, Theorem 3.1, follows by combining Lemmas 3.44 and 3.46.

3.8 Extension to Trees

In this section, we show that all our results continue to hold when we add a tree predicate to our separation logic. In what follows, we only state the definitions and results that need to be adapted; most of the definitions and results from the previous sections carry over unchanged.

We begin by extending our memory model: We allow pointers to point to either one or two successor locations, i.e., we extend our previous definition of heaps and consider partial functions: \( \begin{equation*} h:\mathbf {Loc}\rightharpoonup \mathbf {Loc}\cup (\mathbf {Loc}\times \mathbf {Loc}). \end{equation*} \) With pointers being able to point to more than one location, the heap can now form more general graph-theoretic structures, in particular trees.

We now extend the syntax and semantics of our separation logic (as stated in Figures 2 and 3) to tree predicates and points-to predicates with two target locations: \( \begin{equation*} \begin{array}{lll} \tau & ::= & \cdots \mid x \mapsto \langle y,z\rangle \mid \mathtt {tree}(x;y_1,\ldots ,y_n;z_1,\ldots ,z_m) \end{array} \end{equation*} \) \( \begin{equation*} \begin{array}{lcl} (s,h)\models x \mapsto \langle y,z\rangle & \text{iff} & h= \left\lbrace s(x) \mapsto \langle s(y),s(z)\rangle \right\rbrace \quad \quad \quad \quad \quad \quad \\ (s,h)\models \mathtt {tree}(x;y_1,\ldots ,y_n;z_1,\ldots ,z_m) & \text{iff} \end{array} \end{equation*} \) \( \begin{equation*} \begin{array}{l} \quad \quad \quad \quad \quad \operatorname{dom}(h) = \emptyset , n=1 \text{ and } s(x) = s(y_1), \text{ or }\\ \quad \quad \quad \quad \quad \operatorname{dom}(h) = \emptyset , n=0 \text{ and } s(x) = s(z_i) \text{ for some } i \in \lbrace 1,\ldots ,m\rbrace , \text{ or }\\ \quad \quad \quad \quad \quad \text{there is some } \ell \in \mathbf {Loc}\text{ and a fresh variable } u \in \mathbf {Var}\text{ such that }\\ \quad \quad \quad \quad \quad \quad \quad \quad (s[u \mapsto \ell ],h) \models x \mapsto u * \mathtt {tree}(u;y_1,\ldots ,y_n;z_1,\ldots ,z_m), \text{ or }\\ \quad \quad \quad \quad \quad \text{there are some } \ell _1,\ell _2 \in \mathbf {Loc}, \text{ fresh variables } u,v \in \mathbf {Var}, \text{ and some}\\ \quad \quad \quad \quad \quad \text{partitioning of } y_1,\ldots ,y_n \text{ into } a_1,\ldots ,a_k \text{ and } b_1,\ldots ,b_l \text{ such that }\\ \quad \quad \quad \quad \quad \quad \quad \quad (s[u \mapsto \ell _1,v \mapsto \ell _2],h) \models x \mapsto \langle u,v\rangle * \mathtt {tree}(u;a_1,\ldots ,a_k;z_1,\ldots ,z_m)\\ \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad * \mathtt {tree}(v;b_1,\ldots ,b_l;z_1,\ldots ,z_m). \end{array} \end{equation*} \)

We note that in a tree predicate \( \mathtt {tree}(x;y_1,\ldots ,y_n;z_1,\ldots ,z_m) \), we distinguish between the root \( x \), leaves \( y_1,\ldots ,y_n \) and sinks \( z_1,\ldots ,z_m \). We give an example for a tree with three leaves and one sink in Figure 8; note that leaves can repeat (as \( y_2 \) in Figure 8) but the tree definition ensures that for a leaf, we precisely track the number of incoming pointers, whereas sinks can always have an arbitrary number of incoming pointers. We comment on the recursive definition of the tree predicate: The base cases state that each tree either ends in a leaf or in a sink location; the composite cases stipulate that the successor locations respect the requirements for leaves and sinks; in particular, in case of two successors the leaves \( y_1,\ldots ,y_n \) can be partitioned into the leaves \( a_1,\ldots ,a_k \) and \( b_1,\ldots ,b_l \) of the respective sub-trees, i.e., we require that \( k+l=n \) and that \( a_1,\ldots ,a_k,b_1,\ldots ,b_l \) is a permutation of \( y_1,\ldots ,y_n \). We want to distinguish between leaves and sinks to be able to reason about tree composition. That is, we want to generalize the following reasoning to trees: For lists, we can prove that \( \mathtt {ls}(x,y) * \mathtt {ls}(y,z) \models \mathtt {ls}(x,z) \), i.e., that the composition of the list-segment predicates \( \mathtt {ls}(x,y) \) and \( \mathtt {ls}(y,z) \) implies the list-segment predicate \( \mathtt {ls}(x,z) \). Indeed, we have the following property about tree composition:

Fig. 8.

Fig. 8. Tree example: A stack-heap pair \( (s,h) \) with \( (s,h)\models \mathtt {tree}(x;y_1,y_2,y_2;\mathsf {nil}) \) .

Proposition 3.47.

\( \begin{multline*} \mathtt {tree}(x;y,y_1,\ldots ,y_n;z_1,\ldots ,z_m) * \mathtt {tree}(y;w_1,\ldots ,w_k;z_1,\ldots ,z_m) \models \\ \mathtt {tree}(x;y_1,\ldots ,y_n,w_1,\ldots ,w_k;z_1,\ldots ,z_m). \end{multline*} \)

Proof.

Direct from the semantics of the tree predicate.□

We note that the tree predicate generalizes the list-segment predicate: it is easy to verify that the predicate \( \mathtt {tree}(x;y;\epsilon) \), where \( \epsilon \) is the empty sequence of variables, is satisfied by the same set of stack-heap pairs \( (s,h) \) as the list-segment predicate \( \mathtt {ls}(x,y) \).

Correspondence of Strong and Weak Semantics on Positive Formulas. The correspondence continues to hold for the positive fragment of the extended logic. (It is sufficient to check that the base case of Lemma 2.5 is also satisfied for the tree predicate.)

Chunks. The definition of positive and negative chunks (Definition 3.8) does not have to be changed, because positive chunks are defined with regard to the satisfaction of an arbitrary atomic formula \( \tau \), which may now also be a tree predicate.

The AMS abstraction. We need to generalize the AMS abstraction to incorporate pointers with multiple successors and trees. For this, we need to assume an upper bound \( k \) on the number of leaves that can appear in a tree predicate, i.e., we require \( n \le k \) for \( \mathtt {tree}(x;y_1,\ldots ,y_n;z_1,\ldots ,z_m) \). We are now ready to state the AMS generalization; we only need to change the second component of AMSs:

Definition 3.48

(AMS Edges).

We define AMS edges to be the partial function

\( E:V\rightharpoonup (V\cup V\times V) \times \lbrace =\!1\rbrace \cup (V\rightarrow \lbrace 0,\ldots ,k+1\rbrace \cup \lbrace \infty \rbrace) \times \lbrace \ge \!2\rbrace , \)

such that there is no \( \mathbf {v} \in \operatorname{dom}(E) \) with \( \mathsf {nil}\in \mathbf {v} \).

Intuitively, the AMS edges store whether there is a single pointer with one or two successors, or a tree with at least two allocated locations, for which we store the number of tree edges whose target location is in the image of the stack. We store the exact number of such edges when there are at most \( k+1 \) of them, which allows us to reason precisely about the number of incoming edges of a tree leaf. We represent more than \( k+1 \) edges by \( \infty \), which is sufficient to reason about sinks. We now give an intuition for why the bound of \( k+1 \) suffices to reason about the existence of models for formulas that include tree predicates with at most \( k \) leaves: Consider, for example, the predicate \( \mathtt {tree}(x;\epsilon ;\epsilon) \), i.e., a predicate with \( k=0 \) leaves. A model of \( \mathtt {tree}(x;\epsilon ;\epsilon) \) might be composed of two chunks that are models of \( \mathtt {tree}(x;y;\epsilon) \) and \( \mathtt {tree}(y;\epsilon ;\epsilon) \); note that these predicates use at most \( k+1=1 \) leaves. In contrast, there can never be a model of \( \mathtt {tree}(x;\epsilon ;\epsilon) \) that is composed of some chunk in which the same leaf occurs two or more times, e.g., \( \mathtt {tree}(x;y,y;\epsilon) \), because such a chunk would need to be composed with two other chunks that both allocate \( y \) (which is not possible).

We generalize the AMS induced by a model \( (s,h) \) accordingly: For every equivalence class \( {[x]}^{s}_{=} \in \mathsf {cls}_{=}(s) \), we set \( \begin{multline*} \mathsf {edges}(s,h)({[x]}^{s}_{=}) := \\ \quad \quad {\left\lbrace \begin{array}{ll} \langle {[y]}^{s}_{=}, =\!1\rangle & \text{there are } y \in \operatorname{dom}(s), h_c \in \mathsf {chunks}^{+}(s,h)\text{ with } (s,h_c) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} x \mapsto y \\ \langle \langle {[y]}^{s}_{=},{[z]}^{s}_{=}\rangle , =\!1\rangle & \text{there are } y,z \in \operatorname{dom}(s), h_c \in \mathsf {chunks}^{+}(s,h)\text{ with } (s,h_c) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} x \mapsto \langle y,z\rangle \\ \langle \lbrace v \mapsto d_v \rbrace , \ge \!2\rangle , & \text{there are } y_1,\ldots ,y_n,z_1,\ldots ,z_m \in \operatorname{dom}(s), h_c \in \mathsf {chunks}^{+}(s,h)\text{ with } \\ & \ (s,h_c) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \mathtt {tree}(x;y_1,\ldots ,y_n;z_1,\ldots ,z_m)\\ & \quad \quad \quad \quad \quad \quad \wedge \lnot x \mapsto y_1 \wedge \lnot x \mapsto \langle y_1,y_2\rangle \wedge \lnot x \mapsto \langle y_2,y_1\rangle , \text{ and } \\ & \ \text{ for every } v \in V\text{ with } d_v \lt \infty \text{ there are exactly } d_v \text{ variables } y_i\\ & \ \text{ with } y_i\in v, \text{ and for every } v \in V\text{ with } d_v = \infty \text{ there is some } z_i \in v \\ & \ \text{ with } |h_c^{-1}(s(z_i))| \gt k+1\\ \bot , & \text{otherwise.} \end{array}\right.} \end{multline*} \) We note that we do not need to include a separate case for lists in the extended definition of AMS edges, as lists are covered as a special case of \( \langle \lbrace v \mapsto d_v \rbrace , \ge \!2\rangle \), where \( \sum _{v \in V} d_v = 1 \), i.e., there is a single edge whose target location is in the image of the stack.

Lemma 3.49 (Realizability of AMS).

Let \( \mathcal {A}= \langle V,E,\rho ,\gamma \rangle \) be an AMS. There exists a model \( (s,h)= \mathsf {model}(\mathcal {A}) \) with \( \mathsf {ams}(s,h)=\mathcal {A} \) whose size is linear in the size of \( \mathcal {A} \) (we assume a unary representation of the numbers in AMS edges).

Proof.

(Lemma 3.49.) For every \( v \in V \), we fix a location \( \ell _v \in \mathbf {Loc} \), for every \( \mathbf {r} \in \rho \), we fix a location \( \ell _{\mathbf {r}} \in \mathbf {Loc} \), and for every \( 1\le i \le \gamma \), we fix a location \( \ell _i \in \mathbf {Loc} \); we assume all these locations to be different. We set \( s:= \bigcup _{x \in v, v \in V} \left\lbrace x \mapsto \ell _v\right\rbrace \). For every node \( v \in V \) with \( E(v)=\langle \lbrace w \mapsto d_w \rbrace , \ge \!2\rangle \), we fix some set of locations \( \mathbf {Loc}_v = \bigcup _{w \in V} \lbrace c^w_1,\ldots ,c^w_{e_w}\rbrace \subseteq \mathbf {Loc} \), where \( e_w = d_w \), in case \( d_w \lt \infty \), and \( e_w = k+2 \), otherwise; we require all those sets \( \mathbf {Loc}_v \) to be pairwise disjoint and disjoint from the sets \( \lbrace \ell _v \in \mathbf {Loc}\mid v \in V\rbrace \), \( \lbrace \ell _{\mathbf {r}} \mid \mathbf {r} \in \rho \rbrace \), and \( \lbrace \ell _i \mid 1\le i \le \gamma \rbrace \).

We now define \( h \) as the (disjoint) union of the following sets:

  • For every \( v \in V \) with \( E(v) = \langle v^{\prime },=\!1\rangle \) the set \( \begin{equation*} \left\lbrace \ell _v \mapsto \ell _{v^{\prime }}\right\rbrace \!. \end{equation*} \)

  • For every \( v \in V \) with \( E(v) = \langle \langle v^{\prime },v^{\prime \prime }\rangle ,=\!1\rangle \) the set \( \begin{equation*} \left\lbrace \ell _v \mapsto \langle \ell _{v^{\prime }},\ell _{v^{\prime \prime }}\rangle \right\rbrace \!. \end{equation*} \)

  • For every \( v \in V \) with \( E(v) = \langle \lbrace w \mapsto d_w \rbrace , \ge \!2\rangle \), using the locations \( \mathbf {Loc}_v = \bigcup _{w \in V} \lbrace c^w_1,\ldots ,c^w_{e_w}\rbrace \) and fixing an arbitrary order \( V= \lbrace w_1,\ldots ,w_n\rbrace \), the set \( \begin{multline*} \quad \quad \left\lbrace \ell _v \mapsto c^{w_1}_1\right\rbrace \cup \bigcup _{1 \le i \lt n} \left\lbrace c^{w_i}_j \mapsto \langle \ell _{w_i},c^{w_i}_{j+1}\rangle \mid 1 \le j \lt e_{w_i}\right\rbrace \cup \left\lbrace c^{w_i}_{e_{w_i}} \mapsto \langle \ell _{w_i},c^{w_{i+1}}_1\rangle \right\rbrace \\ \cup \left\lbrace c^{w_n}_j \mapsto \langle \ell _{w_n},c^{w_n}_{j+1}\rangle \mid 1 \le j \lt e_{w_n}\right\rbrace \cup \left\lbrace c^{w_n}_{e_{w_n}} \mapsto \ell _{w_n}\right\rbrace \!. \end{multline*} \)

  • For every \( \mathbf {r} \in \rho \) the set \( \begin{equation*} \left\lbrace \ell _\mathbf {r} \mapsto \ell _\mathbf {r}\right\rbrace \cup \bigcup _{v \in \mathbf {r}} \left\lbrace \ell _v \mapsto \ell _\mathbf {r}\right\rbrace \!. \end{equation*} \)

  • For every \( 1 \le i \le \gamma \) the set \( \begin{equation*} \left\lbrace \ell _i \mapsto \ell _i\right\rbrace \!. \end{equation*} \)

It is easy to verify that \( \mathsf {ams}(s,h)=\mathcal {A} \) and that \( \left|h\right| \in \mathcal {O}(\left|\mathcal {A}\right|) \).□
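The construction in this proof is effectively an algorithm. The following Python sketch is our own simplified encoding, not part of the paper: classes of \( V \) are frozensets of variable names, the edge map is restricted to pointer edges of multiplicity \( =\!1 \) (tree edges are omitted for brevity), \( \rho \) is a list of sets of classes, locations are positive integers, and `realize` is an illustrative name.

```python
def realize(V, E, rho, gamma):
    """Build a stack-heap pair whose abstraction is the given AMS (sketch)."""
    loc = {}
    next_loc = 1

    def fresh():
        nonlocal next_loc
        l, next_loc = next_loc, next_loc + 1
        return l

    for v in V:                             # one fixed location per class
        loc[v] = fresh()
    stack = {x: loc[v] for v in V for x in v}
    heap = {}
    for v, (kind, target) in E.items():     # '=1' edges become single pointers
        if kind == '=1':
            heap[loc[v]] = loc[target]
    for r in rho:                           # each rho-set: one cyclic negative chunk
        l_r = fresh()
        heap[l_r] = l_r
        for v in r:
            heap[loc[v]] = l_r
    for _ in range(gamma):                  # gamma further unreachable self-loops
        l_i = fresh()
        heap[l_i] = l_i
    return stack, heap
```

On a two-class AMS with one pointer edge and \( \gamma = 1 \), this yields a heap with one pointer and one unreachable self-loop, in line with the linear size bound of the lemma.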

We now define abstract trees; this notion allows us to characterize the AMSs that arise from abstracting models of the tree predicate.

Definition 3.50.

Given some \( \mathcal {A}=\langle V,E,\rho ,\gamma \rangle \in \mathbf {AMS} \) and \( x,y_1,\ldots ,y_n,z_1,\ldots ,z_m \in \mathbf {Var} \), we say that \( \mathcal {A} \) is an abstract tree with root \( x \), leaves \( y_1,\ldots ,y_n \) and sinks \( z_1,\ldots ,z_m \), in symbols \( \mathcal {A}\in \mathbf {AbstTrees}(x;y_1,\ldots ,y_n;z_1,\ldots ,z_m) \), iff \( \mathsf {model}(\mathcal {A}) \models \mathtt {tree}(x;y_1,\ldots ,y_n;z_1,\ldots ,z_m) \), where \( \mathsf {model}(\mathcal {A}) \) is the canonical model of AMS \( \mathcal {A} \) from Lemma 3.49.

We now show that the notion of abstract trees indeed characterizes the models that satisfy tree predicates:

Lemma 3.51.

For all stack-heap pairs \( (s,h) \), we have that \( (s,h)\models \mathtt {tree}(x;y_1,\ldots ,y_n;z_1,\ldots ,z_m) \) iff \( \mathsf {ams}{(s,h)} \in \mathbf {AbstTrees}(x;y_1,\ldots ,y_n;z_1,\ldots ,z_m) \).

Proof.

We need to argue that \( (s,h)\models \mathtt {tree}(x;y_1,\ldots ,y_n;z_1,\ldots ,z_m) \) iff \( \mathsf {model}(\mathsf {ams}{(s,h)}) \models \mathtt {tree}(x;y_1,\ldots ,y_n;z_1,\ldots ,z_m) \).

Given some \( (s,h) \), we argue that \( (s,h)\models \mathtt {tree}(x;y_1,\ldots ,y_n;z_1,\ldots ,z_m) \) implies \( \mathsf {model}(\mathsf {ams}{(s,h)}) \models \mathtt {tree}(x;y_1,\ldots ,y_n;z_1,\ldots ,z_m) \); the proof of the other implication is similar. Assume that \( (s,h)\models \mathtt {tree}(x;y_1,\ldots ,y_n;z_1,\ldots ,z_m) \). We note that \( \mathsf {ams}{(\mathsf {model}(\mathsf {ams}{(s,h)}))} = \mathsf {ams}{(s,h)} \). Hence, \( \mathsf {model}(\mathsf {ams}{(s,h)}) \) does not contain garbage and fully decomposes into positive chunks. The positive chunks that consist of a single points-to assertion are the same in both models; only the chunks that belong to trees with \( \ge \!2 \) allocated locations may differ. These tree chunks agree on their roots, leaves, and sinks; only their numbers of internal locations may differ. We now argue that \( \mathsf {model}(\mathsf {ams}{(s,h)}) \models \mathtt {tree}(x;y_1,\ldots ,y_n;z_1,\ldots ,z_m) \): We need to exhibit a recursive unfolding of the tree predicate according to its semantics. We construct such an unfolding from the unfolding that witnesses \( (s,h)\models \mathtt {tree}(x;y_1,\ldots ,y_n;z_1,\ldots ,z_m) \), using that the chunks of both models abstract to the same AMSs, i.e., have the same roots, leaves, and sinks. Finally, we note that we can precisely track the leaves \( y_1,\ldots ,y_n \) for an upper bound \( n \le k \) on the number of leaves: each variable can be allocated at most once, and hence \( k+1 \) is an upper bound on the number of times the same variable \( y_i \) can appear as a leaf of some tree chunk.□

This result on abstract trees is all that is needed to generalize our decision procedure and complexity results to our extended separation logic. (In particular, we note that Lemma 3.51 covers the base cases in the proofs of Theorems 3.19 and 3.37.)

Skip 4PROGRAM VERIFICATION WITH STRONG-SEPARATION LOGIC Section

4 PROGRAM VERIFICATION WITH STRONG-SEPARATION LOGIC

Our main practical motivation behind SSL is to obtain a decidable logic that can be used to fully automatically discharge verification conditions in a Hoare-style verification proof. Discharging VCs can be automated by calculi that symbolically execute pre-conditions forward (respectively, post-conditions backward) and then invoke an entailment checker. Symbolic execution calculi typically either introduce first-order quantifiers or fresh variables to deal with updates to the program variables. We leave the extension of SSL with support for quantifiers to future work and in this article develop a forward symbolic execution calculus based on fresh variables.

We target the usual Hoare-style setting where a verification engineer annotates the pre- and post-condition of a function and provides loop invariants. Figure 12 shows two annotated example functions; the left function reverses a list and the right function copies a list. In addition to the program variables, our annotations may contain logical variables (also known as ghost variables); for example, the annotations of list reverse only contain program variables, while the annotations of list copy also contain the logical variable \( u \) (which is assumed to be equal to \( x \) in the pre-condition).10

A simple heap-manipulating programming language. We consider the six program statements \( \mathsf {x.next := y} \), \( \mathsf {x := y.next} \) (where \( x \) is different from \( y \)), \( \mathsf {free(x)} \), \( \mathsf {malloc(x)} \), \( \mathsf {x := y} \) and \( \mathsf {assume(\varphi)} \), where \( \varphi \) is \( x = y \) or \( x \ne y \). We remark that we do not include a statement \( \mathsf {x := x.next} \) for ease of exposition; this is w.l.o.g., because \( \mathsf {x := x.next} \) can be simulated by the statements \( \mathsf {y := x.next; x := y} \) at the expense of introducing an additional program variable \( y \). We specify the semantics of the considered program statements via a small-step operational semantics, stated in Figure 11, where we write \( (s,h) \xrightarrow {c} (s^{\prime },h^{\prime }) \) with the meaning that executing \( c \) in state \( (s,h) \) leads to state \( (s^{\prime },h^{\prime }) \), and \( (s,h) \xrightarrow {c} \mathsf {error} \) when executing \( c \) leads to an error. Our only non-standard choice is the modelling of the \( \mathsf {malloc} \) statement: We assume a special program variable \( m \), which is never referenced by any program statement and is only used in the modelling; the \( \mathsf {malloc} \) statement updates the value of \( m \) to the target of the newly allocated memory cell, so that this target has a name. We say program statement \( c \) is safe for a stack-heap pair \( (s,h) \) if there is no transition \( (s,h) \xrightarrow {c} \mathsf {error} \). Given a sequence of program statements \( \mathbf {c}=c_1\cdots c_k \), we write \( (s,h) \xrightarrow {\mathbf {c}} (s^{\prime },h^{\prime }) \) if there are stack-heap pairs \( (s_i,h_i) \) with \( (s_0,h_0) = (s,h) \), \( (s_k,h_k) = (s^{\prime },h^{\prime }) \) and \( (s_{i-1},h_{i-1}) \xrightarrow {c_i} (s_i,h_i) \) for all \( 1 \le i \le k \).
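To illustrate, the small-step semantics can be rendered as a simple interpreter. The encoding below is our own sketch, not Figure 11 itself: states are dicts, statements are tuples, faulting transitions return the token `ERROR`, a blocked `assume` returns `None`, and `malloc` picks a fresh location naively.

```python
ERROR = 'error'   # the faulting transition (s,h) -c-> error

def step(stmt, s, h):
    """One small step; returns a new (stack, heap), ERROR, or None (blocked assume)."""
    op = stmt[0]
    if op == 'assign':                     # x := y
        _, x, y = stmt
        return {**s, x: s[y]}, h
    if op == 'load':                       # x := y.next
        _, x, y = stmt
        return ({**s, x: h[s[y]]}, h) if s[y] in h else ERROR
    if op == 'store':                      # x.next := y
        _, x, y = stmt
        return (s, {**h, s[x]: s[y]}) if s[x] in h else ERROR
    if op == 'free':                       # free(x)
        _, x = stmt
        if s[x] not in h:
            return ERROR
        h2 = dict(h)
        del h2[s[x]]
        return s, h2
    if op == 'malloc':                     # malloc(x); m names the new cell's target
        _, x = stmt
        l = max(list(h) + list(s.values()) + [0]) + 1   # a naively chosen fresh location
        return {**s, x: l, 'm': l + 1}, {**h, l: l + 1}
    if op == 'assume':                     # assume(x = y) or assume(x != y)
        _, x, y, eq = stmt
        return (s, h) if (s[x] == s[y]) == eq else None
    raise ValueError(op)
```

For instance, executing `free` on a variable whose value is not allocated yields `ERROR`, matching the safety condition above.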

Forward Symbolic Execution Rules. The rules for the program statements in Figure 9 are local in the sense that they only deal with a single pointer or the empty heap. The rules in Figure 10 are the main rules of our forward symbolic execution calculus. The frame rule is essential for lifting the local proof rules to larger heaps. Note that the frame rule requires substituting the modified program variables with fresh copies: We set \( \mathsf {modifiedVars}(c) := \lbrace x,m\rbrace \) for \( c=\mathsf {malloc}(x) \), \( \mathsf {modifiedVars}(c) := \lbrace x\rbrace \) for \( c = \mathsf {x := y.next} \) and \( c = \mathsf {x := y} \), and \( \mathsf {modifiedVars}(c) := \emptyset \), otherwise. The materialization rule ensures that the frame rule can be applied whenever the pre-condition of a local proof rule can be met. We now give more details. For a sequence of program statements \( \mathbf {c}=c_1\cdots c_k \) and a pre-condition \( P_\mathsf {start} \), the symbolic execution calculus derives triples \( \left\lbrace P_\mathsf {start}\right\rbrace c_1\cdots c_i \left\lbrace Q_i\right\rbrace \) for all \( 1\le i \le k \). To proceed from \( i \) to \( i+1 \), either (1) only the frame rule is applied or (2) the materialization rule is applied first followed by an application of the frame rule. The frame rule can be applied if the formula \( Q_i \) has the shape \( Q_i = A*P \), where \( A \) is suitably chosen and \( P \) is the pre-condition of the local proof rule for statement \( c_i \). Then, \( Q_{i+1} \) is given by \( Q_{i+1} = A[\mathbf {x}^{\prime }/\mathbf {x}] * Q \), where \( \mathbf {x} = \mathsf {modifiedVars}(c_i) \), \( \mathbf {x}^{\prime } \) are fresh copies of the variables \( \mathbf {x} \), and \( Q \) is the post-condition of the local proof rule for statement \( c_i \), i.e., we have \( \left\lbrace P\right\rbrace c_i \left\lbrace Q\right\rbrace \).
The materialization rule may be applied to ensure that \( Q_i \) has the shape \( Q_i = A*P \). This is not needed in case \( P=\mathbf {emp} \) but may be necessary for points-to assertions such as \( P = x \mapsto y \). We note that \( Q_i \) guarantees that a pointer \( x \) is allocated iff \( Q_i {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \lnot ((x \mapsto \mathsf {nil}) {-\!\!\circledast }\mathsf {t}) \). Under this condition, the rule allows introducing a name \( z \) for the target of the pointer \( x \). We remark that while backward symbolic execution calculi typically employ the magic wand, our forward calculus makes use of the dual septraction operator: this operator allows us to design a general rule that guarantees a predicate of shape \( Q_i = A*P \) without the need for dedicated rules for, e.g., unfolding list predicates.
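The bookkeeping of a single calculus step, \( Q_{i+1} = A[\mathbf {x}^{\prime }/\mathbf {x}] * Q \), can be sketched as follows. This is a toy encoding of ours, not the paper's calculus: formulas are lists of space-separated conjunct strings, and `frame_step` and its arguments are illustrative names.

```python
def frame_step(frame, local_post, modified_vars, fresh):
    """Q_{i+1} = A[x'/x] * Q: rename the modified variables in the frame A,
    then conjoin the post-condition Q of the local proof rule."""
    renaming = {x: fresh(x) for x in modified_vars}

    def rename(conjunct):
        # naive token-wise substitution on a space-separated conjunct
        return ' '.join(renaming.get(tok, tok) for tok in conjunct.split())

    return [rename(a) for a in frame] + [local_post]
```

For the statement \( \mathsf {a := x} \), whose local rule has post-condition \( a = x \) and \( \mathsf {modifiedVars} = \lbrace a\rbrace \), the frame is renamed while the new equality is added unrenamed.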

Fig. 9.

Fig. 9. Local proof rules of program statements for forward symbolic execution.

Fig. 10.

Fig. 10. The frame and the materialization rule for forward symbolic execution.

Fig. 11.

Fig. 11. Semantics of program statements.

Applying the forward symbolic execution calculus for verification. We now explain how the proof rules presented in Figures 9 and 10 can be used for program verification. Our goal is to verify that the pre-condition \( P \) of a loop-free piece of code \( c \) (in our case, a sequence of program statements) implies the post-condition \( Q \). For this, we apply the symbolic execution calculus and derive a triple \( \left\lbrace P\right\rbrace c \left\lbrace Q^{\prime }\right\rbrace \). It then remains to verify that the final state of the symbolic execution \( Q^{\prime } \) implies the post-condition \( Q \). Here, we face the difficulty that the symbolic execution introduces additional variables: Let us assume that all annotations are over a set of variables \( \mathbf {x} \), which includes the program variables and the logical variables. Further assume that the symbolic execution \( \left\lbrace P\right\rbrace c \left\lbrace Q^{\prime }\right\rbrace \) introduced the fresh variables \( \mathbf {y} \). With the results of Section 3, we can then verify the entailment \( Q^{\prime } \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x} \cup \mathbf {y}} Q \). However, we need to guarantee that all models \( (s,h) \) of \( Q \) with \( \operatorname{dom}(s) = \mathbf {x} \cup \mathbf {y} \) are also models when we restrict \( \operatorname{dom}(s) \) to \( \mathbf {x} \) (note that the variables \( \mathbf {y} \) are implicitly existentially quantified; we make this statement precise in Lemma 4.6 below). To deal with this issue, we require annotations to be robust:

Definition 4.1

(Robust Formula).

We call a formula \( \varphi \in \mathbf {SL} \) robust, if for all models \( (s_1,h) \) and \( (s_2,h) \) with \( \mathsf {fvs}(\varphi) \subseteq \operatorname{dom}(s_1) \) and \( \mathsf {fvs}(\varphi) \subseteq \operatorname{dom}(s_2) \) and \( s_1(x) = s_2(x) \) for all \( x \in \mathsf {fvs}(\varphi) \), we have that \( (s_1,h) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \) iff \( (s_2,h) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \).

We identify a fragment of robust formulas in the next lemma. In particular, we obtain that the annotations in Figure 12 are robust.

Fig. 12.

Fig. 12. List reverse (left) and list copy (right) with annotated pre- and post-conditions and loop invariants.

Lemma 4.2.

Let \( \varphi \in \mathbf {SL} \) be a positive formula. Then, \( \varphi \) is robust.

Proof.

Let \( (s_1,h) \) and \( (s_2,h) \) be two models with \( s_1(x) = s_2(x) \) for all \( x \in \mathsf {fvs}(\varphi) \). Then, by Lemma 2.7, we have that \( (s_1,h) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \) iff \( (s_1,h) {\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \varphi \) iff \( (s_2,h) {\vert\!\mathop{=}\limits^{ \mathrm {wk}}} \varphi \) iff \( (s_2,h) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \).□
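To make the robustness property concrete, the following toy Python check (our own encoding of stacks and heaps as dicts; `sat_pointsto` is an illustrative satisfaction test, not the paper's definition) shows that satisfaction of the positive formula \( x \mapsto y \) depends only on the stack values of its free variables:

```python
def sat_pointsto(s, h, x, y):
    """(s,h) |= x |-> y in our toy encoding: the heap is the single cell s(x) -> s(y)."""
    return h == {s[x]: s[y]}
```

Extending the stack with an unrelated variable does not affect satisfaction, exactly as Definition 4.1 requires.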

The following lemma allows us to construct robust formulas from known robust formulas:

Lemma 4.3.

Let \( \varphi \in \mathbf {SL} \) be a formula. If \( \varphi \) is robust, then \( \varphi * x \mapsto y \) and \( x \mapsto y {-\!\!\circledast }\varphi \) are robust.

Proof.

Immediate from the definition of a robust formula.□

Not all formulas are robust, e.g., consider \( \varphi \) from Example 2.2. Conversely, Lemma 4.2 does not cover all robust formulas, e.g., \( \mathsf {t} \) is robust. We leave the identification of further robust formulas to future work.

Soundness of Forward Symbolic Execution. We adapt the notion of a local action from Calcagno et al. [2007] to contracts:

Definition 4.4

(Local Contract).

Given some program statement \( c \) and SL formulae \( P,Q \), we say the triple \( \left\lbrace P\right\rbrace c \left\lbrace Q\right\rbrace \) is a local contract, if for every stack-heap pair \( (s,h) \) with \( (s,h) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} P \), every stack \( t \) with \( s\subseteq t \) and every heap \( h^\circ \) with \( h\uplus ^{t} h^\circ \ne \bot \), we have that

(1)

\( c \) is safe for \( (t,h\uplus ^{t} h^\circ) \), and

(2)

for every stack-heap pair \( (t^{\prime },h^{\prime }) \) with \( (t,h\uplus ^{t} h^\circ) \xrightarrow {c} (t^{\prime },h^{\prime }) \) there is some heap \( h^{\#} \) with \( h^{\#} \uplus ^{t^{\prime }} h^\circ = h^{\prime } \) and \( (t^{\prime },h^{\#}) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} Q \).

We now state that our local proof rules specify local contracts:

Lemma 4.5.

Let \( c \) be a program statement, and let \( \left\lbrace P\right\rbrace c \left\lbrace Q\right\rbrace \) be the triple from its local proof rule as stated in Figure 9. Then, \( \left\lbrace P\right\rbrace c \left\lbrace Q\right\rbrace \) is a local contract.

Proof.

The requirements (1) and (2) of local contracts can be directly verified from the semantics of the program statements.□
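As a concrete sanity check of Definition 4.4, one can execute \( \mathsf {free(x)} \) on a heap composed of a local part satisfying \( x \mapsto y \) and a disjoint frame, and observe both conditions of the local contract for the triple \( \lbrace x \mapsto y\rbrace \ \mathsf {free(x)}\ \lbrace \mathbf {emp}\rbrace \). The encoding (dicts for stacks and heaps, `exec_free` as an illustrative name) is a hypothetical sketch of ours:

```python
def exec_free(s, h, x):
    """Semantics of free(x): error if s(x) is not allocated, else remove the cell."""
    if s[x] not in h:
        return 'error'
    h2 = dict(h)
    del h2[s[x]]
    return s, h2

t = {'x': 1, 'y': 2}              # a stack extending the local one
h_local = {1: 2}                  # satisfies the pre-condition x |-> y
h_frame = {5: 6}                  # a disjoint frame heap
result = exec_free(t, {**h_local, **h_frame}, 'x')
```

Execution on the composed heap is safe, and the resulting heap is exactly the frame, i.e., the local part satisfies the post-condition \( \mathbf {emp} \).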

We are now ready to state the soundness of our symbolic execution calculus (we assume robust formulas \( A \) in the frame rule, which can be ensured by the materialization rule11); we note that the statement makes precise the implicitly existentially quantified variables, stating that there is an extension of the stack to the variables \( V \) introduced by the symbolic execution such that \( Q \) holds:

Lemma 4.6 (Soundness of Forward Symbolic Execution).

Let \( \mathbf {c} \) be a sequence of program statements, let \( P \) be a robust formula, let \( \left\lbrace P\right\rbrace \mathbf {c} \left\lbrace Q\right\rbrace \) be the triple obtained from symbolic execution, and let \( V \) be the fresh variables introduced during symbolic execution. Then, \( Q \) is robust and for all \( (s,h)\xrightarrow {\mathbf {c}}(s^{\prime },h^{\prime }) \) with \( (s,h) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} P \), there is a stack \( s^{\prime \prime } \) with \( s^{\prime } \subseteq s^{\prime \prime } \), \( V \subseteq \operatorname{dom}(s^{\prime \prime }) \) and \( (s^{\prime \prime },h^{\prime }) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} Q \).

Proof.

See Appendix.□

Automation. We note that the presented approach can fully automatically verify that the pre-condition of a loop-free piece of code guarantees its post-condition: For every program statement, we apply its local proof rule using the frame rule (and in addition the materialization rule in case the existence of a pointer target must be guaranteed). We then discharge the entailment query using our decision procedure from Section 3. We now illustrate this approach on the programs from Figure 12. For both programs, we verify that the loop invariant is inductive (in both cases the loop-invariant \( P \) is propagated forward through the loop body; it is then checked that the obtained formula \( Q \) again implies the loop invariant \( P \); for verifying the implication, we apply our decision procedure from Corollary 3.43):

Example 4.7.

Verifying the loop invariant of list reverse: \( \begin{align*} &\left\lbrace \mathtt {ls}(x,\mathsf {nil}) * \mathtt {ls}(a,\mathsf {nil})\right\rbrace (=: P)\\ &\quad \mathtt {assume(x\ne \mathsf {nil})}\\ &\left\lbrace \mathtt {ls}(x,\mathsf {nil}) * \mathtt {ls}(a,\mathsf {nil}) * x \ne \mathsf {nil}\right\rbrace \\ &\quad {\texttt {# materialization}}\\ &\left\lbrace x \mapsto z {-\!\!\circledast }(\mathtt {ls}(x,\mathsf {nil}) * \mathtt {ls}(a,\mathsf {nil})* x \ne \mathsf {nil}) * x \mapsto z\right\rbrace \\ &\quad \mathtt {b := x.next}\\ &\left\lbrace x \mapsto z {-\!\!\circledast }(\mathtt {ls}(x,\mathsf {nil}) * \mathtt {ls}(a,\mathsf {nil})* x \ne \mathsf {nil}) * x \mapsto z * b = z\right\rbrace \\ &\quad \mathtt {x.next := a}\\ &\left\lbrace x \mapsto z {-\!\!\circledast }(\mathtt {ls}(x,\mathsf {nil}) * \mathtt {ls}(a,\mathsf {nil})* x \ne \mathsf {nil}) * x \mapsto a * b = z\right\rbrace \\ &\quad \mathtt {a := x}\\ &\left\lbrace x \mapsto z {-\!\!\circledast }(\mathtt {ls}(x,\mathsf {nil}) * \mathtt {ls}(a^{\prime },\mathsf {nil})* x \ne \mathsf {nil}) * x \mapsto a^{\prime } * b = z * a = x\right\rbrace \\ &\quad \mathtt {x := b}\\ &\, \lbrace x^{\prime } \mapsto z {-\!\!\circledast }(\mathtt {ls}(x^{\prime },\mathsf {nil}) * \mathtt {ls}(a^{\prime },\mathsf {nil}) * x^{\prime } \ne \mathsf {nil}) * x^{\prime } \mapsto a^{\prime } * b = z \ * \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad a = x^{\prime } * x = b\rbrace (=: Q)\\ &\left\lbrace \mathtt {ls}(x,\mathsf {nil}) * \mathtt {ls}(a,\mathsf {nil})\right\rbrace (=: P). \end{align*} \)

Example 4.8.

Verifying the loop invariant of list copy: \( \begin{align*} &\left\lbrace \mathtt {ls}(u,x) * \mathtt {ls}(x,\mathsf {nil}) * \mathtt {ls}(r,s) * s \mapsto m\right\rbrace (=: P)\\ &\quad \mathtt {assume(x\ne \mathsf {nil})}\\ &\left\lbrace \mathtt {ls}(u,x) * \mathtt {ls}(x,\mathsf {nil}) * \mathtt {ls}(r,s) * s \mapsto m * x \ne \mathsf {nil}\right\rbrace \\ &\quad \mathtt {malloc(t)}\\ &\left\lbrace \mathtt {ls}(u,x) * \mathtt {ls}(x,\mathsf {nil}) * \mathtt {ls}(r,s) * s \mapsto m^{\prime } * x \ne \mathsf {nil}* t \mapsto m\right\rbrace \\ &\quad \mathtt {s.next := t}\\ &\left\lbrace \mathtt {ls}(u,x) * \mathtt {ls}(x,\mathsf {nil}) * \mathtt {ls}(r,s) * s \mapsto t * x \ne \mathsf {nil}* t \mapsto m\right\rbrace \\ &\quad \mathtt {s := t}\\ &\left\lbrace \mathtt {ls}(u,x) * \mathtt {ls}(x,\mathsf {nil}) * \mathtt {ls}(r,s^{\prime }) * s^{\prime } \mapsto t * x \ne \mathsf {nil}* t \mapsto m * s = t\right\rbrace \\ &\quad {\texttt {# materialization}}\\ &\ \lbrace x \mapsto z {-\!\!\circledast }(\mathtt {ls}(u,x) * \mathtt {ls}(x,\mathsf {nil}) * \mathtt {ls}(r,s^{\prime }) * s^{\prime } \mapsto t * x \ne \mathsf {nil}* t \mapsto m * s = t) \ * \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad x \mapsto z\rbrace \\ &\quad \mathtt {y := x.next}\\ &\, \lbrace x \mapsto z {-\!\!\circledast }(\mathtt {ls}(u,x) * \mathtt {ls}(x,\mathsf {nil}) * \mathtt {ls}(r,s^{\prime }) * s^{\prime } \mapsto t * x \ne \mathsf {nil}* t \mapsto m * s = t) \ * \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad x \mapsto z * y = z\rbrace \\ &\quad \mathtt {x := y}\\ &\, \lbrace x^{\prime } \mapsto z {-\!\!\circledast }(\mathtt {ls}(u,x^{\prime }) * \mathtt {ls}(x^{\prime },\mathsf {nil}) * \mathtt {ls}(r,s^{\prime }) * s^{\prime } \mapsto t \ *\\ & 
\quad \quad \quad \quad \quad \quad \quad \quad x^{\prime } \ne \mathsf {nil}* t \mapsto m * s = t) * x^{\prime } \mapsto z * y = z * x = y\rbrace (=: Q)\\ &\left\lbrace \mathtt {ls}(u,x) * \mathtt {ls}(x,\mathsf {nil}) * \mathtt {ls}(r,s) * s \mapsto m\right\rbrace (=: P). \end{align*} \)

While our decision procedure can automatically discharge the entailments in both of the above examples, we give a short direct argument for the entailment check of Example 4.7 for the benefit of the reader (a similar argument could be worked out for Example 4.8): We note that \( Q \) simplifies to \( Q^{\prime \prime } = a \mapsto x {-\!\!\circledast }(\mathtt {ls}(a,\mathsf {nil}) * \mathtt {ls}(a^{\prime },\mathsf {nil})) * a \mapsto a^{\prime } \). Every model \( (s,h) \) of \( Q^{\prime \prime } \) must consist of a pointer \( a \mapsto a^{\prime } \), a list segment \( \mathtt {ls}(a^{\prime },\mathsf {nil}) \) and a heap \( h^{\prime } \) to which the pointer \( a \mapsto x \) can be added to obtain the list segment \( \mathtt {ls}(a,\mathsf {nil}) \); by the semantics of the list-segment predicate, \( h^{\prime } \) in fact must be the list segment \( \mathtt {ls}(x,\mathsf {nil}) \). Further, the pointer \( a \mapsto a^{\prime } \) can be composed with the list segment \( \mathtt {ls}(a^{\prime },\mathsf {nil}) \) to obtain \( \mathtt {ls}(a,\mathsf {nil}) \).

Skip 5NORMAL FORMS AND THE ABDUCTION PROBLEM Section

5 NORMAL FORMS AND THE ABDUCTION PROBLEM

In this section, we introduce normal forms for the separation logic considered in this article. We obtain normal forms from the insight that we can precisely describe every AMS by a formula, i.e., we can construct a formula such that all models of the formula abstract to the AMS for which the formula was constructed. Normal forms allow us to transform a formula into an equivalent canonical representation: We prove that the obtained normal form is equivalent to the original formula (Theorem 5.3). Moreover, we show that the normal form transformation is a closure operator (Theorem 5.4). We then discuss applications of the normal form transformation to the abduction problem: We recall that the weakest solution to the abduction problem can be syntactically characterized by a formula that involves the magic wand. Normal forms then allow us to compute an explicit representation of the weakest solution.

Normal Forms. We lift the abstraction functions from stacks to sets of variables: Let \( \mathbf {x} \subseteq \mathbf {Var} \) be a finite set of variables and \( \varphi \in \mathbf {SL} \) be a formula with \( \mathsf {fvs}(\varphi) \subseteq \mathbf {x} \). We set \( \alpha _{\mathbf {x}}(\varphi) := \bigcup _{\operatorname{dom}(s) = \mathbf {x}} \alpha _{s}(\varphi) \) and \( \mathsf {abst}_{\mathbf {x}}(\varphi) := \alpha _{\mathbf {x}}(\varphi) \cap \mathbf {AMS}_{\lceil \varphi \rceil ,\mathbf {x}} \), where \( \mathbf {AMS}_{k,\mathbf {x}}:= \left\lbrace \langle V,E,\rho ,\gamma \rangle \in \mathbf {AMS}\mid \bigcup V=\mathbf {x} \text{ and } \gamma \le k \right\rbrace \). (We note that \( \mathsf {abst}_{\mathbf {x}}(\varphi) \) is computable by the same argument as in the proof of Theorem 3.42.)

Definition 5.1

(Normal Form).

Let \( \mathsf {NF}_{\mathbf {x}}(\varphi) := \bigvee _{\mathcal {A}\in \mathsf {abst}_{\mathbf {x}}(\varphi)} \mathsf {AMS2SL}^{\lceil \varphi \rceil }(\mathcal {A}) \) be the normal form of \( \varphi \), where \( \mathsf {AMS2SL}^{m}(\mathcal {A}) \) is defined as in Figure 13.

Fig. 13.

Fig. 13. The induced formula \( \mathsf {AMS2SL}^{m}(\mathcal {A}) \) of AMS \( \mathcal {A}=\langle V,E,\rho ,\gamma \rangle \) with \( \gamma \le m \).


The formula \( \mathsf {AMS2SL}^{m}(\mathcal {A}) \) represents a direct encoding of the AMS \( \mathcal {A} \): \( \mathsf {aliasing}(\mathcal {A}) \) encodes the aliasing between the stack variables as implied by \( V \); \( \mathsf {graph}(\mathcal {A}) \) encodes the points-to assertions and lists of length at least two corresponding to the edges \( E \) (the formula can be straightforwardly adapted to trees as introduced in Section 3.8); \( \mathsf {garbage}(\mathcal {A}) \) encodes that there are precisely \( \gamma \) negative chunks (in case \( \gamma \lt m \)) or at least \( \gamma \) negative chunks (in case \( \gamma = m \)),12 where the formula \( \mathsf {neg}(\mathcal {A}) \) ensures that these chunks are indeed negative (i.e., they do not satisfy a points-to or a list predicate) and that the variables allocated within some negative chunk are exactly those specified by the negative allocation constraint \( \rho \). We have the following result about the formula \( \mathsf {AMS2SL}^{m}(\mathcal {A}) \):

Lemma 5.2.

Let \( \mathcal {A}=\langle V,E,\rho ,\gamma \rangle \) be an AMS with \( \gamma \le m \), and let \( (s,h) \) be a stack-heap pair. Then, we have that \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \mathsf {AMS2SL}^{m}(\mathcal {A}) \) iff \( \mathsf {ams}(s,h)=\langle V,E,\rho ,\gamma ^{\prime }\rangle \), with \( \gamma ^{\prime } = \gamma \) for \( \gamma \lt m \), and \( \gamma ^{\prime } \ge \gamma \) for \( \gamma = m \).

Proof.

We have that \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \mathsf {aliasing}(\mathcal {A}) \) iff the equivalence classes induced by \( s \) agree with the equivalence classes \( V \). We have that \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \mathsf {graph}(\mathcal {A}) \) iff the positive chunks of \( (s,h) \) are precisely the ones specified by \( E \) (we note that the formula \( \mathtt {ls}_{\ge 2} \) ensures that the corresponding list segments indeed have length at least two). We have that \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \mathsf {garbage}(\mathcal {A}) \) iff \( (s,h) \) contains exactly \( \gamma \) negative chunks (in case \( \gamma \lt m \)), respectively, at least \( \gamma \) negative chunks (in case \( \gamma = m \)); the formula \( \mathsf {neg}(\mathcal {A}) \) ensures that all these chunks are indeed negative, i.e., that there is no chunk satisfying a formula \( \max (\mathbf {v}) \mapsto \max (\mathbf {w}) \) or \( \mathtt {ls}(\max (\mathbf {v}),\max (\mathbf {w})) \), and that the allocated variables correspond to the sets specified by \( \rho \) (i.e., for every \( R \in \rho \) there is a chunk that allocates the variables in \( R \), for every \( R \in \rho \) the variables in \( R \) cannot be allocated in different chunks, and all variables not in some \( R \in \rho \) are not allocated).□

The normal form of a formula \( \varphi \) is then obtained by taking the disjunction of the formulas \( \mathsf {AMS2SL}^{\lceil \varphi \rceil }(\mathcal {A}) \) over all AMSs \( \mathcal {A}\in \mathsf {abst}_{\mathbf {x}}(\varphi) \), i.e., over the AMSs that result from abstracting models of \( \varphi \). Intuitively, the formulas \( \mathsf {AMS2SL}^{\lceil \varphi \rceil }(\mathcal {A}) \) partition the models of \( \varphi \) (recall that the Refinement Theorem states that models that abstract to the same AMS satisfy the same formulas). We now state that the normal form of a formula is equivalent to the original formula:

Theorem 5.3 (Equivalence).

\( \mathsf {NF}_{\mathbf {x}}(\varphi) \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\varphi \) and \( \varphi \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\mathsf {NF}_{\mathbf {x}}(\varphi) \).

Proof.

\( \varphi \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\mathsf {NF}_{\mathbf {x}}(\varphi) \): Assume \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \) for some stack-heap pair \( (s,h) \). W.l.o.g., we can assume that \( (s,h) \) has at most \( \lceil \varphi \rceil \) negative chunks (otherwise, we can choose a stack-heap pair that has exactly \( \lceil \varphi \rceil \) negative chunks; by Theorem 3.37 this model still satisfies formula \( \varphi \)). We consider the AMS \( \mathcal {A}= \mathsf {ams}(s,h) \). With \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \), we get \( \mathcal {A}\in \mathsf {abst}_{\mathbf {x}}(\varphi) \). By Lemma 5.2, we have \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \mathsf {AMS2SL}^{\lceil \varphi \rceil }(\mathcal {A}) \). Because of \( \mathcal {A}\in \mathsf {abst}_{\mathbf {x}}(\varphi) \), we get that \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \mathsf {NF}_{\mathbf {x}}(\varphi) \).

\( \mathsf {NF}_{\mathbf {x}}(\varphi) \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\varphi \): Assume \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \mathsf {NF}_{\mathbf {x}}(\varphi) \) for some stack-heap pair \( (s,h) \). Hence, there is some AMS \( \langle V,E,\rho ,\gamma \rangle = \mathcal {A}\in \mathsf {abst}_{\mathbf {x}}(\varphi) \) with \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \mathsf {AMS2SL}^{\lceil \varphi \rceil }(\mathcal {A}) \). By Lemma 5.2 (applied with \( m = \lceil \varphi \rceil \)), we have that \( \mathsf {ams}(s,h)=\langle V,E,\rho ,\gamma ^{\prime }\rangle \), with \( \gamma ^{\prime } = \gamma \) for \( \gamma \lt m \), and \( \gamma ^{\prime } \ge \gamma \) for \( \gamma = m \). Then, we can conclude that \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \) (using Theorem 3.19 for \( \gamma \lt m \) and Theorem 3.37 for \( \gamma = m \)).□

We now state that the normal form transformation is a closure operator:

Theorem 5.4 (Closure Operator).

We have \( \mathsf {NF}_{\mathbf {x}}(\mathsf {NF}_{\mathbf {x}}(\varphi)) = \mathsf {NF}_{\mathbf {x}}(\varphi) \) and \( \lceil \mathsf {NF}_{\mathbf {x}}(\varphi)\rceil \le \max \lbrace 2,\lceil \varphi \rceil \rbrace \).

Proof.

By Theorem 5.3, we have that \( \mathsf {NF}_{\mathbf {x}}(\varphi) \) and \( \varphi \) are equivalent. Hence, we have that their models abstract to the same AMSs, i.e., \( \begin{equation*} \alpha _{\mathbf {x}}(\mathsf {NF}_{\mathbf {x}}(\varphi)) = \alpha _{\mathbf {x}}(\varphi) \quad (*). \end{equation*} \)

We now analyze the chunk size of the formula \( \mathsf {AMS2SL}^{m}(\mathcal {A}) \) for some \( \langle V,E,\rho ,\gamma \rangle = \mathcal {A} \) with \( \gamma \le m \). We observe that \( \lceil \mathsf {aliasing}(\mathcal {A})\rceil = \lceil \mathsf {graph}(\mathcal {A})\rceil = 0 \), \( \lceil \mathsf {neg}(\mathcal {A})\rceil = 2 \) and \( \lceil \mathsf {garbage}(\mathcal {A})\rceil = \max \lbrace 2,\gamma +1\rbrace \), for \( \gamma \lt m \), and \( \lceil \mathsf {garbage}(\mathcal {A})\rceil = \max \lbrace 2,\gamma \rbrace \), for \( \gamma = m \).

From these observations and (*), we obtain \( \begin{equation*} \lceil \mathsf {NF}_{\mathbf {x}}(\varphi)\rceil \le \max \lbrace 2,\lceil \varphi \rceil \rbrace . \end{equation*} \)

We now observe that \( \begin{equation*} \alpha _{\mathbf {x}}(\mathsf {NF}_{\mathbf {x}}(\varphi)) \cap \mathbf {AMS}_{\lceil \mathsf {NF}_{\mathbf {x}}(\varphi)\rceil ,\mathbf {x}} = \alpha _{\mathbf {x}}(\varphi) \cap \mathbf {AMS}_{\lceil \varphi \rceil ,\mathbf {x}}, \end{equation*} \) for \( \lceil \varphi \rceil = 0 \) and \( \lceil \varphi \rceil \ge 2 \). Moreover, we have \( \begin{equation*} \alpha _{\mathbf {x}}(\mathsf {NF}_{\mathbf {x}}(\varphi)) \cap \mathbf {AMS}_{\lceil \mathsf {NF}_{\mathbf {x}}(\varphi)\rceil ,\mathbf {x}} = \alpha _{\mathbf {x}}(\varphi) \cap \mathbf {AMS}_{2,\mathbf {x}}, \end{equation*} \) for \( \lceil \varphi \rceil = 1 \). We thus get that \( \begin{equation*} \bigvee _{\mathcal {A}\in \mathsf {abst}_{\mathbf {x}}(\mathsf {NF}_{\mathbf {x}}(\varphi))} \mathsf {AMS2SL}^{\lceil \mathsf {NF}_{\mathbf {x}}(\varphi)\rceil }(\mathcal {A}) = \bigvee _{\mathcal {A}\in \mathsf {abst}_{\mathbf {x}}(\varphi)} \mathsf {AMS2SL}^{\lceil \varphi \rceil }(\mathcal {A}) \end{equation*} \) (up to logical equivalence), where the equation holds for \( \lceil \varphi \rceil = 1 \), because the case distinction for \( \gamma = m = 1 \) in \( \mathsf {AMS2SL}^{m}(\mathcal {A}) \) amounts to an explicit chunk bound lifting from 1 to 2.□

The abduction problem. We recall the following generalization of the entailment problem: The abduction problem asks to replace the question mark in the entailment \( \varphi * [?] \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\psi \) by a formula such that the entailment becomes true. This problem is central for obtaining a scalable program analyzer, as discussed in Calcagno et al. [2011].13 The abduction problem does not, in general, have a unique solution. Following Calcagno et al. [2011], we thus consider optimization versions of the abduction problem, looking for logically weakest and spatially minimal solutions:
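To illustrate (a small instance of our own, not taken from the article; \( x,y,z \) are program variables): for \( \varphi = x \mapsto y \) and \( \psi = x \mapsto y * y \mapsto z \), the formula \( y \mapsto z \) solves the abduction problem, and it entails the weakest solution \( \varphi {-\!\!\ast }\psi \) guaranteed by Lemma 5.6 below:

```latex
% Hypothetical abduction instance (our illustration):
%   \varphi = x \mapsto y, \qquad \psi = x \mapsto y \,*\, y \mapsto z
\begin{align*}
  \varphi * (y \mapsto z) &\models \psi
    && \text{so } \zeta_0 = y \mapsto z \text{ is a solution,}\\
  \zeta_0 &\models \varphi \mathbin{-\!\!*} \psi
    && \text{so } \zeta_0 \text{ entails the weakest solution of Lemma 5.6(1).}
\end{align*}
```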

Definition 5.5.

Let \( \varphi ,\psi \in \mathbf {SL} \) and let \( \mathbf {x} \subseteq \mathbf {Var} \) be a finite set of variables. A formula \( \zeta \) is a solution to the abduction problem \( \varphi * [?] \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\psi \) if \( \varphi * \zeta \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\psi \). A formula \( \zeta \) is the weakest solution if \( \zeta ^{\prime }\vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\zeta \) holds for all abduction solutions \( \zeta ^{\prime } \). An abduction solution \( \zeta \) is minimal if there is no abduction solution \( \zeta ^{\prime } \) with \( \zeta \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\zeta ^{\prime } * (\lnot \mathbf {emp}) \).

Lemma 5.6.

Let \( \varphi ,\psi \) be formulas and let \( \mathbf {x} \subseteq \mathbf {Var} \) be a finite set of variables. Then, (1) the weakest solution to the abduction problem \( \varphi * [?] \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\psi \) is given by the formula \( \varphi {-\!\!\ast }\psi \), and (2) the weakest minimal solution is given by the formula \( \varphi {-\!\!\ast }\psi \wedge \lnot ((\varphi {-\!\!\ast }\psi) * \lnot \mathbf {emp}) \).

Proof.

(1) follows directly from the definition of the abduction problem and the semantics of \( {-\!\!\ast } \).

For (2), we introduce the shorthand \( \zeta := \varphi {-\!\!\ast }\psi \wedge \lnot ((\varphi {-\!\!\ast }\psi) * \lnot \mathbf {emp}) \). We note that \( \zeta \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\varphi {-\!\!\ast }\psi \), and hence \( \zeta \) is a solution to the abduction problem by (1). Assume, for the sake of contradiction, that there is an abduction solution \( \zeta ^{\prime } \) with \( \zeta \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\zeta ^{\prime } * (\lnot \mathbf {emp}) \). By (1), we have \( \zeta ^{\prime } \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\varphi {-\!\!\ast }\psi \). Hence, \( \zeta \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}(\varphi {-\!\!\ast }\psi) * (\lnot \mathbf {emp}) \). However, this contradicts \( \zeta \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\lnot ((\varphi {-\!\!\ast }\psi) * \lnot \mathbf {emp}) \). Thus, \( \zeta \) is minimal. Now, consider another minimal solution \( \zeta ^{\prime } \) to the abduction problem. By (1), we have \( \zeta ^{\prime } \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\varphi {-\!\!\ast }\psi \). Because \( \zeta ^{\prime } \) is minimal, we have, as above, that \( \zeta ^{\prime } \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\lnot ((\varphi {-\!\!\ast }\psi) * \lnot \mathbf {emp}) \). Hence, \( \zeta ^{\prime } \vert\!\mathop{=}\limits^{ \mathrm {st}} {\mathbf {x}}\zeta \). Thus, \( \zeta \) is the weakest minimal solution to the abduction problem.□

We now explain how normal forms have applications to the abduction problem. According to Lemma 5.6, the best solutions to the abduction problem are given by the formulas \( \zeta := \varphi {-\!\!\ast }\psi \) and \( \zeta ^{\prime } := \varphi {-\!\!\ast }\psi \wedge \lnot ((\varphi {-\!\!\ast }\psi) * \lnot \mathbf {emp}) \). While it is an important result that the existence of these solutions is guaranteed, we a priori have no means to compute an explicit representation of these solutions or to analyze their structure further. However, the normal form operator allows us to obtain the explicit representations \( \mathsf {NF}_{\mathbf {x}}(\zeta) \) and \( \mathsf {NF}_{\mathbf {x}}(\zeta ^{\prime }) \). We believe that using these explicit representations in a program analyzer, or studying their properties, is an interesting topic for further research. Here, we establish one concrete result on solutions to the abduction problem based on normal forms:

We can compute the weakest, respectively the weakest minimal, solution to the abduction problem for the positive fragment: Given \( \zeta = \varphi {-\!\!\ast }\psi \), respectively \( \zeta = \varphi {-\!\!\ast }\psi \wedge \lnot ((\varphi {-\!\!\ast }\psi) * \lnot \mathbf {emp}) \), we consider the formula \( \bigvee _{\mathcal {A}\in \mathsf {abst}_{\mathbf {x}}(\zeta), \mathcal {A}\text{ is garbage-free}} \mathsf {AMS2SL}^{\lceil \zeta \rceil }(\mathcal {A}) \). Indeed, this formula is the weakest, respectively the weakest minimal, solution to the abduction problem from the positive fragment, provided we are willing to consider a slight extension of the positive fragment (observe that among the sub-formulas of \( \mathsf {aliasing}(\mathcal {A}) \) and \( \mathsf {graph}(\mathcal {A}) \), only the formula \( \mathtt {ls}_{\ge 2} \) is negative): One could either extend the positive fragment by allowing guarded negation14 or add a new spatial atom \( \mathtt {ls}_{\ge 2}(x,y) \) to SSL, with the semantics that \( \mathtt {ls}_{\ge 2}(x,y) \) holds in a model iff the model is a list segment of length at least 2 from \( x \) to \( y \). Sections 2 and 3 can be extended accordingly by this predicate; we can then simplify the formula \( \mathsf {graph}(\mathcal {A}) \) in \( \mathsf {AMS2SL}^{m}(\mathcal {A}) \) by directly translating edges \( E(\mathbf {v})=\langle \mathbf {v}^{\prime },\ge \!2\rangle \) to the atom \( \mathtt {ls}_{\ge 2}(\max (\mathbf {v}),\max (\mathbf {v}^{\prime })) \).
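For concreteness, one plausible guarded-negation encoding of this atom (our illustration, assuming the usual possibly-empty list-segment semantics of \( \mathtt {ls} \); not a definition from the article) characterizes list segments of length at least 2 as list segments that are neither empty nor a single pointer:

```latex
% Possible guarded-negation encoding (illustration only):
\mathtt{ls}_{\ge 2}(x,y) \;:=\;
  \mathtt{ls}(x,y) \,\wedge\, \lnot\,\mathbf{emp} \,\wedge\, \lnot\,(x \mapsto y)
```

Here both negations are guarded by the positive conjunct \( \mathtt{ls}(x,y) \).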

Skip 6CONCLUSION Section

6 CONCLUSION

We have shown that the satisfiability problem for “strong” separation logic with lists and trees is in the same complexity class as the satisfiability problem for standard “weak” separation logic without any data structures: \( {\rm PS}{\rm\small{PACE}} \)-complete. This is in stark contrast to the undecidability result for standard (weak) SL semantics, as shown by Demri et al. [2018].

We have demonstrated the potential of SSL for program verification: (1) We have provided symbolic execution rules that, in conjunction with our result on the decidability of entailment, can be used to fully automatically discharge verification conditions. (2) We have discussed how to compute explicit representations of optimal solutions to the abduction problem. This constitutes the first work that addresses the abduction problem for a separation logic closed under Boolean operators and the magic wand.

We consider the above results just the first steps in examining strong-separation logic, motivated by the desire to circumvent the undecidability result of Demri et al. [2018]. Future work includes the practical evaluation of our decision procedures, the extension of the symbolic execution calculus to a full Hoare logic, and the extension of the results of this article to richer separation logics, such as SL with nested data structures or SL with limited support for arithmetic reasoning.

Appendices

A PROOFS FOR SECTION 2 (STRONG- AND WEAK-SEPARATION LOGIC)

Proof.

(Lemma 2.4.) We prove the claim by induction on the structure of the formula \( \varphi \). Clearly, the claim holds for the base cases \( \mathbf {emp} \), \( x \mapsto y \), \( \mathtt {ls}(x,y) \), \( x = y \) and \( x\ne y \). Further, the claim immediately follows from the induction assumption for the cases \( \varphi _1 \wedge \varphi _2 \), \( \varphi _1 \vee \varphi _2 \) and \( \lnot \varphi \). It remains to consider the cases \( \varphi _1 * \varphi _2 \) and \( \varphi _1 {-\!\!\circledast }\varphi _2 \). Let \( (s,h) \) and \( (s^{\prime },h^{\prime }) \) be two stack-heap pairs with \( (s,h)\cong (s^{\prime },h^{\prime }) \).

We will show that \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 * \varphi _2 \) implies \( (s^{\prime },h^{\prime }) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 * \varphi _2 \); the other direction is completely symmetric. We assume that \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 * \varphi _2 \). Then, there are \( h_1,h_2 \) with \( h_1 \uplus ^{s}h_2 = h \) and \( (s,h_i) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _i \) for \( i=1,2 \). We consider the bijection \( \sigma \) that witnesses the isomorphism between \( (s,h) \) and \( (s^{\prime },h^{\prime }) \). Let \( h^{\prime }_1 \), respectively, \( h^{\prime }_2 \) be the sub-heap of \( h^{\prime } \) restricted to \( \sigma (\operatorname{dom}(h_1)) \), respectively, \( \sigma (\operatorname{dom}(h_2)) \). It is easy to verify that \( h^{\prime }_1 \uplus ^{s^{\prime }}h^{\prime }_2 = h^{\prime } \) and \( (s,h_i) \cong (s^{\prime },h^{\prime }_i) \) for \( i=1,2 \). Hence, we can apply the induction assumption and get that \( (s^{\prime },h^{\prime }_i) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _i \) for \( i=1,2 \). Because of \( h^{\prime }_1 \uplus ^{s^{\prime }}h^{\prime }_2 = h^{\prime } \), we get \( (s^{\prime },h^{\prime }) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 * \varphi _2 \).

We will show that \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 {-\!\!\circledast }\varphi _2 \) implies \( (s^{\prime },h^{\prime }) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 {-\!\!\circledast }\varphi _2 \); the other direction is completely symmetric. We assume that \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 {-\!\!\circledast }\varphi _2 \). Hence, there is a heap \( h_0 \) with \( (s,h_0) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \) and \( (s,h_0 \uplus ^{s}h) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). We note that in particular \( h_0 \uplus ^{s}h\ne \bot \). We consider the bijection \( \sigma \) that witnesses the isomorphism between \( (s,h) \) and \( (s^{\prime },h^{\prime }) \). Let \( L \subseteq \mathbf {Loc} \) be some subset of locations with \( L \cap (\mathsf {locs}(h^{\prime }) \cup \operatorname{img}(s^{\prime })) = \emptyset \) and \( |L| = |\mathsf {locs}(h_0) \setminus (\mathsf {locs}(h) \cup \operatorname{img}(s))| \). We can extend \( \sigma \) to some bijective function \( \sigma ^{\prime }: (\mathsf {locs}(h_0) \cup \mathsf {locs}(h) \cup \operatorname{img}(s)) \rightarrow (L \cup \mathsf {locs}(h^{\prime }) \cup \operatorname{img}(s^{\prime })) \). Then, \( \sigma ^{\prime } \) induces a heap \( h^{\prime }_0 \) such that \( (s,h_0) \cong (s^{\prime },h^{\prime }_0) \), \( h^{\prime }_0 \uplus ^{s^{\prime }}h^{\prime } \ne \bot \) and \( (s,h_0 \uplus ^{s}h) \cong (s^{\prime },h^{\prime }_0 \uplus ^{s^{\prime }}h^{\prime }) \). By induction assumption, we get that \( (s^{\prime },h^{\prime }_0) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \) and \( (s^{\prime },h^{\prime }_0 \uplus ^{s^{\prime }}h^{\prime }) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). Hence, \( (s^{\prime },h^{\prime }) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 {-\!\!\circledast }\varphi _2 \).□
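The proofs above repeatedly compose heaps via \( \uplus ^{s} \). As a quick operational reading, the following is a minimal Python sketch under our assumptions: stacks and heaps are finite dictionaries over integer locations, and \( h_1 \uplus ^{s}h_2 \) is defined iff the domains are disjoint and every location shared by the two heaps is the value of a stack variable, mirroring the definition from Section 2 (all names and the representation are our own):

```python
def locs(h):
    """All locations occurring in a heap: its domain plus all pointed-to locations."""
    return set(h) | set(h.values())

def strong_union(s, h1, h2):
    """Strong union h1 (+)^s h2: returns the merged heap if it is defined,
    i.e., the domains are disjoint and every location shared by h1 and h2
    is the value of some stack variable; otherwise returns None (bottom)."""
    if set(h1) & set(h2):
        return None  # overlapping domains: not even weakly disjoint
    if not (locs(h1) & locs(h2)) <= set(s.values()):
        return None  # shared non-stack location: strong union undefined
    return {**h1, **h2}

# Stack s with x -> 1, y -> 2; locations are plain integers.
s = {"x": 1, "y": 2}

# The heaps {1 -> 2} and {2 -> 3} share only location 2 = s(y): union defined.
print(strong_union(s, {1: 2}, {2: 3}))   # {1: 2, 2: 3}

# The heaps {1 -> 5} and {5 -> 2} share the non-stack location 5: undefined.
print(strong_union(s, {1: 5}, {5: 2}))   # None
```

The second example is exactly the restriction that distinguishes the strong semantics from the weak one, where any two heaps with disjoint domains compose.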

B PROOFS FOR SECTION 3 (DECIDING THE SSL SATISFIABILITY PROBLEM)

Proof.

(Theorem 3.19.) Let \( \mathcal {A}= \langle V,E,\rho ,\gamma \rangle \) be the AMS with \( \mathsf {ams}{(s,h_{1})} = \mathcal {A}= \mathsf {ams}{(s,h_{2})} \). We proceed by induction on the structure of \( \varphi \). We only prove that \( (s,h_{1}) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \) implies that \( (s,h_{2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \), as the other direction is completely analogous.

Assume that the claim holds for all subformulas of \( \varphi \) and assume that \( (s,h_{1}) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \).

  • Case \( \mathbf {emp} \), \( x = y \), \( x \ne y \), \( x \mapsto y \), \( \mathtt {ls}(x,y) \). Immediate consequence of Lemmas 3.20, 3.21, 3.22, and 3.24.

  • Case \( \varphi _1 * \varphi _2 \). By the semantics of \( * \), there exist \( h_{1,1},h_{1,2} \) with \( h_1 = h_{1,1}\uplus ^{s}h_{1,2} \), \( (s,h_{1,1}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \), and \( (s,h_{1,2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). Let \( \mathcal {A}_1 := \mathsf {ams}{(s,h_{1,1})} \) and \( \mathcal {A}_2 := \mathsf {ams}{(s,h_{1,2})} \). By Lemma 3.28, \( \mathsf {ams}{(s,h_{1})} = \mathcal {A}_1 \bullet \mathcal {A}_2 = \mathsf {ams}{(s,h_{2})} \). We can thus apply Lemma 3.29 to \( \mathsf {ams}{(s,h_{2})} \), \( \mathcal {A}_1 \), and \( \mathcal {A}_2 \) to obtain heaps \( h_{2,1},h_{2,2} \) with \( h_2=h_{2,1}\uplus ^{s}h_{2,2} \), \( \mathsf {ams}{(s,h_{2,1})}=\mathcal {A}_1 \) and \( \mathsf {ams}{(s,h_{2,2})}=\mathcal {A}_2 \). We can now apply the induction hypotheses for \( 1 \le i \le 2 \), \( \varphi _i \), \( h_{1,i} \) and \( h_{2,i} \), and obtain that \( (s,h_{2,i}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _i \). By the semantics of \( * \), we then have \( (s,h_{2})=(s,h_{2,1}\uplus ^{s}h_{2,2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 * \varphi _2 \).

  • Case \( \varphi _1 {-\!\!\circledast }\varphi _2 \). Since \( (s,h_{1}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1{-\!\!\circledast }\varphi _2 \), there exists a heap \( h_0 \) with \( (s,h_0) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \) and \( (s,h_1 \uplus ^{s}h_0) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). We can assume w.l.o.g. that \( h_2\uplus ^{s}h_0 \ne \bot \)—if this is not the case, simply replace \( h_0 \) with a heap \( h_0^{\prime } \) with \( (s,h_0)\cong (s,h_0^{\prime }) \), \( h_1\uplus ^{s}h_0^{\prime } \ne \bot \) and \( h_2\uplus ^{s}h_0^{\prime } \ne \bot \); then, \( (s,h_1 \uplus ^{s}h_0^{\prime }) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \) by Lemma 2.4. We have that \( \mathsf {ams}{(s,h_1 \uplus ^{s}h_0)} = \mathsf {ams}{(s,h_{1})} \bullet \mathsf {ams}{(s,h_0)} = \mathsf {ams}{(s,h_{2})} \bullet \mathsf {ams}{(s,h_0)} = \mathsf {ams}{(s,h_2 \uplus ^{s}h_0)} \) (by assumption and Lemma 3.28). It therefore follows from the induction hypothesis for \( \varphi _2 \), \( (s,h_1 \uplus ^{s}h_0) \), and \( (s,h_2 \uplus ^{s}h_0) \) that \( (s,h_2 \uplus ^{s}h_0){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). Thus, \( (s,h_{2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1{-\!\!\circledast }\varphi _2 \).

  • Case \( \varphi _1 \wedge \varphi _2 \), \( \varphi _1 \vee \varphi _2 \). By the semantics of \( \wedge \), respectively, \( \vee \), we have \( (s,h_{1}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \) and/or \( (s,h_{1}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). We apply the induction hypotheses for \( \varphi _1 \) and \( \varphi _2 \) to obtain \( (s,h_{2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \) and/or \( (s,h_{2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). By the semantics of \( \wedge \), respectively, \( \vee \), we then have \( (s,h_{2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1\wedge \varphi _2 \), respectively, \( (s,h_{2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1\vee \varphi _2 \).

  • Case \( \lnot \varphi _1 \). By the semantics of \( \lnot \), we have \( (s,h_{1})\not{\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \). By the induction hypothesis for \( \varphi _1 \), we then obtain \( (s,h_{2})\not{\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \). By the semantics of \( \lnot \), we have \( (s,h_{2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \lnot \varphi _1 \).

Proof.

(Lemma 3.34.) Let \( \mathcal {A}\in \alpha _{s}(\varphi _1 * \varphi _2) \). There then exists a heap \( h \) such that \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1*\varphi _2 \) and \( \mathsf {ams}(s,h)=\mathcal {A} \). By the semantics of \( * \), we can split \( h \) into \( h_1\uplus ^{s}h_2 \) with \( (s,h_{i}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _i \) (and thus \( \mathsf {ams}(s,h_{i}) \in \alpha _{s}(\varphi _i) \)). By Lemma 3.28, \( \mathcal {A}= \mathsf {ams}(s,h_{1})\bullet \mathsf {ams}(s,h_{2}) \) for \( h_1 \), \( h_2 \) as above. Consequently, \( \mathcal {A}\in \alpha _{s}(\varphi _1) \bullet \alpha _{s}(\varphi _2) \) by definition of \( \bullet \).

Conversely, let \( \mathcal {A}\in \alpha _{s}(\varphi _1) \bullet \alpha _{s}(\varphi _2) \). By definition of \( \bullet \), there then exist \( \mathcal {A}_i \in \alpha _{s}(\varphi _i) \) such that \( \mathcal {A}=\mathcal {A}_1\bullet \mathcal {A}_2 \). Let \( h_1,h_2 \) be witnesses of that, i.e., \( (s,h_{i}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _i \) with \( \mathsf {ams}(s,h_{i})=\mathcal {A}_i \). Assume w.l.o.g. that \( h_1\uplus ^{s}h_2\ne \bot \). (Otherwise, replace \( h_2 \) with an \( h_2^{\prime } \) such that \( (s,h_{2}) \cong (s,h_2^{\prime }) \) and \( h_1\uplus ^{s}h_2^{\prime }\ne \bot \); by Lemma 2.4, we then have \( (s,h_2^{\prime }){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \).) By the semantics of \( * \), \( (s,h_1\uplus ^{s}h_2){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1*\varphi _2 \). Therefore, \( \mathsf {ams}(s,h_1\uplus ^{s}h_2) \in \alpha _{s}(\varphi _1 * \varphi _2) \). By Lemma 3.28, \( \mathsf {ams}(s,h_1\uplus ^{s}h_2) = \mathcal {A} \). The claim follows.□

Proof.

(Lemma 3.35.) Let \( \mathcal {A}\in \alpha _{s}(\varphi _1 {-\!\!\circledast }\varphi _2) \). Then there exists a model \( (s,h) \) with \( \mathsf {ams}(s,h)=\mathcal {A} \) and \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1{-\!\!\circledast }\varphi _2 \). Consequently, there exists a heap \( h_1 \) such that \( h\uplus ^{s}h_1\ne \bot \), \( (s,h_{1}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \) and \( (s,h\uplus ^{s}h_1){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). By definition of \( \alpha _s \), we then have \( \mathsf {ams}(s,h_{1}) \in \alpha _{s}(\varphi _1) \) and \( \mathsf {ams}(s,h\uplus ^{s}h_1) \in \alpha _{s}(\varphi _2) \). By Lemma 3.28, \( \mathsf {ams}(s,h\uplus ^{s}h_1) = \mathsf {ams}(s,h)\bullet \mathsf {ams}(s,h_{1}) \). In other words, we have for \( \mathcal {A}= \mathsf {ams}(s,h) \) and \( \mathcal {A}_1 := \mathsf {ams}(s,h_{1}) \) that \( \mathcal {A}_1 \in \alpha _{s}(\varphi _1) \) and \( \mathcal {A}\bullet \mathcal {A}_1 \in \alpha _{s}(\varphi _2) \). By definition of \( {-\!\!\bullet } \), we hence have \( \mathcal {A}\in \alpha _{s}(\varphi _1) {-\!\!\bullet }\alpha _{s}(\varphi _2) \).

Conversely, let \( \mathcal {A}\in \alpha _{s}(\varphi _1) {-\!\!\bullet }\alpha _{s}(\varphi _2) \). Then there exists an \( \mathcal {A}_1 \in \alpha _{s}(\varphi _1) \) such that \( \mathcal {A}\bullet \mathcal {A}_1 \in \alpha _{s}(\varphi _2) \). Let \( h,h_1 \) be heaps with \( \mathsf {ams}(s,h)= \mathcal {A} \), \( \mathsf {ams}(s,h_{1})=\mathcal {A}_1 \) and \( (s,h_{1}) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \). Assume w.l.o.g. that \( h\uplus ^{s}h_1\ne \bot \). (Otherwise, replace \( h_1 \) with an \( h_1^{\prime } \) such that \( (s,h_{1}) \cong (s,h_1^{\prime }) \) and \( h\uplus ^{s}h_1^{\prime }\ne \bot \); by Lemma 2.4, we then have \( (s,h_1^{\prime }){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \).)

By Lemma 3.28, we then have \( \mathsf {ams}(s,h\uplus ^{s}h_1) = \mathcal {A}\bullet \mathcal {A}_1 \). By Corollary 3.30, this allows us to conclude that \( (s,h\uplus ^{s}h_1) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). Consequently, \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1{-\!\!\circledast }\varphi _2 \), implying \( \mathcal {A}\in \alpha _{s}(\varphi _1 {-\!\!\circledast }\varphi _2) \).□

Proof.

(Lemma 3.36.) We proceed by structural induction on \( \varphi \). Let \( (s,h) \) be a stack-heap pair with \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \).

  • Case \( \mathbf {emp} \), \( x=y \), \( x\ne y \), \( x \mapsto y \), \( \mathtt {ls}(x,y) \). By Lemmas 3.20, 3.21, 3.22, and 3.24, the AMSs of all models of \( \varphi \) have a garbage-chunk count of 0.

  • Case \( \varphi _1*\varphi _2 \). Let \( h_1,h_2 \) be such that \( h=h_1\uplus ^{s}h_2 \), \( (s,h_1){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \), and \( (s,h_2){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). Because of \( 0 = \lceil \varphi _1*\varphi _2\rceil = \lceil \varphi _1\rceil +\lceil \varphi _2\rceil \), we must have \( \lceil \varphi _1\rceil = \lceil \varphi _2\rceil = 0 \). Hence, the claim follows from the induction assumption.

  • Case \( \varphi _1{-\!\!\circledast }\varphi _2 \). Let \( h_0 \) be such that \( (s,h_0){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \) and \( (s,h\uplus ^{s}h_0) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). Because of \( 0 = \lceil \varphi _1{-\!\!\circledast }\varphi _2\rceil = \lceil \varphi _2\rceil \), it follows from the induction hypothesis that \( (s,h\uplus ^{s}h_0) \) does not contain negative chunks. This implies that \( (s,h) \) does not contain negative chunks.

  • Case \( \varphi _1\vee \varphi _2 \). We then have \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \) or \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). The claim then follows from the induction assumption, because we must have \( \lceil \varphi _1\rceil = \lceil \varphi _2\rceil = 0 \).

  • Case \( \varphi _1\wedge \varphi _2 \). We then have \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \) and \( (s,h){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). The claim then follows from the induction assumption, because we have \( \lceil \varphi _1\rceil = 0 \) or \( \lceil \varphi _2\rceil = 0 \).

  • Case \( \lnot \varphi _1 \). Because of the assumption \( \lceil \varphi \rceil = 0 \), this case is not possible.

Proof.

(Theorem 3.37.) We proceed by structural induction on \( \varphi \). We only prove that \( (s,h_{1}) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \) implies \( (s,h_{2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi \), as the proof of the other direction is very similar.

  • Case \( \mathbf {emp} \), \( x=y \), \( x\ne y \), \( x \mapsto y \), \( \mathtt {ls}(x,y) \). By Lemmas 3.20, 3.21, 3.22, and 3.24, the AMSs of all models of \( \varphi \) have a garbage-chunk count of 0. Thus, \( (s,h_{1})\vert\!\mathop{\not=}\limits^{ \mathrm {st}} \varphi \) and \( (s,h_{2})\vert\!\mathop{\not=}\limits^{ \mathrm {st}} \varphi \).

  • Case \( \varphi _1*\varphi _2 \). Assume \( (s,h_{1}) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1*\varphi _2 \). Let \( h_{1,1},h_{1,2} \) be such that \( h_1=h_{1,1}\uplus ^{s}h_{1,2} \), \( (s,h_{1,1}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \), and \( (s,h_{1,2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). Let \( \mathcal {A}_1= \langle V_1,E_1,\rho _1,m_1\rangle := \mathsf {ams}(s,h_{1,1}) \) and \( \mathcal {A}_2= \langle V_2,E_2,\rho _2,m_2 \rangle := \mathsf {ams}(s,h_{1,2}) \). Since \( k = \lceil \varphi _1\rceil +\lceil \varphi _2\rceil \), it follows that either \( m_1 \ge \lceil \varphi _1\rceil \) or \( m_2 \ge \lceil \varphi _2\rceil \) (or both). We can assume w.l.o.g. that \( m_1 \ge \lceil \varphi _1\rceil \). We set \( \mathcal {A}_1^{\prime } := \langle V_1,E_1,\rho _1,n - \min \lbrace \lceil \varphi _2\rceil ,m_2\rbrace \rangle \) and \( \mathcal {A}_2^{\prime } := \langle V_2,E_2,\rho _2, \min \lbrace \lceil \varphi _2\rceil ,m_2\rbrace \rangle \). Observe that \( \mathsf {ams}(s,h_{2}) = \mathcal {A}_1^{\prime } \bullet \mathcal {A}_2^{\prime } \). By Lemma 3.29, there thus exist heaps \( h_{2,1}, h_{2,2} \) such that \( h_2=h_{2,1}\uplus ^{s}h_{2,2} \), \( \mathsf {ams}(s,h_{2,1})=\mathcal {A}_1^{\prime } \) and \( \mathsf {ams}(s,h_{2,2})=\mathcal {A}_2^{\prime } \). As both \( m_1 \ge \lceil \varphi _1\rceil \) and \( n - \min \lbrace \lceil \varphi _2\rceil ,m_2\rbrace \ge k - \min \lbrace \lceil \varphi _2\rceil ,m_2\rbrace \ge \lceil \varphi _1\rceil \), we have by the induction hypothesis for \( \varphi _1 \) that \( (s,h_{2,1}) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \). Additionally, we have \( (s,h_{2,2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \) by Theorem 3.19 (for \( m_2 \lt \lceil \varphi _2\rceil \)) or by the induction hypothesis (for \( m_2 \ge \lceil \varphi _2\rceil \)). Consequently, \( (s,h_{2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1*\varphi _2 \).

  • Case \( \varphi _1{-\!\!\circledast }\varphi _2 \). Assume \( (s,h_{1}) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1{-\!\!\circledast }\varphi _2 \). Let \( h_0 \) be such that \( (s,h_0) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \) and \( (s,h_1\uplus ^{s}h_0) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). We can assume w.l.o.g. that \( h_2\uplus ^{s}h_0 \ne \bot \) (if this is not the case, simply replace \( h_0 \) with a heap \( h_0^{\prime } \) with \( (s,h_0)\cong (s,h_0^{\prime }) \), \( h_1\uplus ^{s}h_0^{\prime } \ne \bot \) and \( h_2\uplus ^{s}h_0^{\prime } \ne \bot \); then, \( (s,h_1 \uplus ^{s}h_0^{\prime }) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \) by Lemma 2.4). We set \( \mathcal {A}_2 = \mathsf {ams}(s,h_1\uplus ^{s}h_0) \) and \( \mathcal {A}_2^{\prime } = \mathsf {ams}(s,h_2\uplus ^{s}h_0) \). By Lemma 3.28, we have \( \mathsf {ams}(s,h_1\uplus ^{s}h_0) = \mathsf {ams}(s,h_1) \bullet \mathsf {ams}(s,h_0) \) and \( \mathsf {ams}(s,h_2\uplus ^{s}h_0) = \mathsf {ams}(s,h_2) \bullet \mathsf {ams}(s,h_0) \). Hence, \( \mathcal {A}_2 = \langle V_2,E_2,\rho _2,m^{\prime } \rangle \) and \( \mathcal {A}_2^{\prime } = \langle V_2,E_2,\rho _2,n^{\prime } \rangle \) for some \( V_2,E_2,\rho _2 \) and \( m^{\prime },n^{\prime } \ge k = \lceil \varphi _1{-\!\!\circledast }\varphi _2\rceil = \lceil \varphi _2\rceil \). It thus follows from the induction hypothesis for \( \varphi _2 \) that \( (s,h_2\uplus ^{s}h_0) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). Thus, \( (s,h_{2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1{-\!\!\circledast }\varphi _2 \).

  • Case \( \varphi _1\wedge \varphi _2 \), \( \varphi _1\vee \varphi _2 \). We then have \( (s,h_{1}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \) and/or \( (s,h_{1}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). By definition of \( \lceil \varphi _1\wedge \varphi _2\rceil \), respectively, \( \lceil \varphi _1\vee \varphi _2\rceil \), it follows that \( n,m \ge \max (\lceil \varphi _1\rceil ,\lceil \varphi _2\rceil) \ge \lceil \varphi _i\rceil \). We therefore conclude from the induction hypothesis that \( (s,h_{2}) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \) and/or \( (s,h_{2}) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _2 \). Thus, \( (s,h_{2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \wedge \varphi _2 \), respectively, \( (s,h_{2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \varphi _1 \vee \varphi _2 \).

  • Case \( \lnot \varphi _1 \). Assume \( (s,h_{1}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \lnot \varphi _1 \). Consequently, \( (s,h_{1})\vert\!\mathop{\not=}\limits^{ \mathrm {st}} \varphi _1 \). Since \( m,n \ge \lceil \lnot \varphi _1\rceil =\lceil \varphi _1\rceil \), it follows by induction that \( (s,h_{2})\vert\!\mathop{\not=}\limits^{ \mathrm {st}} \varphi _1 \). Then, \( (s,h_{2}){\vert\!\mathop{=}\limits^{ \mathrm {st}}} \lnot \varphi _1 \).

Proof.

(Theorem 3.40.) We proceed by induction on the structure of \( \varphi \):

  • Case \( \mathbf {emp} \), \( x=y \), \( x\ne y \), \( x \mapsto y \), \( \mathtt {ls}(x,y) \). The claim follows from Lemmas 3.20, 3.21, 3.22, and 3.24 together with the observation that all models of \( \varphi \) have a garbage-chunk count of 0.

  • Case \( \varphi _1*\varphi _2 \). By the induction hypotheses, we have for \( 1 \le i \le 2 \) that \( \mathsf {abst}_{s}(\varphi _i)=\alpha _{s}(\varphi _i)\cap \mathbf {AMS}_{\lceil \varphi _i\rceil ,s} \). Let \( \mathbf {A}_i :=\mathsf {lift}_{\lceil \varphi _i\rceil \nearrow \lceil \varphi _1 * \varphi _2\rceil }(\mathsf {abst}_{s}(\varphi _i)) \). By Theorem 3.37, it follows that \( \mathbf {A}_i=\alpha _{s}(\varphi _i)\cap \mathbf {AMS}_{\lceil \varphi _1* \varphi _2\rceil ,s} \). By Lemma 3.34, it thus follows that \( \mathbf {A}_1 \bullet \mathbf {A}_2 \) contains all AMS in \( \alpha _{s}(\varphi _1 * \varphi _2) \) that can be obtained by composing AMS with a garbage-chunk count of at most \( \lceil \varphi _1 * \varphi _2\rceil \). Thus, in particular, (1) \( \mathbf {A}_1 \bullet \mathbf {A}_2 \subseteq \alpha _{s}(\varphi _1 * \varphi _2) \) and (2) \( \mathbf {A}_1 \bullet \mathbf {A}_2 \supseteq \alpha _{s}(\varphi _1 * \varphi _2) \cap \mathbf {AMS}_{\lceil \varphi _1 * \varphi _2\rceil ,s} \). The claim follows.

  • Case \( \varphi _1 {-\!\!\circledast }\varphi _2 \). By the induction hypotheses, we have for \( 1 \le i \le 2 \) that \( \mathsf {abst}_{s}(\varphi _i)=\alpha _{s}(\varphi _i)\cap \mathbf {AMS}_{\lceil \varphi _i\rceil ,s} \). Let \( \mathbf {A}_2 :=\mathsf {lift}_{\lceil \varphi _2\rceil \nearrow \lceil \varphi _1\rceil + \lceil \varphi _2\rceil }(\mathsf {abst}_{s}(\varphi _2)) \). By Theorem 3.37, it follows that \( \mathbf {A}_2=\alpha _{s}(\varphi _2)\cap \mathbf {AMS}_{\lceil \varphi _1\rceil +\lceil \varphi _2\rceil ,s} \). Thus, in particular, \( \mathbf {A}_2 \) contains every AMS in \( \alpha _{s}(\varphi _2) \) that can be obtained by composing an AMS in \( \mathbf {AMS}_{\lceil \varphi _1{-\!\!\circledast }\varphi _2\rceil ,s}=\mathbf {AMS}_{\lceil \varphi _2\rceil ,s} \) with an AMS from \( \alpha _{s}(\varphi _1)\cap \mathbf {AMS}_{\lceil \varphi _1\rceil ,s} \). With Lemma 3.35, we then get that \( (\mathsf {abst}_{s}(\varphi _1) {-\!\!\bullet }\mathbf {A}_2) \cap \mathbf {AMS}_{\lceil \varphi _1{-\!\!\circledast }\varphi _2\rceil ,s} \) is precisely the set \( \alpha _{s}(\varphi _1 {-\!\!\circledast }\varphi _2) \cap \mathbf {AMS}_{\lceil \varphi _1{-\!\!\circledast }\varphi _2\rceil ,s} \).

  • Case \( \varphi _1 \wedge \varphi _2 \). By the induction hypotheses, we have for \( 1 \le i \le 2 \) that \( \mathsf {abst}_{s}(\varphi _i)=\alpha _{s}(\varphi _i)\cap \mathbf {AMS}_{\lceil \varphi _i\rceil ,s} \). If \( \lceil \varphi _1\rceil = 0 \) or \( \lceil \varphi _2\rceil = 0 \), then all models of \( \varphi _1 \wedge \varphi _2 \) have a garbage-chunk count of 0 (by Lemma 3.36); hence, \( \mathsf {abst}_{s}(\varphi _1 \wedge \varphi _2) = \mathsf {abst}_{s}(\varphi _1) \cap \mathsf {abst}_{s}(\varphi _2) \). Otherwise, for \( 1 \le i \le 2 \), let \( \mathbf {A}_i :=\mathsf {lift}_{\lceil \varphi _i\rceil \nearrow \lceil \varphi \rceil }(\mathsf {abst}_{s}(\varphi _i)) \). By Theorem 3.37, we have \( \mathbf {A}_i=\alpha _{s}(\varphi _i)\cap \mathbf {AMS}_{\lceil \varphi \rceil ,s} \). The claim thus follows from Lemma 3.31.

  • Case \( \varphi _1 \vee \varphi _2 \). By the induction hypotheses, we have for \( 1 \le i \le 2 \) that \( \mathsf {abst}_{s}(\varphi _i)=\alpha _{s}(\varphi _i)\cap \mathbf {AMS}_{\lceil \varphi _i\rceil ,s} \). For \( 1 \le i \le 2 \), let \( \mathbf {A}_i :=\mathsf {lift}_{\lceil \varphi _i\rceil \nearrow \lceil \varphi \rceil }(\mathsf {abst}_{s}(\varphi _i)) \). By Theorem 3.37, we have \( \mathbf {A}_i=\alpha _{s}(\varphi _i)\cap \mathbf {AMS}_{\lceil \varphi \rceil ,s} \). The claim thus follows from Lemma 3.32.

  • Case \( \lnot \varphi _1 \). By the induction hypothesis, we have that \( \mathsf {abst}_{s}(\varphi _1)=\alpha _{s}(\varphi _1)\cap \mathbf {AMS}_{\lceil \varphi _1\rceil ,s} \). We proceed by a case distinction. First, assume \( \lceil \varphi _1\rceil = \lceil \lnot \varphi _1\rceil \). From Lemma 3.33, it follows that \( \alpha _{s}(\lnot \varphi _1) \cap \mathbf {AMS}_{\lceil \lnot \varphi _1\rceil ,s} = \mathbf {AMS}_{\lceil \lnot \varphi _1\rceil ,s} \setminus \alpha _{s}(\varphi _1) = \mathbf {AMS}_{\lceil \lnot \varphi _1\rceil ,s} \setminus (\alpha _{s}(\varphi _1)\cap \mathbf {AMS}_{\lceil \varphi _1\rceil ,s}) \). Now assume \( \lceil \varphi _1\rceil = 0 \). By Lemma 3.36, all models of \( \varphi _1 \) have a garbage-chunk count of 0. Hence, \( \mathsf {abst}_{s}(\lnot \varphi _1)= \alpha _{s}(\lnot \varphi _1) \cap \mathbf {AMS}_{\lceil \lnot \varphi _1\rceil ,s} = \mathbf {AMS}_{\lceil \lnot \varphi _1\rceil ,s} \setminus (\alpha _{s}(\varphi _1)\cap \mathbf {AMS}_{\lceil \varphi _1\rceil ,s}) \). In both cases, the claim follows.
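The case analysis above amounts to a structural recursion over the formula. The following toy Python model is only a hypothetical sketch of that recursion (it is not the article's algorithm): abstract memory states are reduced to nothing but their garbage-chunk count, so \( \mathbf {AMS}_n \) becomes \( \lbrace 0,\ldots ,n\rbrace \) and composition simply adds counts. The shape nevertheless mirrors the proof: composition for \( * \), its adjoint for \( {-\!\!\circledast } \), and set operations for \( \wedge ,\vee ,\lnot \), each intersected with the bounded universe.

```python
# Toy sketch of the recursive abstraction abst_s (hypothetical simplification:
# an AMS is represented only by its garbage-chunk count, an integer).

def universe(n):
    """AMS_n in the toy model: all counts up to the chunk bound n."""
    return set(range(n + 1))

def abst(phi, n):
    """Compute abst_s(phi) intersected with the bounded universe AMS_n."""
    op = phi[0]
    if op == 'atom':         # base case: atoms carry their own abstraction
        return phi[1] & universe(n)
    if op == 'star':         # case phi1 * phi2: compose sub-abstractions
        return {a + b
                for a in abst(phi[1], n) for b in abst(phi[2], n)
                if a + b <= n}
    if op == 'septraction':  # case phi1 -(*) phi2: the adjoint of composition
        return {b - a
                for a in abst(phi[1], n) for b in abst(phi[2], n)
                if 0 <= b - a <= n}
    if op == 'and':          # case phi1 /\ phi2: intersection
        return abst(phi[1], n) & abst(phi[2], n)
    if op == 'or':           # case phi1 \/ phi2: union
        return abst(phi[1], n) | abst(phi[2], n)
    if op == 'not':          # case ~phi1: complement w.r.t. the bounded universe
        return universe(n) - abst(phi[1], n)
    raise ValueError(f"unknown connective {op!r}")

emp = ('atom', {0})  # satisfied only by the empty heap: count 0
pto = ('atom', {1})  # a single points-to chunk: count 1
```

For instance, `abst(('star', pto, pto), 2)` yields `{2}`, and `abst(('not', emp), 2)` yields `{1, 2}`: exactly the pattern "compose, then intersect with \( \mathbf {AMS}_{\lceil \varphi \rceil ,s} \)" used in each case of the proof. The real AMS domain and the operators \( \bullet \) and \( {-\!\!\bullet } \) of Section 3 are, of course, much richer than these integer counts.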

C PROOFS FOR SECTION 4 (PROGRAM VERIFICATION WITH STRONG-SEPARATION LOGIC)

Proof.

(Lemma 4.6.) We prove the claim by induction on the number of applications of the frame rule and the materialization rule. We consider a sequence of program statements \( \mathbf {c}=c_1\cdots c_n \) for which the triple \( \left\lbrace P\right\rbrace \mathbf {c} \left\lbrace Q\right\rbrace \) was derived by symbolic execution, introducing some fresh variables \( V \). We then assume that the claim holds for \( \left\lbrace P\right\rbrace \mathbf {c} \left\lbrace Q\right\rbrace \) (for the base case, we allow \( \mathbf {c} \) to be the empty sequence, i.e., \( \mathbf {c}= \epsilon \), and consider \( \left\lbrace P\right\rbrace \epsilon \left\lbrace P\right\rbrace \)) and prove the claim for one more application of the frame rule or the materialization rule.

We first consider an application of the materialization rule. Then, we have \( Q {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \lnot ((x \mapsto \mathsf {nil}) {-\!\!\circledast }\mathsf {t}) \) (*), and we infer the triple \( \left\lbrace P\right\rbrace \mathbf {c} \left\lbrace x \mapsto z * ((x \mapsto z) {-\!\!\circledast }Q)\right\rbrace \), where \( z \) is some fresh variable. We now consider stack-heap pairs \( (s,h)\xrightarrow {\mathbf {c}} (s^{\prime },h^{\prime }) \) with \( (s,h) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} P \). Because the claim holds for \( \left\lbrace P\right\rbrace \mathbf {c} \left\lbrace Q\right\rbrace \) and \( V \), there is some stack \( s^{\prime \prime } \) with \( s^{\prime } \subseteq s^{\prime \prime } \), \( V \subseteq \operatorname{dom}(s^{\prime \prime }) \), and \( (s^{\prime \prime },h^{\prime }) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} Q \). Because of (*), we have \( (s^{\prime \prime },h^{\prime }) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} \lnot ((x \mapsto \mathsf {nil}) {-\!\!\circledast }\mathsf {t}) \). Hence, there is some \( \ell \in \mathbf {Loc} \) such that \( h^{\prime }(s^{\prime \prime }(x)) = \ell \). We now consider the stack \( s^{\prime \prime \prime } = s^{\prime \prime }[z/\ell ] \). Note that \( s^{\prime } \subseteq s^{\prime \prime \prime } \). Because \( Q \) is robust by the induction assumption, we then have that \( (s^{\prime \prime \prime },h^{\prime }) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} x \mapsto z * ((x \mapsto z) {-\!\!\circledast }Q) \). Moreover, we get from Lemma 4.3 that \( x \mapsto z * ((x \mapsto z) {-\!\!\circledast }Q) \) is robust. Thus, the claim is satisfied for the set of variables \( V^{\prime } = V \cup \lbrace z\rbrace \).

We now consider an application of the frame rule, i.e., we need to prove the claim for the sequence \( \mathbf {c}^{\prime }=\mathbf {c}c \), which extends \( \mathbf {c} \) by some statement \( c \). Let \( \left\lbrace P_c\right\rbrace c \left\lbrace Q_c\right\rbrace \) be the triple from the local proof rule for \( c \). Because the frame rule is applied, we have by assumption that there is some robust SL formula \( A \) with \( Q = A * P_c \). From the application of the frame rule, we then obtain \( \left\lbrace P\right\rbrace \mathbf {c}^{\prime } \left\lbrace A[\mathbf {x}^{\prime }/\mathbf {x}] * Q_c\right\rbrace \), where \( \mathbf {x} = \mathsf {modifiedVars}(c) \) and \( \mathbf {x}^{\prime } \) is fresh. We now consider stack-heap pairs \( (s,h)\xrightarrow {\mathbf {c}^{\prime }} (s^{\prime },h^{\prime }) \) with \( (s,h) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} P \). Then, there is some stack-heap pair \( (s^{\prime \prime },h^{\prime \prime }) \) with \( (s,h)\xrightarrow {\mathbf {c}} (s^{\prime \prime },h^{\prime \prime }) \) and \( (s^{\prime \prime },h^{\prime \prime })\xrightarrow {c} (s^{\prime },h^{\prime }) \). Because the claim holds for \( \left\lbrace P\right\rbrace \mathbf {c} \left\lbrace Q\right\rbrace \) and \( V \), there is some stack \( t \) with \( s^{\prime \prime } \subseteq t \), \( V \subseteq \operatorname{dom}(t) \), and \( (t,h^{\prime \prime }) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} Q \). Because of \( Q = A * P_c \), we have that there are some heaps \( h_1,h_2 \) with \( h_1 \uplus ^{t} h_2 = h^{\prime \prime } \) such that \( (t,h_1) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} P_c \) and \( (t,h_2) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} A \).
Because of \( s^{\prime \prime } \subseteq t \) and \( (s^{\prime \prime },h^{\prime \prime })\xrightarrow {c} (s^{\prime },h^{\prime }) \), we get that \( (t,h^{\prime \prime })\xrightarrow {c} (t^{\prime },h^{\prime }) \) for some \( s^{\prime } \subseteq t^{\prime } \). Because \( \left\lbrace P_c\right\rbrace c \left\lbrace Q_c\right\rbrace \) is a local contract by Lemma 4.5, we get that there is a heap \( h_1^{\prime } \) with \( h_1^{\prime } \uplus ^{t^{\prime }} h_2 = h^{\prime } \) and \( (t^{\prime },h_1^{\prime }) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} Q_c \). We now consider the stack \( t^{\prime \prime } \) defined by \( t^{\prime \prime }(x) = t^{\prime }(x) \) for all \( x \in \operatorname{dom}(t^{\prime }) \) and \( t^{\prime \prime }(x^{\prime }) = t(x) \) for all \( x \in \mathsf {modifiedVars}(c) \), where \( x^{\prime } \in \mathbf {x}^{\prime } \) is the fresh copy created for \( x \). Note that \( t^{\prime } \subseteq t^{\prime \prime } \). We recall that \( A \) is robust by assumption. Hence, \( (t^{\prime \prime },h_2) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} A[\mathbf {x}^{\prime }/\mathbf {x}] \). Moreover, \( Q_c \) is robust by Lemma 4.2. Hence, \( (t^{\prime \prime },h_1^{\prime }) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} Q_c \). Thus, \( (t^{\prime \prime },h^{\prime }) {\vert\!\mathop{=}\limits^{ \mathrm {st}}} A[\mathbf {x}^{\prime }/\mathbf {x}]*Q_c \). Moreover, we get from Lemma 4.3 that \( A[\mathbf {x}^{\prime }/\mathbf {x}]*Q_c \) is robust. Hence, the claim is satisfied for the set of variables \( V^{\prime } = V \cup \mathbf {x}^{\prime } \).□
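The frame-rule step above can be made concrete with a minimal, hypothetical sketch (heaps as Python dicts mapping locations to values; `split`, `run_framed`, and `write_cell` are illustrative names, not the article's formalism): the statement acts only on the sub-heap \( h_1 \) covered by its local contract, while the frame \( h_2 \) is carried over unchanged into the post-state.

```python
# Hypothetical miniature of the frame-rule step: execute a statement on the
# footprint sub-heap h1 only, then recombine with the untouched frame h2.

def split(heap, footprint):
    """Split a heap (dict: location -> value) into footprint h1 and frame h2."""
    h1 = {loc: v for loc, v in heap.items() if loc in footprint}
    h2 = {loc: v for loc, v in heap.items() if loc not in footprint}
    return h1, h2

def run_framed(stmt, heap, footprint):
    """Local execution of stmt on h1; the frame h2 is preserved (h' = h1' ⊎ h2)."""
    h1, h2 = split(heap, footprint)
    h1_new = stmt(h1)                        # {P_c} c {Q_c} acts on h1 only
    assert not (h1_new.keys() & h2.keys())   # the disjoint union must be defined
    return {**h1_new, **h2}                  # recombine: h1' ⊎ h2

# Example statement: the assignment [1] := 5, whose footprint is location 1.
write_cell = lambda h: {**h, 1: 5}
```

Running `run_framed(write_cell, {1: 0, 2: 7}, {1})` returns `{1: 5, 2: 7}`: the frame cell at location 2 survives untouched, which is exactly the role of \( h_1^{\prime } \uplus ^{t^{\prime }} h_2 = h^{\prime } \) in the proof.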

Footnotes

  1. Assertions in intuitionistic separation logic satisfy the following monotonicity property: an assertion that is true for some portion of the heap remains true for any extension of the heap [Reynolds 2002]. The classical version of separation logic does not impose this monotonicity property and can therefore be used to reason about explicit storage deallocation.

  2. For comparison, we also quote from the extended version [Krishna et al. 2019] why the separating implication was omitted from the logic: “Most presentations of SL also include the separating implication connective \( {-\!\!\ast } \). However, logics including \( {-\!\!\ast } \) are harder to automate and usually undecidable. By omitting \( {-\!\!\ast } \), we emphasize that we do not require it to perform flow-based reasoning.” (We recall that in this article, we establish decidability results for a separation logic that includes \( {-\!\!\ast } \).)

  3. Note that \( x \ne y \) is not equivalent to \( \lnot (x = y) \) in our separation logic, as we require the heap to be empty for all models of \( x \ne y \).

  4. As \( {-\!\!\ast } \) can be defined via \( {-\!\!\circledast } \) and \( \lnot \), and vice versa, the expressivity of our logic does not depend on which operator we choose. We have chosen \( {-\!\!\circledast } \) because we can include this operator in the positive fragment considered later on.

  5. Usually, \( x=y \) is defined to hold for all heaps, not just the empty heap, when \( x \) and \( y \) are interpreted by the same location; however, this choice does not change the expressivity of the logic: the formula \( (x=y) * \mathsf {t} \) expresses the standard semantics. Our choice is needed for the results on the positive fragment considered in Section 2.3.

  6. Strictly speaking, this only holds for the symbolic-heap fragment of the separation logic studied in this article, i.e., for symbolic heaps composed of points-to predicates, list predicates, and tree predicates (see Section 3.8). We consider the logic in Iosif et al. [2013], which proposes symbolic heaps of bounded treewidth, as an interesting direction for future work.

  7. In Farka et al. [2021], decomposability is termed invertibility.

  8. It is an interesting question for future work to relate the chunks considered in this article to the atomic building blocks used in SL symbolic execution engines. Likewise, it would be interesting to build a symbolic execution engine based on the chunks, respectively on the AMS abstraction, proposed in this article.

  9. The edges of an AMS represent either a single pointer (case “\( =\!1 \)”) or a list segment of length at least two (case “\( \ge \!2 \)”).

  10. \( m \) is a special program variable introduced for modelling \( \mathsf {malloc} \).

  11. Assume that \( Q \) is the robust formula currently derived by the symbolic execution, that \( c \) is the next program statement, that \( \left\lbrace P_c\right\rbrace c \left\lbrace Q_c\right\rbrace \) is the triple from the local proof rule of program statement \( c \), and that \( Q \) is of shape \( Q = A * P_c \). If \( A \) is not robust (note that this is only possible if \( P_c = x \mapsto z \) for some variables \( x \) and \( z \)), then one can first apply the materialization rule to derive the formula \( Q^{\prime } = x \mapsto z * ((x \mapsto z) {-\!\!\circledast }Q) \). Then, \( A^{\prime } = (x \mapsto z) {-\!\!\circledast }Q \) is robust by Lemma 4.3.

  12. For the case \( \gamma = m = 1 \), the formula \( \mathsf {garbage}(\mathcal {A}) \) contains a seemingly superfluous case distinction between exactly one negative chunk and at least two negative chunks; this case distinction is a technicality needed to cover a special case in Theorem 5.4.

  13. While the program analyzer proposed in Calcagno et al. [2011] employs bi-abductive reasoning, the suggested bi-abductive procedure in fact proceeds in two separate abduction and frame-inference steps, where the main technical challenge is the abduction step, as frame inference can be incorporated into entailment checking. We believe that the situation for SSL is similar, i.e., solving abduction is the key to implementing a bi-abductive prover for SSL; hence our focus on the abduction problem.

  14. One can add guarded negation to our separation logic by extending the grammar of Figure 2 with \( \varphi ::= \cdots \mid \varphi \wedge \lnot \varphi \). All results about the positive fragment continue to hold when guarded negation is added to the positive fragment.
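The interdefinability mentioned in footnote 4 can be spelled out explicitly. The following equivalences are the standard duality between septraction and the magic wand via classical negation (stated here for reference; they are not quoted from the article's definitions):

```latex
% Septraction and the magic wand are interdefinable via classical negation:
\varphi \mathbin{{-\!\!\circledast}} \psi \;\equiv\; \lnot\,(\varphi \mathbin{{-\!\!\ast}} \lnot\psi)
\qquad\text{and}\qquad
\varphi \mathbin{{-\!\!\ast}} \psi \;\equiv\; \lnot\,(\varphi \mathbin{{-\!\!\circledast}} \lnot\psi)
```

Intuitively, \( \varphi {-\!\!\circledast }\psi \) asserts that *some* disjoint \( \varphi \)-heap can be added to reach a \( \psi \)-heap, while \( \varphi {-\!\!\ast }\psi \) asserts that *every* such addition yields a \( \psi \)-heap.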

REFERENCES

  1. Timos Antonopoulos, Nikos Gorogiannis, Christoph Haase, Max I. Kanovich, and Joël Ouaknine. 2014. Foundations for decision problems in separation logic with general inductive predicates. In Proceedings of the 17th International Conference on Foundations of Software Science and Computation Structures (FOSSACS'14), Part of the European Joint Conferences on Theory and Practice of Software (ETAPS'14). Springer, Berlin, 411–425.
  2. Andrew W. Appel. 2014. Program Logics—For Certified Compilers. Cambridge University Press.
  3. Josh Berdine, Cristiano Calcagno, and Peter W. O'Hearn. 2004. A decidable fragment of separation logic. In Proceedings of the 24th International Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS'04). Springer, 97–109.
  4. Josh Berdine, Cristiano Calcagno, and Peter W. O'Hearn. 2005. Symbolic execution with separation logic. In Proceedings of the 3rd Asian Symposium on Programming Languages and Systems (APLAS'05). 52–68.
  5. Josh Berdine, Byron Cook, and Samin Ishtiaq. 2011. SLAyer: Memory safety for systems-level code. In Proceedings of the 23rd International Conference on Computer Aided Verification (CAV'11). 178–183.
  6. Stefan Blom and Marieke Huisman. 2015. Witnessing the elimination of magic wands. Int. J. Softw. Tools Technol. Transfer 17, 6 (2015), 757–781.
  7. Richard Bornat, Cristiano Calcagno, Peter W. O'Hearn, and Matthew J. Parkinson. 2005. Permission accounting in separation logic. In Proceedings of the 32nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'05). 259–270.
  8. Rémi Brochenin, Stéphane Demri, and Etienne Lozes. 2012. On the almighty wand. Info. Comput. 211 (2012), 106–137.
  9. Cristiano Calcagno, Dino Distefano, Jeremy Dubreil, Dominik Gabi, Pieter Hooimeijer, Martino Luca, Peter O'Hearn, Irene Papakonstantinou, Jim Purbrick, and Dulma Rodriguez. 2015. Moving fast with software verification. In Proceedings of the 7th International Symposium on NASA Formal Methods (NFM'15), Klaus Havelund, Gerard Holzmann, and Rajeev Joshi (Eds.). Springer International Publishing, Cham, 3–11.
  10. Cristiano Calcagno, Dino Distefano, Peter W. O'Hearn, and Hongseok Yang. 2011. Compositional shape analysis by means of bi-abduction. J. ACM 58, 6 (Dec. 2011), 26:1–26:66.
  11. C. Calcagno, P. W. O'Hearn, and Hongseok Yang. 2007. Local action and abstract separation logic. In Proceedings of the 22nd Annual IEEE Symposium on Logic in Computer Science (LICS'07). 366–378.
  12. Cristiano Calcagno, Hongseok Yang, and Peter W. O'Hearn. 2001. Computability and complexity results for a spatial assertion language for data structures. In Proceedings of the Foundations of Software Technology and Theoretical Computer Science (FST TCS'01), Ramesh Hariharan, V. Vinay, and Madhavan Mukund (Eds.). Lecture Notes in Computer Science, Vol. 2245. Springer, Berlin, 108–119.
  13. Byron Cook, Christoph Haase, Joël Ouaknine, Matthew J. Parkinson, and James Worrell. 2011. Tractable reasoning in a fragment of separation logic. In Proceedings of the 22nd International Conference on Concurrency Theory (CONCUR'11). 235–249.
  14. Pedro da Rocha Pinto, Thomas Dinsdale-Young, and Philippa Gardner. 2014. TaDA: A logic for time and data abstraction. In Proceedings of the 28th European Conference on Object-Oriented Programming (ECOOP'14) (Lecture Notes in Computer Science, Vol. 8586), Richard E. Jones (Ed.). Springer, 207–231.
  15. Stéphane Demri and Morgan Deters. 2014. Expressive completeness of separation logic with two variables and no separating conjunction. In Proceedings of the Joint Meeting of the 23rd EACSL Annual Conference on Computer Science Logic (CSL'14) and the 29th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS'14) (CSL-LICS'14). ACM, New York, NY, 37:1–37:10.
  16. Stéphane Demri, Didier Galmiche, Dominique Larchey-Wendling, and Daniel Méry. 2014. Separation logic with one quantified variable. In Computer Science—Theory and Applications, Edward A. Hirsch, Sergei O. Kuznetsov, Jean-Éric Pin, and Nikolay K. Vereshchagin (Eds.). Lecture Notes in Computer Science, Vol. 8476. Springer International Publishing, 125–138.
  17. Stéphane Demri, Étienne Lozes, and Alessio Mansutti. 2018. The effects of adding reachability predicates in propositional separation logic. In Foundations of Software Science and Computation Structures, Christel Baier and Ugo Dal Lago (Eds.). Springer International Publishing, Cham, 476–493.
  18. Kamil Dudka, Petr Peringer, and Tomás Vojnar. 2011. Predator: A practical tool for checking manipulation of dynamic data structures using separation logic. In Proceedings of the 23rd International Conference on Computer Aided Verification (CAV'11). 372–378.
  19. Mnacho Echenim, Radu Iosif, and Nicolas Peltier. 2019. The Bernays-Schönfinkel-Ramsey class of separation logic on arbitrary domains. In Foundations of Software Science and Computation Structures, Mikołaj Bojańczyk and Alex Simpson (Eds.). Springer International Publishing, Cham, 242–259.
  20. Frantisek Farka, Aleksandar Nanevski, Anindya Banerjee, Germán Andrés Delbianco, and Ignacio Fábregas. 2021. On algebraic abstractions for concurrent separation logics. Proc. ACM Program. Lang. 5 (2021), 1–32.
  21. Nikos Gorogiannis, Max Kanovich, and Peter W. O'Hearn. 2011. The complexity of abduction for separated heap abstractions. In Proceedings of the 18th International Symposium on Static Analysis (SAS'11), Eran Yahav (Ed.). Springer, Berlin, 25–42.
  22. Xincai Gu, Taolue Chen, and Zhilin Wu. 2016. A complete decision procedure for linearly compositional separation logic with data constraints. In Proceedings of the 8th International Joint Conference on Automated Reasoning (IJCAR'16). 532–549.
  23. Radu Iosif, Adam Rogalewicz, and Jirí Simácek. 2013. The tree width of separation logic with recursive definitions. In Proceedings of the 24th International Conference on Automated Deduction (CADE'13). 21–38.
  24. Radu Iosif, Adam Rogalewicz, and Tomás Vojnar. 2014. Deciding entailments in inductive separation logic with tree automata. In Proceedings of the 12th International Symposium on Automated Technology for Verification and Analysis (ATVA'14). 201–218.
  25. Samin S. Ishtiaq and Peter W. O'Hearn. 2001a. BI as an assertion language for mutable data structures. In Proceedings of the 28th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'01). 14–26.
  26. Samin S. Ishtiaq and Peter W. O'Hearn. 2001b. BI as an assertion language for mutable data structures. In Proceedings of the 28th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'01), Chris Hankin and Dave Schmidt (Eds.). ACM, 14–26.
  27. Bart Jacobs, Jan Smans, Pieter Philippaerts, Frédéric Vogels, Willem Penninckx, and Frank Piessens. 2011. VeriFast: A powerful, sound, predictable, fast verifier for C and Java. In Proceedings of the 3rd International Symposium on NASA Formal Methods (NFM'11). 41–55.
  28. Ralf Jung, Robbert Krebbers, Jacques-Henri Jourdan, Ales Bizjak, Lars Birkedal, and Derek Dreyer. 2018. Iris from the ground up: A modular foundation for higher-order concurrent separation logic. J. Funct. Program. 28 (2018), e20.
  29. Jens Katelaan, Dejan Jovanović, and Georg Weissenbacher. 2018. A separation logic with data: Small models and automation. In Automated Reasoning, Didier Galmiche, Stephan Schulz, and Roberto Sebastiani (Eds.). Springer International Publishing, Cham, 455–471.
  30. Jens Katelaan, Christoph Matheja, and Florian Zuleger. 2019. Effective entailment checking for separation logic with inductive definitions. In Proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS'19), Part of the European Joint Conferences on Theory and Practice of Software (ETAPS'19). 319–336.
  31. Jens Katelaan and Florian Zuleger. 2020. Beyond symbolic heaps: Deciding separation logic with inductive definitions. In Proceedings of the 23rd International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (EPiC Series in Computing, Vol. 73), Elvira Albert and Laura Kovács (Eds.). EasyChair, 390–408.
  32. Siddharth Krishna, Dennis E. Shasha, and Thomas Wies. 2018. Go with the flow: Compositional abstractions for concurrent data structures. Proc. ACM Program. Lang. 2 (2018), 37:1–37:31.
  33. Siddharth Krishna, Alexander J. Summers, and Thomas Wies. 2019. Local reasoning for global graph properties. Retrieved from http://arxiv.org/abs/1911.08632.
  34. Siddharth Krishna, Alexander J. Summers, and Thomas Wies. 2020. Local reasoning for global graph properties. In Proceedings of the 29th European Symposium on Programming (ESOP'20), Part of the European Joint Conferences on Theory and Practice of Software (ETAPS'20) (Lecture Notes in Computer Science, Vol. 12075), Peter Müller (Ed.). Springer, 308–335.
  35. Quang Loc Le, Makoto Tatsuta, Jun Sun, and Wei-Ngan Chin. 2017. A decidable fragment in separation logic with inductive predicates and arithmetic. In Proceedings of the 29th International Conference on Computer Aided Verification (CAV'17). 495–517.
  36. P. Madhusudan, Gennaro Parlato, and Xiaokang Qiu. 2011. Decidable logics combining heap structures and data. In Proceedings of the 38th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'11). ACM, New York, NY, 611–622.
  37. Peter W. O'Hearn. 2007. Resources, concurrency, and local reasoning. Theor. Comput. Sci. 375, 1–3 (2007), 271–307.
  38. Jens Pagel, Christoph Matheja, and Florian Zuleger. 2020. Complete entailment checking for separation logic with inductive definitions. Retrieved from https://arxiv.org/abs/2002.01202.
  39. Jens Pagel and Florian Zuleger. 2021. Strong-separation logic. In Proceedings of the 30th European Symposium on Programming (ESOP'21), Part of the European Joint Conferences on Theory and Practice of Software (ETAPS'21) (Lecture Notes in Computer Science, Vol. 12648), Nobuko Yoshida (Ed.). Springer, 664–692.
  40. Juan Antonio Navarro Pérez and Andrey Rybalchenko. 2013. Separation logic modulo theories. In Proceedings of the 11th Asian Symposium on Programming Languages and Systems (APLAS'13). 90–106.
  41. Ruzica Piskac, Thomas Wies, and Damien Zufferey. 2013. Automating separation logic using SMT. In Computer Aided Verification, Natasha Sharygina and Helmut Veith (Eds.). Lecture Notes in Computer Science, Vol. 8044. Springer, Berlin, 773–789.
  42. Ruzica Piskac, Thomas Wies, and Damien Zufferey. 2014. Automating separation logic with trees and data. In Proceedings of the 26th International Conference on Computer Aided Verification (CAV'14), Part of the Vienna Summer of Logic (VSL'14). 711–728.
  43. Xiaokang Qiu, Pranav Garg, Andrei Stefanescu, and Parthasarathy Madhusudan. 2013. Natural proofs for structure, data, and separation. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI'13). 231–242.
  44. Andrew Reynolds, Radu Iosif, and Cristina Serban. 2017. Reasoning in the Bernays-Schönfinkel-Ramsey fragment of separation logic. In Proceedings of the 18th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI'17). 462–482.
  45. Andrew Reynolds, Radu Iosif, Cristina Serban, and Tim King. 2016. A decision procedure for separation logic in SMT. In Automated Technology for Verification and Analysis, Cyrille Artho, Axel Legay, and Doron Peled (Eds.). Springer International Publishing, Cham, 244–261.
  46. John C. Reynolds. 2002. Separation logic: A logic for shared mutable data structures. In Proceedings of the 17th IEEE Symposium on Logic in Computer Science (LICS'02). 55–74.
  47. Malte Schwerhoff and Alexander J. Summers. 2015. Lightweight support for magic wands in an automatic verifier. In Proceedings of the 29th European Conference on Object-Oriented Programming (ECOOP'15). 614–638.
  48. Ilya Sergey, Aleksandar Nanevski, and Anindya Banerjee. 2015. Mechanized verification of fine-grained concurrent programs. In Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation, David Grove and Stephen M. Blackburn (Eds.). ACM, 77–87.
  49. Makoto Tatsuta and Daisuke Kimura. 2015. Separation logic with monadic inductive definitions and implicit existentials. In Proceedings of the 13th Asian Symposium on Programming Languages and Systems (APLAS'15). 69–89.


Published in ACM Transactions on Programming Languages and Systems, Volume 44, Issue 3 (September 2022), 302 pages. ISSN: 0164-0925, EISSN: 1558-4593, Issue DOI: 10.1145/3544000. Publisher: Association for Computing Machinery, New York, NY, United States.

Publication History

• Received: 1 April 2021
• Revised: 1 September 2021
• Accepted: 1 November 2021
• Online AM: 23 February 2022
• Published: 15 July 2022
