Deadlock-Free Separation Logic: Linearity Yields Progress for Dependent Higher-Order Message Passing

We introduce a linear concurrent separation logic, called LinearActris, designed to guarantee deadlock and leak freedom for message-passing concurrency. LinearActris combines the strengths of session types and concurrent separation logic, allowing for the verification of challenging higher-order programs with mutable state through dependent protocols. The key challenge is to prove the adequacy theorem of LinearActris, which says that the logic indeed gives deadlock and leak freedom “for free” from linearity. We prove this theorem by defining a step-indexed model of separation logic, based on connectivity graphs. To demonstrate the expressive power of LinearActris, we prove soundness of a higher-order (GV-style) session type system using the technique of logical relations. All our results and examples have been mechanized in Coq.


INTRODUCTION
Session type systems [Honda 1993; Honda et al. 1998] allow type checking of programs that involve message-passing concurrency. Session types are protocols, which can be seen as sequences of send (!) and receive (?) actions. They are associated with channels, and express in what order messages of what type should be transferred. For example, the session type !Z.?B.end is given to a channel over which an integer should be sent, after which a boolean is received. More complex session types can be formed with operators for choice (⊕, &), recursion (μ), etc.
Aside from ensuring type safety, linear session type systems [Caires and Pfenning 2010; Wadler 2012] can ensure deadlock freedom. That means that well-typed programs cannot end up in a state where all threads are waiting to receive a message from another. Deadlock freedom has been extended to a large variety of session type systems [Carbone and Debois 2010; Fowler et al. 2021; Toninho et al. 2013; Toninho 2015; Caires et al. 2013; Pérez et al. 2014; Lindley and Morris 2015, 2016, 2017; Fowler et al. 2019; Das et al. 2018]. The elegance of session type systems is that they give deadlock freedom essentially "for free"-it is obtained from "just" linear type checking. Moreover, session types are compositional-once functions have been type checked, they can be composed by merely establishing that the types agree. A final strength of session types is that deadlock freedom is maintained in a higher-order setting where closures and channels are transferred as first-class data over channels. The goal of this paper is to extend these advantages to separation logic.
Linear session types are unique and different from other methods for deadlock freedom-such as lock orders [Dijkstra 1971; Leino et al. 2010; Hamin and Jacobs 2018; Balzer et al. 2019; D'Osualdo et al. 2021], priorities [Kobayashi 1997; Padovani 2014; Dardha and Gay 2018], and global multiparty session types [Honda et al. 2008, 2016]-because they do not require any additional proof obligations involving orders, priority annotations, or global types. Still, the other methods neither supersede nor subsume session types in the range of programs they can prove to be deadlock free (§9.1).
The ideas of session types are not limited to type checking, but have previously also been applied to functional verification. Bocchi et al. [2010]; Francalanza et al. [2011]; Lozes and Villard [2012]; Craciun et al. [2015]; Oortwijn et al. [2016]; Hinrichsen et al. [2020, 2022] have developed program logics that incorporate concepts from session types to verify increasingly sophisticated programs with message-passing concurrency. The protocols of these program logics make it possible to put logical conditions on the messages, allowing one to specify the contents (e.g., the message is an even number) instead of just the shape (e.g., it is an integer). The state of the art is the Actris logic and its descendants [Hinrichsen et al. 2020, 2022; Jacobs et al. 2023b], which are embedded in the Iris framework for concurrent separation logic in Coq [Jung et al. 2015, 2016; Krebbers et al. 2017a; Jung et al. 2018b]. The key ingredient of Actris is its notion of dependent separation protocols, which, inspired by dependent session types, can express dependencies between the data of messages and specify the transfer of resources. For example, the protocol ! (ℓ : Loc, v : N)⟨ℓ⟩{ℓ ↦ v}; ?⟨v⟩{ℓ ↦ (v + 1)}; end says that a location ℓ with value v should be sent, after which the value v should be received, and the value of ℓ has been incremented.
Since Actris is a full-blown program logic, instead of a type system that aims to have decidable type checking, it can express more protocols and therefore verify safety of more programs than session types. In particular, it can express protocols where the shape (e.g., the number of messages) of the protocol depends on the contents of earlier messages. Moreover, Hinrichsen et al. [2021] show that Actris can be used to give a semantic model to prove soundness of (affine) session types using the technique of logical relations in Iris [Timany et al. 2022].
Intuitively this theorem says that the logic is doing its job: a verified program e "cannot go wrong", i.e., it cannot perform illegal operations such as loading from a dangling location (use after free) or using an operator with wrong arguments (e.g., 3 + (λx. x)). Formally it says that if e can be verified (i.e., a Hoare triple with trivial precondition can be proved), and the initial configuration ([e], ∅) (consisting of a single thread e and the empty heap) steps to ([e₁ . . . eₙ], h) (consisting of threads e₁ . . . eₙ and heap h), then each thread eᵢ has either finished (is a value) or can make further progress (perform a step). Illegal operations cannot step, so adequacy guarantees they do not occur.
Despite the strong trust that the adequacy theorem gives in the correctness of the program logic-especially when mechanized in a proof assistant such as Coq-the adequacy theorem of most state-of-the-art program logics says nothing about deadlocks. In Iris, blocking operations (e.g., receiving from a channel whose buffer is empty, or acquiring a lock that has already been acquired) are modeled as busy loops, and thus cannot be distinguished from non-terminating programs. Both deadlocking and non-terminating programs can always step, and are thus trivially safe.
Goal of the paper. Our goal is to build a program logic for partial correctness that (1) enjoys an adequacy theorem that guarantees deadlock freedom for message-passing concurrency, (2) combines the strengths of session types and concurrent separation logic to obtain deadlock freedom "for free" from linearity, without any additional proof obligations, and (3) is strong enough to verify challenging programs. By partial correctness we mean that we formalize deadlock freedom as a safety property (it is impossible for all threads to be waiting on a blocking operation), rather than a liveness property (threads are guaranteed to make progress or terminate). Formalizing deadlock freedom as a safety property is similar to the standard property of global progress in session type systems [Caires and Pfenning 2010] (§10 contains a discussion of safety versus liveness).
Before discussing the desiderata of the program logic, let us investigate the operational semantics and adequacy theorem. To distinguish between deadlock and non-termination, we let the receive operation on a channel block the thread until a message is sent, instead of letting it perform a busy loop. With that change at hand, the adequacy theorem becomes similar to the global progress theorem of session type systems:

Theorem 1.2. A proof of {Emp} e {Emp} implies that e enjoys global progress, i.e., if ([e], ∅) →ₜ* ([e₁ . . . eₙ], h), then either each eᵢ is a value and h = ∅, or ([e₁ . . . eₙ], h) can step.
Instead of requiring each thread to step, which would be false if a thread is genuinely waiting for another thread, we require the configuration as a whole to step. This means that there is always at least one thread that can step, i.e., there is no global deadlock. Additionally, compared to the adequacy theorem for safety, we require the final heap to be empty, which means all channels have been deallocated, i.e., there are no memory leaks. (Note that global progress does not subsume safety; we still need a theorem that ensures the absence of illegal non-blocking operations.) Our desired adequacy theorem does not hold for Iris-based logics such as Actris:
• The need for linearity. Iris and Actris are affine, which means that resources must be used at most once, but can also be dropped (Iris satisfies the proof rule P ∗ Q ⊢ P, or equivalently Emp ⊣⊢ True). Hence one can verify a program that creates a channel with endpoints c₁ and c₂, have one thread perform a receive, and let the other thread perform a no-op:

Thread 1: c₁.recv()        Thread 2: do nothing

This program can be verified in Iris/Actris because, using affinity, the ownership of c₂ can be dropped in the second thread. However, this program causes a deadlock: due to the absence of a send, the receive will block indefinitely. In session types this form of deadlock is ruled out by making the system linear, which means that resources must be used exactly once, and cannot be dropped until the protocol has been completed.
• The need for acyclicity. Linearity alone is not enough. If a thread could obtain ownership of both endpoints of a single channel, then it would be able to trivially deadlock itself, by performing the receive before the send. Linearity would not be violated, as the thread would still consume both channel ownership assertions according to the rules of the logic, but the thread would be blocked forever. More generally, if two threads own the endpoints of two channels, and perform a receive followed by a send, there would be a deadlock:

Thread 1: c₁.recv(); d₁.send(2)        Thread 2: d₂.recv(); c₂.send(1)

In session types, Wadler [2012] addresses this problem by combining thread and channel creation into a single construct. Together with linearity, this ensures that channel ownership is acyclic in a certain sense, and rules out all deadlocks without the need for annotations.
In this paper we introduce LinearActris, which amends Actris with the aforementioned restrictions from linear session types to satisfy the goals we stated above. The key challenge that we address in the remainder of the introduction is proving the adequacy theorem of LinearActris.
Key challenge: Proving adequacy. Adequacy is commonly proved by giving a semantic interpretation of propositions and Hoare triples. For sequential separation logic [O'Hearn et al. 2001], propositions are modeled as heap predicates, and the semantics of Hoare triples is defined so that safety and leak freedom follow almost by definition. Since we consider a higher-order program logic, for a concurrent language with dynamic thread and channel spawning, and wish to prove global progress, this simple setup no longer suffices. We should address the following challenges:
• Circular semantics. Session types and the dependent separation protocols of Actris are higher-order, which means they can specify programs that transfer channels and closures over channels. In Actris one can write c ↣ ! (c′ : Loc)⟨c′⟩{c′ ↣ p}; end to say that c is a channel, over which a channel c′ with protocol p is sent. Here, the protocol p can contain protocol ownership assertions c′′ ↣ p′, where p′ can in turn contain protocol ownership assertions. This circularity involves a negative recursive occurrence and cannot be solved in set theory. It is similar to the type-world circularity in models of type systems with higher-order references [Ahmed 2004; Birkedal et al. 2011], and that of storable locks [Hobor et al. 2008] and impredicative invariants [Svendsen and Birkedal 2014], where step-indexing [Appel and McAllester 2001] is used to solve the circularity. The original Actris makes (in part) use of Iris's impredicative invariant mechanism to avoid solving this circularity explicitly.
• Invariants and linearity. Iris's invariants are strongly tied to the logic being affine. Jung [2020, Thm 2] presents a paradox showing that a naïve linear version of Iris's invariants cannot be used to obtain even leak freedom. Bizjak et al.
[2019] present Iron, a linear version of Iris with an invariant mechanism that can be used to prove leak freedom. Aside from not considering deadlock freedom, Iron avoids Jung's paradox by restricting the contents of invariants-namely, invariants cannot contain permissions to deallocate resources. Ownership of the end protocol needs to provide permission to deallocate the channel, making Iron's invariants insufficient for our purpose.
• Invariants and acyclicity. Linearity alone is not enough to avoid deadlocks-one needs to maintain an invariant that the channel ownership topology is acyclic. Formalizing this acyclicity invariant is a key challenge of the syntactic meta theory of session types [Lindley and Morris 2015, 2016; Fowler et al. 2021; Jacobs et al. 2022a; Jacobs 2022]. Since this prior work is aimed at the syntactic theory of type systems, we need to investigate how to incorporate acyclicity of the topology into a semantic model of a program logic. Additionally, in type systems there is a 1-to-1 correspondence between physical references and ownership, but not in program logics. One can create protocols such as ! (c : Loc)⟨c⟩{c ↣ p}; ?⟨()⟩{c ↣ p′}; end where a channel reference and its ownership are sent, and only an acknowledgment () is returned. This means that the sending thread has to keep a reference to the channel, although it cannot use it before it has received the acknowledgment.
We define a step-indexed linear model of separation logic as the solution of a recursive domain equation [America and Rutten 1989; Birkedal et al. 2010]. To avoid reasoning about step-indices, we work in the pure step-indexed logic with a later modality (⊲) [Appel et al. 2007; Nakano 2000].
Similar to Iris, we define Hoare triples in terms of weakest preconditions. A major difference in the definition of the weakest precondition compared to Iris is that we thread through the weakest preconditions of all threads, as well as the ownership and duality invariants of all channels. This way we can ensure that at all times the threads and channels form an acyclic topology with respect to channel ownership. To formalize acyclicity we adapt the notion of connectivity graphs by Jacobs et al. [2022a]. Connectivity graphs were originally designed for the syntactic meta theory of type systems for deadlock freedom, but we adapt them so that they can be integrated into a step-indexed separation logic. To simplify the construction of the model and the operational semantics of the language, we base ourselves on the work of Dardha et al. [2012] and Jacobs et al. [2023b]: we use one-shot channels as primitive, and build multi-shot channels on top of those.

(Fig. 1 appears here: the syntax of ChanLang, e ∈ Expr.)
Contributions. We introduce LinearActris-a concurrent separation logic for proving deadlock and leak freedom of message-passing programs, essentially offering these guarantees "for free" from linearity, without any additional proof obligations. This involves the following contributions:
• We verify a range of examples that use channels, closures, and mutable references as first-class data, demonstrating the expressive power of LinearActris (§2).
• We provide a formal description of the proof rules of LinearActris, first for multi-shot channels, and then for one-shot channels. Based on Jacobs et al. [2023b], we derive the logic for multi-shot channels from the one for one-shot channels (§3 and 4).
• We provide a formal adequacy proof of LinearActris based on a step-indexed model of separation logic rooted in connectivity graphs [Jacobs et al. 2022a], showing that a derivation in LinearActris ensures deadlock and leak freedom of the program in question (§5 and 6).
• We use LinearActris to construct a logical relations model that establishes deadlock freedom for a session-typed language (§7). This contribution has two purposes. First, it provides an application that truly relies on our connectivity-based approach to deadlock freedom (and is out of scope for logics based on, e.g., lock orders). Second, it shows that every program that can be shown to be deadlock free using a GV-style session type system can also be shown to be deadlock free using LinearActris. In fact, we go beyond existing GV-like systems by supporting the combination of recursive types, subtyping, term- and session-type polymorphism, and unique mutable references (§7).
• We mechanized all our results, as well as a number of examples from the original Actris papers, in Coq (§8). See Jacobs et al. [2023a] for the sources.
We conclude the paper with related work ( §9) and discussion/future work ( §10).

LINEAR ACTRIS BY EXAMPLE
In this section we present LinearActris with example programs that we verify. We deliberately use very small examples. In our Coq mechanization we show that LinearActris can also be used to prove deadlock freedom of more challenging examples from the Actris papers, in particular, a number of increasingly complicated versions of parallel merge sort (§8).
The programming language that we use in LinearActris is called ChanLang. It has concurrency, bidirectional message-passing channels, mutable references, and functional programming constructs (such as lambdas, products, sums, and recursion). The syntax is shown in Fig. 1. ChanLang has the following constructs for message-passing concurrency:
fork (λc. e)  Fork a new thread that runs e with a new channel at location c, and also return the location of the new channel.
c.send(v)  Send message v over the channel at location c.
c.recv()  Receive a message over the channel at location c.
c.close()  Close the channel at location c.
c.wait()  Wait for the channel at location c to be closed.
The c.recv() and c.wait() operations are blocking, and could thus potentially lead to deadlocks. As is common in session-typed languages like GV [Wadler 2012; Gay and Vasconcelos 2010], our fork operation both spawns the child thread and sets up a channel for communication between the parent thread and the child thread. This will turn out to be important for deadlock freedom (§5 and 6).
The following example illustrates how we can use these constructs to fork off a thread that receives a message from the main thread, adds one to it, and sends it back: The assert e operation asserts that e evaluates to true, and otherwise it gets stuck. Other illegal operations, such as sending over a closed channel, also get stuck forever. To verify the program, we need to reason about the channels c₁ and c₂. We do so by means of channel ownership assertions c ↣ p, which state that we own a reference to the channel c, and we must interact with it according to the protocol p. Our protocols are dependent separation protocols in the style of Actris [Hinrichsen et al. 2020]. We can use the following dual pair of protocols for c₁ and c₂ at the fork: In these protocols, each step is either !⟨v⟩ or ?⟨v⟩, indicating that the owner of the reference must send or recv a value v, respectively. The final !end / ?end indicates that the protocol is finished, and that the close / wait operation must be performed.
Quantified protocols. The preceding protocol is inflexible, because it specifies the exact values that must be sent and received. To alleviate this inflexibility, we can use a quantified protocol instead: This protocol states that if we send x, then we will receive x + 1. When verifying a quantified protocol step, the sender can instantiate the quantified variable with any logical value. For example, the sender can instantiate x with 1, and send 1 over the channel. The receiver must be verified to work for any x chosen by the sender. The continuation of the protocol is allowed to be an arbitrary function of the quantified variables. This can be used to verify examples such as the following:

let x = c.recv() in if x < 5 then c.close() else c.send(x − 5); . . .

The protocol for c will have to have a different length, depending on which branch of the if is taken. We can verify this program using the following protocol for c:

c ↣ ?(x : N)⟨x⟩; if x < 5 then (!end) else (!⟨x − 5⟩; . . .)

Mutable references. In addition to channels, our language has mutable references:
ref v  Allocate a new location in the heap, store the value v in it, and return the location.
! ℓ  Read the value from the location ℓ.
ℓ ← v  Write the value v to the location ℓ.
free ℓ  Free the location ℓ.
Illegal operations, such as using a location that has been freed, are modeled as getting stuck forever. Consider the following variant of the preceding example: We send a mutable reference from the main thread to the forked thread, which increments it. The main thread waits for the forked thread to close its channel, and then asserts that the value of the reference is 2. The reference is then freed by the main thread. LinearActris can prove that this program is safe and does not deadlock. Note that safety relies on the blocking behavior of c₁.wait(), which ensures that the forked thread has finished before the main thread asserts that the value is 2 and frees the reference. The protocols to verify this program are as follows: This time, the protocol uses quantifiers for both the location ℓ and the value v that is initially stored in the location. The protocol states that if we send a location ℓ, then this location will be incremented by 1. The curly brackets {_} indicate the separation logic resources that are sent along with the message. In the protocol for c₁ above, the heap ownership assertion ℓ ↦ v is transmitted with the initial send step, and ℓ ↦ v + 1 is received in the wait step. As the following example shows, a reference need not be sent over the channel, but can also be captured by the closure: We transfer ℓ ↦ 1 to the child thread immediately upon the fork, and the protocols simplify to:

Sending channels over channels. In addition to exchanging references, LinearActris can also reason about programs that send channels over channels. Consider the following example: The program forks off two threads, which gives the main thread two channels c₁ and d₁. The main thread then sends d₁ over c₁, waits for c₁ to be closed, and then closes d₁. The first thread receives d₁ from c₂, then receives on d₁ and asserts that the value is 2, and then closes c₂. The second thread sends 2 over d₂, and then waits for d₂ to be closed.
That this program is safe and does not deadlock can be proven in LinearActris, but this is more subtle than one might think: if we were to swap the two operations c₁.wait(); d₁.close(), then the program would not be safe, as d₁ might be closed before the other threads are done with it. We can verify the example using the following protocols: The protocol for d₁ and d₂ is simple: we send 2 and then end. The protocol for c₁ is more interesting: we send a (quantified) location d, and also send channel ownership for d, with the same protocol as we chose for d₁. The continuation of the protocol is ?end{d ↣ !end}, which transfers ownership of d back to the main thread, but now at a new protocol.

c₁ ↣ ! (f : Val, Φ : Val → aProp)⟨f⟩{WP f () {Φ}}; ?(v : Val)⟨v⟩{Φ v}; ?end

This protocol allows us to send a closure f, provided we also send a weakest precondition assertion WP f () {Φ}, which ensures that the return value v of f () satisfies Φ v. We can then receive v, and obtain the resources Φ v. The protocol allows the closure f to capture linear resources in its environment, such as channels and references.

THE PROOF RULES OF LINEAR ACTRIS
In this section we present the rules of LinearActris. Like Iris, the key component of LinearActris is its weakest precondition connective WP e {Φ}. This connective is a separation logic assertion, which states that if the program e is executed in the current heap, then its return value will satisfy the predicate Φ. The Hoare triple {P} e {Φ} is syntactic sugar for P ⊢ WP e {Φ}. The adequacy theorem of LinearActris (Theorem 5.4) guarantees safety, deadlock freedom, and leak freedom for e, provided we have a proof of Emp ⊢ WP e {Emp}.

Basic Separation Logic
Fig. 2 displays the grammar of LinearActris propositions, as well as the basic rules for reasoning about weakest preconditions that involve pure expressions and mutable references. The weakest precondition rules in this figure are fairly standard, so we will only give a brief overview. The rules WP-pure-step and WP-val are the basic rules for reasoning about pure expressions. The rules WP-Löb and WP-rec are used to reason about recursive functions. The WP-bind rule is used to reason about an expression nested inside a (call-by-value) evaluation context. The WP-wand rule can be used to weaken the postcondition, as well as to frame away parts of the precondition. The rules WP-alloc, WP-load, WP-store, and WP-free reason about mutable references. In combination with inference rules for the logical connectives (which are not shown in the figure), these rules handle single-threaded programs, such as programs that manipulate mutable linked lists.
Linearity. An important distinction between LinearActris and Iris/Actris is that LinearActris is linear, whereas Iris is affine. This means that in LinearActris, the rule P ⊢ Emp does not hold for all P, whereas in Iris it does. This distinction is important, because the rule P ⊢ Emp can be used to leak resources. For instance, if P = ℓ ↦ v, then ℓ ↦ v ⊢ Emp can be used to leak the location ℓ. Unlike Iris, LinearActris guarantees leak freedom, and thus forces us to free locations. Furthermore, as we shall see shortly, the linearity of LinearActris is crucial for deadlock freedom, as it prevents us from dropping the obligation to send a message over a channel (recall, not sending a message means that the receiving end of the channel would block forever).

Channels and Protocols
Like Actris, LinearActris uses dependent separation protocols for reasoning about channels. The grammar of protocols is displayed in Fig. 3, and their meaning is as follows:
• Send protocol ! (x⃗ : τ⃗)⟨v⟩{P}; p. The variables x⃗ are binders that scope over v, P, and p, that is, these are functions of x⃗. During the verification of a send operation, we can instantiate x⃗ with mathematical values of our choosing; then v must be equal to the physical value that is sent, P is a separation logic proposition that we transfer to the receiver, and p is the new protocol for the channel.
• Receive protocol ? (x⃗ : τ⃗)⟨v⟩{P}; p. During the verification of a receive operation, we learn that there exists a choice of mathematical values x⃗ such that the physical value received equals v, P is a separation logic proposition we receive, and p is the new protocol for the channel.
• Close protocol !end{P}. During the verification of a close operation, P is a separation logic proposition that we transfer to the other side.
• Wait protocol ?end{P}. During the verification of a wait operation, P is a separation logic proposition that we receive.
• Recursive protocol μ r. p. This is a recursive protocol, where r is a binder that scopes over p. The recursive protocol can be unfolded by replacing r with μ r. p. Recursive protocols with parameters are also supported; we give an example of such a protocol in §3.4.
The weakest precondition rules for channels in Fig. 3 work as follows:
• WP-fork: This rule is used to verify a fork operation. The rule states that fork returns a channel c, for which we can choose a protocol p. We must then verify that the thread that is spawned on the other side operates with its side of the channel according to the dual protocol p̄, which is the same as p except that all send and receive operations are swapped.

Deadlock and leak freedom. The rules in Fig. 3 are designed to ensure deadlock and leak freedom. The reader may note that there are no apparent proof obligations for these properties, other than linearity: there are no preconditions that require us to follow a certain lock or priority order. In §4 to 6 we will see how the rules in Fig. 3 ensure deadlock freedom and leak freedom.

Subprotocols
An important feature of Actris is subprotocols, analogous to subtyping in type systems. LinearActris also supports subprotocols. The subprotocol relation is written p₁ ⊑ p₂, and satisfies the rules in Fig. 3. The subprotocol relation lets us make the protocol of a channel more specific: we can turn channel ownership c ↣ p₁ into c ↣ p₂, provided that p₁ ⊑ p₂. The rules of Fig. 3 are general, and imply various special cases. For example, the rules allow us to instantiate a quantifier in a send protocol:

!(x : N)⟨x⟩; ?⟨x + 1⟩; ?end ⊑ !⟨1⟩; ?⟨2⟩; ?end

Dually, we can abstract a quantifier in a receive protocol:

?⟨1⟩; !⟨2⟩; !end ⊑ ?(x : N)⟨x⟩; !⟨x + 1⟩; !end

We can apply subprotocols deeper inside the protocol using the special case that if p₁ ⊑ p₂, then: We can also use the subprotocol relation to make the propositions that are transferred more specific.
If we have a separating implication P₁ −∗ P₂, then we can replace the proposition that is transferred:

Guarded Recursive Protocols and Choice
Another important feature of Actris and LinearActris is the ability to construct infinite protocols.
With the constructs we have seen so far, we can construct unbounded protocols and verify programs with them. One can use well-founded recursion in the meta-logic (i.e., a Fixpoint definition in Coq) to define a recursive function that constructs a protocol, and then use that protocol in the verification of a program. For example, we can construct a protocol that sends n messages, and then closes the channel, for any n determined by the first message: However, well-founded recursion does not allow us to construct truly infinite protocols, such as the following protocol that sends increasing natural numbers forever: Actris and LinearActris allow us to construct such infinite protocols using guarded recursion:

nats n ≜ !⟨n⟩; nats (n + 1)   or formally:   nats ≜ μ r. λn. !⟨n⟩; r (n + 1)

This definition is guarded, because the recursive call is guarded by a message send. Our notion of guardedness also allows the recursive call to occur inside the resources. For example: Guarded recursion is most useful in combination with choice, which we can encode using a quantified protocol. This lets us express "services" that can perform a certain action (such as sending a natural number) forever, but allow the receiver to close the channel:

μ r. !(n : N)⟨n⟩; ?(b : Bool)⟨b⟩; if b then (!end) else r

FROM MULTI-SHOT TO ONE-SHOT CHANNELS
Before discussing the adequacy proof of LinearActris ( § 5 and 6), we first reduce multi-shot channels and protocols to single-shot channels and protocols, inspired by the approach of Dardha et al.
[2012] for session types and Jacobs et al. [2023b] for separation logic.
The reason we encode multi-shot channels in terms of one-shot channels is twofold. First, it is easier to prove adequacy of the one-shot logic, because it is simpler. The ideas required are not fundamentally different, but there are fewer cases to handle. Second, we believe that the encoding of multi-shot channels in terms of one-shot channels showcases the flexibility of LinearActris: the encoding involves mutable references, transmitting channels over channels, and creating new threads in a non-trivial way. If one considers the examples of §2 in light of the encoding, one realizes that a lot is going on at run-time, and one might therefore expect it to be difficult to verify deadlock and leak freedom. The encoding shows that LinearActris is flexible enough to modularly build the multi-shot abstraction in terms of one-shot channels.

Primitive One-Shot Logic
The primitive one-shot channels have the following operations:
fork1 (λc. e)  Fork a new thread that runs e with a new one-shot channel c, and return c.
send1 c v  Send message v over the channel c.
recv1 c  Receive a message over the channel c, and free c.
The send1 c v and recv1 c operations may only be used once per one-shot channel. The operation recv1 c is blocking. The primitive one-shot channels are governed by simple one-shot protocols, which are defined in Fig. 4. A one-shot protocol is either !Φ or ?Φ, where Φ ∈ Val → aProp is a separation logic predicate that specifies which values are allowed to be transmitted. The dual of !Φ is ?Φ, and vice versa. The primitive one-shot channel weakest precondition rules are given in Fig. 4. The rules are similar to the rules of LinearActris, except that they are simpler because they do not have to deal with the complexity of multi-shot channels and protocols:
• WP-prim-send: When we send1 c v, we must have channel ownership c ↣1 !Φ, and we must provide the resources Φ v to be transmitted. The postcondition is Emp, because the channel ownership is consumed.
One-shot protocols: p ∈ Prot ::= !Φ | ?Φ, where Φ ∈ Val → aProp (see Fig. 4).
• WP-prim-recv: When we recv1 c, we must have channel ownership c ↣1 ?Φ, and we obtain the resources Φ v, where v is the value that was received. The channel ownership is consumed.

Encoding of Multi-Shot Channels
The multi-shot channels from §3 are implemented in terms of one-shot channels. The implementation is given in Fig. 5. A multi-shot channel endpoint is represented as a mutable reference that stores a one-shot channel. When we send a message v on a multi-shot channel, we create a continuation one-shot channel c′, and we send the message (c′, v) on the one-shot channel that is stored in the mutable reference. The channel c′ is then stored in the mutable reference of the sender, to be used for communicating the next message. On the other side, we receive a message (c′, v), store c′ in the receiver's mutable reference, and then return v. The multi-shot channel is closed by doing a final synchronisation on the one-shot channel without creating a continuation channel, and freeing the mutable reference.

We define multi-shot protocols in terms of one-shot protocols, as shown in Fig. 5. The definition for ?(x⃗)⟨v⟩{P}; p specifies that there exists an instantiation of the binders x⃗ such that the message (c′, v) is sent over the one-shot channel, which means that the value is specified by v in the protocol. We additionally transmit the resources P, as well as new channel ownership c′ ↣1 q for the continuation channel at the right protocol. The definition of the send protocol is simply dual. The definitions of the close and wait protocols are special cases of the send and receive protocols, as no continuation channel is created.
Finally, multi-shot channel ownership ℓ ↣ p is defined in terms of heap ownership and one-shot channel ownership, as shown in Fig. 5. The definition states that the mutable reference ℓ stores a one-shot channel c, and that the one-shot channel has a protocol q ⊑ p. This means that the multi-shot channels support subprotocols, even though the one-shot channels do not.
With these definitions, along with the specifications for one-shot protocols, we can then derive the specifications for multi-shot channels, as presented in Fig. 3.

WHY LINEAR ACTRIS IS DEADLOCK FREE: CONNECTIVITY GRAPHS
Now that we have given the rules of the one-shot logic, we cover how it guarantees deadlock and leak freedom by linearity. We first give the general structure of the adequacy proof, and explain how it uses an invariant that is preserved as the program executes (§5.1). We give an intuition for the principles that the invariant needs to enforce, by going through some faulty examples, and by showing how the notion of connectivity graphs [Jacobs et al. 2022a] is used (§5.2). We finally present how we reason about the preservation of the invariant in terms of connectivity graphs (§5.3). In the next section we give a more formal presentation of the adequacy proof, including the use of step-indexing to stratify circular definitions (§6).

General Approach
Our general approach is to define an invariant I(⃗e, h), which describes the state of the configuration of threads and heap. The invariant satisfies three properties that together imply adequacy. This approach is similar to the technique of progress and preservation for proving type safety [Wright and Felleisen 1994; Pierce 2002; Harper 2016], but our invariant is defined semantically (in terms of the operational semantics of the language) instead of syntactically (in terms of inductively defined judgments). The first of these properties is that the invariant can be established from the weakest precondition of the program: if Emp ⊢ WP e {Emp} holds, then I([e], ∅) holds. That is, the invariant holds for the initial configuration with one thread e and empty heap. The second property is that the invariant is preserved by the steps of our operational semantics:

Lemma 5.2 (Preservation). If I(⃗e, h) holds, and (⃗e, h) steps to (⃗e′, h′), then I(⃗e′, h′) holds.

The third property is that the invariant implies the conclusion of the adequacy theorem:

Lemma 5.3 (Progress). If I(⃗e, h) holds, then either (⃗e, h) can step, or ⃗e are all values and h = ∅.
Together, these three properties imply adequacy: starting from the initial configuration, we repeatedly apply the preservation lemma to reach a configuration where the invariant holds, after which we apply the progress lemma to establish adequacy:

Theorem 5.4 (Adequacy). If Emp ⊢ WP e {Emp} holds, and ([e], ∅) →* (⃗e, h), then either (⃗e, h) can step, or ⃗e are all values and h = ∅.

In addition to this adequacy theorem, our logic also guarantees safety:

Theorem 5.5 (Safety). If Emp ⊢ WP e {Emp} holds, and ([e], ∅) →* (⃗e, h), then every thread in ⃗e can either reduce, or is a value, or is blocked on a receive or wait operation.
The safety theorem is a straightforward consequence of our invariant, so we will not discuss it further. The reader can find the proof in the Coq mechanization [Jacobs et al. 2023a]. In the next subsections we aim to give an intuition of what the invariant I(⃗e, h) looks like, and why it is preserved by the operational semantics of our language.

The Invariant Properties
The proof rules of LinearActris rule out deadlocks and leaks. To prove this, we will define the invariant introduced in §5.1. In this section we investigate the properties that we need the invariant to enforce. We do this by considering program examples that do deadlock or leak, to identify patterns we need to exclude. This allows us to build up to a formulation of our invariant that is sufficient to make the proof go through, analogous to strengthening the induction hypothesis.
Consider the following example: the forked-off thread does nothing, and the main thread waits for the forked-off thread by attempting to receive a message. The problem is that the forked-off thread does not fulfill its obligation to send a message. The proof rules of LinearActris exclude this example because the fork gives the forked-off thread a linear channel ownership assertion for c2 that it must consume, and in this example, there is no operation in the forked-off thread that can consume the ownership assertion. To exclude this pattern our invariant must uphold the following property:

Channel fulfillment: Terminated threads must not hold ownership assertions of channels.

Now consider the following type of deadlock, where both sides try to receive. The rules of LinearActris exclude this example because upon fork, one thread gets a receive assertion with ?Φ and the other gets a send assertion !Φ, or vice versa. Our invariant enforces this with the following property:

Channel duality: Each channel in the configuration is in one of two states: (1) there exist two channel ownership assertions c ↣1 !Φ and c ↣1 ?Φ for that channel, and the channel buffer is empty; (2) there exists only the receiver assertion c ↣1 ?Φ, and the channel contains a value v that satisfies Φ v.
Next, consider the type of deadlock illustrated by the following example: the forked-off thread smuggles its own channel back to the main thread by putting it in the reference ℓ. The main thread then attempts to receive, but this blocks forever, as the matching send is performed after the receive. The rules of LinearActris exclude this example because the reference ℓ must be uniquely owned. Since the forked-off thread immediately stores a value in the reference, the forked-off thread must get initial ownership over ℓ. The forked-off thread then stores its channel c2 in the reference, but according to the proof rules of LinearActris, this does not consume the channel ownership assertion for c2. Hence, the proof rules prevent this example for similar reasons as before.
This example is not yet ruled out by the invariant properties above, however, as the invariant has no way of excluding the possibility that the main thread might be holding the channel ownership assertions for both c1 and c2. We depict the situation when the deadlock occurs as a graph: the t1 node represents the main thread, the c node represents the channel, and the two edges labeled c1 and c2 represent ownership of the two endpoints of c. The red triangle on the c1 edge represents the fact that the thread t1 is currently blocked on recv1 c1. This is a deadlock because t1 holds both endpoints, so recv1 c1 will never get unblocked.
Our invariant rules out this situation with the following property:

Weak channel acyclicity: No thread can hold ownership over both endpoints of a channel.
This property is yet again not enough to guarantee deadlock freedom. In general, it can be the case that there are several threads that are waiting for each other, and that none of them will ever perform the send that the others are waiting for. Consider the situation in which both threads are waiting for each other, but neither of them will ever perform the send that the other is waiting for. This example does not violate the preceding invariant properties, as neither thread holds both channel ownership assertions for the same channel. Yet, there is still a deadlock: the graph shows that thread t1 is blocked on recv1 c1, and thread t2 is blocked on recv1 c2. The problem occurs because the graph is cyclic in an undirected sense. Note that there are no cycles in this directed graph, but deadlocks can already occur if there is a cycle when one is allowed to traverse ownership edges in either direction. The rules of LinearActris yet again prevent this situation from arising. In general, it may be difficult to see why the proof rules of LinearActris ensure weak channel acyclicity. After all, we can send channel assertions over channels via the transferred resources. One way to understand this is via the following strengthening of the invariant, which generalizes the preceding invariant by considering the graph of channel ownership assertions held by the threads:

Strong channel acyclicity (initial version):
There exists a connectivity graph of channel ownership assertions, where there is an edge from thread t to channel c if t holds a channel ownership assertion e ↣1 p for an endpoint e of c. This graph must be strongly acyclic.
By the term strongly acyclic, we mean that there is at most one path from any node to another, even if one is allowed to follow edges backwards.
Leaks. The aforementioned properties are enough to rule out the preceding examples, but there are subtle types of deadlocks that can still occur. The last remaining issue is that we have not yet taken into account the fact that we can store ownership assertions in channels, by transferring them via the send operation. There is thus a danger that we can leak channel ownership assertions circularly into each other, and thus create a cycle of channel ownership assertions. This could cause deadlocks in the same way as the first example in this section: by leaking a send ownership assertion, a send will never happen, and the receiver will block indefinitely.
For this reason, deadlocks are intimately related to leaks. It might be tempting to think that linearity alone is enough to rule out leaks, but as we alluded to, this is not the case. Consider what would happen if we had two channel endpoints c1 and c2, and sent c2 over c1. This program would not deadlock, but it would put the channel c2 in the buffer of c1. If c1 and c2 turned out to be the two endpoints of the same channel, then this would be a leak, as the channel would never be freed. We can use the protocol Φ v ≜ True with c1 ↣1 !Φ and c2 ↣1 ?Φ. This protocol allows us to transfer any resource, including the ownership assertion for c2. Thus, channel ownership for the channel would be stored inside itself, and we would have a leak. We strengthen our invariant to ensure there cannot be any cyclic ownership between channels:

Strong channel acyclicity (final version):
There exists a connectivity graph of channel ownership assertions, where there is:
• an edge from a thread t to a channel c if t holds a channel ownership assertion e ↣1 p for an endpoint e of c;
• an edge from a channel c to a channel c′ if c contains a message with an associated channel ownership assertion e′ ↣1 p for an endpoint e′ of c′.
This graph must be strongly acyclic.
Note that channel ownership should not be confused with having a reference to a channel. A thread can have a reference to a channel without having channel ownership for that channel, and a thread can have channel ownership for a channel without having a reference to it.
We can now understand deadlocks and leaks in terms of the connectivity graph:
• Deadlock. In order for a thread to be able to perform a receive or wait operation, it must have channel ownership for the channel that it is receiving from. Therefore, if we have a deadlock in which threads are blocked on each other in a circular manner, then there must be a cycle of threads and channels in the connectivity graph.
• Leak. If, after the program has terminated, there are still channels in the heap, then the channel ownership for them must be stored inside each other in a circular manner, and then there must be a cycle of channels in the connectivity graph.

Preserving the Invariant
We now discuss how the invariant is preserved by virtue of our program logic rules. The property of channel fulfillment is preserved by the fact that we work in a linear logic. The property of channel duality is preserved by the fact that we force channels to be dual when allocated. The property of strong channel acyclicity is more intricate, as the connectivity graph must be updated as the program executes. In Fig. 6, we show how the connectivity graph transforms due to each of the one-shot channel operations:
• Fork. When thread t1 does a fork operation, it adds a new thread t2 to the connectivity graph, and connects it to the original thread via a channel c. The two edges to the channel are labeled with dual protocols p and p̄. The original thread t1 originally owned separation logic resources P ∗ Q, which may contain ownership of other channels (and mutable references, which we ignore here). This is represented as edges from t1 to the owned channels. We let P be the channel ownership that is transferred to the new thread t2, while Q is the part that thread t1 keeps for itself. Due to this split of ownership, the fork operation corresponds to a modification of the graph, as shown in the figure. Crucially, if the original graph is strongly acyclic, the resulting graph is still strongly acyclic. Note that this relies on the separation P ∗ Q: if we had a channel ownership assertion that occurred both in P and in Q, then the resulting graph would not be strongly acyclic.
• Send. When thread t1 performs a send operation on a channel c with protocol !Φ, it must provide the resources Φ v, where v is the value it wants to send. The resources Φ v get transferred to the channel, and the thread loses its connection to the channel, because it is one-shot. Therefore, the send operation corresponds to a modification of the graph, as shown in the figure, and the reader can check that strong acyclicity is preserved.
• Receive. When thread t2 performs a receive operation on a channel c with protocol ?Φ, it receives a value v and the resources Φ v from the channel. The channel gets deallocated and removed from the graph, because the channel is one-shot. If the thread initially owned resources P, then afterwards it owns resources Φ v ∗ P. Note that these resources are separated; this relies crucially on the acyclicity of the graph before the receive operation: if thread t2 already had channel ownership for some channel c′, and additionally got a second channel ownership assertion for c′ via Φ v, then the original graph would not have been strongly acyclic.
In short, the proof rules of LinearActris ensure that strong acyclicity of the connectivity graph is preserved, and thus its adequacy theorem can ensure that the program is deadlock and leak free. In the next section, we will give an overview of how this is proved formally.

FORMAL ADEQUACY PROOF
In this section we give a formal overview of our adequacy proof. We first give a model of the propositions aProp of LinearActris by solving a recursive domain equation in a step-indexed universe of sets [America and Rutten 1989; Birkedal et al. 2010] (§6.1). We then define the invariant that captures the properties presented in §5, which we use in the adequacy proof (§6.2). We then define the LinearActris weakest precondition, such that it enforces the invariant (§6.3). Finally, we sketch how the weakest precondition rules and the adequacy theorem are proved (§6.4).

The Step-Indexed Model of Propositions
To map the intuition of the previous section to a formal model of separation logic, we first give the semantics of the type of propositions. This means we need to define a type aProp with the usual separation logic operators and the connectives c ↣1 p ∈ aProp and ℓ ↦→ v ∈ aProp. These connectives assert ownership of outgoing edges to a channel or a heap location in the connectivity graph. To define aProp, we solve the following recursive domain equation:

aProp ≅ (Node ⇀fin (▶ Prot + Val)) → siProp

Before discussing the technicalities (siProp, ▶), let us provide the intuition behind this definition. Recall from §5 that the nodes of the connectivity graph are locations (which can either be channels or references, collectively called cells) or threads. Formally, we let n ∈ Node ::= Cell(ℓ) | Thread(tid) with ℓ ∈ Loc and tid ∈ N. There is a directed edge from node n1 to n2 in the connectivity graph if n1 owns n2. Edges are labeled with either a protocol (inl p) in case of ownership of a channel Cell(c), or the value of the reference (inr v) in case of ownership of a reference Cell(ℓ). A LinearActris proposition should always be seen in the context of a particular thread or channel, where the outgoing edges describe which nodes the thread or channel owns. Threads Thread(tid) are never owned, but are included for conformity with the graph.
The type aProp is not well-defined as an inductive or coinductive definition in the category of sets, because the recursive occurrence of aProp is in negative position. That is why we use the results by America and Rutten [1989] and Birkedal et al. [2010] to solve the recursive domain equation using step-indexing [Appel and McAllester 2001]. The use of step-indexing is evident from the use of (pure) step-indexed propositions (siProp) as our meta logic, and the use of the ▶ constructor, which guards the recursion. This construction is similar to how the model of Iris is constructed, with the crucial difference that Iris considers monotone predicates to obtain an affine logic.
With this definition at hand, we can define the connectives of our separation logic. We have glossed over several technical details here, such as the embedding of Prot into ▶ Prot, and the fact that the right-hand sides of these definitions live in the step-indexed logic siProp. We refer the interested reader to our Coq mechanization for the full details [Jacobs et al. 2023a].

The Invariant
The invariant is defined in terms of a connectivity graph, which is a labeled directed graph that is strongly acyclic. Recall that the nodes are the logical objects Node in the configuration. The incoming edges of a channel are labeled by the protocols !Φ and ?Φ appearing in the channel ownership assertions c ↣1 p. The incoming edge of a mutable reference is labeled by the value of the reference appearing in the reference ownership assertion ℓ ↦→ v.
The invariant I(ρ) on a configuration ρ is therefore defined as follows:

I(ρ) ≜ ∃G : CGraph. ∀n. local_inv(ρ[n], in_labels_G(n), out_edges_G(n))

Here, in_labels_G(n) is the multiset of labels on incoming edges of n, and out_edges_G(n) is a finite map of outgoing edges of n (as in Jacobs et al. [2022a]). Furthermore, ρ[n] looks up the physical state associated to the node n in configuration ρ. Given ρ = (⃗e, h), the value of ρ[n] is:
• Expr(e) if n = Thread(tid) and e is the expression with thread ID tid in ⃗e,
• Ref(v) if n = Cell(ℓ) and v is the value of the mutable reference at location ℓ in heap h,
• Chan(⊥) if n = Cell(c) and no value has been sent to the channel at location c in heap h,
• Chan(v) if n = Cell(c) and value v has been sent over the channel at location c in heap h,
• ⊥ if n is not a valid thread in ⃗e or cell in h.
The definition of I states that there is a connectivity graph G (that is strongly acyclic), and that for every node n, the local invariant local_inv holds. The local invariant constrains the relation between the physical state of the node n and the incoming and outgoing edges of n in the graph, and thus relates the graph to the configuration. The local invariant for a thread Expr(e) states that the incoming edges E are empty, and that we have a weakest precondition for e, which owns the outgoing edges Σ (see the end of §6.3 for the difference between the primitive WP0 e {Φ} and the frame-preserving WP e {Φ}). The local invariant for a reference Ref(v) states that the incoming edges E are the singleton multiset containing inr v, and that the outgoing edges Σ are empty. The local invariant for an empty channel Chan(⊥) states that the incoming edges E contain exactly the dual protocols !Φ and ?Φ, and that the outgoing edges Σ are empty. The local invariant for a channel Chan(v) with value v states that the incoming edges E are the singleton containing the receiver protocol ?Φ, and that the outgoing edges Σ are owned by the protocol predicate Φ v. The local invariant for a logical object that does not exist in the physical configuration states that the incoming edges E and outgoing edges Σ are empty.

Weakest Preconditions
We have now defined the invariant, but we still need to define the weakest precondition, which is the main challenge of proving adequacy. To do so, we first define a partial invariant I•(ρ, tid, Σ), which states that the invariant I holds for all threads and channels in the configuration ρ, except for the thread tid that our weakest precondition is considering. Concretely, I•(ρ, tid, Σ) states that there is a strongly acyclic connectivity graph G, and that for every node n, the local invariant local_inv holds, except for thread tid, for which we require that the incoming edges are empty (as threads are never owned), and that the outgoing edges are Σ.
Using the partial invariant, we define the primitive weakest precondition WP0 e {Φ} by cases depending on whether the expression e is a value or not. If the expression is a value v, then the weakest precondition holds if the predicate Φ holds for the value v (for technical step-indexing reasons, there is a ⋄ modality in front of the predicate, to allow us to remove ⊲ from pure assumptions [Jung et al. 2018b, §5.7]). If the expression e is not a value, then we operate under the assumption that the partial invariant I•((⃗e, h), tid, Σ) holds (under the later modality). We must then show that the expression is either reducible or blocked, expressed by the predicate reducible_or_blocked(e, h, Σ). This means that e can either step in the context of the heap h, or that e is blocked on a receive operation on a channel for which Σ contains the ?Φ protocol. Secondly, we must show that the invariant and weakest precondition are preserved: if e steps to e′, then we must find outgoing edges Σ′ such that the partial invariant holds for the new configuration (⃗e ++ ⃗e_new, h′), and the weakest precondition holds for e′ under Σ′. Here, ⃗e_new is the list of new threads that are spawned by the step, and Σ′ are the new outgoing edges that are owned by the current thread tid.
Recursion. The reader may have noticed that the definition of the weakest precondition WP0 e {Φ} and the partial invariant I•(ρ, tid, Σ) are mutually recursive, and that recursive occurrences appear in negative position. We use Iris's guarded fixed-point operator [Jung et al. 2018b, §5.6], which requires all recursive occurrences to appear under a later modality (⊲).
Framing. The frame rule of separation logic does not hold for WP0 e {Φ}. Thus, to obtain the LinearActris weakest precondition, we define it as the frame-preserving closure [Charguéraud 2020] of the primitive weakest precondition, which satisfies the frame rule. In this definition, there is a later modality (⊲) in front of the frame, but only if e is not a value. This makes sure that we get the step-framing rule of Iris: ⊲ P ∗ WP e {Φ} ⊢ WP e {v. P ∗ Φ v} if e ∉ Val.

Weakest Precondition Rules and Adequacy
With the definition of the weakest precondition connective at hand, we prove the weakest precondition rules of LinearActris. These proofs are relatively complex, as we need to reason about the connectivity graph, and how it is transformed when we perform a step, as shown in Fig. 6.
The adequacy proof (Theorem 5.4) follows the structure sketched in §5, by proving the initialization, preservation, and progress lemmas. For the progress lemma, we use the fact that the connectivity graph is acyclic, which means that we can always find a thread that can step. Formally, we apply the principle of waiting induction [Jacobs et al. 2022a]. We refer the interested reader to the Coq mechanization for the full details [Jacobs et al. 2023a].

SEMANTIC TYPING
This section shows that every program that can be shown to be deadlock free using an advanced session type system can also be shown to be deadlock free using LinearActris. We prove this result using the "logical approach" to semantic typing [Appel et al. 2007; Dreyer et al. 2011; Jung et al. 2018a; Timany et al. 2022]. In short, we translate a typing derivation of ⊢ e : τ in a syntactic session type system into a proof of wp e {⟦τ⟧} in LinearActris, where ⟦τ⟧ ∈ Val → aProp is the semantic interpretation of the syntactic type τ. Due to our adequacy theorem for weakest preconditions (Theorem 5.4), we obtain the corollary that all well-typed programs in the syntactic session type system are deadlock and leak free when executed. This theorem is quite easy to prove, as our program logic does all the heavy lifting.
Our session type system is inspired by the GV family [Wadler 2012; Gay and Vasconcelos 2010], but uses strong updates to track changes to the session types of channels. Moreover, our type system is more expressive than earlier deadlock-free type systems that have appeared in the literature: it supports the combination of session-typed channels with recursive types, subtyping, term- and session-type polymorphism, and unique mutable references.
We give a brief overview of the key ingredients of semantic typing (§7.1) before applying them to session types (§7.2).

Semantic Typing in a Nutshell
A type system involves a set of types τ ∈ Type and a typing judgment ⊢ e : τ (we omit typing contexts for brevity). Conventionally, both notions are defined syntactically, i.e., the set of types is defined as an inductive type where each constructor corresponds to a type former, and the typing judgment is defined as an inductive relation where each constructor corresponds to a typing rule.
The key property of a type system for deadlock freedom is:

Corollary 7.1 (Syntactic type soundness). If the typing judgment ⊢ e : τ is derivable for a base type τ (integer, boolean), then e is deadlock and leak free.
To prove this property using semantic typing, one carries out the following steps: (1) Define the semantic interpretation ⟦τ⟧ ∈ Val → aProp of each syntactic type τ. One should think of ⟦τ⟧ as being the set of values of type τ. However, ⟦τ⟧ is not an ordinary set, but a predicate in separation logic, making it possible to give a meaningful interpretation to types that describe stateful objects, in our case, session types that describe channels.
(2) Define the semantic typing judgment ⊨ e : τ in terms of the interpretation of types. This judgment roughly says that e is deadlock and leak free, and that when e terminates with value v, it satisfies ⟦τ⟧ v. With a program logic at hand, the semantic typing judgment can simply be defined using the weakest precondition, i.e., ⊨ e : τ ≜ wp e {⟦τ⟧}.
(3) Prove the fundamental theorem, which says that every expression that is syntactically typed is also semantically typed. That is, ⊢ e : τ implies ⊨ e : τ. The fundamental theorem is proved by induction on the typing derivation ⊢ e : τ. This means that for each syntactic typing rule (with ⊢) we have to prove a semantic version (with ⊨). The semantic typing rules are proved using the corresponding weakest precondition rules.
The fundamental theorem almost immediately gives syntactic type soundness. Since the semantic typing judgment is defined in terms of weakest preconditions, it gives us deadlock and leak freedom by adequacy (Theorem 5.4). Hence, by composing the fundamental theorem and adequacy, we obtain that every syntactically typed expression is deadlock and leak free, i.e., Corollary 7.1.
To streamline this development, we follow the foundational approach to semantic typing, inspired by Appel and McAllester [2001], Ahmed [2004], Ahmed et al. [2010], and Jung et al. [2018a]. We omit the syntactic definition of the type system, and immediately define types and the typing judgment in terms of their semantics. Type formers are simply combinators on semantic types, and typing rules are simply lemmas about the semantic typing judgment.

Type System
An overview of the key definitions appears in Fig. 7. We omit details about unique mutable references, polymorphism, and copy (a.k.a. unrestricted) types for brevity's sake, and refer the interested reader to our Coq mechanization and the affine type system which we adopted [Hinrichsen et al. 2021], as the details of these aspects are mostly unchanged.
Type formers. The type system consists of two kinds of types, term types and session types. We have the usual linear term type constructs such as any, Z, and τ ⊸ σ, in addition to the channel type chan S, which is parametric in a session type S. We support the usual session types such as !τ. S and ?τ. S, as well as the ones for closing and branching (omitted for brevity's sake).
In a semantic type system, term types are defined as propositions over values (Type ≜ Val → aProp). For example, the type chan S is defined in terms of the channel ownership c ↣ p. Session types S are defined using our dependent protocols p. We use the protocol binders to capture that channels exchange values v for which the term type predicate τ v holds.
Typing judgment. As we work with a language with strong updates, we use a typing judgment Γ ⊨ e : τ ⊨ Γ′ with a pre- and post-context Γ, Γ′ ∈ List(String × Type), similar to RustBelt [Jung et al. 2018a]. Using the post-context, we can track how the types of variables change throughout evaluation.
We use closing substitutions to interpret our typing contexts, as is standard in logical relations models. Closing substitutions γ ∈ String ⇀fin Val are finite partial functions that map the free variables of an expression to corresponding values. Closing substitutions come with a judgment Γ ⊨ γ, which expresses that the closing substitution γ is well-typed in the context Γ. The judgment says that for every typed variable (x, τ) ∈ Γ there is a corresponding value γ(x) in the closing substitution, for which we own the resources τ (γ(x)).
The typing judgment Γ ⊨ e : τ ⊨ Γ′ is defined using our weakest precondition. That is, given a closing substitution γ and resources Γ ⊨ γ for the pre-context Γ, the weakest precondition holds for e (under substitution with γ), with the postcondition stating that the resources τ v for the resulting value v are owned separately from the resources Γ′ ⊨ γ for the post-context Γ′.
Typing rules. In a semantic type system, every typing rule corresponds to a lemma, which states that if the premises hold semantically, then the conclusion holds semantically. These lemmas are proved using the rules of LinearActris: unfolding the typing judgment and the type formers yields goals that are directly provable using the corresponding weakest precondition rules.
Semantic type soundness. As the semantic typing judgment is defined in terms of weakest precondition, we obtain a type soundness theorem as a direct corollary of adequacy (Theorem 5.4). This theorem says that our type system ensures there are no deadlocks and leaks. We obtain a similar type soundness theorem for safety (no illegal non-blocking operations, such as use-after-free) using LinearActris's safety theorem (Theorem 5.5).

COQ MECHANIZATION AND EVALUATION
All definitions, theorems, and examples in this paper have been mechanized in Coq using the Iris framework. The full sources are available in our artifact [Jacobs et al. 2023a].
The components of our mechanization and the corresponding line counts are displayed in Table 1. The definition of ChanLang includes the syntax and small-step operational semantics. LinearActris is built on top of an abstract separation logic modeled as step-indexed predicates over a partial commutative monoid (PCM). The abstract separation logic is adapted from Iris, with the crucial difference that Iris considers monotone predicates to obtain an affine logic, while our logic is linear. The mechanization of connectivity graphs is adapted from Jacobs et al. [2022a], but many lemmas had to be ported to the step-indexed setting (i.e., from Coq's Prop to siProp). To construct the propositions aProp of LinearActris we instantiate our abstract separation logic with the PCM of finite maps with disjoint union, and subsequently use Iris's solver for recursive domain equations.
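The key invariant behind connectivity graphs is that the topology of threads and channels stays acyclic (a forest), which rules out cyclic wait dependencies. A minimal sketch of such an acyclicity check, using a union-find over an undirected graph (our own encoding, far simpler than the step-indexed Coq development):

```python
def acyclic(nodes, edges):
    """Check that an undirected graph is a forest via union-find."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (a, b) in edges:
        ra, rb = find(a), find(b)
        if ra == rb:       # endpoints already connected: this edge
            return False   # would close a cycle
        parent[ra] = rb
    return True

# Two threads linked through one channel node: acyclic.
assert acyclic({"t1", "t2", "chan"}, [("t1", "chan"), ("t2", "chan")])
# A second channel between the same two threads closes a cycle, the
# kind of topology in which a cyclic wait (deadlock) becomes possible.
assert not acyclic({"t1", "t2", "c1", "c2"},
                   [("t1", "c1"), ("t2", "c1"),
                    ("t1", "c2"), ("t2", "c2")])
```

In the mechanization this invariant is not checked dynamically; it is maintained by construction through the graph transformations that each channel operation performs (cf. Fig. 6).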

With these components at hand, we define the weakest precondition connective and prove adequacy for the one-shot logic, and finally define the multi-shot logic and our semantic typing system.
Tactics. Inspired by the original Actris papers, we provide custom tactics for symbolic execution and reasoning about subprotocols. These tactics are built on top of the Iris Proof Mode [Krebbers et al. 2017b, 2018], and are used for the verification of all of our examples.
Basic examples. We mechanize Hoare triples for examples presented in §2, as well as the examples presented in §1 of the Actris 2.0 paper [Hinrichsen et al. 2022]. The proofs of these Hoare triples are the same as those in the original Actris mechanization, but due to our strong adequacy theorem we obtain that these examples satisfy deadlock and leak freedom "for free".
Merge sort examples. We verify the Hoare triples for all merge sort examples from §5 of the Actris 2.0 paper [Hinrichsen et al. 2022]. This includes choice, recursion, higher-order quantification (for generic sorting functions), and delegation. The most advanced version of merge sort recursively creates new child processes for handling the divide-and-conquer part of the merge sort algorithm, and sends the list element-by-element from the parent process to the child processes. We use dependent separation protocols that use the encodings of the choice protocols (⊕, &) presented in Hinrichsen et al. [2022, §5.3]. Given a total order (A, ≤) and an interpretation predicate I : A → Val → aProp, the protocol p_sort expresses that the input list is sent, and the sorted list is sent back. The auxiliary protocol p_head_sort ⃗x is used for sending the input list, where the parameter ⃗x keeps track of the elements that have been sent so far. At every iteration, there is a choice (⊕) between (1) sending more elements, and (2) indicating that the whole input list has been sent. The auxiliary protocol p_tail_sort ⃗x ⃗y is used for receiving back the sorted list, where the parameter ⃗y keeps track of the elements that have been received so far. At every iteration, there is a branch (&) between (1) receiving more elements, and (2) protocol termination. The conditions in the protocol ensure that the elements of ⃗y are returned in sorted fashion, and that, upon termination, the resulting list ⃗y is a permutation of the input list ⃗x.
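The element-by-element communication pattern of this protocol can be mimicked with ordinary Python threads and queues. This is a behavioral sketch only (our own names, a tagged-message encoding of choice/branch, and no logical conditions), not the verified program:

```python
import queue
import threading

def sort_service(inbox, outbox):
    xs = []
    while True:
        tag, v = inbox.get()
        if tag == "elem":        # branch (&): receive another element
            xs.append(v)
        else:                    # branch (&): input list fully received
            break
    for v in sorted(xs):         # send back the sorted list, one by one
        outbox.put(("elem", v))
    outbox.put(("done", None))   # protocol termination

def sort_client(xs):
    to_srv, from_srv = queue.Queue(), queue.Queue()
    t = threading.Thread(target=sort_service, args=(to_srv, from_srv))
    t.start()
    for v in xs:                 # choice: keep sending elements
        to_srv.put(("elem", v))
    to_srv.put(("done", None))   # choice: signal end of input
    out = []
    while True:
        tag, v = from_srv.get()
        if tag == "done":
            break
        out.append(v)
    t.join()                     # service terminates: nothing leaked
    return out

assert sort_client([3, 1, 2]) == [1, 2, 3]
```

What the sketch cannot show is precisely what the dependent protocol tracks: that the returned list is sorted and a permutation of the input is a logical condition carried by the protocol parameters ⃗x and ⃗y, verified statically rather than observed at runtime.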
Semantic typing examples. Inspired by Hinrichsen et al. [2021], we show that the type system from §7 can be used to type check a computation service that uses choice, recursion, and polymorphism. The session type we use says that a client can repeatedly send computations of type () ⊸ τ to a service, which returns the result of forcing the computation. Due to the support for polymorphism, it is possible to send a computation () ⊸ τ with a different type τ in each iteration of the protocol.

RELATED WORK
We first discuss other approaches to prove deadlock freedom (§9.1), then discuss mechanizations of session types (§9.2) and channel implementations (§9.3). Finally, we discuss related work on models of separation logic (§9.4).

Proof Methods for Deadlock Freedom
Linear session types. The GV type system [Wadler 2012; Gay and Vasconcelos 2010] and follow-up work [Lindley and Morris 2015, 2016, 2017; Fowler et al. 2019, 2021] ensure deadlock freedom for a functional language with session types by linearity. Earlier work proved deadlock freedom for a linear π-calculus using a graphical approach [Carbone and Debois 2010]. The deadlock-free SILL of Toninho et al. [2013] and Toninho [2015] embeds session-typed processes into a functional language via a monad. Like GV, the seminal paper by Caires and Pfenning [2010] and Toninho [2015]'s PhD thesis spurred a series of derivatives [Caires et al. 2013; Pérez et al. 2014; Das et al. 2018], in which deadlock freedom is guaranteed by linearity. The contribution of our work is to obtain deadlock freedom from linearity in separation logic instead of a type system.

Multiparty session types. Multiparty session types [Honda et al. 2008, 2016] generalize session types from bidirectional channels to n-to-n channels. To ensure deadlock freedom, multiparty session type systems use a consistency check that generalizes the duality condition of binary session types. The consistency check can be performed via projections of a global type, or via an explicit check on a collection of local types [Scalas and Yoshida 2019]. Purely multiparty approaches generally assume a static topology, and thus do not support dynamic creation of threads and channels. This makes them orthogonal, in the programs they can establish to be deadlock free, to linear binary session types (hybrid approaches exist, see below).
Lock orders. Dijkstra originally proposed lock orders as a mechanism to ensure deadlock freedom for his Dining Philosophers problem [Dijkstra 1971]. Lock orders have been incorporated into a number of verification tools and separation logics that support proving deadlock freedom, for example [Leino et al. 2010; Le et al. 2013; Zhang et al. 2016; Hamin and Jacobs 2018]. Lock-order based approaches are orthogonal in expressive strength compared to session types. For instance, it is far from clear how to build a logical relation for a language with session types in terms of a separation logic with lock orders. In the session-typed source language, deadlock freedom is ensured by linearity, and it does not seem possible to translate this into order-based reasoning in the target program logic. Since session types do not have order obligations, it is not clear how the order conditions on the receive operations are justified.
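The lock-order discipline itself is easy to illustrate: assign every lock a level and only allow acquisitions in strictly increasing level order, so no cyclic wait can form. The sketch below (our own names, with a dynamic check standing in for what verification tools establish statically) is unrelated to the paper's linearity-based approach:

```python
import threading

class OrderedLock:
    """A lock with a fixed level; each thread may only acquire locks
    in strictly increasing level order, ruling out cyclic waits."""
    _held = threading.local()  # per-thread stack of held levels

    def __init__(self, level):
        self.level = level
        self._lock = threading.Lock()

    def acquire(self):
        held = getattr(OrderedLock._held, "levels", [])
        if held and held[-1] >= self.level:
            raise RuntimeError("lock order violation")
        self._lock.acquire()
        OrderedLock._held.levels = held + [self.level]

    def release(self):
        self._lock.release()
        OrderedLock._held.levels = OrderedLock._held.levels[:-1]

a, b = OrderedLock(1), OrderedLock(2)
a.acquire(); b.acquire()        # increasing order: permitted
b.release(); a.release()
try:
    b.acquire(); a.acquire()    # decreasing order: rejected
except RuntimeError as e:
    b.release()
    assert "violation" in str(e)
```

Since every thread acquires along the same global order, two threads can never each hold a lock the other is waiting for, which is exactly the cyclic-wait pattern of the Dining Philosophers deadlock.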
LiLi [Liang and Feng 2016, 2018] is a program logic for proving liveness (and therein deadlock freedom) of concurrent objects using a method called definite actions, which is conceptually similar to lock orders. The definite actions act as obligations that can be used to impose an order of use on multiple (encoded) blocking operations, such as acquisitions and releases of locks. The TaDA Live separation logic for proving liveness of concurrent programs [D'Osualdo et al. 2021] uses a similar concept called layers. Similar to LiLi, and thereby lock orders, these layers form a hierarchy of conditions, which can be used to encode an ordering between locks.
Choreographies. Choreographic languages [Montesi 2021; Cruz-Filipe et al. 2021a,b] allow the programmer to write a global program that is automatically split into local programs that communicate via channels, for which deadlock freedom is guaranteed by construction. Since choreographies are based on program generation, they are very different from our approach.
Usages and obligations. Yet another mechanism for deadlock freedom is usages and obligations [Kobayashi 1997; Igarashi and Kobayashi 1997; Kobayashi et al. 1999; Igarashi and Kobayashi 2001; Kobayashi 2002; Igarashi and Kobayashi 2004], which ensure that channels are used in a non-deadlocking order. In contrast to lock orders, the priority involved in usages and obligations always increases in the order. These mechanisms have also been extended to session-typed languages [Dardha and Gay 2018]. Similar to lock orders, usages and obligations entail additional proof obligations, and as such, are orthogonal to obtaining deadlock freedom from linearity.
Hybrid approaches. Message passing has been extended with locks and sharing [Benton 1994; Villard et al. 2009; Reed 2009b; Lozes and Villard 2011, 2012; Pfenning and Griffith 2015; Balzer et al. 2018, 2019; Hinrichsen et al. 2020; Qian et al. 2021; Rocha and Caires 2021, 2023; Jacobs and Balzer 2023]. Some of these approaches ensure termination, deadlock, and/or leak freedom, e.g., via lock orders, linearity, or other checks. Multiparty session types have been combined with linearity to guarantee progress beyond one session [Carbone et al. 2015, 2016, 2017; Jacobs et al. 2022b]. In this paper we used bidirectional channels (built on top of one-shot channels) as the sole concurrency primitive. In future work, we would like to add locks and multiparty session types, inspired by the preceding work (§10).

Mechanized Session Types
Hinrichsen et al. [2021] use Actris to prove soundness of a session type system via the method of semantic typing, inspired by RustBelt [Jung et al. 2018a]. We follow a similar approach, but in addition to proving type safety, we prove deadlock and leak freedom. Thiemann [2019] proves type safety of a linear GV-like session type system using dependent types in Agda; Rouvoet et al. [2020] streamline this approach via separation logic. Goto et al. [2016]; Ciccone and Padovani [2020]; Castro-Perez et al. [2020]; Reed [2009a]; Chaudhuri et al. [2019] mechanize π-calculus with session types. These works generally show safety, but Jacobs et al. [2022a]'s Coq mechanization shows deadlock freedom. We generalize their approach of connectivity graphs to the context of separation logic. Lastly, Castro-Perez et al. [2021]; Jacobs et al. [2022b] mechanize multiparty session types.

Verification of Message-Passing Implementations
While channels are a primitive of our operational semantics, others have verified message-passing implementations that use atomic primitives, such as compare-and-swap or atomic-exchange. Mansky et al. [2017] verify a message-passing system written in C using VST [Appel 2014; Cao et al. 2018]. Tassarotti et al. [2017] prove the correctness of a compiler for an affine session-typed language, showing that the target terminates iff the source program terminates (under fair scheduling assumptions). In the future, we would like to implement our channels using atomic primitives. In this setting, it is less clear how to formulate the adequacy theorem. As low-level implementations of channels perform busy loops, we would need to model deadlock freedom as a liveness property such as progress under fair scheduling (§10).
Recent work applies Actris to obtain reliable message-passing specifications for channels built on top of UDP-like primitives [Gondelman et al. 2023].Similarly to the shared memory setting, the implementation busy loops until a message has been successfully transferred over the unreliable network, which can only be guaranteed under fair scheduling and a fair network.

Linear Models of Separation Logic
The original presentations of sequential separation logic [O'Hearn et al. 2001] and concurrent separation logic (CSL) [O'Hearn 2004; Brookes 2004] use a linear model. For sequential separation logic, linearity gives leak freedom, and with scoped CSL-style invariants this scales to concurrent programs that use parallel composition. When extending the language with more general invariant mechanisms that support unscoped thread creation [Hobor et al. 2008; Svendsen and Birkedal 2014], the situation becomes more complicated. Jung [2020, Thm 2] shows that linearity alone does not give leak freedom, and that other mechanisms are needed. The Iron logic [Bizjak et al. 2019] provides such a mechanism: by disallowing deallocation permissions in invariants, leak freedom can be obtained. Unfortunately, ownership of the end protocol needs to include permission to deallocate the channel, making Iron's invariants insufficient for higher-order session types.
While all resources in Iris are affine and all resources in LinearActris are linear, there have been various efforts to construct hybrid models of separation logic that have both linear and affine resources [Tassarotti et al. 2017; Cao et al. 2017; Krebbers et al. 2018; Mansky 2022]. Typically they use some form of partial commutative monoid equipped with an order that specifies which resources can be dropped. The model of LinearActris is an instance of the step-indexed ordered resource algebra model by Krebbers et al. [2018], taking the order to be the reflexive relation, meaning no resources can be dropped. An interesting direction for future work is to add a notion of ghost state to LinearActris, for which these hybrid models could be useful.

CONCLUSION AND FUTURE WORK
The key strength of LinearActris is deadlock and leak freedom "for free" from linearity, while being otherwise very close to the original Actris logic [Hinrichsen et al. 2020, 2022]. As such, we are able to verify example programs that use higher-order channels (sending channels over channels as messages), higher-order functions (passing closures as arguments to other closures, and sending closures over channels as messages), and higher-order store (sending references over channels as messages), as illustrated in §2. As an added benefit of LinearActris being close to the Actris logic, we were able to port most of the examples from the original Actris papers to LinearActris (§8).
We now discuss some limitations of LinearActris and directions for future work.
Asynchronous subtyping. Actris 2.0 [Hinrichsen et al. 2022] supports asynchronous subtyping of channels, which allows the subtyping rule ?⟨v⟩; !⟨w⟩; p ⊑ !⟨w⟩; ?⟨v⟩; p. This rule allows the user of a channel to perform send steps ahead of time. The reason this rule is sound in the original Actris framework is that channels have two separate buffers, one for sending and one for receiving. In the LinearActris logic, we only have one buffer, and messages must enter this buffer in the order specified in the protocol; hence we cannot support asynchronous subtyping. We believe we could support asynchronous subtyping if we added a second buffer to channels. However, this would introduce complications that are orthogonal to the main contributions of this paper, as we could no longer use the single-shot buffer encoding of channels by Jacobs et al. [2023b].
Other concurrency constructs. LinearActris is designed for message-passing concurrency, and does not support other concurrency constructs such as locks, semaphores, or monitors. The original Actris logic supports these constructs; in particular, it employs locks to model sharable channels, inspired by manifest sharing in session types [Balzer et al. 2018, 2019]. In the original Actris, these constructs are implemented using busy loops and verified using Iris's mechanisms for ghost state and invariants. When stating deadlock freedom using global progress, it is significantly more complicated to add other concurrency constructs. To ensure that deadlocks can be distinguished from ordinary loops, one would need to add such constructs as primitive blocking operations, and they need to be explicitly handled as part of the connectivity graph. In future work, we would like to pursue this direction. Our reason for believing this to be possible is that the connectivity graph-based approach to deadlock freedom has been designed to be flexible in the kinds of concurrency constructs, and has already been applied to a type system for locks [Jacobs and Balzer 2023].
Multiparty communication. We would like to extend the LinearActris logic with multiparty communication inspired by multiparty session types [Honda et al. 2008]. In prior work, Jacobs et al. [2022b] used connectivity graphs to prove deadlock freedom of a session type system that combines GV-style dynamic thread and channel spawning with multiparty session types. However, we believe that extending these results to separation logic is non-trivial, even without considering deadlock and leak freedom. In particular, it is not clear how global types could be generalized to Actris-style dependent separation protocols.
Liveness. LinearActris guarantees deadlock freedom, but does not guarantee liveness. Deadlock freedom (stated as global progress, the standard way of formulating this property in the session types literature [Caires and Pfenning 2010; Carbone and Debois 2010; Wadler 2012]) means that the program as a whole cannot get stuck on message receives indefinitely, but does not guarantee that the program will eventually terminate or produce a result. In particular, deadlock freedom does not rule out infinite loops written by the user. To guarantee liveness, one needs to prove that loops in the program eventually terminate or produce a result that counts as progress, and prove that the program cannot get stuck in other ways, such as by waiting for a message that will never arrive. In future work, we plan to investigate whether the LinearActris logic can be extended to guarantee liveness for higher-order message passing, by taking inspiration from liveness logics such as LiLi [Liang and Feng 2016] and TaDa Live [D'Osualdo et al. 2021], and existing work on termination and liveness in Iris [Tassarotti et al. 2017; Spies et al. 2021].
Iris invariants. We hope that our work can be a step towards bringing deadlock and leak freedom to full-fledged separation logics for fine-grained concurrency with Iris-style impredicative invariants [Svendsen and Birkedal 2014]. Recent progress has been made for leak freedom [Bizjak et al. 2019], and for termination as well as termination-preserving refinement [Spies et al. 2021; Tassarotti et al. 2017]. Nevertheless, key challenges related to Iris-style invariants remain. As channels can be seen as a particular type of invariant, we hope that our connectivity graph approach can be generalized, e.g., to a linear form of invariants that are compatible with leak and deadlock freedom.

Fig. 2. The basic rules of our separation logic.

Fig. 5. Multi-shot channels and protocols in terms of one-shot channels and protocols.

Fig. 6. The one-shot channel operations and the corresponding connectivity graph transformations.

Fig. 7. Typing judgments and type formers of the semantic type system.
Storing channels in references. Consider the following variation of the previous example, in which we wrap channel c1 in a reference. LinearActris can reason about higher-order programs that send closures, which capture references and channels. Consider the following program, which spawns a thread that receives and runs a closure from the main thread, and then sends the result back:

let c1 = fork (λc2. let f = c2.recv() in c2.send(f ()); c2.close()) in . . .

• WP-send: This rule is used to verify a send operation. The rule states that if we have channel ownership c ↣ ! (⃗x)⟨v⟩{P}; p, then we can choose an instantiation of the binders ⃗x, give up ownership of P, and send the value v, after which we own c ↣ p.
• WP-close: This rule is used to verify a close operation. The rule states that if we have channel ownership c ↣ !end{P}, then we can close the channel. We must also provide ownership of the proposition P, which is transmitted to the other side.
• WP-wait: This rule is used to verify a wait operation. The rule states that if we have channel ownership c ↣ ?end{P}, then we can wait on the channel, and afterwards we obtain P.
The local invariant for a reference Ref(v) states that the incoming edges are the singleton multiset with value v.

Proc. ACM Program. Lang., Vol. 8, No. POPL, Article 47. Publication date: January 2024.

Table 1. Overview of the LinearActris Coq mechanization.