Explicit Refinement Types

We present λert, a type theory supporting refinement types with explicit proofs. Instead of solving refinement constraints with an SMT solver, as DML and Liquid Haskell do, our system requires and permits programmers to embed proofs of properties within the program text, letting us support a rich logic of properties including quantifiers and induction. We show that the type system is sound by showing that every refined program erases to a simply-typed program, and, by means of a denotational semantics, we show that every erased program has all of the properties demanded by its refined type. All of our proofs are formalised in Lean 4.


INTRODUCTION
Refinement typing extends an underlying type system with the ability to associate types with logical predicates on their inhabitants, constructing, for example, the type of nonempty lists or of integers between three and seven. Most such systems support type dependency between the input and output types of functions, allowing us to give specifications like "the output of this function is an integer greater than the input." The core concept underlying type checkers for refinement types is that of logical entailment. If we have a value satisfying a predicate φ and require a value satisfying ψ, our program typechecks only if φ entails ψ; this reduces the typechecking problem to typechecking in our underlying type system plus discharging a set of entailment obligations, called our verification conditions. Given an appropriate choice of logic for our predicates, SMT solvers provide a highly effective way of automatically discharging verification conditions: this allows us to use refinement types without dealing with the bookkeeping details required by manual proofs. Combined with type and annotation inference (in which we also infer refinements), refinement types allow verifying nontrivial properties of complex programs with a minimal annotation burden, making them more appealing for use as part of a practical software development workflow; for example, the Liquid Types implementation of refinement typing for ML required a manual annotation burden of 1% to prove the DML array benchmarks safe [Rondon et al. 2008; Xi and Pfenning 1998].
However, refinement typing's reliance on automation is a double-edged sword: while it makes refinement types practical for usage in real-world contexts, it also creates a hard ceiling for expressiveness, especially if we want to use quantifiers, which would make the SMT problem undecidable.
Rather than carefully massaging annotations into forms satisfying the particular SMT solver that is in use, it makes sense to let programmers provide manual proofs for the (hopefully, few) cases where the solver gets stuck: in utopia, humans would prove interesting things, and machines would handle the bookkeeping.
However, it is nontrivial to figure out how to add explicit proofs to such a system, because the semantics of refinement type systems is subtle. So before we can add the capability to move freely between explicit proofs and automation, we need to know what manual proofs should look like! To achieve this goal, we introduce a system of explicit refinement types, λert, in which all proofs are manual, to explore the design space. Since our proofs are entirely manual, our refinement logic can be extremely rich; in particular, we support full first-order logic with quantifiers.
While the presence of explicit proofs means that functions and propositions look dependently typed, λert is not a traditional dependent type theory. In particular, we maintain the refinement discipline that "fancy types" can always be erased, leaving behind a simply-typed skeleton. Furthermore, λert has no judgemental equality: there is no reduction in types.

Contributions
• We take a simply-typed effectful lambda calculus, stlc (in section 5.1), and add a refinement type discipline to it, to obtain the λert language (in section 4).
• We support a rich logic of properties, including full first-order quantifiers, as well as ghost variables/arguments (see section 4.2). The system does not rely on automated proof, and instead features explicit proofs of all properties.
• We show (in section 4.3) that this type system satisfies the expected properties of a type system, such as the syntactic substitution property.
• In section 5, we give a denotational semantics for both the simply-typed and refined calculi, and we prove the soundness of the interpretations. Using this, we establish semantic regularity of typing, which shows that every program respects the refinement discipline.
• We describe our mechanization of the semantics and proofs in Lean 4 in section 6. The proofs are available in the supplementary material.

REFINEMENT TYPES

Pragmatics of Refinement Types
Consider a simply-typed functional programming language. In many cases, the valid inputs for a function are not the entire input type but rather a subset of it; for example, consider the classic function head : List A → A.
Note that we cannot define this as a total function for an arbitrary type A; we must either provide a default value, change the return type, or use some system of exceptions/partial functions.
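One of the options above, changing the type of head so the caller must supply evidence of nonemptiness, can be sketched in Lean 4 (the language of our mechanisation); the name headPre is ours:

```lean
-- head made total by demanding an explicit proof of nonemptiness.
def headPre {A : Type} (ℓ : List A) (h : ℓ ≠ []) : A :=
  match ℓ, h with
  | [], h     => absurd rfl h  -- contradiction: h says the list is nonempty
  | a :: _, _ => a
```

The empty-list branch is unreachable, and the proof argument is what lets us discharge it.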
Refinement types give us native support for types guaranteeing an invariant about their inhabitants by allowing us to associate a base type with a predicate. For example, we could represent the types of natural numbers and nonzero integers (in a Liquid-Haskell-like notation) as:

type Nat = {v: Int | v >= 0}
type Nonzero = {v: Int | v <> 0} (1)

We could then write the type signature of division as

div : Int → Nonzero → Int (2)

By allowing dependency, where the output type of a function is allowed to depend on the value of its arguments, we can encode the specification of operations as well, as in

eq : x:Int → y:Int → {b: Bool | (x = y) = b} (3)

Note that these types all use equality and inequality. This is not the judgemental equality of dependent type theory, but rather a propositional equality that can occur inside types. We could now attempt to implement a safe-division function on the natural numbers as follows:

safeDiv : Nat → Nat → Nat
safeDiv x y = if eq y 0 then 0 else div x y (4)

A refinement type system reduces the problem of type-checking to the question of whether a set of logical verification conditions holds. In this case, all of the propositions are decidable. Unfortunately, with more general specifications, checking these conditions quickly becomes undecidable.
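For comparison, the same subset-type discipline can be sketched in Lean 4, with the proof obligation discharged explicitly rather than handed to an SMT solver (div' is our own illustrative name):

```lean
-- A nonzero divisor as a subset type: a value paired with an explicit proof.
def div' (x : Int) (y : { n : Int // n ≠ 0 }) : Int := x / y.val

-- The caller discharges the obligation explicitly (here by `decide`),
-- where a refinement type checker would emit a verification condition:
#eval div' 10 ⟨2, by decide⟩  -- 5
```

The anonymous constructor ⟨2, by decide⟩ plays the role of the subset-type introduction form discussed later in the paper.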
For example, if we permitted equations between arbitrary polynomials, then Hilbert's tenth problem (the solvability of Diophantine equations) could easily be encoded. However, by appropriately restricting the logic of assertions, we can reduce the problem of checking verification conditions to a decidable fragment of logic that, while NP-hard (or worse), can usually be solved very effectively in practice.
In particular, many languages supporting refinement types require refinements to be expressions in the quantifier-free fragment of first-order logic, with atoms restricted to carefully chosen ground theories with efficient decision procedures. In Liquid Haskell, for example, refinements are restricted to formulas in QF-UFLIA [Vazou et al. 2014]: the quantifier-free theory of uninterpreted functions with linear integer arithmetic. While such a system may, at first glance, seem very limited, it is possible to prove very sophisticated properties of real programs with it. In particular, with clever function definitions and termination checking (which generates a separate set of verification conditions to ensure some measure on recursive calls to a function decreases), it is even possible to perform proofs by induction in a mostly-automated fashion [Jhala and Vazou 2020].
However, it can be challenging to formulate definitions in such a limited fragment of logic, and many properties (such as monotonicity of functions or relational composition) require the use of quantifiers to be stated naturally. Unfortunately, attempts to extend solvers to support more advanced features like quantifiers usually lead to very unreliable performance.
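For instance, monotonicity is a quantified statement that falls outside QF-UFLIA; with explicit proofs it is a short induction over the derivation of ≤, sketched here in Lean 4 (the theorem name is ours):

```lean
-- If f never decreases across one step, it is monotone everywhere.
theorem mono_of_step (f : Nat → Nat) (h : ∀ n, f n ≤ f (n + 1)) :
    ∀ m n, m ≤ n → f m ≤ f n := by
  intro m n hmn
  induction hmn with
  | refl => exact Nat.le_refl _            -- base case: f m ≤ f m
  | step _ ih => exact Nat.le_trans ih (h _)  -- extend by one step of h
```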
Another issue is that systems designed around off-the-shelf solvers often do not satisfy the de Bruijn criterion: their trusted code base, rather than being restricted to a small, easily-trusted kernel, often consists of an entire solver and its associated tooling! Unfortunately, SMT solvers are incredibly complex pieces of software and hence are very likely to have soundness bugs, as recent research on fuzzing state-of-the-art solvers like CVC4 and Z3 shows [Winterer et al. 2020].
In our work, we build a refinement type system that replaces the use of automated solvers with explicit proofs. This permits (at the price of writing proofs) the free use of quantifiers in propositions, while also having a trusted codebase small enough to be formally verified.

Semantics of Refinement Types
To understand what refinement types are semantically, it is worth recalling that there are two main styles of doing denotational semantics, which Reynolds dubbed "intrinsic" and "extrinsic" [Reynolds 2003].
The intrinsic style is the usual style of categorical semantics: we find some category where types and contexts are objects, and a term Γ ⊢ e : A is interpreted as an element of the hom-set Hom(Γ, A). (Dependency does make things more complicated technically, but not in any conceptually essential fashion.) In the intrinsic style, only well-typed terms have denotations, and ill-typed terms are grammatically ill-formed and do not have a semantics at all. Another way of putting this is that intrinsic semantics interprets typing derivations rather than terms.
In the extrinsic style, the interpretation function is defined on raw terms, and ill-typed programs do get a semantics. Types are then defined as retracts of the underlying semantic object interpreting the raw terms. A good example of this style of semantics is Milner's proof of type safety for ML in [Milner 1978]: he gave a denotational semantics for an untyped lambda calculus, and then gave an interpretation of types by logical relations over this untyped calculus, which let him use the fundamental lemma to extract type safety. Logical relations (and realizability methods in general) are typical examples of extrinsic semantics. Normally type theorists elide the distinction between these two styles of semantics, because we prove coherence: that the same term cannot have two derivations with different semantics. This is called the "bracketing theorem" in Reynolds [2003] and Melliès and Zeilberger [2015].
Refinement types put these two styles of semantics together in a different way. We start with an unrefined base language with an intrinsic semantics, and then define a second system of types as a family of retracts over the base types. In λert, we use a monadic lambda-calculus as the base language, and define our system of refinements over this calculus, with the refinements structured as a fibration over the base calculus. We deliberately avoid foregrounding the categorical machinery in this paper to make it more accessible, though we do direct readers to Melliès and Zeilberger [2015]'s POPL paper, which deeply influenced our design.
Another, perhaps more familiar, instance of this framework is Hoare logic (and separation logic), which can be understood as a refinement type system over a base "uni-typed" imperative language. In fact, the distinction in Hoare logic between "logical variables" (which appear only in specifications) and program variables is, in semantic terms, precisely the distinction between logical and computational terms in λert, which we will discuss further in the next section.

EXPLICIT REFINEMENT TYPES
In this section, we introduce our system of explicit refinement types, λert, which we construct by enriching the simply-typed lambda calculus with proofs, intersection types, and union types. We recover a computational interpretation of terms in our calculus by recursively erasing the logical information added, yielding back simply-typed stlc terms. The presence of proofs allows us to track logical facts and, by pairing a term with a proof of its property, form subset types. We also support a general form of intersection and union type, which allows us to pass around terms used only in proofs in a way that is guaranteed to be computationally irrelevant; this is both a significant performance concern and allows type signatures to express the programmer's intent more clearly. For example, consider the following definition of a vector type:

Vec A n ≡ {ℓ : List A | len ℓ = n} (5)

In particular, we define a vector of length n as a list paired with the information that the list has length n. That is, a vector is a subset type: a base type List A paired with a proposition φ(ℓ) (here len ℓ = n), which we interpret as containing all elements ℓ of the base type satisfying φ(ℓ). In general, we write such types as {x : A | φ(x)}, and introduce them with the form {a, p}, where a is of type A and p is a proof of φ(a). We define a length function on vectors as follows:

Vec.len : ∀n : N, Vec A n → N
Vec.len ≡ λ‖n : N‖, λ v : Vec A n, let {ℓ, _} = v in len ℓ (6)

Let us break this definition down, starting from the signature. Vec.len begins by universally quantifying over a natural number n with the quantifier ∀n : N, and then has a function type Vec A n → N. We can interpret this as saying that for every n, Vec.len takes vectors of length n to a natural number.
The ∀ quantifier in the type of Vec.len behaves like the quantifiers in ML-style polymorphism, rather than the pi-type of dependent type theory. It indicates that the same function can handle vectors of any length, with no explicit branching on the length. In contrast, a pi-type lets the function body compute with the natural-number argument.
An explicit binding, written λ‖n : N‖, represents this quantifier in the actual definition; we call a variable binding surrounded by double bars (e.g., ‖x : A‖) a ghost binding. Moving inwards, we have a function type with input Vec A n and output N; this corresponds to the lambda-expression "λ v : Vec A n, e" in the definition, where e ≡ let {ℓ, _} = v in len ℓ is an expression of type N. Breaking down e, we must first explicitly destructure our vector v into its components: a list ℓ and a proof that ℓ has length n. Unlike in refinement type systems like DML and Liquid Haskell, this is explicit, with no entailment-based subtyping.
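The same shape can be sketched in Lean 4 with a subtype standing in for the subset type (Vec and Vec.len here are our own definitions, mirroring but not reproducing λert's syntax):

```lean
-- A vector as a refined list: a List A together with a proof about its length.
def Vec (A : Type) (n : Nat) : Type := { ℓ : List A // ℓ.length = n }

-- As in λert, the index n plays a ghost role: the body destructures the
-- pair and traverses the list, rather than returning n directly.
def Vec.len {A : Type} {n : Nat} (v : Vec A n) : Nat :=
  match v with
  | ⟨ℓ, _⟩ => ℓ.length
```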
The definition in equation 6 ignores the proof component of the let-binding, and hence the refinement information carried by our type system. One way to use the proof information would be to give Vec.len a signature which promises to return a number equal to the length:

Vec.len : ∀n : N, Vec A n → {m : N | m = n} (7)
However, one helpful feature of λert is that even in this case, we can prove facts about our definitions by writing freestanding proofs about them, rather than cramming every possible fact we could want into our type signature. For example, we could give a proof of the proposition that the definition in equation 6 is correct as follows:

Vec.len_def : ∀n : N, ∀v : Vec A n, Vec.len v = n

Let us break this definition down, again starting with the signature. We quantify over both the length n : N and the vector v : Vec A n, and then assert the equality proposition Vec.len v = n. In the proof, both universal quantifiers are introduced by ghost lambdas "λ̂ n : N" and "λ̂ v : Vec A n". However, we use λ̂ instead of λ because we are proving a universally quantified proposition rather than forming a term of intersection type. The proof of the equality is a term of the form trans[...], which is an Agda-style syntactic sugar for equational reasoning. In particular, if trans : a = b → b = c → a = c is the transitivity rule, then trans[a₀ =(p₀) a₁ =(p₁) ... aₙ] desugars to a chain of applications of trans, where each pᵢ is evidence of the proposition aᵢ = aᵢ₊₁.
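The analogous freestanding proof in Lean 4 is immediate, because the subset type carries its proof component (stated over refined lists directly; the name is ours):

```lean
-- The stored proof component is exactly the stated equality.
theorem len_def {A : Type} {n : Nat}
    (v : { ℓ : List A // ℓ.length = n }) : v.val.length = n :=
  v.property
```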
Examining the full proof of Vec.len_def, we see that every piece of evidence used except one is of the form "β something". These are explicit β-reduction proofs, which are necessary since our calculus does not include a notion of judgemental equality: type equality is simply α-equivalence.¹ While this dramatically simplifies the meta-theory, it can make even simple proofs very long. In our examples, we implement the pattern of repeatedly applying a β-reduction as syntactic sugar. Writing it as β (pronounced "by beta"), we get the much simpler proof

Vec.len_def : ∀n : N, ∀v : Vec A n, Vec.len v = n
Vec.len_def ≡ λ̂ n : N, λ̂ v : Vec A n, let {ℓ, p} = v in trans[Vec.len {ℓ, p} =(β) len ℓ =(p) n] (10)

However, these definitions raise a question: why not simply write

Vec.len ≡ λ‖n : N‖, λ v : Vec A n, n (11)

The problem with the above definition is that n is a ghost variable, indicating we may use it in propositions and proofs but not generally in terms. This is a critical distinction to be able to make: the specification of a program may involve values which we do not want to manipulate at runtime. For example, the correctness proof of a sorting routine might use an inductive datatype of permutations, elements of which could potentially be much larger than the list itself. By making a distinction between ghost and computational variables, we can define an efficient erasure of refined terms (which contain proofs) to simply-typed terms (which do not). For example, we can erase the signature in equation 7 into the simple type 1 → List |A| → N. At the type level, we erase any dependency information, leaving us with simple types. Propositions are either wholly erased or erased to the unit type. At the term level, erasure essentially consists of recursively erasing ghost variables and proofs into units and (in the case of proofs of falsehood) error stops, yielding a well-typed term in the simply-typed lambda calculus extended with an error-stop effect (i.e., the exception monad). We will prove in section 5 that the produced terms are always well-typed.
As expected, erasure sends base types like 1 and N, as well as their literals like () or 0, to themselves. Erasure distributes over (dependent) function types, i.e., |(x : A) → B| = |A| → |B|. Similarly, universal quantifiers are erased to function types from the unit type 1, as follows: |∀x : A, B| = 1 → |B|; correspondingly, we erase the introduction and elimination rules for intersection types. In general, propositions and ghosts in value types (such as subsets) are erased completely, whereas propositions and ghosts in function-style types (such as intersection types) are erased to units to avoid problems with the eager evaluation order of stlc. (See section 7 for a more detailed discussion.) Returning to the original question, erasing the definition in equation 11 yields the term λ n : 1, λ v : List |A|, n, which is not well-typed at 1 → List |A| → N. Another way of thinking about this is that if Vec.len took the length as a computational argument, we would never need to call the function: we would have to have the length in hand to call Vec.len in the first place. However, in our setting, vectors are merely refined lists, which erase into raw lists. A raw list does not carry its length n, but n is a well-defined property of its specification. Since the specification value gets erased to a unit, a program that wants to compute the length must traverse the list; i.e., in equation 6 we call List.len.
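The type-level erasure rules just described can be sketched as a recursive function over a toy AST (the datatypes below are our own illustration, not the paper's formal grammar):

```lean
-- A toy fragment of refined types and their erasure to simple types.
inductive RTy where
  | unit | nat
  | fn     (dom cod : RTy)  -- (x : A) → B
  | inter  (cod : RTy)      -- ∀x : A, B   (ghost quantifier)
  | subset (base : RTy)     -- {x : A | φ}

inductive STy where
  | unit | nat
  | fn (dom cod : STy)

def erase : RTy → STy
  | .unit     => .unit
  | .nat      => .nat
  | .fn a b   => .fn (erase a) (erase b)  -- dependency dropped
  | .inter b  => .fn .unit (erase b)      -- ghost argument becomes a unit
  | .subset a => erase a                  -- proposition wholly erased
```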
Another advantage of having explicit proofs as part of a refinement type system is the ability to reuse previous theorems and perform proofs by induction. For a simple example of this, consider a definition of addition for natural numbers in terms of natrec, the eliminator for the natural numbers, which is defined essentially by iteration, with typing rule Natrec. The eliminator has the standard behaviour, given by reduction rules β_zero and β_succ, which essentially amount to substituting into the successor branch recursively n times. We can then use these axioms to prove that zero is a left identity by β-reduction, simply writing zero_left : (∀n : N, 0 + n = n) ≡ λ̂ n : N, β. In contrast, we need induction to prove that zero is a right identity. To perform induction, we introduce the ind eliminator for natural numbers, essentially the propositional version of natrec, with typing rule Ind. Induction lets us prove many arithmetic facts without baking them into the type system. For example, we can prove that addition is commutative (see figure 1 for helpers), where symm and congr are proofs that equality is symmetric and a congruence, respectively. Note the ability to reuse theorems we have proved previously. Another advantage of explicit proofs is that we do not need to encode even fundamental facts into our core calculus, since we can prove them from a small core of base axioms. Minimizing the number of axioms simplifies the implementation of the type-checker and reduces the size of the trusted codebase, while allowing the programmer to effectively write refinements and proofs using facts that the language designer may not have considered.
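The same pattern can be replayed in Lean 4; since Lean's + recurses on its second argument, the roles are mirrored relative to λert: n + 0 = n holds by computation alone, while 0 + n = n needs induction (names are ours):

```lean
-- Holds by computation, like λert's zero_left (but for the other identity):
example (n : Nat) : n + 0 = n := rfl

-- Needs induction, like λert's proof via the ind eliminator:
theorem zero_add' : ∀ n : Nat, 0 + n = n := by
  intro n
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```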

FORMALIZATION
We give λert's grammar and typing rules in sections 4.1 and 4.2. We then give proofs of some expected metatheoretic properties, such as substitution and regularity, in section 4.3.

Grammar
The grammar for λert (given in figure 15 in the appendix) consists of four separate syntactic categories: types A, propositions φ, terms a, and proofs p. We denote the sets of (syntactically) well-formed terms living in each by Type, Prop, Term, and Proof, respectively. We may then define a typing context Γ as a list of computational variables x : A, ghost variables ‖x : A‖, and propositional variables p : φ, as in figure 3.
Ghost variables may only appear within proofs, types, and propositions, whereas computational variables can occur anywhere, including both computational and logical terms. Typing contexts are telescopic, so types and propositions appearing later in the context may depend on previously defined variables. We use these syntactic categories to state the shape of λert's typing judgments in figure 2. A context is well-formed if the type of each of its variables is well-formed in the context made up of all previously defined variables, with the context well-formedness rules given in figure 3. The presence of both computational and ghost variables means that our contexts have additional structural properties beyond the usual ones such as weakening and exchange. Since a computational variable can be used in more places than a ghost variable, we introduce the concept of an upgrade.
When we upgrade a context, some of the ghost variables are replaced by computational variables with the same name and type. So, for example, the context ‖x : A‖, y : B can be upgraded to x : A, y : B. In figure 4, we formalise this with the rules of the judgment Δ ≤ Γ, which reads "Δ upgrades Γ". We then define the upgrade of Γ, written Γ↑, to be the context with all ghost variables in Γ replaced by computational variables. Note that Γ↑ ≤ Γ (but Γ ≤ Γ↑ iff Γ contains no ghost variables). A context Δ which upgrades Γ types more terms than Γ, since we may use a computational variable anywhere a ghost variable is expected, but not vice versa. Since the only difference between computational and ghost variables is that ghosts cannot be used in computational terms, this distinction does not matter for proofs, or for proposition- and type-well-formedness.

Typing Rules
The type formation rules for λert are collected in figure 5, the term formation rules in figure 7, the proposition formation rules in figure 6, and the proof rules and axioms in figures 8 and 9, respectively. The equality proposition has formation rule Eq-WF, which checks that both sides are well-typed in an upgraded context: because equality is a mathematical proposition, we may use ghost variables freely. Furthermore, note that we can consider equalities between terms of any type, including terms of higher type such as function types. This has often been challenging to support with refinement types (see Vazou and Greenberg [2022] for a detailed discussion), but is unproblematic in λert. The introduction rule is the standard reflexivity axiom, Rfl, and equalities may be eliminated via substitution using the rule Subst. The substitution eliminator is powerful enough to prove the other equality axioms, such as, for example, transitivity. Type equality in λert is just α-equivalence, and so any equations which would have come via judgemental equality in a dependent type theory must be expressed as equality axioms. For example, beta-reduction for functions is expressed via the axiom β_ty, and there are similar rules for each of the type constructors in the language. Furthermore, because proofs are computationally irrelevant, they support an extensionality principle: the axiom Ir-Pr lets us replace any proof with any other proof of the same proposition.
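In Lean 4 terms, the derivation of transitivity from substitution is the familiar one-liner using ▸, Lean's substitution operator (the theorem name is ours):

```lean
-- a = b and b = c yield a = c by substituting c for b in the first proof.
theorem trans' {A : Type} {a b c : A} (h₁ : a = b) (h₂ : b = c) : a = c :=
  h₂ ▸ h₁
```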

Dependent functions of type (x : A) → B may be introduced by lambda abstraction, via the rule Lam, and may be eliminated by application, via the rule App. The corresponding β and η equations are introduced axiomatically, with the rules β_ty and η_ty. Each of the β and η axioms is subscripted with an annotation naming the type former it is for; for example, the subscript "ty" in β_ty refers to the fact that dependent function types are parameterized by a term variable (with a type). The β axiom itself is annotated with the function body and argument.
While dependent function types abstract over a computational variable, we can also abstract over ghost or (computationally) irrelevant variables, yielding a form of intersection type. The type well-formedness conditions for ∀x : A, B (in Intr-WF) are essentially the same as for dependent functions, but the introduction rule Lam-Ir checks the body assuming the parameter is irrelevant.
We can eliminate a term of intersection type by applying it to an expression which is well-typed in the upgraded context Γ↑, i.e., which may contain ghost variables, via the rule App-Ir. Similarly to the case for dependent functions, β-reduction must be encoded as an axiom β_ir, where the "ir" stands for "irrelevant." We may also introduce an irrelevance axiom, Ir-Ty, which essentially says that ghost arguments do not matter for the purposes of determining equality whenever the ghost variable does not occur in the result type.

Fig. 8. λert Proof Typing

We move on to introduce dependent pair types with the type formation rule Pair-WF. The introduction rule Pair looks much like the introduction rule for sigma-types in dependent type theories, with the type of the second component varying according to the first component, and both components computationally relevant.
The elimination form is a let-binding form (in Let-Pair). We may also eliminate into proofs using Let-Pair-Pr; note that, in this case, the expression of type (x : A) × B may contain ghost variables (as it only needs to be well-typed in Γ↑). As before, β-reduction for dependent pair elimination must be encoded as an axiom, β_pair.

Fig. 9. λert Axiom Typing
Often, however, we may want to consider, dually to intersection types, union types ∃x : A, B (Union-WF), which we may view as dependent pairs conditioned on a ghost variable or, set-theoretically, as elements of B(a) for some valid a. As for dependent pairs, we support an introduction rule Pair-Ir, and elimination via let-binding using rules Let-Ir and Let-Ir-Pr. Note that, unlike for dependent pairs, and similarly to intersection types, the first component only needs to be well-typed in Γ↑ (rather than Γ) in the introduction rule Pair-Ir, while in Let-Ir, the binder for the first component is a ghost binder rather than a term binder. Finally, as before, we also introduce a reduction rule for these let-bindings.
Just as dependent functions and pairs have corresponding type formers quantifying over ghost variables rather than term variables, we may also construct type formers predicated over propositions. In particular, we may consider the precondition type (p : φ) ⇒ B: essentially a closure yielding an element of the type B if the proposition φ is true; this has formation rule Pre-WF. Introduction is by abstracting a term over a proof variable, with rule Lam-Pr. Note that the type B in Lam-Pr is allowed to depend on a proof p : φ; this is because we may consider precondition types (p : φ) ⇒ B in which B is only well-formed if φ holds. For example, consider a function f : {x : A | φ(x)} → B (we will cover subset types shortly); to reason about values of f at x : A, we need φ(x) to hold; hence, we require dependency on proofs. As in the computational and ghost variable cases, we must also introduce a reduction rule β_pr. Dually, we may introduce the subset type former Set-WF, representing, in essence, elements x of type A satisfying the predicate φ(x). This has the expected introduction rule Set and supports elimination via let-binding with rules Let-Set and Let-Set-Pr. We also introduce a reduction rule, β_set, as expected.
Currently, we have the ability to manipulate data and associate it with propositions, but do not yet have any bona fide data types. While there is no semantic obstacle to introducing a full language of datatype declarations, for simplicity we restrict ourselves to the unit type, sum types, and the natural numbers; other inductive types follow a similar pattern. As usual, a value of the unit type may be introduced with rule Unit; rather than the eliminator we would expect from dependent type theory, we may simply couple this with an axiom, Uniq, stating that every member of the unit type is equal to (). While on its own this is not a particularly interesting datatype, when combined with coproduct types A + B, we may define, for example, the type of Booleans as 2 ≡ 1 + 1, allowing us to construct finite types. Coproducts may be introduced via injection (via rules Inl and Inr) and eliminated by case splitting (via rules Cases and Cases-Pr). To demonstrate support for infinite types, we introduce the natural numbers N, constants of which may be built up using Zero and Succ. The elimination rule, Natrec, implements essentially iteration, with the current step available as a ghost variable (for reasoning about in propositions); its computational semantics is given by the axioms β_zero and β_succ. Furthermore, just as we may construct terms recursively via natrec, we may also perform proofs by induction via ind using rule Ind. Note that, unlike in Natrec, the expression over which we are performing induction is allowed to contain ghost variables.
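As a sketch of Natrec's computational content, here is an iteration-style eliminator in Lean 4, for which the analogues of β_zero and β_succ hold definitionally (natrec here is our own definition, not the paper's):

```lean
-- Iteration over the naturals: apply s to z, n times.
def natrec {C : Type} (z : C) (s : C → C) : Nat → C
  | 0     => z
  | n + 1 => s (natrec z s n)

-- The two reduction laws hold by computation:
example {C : Type} (z : C) (s : C → C) : natrec z s 0 = z := rfl
example {C : Type} (z : C) (s : C → C) (n : Nat) :
    natrec z s (n + 1) = s (natrec z s n) := rfl
```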

Propositional Structure

Now that we have given an essentially complete description of λert's terms and their computational behaviour, we can describe the main components of λert's propositional logic, which for the most part mirror the term formers, since we encode proofs in first-order logic as λ-terms via the Curry-Howard correspondence. We begin by introducing the propositions ⊤ and ⊥; the former is equipped only with the introduction rule True, whereas ⊥, being an initial object, is equipped only with the elimination rules Absurd-Pr and Absurd. The latter rule is especially important, as it is the main way in which our logic is capable of interacting with our term calculus, allowing us to safely erase unreachable branches from, e.g., a natrec or cases expression.
Otherwise, proofs do not interact meaningfully with the term calculus, since they are logical formulae, and our logic is classical and nonconstructive. So principles such as unique choice (which permit turning a proof that there exists a unique element of type A into a computational value of type A) are not valid. The only case in which this is allowable is for the empty type (see the discussion of the Absurd rule above). To make use of this, we need the additional axiom Discr, which essentially says that the right-hand and left-hand sides of a coproduct type are disjoint. This suffices to effectively introduce disequality into our type theory, as desired.
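In Lean 4, the content of Discr corresponds to constructor disjointness for Sum, which is exactly what makes disequalities provable (our own illustration):

```lean
-- The two injections of a coproduct are disjoint, so this disequality holds.
example : (Sum.inl () : Unit ⊕ Unit) ≠ Sum.inr () :=
  fun h => Sum.noConfusion h
```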
We may now introduce the rest of the connectives of first-order logic, suitably modified to fit our setting. In particular, we have implication (p : φ) ⇒ ψ, which, as per the Curry-Howard correspondence, is introduced by abstracting over a proposition variable, via rule Imp, and eliminated via modus ponens (under Curry-Howard, application), MP. Similarly, we have conjunction (p : φ) ∧ ψ, which is introduced by constructing a pair, via rule And, and eliminated via a let-binding, with rule Let-And. We note that both associated proposition formers, Imp-WF and And-WF, are "dependent," in that their right-hand side ψ is allowed to depend on a proof variable p for the left-hand side φ. This is because we allow the case where ψ is not well-formed without φ holding (for example, because it is about a term that requires a proof of φ). On the other hand, similarly to coproducts, the rules for disjunction φ ∨ ψ are simpler, with introduction by injection via rules Orl and Orr, and elimination via case splitting with rule Cases-Or.
Finally, just as we may consider a type quantifying over a proposition, to support full first-order logic we must also be able to consider propositions quantified over types. In particular, we may form universally quantified propositions with formation rule Univ-Wf. These may be introduced by generalization via rule Gen, and specialized via elimination rule Spec. Similarly, we may form existentially quantified propositions with formation rule Exists-WF. We introduce proofs of an existentially quantified proposition by providing a witness via rule Wit, and may eliminate such proofs via let-binding with rule Let-Exists. Note in particular that, in both cases, we treat the variable being quantified over as a ghost variable, since it appears only in a proposition. Furthermore, since propositions have no computational semantics, neither quantifier needs a reduction rule.
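Under Curry-Howard these quantifier rules behave like their Lean counterparts; a small, purely illustrative sketch in ordinary Lean:

```lean
-- Wit: introduce an existential by supplying a witness and a proof.
example : ∃ n : Nat, n + 1 = 3 := ⟨2, rfl⟩

-- Let-Exists: eliminate by (ghost-)binding the witness and its proof.
example (h : ∃ n : Nat, n + 1 = 3) : ∃ m : Nat, m = 2 := by
  obtain ⟨n, hn⟩ := h
  exact ⟨n, by omega⟩
```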
Altogether, our β and η rules are quite powerful, enabling us to prove more extensionality properties than a casual look at the axioms may suggest. In particular, fixing a term with a free variable of subset type { x : A | φ }, an η-rule for subsets can be derived (with all auxiliary variables taken to be fresh). In general, we only need to introduce explicit extensionality axioms for intersection types (η-ir) and precondition types (η-pr), with all the other β and η extensionality rules we would expect being derivable from the rest of the axioms, with the sole exception of function extensionality, which is not compatible with our current semantics; see section 5.3 for an explanation of why.
Overall, λert's logic is essentially multi-sorted first-order logic, with the sorts drawn from the types of λert's programming language. Since λert has function types, this means that the λert logic is fairly close to PAω, Peano arithmetic over the full finite type hierarchy. So any property provable about λert terms with first-order logic and induction should be provable. (We have not proved any theorems about the expressivity of λert's logic, though.)

Syntactic Metatheory
λert satisfies the expected syntactic properties of substitution and regularity. To show this, we define substitutions as functions σ : Var → Term ⊎ Proof, and can recursively define capture-avoiding substitution of σ on terms and proofs, written e[σ], in the obvious way. We define a well-formed substitution from Γ to Δ as a substitution mapping each variable in Γ to a term or proof of the corresponding (substituted) type over Δ. Furthermore, we say σ is a strict substitution, written Γ ⊢ σ : Δ, if ghost variables in Γ are replaced only with ghost variables in Δ. Formalized as: LogicalRefinement/Typed/Subst.lean, theorem HasType.subst'. The proof of substitution is a routine induction, which as usual requires first proving weakening.
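Since the formalization represents variables with de Bruijn indices, capture-avoiding substitution reduces to index shifting. A generic sketch for a toy lambda calculus (hypothetical constructors, not λert's actual term grammar; the β-reduction index decrement is elided):

```lean
-- A minimal de Bruijn substitution sketch for a toy calculus.
inductive Tm where
  | var : Nat → Tm
  | app : Tm → Tm → Tm
  | lam : Tm → Tm

-- Shift free variables ≥ c up by d (needed when pushing under a binder).
def shift (d c : Nat) : Tm → Tm
  | .var n   => if n < c then .var n else .var (n + d)
  | .app f a => .app (shift d c f) (shift d c a)
  | .lam b   => .lam (shift d (c + 1) b)

-- Substitute s for the variable with index k; capture is avoided by
-- shifting s each time we pass under a binder.
def subst (k : Nat) (s : Tm) : Tm → Tm
  | .var n   => if n = k then s else .var n
  | .app f a => .app (subst k s f) (subst k s a)
  | .lam b   => .lam (subst (k + 1) (shift 1 0 s) b)
```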
Once we know that substitution holds, we can prove regularity. Formalized as: LogicalRefinement/Typed/Regular.lean, theorem HasType.regular. This requires syntactic substitution since some of the typing rules (such as App) involve a substitution in the result types. One other result we will use later is that substitutions can be upgraded; that is, Γ ⊢ σ : Δ implies Γ↑ ⊢ σ : Δ↑. To avoid confusion, we write the latter as Γ↑ ⊢ σ↑ : Δ↑, with the upgrade on substitutions taken to be the identity.

SEMANTICS
To give a denotational semantics for the λert calculus, we first show that all the proofs and dependencies in a λert term can be erased in a compositional way. This yields a simple type for each λert type, and a simply-typed term for each λert term. We then give a semantics for each λert type as a subset of the denotational semantics of the erasure of that type. Finally, we show that each well-typed λert term lies in the subset defined by its type.
In section 5.1, we recall the syntax and semantics of the simply-typed lambda calculus. In section 5.2, we give an erasure function from λert types and terms to stlc types and terms respectively, and prove some expected properties like preservation of well-typedness and semantic substitution (where the denotational semantics of a λert term is taken to be the semantics of its erasure). Finally, in section 5.3, we give a semantics to λert types by assigning each a subset, and show that the denotations of all well-typed λert terms lie in the subset assigned to their type, i.e., semantic regularity. From this, we deduce that "well-typed programs don't go wrong".
For the more categorically-minded reader, we interpret stlc in the Kleisli category of the exception monad M (in the category of sets), and then interpret λert in terms of the subset fibration over it. That is, a λert type can be understood as a pair of an stlc type together with a predicate on its denotation; and λert terms are interpreted as maps between the underlying stlc denotations that carry the subset attached to the context into the subset attached to the result type. Semantically, our erasure operation amounts to the forgetful functor into the Kleisli category which drops the property information.

The Simply-Typed Lambda Calculus
We begin by providing typing rules for stlc in figure 10 (a formal grammar is provided in figure 16 in the appendix). This is a standard lambda calculus with functions, sums, products, and natural numbers, as well as a simple effect: error stops. We define Term to be the set of stlc terms. Similarly to the λert calculus, given a function σ : Var → Term, we may recursively define (capture-avoiding) substitution of terms in the usual manner. We say that σ is a substitution from Γ to Δ, written Γ ⊢ σ : Δ, if it satisfies the property that (x : A) ∈ Γ implies Δ ⊢ σ(x) : A. The usual property of syntactic substitution then holds. We may now give stlc a denotational semantics. Fixing the exception monad M, with exception error, we begin by giving denotations for stlc types in figure 11, using Moggi's call-by-value semantics for types [Moggi 1991]. We then define the denotation of an stlc context elementwise by taking ⟦•⟧ = 1 and ⟦Γ, x : A⟧ = ⟦Γ⟧ × M⟦A⟧. Despite the fact that our semantics is call-by-value, the interpretation of each hypothesis lives in the monad M. We do this so our denotational semantics can interpret substituting arbitrary terms for variables, and not just values. (Since the substitution rule permits substituting arbitrary terms for variables, our semantics has to support this too.) For each variable x : A in a context Γ, we define a pointwise projection ⟦Γ⟧ → M⟦A⟧. We define (in figure 12) the denotation of a derivation Γ ⊢ e : A as a function of type ⟦Γ⟧ → M⟦A⟧, which takes environments (elements of ⟦Γ⟧) to elements of the monadic type M⟦A⟧.
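The error effect can be made concrete with a small sketch: a Lean interpreter for a tiny expression fragment, with Option standing in for the exception monad M (the fragment and its names are illustrative, not the paper's stlc).

```lean
-- The exception monad M with a single exception `error`, modelled by Option.
abbrev M (α : Type) := Option α

def error {α : Type} : M α := none

inductive Expr where
  | nat : Nat → Expr
  | add : Expr → Expr → Expr
  | err : Expr

-- Denotation into M Nat, in Moggi's call-by-value style:
-- effects sequence left to right, and error propagates.
def denote : Expr → M Nat
  | .nat n   => pure n
  | .add a b => do pure ((← denote a) + (← denote b))
  | .err     => error

#eval denote (.add (.nat 2) (.nat 3))  -- some 5
#eval denote (.add (.nat 2) .err)      -- none
```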
We can now give a relatively straightforward account of the denotational semantics of an stlc substitution: we interpret a substitution Γ ⊢ σ : Δ as a function ⟦Γ ⊢ σ : Δ⟧ : ⟦Δ⟧ → ⟦Γ⟧ by composing it variable-by-variable with the denotational semantics of each substituted term. We may then state semantic substitution for the stlc as follows. Lemma 5.2 (Semantic Substitution (stlc)). Given an stlc derivation Γ ⊢ e : A and an stlc substitution Γ ⊢ σ : Δ, we have ⟦Δ ⊢ e[σ] : A⟧ = ⟦Γ ⊢ e : A⟧ ∘ ⟦Γ ⊢ σ : Δ⟧.

Erasure
We define a notion of erasure |·| of λert types and terms to corresponding stlc ones in figure 13. Erasure on types simply erases all dependency and propositional information, leaving behind a simply-typed skeleton. For tuple-like type formers like { x : A | φ }, the propositional information is erased completely, yielding |A|, whereas for function-like type formers like (h : φ) ⇒ A, it is instead erased to a unit domain, yielding 1 → |A|; this is to avoid issues with eager evaluation. Where necessary, we take proofs and propositions themselves to erase to the unit as a convenience. We may then recursively define the erasure of a λert context into an stlc context pointwise, and likewise the erasure of a substitution. It then follows as a trivial corollary of lemma 5.3 that, given a substitution Γ ⊢ σ : Δ, we have |Γ| ⊢ |σ| : |Δ|; we write this as |Γ ⊢ σ : Δ|. We are now in a position to prove that the erasure of the substitution of a λert term is the erased substitution of the corresponding erased stlc term (lemma 5.4). We may then deduce a further corollary from lemma 5.2 and lemma 5.4.
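The erasure on types can be sketched as a simple recursion; a Lean sketch with hypothetical constructors standing in for a fragment of figure 13:

```lean
-- Illustrative fragment of λert types (constructors are hypothetical).
inductive Ty where
  | nat     : Ty
  | fn      : Ty → Ty → Ty
  | subset  : Ty → Ty   -- { x : A | φ }, with the refinement φ elided
  | precond : Ty → Ty   -- (h : φ) ⇒ A, with the proposition φ elided

-- Simple types of the target stlc.
inductive STy where
  | unit : STy
  | nat  : STy
  | fn   : STy → STy → STy

-- Erasure: dependency and propositions vanish; function-like formers
-- keep a unit domain to respect eager evaluation.
def erase : Ty → STy
  | .nat       => .nat
  | .fn a b    => .fn (erase a) (erase b)
  | .subset a  => erase a               -- |{ x : A | φ }| = |A|
  | .precond a => .fn .unit (erase a)   -- |(h : φ) ⇒ A| = 1 → |A|
```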

Denotational Semantics
Using lemma 5.3, we could assign a computational meaning to well-typed λert terms Γ ⊢ e : A by simply composing the denotation for stlc terms with the erasure function, i.e., taking ⟦|Γ ⊢ e : A|⟧ : ⟦|Γ|⟧ → M⟦|A|⟧. While this interpretation assigns terms a computational meaning, it simply ignores their refinements: there is as yet no guarantee that the refinements mean anything.
To rectify this, we give a denotational semantics for λert types (in figure 14), which maps each type to a subset of the denotation of the corresponding erased type. This semantics is mutually recursive with a semantics for λert propositions.
The denotation of types (and propositions) is parameterized by an environment drawn from the interpretation of the context Γ. To break the recursion between the semantics of contexts and types, the domain of the interpretation function for types (and propositions) is not restricted to valid λert environments G ∈ ⟦Γ ok⟧, but is defined for all stlc environments. However, the semantics of λert types does depend upon the values of ghost variables. As a result, we do not consider the erasure |Γ|, but rather the erasure of the upgrade |Γ↑|.
The denotation of types basically follows the structure of a unary logical relation (i.e., a logical predicate). For example, the interpretation of a function type (x : A) → B consists of those functions which satisfy the property that, for all inputs in the refined type A, the function returns a pure value in the refined type B. An element of a pair type (x : A) × B is a pair whose components lie in the corresponding refined types. The union type ∃x : A, B consists of those elements of the denotation of |B| which lie in the interpretation of B at some a in the interpretation of A. The intersection type ∀x : A, B is almost dual, but to account for call-by-value evaluation in the case where A may be an empty type, we consider thunks 1 → M(|B|) rather than elements of |B|. This is also where the terms "intersection type" and "union type" come from: the semantics of ∀x : A, B is literally a giant intersection, and likewise ∃x : A, B is interpreted with a giant union.
An element of a subset type { x : A | φ } is an element of A which also satisfies the property φ. The dual precondition type (h : φ) ⇒ A represents elements of A, conditional on φ holding. Just as with intersections, this type must be represented by thunks 1 → M(|A|) to account for the case where φ is false. Units and natural numbers have the same denotation as their simply-typed counterparts, and a coproduct A + B is either a left injection of a value satisfying A or a right injection of a value satisfying B. A proposition could be interpreted by a map ⟦|Γ|⟧ → 2, but it is more convenient to think of it as a subset of ⟦|Γ|⟧: the set of environments for which the proposition holds. So ⊤ is the whole set, ⊥ is the empty set, and disjunction and conjunction are modelled by union and intersection. Because conjunction is written (h : φ) ∧ ψ, we have to extend the environment when interpreting ψ; since we erase all propositions to 1, we just choose ret () as the erased proof. The same idea is used in the case of propositional implication. Quantifiers in our language of propositions are interpreted by quantifiers in the meta-language, and equality is interpreted as the set of environments in ⟦|Γ↑|⟧ for which the equality holds. Note in particular that this means that the law of the excluded middle is sound, and cannot interfere with program execution in any way (since all propositions are erased).
Because the unrefined language is ambiently effectful (with the effect of error stops), every expression lies in a monadic type in the semantics. The refined semantics of the monad is given by the relation E⟦Γ ⊢ A ty⟧, which picks out the subset of the monad not equal to error. That is, we require all refined terms to terminate without error.
Propositions are interpreted as maps from the interpretation of contexts into the (boolean, classical) truth values, or equivalently, as subsets of the environments. Hence each logical connective is interpreted by the corresponding operation in the Boolean algebra of sets: conjunction is intersection, disjunction is union, and so on. We interpret equality at a type A as equality of the underlying elements of the denotation of |A|.
This interpretation, while simple, is in some sense too simple: it causes function extensionality to fail. In particular, suppose Pos ≡ { x : Z | x > 0 } is the type of positive numbers. Two functions f, g : Pos → Z may agree on all positive arguments but fail to be equal, because they erase to two functions which do different things on negative arguments. (For example, consider f = id and g = abs.) This is also the reason that the ir and pr rules have an inhabitation premise (e.g., Γ↑ ⊢ e : A). A more complex semantics based on partial equivalence relations could resolve this issue, but we preferred to stick with the simplest possible semantics for expository reasons.
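Concretely, here is a Lean sketch of the two erased functions (with abs written out): they agree wherever the refinement holds, yet their erasures differ at −1, so erased equality of functions is too strict.

```lean
def f (x : Int) : Int := x                        -- id
def g (x : Int) : Int := if x < 0 then -x else x  -- abs

-- On the refined domain (x > 0) the two functions agree...
example (x : Int) (hx : x > 0) : f x = g x := by
  unfold f g
  rw [if_neg (by omega)]

-- ...but their erasures disagree outside it:
#eval (f (-1), g (-1))  -- (-1, 1)
```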
The semantics of types and propositions is defined over a bigger set of environments than just the valid ones, but once we have this semantics, we can use it to define the valid environments. The interpretation ⟦Γ ok⟧ is the subset of ⟦|Γ↑|⟧ which satisfies all the propositions and in which all the values lie within their λert types. So the empty context is inhabited by the empty environment; the context Γ, h : φ is inhabited by (G, ret ()) when G is in ⟦Γ ok⟧ and φ is satisfied; and Γ, x : A is inhabited by (G, ret a) when G ∈ ⟦Γ ok⟧ and a is in the (subset) interpretation of A. (The case of ghost variables is the same as that of ordinary variables, since we care about the values of ghost variables when interpreting propositions in refined types.) It is easy to show that no λert type can contain any errors. However, we give semantics to λert terms by erasure, and so we have to connect the erased semantics of λert terms to these semantic types. Furthermore, the semantics of λert types cares about ghost values, but the semantics of erased terms ignores them.
To relate these two, we also need to define the corresponding notion of a downgrade of an environment. Given an environment G ∈ ⟦|Γ↑|⟧, we recursively define its downgrade G↓Γ ∈ ⟦|Γ|⟧, written G↓ when Γ is clear from context; semantically, this discards all of the ghost information from an environment. We may now state the primary theorems proven about the semantics of λert, namely, semantic substitution and semantic regularity.
Theorem 5.7 (Semantic Substitution). Given a λert derivation Γ ⊢ A ty, a λert substitution Γ ⊢ σ : Δ, and a valid environment G ∈ ⟦Δ ok⟧, the interpretation of the substituted type A[σ] at G coincides with the interpretation of A at the environment induced by σ. Formalized as: LogicalRefinement/Denot/Subst.lean, theorem SubstCtx.subst_denot. We can also show that for any well-typed λert term, its erasure lies in the interpretation of its λert type. This shows that every well-typed term satisfies the properties of its refined type.

Theorem 5.8 (Semantic Regularity). Given a λert derivation Γ ⊢ e : A, the denotation of its erasure lies in the interpretation of A at every valid environment.

Furthermore, it is an immediate corollary of lemma 5.6 and theorem 5.8 that the λert logic is consistent, since { x : 1 | φ } is inhabited if and only if φ is true. It is also a corollary that for any well-typed term Γ ⊢ e : A, we have, for all G ∈ ⟦Γ ok⟧, that ⟦|Γ ⊢ e : A|⟧(G) ≠ error. That is, "well-typed programs do not go wrong".

FORMAL VERIFICATION
We have proved all the results stated in the previous sections in Lean 4. The proof development is about 15.8 kLoC in length and is partially automated, though there is much potential for further automation. In particular, lemmas 4.1, 4.2, and 4.3 are heavily automated, with one template tactic that applies to most of the cases, whereas theorems 5.7 and 5.8 and lemmas 5.2, 5.4, and 5.5 had to be proved manually. These proofs typically involved many equality coercions, which had to be plumbed manually because they are difficult to automate. With more experience with Lean tactics, they too might be automated, but we were not able to do so.
The formalized syntax and semantics are mostly the same as those presented in this writeup. There are two main differences. First, we have implemented variables using de Bruijn indices. Second, we folded types, propositions, terms, and proofs into a single inductive type to avoid mutual recursion, which Lean 4 currently has poor support for.
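The second point can be sketched as follows (illustrative names only, not the actual definitions of the development): rather than four mutually recursive inductives, a single raw-syntax type carries all four syntactic classes, and the typing judgements sort them out afterwards.

```lean
-- Illustrative only: one inductive for all four syntactic classes
-- (types, propositions, terms, proofs), with de Bruijn variables.
inductive Raw where
  | var  : Nat → Raw         -- de Bruijn index (term or proof variable)
  | lam  : Raw → Raw → Raw   -- binder: annotation and body
  | app  : Raw → Raw → Raw
  | pair : Raw → Raw → Raw
  | top  : Raw               -- the proposition ⊤, living in the same syntax
```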
This project was the first time the authors used Lean 4 for serious formalization work. While we ran into numerous issues due to Lean still being in active early-stage development, we found it to be a highly effective formalization tool. One issue we ran into was very high memory usage, and in some cases timeouts, when using Lean's simp tactic on complex pattern matches. The addition of the dsimp tactic, after a discussion on the Lean 4 Zulip, mostly alleviated this, and performance has improved in later versions of Lean. We otherwise found the quality of automation to be very good: even though the authors are novices at Lean, we were able to easily maintain and extend the proofs without needing to edit theorems proved via tactics. For example, we originally forgot to include the Unit-WF axiom, but were able to add it with only minor edits to the manual theorems in about 30 minutes. As the formalization made heavy use of dependent types, we also ran into many issues attempting to establish equalities between dependently typed terms. However, even here we found Lean relatively easy to use compared to other proof assistants based on dependent types, with our experiments at Coq-based formalization running into similar issues.

DISCUSSION
Function Extensionality, Recursion, and Effects. Our current semantics is inconsistent with function extensionality, because two functions must be equal over their entire, unrefined domain to satisfy the denotation of the equality type. To support extensionality (and related types like quotient types), we should be able to interpret the calculus via a semantics based on partial equivalence relations, as in [Harper 1992].
We also want to reason about partial correctness and divergent programs. Hence, it makes sense to add support for general recursive definitions, including nonterminating ones, by moving to a domain-theoretic semantics (rather than the set-theoretic semantics we currently use). We also want to extend the base language with more effects (such as store and IO) and extend λert to support fine-grained reasoning about them via an effect system, as in [Katsumata 2014].
Categorical Semantics. The motivating model of refinement types underlying our work is that of [Melliès and Zeilberger 2015], which equates type refinement systems with functors from a category of typing derivations to a category of terms. In essence, one can view our work as taking the setup of [Melliès and Zeilberger 2015] and inlining all the categorical definitions for the case of the simply-typed lambda calculus.
We would like to update our formalisation to work in terms of the categorical semantics; this would let us account for all of the extensions above at once, without having to reprove theorems (such as semantic substitution and regularity) for each modification.
Recently, Kura [2021] has studied the denotational semantics of a system of refinements over Ahman's variant of dependent call-by-push-value [Ahman 2018]. Kura also uses a fibrational semantics similar to the Melliès-Zeilberger semantics (as well as Katsumata's semantics of effect systems [Katsumata 2014]) to equip a dependent type theory with a subtype relation arising from the entailment relation of first-order logic. Unlike λert, the use of subtyping means that there are no explicit proofs, and hence type checking is undecidable. Without the unit argument, such a term would erase to error, which violates our invariant; it is possible to avoid introducing any units at all if the base calculus is in call-by-push-value form. However, since one of our main objectives was to have a simple, familiar semantics, we felt the price of introducing units was lower than that of introducing call-by-push-value.
Automation and Solver Integration. One of the critical advantages of refinement types is the potential for significantly reducing the annotation burden of formal verification. Hence, to make λert usable, it should support automation to a similar degree for similarly complex programs. One potential form of basic automation is support for an "smt" tactic, similar to the tactic of section 3; we can similarly envision calling out to various automated theorem provers like Vampire [Kovács and Voronkov 2013] or SPASS [Weidenbach et al. 2002].
A more powerful approach would be to adapt the work on Liquid Typing [Rondon et al. 2008] to this setting. Liquid Typing works by inferring appropriate refinement types for an unrefined program such that, given that the program's preconditions are satisfied, the preconditions of all function calls within the program, as well as the program's postconditions, are satisfied. It sometimes requires annotations to infer appropriate invariants, and may require explicit checks to be added for conditions it cannot verify are implied by the preconditions. One way to combine Liquid Typing with λert would be to use λert types as annotations and automatically refine the types of subterms of a λert program to make it typecheck, inferring and inserting proofs as necessary. One advantage of this approach is that (assuming it compiles down to fully-annotated λert) it removes the liquid typing algorithm itself from the trusted codebase, and, if the solvers used support proof output, the solvers themselves as well. Furthermore, we could replace potentially expensive runtime checks with explicit proofs.

Relationship of ert to Dependent Types
The most well-known approach to designing programming languages with integrated support for proof is dependent type theory. The semantics of dependent type theories is generally given in an "intrinsic" style, in which well-typed terms (in fact, typing derivations) are given a denotational semantics, and ill-typed terms are regarded as meaningless (i.e., have no semantics).
On the other hand, λert is a refinement type system, which takes an existing programming language (in this case, the simply-typed lambda calculus with error stops) and extends it so that existing programs can be given richer, more precise types. This ensures that it is always possible to forget the rich types and be left with the simply-typed skeleton. A good analogy is to Hoare logic, in which pre-conditions, post-conditions, and loop invariants can be seen as rich type annotations on a simple while-program. These logical annotations can always be erased, leaving behind an untyped while-program.
As in Hoare logic, λert distinguishes logical assertions from the type-theoretic structure of the programming language. This is in contrast to the traditional Curry-Howard interpretation of logic in dependent type theory, and more closely resembles the Prop sort of Coq (a sort of purely logical assertions), or, even more closely, the "logic-enriched type theories" of Aczel and Gambino [Gambino and Aczel 2006] (which extend type theory with a new judgement of logical propositions).
Most dependent type theories also feature a notion of judgemental equality, in which types are considered equal modulo some equational theory (usually containing β- and sometimes η-conversions). λert is designed to work without a judgemental equality, since it is a source of metatheoretical difficulty and complicates the design of tooling. However, there are some dependent type theories, such as Objective Type Theory [van den Berg and den Besten 2021] and Zombie [Sjöberg and Weirich 2015], which implement reduction propositionally as axioms, similarly to what we have done.

Refinement Logics and Squashed Curry-Howard
As is well known, even type theories without a Prop sort like Coq's or Lean's still have a logical reading: the famous "propositions as types" principle, where each type-theoretic connective (function types, pairs, sums) corresponds to a logical connective (implication, conjunction, disjunction). This is unsuitable for our purposes. We plan to use λert as the basis for extending practical SMT-based refinement type systems with explicit proofs, and SMT solvers are fundamentally based on classical logic. So we want the semantics of the propositions in our refinement types to be classical as well, which is not possible when propositions and types are identified.
This also makes it difficult to use the various modal techniques proposed to integrate proof irrelevance into type theory, such as Awodey and Bauer's squash types [Awodey and Bauer 2004], or Sterling and Harper's phase modalities [Sterling and Harper 2021].
For example, with the Awodey-Bauer squash type [A], logical disjunctions and existentials are encoded using the [−] operator, which has a degenerate equality. As a result, if the type theory is intuitionistic, the logic of propositions must be as well, which is contrary to our needs. However, there is a deeper problem with using squash types. The semantics of the Awodey-Bauer squash type is such that if A is a proposition (i.e., all its inhabitants are equal), then A and [A] are isomorphic. Since a type is a proposition whenever all its inhabitants are equal, a contractible type like Σn : ℕ. n = e (for a fixed e) is a proposition, and hence there is a map [Σn : ℕ. n = e] → Σn : ℕ. n = e.
That is, it is possible to extract computational data from a squashed type, and so erasure of propositions and squashed types is a much more subtle problem than it may first seem. Kraus et al.'s paper Notions of Anonymous Existence in Martin-Löf Type Theory [Kraus et al. 2017] studies this and similar issues in detail. There is a similar obstacle with the Sterling-Harper approach, which would use a pair of modalities to control whether a term lives in the specification or runtime phase. Once again, re-using type-theoretic connectives as logical connectives forces the identification of the refinement logic and the type theory.

Erasure in Dependent Type Theory
Coq [The Coq Development Team 2021] supports a notion of erasure, in which terms of proposition type are systematically elided as a dependently-typed program is extracted to a functional language like OCaml or Haskell. It also lets users declare certain function parameters as useless, but since Coq does not distinguish logical variables from computational ones in its type system, a well-typed Coq term may fail to extract successfully. In contrast, λert can always erase both proofs and logical variables, and furthermore guarantees that all erased terms are well-typed, because it is a refinement over a pre-existing simple type theory. This is not the case in Coq, where extracted terms may have to use the unsafe cast Obj.magic.
Another critical feature of λert is that it supports Hoare-style logical variables. The key property of logical variables is that they are not program variables, and cannot influence the runtime behaviour of a program: they exist only for specification purposes. So in a length function whose type binds the length n with the intersection-style quantifier ∀n : ℕ, it is only possible to compute the length of the list by actually traversing the list. In plain Martin-Löf type theory, where the corresponding quantifier is a computational Π, the type has a degenerate implementation that simply returns n without touching the list. This kind of computational irrelevance is different from what is sometimes called proof irrelevance or definitional irrelevance, since different values of n are not equal. Two approaches that have arisen to manage this kind of irrelevance are the implicit forall quantifier of the implicit calculus of constructions (ICC) [Barras and Bernardo 2008], and the usage annotations in Atkey and McBride's quantitative type theory (QTT) [Atkey 2018].
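The degenerate implementation can be made concrete in Lean, using a length-indexed vector as a stand-in for the list type in question: with a computational Π over the index, "length" need never inspect the list.

```lean
-- A length-indexed vector, standing in for the refined list type.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- With a computational Π over n, "length" is degenerate:
-- it hands back the index without ever traversing the vector.
def degenerateLength {α : Type} (n : Nat) (_ : Vec α n) : Nat := n

#eval degenerateLength 2 (.cons 10 (.cons 20 .nil))  -- 2
```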
Like us, the ICC introduces an intersection type ∀x : A. B to support computationally irrelevant quantification. However, because there is no separation of the refinement layer from the base layer, the denotational semantics of the ICC is much more complicated: the Luo-style extended calculus of constructions (ECC) [Luo 1990] has a simple set-theoretic model, but the only known denotational model of the ICC [Miquel 2000] is based on coherence spaces. In our view, this is a significant increase in the complexity of the model.
In contrast, we are able to model intersection types with ordinary set-theoretic intersections. The reason is that the refinement type discipline ensures that we only ever use structural set-theoretic operations. That is, every λert type is a subset of an underlying base type, and so when we take an intersection, we are only taking the intersection of a family of subsets of a particular set (the base type). From a mathematical point of view, this is much better behaved than taking intersections of arbitrary sets, and having this invariant lets us interpret intersections more simply than is possible in the semantics of the ICC.
Our semantics (and indeed, syntax) for intersection types is very similar to the semantics of the "dependent intersection types" introduced by Kopylov [2003] for the Nuprl system. Kopylov worked with partial equivalence relations over a term model, rather than our simple set-theoretic model. As mentioned above, we will also need to move to a PER model to support function extensionality, though we expect we can still consider PERs over sets rather than having to use a term model.
In his PhD dissertation [Tejiščák 2019], Tejiščák studies how to apply QTT directly to the problem of managing computational irrelevance. The runtime and erasable annotations in QTT look very similar to our distinction between computational and ghost variables. However, we cannot directly use this type system, because it does not have a sort of propositions, and therefore must express logical properties in a constructive, Curry-Howard logic.

Relation to Other Systems
ATS [Cui et al. 2005] is a system in the Dependent ML style. That is, the type system has a very hard separation between indices (which can occur inside types) and terms (which do runtime computation). As in λert, it is possible in ATS to give explicit proofs of quantified and inductive formulae.
ATS's erasure theorem is proved operationally, by exhibiting a simulation between the reductions of the fully-typed language and erased, untyped programs. This makes it hard to reason about the equality of program terms (something like a logical relation would have to be added), but ATS does not have to, because it distinguishes program terms and indices, and only permits indices to occur in types.
However, since program terms cannot occur in constraints, correctness arguments about functions are forced into an indirect style: for each recursive function, one must define an inductive relation encoding its graph, and then show that inputs and outputs are related according to this relation. As a result, proving, e.g., that the type of endofunctions A → A with the identity and function composition has the structure of a monoid would be very challenging. In λert, in contrast, this would be very straightforward, since (following liquid types) indices and program terms are one and the same.
Modern ATS has also extended the proposition language to support a notion of linear assertion, which permits verifying imperative programs in the style of separation logic. Extending λert with support for richer reasoning about effectful programs is ongoing work.
F* is a full dependent type theory which has replaced the usual conversion relation with an SMT-based approach. F* makes no effort to keep quantifiers out of the constraints sent to its SMT solver, and hence does not (and cannot) have any decidability guarantees: the F* typechecking problem is undecidable. This best-effort view lets F* take maximum advantage of the solver, at the price of sometimes letting the typechecker loop. In contrast, λert has decidable, near-linear-time typechecking, because typing is fully syntax-directed and has no conversion relation.
The treatment of computational irrelevance in F* is similar in effect (though different in technical detail) to that of the ICC. As in the ICC, ghost arguments can affect typing but not computation, but there is no notion of a less-typed base language that ghosts can be erased to.

Fig. 11. stlc type denotations, parameterized by a monad M. The denotation of a term of type A lies in M⟦A⟧.

Fig. 12. Denotations for stlc terms, where M is the exception monad with a distinguished element error : M A for each type A.