Decalf: A Directed, Effectful Cost-Aware Logical Framework

We present decalf, a directed, effectful cost-aware logical framework for studying quantitative aspects of functional programs with effects. Like calf, the language is based on a formal phase distinction between the extension and the intension of a program, its pure behavior as distinct from its cost measured by an effectful step-counting primitive. The type theory ensures that the behavior is unaffected by the cost accounting. Unlike calf, the present language takes account of effects, such as probabilistic choice and mutable state. This extension requires a reformulation of calf's approach to cost accounting: rather than rely on a "separable" notion of cost, here a cost bound is simply another program. To make this formal, we equip every type with an intrinsic preorder, relaxing the precise cost accounting intrinsic to a program to a looser but nevertheless informative estimate. For example, the cost bound of a probabilistic program is itself a probabilistic program that specifies the distribution of costs. This approach serves as a streamlined alternative to the standard method of isolating a cost recurrence and readily extends to higher-order, effectful programs. The development proceeds by first introducing the decalf type system, which is based on an intrinsic ordering among terms that restricts in the extensional phase to extensional equality, but in the intensional phase reflects an approximation of the cost of a program of interest. This formulation is then applied to a number of illustrative examples, including pure and effectful sorting algorithms, simple probabilistic programs, and higher-order functions. Finally, we justify decalf via a model in the topos of augmented simplicial sets.


Introduction
The calf language [Niu et al. 2022a] is a full-spectrum dependent type theory that consolidates the specification and verification of the (extensional) behavior and (intensional) cost of programs. For example, in calf it is possible to prove that insertion sort and merge sort are extensionally equal (i.e. have the same input/output behavior), and also that the former uses quadratically many comparisons for a given input list, whereas the latter uses log-linearly many for the same input. Both programs are terms of the type theory, rather than encodings of programs internally to the type theory, and the stated properties are expressed as types.
It may seem, at first blush, that the stated properties cannot possibly be verified by typing: after all, if the sorting functions are equal as functions on lists, then how can they have different properties? Moreover, what does it mean, type-theoretically, for the two algorithms to require the stated number of comparisons? After all, it is not possible to determine a posteriori which piece of code is a comparison, much less determine how often it is executed. Furthermore, dependent type theory [Martin-Löf 1984] is given as an equational theory, so what would a comparison count even mean in such a setting?
The key to understanding how these questions are handled by calf lies in the combination of two developments in type theoretic semantics of programming languages: (1) The view of cost as a computational effect, implemented equationally via Levy's call-by-push-value [Kavvos et al. 2019; Levy 2003].
(2) The reformulation of phase distinctions [Harper et al. 1990] in terms of open and closed modalities from topos theory, emanating from Sterling's Synthetic Tait Computability [Sterling 2021; Sterling and Harper 2021].
1.1 Polarity, Call-By-Push-Value, and Compositional Cost Analysis

In call-by-push-value, values of (positive) type are distinguished from computations of (negative) type (borrowing terminology from polarization in proof theory to distinguish the two classes of types). Following the notation of Niu et al., we will typically write arbitrary positive types as A, B, C and negative types as X, Y, Z. Although not all models of call-by-push-value are of this form, it is instructive to think of negative types as the algebras for a strong monad on the category of positive types and pure functions. Variables range over elements of positive types, which classify "passive" values such as booleans, numbers, tuples, and lists; computations inhabit negative types, which classify "active" computations, including the computation of values and, characteristically, functions mapping values to computations. Call-by-push-value can also accommodate type dependency more smoothly than either call-by-value or call-by-name [Ahman et al. 2016; Pédrot and Tabareau 2019; Vákár 2017].
The purpose of imposing polarity in calf's dependent type theory via call-by-push-value is to give a compositional account of cost; in particular, calf instruments code with its cost by means of a write-only "step counting" effect step^c(e) that annotates the computation e with an additional cost c, so determining the figure of merit for cost analysis. (In the case of sorting, the comparison operation is instrumented with an invocation of step counting on each usage.) The introduction of step counting as just described means that insertion sort and merge sort are not equal as functions on lists, exactly because they have different costs (but see Section 1.2). Moreover, their cost bounds can be characterized by saying, informally for the moment, that their step counts on completion are related to the length of the input in the expected way.
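To make the step-counting effect concrete, the following is a toy Python sketch of our own (not calf or decalf syntax): a pure computation of type F(A) is modeled as a pair (value, cost) in the writer monad over the cost monoid (ℕ, +, 0), and the comparison used by two sorting algorithms is instrumented with one unit of cost.

```python
# Toy model (our illustration, not calf syntax): a pure computation of type
# F(A) is a pair (value, cost); ret/bind form the writer monad over (N, +, 0).

def ret(a):
    return (a, 0)

def bind(e, f):
    a, c1 = e
    b, c2 = f(a)
    return (b, c1 + c2)

def step(c, e):
    a, c0 = e
    return (a, c0 + c)

def leq(x, y):
    # the figure of merit: each comparison charges one unit of cost
    return step(1, ret(x <= y))

def insert(x, l):
    if not l:
        return ret([x])
    return bind(leq(x, l[0]), lambda b:
                ret([x] + l) if b
                else bind(insert(x, l[1:]), lambda r: ret([l[0]] + r)))

def isort(l):
    if not l:
        return ret([])
    return bind(isort(l[1:]), lambda r: insert(l[0], r))

def merge(l1, l2):
    if not l1:
        return ret(l2)
    if not l2:
        return ret(l1)
    return bind(leq(l1[0], l2[0]), lambda b:
                bind(merge(l1[1:], l2), lambda r: ret([l1[0]] + r)) if b
                else bind(merge(l1, l2[1:]), lambda r: ret([l2[0]] + r)))

def msort(l):
    if len(l) <= 1:
        return ret(l)
    mid = len(l) // 2
    return bind(msort(l[:mid]), lambda a:
                bind(msort(l[mid:]), lambda b: merge(a, b)))
```

On [5, 4, 3, 2, 1], both sorts return [1, 2, 3, 4, 5], but isort incurs 10 comparisons where msort incurs 7: once instrumented, the two programs are no longer equal, exactly as described above.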

A Phase Distinction Between Cost and Behavior
The calf type theory is therefore capable of expressing and verifying the usual sequential and parallel cost bounds on these two sorting algorithms, as well as for other algorithms as described in the paper of Niu et al. [2022a]. But what about their purely behavioral (extensional) equivalence? Here the second key idea comes into play, the introduction of a phase distinction, represented by a proposition (i.e. a type with at most one element) expressing that the extensional phase is in effect, written ¶_ext. When ¶_ext is true/inhabited, e.g. by assumption in the context, then the step counting operation is equal to the identity function, and hence the two sorting algorithms are deemed extensionally equal.
1.2.1 Purely Extensional Types. It is best to understand the proposition ¶_ext as a switch that collapses all the cost information when activated/assumed. A type A is then considered purely extensional when it "maintains the fiction" that ¶_ext is true: phrased more precisely, a purely extensional type is one for which the constant map A → (¶_ext → A) is an isomorphism in the logical framework. Any type A can be restricted to its extensional part, namely the function space ◯A ≔ (¶_ext → A). This extensional modality ◯ is also called an open modality in topos theory [Rijke et al. 2020].

1.2.2 Purely Intensional Types. Dually, a type is considered purely intensional when it maintains the fiction that ¶_ext is not true; put in precise terms, A is purely intensional if and only if ◯A is a singleton, or (equivalently) if the projection map ¶_ext × A → ¶_ext is an isomorphism. In either case, the idea is that a purely intensional type A is one that becomes trivial (i.e. the unit type) within the extensional phase. This facility is used by calf to allow cost profiling information to be stored intensionally and then stripped away automatically in the extensional phase.

Noninterference Between Intension and Extension
Under the phase distinction, intensional data has the noninterference property with respect to extensional data.

Proposition 1.1 (Noninterference). Let A be a purely intensional type, and let B be a purely extensional type. Then any function f : A → B is constant, i.e. there is some b₀ : B such that f = λ_. b₀.
Noninterference is crucial for realistic cost analysis: instrumenting programs with profiling data would not be conservative if the behavior of formalized programs could depend on that profiling data. Another reflection of noninterference is that we have an isomorphism ◯(A → B) ≅ (◯A → ◯B), which means that the behavior of a function is a function of the behaviors of its inputs.

Compositional Cost Analysis for Effectful Code
To motivate the contributions of this paper it is helpful to make more precise the informal discussion about cost and correctness in calf. Using again the examples of insertion sort and merge sort, the following facts about them can be verified in calf:

(1) Insertion sort is quadratic: l : list(nat) ⊢ _ : isBounded_list(nat)(isort l, |l|²).
(2) Merge sort is log-linear: l : list(nat) ⊢ _ : isBounded_list(nat)(msort l, |l| lg |l|).
(3) They are extensionally equal: ◯(isort = msort).

Informally, the two sorting algorithms have the same input/output behavior, with insertion sort being notably less efficient than merge sort.
Here the relation isBounded_A(e, c), for a computation e : F(A) and cost c : ℂ, is defined by

  hasCost_A(e, c) ≔ (a : A) × (e = step^c(ret(a)))
  isBounded_A(e, c) ≔ (c′ : ℂ) × (c′ ≤_ℂ c) × hasCost_A(e, c′)

The first defines the cost of a computation of type F(A) by an equation stating that it returns a value of type A with the specified cost, expressed as an equation between computations. The second states that the cost of a computation is at most the specified bound. As long as the bounded expression is pure (free of effects other than the cost accounting itself), these definitions make good sense, and indeed these formulations have been validated empirically in the worst-case analysis of several algorithms [Niu et al. 2022a]. However, many algorithms rely on effects for their correctness and/or efficiency. For example, randomized algorithms require probabilistic sampling to ensure a good distribution on cost. Other algorithms rely on effects such as errors, nondeterminism, and mutable storage for correctness. As defined, calf does not account for any such behaviors. To be sure, it is entirely possible to extend calf with, say, a computation to probabilistically choose between two computations, and one can surely reason about the behavior of such programs, given sufficient libraries for reasoning about probabilities. The difficulty is with the definitions of hasCost and isBounded: if e has effects other than cost profiling, then the definition of hasCost is not sensible, because it neglects those effects. For example, if e has effects on mutable storage, then the "final answer" must not only reflect the stepping effect, but also any "side effects" that it engenders as well. In the case of randomization the outcome of e, including its cost, is influenced by the probabilistic choice, which cannot, in general, be disregarded.
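In the toy pair model introduced earlier (our own illustration, not decalf itself), hasCost and isBounded become decidable checks on the cost component of a pure computation:

```python
# Toy model: a pure computation of type F(A) is a pair (value, cost).
def ret(a):
    return (a, 0)

def step(c, e):
    a, c0 = e
    return (a, c0 + c)

def has_cost(e, c):
    # hasCost_A(e, c): e = step^c(ret(a)) for some value a
    a, _ = e
    return e == step(c, ret(a))

def is_bounded(e, c):
    # isBounded_A(e, c): some c' <= c with hasCost_A(e, c');
    # in this model the actual cost component is the least such c'
    _, c0 = e
    return c0 <= c
```

Note that both checks inspect the single pair denoting the computation, which is exactly what fails once other effects are present: an effectful computation no longer denotes one (value, cost) pair.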
The definition of isBounded reflects a long-standing tendency in the literature to isolate a "mathematical" characterization of the cost of executing an algorithm by a function, typically recursively defined, that gives the number of steps taken as a function of (some abstraction of) its input. So, in the case of sorting, the functions |l|² and |l| lg |l| are mathematical functions that specify the number of comparisons taken to sort a list l. Often the characterization is given by a recurrence, which is nothing other than a total recursive function of the cost parameter. For example, Niu et al. [2022a] analyze the cost of Euclid's algorithm in terms of the inverse of the Fibonacci sequence. While desirable when applicable, it cannot be expected in general that the cost can be so characterized. For example, one difficulty with the classical approach arises when higher-order functions are considered. In truth the cost of a computation that makes use of a function argument cannot be abstractly characterized in terms of "pure" costs, precisely because the cost of the algorithm depends on the full behavior of that function.
Effects, such as probabilistic sampling, introduce a similar difficulty, even in the case that the outcome is determinate, because the cost cannot be specified without reference to the source of randomness and its distribution. To put it pithily, a computation that incurs cost sampled from a binomial distribution is best characterized by the computation that implements the binomial distribution itself. The central question that drives the present work is this: what better way to define the cost of an effectful program than by another effectful program? After all, calf is a full-spectrum dependent type theory capable of formulating a very broad range of mathematical concepts; any appeal to an extrinsic formulation would defeat the very purpose of the present work, which is to develop a synthetic account of cost analysis for effectful programs that extends the established calf methodology for the special case of pure functional programs.
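The slogan that a binomially distributed cost is best specified by the program implementing the binomial distribution can be played out in a toy semantics of our own devising (not the paper's model): a probabilistic computation denotes a finite distribution over (value, cost) pairs, and charging one step per heads among n fair flips yields exactly the binomial cost distribution.

```python
from fractions import Fraction
from collections import defaultdict

# Toy model: a probabilistic computation of type F(A) is a finite
# distribution over (value, cost) pairs.
def ret(a):
    return {(a, 0): Fraction(1)}

def bind(e, f):
    out = defaultdict(Fraction)
    for (a, c1), p in e.items():
        for (b, c2), q in f(a).items():
            out[(b, c1 + c2)] += p * q
    return dict(out)

def step(c, e):
    return {(a, c0 + c): p for (a, c0), p in e.items()}

def flip():
    # a fair coin, incurring no cost of its own
    return {(True, 0): Fraction(1, 2), (False, 0): Fraction(1, 2)}

def binomial_cost(n):
    # charge one step per heads among n fair flips
    if n == 0:
        return ret(())
    return bind(flip(), lambda h:
                step(1, binomial_cost(n - 1)) if h else binomial_cost(n - 1))
```

Here binomial_cost(2) denotes the distribution assigning probabilities 1/4, 1/2, 1/4 to costs 0, 1, 2: the cost specification is itself an effectful program, as the text proposes.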

decalf: a Directed, Effectful Cost-Aware Logical Framework
The key to achieving these goals is to reformulate the calf type theory to consider inequalities, as well as equalities, on programs; intuitively, these inequalities express relaxations of the cost profiling information in a program. Reflexivity of the preorder means that it is always valid to say, in effect, that a piece of code "costs what it costs." In this sense a cost analysis can never "fail", but of course it is usually desirable to characterize the cost of a program more succinctly and informatively using, say, a closed form, whenever possible. Transitivity of the preorder means that bounds on bounds may be consolidated, facilitating modular reasoning. As regards the calf methodology, in decalf the role of the hasCost relation is replaced by equational reasoning, and the role of the isBounded relation is replaced by inequational reasoning. Inequalities, like equalities, account for both cost and behavior (taking into account the extensional phase as appropriate).
Thus decalf may be described as a directed extension of calf to account for effects. The "directed" aspect is manifested by the inequational judgments: v ≤_A v′ for values of (positive) type A, and e ≤_X e′ for computations of (negative) type X.
• Intensionally, inequality relaxes costs: if c ≤ c′, then step^c(e) ≤ step^{c′}(e). In the pure case (with profiling as the only effect), the original calf methodology is recovered by defining isBounded via inequality, in alignment with the existing definition of hasCost:

  isBounded_A(e, c) ≔ (a : A) × (e ≤ step^c(ret(a)))

In the presence of other effects, though, this new definition of isBounded generalizes that of calf. For example, suppose e makes use of randomization; as long as all possible executions use at most c cost and return the same value, then we will have isBounded_A(e, c). Sometimes, we will wish to let the cost bound itself include effects. Then, we will use the inequality relation e ≤ e′ more generally.
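As an illustration of this inequality-based reading of isBounded in the presence of effects, consider a toy semantics of finite nondeterminism (our own hypothetical model, simplifying decalf's actual ordering): a computation denotes the set of its possible (value, cost) runs, and e ≤ step^c(ret(a)) is read run-wise as "every run returns a with cost at most c".

```python
# Toy model: a nondeterministic computation of type F(A) is the finite
# set of its possible runs, each a (value, cost) pair.
def ret(a):
    return frozenset({(a, 0)})

def step(c, e):
    return frozenset({(a, c0 + c) for (a, c0) in e})

def choose(e1, e2):
    # binary nondeterministic choice
    return e1 | e2

def is_bounded(e, c):
    # run-wise reading of e <= step^c(ret(a)): all runs agree on one
    # value, and every run stays within cost c
    return (len({a for (a, _) in e}) == 1
            and all(c0 <= c for (_, c0) in e))
```

A computation that branches between a cheap and an expensive path to the same value is bounded by the worst path's cost, whereas one whose runs disagree on the value admits no such deterministic bound.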
Furthermore, the inequality relation can compare programs at all computation types, whereas isBounded only compares at type F(A). To analyze a computation of type A → F(B) in calf, one must quantify over values a : A and then use isBounded at the result type F(B). In contrast, decalf allows computations of type A → F(B) to be compared and analyzed directly.

Simultaneous analysis of cost and correctness
The inequality relation accounts for both the cost (by inequality) and the correctness (by equality) simultaneously. In general, this is essential: the cost of later computations may depend on the behavioral/extensional part of earlier computations. For example, suppose reverse is the usual list reversal algorithm, with cost linear in the length of its input. Then knowing only the cost of a computation e : F(list(nat)) is not enough to determine the cost of bind(e; reverse), which depends on (the length of) the result of e. Thus we see that the intension-extension phase distinction counterposes the interference of behavior with cost to the noninterference of cost with behavior (Section 1.2.3).
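This dependence of later cost on earlier behavior can be seen concretely in the toy writer-monad model (our illustration, not decalf syntax): two computations with identical cost but different results yield different costs for the composite with reverse.

```python
# Toy model: a pure computation of type F(A) is a pair (value, cost).
def ret(a):
    return (a, 0)

def bind(e, f):
    a, c1 = e
    b, c2 = f(a)
    return (b, c1 + c2)

def step(c, e):
    a, c0 = e
    return (a, c0 + c)

def reverse(l):
    # linear cost: one step charged per element
    if not l:
        return ret([])
    return step(1, bind(reverse(l[1:]), lambda r: ret(r + [l[0]])))

# same cost 5, but different behavioral parts
e1 = step(5, ret([1, 2]))
e2 = step(5, ret([1, 2, 3, 4]))
```

Here bind(e1, reverse) costs 7 while bind(e2, reverse) costs 9, so no cost-only specification of e can determine the cost of bind(e; reverse).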
However, when the code exhibits noninterference of behavior with cost, we may work with a notion of a classical bound of e by cost c that arises by postcomposition with the unique map A → 1: writing ★ for the element of 1, the computation e : F(A) is classically bounded by c when bind(e; λ_. ret(★)) ≤ step^c(ret(★)) in F(1). This technique is useful for collapsing multiple possible return values that all have the same cost (Sections 3.2 and 3.3), and it aligns well with traditional presentations of cost analysis, in which cost and correctness are considered independently as a final result.
Synopsis. The remainder of this paper is organized as follows: In Section 2 we define the decalf type theory, which is used in the remainder of the paper. In Section 3 we formulate algorithms (some with effects) and derive their cost bounds. In Section 4 we justify decalf topos-theoretically. In Section 5 we summarize results, and in Section 6 we suggest directions for future work.

Related Work
As regards related work, the principal reference is Niu et al. [2022a], on which the present paper is based. Therein is provided a comprehensive comparison to related work on formalized cost analysis, all of which applies as well to the present setting. We call attention, in particular, to the central role of the (dependent) call-by-push-value formulation of type theory [Levy 2003; Pédrot and Tabareau 2019], and to the application of synthetic Tait computability [Sterling 2021] to integrate intensional and extensional aspects of program behavior. In subsequent work, Grodin and Harper [2023] perform amortized cost analyses coinductively by proving an equality between a realistic implementation of a data structure and a simpler specification implementation; we draw inspiration from this perspective, here considering inductive algorithms and generalizing to inequality.
Directed type theory and synthetic domain theory. The decalf type theory and its built-in inequality relation are closely related to the idea of directed type theory [Licata and Harper 2011; Riehl and Shulman 2017], which generalizes Martin-Löf's (bidirectional) identity types to account for directed identifications. Another important input to decalf's design is synthetic domain theory [Hyland 1991; Phoa 1991], in which types are also equipped with an intrinsic preorder. Both of these inputs can be seen in our presheaf model of decalf (Section 4), which resembles both traditional models of higher category theory and directed type theory and (pre)sheaf models of synthetic domain theory [Fiore and Rosolini 1997, 2001]. Our method of isolating presheaves that behave like preorders via internal orthogonality comes from Fiore [1997], whose ideas we have combined with the modern accounts of internal orthogonal reflection of Christensen et al. [2020] and Rijke et al. [2020].
Integrating cost and behavior. As observed by Niu et al. in their work on calf, obtaining a cost bound on a program frequently depends on simultaneously verifying a behavioral invariant of the code and data structures involved. This requirement is fulfilled in the setting of calf because programs are nothing more than terms in a dependent type theory equipped with a cost effect; consequently one can use dependent type theory as a rich specification language for program behavior. In this respect the decalf verification ethos remains unchanged from that of calf: although decalf lacks general dependent sums as a theory, we may still use the judgmental dependent types at the level of the logical framework (see Section 2.1) to encode the necessary program behaviors.
On the other hand, one may take a more stratified view of programs vs. logic, an approach exemplified by Radiček et al. [2017] in the context of relational cost analysis. To handle behavioral properties of programs, the authors introduce a type system UC by integrating a version of HOL with a suitable equational theory on the cost monad. Aside from the well-known differences between verification in dependent type theory and program logics, the type system of Radiček et al. differs from decalf along several axes. First, UC is designed around the verification of pure functional programs, whereas decalf is designed around the modular accommodation of different (possibly interacting) effects. Second, whereas UC works with additive costs, decalf may be instantiated with any cost monoid. Lastly, Radiček et al. also propose a type system for relational cost analysis. In this paper we focus on unary program analysis to isolate the new ideas brought forth in decalf, but it should be possible to incorporate the techniques of relational cost analysis into a dependently-typed setting.
Cost analysis of higher-order functions. In calf, cost bounds are "global" in the sense that cost refinements such as hasCost and isBounded from Section 1.3 are only defined for the type of free computations F(A). In contrast, the presence of an intrinsic preorder structure on every type in decalf allows us to easily express the cost bounds of higher-order functions as functions themselves (see Section 3.3). In this sense decalf represents a departure from the standard dogma held by the cost analysis community, in which cost as a notion is categorically segregated from programs. In our view this stratification often invites unnecessary duplication of structures, especially in the analysis of higher-order functions. This can be seen e.g. in the work of Rajani et al. on a type system called λ-amor for higher-order amortized analysis. Roughly, λ-amor deals with the pain points of traditional potential-based type systems by properly accounting for the affine structure of higher-order potential-carrying types. To accurately represent the cost structure of higher-order functions, λ-amor is also equipped with a language of indices resembling a simple functional language. Indices are a classic example of a "shadow language" in the sense that they simply abstract over the existing constructs of the associated programming language. By taking seriously the dictum "cost bounds are programs", we may dispense with such duplications of effort in decalf by means of internal cost refinements that are defined using ordinary type-theoretic constructions.

Algebraic effects
In Section 3, we extend decalf with various effect structures: nondeterminism, probabilistic choice, and a simple form of global state. Each effect is specified via an algebraic theory, drawing from foundational ideas on algebraic effects [Plotkin and Power 2002]. Note that here, we do not support handling of effects.

The decalf Type Theory
The decalf type theory is a dependent extension of Levy's call-by-push-value framework [Levy 2003], in the style of Pédrot and Tabareau [2019]. As in that work, the decalf type theory includes both positive and negative dependent functions and internalized equality and inequality types (more on the latter shortly).
In contrast to op. cit., decalf does not have dependent sums, but it does have positive products, non-dependent sums, and inductive types. In this regard, the decalf type theory is simpler than both calf [Niu et al. 2022a] and cbpv [Pédrot and Tabareau 2019].
The importance of the call-by-push-value-inspired formulation is that it is compatible with effects. In the case of calf, the sole effect under consideration is cost profiling, or step-counting, which is used to express the cost of a computation. For example, in the case of sorting, the comparison function is instrumented with a step operation that serves to count the number of comparisons and permits the cost of sorting to be expressed in terms of this fundamental operation. In terms of semantics, the manifestation of the profiling effect is that computation types are interpreted as algebras for the writer monad, which accumulates the total cost of the step operations that have been executed. Various forms of cost accounting (both sequential and parallel) are accounted for by parameterizing the type theory with respect to an ordered cost monoid. The present work seeks to extend calf to consider programs that engender effects besides profiling.
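The parameterization by an ordered cost monoid can be sketched in the toy writer-monad model as follows (our illustration; the max monoid below is shown only to exhibit monoid-dependence and is not claimed to be calf's actual parallel cost structure):

```python
# Writer-monad model parameterized by a cost monoid (op, unit).
def make_model(op, unit):
    def ret(a):
        return (a, unit)

    def bind(e, f):
        a, c1 = e
        b, c2 = f(a)
        return (b, op(c1, c2))

    def step(c, e):
        a, c0 = e
        return (a, op(c, c0))

    return ret, bind, step

# sequential accounting: add the steps
ret_s, bind_s, step_s = make_model(lambda x, y: x + y, 0)
# a hypothetical "max" monoid, purely to illustrate monoid-dependence
ret_m, bind_m, step_m = make_model(max, 0)

def two_phases(ret, bind, step):
    # charge 1 step, then 2 steps, in sequence
    return bind(step(1, ret(())), lambda _: step(2, ret("done")))
```

The same program denotes ("done", 3) under the additive monoid but ("done", 2) under max: the notion of accumulated cost is entirely a parameter of the model.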
Phase distinction. As with calf, the decalf type theory includes a phase distinction between extensional and intensional aspects of programs. As in calf, the extensional phase is governed by a proposition (a type with at most one element) named ¶_ext, whose assumption renders step counting inert. However, in contrast to calf, the expression of cost bounds in the presence of effects is much more delicate. In calf the outcome of a closed computation must be of the form step^c(ret(v)), which specifies both the cost, c, and the final value, v, of that program. However, in the presence of effects, this simple characterization is not available. For example, the outcome of a probabilistic program is a distribution of such costs and behaviors derived from the distribution of its randomized inputs. Similarly, the outcome of an imperative computation must include accesses to the global state, because the final value may well refer to it. As in calf, the "open" modality ◯A ≔ (¶_ext → A) is used to disregard the profiling effects and speak of pure behavior.
Program inequalities. To account for effects, the specification and verification of cost bounds in decalf is markedly different from that in calf. The most natural, and most general, way to specify the outcome of an effectful computation is by another effectful computation. Thus, just as a randomized algorithm uses coin flips to guide its behavior, so the cost specification correspondingly makes use of coin flips in its formulation. This might suggest an equational formulation of cost, until one considers that algorithm analysis is typically phrased in terms of upper bounds on cost. In calf this was handled by the preorder on the cost monoid; to account for it in decalf, the essential move is to introduce an approximation ordering among the programs of a type that permits these bounds to be relaxed. Thus, upper bounds on cost are expressed using the approximation ordering, written e ≤ e′, where e′ captures some (weakening of) the cost of e.
In particular, in the case of purely functional programs (apart from step counting), the expression e′ can, as in calf, be taken to have the form step^c(ret(v)). At the other extreme, the reflexivity of a preorder ensures that e′ may always be taken to be e itself, specifying that "e costs whatever it costs"! This remark is not entirely frivolous. For one thing, there certainly are situations in which no better bound can be proved without significant simplifying assumptions. These situations arise especially when using higher-order functions whose cost and behavior may well be highly sensitive to the exact input, and not just some approximation thereof. Here approximation comes to the rescue, allowing one function to be approximated by another, and relying on the monotonicity of the approximation ordering to derive a useful upper bound. For example, let map be the usual higher-order list mapping function taking a function as an argument (Example 3.12). If f ≤ f′, then map f ≤ map f′, and the latter may well admit a meaningful upper bound: if, say, f′ has constant cost, then, by transitivity of the approximation preorder, that constant may be used to derive a useful linear-time upper bound for that instance of list mapping.
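The map example can be played out in the toy writer-monad model (our illustration; the functions f and f_bound below are hypothetical): an input-sensitive function is bounded pointwise by a constant-cost one, and the bound lifts through map by monotonicity.

```python
# Toy model: a pure computation of type F(A) is a pair (value, cost).
def ret(a):
    return (a, 0)

def bind(e, f):
    a, c1 = e
    b, c2 = f(a)
    return (b, c1 + c2)

def step(c, e):
    a, c0 = e
    return (a, c0 + c)

def map_(f, l):
    if not l:
        return ret([])
    return bind(f(l[0]), lambda b:
                bind(map_(f, l[1:]), lambda r: ret([b] + r)))

def f(n):
    # input-sensitive cost, between 1 and 3 units
    return step(n % 3 + 1, ret(n * n))

def f_bound(n):
    # same behavior, constant cost 3: f(n) <= f_bound(n) pointwise
    return step(3, ret(n * n))
```

On [1, 2, 3, 4], map_(f, ...) returns ([1, 4, 9, 16], 8), while map_(f_bound, ...) returns the same values at cost 3 · |l| = 12: a linear bound obtained from the constant bound on f by transitivity, without inspecting f's exact cost structure.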

Presentation of decalf in a Logical Framework
Following the formulation of Niu et al. [2022a], the decalf type theory is defined as a signature in an extensional, higher-order, dependently typed logical framework (LF). Specifically, a type theory is described by a list of constants in a version of extensional type theory with dependent product and sum types. The adequacy of the LF presentation of decalf with respect to the conventional presentations with contexts follows from the conservativity of locally cartesian closed categories [Gratzer and Sterling 2020] over categories with representable maps [Uemura 2021, 2023].
The object theory decalf is specified with judgments declared as constants ending in Jdg, handling the binding and scope of variables via the framework-level dependent product (x : A) → B(x).

Dependent Call-By-Push-Value Structure
First, we give a presentation of our core dependent call-by-push-value calculus: types in decalf are divided into two classes, the positive value types tp⁺ and the negative computation types tp⊖. Each type has a corresponding collection of terms, tm⁺(A) and tm⊖(X). Following Niu et al. [2022a], we define computations as tm⊖(X) ≔ tm⁺(U(X)), leading to a less bureaucratic version of call-by-push-value in which thunk and force are identities.
The two levels are linked by a pair of modalities, F(A) and U(X), that, respectively, classify computations of a given value type and reify such computations as values. The ret and bind constructs return values and sequence computations, as would be expected in a language with effects. Semantically, computation types are interpreted as algebras for a monad, which provides the structure required to consolidate effects as a computation proceeds. Equality of values, v =_A v′, and computations, e =_X e′, is the equality provided by the logical framework.

Type Structure
The decalf language includes both positive and negative dependent product types. As to the former, these are functions that map values to values, and hence must not have effects. Their applications constitute complex values in the sense of [Levy 2003, §3]. We make use of them for readability; for example, we freely use arithmetic operations to describe the amount of cost being incurred. The latter map values to computations, and hence may have effects, both profiling and any other effect that may be added to the language. The decalf type theory includes equality types with equality reflection, with the consequence that function equality is extensional (and hence undecidable in cases of interest). The language is also equipped with standard positive types, including (non-dependent) sum types and inductive types (such as natural numbers and lists), whose definitions are included in Appendix A. Importantly, the elimination forms for these types take as motives families of judgments, not just types. The reason for this is to support so-called "large eliminations," families of types indexed by sums and inductive types.

Reasoning About Extensional Properties Using ¶ ext
In general, programs, equations, and inequalities in decalf take cost structure into account. To consider only cost-ignoring behavioral properties, we study programs in the fragment of decalf under the extensional phase, ¶_ext. Here, ◯(−) is the extensional modality, governing behavioral specifications in the sense that any type in the image of ◯ is oblivious to computation steps. One such behavioral specification is the extensional equality between programs, rendered in decalf as the type ◯(e = e′).

Preorder Structure on Types
The approximation preorder on decalf values and computations is induced by the following principles: (1) all functions are (automatically) monotone; (2) in the extensional phase, the preorder coincides with equality.

The extensionality requirement expresses that the approximation ordering is solely to do with cost: when cost effects are suppressed, the preorder is just equality, and thus has no effect on the behavior of the program. We render these conditions formally in the logical framework. By the definition of tm⊖(X), computations of type X are compared at the value type U(X). We may also internalize this judgmental structure as a type leq_A(v, v′) of proofs that v ≤_A v′.

Cost Monoid: Cost Structure of Programs
Cost-aware programs carry quantitative information through elements of the cost monoid ℂ, which is a positive type in decalf. Because different algorithms and cost models require different notions of cost, we parameterize decalf by a purely intensional monoid (ℂ, +, 0) in the sense of Section 1.2.2; in other words, ℂ becomes a singleton in the extensional phase. Since every type is equipped with an intrinsic preorder, this automatically makes (ℂ, +, 0, ≤_ℂ) a preordered monoid.

Cost as an Effect in decalf
As in calf, costs in decalf are formulated in terms of a profiling computation step   () that is parameterized by a computation type  and an element of the cost type .The meaning of step   () is to charge  units of cost and continue as ; consequently, we require that step is coherent with the monoid structure on C. step 9 Observe that this implies that we have step  ( ) =  for every cost  : C in the extensional phase.
In addition, we require equations governing the interaction of step with other computations, as is standard in call-by-push-value.
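To build intuition for these laws, the following is a hedged Python sketch in a writer-monad reading, where a computation is modeled as a pair (cost, value); the names ret, step, and bind are illustrative stand-ins, not decalf syntax.

```python
def ret(v):
    # ret(v): a computation that returns v at zero cost
    return (0, v)

def step(c, e):
    # step^c(e): charge c units of cost, then continue as e
    cost, v = e
    return (c + cost, v)

def bind(e, f):
    # sequencing accumulates cost, as in the call-by-push-value equations
    cost, v = e
    cost2, v2 = f(v)
    return (cost + cost2, v2)

# Coherence with the cost monoid (C, +, 0):
e = ret(42)
assert step(0, e) == e                        # step^0(e) = e
assert step(1, step(2, e)) == step(1 + 2, e)  # step composes additively
# step commutes with sequencing:
assert bind(step(5, e), lambda x: ret(x + 1)) == step(5, bind(e, lambda x: ret(x + 1)))
```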

Verification Examples
Equipped with equality and inequality of programs, we now provide examples of how a cost analysis may be performed. In place of a cost bound, we simply use another program. The purpose of cost analysis, then, will be to condense the details of a complex program.
In the forthcoming examples, we will instantiate the cost model to (ℕ, ≤ℕ, +, 0), the natural numbers with the usual ordering and additive monoid structure. Note that this object is not the inductive type of natural numbers, whose preorder is discrete. We note that ℕ is a purely intensional type and provide a description of ℕ as a quotient inductive type [Kaposi et al. 2019] in Section 4.5.1.

Pure Algorithms
First, we discuss pure algorithms in which cost is the only available effect, presenting the work of Niu et al. [2022a] in decalf. In this case, a cost bound will typically look like a closed form for the program at hand.

Example 3.1 (Doubling function). Consider the function double that recursively doubles a natural number, given in Fig. 1, where we annotate with one unit of cost per recursive call. Its behavior can be concisely specified via a non-recursive closed form, double_bound ≔ λn. step^n(ret(2n)). Here, 2n is a complex value used to specify the return value. By induction, we may prove that double is equal to the closed form double_bound. This fact constitutes a proof of both the cost and correctness of double. As a corollary, we may isolate a correctness-only proof using the extensional phase: ◯(double = λn. ret(2n)). ⌟

Although this approach is new, the same reasoning is valid in calf. However, not every algorithm has a simple closed form; sometimes, we may only wish to give an upper bound.
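In the writer-monad reading sketched earlier (a hedged illustration, not decalf syntax), Example 3.1 can be replayed concretely: double charges one unit per recursive call, while the closed form charges n units up front.

```python
def double(n):
    # doubles n, charging one unit of cost per recursive call
    if n == 0:
        return (0, 0)                  # ret(0)
    cost, v = double(n - 1)
    return (cost + 1, v + 2)           # step^1(ret(v + 2))

def double_bound(n):
    # the non-recursive closed form: step^n(ret(2n))
    return (n, 2 * n)

# double equals its closed form, giving cost and correctness at once
assert all(double(n) == double_bound(n) for n in range(100))
```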
Example 3.2 (List insert). Consider the implementation of insert, a subroutine of insertion sort, in Fig. 2. Here, we count the number of comparison operations performed. The cost incurred by the computation insert x l depends on the particular elements of the list l in relation to the value x. To characterize its cost precisely, we could of course take insert as its own cost bound, since insert is equal to itself by reflexivity. However, this bound provides more detail than a client may wish for. Rather than characterize this cost precisely, then, it is common to give only an upper bound.
In the worst case, insert x l incurs |l| cost, when x is placed at the end of l. Thus, we may define insert_bound ≔ λx. λl. step^{|l|}(ret(insert_spec x l)). Here, insert_spec is a complex value specifying the intended behavior of the insert computation. Using program inequality of decalf, we prove by induction that insert is bounded by this closed form in the sense that insert ≤ insert_bound. As in Example 3.1, this fact constitutes a proof of both the cost and correctness of insert. In terms of cost, it shows that insert x l incurs at most |l| cost. The inequality of programs only has an impact on cost; thus, this proof also guarantees that insert x l returns insert_spec x l. Since inequality is extensionally equality, we achieve the following extensional correctness guarantee as a corollary of the inequality above: ◯(insert = λx. λl. ret(insert_spec x l)). ⌟

Remark 3.3. The implementation of insert_spec here corresponds to a component of the cost bound proofs of Niu et al. [2022a]. There, the specification is implemented inline in the cost bound proof as the value that the computation is proved to return. Here, we reorganize the data, giving the specification implementation first and using it to prove a program bound.
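The inequality insert ≤ insert_bound can be spot-checked in the same writer-monad sketch, charging one unit per comparison; this is a hedged illustration in which insert_spec is modeled by sorted insertion.

```python
def insert(x, l):
    # insert x into sorted list l, charging one unit per comparison
    if not l:
        return (0, [x])
    if x <= l[0]:                      # one comparison, then stop
        return (1, [x] + l)
    cost, rest = insert(x, l[1:])      # one comparison, then recur
    return (1 + cost, [l[0]] + rest)

def insert_spec(x, l):
    # purely behavioral specification (assumes l is already sorted)
    return sorted([x] + l)

# insert <= insert_bound: at most |l| comparisons, with the specified output
for l in ([], [1], [1, 3, 5], [2, 2, 4, 8]):
    for x in range(10):
        cost, out = insert(x, l)
        assert cost <= len(l) and out == insert_spec(x, l)
```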
Example 3.4 (Insertion sort). In Fig. 3, we show the implementation of the insertion sort algorithm, using the auxiliary function insert from Fig. 2. As in Example 3.2, we may define a bounding program isort_bound; then, we may prove by induction that isort ≤ isort_bound.
Although it is less common than proving an upper bound, we may also show a lower bound on a computation. Since insert x l costs at least 1 on a non-empty list l and isort is length-preserving, we can show that isort l incurs at least |l| − 1 cost. Adapting the work of Niu et al. [2022a], we may also define the merge sort algorithm, msort, and prove that it is bounded by cost |l| lg |l|. Using the fact that program inequality is extensionally equality, though, we may recover the proof that these two sorting algorithms are extensionally equal.

Theorem 3.5. In the extensional phase, we have that isort = msort.
Proof. Extensionally, the inequalities in the cost bounds are equalities and the cost operation is trivialized. So, isort = isort_bound = λl. ret(sort_spec l) = msort_bound = msort. □

Fig. 5. Quicksort algorithm [Hoare 1961, 1962], where the choose auxiliary function chooses a pivot nondeterministically. As in Figs. 2 and 3, the cost instrumentation tracks one unit of cost per comparison.

Effectful Algorithms
Treating cost bounds as program inequalities, we can extend decalf with various computational effects and prove bounds on effectful programs.

Nondeterminism
First, we consider the nondeterminism effect, specified in Fig. 4. Here, we specify finitary nondeterministic branching as a semilattice structure. The identity element for a binary nondeterministic branch is called fail, since nullary nondeterminism is akin to failure.

Example 3.6 (Quicksort). In Fig. 5, we define a variant of the quicksort algorithm [Hoare 1961, 1962] in which the pivot is chosen nondeterministically. The number of comparisons computed by qsort l depends on which element is chosen as a pivot; in the worst case, it can compute |l|² comparisons. We may prove this by induction. The fact that qsort l is nondeterministic is not reflected in this bound: regardless of the chosen pivot, it always incurs at most |l|² cost and returns sort_spec l. In other words, the use of nondeterminism was benign: extensionally, it is invisible, rendering the program effect-free. ⌟

In the following examples, we consider situations where the effect is not benign and thus appears in the bounding program.
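For small inputs, the worst-case claim can be checked exhaustively by maximizing the comparison count over every pivot choice, i.e. over all branches of the nondeterministic program. This is a hedged Python sketch of the bound itself, not of decalf's inductive proof.

```python
def qsort_max_cost(l):
    # worst-case comparison count over all nondeterministic pivot choices
    if len(l) <= 1:
        return 0
    worst = 0
    for i in range(len(l)):
        pivot, rest = l[i], l[:i] + l[i + 1:]
        lo = [x for x in rest if x <= pivot]
        hi = [x for x in rest if x > pivot]
        # partitioning performs len(rest) comparisons at this node
        worst = max(worst, len(rest) + qsort_max_cost(lo) + qsort_max_cost(hi))
    return worst

# regardless of the pivots chosen, the cost is at most |l|^2
for l in ([], [3], [3, 1], [5, 2, 9, 1], [4, 4, 2, 7, 1]):
    assert qsort_max_cost(l) <= len(l) ** 2
```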
Example 3.7 (List lookup). In Fig. 6, we define a function lookup that finds the element of a list at a given index. Here, the cost model is that one cost should be incurred per recursive call. In case the list is shorter than the desired index, the program performs the fail effect, terminating the program. This effect is not benign: even extensionally, the impact of fail is visible. Therefore, any bound for this program must involve the fail effect, as well. We can define an exact bound for lookup. Using the monotonicity of inequality and the laws for branch, it is then possible to bound a branching program e built from lookup. However, for programs with more branching, specifying the correctness alongside the cost may add undesired noise. Thus, one may wish to bound the computation e; ret(★) instead: e; ret(★) ≤ step^{12}(ret(★)). ⌟
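A hedged sketch of this example in Python, using None to model the fail effect (and assuming the law that step commutes with fail, so fail discards any pending cost); the bound here is exact, as in the text.

```python
def lookup(l, i):
    # find element i, charging one unit per recursive call;
    # None models the fail effect, which also absorbs any cost
    if not l:
        return None                    # fail
    if i == 0:
        return (0, l[0])               # ret(l[0])
    r = lookup(l[1:], i - 1)
    if r is None:
        return None
    cost, v = r
    return (1 + cost, v)               # step^1(...)

def lookup_bound(l, i):
    # the exact bound: step^i(ret(l_i)) when i is in range, fail otherwise
    return (i, l[i]) if i < len(l) else None

for l in ([], [7], [7, 8, 9]):
    for i in range(5):
        assert lookup(l, i) == lookup_bound(l, i)
```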

Probabilistic Choice
Similar to nondeterminism, we consider the (finitely-supported) probabilistic choice effect, where the nondeterminism is weighted by a rational number between 0 and 1. We specify the signature in Fig. 7 as the axioms for a convex space, where flip_p(e0, e1) takes the p-weighted combination of computations e0 and e1: with probability p, compute e1, and with probability 1 − p, compute e0.
Fig. 7. Specification of the flip primitive for finitary probabilistic choice, which forms a convex space. The law for interaction with the step primitive is included. Like in Fig. 4, the laws for interactions with computation type primitives are omitted for brevity, as they are analogous to those described in Section 2.7.

The worst-case analysis of a probabilistic algorithm is analogous to the worst-case analysis of its nondeterministic counterpart. For example, one could define a randomized variant of qsort and show that its worst-case behavior is quadratic. Some algorithms, though, can be given tighter bounds based on their probabilistic weights.
Example 3.9 (Random sublist). In Fig. 8, we describe sublist, an algorithm for selecting a random sublist of an input list. We keep each element with probability ½ and count the number of ::-nodes as our cost model. It is not obvious how to simplify this code on its own, since the output is dependent on the effect. However, we can exactly bound the algorithm λl. (sublist l; ret(★)) that ignores the returned list by the binomial distribution, given in Fig. 9. ⌟
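The binomial cost distribution can be confirmed for a small list by enumerating all coin sequences; a hedged sketch, where the cost of a run is the number of kept elements (one unit per :: node built).

```python
from itertools import product
from math import comb

def sublist_cost(l, coins):
    # cost model: one unit per :: node in the output, i.e. per kept element
    return sum(1 for _x, keep in zip(l, coins) if keep)

n = 4
l = list(range(n))
hist = [0] * (n + 1)
for coins in product([False, True], repeat=n):   # all 2^n fair-coin outcomes
    hist[sublist_cost(l, coins)] += 1

# cost k occurs with weight C(n, k) out of 2^n: the Binomial(n, 1/2) law
assert hist == [comb(n, k) for k in range(n + 1)]
```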
Example 3.10 (State-dependent cost). For this example, let the state type be S = nat. Recall the double function from Example 3.1, and consider the program e ≔ get(λs. bind s′ ← double s in set(s′; ret(s))) that doubles the global state and returns its original value. Here, pervasive effects are used, so they must appear in the bounding program; notice that the cost even depends on the result of the get operation. A tight bound for e is e_bound ≔ get(λs. set(2s; step^s(ret(s)))), specifying that e_bound must read the global state s, set the state to 2s, incur s cost, and then return s; thus e = e_bound. ⌟

This example illustrates once again that for general effectful programs, the effects must be available in the language of cost bounds, thus shattering the illusion that there can be a simple "shadow language" of cost bounds as discussed in Section 1.5.
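A hedged state-passing sketch of this example: a computation takes the state s and yields a triple (cost, value, new state); the names e and e_bound follow the text, and double is the cost-n doubling function of Example 3.1.

```python
def e(s):
    # get(s); bind s' <- double(s), which incurs s cost; set(s'); ret(s)
    cost, s_doubled = s, 2 * s
    return (cost, s, s_doubled)        # (cost incurred, value, final state)

def e_bound(s):
    # get(s); set(2s); step^s; ret(s)
    return (s, s, 2 * s)

# the bound is tight: e = e_bound on every initial state
assert all(e(s) == e_bound(s) for s in range(20))
```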

Higher-Order Functions
Thus far we have considered first-order functions, where the function inputs were simple data. What about higher-order functions that take suspended computations as input? Given the definitions of hasCost and isBounded from Section 1, it is unclear what a bound for a higher-order function should be, especially if the input computation is effectful. Using program equality and inequality, though, a bound for a higher-order function is just another higher-order function.
Example 3.11 (Twice-run computation). In Fig. 11, we define a function twice that takes as input a suspended computation e : U(F(nat)), runs it twice, and sums the results. In general, e could be costly, probabilistically sample, interact with mutable state, and more. Therefore, the only plausible choice of bound for twice e would be twice e itself, since twice e ≤ twice e. This aligns with standard practice in the algorithms literature: for arbitrary computation inputs, the cost bound and behavioral correctness of a higher-order function depend on the specific implementation details of the program. If some properties about the input e are known, though, we may be able to simplify. For example, if e; ret(★) ≤ step^1(ret(★)), we can prove that twice e; ret(★) ≤ step^2(ret(★)). In other words, if we know that e incurs at most 1 cost and always returns, we can show that twice e incurs at most 2 cost and always returns. ⌟

Example 3.12 (List map). In a similar direction, consider the implementation of the map function on lists in Fig. 12. If nothing is known about the input f, then map f is the only reasonable bound for itself. However, if some properties about f are known, we can provide a more concise bound.
(1) Suppose that for all a, it is the case that f a; ret(★) ≤ step^c(ret(★)), for some fixed cost c. Then for all lists l, we have map f l; ret(★) ≤ step^{c·|l|}(ret(★)). In other words, if each application of f incurs at most c cost and returns, we can show that map f l incurs at most c·|l| cost and always returns, the standard bound on map for total functions.

(2) Suppose that for all a, it is the case that f a; ret(★) ≤ binomial n, for some fixed n. Then, for all lists l, we have map f l; ret(★) ≤ binomial (n·|l|). In other words, if the cost of each application of f is bounded by the binomial distribution with n trials, we can show that map f l is bounded by the binomial distribution with n·|l| trials. ⌟

This style of reasoning aligns well with existing on-paper techniques. If details about an input computation are known, then a concise and insightful bound can be derived. Otherwise, one must examine the program in its entirety to understand the behavior.
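Property (1) can be spot-checked in the writer-monad sketch: if every application of f costs at most c, then map f costs at most c·|l|. The particular f below is an arbitrary illustrative choice.

```python
def map_cost(f, l):
    # run f on every element, accumulating cost; f returns (cost, value)
    total, out = 0, []
    for a in l:
        cost, v = f(a)
        total += cost
        out.append(v)
    return (total, out)

c = 3
f = lambda a: (min(a, c), a * a)       # each application costs at most c
for l in ([], [0], [1, 2, 3], [5, 0, 7, 2]):
    cost, out = map_cost(f, l)
    assert cost <= c * len(l)          # map f l; ret(*) <= step^(c|l|)(ret(*))
    assert out == [a * a for a in l]
```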

Parallelism
When calf is instantiated with the parallel cost monoid in the sense of Niu et al. [2022a], one obtains a theory for reasoning about a version of fork-join parallelism. Parallel composition is represented via an operation e1 ∥ e2, where we write c1 ⊗ c2 for parallel cost composition. In the decalf theory equipped with only the cost effect, one may continue to define and reason about parallel programs in this fashion. Moreover, because of the presence of the preorder structure, we automatically obtain a cost refinement lemma for parallel composition by monotonicity: if e1 ≤ e1′ and e2 ≤ e2′, then e1 ∥ e2 ≤ e1′ ∥ e2′. The interaction of parallelism with other effects is a difficult problem; we leave a proper theory to future work.

A presheaf model of decalf
Our goal is to construct a model of type theory that contains a non-trivial interpretation of the constructs of decalf: this must necessarily contain a universe of types equipped with a built-in preorder structure, as well as a phase distinction such that in the "extensional" phase, the inequality relations collapse to equalities. Although the technical development of this model brings together many sophisticated tools from category theory, the main ideas behind our construction can and should be explained intuitively. Proofs of these and further results appear in Appendix B.
Main Idea 1 (An interval type for automatic monotonicity). The first problem to solve when building a model of decalf is to devise a binary relation (⊑_A) ⊆ A × A on every type A such that any function A → B is automatically monotone. The solution to this problem, which was first discovered in the world of synthetic domain theory, is to define (⊑_A) uniformly in A by considering functions into A from an interval, i.e. some special type I equipped with two constants 0, 1 : I.
An interval (I, 0, 1) always induces a path relation a ⊑_A b ⟺ ∃p : I → A. p(0) = a ∧ p(1) = b, and this relation is automatically preserved by any function f : A → B. Suppose that a ⊑_A b and we wish to show that f a ⊑_B f b; by definition, we may choose some p : I → A such that p(0) = a and p(1) = b; then the map f ∘ p : I → B satisfies (f ∘ p)(0) = f a and (f ∘ p)(1) = f b, and so we have f a ⊑_B f b.
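The monotonicity argument can be illustrated in a finite combinatorial model (a hedged sketch, with I taken to be the two-point chain 0 ≤ 1 and finite preorders standing in for types): paths out of I are pairs of related elements, and postcomposition with any monotone map preserves them.

```python
def paths(leq):
    # monotone maps p : I -> A from the interval I = {0 <= 1} are exactly
    # pairs (p(0), p(1)) with p(0) <= p(1); these witness the path relation
    return {(a, b) for a in leq for b in leq if b in leq[a]}

# A: the chain x <= y <= z, encoded as element -> set of elements above it
A = {'x': {'x', 'y', 'z'}, 'y': {'y', 'z'}, 'z': {'z'}}
# B: the chain u <= v
B = {'u': {'u', 'v'}, 'v': {'v'}}
f = {'x': 'u', 'y': 'u', 'z': 'v'}     # a monotone map A -> B

# every such map preserves the path relation: postcompose the path with f
assert all((f[a], f[b]) in paths(B) for (a, b) in paths(A))
```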
Although the idea of an interval lets us define a binary relation on every type A, this relation enjoys almost none of the properties that we need in order to model decalf: (1) Extensional discreteness: (a ⊑_A b) does not necessarily imply (a = b).
(2) Path transitivity: It need not be the case that (⊑_A) is transitive, i.e. exhibits A as a preorder.
(3) Pointwise order: It need not be the case that functions have the pointwise order, i.e. we do not necessarily have f ⊑_{A→B} g if and only if ∀a : A. f a ⊑_B g a. We can solve all the problems above by using the categorical notion of orthogonality.
Main Idea 2 (Orthogonality). Let C be a type, and let f : A → B be a function; the concept of orthogonality is one way to make precise the idea that C behaves "as if" the map f : A → B were an isomorphism. We say that C is orthogonal to f : A → B when any function g : A → C can be extended to a unique function g! : B → C such that g! ∘ f = g. More succinctly, the precomposition map (− ∘ f) : (B → C) → (A → C) is required to be an isomorphism.
Many common structures may be characterized via orthogonality conditions. For instance, a set is a subsingleton if and only if it is orthogonal to the map ∗ : 2 → 1. Likewise, a poset is a complete partial order (cpo) if and only if it is orthogonal to ω ↪ ω̄, where ω is the poset of natural numbers with the usual order and ω̄ is the free extension of ω by a point at infinity.
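The first example can be verified directly for finite sets (a hedged sketch): a set C is orthogonal to ∗ : 2 → 1 exactly when precomposition C^1 → C^2 is a bijection, i.e. every map 2 → C is the restriction of a unique map 1 → C, which forces C to have at most one element.

```python
from itertools import product

def orthogonal_to_terminal(C):
    # precomposition with * : 2 -> 1 sends a map 1 -> C (an element c)
    # to the constant map 2 -> C at c; orthogonality says this hits
    # every map 2 -> C (injectivity of c |-> (c, c) is automatic)
    maps_from_2 = set(product(C, repeat=2))          # C^2
    image_of_precompose = {(c, c) for c in C}        # constants, from C^1
    return maps_from_2 == image_of_precompose

assert orthogonal_to_terminal([])                    # empty set: a subsingleton
assert orthogonal_to_terminal(['a'])
assert not orthogonal_to_terminal(['a', 'b'])        # two points: not orthogonal
```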
We can now summarize how orthogonality solves the three problems we identified above.
Main Idea 3 (Extensional discreteness by orthogonality). We can force (a ⊑_A b) to imply (a = b) by refining our specification of the interval: in particular, we shall require the interval I to be orthogonal to the unique function ⊥ → ¶ext. By our metaphor (Main Idea 2), this means that we want the interval I to "think" that ¶ext is false. From a mathematical point of view, this is equivalent to saying that I becomes a singleton under ¶ext, as we shall have ¶ext → (I ≅ 1). To see that this condition suffices, recall that (a ⊑_A b) is witnessed by some path p : I → A with p(0) = a and p(1) = b; under ¶ext any such p must be constant, and so we indeed have (a = b).
There are no reasonable conditions that we can impose on the interval I to ensure that each path relation (⊑_A) is transitive. Indeed, Fiore and Rosolini [1997, Proposition 1.2] have shown that only a slight strengthening of the transitivity condition would imply that I ≅ 2, and under these conditions it would follow that every type is discrete, i.e. we would have a ⊑_A b if and only if a = b. We likewise cannot hope for a condition on I that makes every function space have the pointwise order. In either case the best we can do is restrict our attention to a class of types that do have these desirable properties, and provide a universal way to approximate any given type by a type in this class; such a class of types is called a reflective subuniverse [Rijke et al. 2020].

Main Idea 4 (Reflective subuniverses). A reflective subuniverse is defined to be a class of types S such that for any type A, there exists a type A_S ∈ S and a map η_A : A → A_S to which every type B ∈ S is orthogonal. Recalling our metaphor, this means that every type in S thinks that A is isomorphic to A_S; types that do not lie in S would then see A_S as the "best approximation" of A by a type lying in S. Reflective subuniverses are exponential ideals, which means that if B lies in S, then we automatically have (A → B) ∈ S; in fact, the same applies to dependent function spaces.
Main Idea 5 (Transitivity and pointwise functions by orthogonality). It happens that given some finite collection of maps M, the class of types orthogonal to every map in M is a reflective subuniverse, assuming sufficiently powerful quotient and inductive types. Therefore, in order to obtain a reflective subuniverse of types A such that (⊑_A) is transitive and (⊑_{A→B}) is pointwise, it suffices to find orthogonality conditions that imply these properties. Using the interval I and pushouts we can indeed find a pair of maps such that if A is orthogonal to both, then the relation (⊑_A) is transitive and any function space B → A has the pointwise order. Thus we obtain a reflective subuniverse whose types satisfy all the desirable properties needed by decalf.
Taking stock, what exactly do we need to do in order to construct a model of decalf along the lines of Main Ideas 1 to 5? It will be sufficient to find a model of type theory with a proposition ¶ ext and an interval object (I, 0, 1) such that I is orthogonal to ⊥ → ¶ ext .From just this data, all the rest follows by the construction of reflective subuniverses from orthogonal classes.In the sections that follow we will develop the ideas explained intuitively above with more mathematical precision, culminating in an explicit construction of a specific model of decalf supporting a cartesian closed and order-preserving embedding from the category of preorders and monotone maps.

Topos-Theoretic Preliminaries
Much of this section will revolve around synthetic constructions in the internal language of an elementary topos, i.e. a cartesian closed category with finite limits and a subobject classifier. An elementary topos ℰ has an internal language, which is a form of extensional dependent type theory with a univalent universe of propositions (subsingleton types). We will favor working type-theoretically on the inside of ℰ rather than diagrammatically on the outside of ℰ; unless we say otherwise, all statements are to be understood as internal. We refer the reader to Awodey et al. [2021] and Maietti [2005] for further discussion of this type-theoretic language.

Definition 4.1. We define an elementary QWI-topos to be an elementary topos closed under QWI-types à la Fiore et al. [2021]. QWI-topoi are closed under a form of quotient inductive types dubbed QWI-types by Fiore et al. [2021], whence the name. Quotient inductive types allow the simultaneous definition of a type by generators and relations.
Example 4.2. Every category of Set-valued presheaves is a QWI-topos. ⌟

Orthogonality and Local Types

In the internal language of an elementary topos ℰ, we will state many important conditions in terms of orthogonality (Main Idea 2); by our convention, we shall always mean internal orthogonality. We shall also introduce an intermediate notion of suborthogonality.

Definition 4.3. Let C be a type, and let f : A → B be an arbitrary map. We say that C is suborthogonal (resp. orthogonal) to f : A → B when the induced precomposition map f∗ : C^B → C^A is a monomorphism (resp. isomorphism). If J is a type parameterizing a family of maps S = {f_j : A_j → B_j}_{j : J}, we say that C is orthogonal to S if and only if it is orthogonal to each f_j : A_j → B_j for j : J. We will also say that C is S-local when it is orthogonal to S.
When ℰ is an elementary QWI-topos, the S-local types are internally reflective in ℰ, meaning that every type has a "best approximation" by an S-local type.

Proposition 4.4 (Orthogonal reflection). Let ℰ be an elementary QWI-topos. In the internal language of ℰ, let S be a J-indexed family of maps for some type J; then for any type A, we may define an S-local type A_S and a map η_A : A → A_S such that every S-local type is orthogonal to η_A.
Proof. The orthogonal reflection can be constructed by means of a quotient inductive type, adapting the method of Rijke et al. [2020] from homotopy type theory to extensional type theory. □

Synthetic Preorders in the Presence of an Interval
Let ℰ be an elementary topos equipped with an interval object, i.e. an object I with two elements 0, 1 : I. We will work in the internal language of ℰ for the remainder of this section. The following definitions are adapted from Fiore and Rosolini [1997].
Definition 4.8 (The path relation). On any type A we may define a reflexive relation (⊑_A) ⊆ A × A, saying that a ⊑_A b if and only if there exists some path from a to b in A.

Definition 4.9 (Path-transitivity). A type A is called path-transitive when it is orthogonal to the cocartesian gap map δ : I ∨ I → I², where I ∨ I is the pushout gluing the endpoint 1 of one copy of I to the endpoint 0 of another.

Lemma 4.10 (Path-transitive types are preorders). If a type A is path-transitive in the sense of Definition 4.9, then the path relation ⊑_A is a preorder on A.
As remarked by Fiore and Rosolini [2001, §1.3], the converse to Lemma 4.10 need not hold. (We do not here assume that 0 ≠ 1.)
Boundary Separation

As it stands, there could be two distinct paths I → A that have the same endpoints. We wish to isolate the types A for which paths are uniquely determined by their endpoints; this property was dubbed boundary separation by Sterling et al. [2019, 2022] and Σ-separation by Fiore and Rosolini [2001]. The purpose of imposing boundary separation in our setting is to make the path order on function spaces pointwise, as we shall see in Lemma 4.15.
Definition 4.11. A type A is called boundary separated when it is suborthogonal to the boundary inclusion ∂ : 2 → I determined by the two endpoints of I.
Although the presentation of boundary separation in terms of suborthogonality is simple and elegant, it will later be advantageous to observe that boundary separation can also be seen as an orthogonality property, so as to incorporate it into a reflective subcategory via Proposition 4.4. To that end, we introduce path suspensions below in order to state Lemma 4.13, which characterizes boundary separation in terms of orthogonality.

Definition 4.12. We define the path suspension of a type A to be the pushout SA ≔ 2 +_{A×2} (A × I). The universal property of path suspension places functions SA → B in bijection with triples (x, y : B; p : A → B^I) where p is valued in paths from x to y.

Lemma 4.13. A type B is boundary separated if and only if it is orthogonal to the path suspension S∗ : S2 → S1 of the terminal map ∗ : 2 → 1.
Remark 4.14. Observe that a type is a proposition (subsingleton) if and only if it is orthogonal to the terminal map ∗ : 2 → 1. Boundary separation asserts that the path spaces with fixed endpoints are propositions. Our use of path suspension therefore corresponds precisely to the observation of Christensen et al. [2020] in the context of homotopy type theory that a type is separated with respect to a given class of maps if and only if it is local (orthogonal) with respect to their suspensions. Fiore and Rosolini [2001] equivalently describe boundary separation in terms of orthogonality to a different (but closely related) map; we have preferred here to make the connection with suspensions.

Lemma 4.15. Let a : A ⊢ B_a be a family of boundary separated types, and let f, g : (a : A) → B_a be a pair of dependent functions. Then f ⊑_{(a:A)→B_a} g if and only if for all a : A we have f a ⊑_{B_a} g a.

Synthetic Preorders
We now come to a suitable definition of "synthetic preorder" within ℰ.

Definition 4.16. A type A is called a synthetic preorder when it is both path-transitive and boundary separated, i.e. orthogonal to both δ : I ∨ I → I² and S∗ : S2 → S1.
The benefit of defining synthetic preorders in terms of (internal) orthogonality is that they automatically form a (full, internal) reflective subcategory of ℰ via Proposition 4.4; we specialize the general result below to the case of path-transitivity.
Corollary 4.17 (Synthetic preorder reflection). Assume that ℰ is a QWI-topos. For any type A we may define a synthetic preorder PA equipped with a map η_A : A → PA to which any synthetic preorder B is orthogonal. In other words, every type A has a reflection PA as a synthetic preorder.
Corollary 4.17 further implies that the full (internal) subcategory of ℰ spanned by synthetic preorders is cartesian closed, and moreover closed under dependent function spaces for families of synthetic preorders. It is also (internally) complete and cocomplete, with limits computed as in the ambient category and colimits computed by applying the synthetic preorder reflection to those of the ambient category. These results concerning (internally) orthogonal subcategories, among many others, can be found in the work of Rijke [2019] and Rijke et al. [2020].

Discrete Types
We introduce a notion of discrete types that provides a sufficient condition for being a synthetic preorder.
Definition 4.18. A type A is called discrete when the following equivalent conditions hold: (1) the type A is orthogonal to the map I → 1; (2) the type A is boundary separated and the path relation (⊑_A) is the diagonal, i.e. (⊑_A) ⊆ (=_A).
Proposition 4.19. Any discrete type A is a synthetic preorder.

A Synthetic Theory of Partially Discrete Preorders
Let ℰ be an elementary topos equipped with an interval object as in the previous section. In this section, we extend the axiomatics of ℰ to account for a phase distinction under which every synthetic preorder becomes discrete. Semantically, this corresponds to assuming an indeterminate proposition ¶ext : Ω such that the interval I is ¶ext-connected in the sense of Rijke et al. [2020], i.e. such that the proposition ¶ext → (I ≅ 1) holds or, equivalently, I is orthogonal to ⊥ → ¶ext. The purpose of this axiom is as follows: if I is ¶ext-connected in the above sense, any path I → A restricts to a constant function under ¶ext; thus we always have ¶ext ∧ (a ⊑_A b) → (a = b). Thus in the presence of this axiom, any synthetic preorder is "partially discrete" in the sense of being discrete as soon as ¶ext = ⊤.
We now summarize the axiomatics of synthetic partially discrete preorder theory for an arbitrary elementary QWI-topos ℰ.
Axiom 4.21. We assume a type I together with two elements 0, 1 : I, as well as a discrete type ℕ of natural numbers. The purpose of making ℕ a discrete type is to allow the motive of the elimination principle of inductive data types to range over arbitrary types, which enables us to compute program invariants in decalf using the dependent sum of the LF as mentioned in Section 1.5.

Well-Adapted Models
A priori, a cost bound in the theory of synthetic preorders has little to do with a cost bound in concrete preorders. However, in a model of synthetic preorders that embeds the concrete preorders, we may relate a synthetic cost bound to a more traditional account of cost bounds in concrete preorders. This is captured by the notion of well-adapted models. (1) We say that ℰ is well-adapted when there is a fully faithful cartesian closed functor Preord ↪ ℰ that preserves the interval object. This means that the actual interval [1] : Preord is sent to I and the points 0, 1 : I are determined by the maps 0, 1 : [0] → [1] respectively. (2) If moreover ℰ is a model of synthetic partially discrete preorder theory, we say ℰ is well-adapted if there is a functor satisfying the conditions above and whose image lies within the ¶ext-connected types, i.e. those that restrict under ¶ext to a singleton.
This terminology is inspired by an analogous situation in synthetic differential geometry, where a topos model ℰ is called well-adapted [Dubuc 1979] when there is an embedding of ordinary manifolds into ℰ preserving the differential-geometric structure. The canonical example of a well-adapted model of synthetic (partially discrete) preorder theory is (augmented) simplicial sets, for which the corresponding embedding is given by the nerve functor. Now, we discuss how to relate synthetic cost bounds to cost bounds in concrete preorders in well-adapted models.
Well-Adapted Models and Concrete Preorders

Let ℰ be a well-adapted model of synthetic preorder theory (Definition 4.25), and write i : Preord ↪ ℰ for the associated embedding. In this section, we describe how to instantiate the constructs of decalf in any model ℰ of synthetic partially discrete preorder theory in the sense of Definition 4.24. To construct our model, we fix universes U ∈ V in ℰ; judgments of decalf are interpreted in the outer universe V. We will write U_P ⊆ U for the subuniverse of U spanned by synthetic preorders, i.e. path-transitive and boundary separated types.

Cost Structure
The theory of decalf is parameterized by a cost monoid C : U_P that is ¶ext-connected, i.e. becomes a singleton when ¶ext is true. In a well-adapted model of synthetic preorder theory ℰ in the sense of Section 4.3, we may take an ordinary preordered monoid M and define C as the image of M under the full embedding Preord ↪ ℰ. An alternative method would be to define the cost monoid C as a quotient inductive type that builds in the expected order structure.
For instance, we may define the synthetic preorder ℕ, which can be thought of as the colimit of the inclusions of finite chains I₀ → I₁ → ⋯, by means of a quotient inductive type, shown in Fig. 13. The object ℕ as defined is ¶ext-connected because I is.

Monads for Effects
Let T be a (strong) monad on U. The monad T may not preserve the property of being a synthetic preorder, but we may adapt it to a monad T_P on U_P by postprocessing with the synthetic preorder reflection, sending A : U_P to P(TA). We can then adapt T_P to support a cost effect using the cost monad transformer corresponding to the writer monad C × −. In particular, we define a new monad T_C on U_P by the assignment T_C A ≔ T_P(C × A).
(1) For verifying pure code (as in calf), we can let T be the identity monad.
(2) For nondeterminism (as in Section 3.2.1), we can let T be the free semilattice monad, which can be defined by a quotient inductive type.
(3) For probabilistic choice (as in Section 3.2.2), we can let T be the free convex space monad, which is likewise (constructively) definable by a quotient inductive type.
(4) For global state (as in Section 3.2.3), we can let T be the state monad S → S × −.
As an alternative to defining T_C using the cost monad transformer as above, we can also treat the effect signatures of Section 3.2 as specifications of an algebraic effect, where for each c : C we have a generating operation step^c. In such cases, we can simply define T_C as the free monad for the effect signature using a quotient inductive type.
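One concrete reading of applying the C × − transformer to the free semilattice monad is to take T_C(A) to be finite sets of (cost, value) pairs; the following is a hedged Python sketch of this reading, with finite sets standing in for the quotient inductive construction.

```python
def ret(v):         return frozenset({(0, v)})
def fail():         return frozenset()               # nullary nondeterminism
def branch(e1, e2): return e1 | e2                   # semilattice join
def step(c, e):     return frozenset((c + k, v) for (k, v) in e)
def bind(e, f):     return frozenset((k1 + k2, v2)
                                     for (k1, v1) in e
                                     for (k2, v2) in f(v1))

e = branch(step(1, ret('a')), step(2, ret('b')))
# sequencing distributes over branches, accumulating cost along each one
assert bind(e, lambda v: step(3, ret(v.upper()))) == \
       branch(step(4, ret('A')), step(5, ret('B')))
assert branch(e, fail()) == e                        # fail is the unit
assert branch(e, e) == e                             # idempotence (semilattice)
```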
Remark 4.30. Although we have restricted our attention to Eilenberg-Moore models of decalf above, it is indeed possible and desirable to consider more general models. For example, our interpretation of global state in terms of algebras for the state monad is somewhat bizarre and uncanonical; this could be replaced by interpreting computation types as (C × −)-algebras, and then letting U(X) ≔ S → X and F(A) ≔ S × C × A.

Universes of Positive and Negative Types
We then define tp+ to be U_P itself, letting the decoding function tm+ : tp+ → Jdg send A : U_P to its image under the inclusion U_P ↪ V, which we shall leave implicit in our informal notations. We interpret tp⊖ by the type of T_C-algebras, where T_C is the application of the cost monad transformer to a monad T from Section 4.5.2, which we may equip, as we please, with the structure of the Eilenberg–Moore category. Then tm⊖(X) is interpreted the same as tm+(U(X)). Thus we have a free–forgetful adjunction F ⊣ U : tp⊖ → tp+ interpreting the call-by-push-value adjunctive structure of decalf.

Inequality Relation
For any type A : tp+ and elements a, b : tm+(A), the inequality a ≤ b is interpreted by the path relation a ⊑_{tm+(A)} b, which is transitive because A lies in U_P. The ≤_ext axiom is a consequence of the partial discreteness axiom (Axiom 4.22) that we have assumed of ℰ; the ≤_mono axiom holds by definition. The ≤_pi axiom holds by Lemma 4.15, as every synthetic preorder is boundary separated. Thus the path relation a ⊑_{tm+(A)} b is a proposition (and so a synthetic preorder) and may be internalized as a type of decalf.

A Presheaf Model of decalf in Augmented Simplicial Sets
We will construct a presheaf model of decalf by equipping a simplicial model of synthetic preorders with a phase distinction.

Simplicial Sets for Synthetic Preorders
To model synthetic preorders in a QWI-topos, we take a cue from higher category theory and consider simplicial sets, which are presheaves on the simplex category defined below.
Definition 4.31 (Simplex category). We write Δ for the simplex category, i.e. the category of inhabited finite ordinals [n] and order-preserving maps between them. By convention, [0] denotes the singleton ordinal, i.e. the terminal object of Δ.
What do simplicial sets have to do with preorders? Every preorder can be reconstructed by gluing simplices together in a canonical way; this is the density of the embedding i : Δ ↪ Preord, which implies that the corresponding nerve functor ν : Preord → Pr(Δ), sending a preorder P to the restricted hom presheaf hom_Preord(i−, P), is fully faithful. In this way, simplicial sets are an appropriate place to study concrete preorders; this is the content of a well-adapted model as defined in Section 4.3. The synthetic preorder theory of simplicial sets, then, studies sufficient conditions in the internal language for arbitrary simplicial sets to "behave like" those that arise from actual preorders via the nerve functor ν.
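The hom sets of Δ and the simplices of a nerve are finite enough to enumerate directly. The following Python sketch (our own, purely illustrative) counts the order-preserving maps [m] → [n] and computes the 1-simplices of the nerve of the "walking arrow" preorder 0 ≤ 1, which agree with the representable y[1] = I:

```python
from itertools import product
from math import comb

def monotone_maps(m, n):
    """All order-preserving maps [m] -> [n] between the ordinals
    {0,...,m} and {0,...,n}, represented as tuples of images."""
    return [f for f in product(range(n + 1), repeat=m + 1)
            if all(f[i] <= f[i + 1] for i in range(m))]

# Hom sets in the simplex category: |Hom([m],[n])| = C(m+n+1, m+1).
for m in range(4):
    for n in range(4):
        assert len(monotone_maps(m, n)) == comb(m + n + 1, m + 1)

# The nerve of a preorder P has as n-simplices the monotone maps
# [n] -> P, i.e. chains x0 <= x1 <= ... <= xn in P. Here P is the
# walking arrow 0 <= 1, so its nerve agrees with y[1] = I.
P = {(0, 0), (0, 1), (1, 1)}          # the order relation of P
def nerve(order, elements, n):
    return [f for f in product(elements, repeat=n + 1)
            if all((f[i], f[i + 1]) in order for i in range(n))]

assert nerve(P, [0, 1], 1) == monotone_maps(1, 1)
```

This is the "gluing simplices" picture in miniature: every simplex of the nerve is literally a chain in the underlying preorder.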
Theorem 4.32. The category Pr(Δ) of simplicial sets forms a non-trivial model of synthetic preorder theory in the sense of Definition 4.24 in which the interval is given by the representable presheaf y[1] and its two global points.

4.6.2 Augmented Simplicial Sets for Synthetic Partially Discrete Preorders
In fact, Pr(Δ) also forms a (highly degenerate) model of synthetic partially discrete preorder theory in the sense of Definition 4.24, setting ¶_ext ≔ ⊥. Our goal is to find a non-trivial model ℰ, i.e. one where the slice ℰ/¶_ext is not the terminal category. The most canonical choice for such a topos is obtained by freely extending Pr(Δ) with a maximal topos-theoretic point by forming an "inverted Sierpiński cone", i.e. the Artin gluing [Artin et al. 1972] of the constant presheaves functor Set → Pr(Δ). This gluing can also be presented equivalently by presheaves on a different category, as adding a maximal point to a presheaf topos corresponds (dually) to freely extending the base category by an initial object, which in the case of Δ amounts to the use of augmented simplicial sets. These two perspectives on the same topos are both useful, and both play a role in our results.
Definition 4.33 (Augmented simplex category). We write Δ_⊥ for the augmented simplex category, the free extension of Δ by an initial object [−1]. Concretely, Δ_⊥ can be thought of as the category of arbitrary finite ordinals and order-preserving maps between them; under this interpretation, [−1] corresponds to the empty ordinal.
Theorem 4.34. The category Pr(Δ_⊥) of augmented simplicial sets forms a non-trivial well-adapted model of synthetic partially discrete preorder theory in the sense of Definition 4.24 in which: (1) the interval is given by the representable presheaf I ≔ y[1]; and (2) the phase distinction is given by the representable subterminal presheaf ¶_ext ≔ y[−1].

Soundness of decalf
The following soundness theorem is a corollary of Theorem 4.34 via the description of algebra models of decalf in Section 4.5.
Theorem 4.35 (Soundness). For any of the notions of computational effect considered in Section 3.2, we have a non-trivial model of the decalf theory in Pr(Δ_⊥).

Conclusion
In this work, we presented decalf, an inequational extension of calf [Niu et al. 2022a] that supports precise and approximate bounds on the cost and effect structure of programs. Ab initio, the theory of decalf has been forged and guided by the pragmatic struggles encountered in cost analysis and program verification. Throughout the development, our guiding principle has been that a cost bound for an effectful program should be another effectful program.
In Section 3, we demonstrated this methodology through a variety of case studies. For pure, first-order algorithms, we were able to provide simple proofs of combined cost and correctness. Such proofs in decalf are more streamlined than their calf counterparts and can be carried out without reference to any separable notion of recurrence. Instead, the code itself serves the role of the recurrence, which we then solve for a closed form, either exactly using equality or loosely using inequality. Then, using the extensional modality, we were able to extract extensional equalities from both equality and inequality program bounds. For example, from the cost and correctness proofs of the various sorting algorithms, we may determine immediately that all the given sorting algorithms are extensionally equal. This approach scaled naturally to support more complex classes of programs, including higher-order programs with non-cost effects.
In Section 4, we justified this style of reasoning using the notion of a synthetic partially discrete preorder theory, a novel formulation of an intrinsic theory of preorders that smoothly integrates with the existing intension-extension phase distinction of calf. To obtain a model of this new theory, we drew inspiration from work in both directed type theory and synthetic domain theory and characterized the synthetic preorders using simple orthogonality conditions, which furnish a well-behaved subuniverse that supports the structures needed for workaday program verification.

Future Work
We view decalf as a starting point for deeper investigations. Here, we outline future directions.
Amortized analysis via coinduction Our approach draws inspiration from Grodin and Harper [2023], who perform amortized analyses in calf for programs whose only effect is cost. We anticipate that decalf could be used to perform a broader class of amortized analyses.
Advanced probabilistic reasoning In Section 3.2.2, we implement a simple probabilistic program and show an exact bound on its cost. However, many algorithms in practice do not have well-behaved distributions, so one may instead wish to analyze expected and high-probability bounds. We believe that the techniques presented here will scale to support such reasoning in future work.
Parallelism and more sophisticated effects As mentioned in Section 3.4, while parallelism is compatible with pure algorithms in decalf, we leave a proper theory of the interaction between parallelism and other effects to future work. Additionally, it would be useful to consider a semantics for decalf that goes beyond the simple, non-enriched algebraic effects we have considered here, allowing for constructs like control or unbounded recursion.
Abstraction and specification implementations One drawback of decalf, inherited from calf, is that one must compute the result of a computation via a "specification" implementation to give a cost bound. Unfortunately, this puts abstraction at odds with cost analysis: in order to export a bound on an abstract computation, one must also make the return value public. We observe a similarity to frameworks based on program logics, in which one sometimes verifies an effectful (there, imperative; here, costly) algorithm by first providing a functional specification (for instance, see the case study on list fold in Iris [Birkedal and Bizjak 2022]). We hope this tension can be resolved in future iterations of decalf.

Proof. A map S2 → A is a pair of paths between two fixed elements of A; a map S1 → A is a single path between two fixed elements. Thus orthogonality to the suspension S2 → S1 means precisely that any two paths between the same elements are equal, which is precisely suborthogonality to ∂ : 2 → I. □

Lemma 4.15. Let x : A ⊢ B be a family of boundary separated types, and let f, g : (x : A) → B be a pair of dependent functions. Then f ⊑_{(x:A)→B} g if and only if for all x : A we have f x ⊑_B g x.
Proof. The forward direction is trivial, and holds even for non-boundary-separated types. In the backwards direction, suppose that we have ∀x : A. f x ⊑_B g x; because B is boundary separated, paths therein are uniquely determined by their endpoints, so we may use the topos-valid principle of unique choice to obtain a family of paths p : (x : A) → I → B such that p x is in each case a path from f x to g x. By permuting arguments, we thus have a path from f to g, and so f ⊑_{(x:A)→B} g. □

Synthetic preorders are not closed under all dependent sum types; dependent sums can, however, be formed when the indexing type is discrete.

Lemma B.7 (Discretely indexed sums). Let A be a discrete type, and let x : A ⊢ B be a family of synthetic preorders. Then the dependent sum type (x : A) × B is a synthetic preorder.
Proof. Boundary separation is obviously preserved by dependent sum types, so we will consider only path-transitivity. We fix an orthogonal lifting scenario as follows:

Proof. Unfolding the definition of the path relation, we need to define a map I → νA whose endpoints are determined by νa and νb at 0 and 1, respectively. Using the fact that ν is fully faithful and preserves the interval object, it suffices to define a map p : [1] → A, which we may take to be given by a and b, since we have assumed that a ≤ b. To check that the map so defined has the correct endpoints, we compute along the boundary:

Lemma B.9. In the category Pr(Δ) of simplicial sets, Axiom 4.23 holds.
Proof. We need to show that N is internally orthogonal to the interval I = y[1]. This means we need to solve lifting problems of the following form: Because every presheaf can be defined as a colimit of representables, it suffices to solve the following lifting problem given a nonempty ordinal [n]: Noting that the nerve ν : Preord → Pr(Δ) restricts to y on Δ and sends the discrete preorder N to the natural numbers object of Pr(Δ), we further rewrite the problem as follows: Since the nerve preserves products and is fully faithful, it suffices to find lifts in Preord:

Proc. ACM Program. Lang., Vol. 8, No. POPL, Article 10. Publication date: January 2024.

Fig. 2. Recursive implementation of list insertion, the auxiliary function of insertion sort, instrumented with one cost per element comparison.

Fig. 3. Insertion sort algorithm, using the auxiliary insert function from Fig. 2. Cost is not directly instrumented here, but a call to isort counts comparisons through the implementation of insert.
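As a runnable analogue of the programs in Figs. 2 and 3 (the decalf originals are terms of the type theory; this Python sketch merely mimics their cost instrumentation), computations return a (cost, value) pair with one unit of cost per comparison:

```python
def insert(x, xs):
    """Insert x into sorted list xs, returning (cost, result) with one
    unit of cost per element comparison (cf. Fig. 2)."""
    if not xs:
        return (0, [x])
    if x <= xs[0]:                      # one comparison
        return (1, [x] + xs)
    cost, rest = insert(x, xs[1:])      # one comparison, then recur
    return (1 + cost, [xs[0]] + rest)

def isort(xs):
    """Insertion sort; cost is inherited from insert (cf. Fig. 3)."""
    if not xs:
        return (0, [])
    cost_rest, sorted_rest = isort(xs[1:])
    cost_ins, result = insert(xs[0], sorted_rest)
    return (cost_rest + cost_ins, result)

cost, result = isort([3, 1, 2])
assert result == [1, 2, 3]
n = 5
worst_cost, _ = isort(list(range(n, 0, -1)))
assert worst_cost == n * (n - 1) // 2   # the quadratic bound, attained
```

Here the code itself plays the role of the cost recurrence: the quadratic bound is read off from the instrumented program rather than from a separately extracted recurrence.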

Fig. 6. The usual implementation of list index lookup, instrumented with one cost per recursive call. If the desired index is out of bounds, the computation errors via the fail effect.
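A sketch of the lookup of Fig. 6 in the same style, with the fail effect modeled (our choice, not the paper's) by the value None:

```python
# Computations are (cost, value-or-None): one unit of cost per
# recursive call, and None signals the fail effect.

def lookup(xs, i):
    if not xs:
        return (0, None)                # fail: index out of bounds
    if i == 0:
        return (0, xs[0])
    cost, value = lookup(xs[1:], i - 1)
    return (1 + cost, value)            # one cost per recursive call

assert lookup([10, 20, 30], 2) == (2, 30)
assert lookup([10, 20, 30], 5) == (3, None)  # fails after 3 calls
```

Note that a failing computation still carries a cost: the bound on an effectful program accounts for the effect as well as the result.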

Fig. 8. Implementation of sublist, an algorithm to compute a random sublist of an input list, where one unit of cost is incurred for each ::-node in the output list.
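The sublist example illustrates the paper's central slogan: the cost bound of a probabilistic program is itself a probabilistic program. In the following sketch (our modeling choices: distributions as dictionaries of exact rationals, each element kept with probability 1/2), the induced cost distribution is binomial:

```python
from fractions import Fraction
from math import comb

# Distributions as {outcome: probability}; a probabilistic, costful
# computation returns a distribution over (cost, value) pairs.

def d_unit(x): return {x: Fraction(1)}

def d_bind(dist, k):
    out = {}
    for x, p in dist.items():
        for y, q in k(x).items():
            out[y] = out.get(y, Fraction(0)) + p * q
    return out

def flip(): return {True: Fraction(1, 2), False: Fraction(1, 2)}

def sublist(xs):
    """Random sublist of xs; one unit of cost per cons in the output."""
    if not xs:
        return d_unit((0, ()))
    return d_bind(flip(), lambda keep:
           d_bind(sublist(xs[1:]), lambda cv:
           d_unit((cv[0] + 1, (xs[0],) + cv[1]) if keep else cv)))

# The induced cost distribution is Binomial(n, 1/2):
n = 4
costs = {}
for (cost, _), p in sublist(tuple(range(n))).items():
    costs[cost] = costs.get(cost, Fraction(0)) + p
assert all(costs[k] == Fraction(comb(n, k), 2 ** n) for k in range(n + 1))
```

Marginalizing out the value yields exactly the distribution of costs, i.e. a probabilistic program that serves as the cost bound.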

Fig. 11. Implementation of the twice function, which takes a suspended computation as input, runs it twice, and adds the results. No cost is instrumented explicitly, but the suspended computation may incur cost (and/or other effects).
Fig. 12. Implementation of the map function on lists, which applies a suspended function elementwise to an input list. No cost is instrumented explicitly, but the applications of the suspended function may incur cost (and/or other effects).
Definition 4.5 (Partial order on the interval). We have an embedding [−] : I ↪ Ω sending i : I to the proposition (i = 1), which creates a partial order i →_I j ⇔ ([i] → [j]) on I.
Definition 4.6 (Finite chains). For each finite ordinal n, we may define an auxiliary figure I_n classifying "chains" of length n in I, setting I_n to be the subobject of I^n spanned by vectors (i_0 →_I · · · →_I i_{n−1}). In particular, we have I_0 = 1, I_1 = I, and I_2 = {i, j : I | i →_I j}.
4.2.1 Paths and Path-Transitivity
Definition 4.7 (Paths). A path in a type A from an element a : A to b : A is defined to be a function p : I → A such that p0 = a and p1 = b.
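In the simplest concrete model, where the interval I is the two-element chain 0 →_I 1, the chain classifiers I_n of Definition 4.6 can be enumerated directly (a sketch under that modeling assumption; names are ours):

```python
from itertools import product

# Model I as the two-element chain {0, 1} with 0 ->_I 1, and enumerate
# the chain classifiers I_n of Definition 4.6.

def arrow(i, j):
    """The relation i ->_I j, i.e. the proposition [i] implies [j]."""
    return (i == 1) <= (j == 1)     # boolean implication

def I_chains(n):
    """I_n: vectors (i0, ..., i_{n-1}) with i0 ->_I ... ->_I i_{n-1}."""
    return [v for v in product([0, 1], repeat=n)
            if all(arrow(v[k], v[k + 1]) for k in range(n - 1))]

assert I_chains(0) == [()]                      # I_0 = 1
assert I_chains(1) == [(0,), (1,)]              # I_1 = I
assert I_chains(2) == [(0, 0), (0, 1), (1, 1)]  # I_2 = {i ->_I j}
```

The three elements of I_2 correspond to the three monotone maps [1] → [1], matching the simplicial picture of the interval as y[1].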
Axiom 4.22. The interval object I of Axiom 4.21 is ¶_ext-connected.
Axiom 4.23. The natural numbers object N is discrete in the sense of Definition 4.18.
Definition 4.24. Let ℰ be an elementary QWI-topos. (1) We say that ℰ is a model of synthetic preorder theory when it satisfies Axioms 4.21 and 4.23. (2) If ℰ additionally satisfies Axioms 4.20 and 4.22, then we call it a model of synthetic partially discrete preorder theory.
Corollary 4.27 (Completeness of synthetic cost bounds). Let A, B : Preord be concrete preorders. Given monotone maps f, g : A → B such that f ≤ g in the pointwise order, the synthetic preorder relation νf ⊑_{νA→νB} νg holds in ℰ. Proof. By Theorem 4.26 and the fact that ν is cartesian closed. □
Theorem 4.28. Let (A, ≤) be a concrete preorder. If u ⊑_{νA} v holds in ℰ, then there exist a, b : A such that a ≤ b and u = νa and v = νb.
Corollary 4.29 (Soundness of synthetic cost bounds). Let A, B : Preord be concrete preorders. Given maps u, v : νA → νB such that the synthetic preorder relation u ⊑_{νA→νB} v holds in ℰ, there exist f, g : A → B such that f ≤ g in the pointwise order and u = νf and v = νg. Proof. By Theorem 4.28 and the fact that ν is cartesian closed. □
4.5 Algebra Models of decalf in Synthetic Partially Discrete Preorder Theory
Fig. 13. Quotient inductive type defining the cost structure C.
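Corollaries 4.27 and 4.29 identify the synthetic order on νA → νB with the pointwise order on monotone maps A → B. For finite chains, the pointwise order is small enough to enumerate and sanity-check directly (an illustrative finite check, not a proof of the corollaries):

```python
from itertools import product

def monotone_maps(a, b):
    """Monotone maps from the a-element chain to the b-element chain,
    represented as tuples of images."""
    return [f for f in product(range(b), repeat=a)
            if all(f[i] <= f[i + 1] for i in range(a - 1))]

def pointwise_le(f, g):
    """The pointwise order of Corollaries 4.27/4.29."""
    return all(x <= y for x, y in zip(f, g))

maps = monotone_maps(3, 2)
assert len(maps) == 4   # (0,0,0), (0,0,1), (0,1,1), (1,1,1)

# The pointwise order is a preorder on the function space:
assert all(pointwise_le(f, f) for f in maps)
assert all(pointwise_le(f, h)
           for f in maps for g in maps for h in maps
           if pointwise_le(f, g) and pointwise_le(g, h))
```

By completeness and soundness, these pointwise inequalities are exactly the synthetic inequalities νf ⊑ νg that hold in the model.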
Proofs of Main Theorems
Lemma 4.10 (Path-transitive types are preorders). If a type A is path-transitive in the sense of Definition 4.9, then the path relation ⊑_A is a preorder on A.
Proof. Let p : I → A and q : I → A with p1 = q0; we wish to construct a path from p0 to q1. With these data, we may define a map [p, q] : I ∨ I → A using the universal property of the pushout. Because A is orthogonal to the inclusion I ∨ I → I_2, we have a unique lift γ : I_2 → A; we define a new path r by restricting γ along the diagonal, and we compute: r0 = γ(0, 0) = p0 and r1 = γ(1, 1) = q1. □
Lemma 4.13. A type A is boundary separated if and only if it is orthogonal to the path suspension S∗ : S2 → S1 of the terminal map ∗ : 2 → 1.
a unique lift ψ : I_2 → (x : A) × B with φ = ψ ∘ δ. Because A is discrete, it follows that the restriction π_1 ∘ φ : I ∨ I → A is constant on some element a : A, and so the restriction π_2 ∘ φ is a non-dependent function I ∨ I → B. Therefore, it suffices to solve the following simpler orthogonal lifting problem: the lift above exists because we have assumed each B is path-transitive. □
Proposition 4.19. Any discrete type A is a synthetic preorder.
Proof. This follows from Lemma B.7, considering the dependent sum type (_ : A) × 1. □
B.4 Well-Adapted Models
Theorem 4.26. Let (A, ≤) be a concrete preorder. If a ≤ b, then νa ⊑_{νA} νb holds in ℰ.
this follows since for every nonempty ordinal [n] a map [n] × [1] → N is determined by a single k : N, and likewise a map [n] → N. □
Decalf is structured as a call-by-push-value type theory, in which types are classified into one of two categories. (1) Value, or positive, types are those whose elements are "pure data." These include finite sums and products, inductive types such as the natural numbers or lists of a value type, and suspensions of computations. (2) Computation, or negative, types are "active" and, in particular, may engender effects. The basic constructs are ret and bind, which respectively incorporate values as computations and sequence computations. Computation types include functions from positive to negative types, according to the intuition that functions may only be applied to values, and that doing so engenders a computation. However, functions may be turned into values by suspension.