
CHAD: Combinatory Homomorphic Automatic Differentiation

Published: 17 August 2022


Abstract

We introduce Combinatory Homomorphic Automatic Differentiation (CHAD), a principled, pure, provably correct define-then-run method for performing forward and reverse mode automatic differentiation (AD) on programming languages with expressive features. It implements AD as a compositional, type-respecting source-code transformation that generates purely functional code. This code transformation is principled in the sense that it is the unique homomorphic (structure preserving) extension to expressive languages of Elliott’s well-known and unambiguous definitions of AD for a first-order functional language. Correctness of the method follows by a (compositional) logical relations argument that shows that the semantics of the syntactic derivative is the usual calculus derivative of the semantics of the original program.

In their most elegant formulation, the transformations generate code with linear types. However, the code transformations can be implemented in a standard functional language lacking linear types: While the correctness proof requires tracking of linearity, the actual transformations do not. In fact, even in a standard functional language, we can get all of the type-safety that linear types give us: We can implement all linear types used to type the transformations as abstract types by using a basic module system.

In this article, we detail the method when applied to a simple higher-order language for manipulating statically sized arrays. However, we explain how the methodology applies, more generally, to functional languages with other expressive features. Finally, we discuss how the scope of CHAD extends beyond applications in AD to other dynamic program analyses that accumulate data in a commutative monoid.


1 INTRODUCTION

Automatic differentiation (AD) is a technique for transforming code that implements a function f into code that computes f’s derivative, essentially by using the chain rule for derivatives. Due to its efficiency and numerical stability, AD is the technique of choice whenever we need to compute derivatives of functions that are implemented as programs, particularly in high-dimensional settings. Optimization and Monte Carlo integration algorithms, such as gradient descent and Hamiltonian Monte Carlo methods, rely crucially on the calculation of derivatives. These algorithms are used in virtually every machine learning and computational statistics application, and the calculation of derivatives is usually the computational bottleneck. These applications explain the recent surge of interest in AD, which has resulted in the proliferation of popular AD systems such as TensorFlow [1], PyTorch [36], and Stan Math [9].

AD, roughly speaking, comes in two modes: forward mode and reverse mode. When differentiating a function \( \mathbb {R}^n\rightarrow \mathbb {R}^m \), forward mode tends to be more efficient if \( m\gg n \), while reverse mode generally is more performant if \( n\gg m \). As most applications reduce to optimization or Monte Carlo integration of an objective function \( \mathbb {R}^n\rightarrow \mathbb {R} \) with n very large (at the time of this article, on the order of \( 10^4 \)–\( 10^7 \)), reverse mode AD is in many ways the more interesting algorithm [5].

However, reverse AD is also more complicated to understand and implement than forward AD. Forward AD can be implemented as a structure-preserving program transformation, even on languages with complex features [38]. As such, it admits an elegant proof of correctness [21]. By contrast, reverse AD is only well understood as a compile-time source-code transformation that does not rely on using a runtime interpreter (also called define-then-run style AD) on limited programming languages, such as first-order functional languages. Typically, its implementations on more expressive languages that have features such as higher-order functions make use of interpreted define-by-run approaches. These approaches first build a computation graph during runtime, effectively evaluating the program until a straight-line first-order program is left, and then they perform automatic differentiation on this new program [9, 36]. Such approaches have two severe downsides. First, they can suffer from interpretation overhead. Second, the differentiated code cannot benefit as well from existing optimizing compiler architectures. As such, these AD libraries need to be implemented using carefully hand-optimized code that, for example, does not contain any common subexpressions. This implementation process is precarious and labour intensive. Furthermore, some whole-program optimizations that a compiler would detect go entirely unused in such systems.

Similarly, correctness proofs of reverse AD have taken a define-by-run approach or have relied on non-standard operational semantics, using forms of symbolic execution [2, 8, 31]. Most work that treats reverse AD as a source-code transformation does so by making use of complex transformations that introduce mutable state and/or non-local control flow [37, 44]. As a result, we are not sure whether and why such techniques are correct. Furthermore, AD applications (e.g., in machine learning) tend to be run on parallel hardware, which can be easier to target with purely functional code. Another approach has been to compile high-level languages to a low-level imperative representation first and then to perform AD at that level [22] using mutation and jumps. This approach has the downside that we might lose important opportunities for compiler optimizations, such as map-fusion and embarrassingly parallel maps, which we can exploit if we perform define-then-run AD on a high-level functional representation.

A notable exception to these define-by-run and non-functional approaches to AD is Elliott’s work [16], which presents an elegant, purely functional, define-then-run version of reverse AD. Unfortunately, the technique is limited to first-order programs over tuples of real numbers. The workshop paper [43] by Vytiniotis, Belov, Wei, Plotkin, and Abadi proposes two possible extensions of Elliott’s functional AD to accommodate higher-order functions. However, it does not address whether or why these extensions would be correct or establish a more general methodology for applying AD to languages with expressive features.

This article introduces Combinatory Homomorphic Automatic Differentiation (CHAD) and its proof of correctness. CHAD is based on the observation that Elliott’s work [16] has a unique structure preserving extension that lets us perform AD on various expressive programming language features. We see purely functional higher-order (parallel) array processing languages such as Accelerate [10] and Futhark [19] as particularly relevant platforms for the machine learning applications that AD tends to be used for. With that in mind, we detail CHAD when applied to higher-order functional programs over (primitive) arrays of reals. This article includes the following:

  • We introduce CHAD, a categorical perspective on AD, that lets us see AD as a uniquely determined homomorphic (structure-preserving) functor from the syntax of its source programming language (Section 3) to the syntax of its target language (Section 4).

  • We explain, from this categorical setting, precisely in what sense reverse AD is the “mirror image” of forward AD (Section 6).

  • We detail how this technique lets us define purely functional define-then-run reverse mode AD on a higher-order language (Section 7).

  • We present an elegant proof of semantic correctness of the resulting AD transformations, based on a semantic logical relations argument, demonstrating that the transformations calculate the derivatives of the program in the usual mathematical sense (Sections 5 and 8).

  • We show that the AD definitions and correctness proof are extensible to higher-order primitives such as a map-operation over our primitive arrays (Section 10).

  • We show how our techniques are readily implementable in standard functional languages to give purely functional, principled, semantically correct, compositional, define-then-run reverse mode AD (Section 9).

  • Finally, we place CHAD in a broader context and explain how it applies, more generally, to dynamic program analyses that accumulate information in a commutative monoid (Section 11).

We start by giving a high-level overview of the main insights and theorems in this article in Section 2.

For a review of the basics of AD, we refer the reader to References [5, 32]. We discuss recent related work studying AD from a programming languages perspective in Section 12.


2 KEY IDEAS

We start by providing a high-level overview of the article, highlighting the main insights and theorems underlying our contributions.

2.1 Aims of Automatic Differentiation

The basic challenge that automatic differentiation aims to solve is the following. We are given a program \( { x}:\mathbf {real}^n\vdash { t}:\mathbf {real}^m \) that takes an n-dimensional array of (floating point) real numbers as input and produces an m-dimensional array of reals as output. That is, t computes some mathematical function \( [\![ { t}]\!] :\mathbb {R}^n\rightarrow \mathbb {R}^m \). We want to transform the code of t to

  • a program \( \overrightarrow {\mathcal {D}}_{}({ t})_2 \) that computes the derivative \( D[\![ { t}]\!] :\mathbb {R}^n\rightarrow \underline{\mathbb {R}}^n\multimap \underline{\mathbb {R}}^m \), in the case of forward AD;

  • a program \( \overleftarrow {\mathcal {D}}_{}({ t})_2 \) that computes the transposed derivative \( {D[\![ { t}]\!] }^{t}:\mathbb {R}^n\rightarrow \underline{\mathbb {R}}^m\multimap \underline{\mathbb {R}}^n \), in the case of reverse AD.

Here, we write \( \underline{\mathbb {R}}^n \) for the space of (co)tangent vectors to \( \mathbb {R}^n \); we regard \( \underline{\mathbb {R}}^n \) as a commutative monoid under elementwise addition. We write ⊸ for a linear function type to emphasize that derivatives are linear in the sense of being monoid homomorphisms.
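These two typings can be illustrated on a concrete function. The following Haskell sketch (the names and encoding are ours, not the paper's implementation) computes both for f(x, y) = x·y, representing the linear type ⊸ as a plain Haskell function that happens to be a monoid homomorphism:

```haskell
-- Toy illustration (names ours): forward and reverse derivatives of f(x, y) = x * y.
f :: (Double, Double) -> Double
f (x, y) = x * y

-- Forward AD: the derivative acts on an incoming tangent,
-- D f(x, y)(dx, dy) = y*dx + x*dy.
fwdDeriv :: (Double, Double) -> ((Double, Double) -> Double)
fwdDeriv (x, y) = \(dx, dy) -> y * dx + x * dy

-- Reverse AD: the transposed derivative propagates a cotangent backwards,
-- (D f)^t (x, y)(v) = (y*v, x*v).
revDeriv :: (Double, Double) -> (Double -> (Double, Double))
revDeriv (x, y) = \v -> (y * v, x * v)
```

Note that `revDeriv (x, y) 1` yields the full gradient (y, x) in one pass, which is why reverse mode efficiently computes a row of the Jacobian.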

Furthermore, we have some more desiderata for these code transformations:

(1) we want these code transformations to be defined compositionally, so we can easily extend the source programming language we apply the transformations to with new primitives;

(2) we want these transformations to apply to a wide range of programming techniques, so we are not limited in our programming style even if we want our code to be differentiated;

(3) we want the transformations to generate purely functional code, so we can easily prove its correctness and deploy it on parallel hardware;

(4) we want the code size of \( \overrightarrow {\mathcal {D}}_{}({ t})_2 \) and \( \overleftarrow {\mathcal {D}}_{}({ t})_2 \) to grow linearly in the size of t, so we can apply the technique to large code-bases;

(5) we want the time complexity of \( \overrightarrow {\mathcal {D}}_{}({ t})_2 \) and \( \overleftarrow {\mathcal {D}}_{}({ t})_2 \) to be proportional to that of t and, generally, as low as possible; this means that we can use forward AD to efficiently compute a column of the Jacobian matrix of partial derivatives, while reverse AD efficiently computes a row of the Jacobian.

In this article, we demonstrate how the CHAD technique of automatic differentiation satisfies desiderata (1)–(4); we leave (5) to future work. It achieves this by taking seriously the mathematical structure of programming languages as freely generated categories and by observing that differentiation is compositional according to the chain rule.

2.2 The Chain Rule: Pairing and Sharing of Primals and Derivatives

To achieve desideratum (1) of compositionality, it is tempting to examine the chain rule, the key property of compositionality of derivatives. Given \( f:\mathbb {R}^n\rightarrow \mathbb {R}^m \), we write \( \begin{align*} \mathcal {T}_{}f:\;&\mathbb {R}^n\rightarrow \mathbb {R}^m\times (\underline{\mathbb {R}}^n\multimap \underline{\mathbb {R}}^m)\\ &x\;\; \mapsto (f(x), \; \;v\;\,\mapsto Df(x)(v)) \end{align*} \) for the function that pairs up the primal function value f(x) with the derivative Df(x) of f at x that acts on tangent vectors v. The chain rule then gives the following formula for the derivative of the composition f; g of f and g: \( \begin{equation*} \mathcal {T}_{}(f;g)(x) = (\mathcal {T}_{1}g(\mathcal {T}_{1}f(x)), \mathcal {T}_{2}f(x);\mathcal {T}_{2}g(\mathcal {T}_{1}f(x))), \end{equation*} \) where we write \( \mathcal {T}_{1}f\stackrel{\mathrm{def}}{=}\mathcal {T}_{}f;\pi _1 \) and \( \mathcal {T}_{2}f\stackrel{\mathrm{def}}{=}\mathcal {T}_{}f;\pi _2 \) for the first and second components of \( \mathcal {T}_{}f \), respectively. We make two observations:

(1) the derivative of the composition f; g does not only depend on the derivatives of g and f but also on the primal value of f;

(2) the primal value of f is used twice: once in the primal value of f; g and once in its derivative; we want to share these repeated subcomputations, to address desiderata (4) and (5).

Insight 1.

It is wise to pair up computations of primal function values and derivatives and to share computation between both if we want to calculate derivatives of functions compositionally and efficiently.
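Insight 1 can be rendered as a small Haskell sketch (our own toy encoding, not the paper's transformation): a function is represented by its primal paired with its derivative, and composition shares the primal of f between both components, exactly as in the chain rule above.

```haskell
-- Toy encoding (names ours) of T f: primal paired with the derivative.
-- The second component is assumed to be a linear map.
type Fwd a b = a -> (b, a -> b)

-- Chain rule with sharing: the primal f x is computed once and used both
-- for the primal of f;g and inside its derivative.
compFwd :: Fwd a b -> Fwd b c -> Fwd a c
compFwd f g = \x ->
  let (y, df) = f x
      (z, dg) = g y
  in (z, \v -> dg (df v))
```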

Similarly, we can pair up f’s transposed (adjoint) derivative Dft, which propagates not tangent vectors but cotangent vectors: \( \begin{align*} \mathcal {T}^*_{}f : \;&\mathbb {R}^n \rightarrow \mathbb {R}^m\times (\underline{\mathbb {R}}^m\multimap \underline{\mathbb {R}}^n)\\ &x\;\;\mapsto (f(x), \;\;v\;\,\,\mapsto {Df}^{t}(x)(v)). \end{align*} \) It then satisfies the following chain rule, which follows from the usual chain rule above together with the fact that (A; B)t = Bt; At for linear maps A and B (transposition is contravariant—note the resulting reversed order of \( \mathcal {T}^*_{2}f \) and \( \mathcal {T}^*_{2}g \) for reverse AD): \( \begin{equation*} \mathcal {T}^*_{}(f;g)(x) = (\mathcal {T}^*_{1}g(\mathcal {T}^*_{1}f(x)), \mathcal {T}^*_{2}g(\mathcal {T}^*_{1}f(x));\mathcal {T}^*_{2}f(x)). \end{equation*} \) Again, pairing and sharing the primal and (transposed) derivative computations is beneficial.
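The transposed chain rule admits the same kind of sketch (again our own toy encoding): composition now chains the transposed derivatives in reversed order, reflecting (A; B)t = Bt; At.

```haskell
-- Toy encoding (names ours) of T* f: primal paired with the transposed
-- derivative, which maps a cotangent of the output to one of the input.
type Rev a b = a -> (b, b -> a)

compRev :: Rev a b -> Rev b c -> Rev a c
compRev f g = \x ->
  let (y, ft) = f x        -- transposed derivative of f at x
      (z, gt) = g y        -- transposed derivative of g at f(x)
  in (z, \v -> ft (gt v))  -- reversed order: (A;B)^t = B^t ; A^t
```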

CHAD directly implements the operations \( \mathcal {T}_{} \) and \( \mathcal {T}^*_{} \) as source code transformations \( \overrightarrow {\mathcal {D}}_{} \) and \( \overleftarrow {\mathcal {D}}_{} \) on a functional language to implement forward and reverse mode AD, respectively. These code transformations are defined compositionally through structural induction on the syntax by exploiting the chain rules above combined with the categorical structure of programming languages.

2.3 CHAD on a First-order Functional Language

Here, we outline how CHAD looks when applied to programs written in a first-order functional language. We consider this material known, as it is essentially the algorithm of Reference [16]. However, we present it in terms of a λ-calculus rather than categorical combinators, by applying the well-known mechanical translations between both formalisms [13]. We hope that this presentation makes the algorithm easier to apply in practice.

We consider a source programming language (see Section 3) where we write τ, σ, ρ for types that are either statically sized arrays of n real numbers \( \mathbf {real}^n \) or tuples \( { \tau }\boldsymbol {\mathop {*}}{ \sigma } \) of types τ, σ. These types will be called first-order types in this section. We consider programs t of type σ in a typing context Γ = x1: τ1, …, xn: τn, where xi are identifiers. We write such typing of programs in a context as \( \Gamma \vdash { t}:{ \sigma } \). As long as our language has certain primitive operations (which we represent schematically), \( \begin{equation*} \frac {\Gamma \vdash { t}_1 : \mathbf {real}^{n_1}\quad \cdots \quad \Gamma \vdash { t}_k : \mathbf {real}^{n_k}}{\Gamma \vdash \mathsf {op}({ t}_1,\ldots ,{ t}_k) : \mathbf {real}^m}, \end{equation*} \) such as constants (as nullary operations), (elementwise) addition and multiplication of arrays, inner products, and certain non-linear functions like sigmoid functions, we can write complex programs by sequencing together such operations. Figure 1(a) and (b) give examples of programs we can write, where we write real for \( \mathbf {real}^1 \) and indicate shared subcomputations with let-bindings.

Fig. 1. Forward and reverse AD illustrated on simple first-order functional programs.

CHAD transforms the types and programs of this source language into types and programs of a suitably chosen target language (see Section 4) that is a superset of the source language. CHAD associates to each source language type τ types of

  • forward mode primal values \( \overrightarrow {\mathcal {D}}_{}({ \tau })_1 \);

    we define \( \overrightarrow {\mathcal {D}}_{}(\mathbf {real}^n)_1\stackrel{\mathrm{def}}{=}\mathbf {real}^n \) and \( \overrightarrow {\mathcal {D}}_{}({ \tau }\boldsymbol {\mathop {*}}{ \sigma })_1\stackrel{\mathrm{def}}{=}\overrightarrow {\mathcal {D}}_{}({ \tau })_1\boldsymbol {\mathop {*}}\overrightarrow {\mathcal {D}}_{}({ \sigma })_1 \); that is, for now \( \overrightarrow {\mathcal {D}}_{}({ \tau })_1={ \tau } \);

  • reverse mode primal values \( \overleftarrow {\mathcal {D}}_{}({ \tau })_1 \);

    we define \( \overleftarrow {\mathcal {D}}_{}(\mathbf {real}^n)_1\stackrel{\mathrm{def}}{=}\mathbf {real}^n \) and \( \overleftarrow {\mathcal {D}}_{}({ \tau }\boldsymbol {\mathop {*}}{ \sigma })_1\stackrel{\mathrm{def}}{=}\overleftarrow {\mathcal {D}}_{}({ \tau })_1\boldsymbol {\mathop {*}}\overleftarrow {\mathcal {D}}_{}({ \sigma })_1 \); that is, for now \( \overleftarrow {\mathcal {D}}_{}({ \tau })_1={ \tau } \);

  • forward mode tangent values \( \overrightarrow {\mathcal {D}}_{}({ \tau })_2 \);

    we define \( \overrightarrow {\mathcal {D}}_{}(\mathbf {real}^n)_2\stackrel{\mathrm{def}}{=}\underline{\mathbf {real}}^n \) and \( \overrightarrow {\mathcal {D}}_{}({ \tau }\boldsymbol {\mathop {*}}{ \sigma })_2\stackrel{\mathrm{def}}{=}\overrightarrow {\mathcal {D}}_{}({ \tau })_2\boldsymbol {\mathop {*}}\overrightarrow {\mathcal {D}}_{}({ \sigma })_2 \);

  • reverse mode cotangent values \( \overleftarrow {\mathcal {D}}_{}({ \tau })_2 \);

    we define \( \overleftarrow {\mathcal {D}}_{}(\mathbf {real}^n)_2\stackrel{\mathrm{def}}{=}\underline{\mathbf {real}}^n \) and \( \overleftarrow {\mathcal {D}}_{}({ \tau }\boldsymbol {\mathop {*}}{ \sigma })_2\stackrel{\mathrm{def}}{=}\overleftarrow {\mathcal {D}}_{}({ \tau })_2\boldsymbol {\mathop {*}}\overleftarrow {\mathcal {D}}_{}({ \sigma })_2 \).

The types \( \overrightarrow {\mathcal {D}}_{}({ \tau })_1 \) and \( \overleftarrow {\mathcal {D}}_{}({ \tau })_1 \) of primals are Cartesian types, which we can think of as denoting sets, while the types \( \overrightarrow {\mathcal {D}}_{}({ \tau })_2 \) and \( \overleftarrow {\mathcal {D}}_{}({ \tau })_2 \) are linear types that denote commutative monoids. That is, such linear types in our language need to have a commutative monoid structure (0, +). For example, \( \underline{\mathbf {real}}^n \) is the commutative monoid over \( \mathbf {real}^n \) with 0 the zero vector and (+) elementwise addition of vectors. Derivatives and transposed derivatives are then linear functions, that is, homomorphisms of this (0, +)-monoid structure. As we will see, we use the monoid structure to initialize and accumulate (co)tangents in the definition of CHAD.
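This monoid structure on (co)tangent types can be pictured in Haskell as follows (a sketch with our own class and method names; the paper's target language builds this structure into its linear types):

```haskell
-- Commutative monoid structure (0, +) on (co)tangent types (class name ours).
class CMonoid v where
  zeroV :: v
  plusV :: v -> v -> v

-- The reals with the zero vector and addition; real^n works the same way elementwise.
instance CMonoid Double where
  zeroV = 0
  plusV = (+)

-- Tuples of monoids are monoids componentwise, matching
-- D(tau * sigma)_2 = D(tau)_2 * D(sigma)_2.
instance (CMonoid a, CMonoid b) => CMonoid (a, b) where
  zeroV = (zeroV, zeroV)
  plusV (a1, b1) (a2, b2) = (plusV a1 a2, plusV b1 b2)
```

The `zeroV` element initializes cotangent accumulators, and `plusV` merges cotangent contributions arriving from different uses of a value.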

We extend these operations \( \overrightarrow {\mathcal {D}}_{} \) and \( \overleftarrow {\mathcal {D}}_{} \) to act not only on types but also on typing contexts Γ = x1: τ1, …, xn: τn to produce primal contexts and (co)tangent types: \( \begin{align*} \overrightarrow {\mathcal {D}}_{}({ x}_1:{ \tau }_1,\ldots ,{ x}_n:{ \tau }_n)_1&={ x}_1:\overrightarrow {\mathcal {D}}_{}({ \tau }_1)_1,\ldots , { x}_n:\overrightarrow {\mathcal {D}}_{}({ \tau }_n)_1 \qquad \overrightarrow {\mathcal {D}}_{}({ x}_1:{ \tau }_1,\ldots ,{ x}_n:{ \tau }_n)_2=\overrightarrow {\mathcal {D}}_{}({ \tau }_1)_2\boldsymbol {\mathop {*}}\cdots \boldsymbol {\mathop {*}}\overrightarrow {\mathcal {D}}_{}({ \tau }_n)_2\\ \overleftarrow {\mathcal {D}}_{}({ x}_1:{ \tau }_1,\ldots ,{ x}_n:{ \tau }_n)_1&={ x}_1:\overleftarrow {\mathcal {D}}_{}({ \tau }_1)_1,\ldots , { x}_n:\overleftarrow {\mathcal {D}}_{}({ \tau }_n)_1 \qquad \overleftarrow {\mathcal {D}}_{}({ x}_1:{ \tau }_1,\ldots ,{ x}_n:{ \tau }_n)_2=\overleftarrow {\mathcal {D}}_{}({ \tau }_1)_2\boldsymbol {\mathop {*}}\cdots \boldsymbol {\mathop {*}}\overleftarrow {\mathcal {D}}_{}({ \tau }_n)_2. 
\end{align*} \) To each program \( \Gamma \vdash { t}:{ \sigma } \), CHAD then associates programs calculating the forward mode and reverse mode derivatives \( \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}) \) and \( \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}) \), whose definitions make use of the list \( \overline{\Gamma } \) of identifiers that occur in Γ: \( \begin{align*} &\overrightarrow {\mathcal {D}}_{}(\Gamma)_1\vdash \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}) : \overrightarrow {\mathcal {D}}_{}({ \sigma })_1\boldsymbol {\mathop {*}} (\overrightarrow {\mathcal {D}}_{}(\Gamma)_2\multimap \overrightarrow {\mathcal {D}}_{}({ \sigma })_2)\\ &\overleftarrow {\mathcal {D}}_{}(\Gamma)_1\vdash \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}) : \overleftarrow {\mathcal {D}}_{}({ \sigma })_1\boldsymbol {\mathop {*}} (\overleftarrow {\mathcal {D}}_{}({ \sigma })_2\multimap \overleftarrow {\mathcal {D}}_{}(\Gamma)_2). \end{align*} \) Observing that each program t computes a smooth (infinitely differentiable) function \( [\![ { t}]\!] \) between Euclidean spaces, as long as all primitive operations \( \mathsf {op} \) are smooth, the key property that we prove for these code transformations is that they actually calculate derivatives:

Theorem A (Correctness of CHAD, Theorem 8.3).

For any well-typed program (where τi and σ are first-order types, i.e., \( \mathbf {real}^n \) and tuples of such types) \( \begin{equation*} { x}_1:{ \tau }_1,\ldots ,{ x}_n:{ \tau }_n\vdash {{ t}}:{ \sigma } \end{equation*} \) we have that \( [\![ \overrightarrow {\mathcal {D}}_{}({ t})]\!] =\mathcal {T}_{}[\![ { t}]\!] \;\text{ and }\;[\![ \overleftarrow {\mathcal {D}}_{}({ t})]\!] =\mathcal {T}^*_{}[\![ { t}]\!] . \)

Once we fix a semantics for the source and target languages, we can show that this theorem holds if we define \( \overrightarrow {\mathcal {D}}_{} \) and \( \overleftarrow {\mathcal {D}}_{} \) on programs using the chain rule. The proof works by plain induction on the syntax.

For example, we can correctly define reverse mode CHAD on a first-order language as follows (see Section 7): \( \begin{align*} &\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathsf {op}({ t}_1,\ldots ,{ t}_k)) &&\stackrel{\mathrm{def}}{=}&& \mathbf {let}\,\langle { x}_1, { x}_1^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}_1)\,\mathbf {in}\,\cdots \mathbf {let}\,\langle { x}_k, { x}_k^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}_k)\,\mathbf {in}\,\\ &&&&&\langle \mathsf {op}({ x}_1,\ldots ,{ x}_k), \underline{\lambda } \mathsf {v}.\,\mathbf {let}\,\mathsf {v}={D \mathsf {op}}^{t}({ x}_1,\ldots ,{ x}_k;\mathsf {v})\,\mathbf {in}\,{ x}_1^{\prime }{\bullet } (\mathbf {proj}_{1}\,{\mathsf {v}})+\cdots +{ x}_k^{\prime }{\bullet } (\mathbf {proj}_{k}\,{\mathsf {v}})\rangle \\ &\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ x}) && \stackrel{\mathrm{def}}{=}&& \langle { x}, \underline{\lambda } \mathsf {v}.\, \mathbf {coproj}_{\mathbf {idx}({ x}; \overline{\Gamma })\,}\,(\mathsf {v})\rangle \\ & \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {let}\,{ x}={ t}\,\mathbf {in}\,{ s}) && \stackrel{\mathrm{def}}{=}&& \mathbf {let}\,\langle { x}, { x}^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\,\mathbf {in}\, \mathbf {let}\,\langle { y}, { y}^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ s})\,\mathbf {in}\, \langle { y}, \underline{\lambda } \mathsf {v}.\,\mathbf {let}\,\mathsf {v}={ y}^{\prime }{\bullet } \mathsf {v}\,\mathbf {in}\, \mathbf {fst}\,\mathsf {v}+{ x}^{\prime }{\bullet } (\mathbf {snd}\,\mathsf {v}) \rangle \\ & \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\langle { t}, { s}\rangle) && \stackrel{\mathrm{def}}{=}&& \mathbf {let}\,\langle { x}, { x}^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\,\mathbf {in}\, \mathbf {let}\,\langle { y}, { y}^{\prime }\rangle =\overleftarrow {\mathcal 
{D}}_{\overline{\Gamma }}({ s})\,\mathbf {in}\, \langle \langle { x}, { y}\rangle , \underline{\lambda } \mathsf {v}.\,{ x}^{\prime }{\bullet } (\mathbf {fst}\,\mathsf {v}) + {{ y}^{\prime }{\bullet } (\mathbf {snd}\,\mathsf {v})}\rangle \\ & \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {fst}\,{ t}) &&\stackrel{\mathrm{def}}{=}&& \mathbf {let}\,\langle { x}, { x}^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\,\mathbf {in}\,\langle \mathbf {fst}\,{ x}, \underline{\lambda } \mathsf {v}.\,{ x}^{\prime }{\bullet } \langle \mathsf {v}, \underline{0}\rangle \rangle \\ & \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {snd}\,{ t}) && \stackrel{\mathrm{def}}{=}&& \mathbf {let}\,\langle { x}, { x}^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\,\mathbf {in}\,\langle \mathbf {snd}\,{ x}, \underline{\lambda } \mathsf {v}.\,{ x}^{\prime }{\bullet } \langle \underline{0}, \mathsf {v}\rangle \rangle . \end{align*} \) Here, we write \( \underline{\lambda } \mathsf {v}.\,{ t} \) for a linear function abstraction (merely a notational convention: it can simply be thought of as a plain function abstraction) and \( { t}{\bullet }{ s} \) for a linear function application of \( { t}:{ \tau }\multimap { \sigma } \) to the argument \( { s}:{ \tau } \) (which again can be thought of as a plain function application). Furthermore, given a program t of tuple type \( {{ \underline{\sigma } }_1}\boldsymbol {\mathop {*}}{\cdots }\boldsymbol {\mathop {*}}{{ \underline{\sigma } }_n} \), we write \( \mathbf {proj}_{i}\,{ t} \) for its ith projection of type \( { \underline{\sigma } }_i \). Similarly, given a program t of linear type \( { \underline{\sigma } }_i \), we write \( \mathbf {coproj}_{i}\,({ t}) \) for the ith coprojection ⟨0, …, 0, t, 0, …, 0⟩ of type \( {{ \underline{\sigma } }_1}\boldsymbol {\mathop {*}}{\cdots }\boldsymbol {\mathop {*}}{{ \underline{\sigma } }_n} \), and we write \( \mathbf {idx}({ x}_i; { x}_1,\ldots ,{ x}_n)=i \) for the index of an identifier in a list of identifiers.
Finally, \( {D \mathsf {op}}^{t} \) here is a linear operation that implements the transposed derivative of the primitive operation \( \mathsf {op} \). We note that we crucially need the commutative monoid structure on linear types to correctly define the reverse mode derivatives of programs that involve tuples (or n-ary operations for n ≠ 1). Intuitively, matrix transposition (of derivatives) flips the copying-deleting comonoid structure provided by tuples into the addition-zero monoid structure.
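To make the shape of these rules concrete, here is a toy reverse transformation for a tiny first-order expression language (entirely our own simplification, not the paper's target language): each subterm returns its primal paired with a transposed derivative mapping an incoming cotangent to a cotangent for the whole environment, accumulated with (+) just as in the rules above.

```haskell
-- Toy expression language over real-valued variables (indices into an environment).
data Expr = Var Int | Lit Double | Add Expr Expr | Mul Expr Expr

type Env = [Double]  -- primal environment; its cotangent is also a [Double]

-- Reverse mode, CHAD style: primal paired with a transposed derivative.
revAD :: Expr -> Env -> (Double, Double -> Env)
revAD (Lit c)   env = (c, \_ -> map (const 0) env)
revAD (Var i)   env =  -- cf. the coproj rule for variables
  (env !! i, \v -> [if j == i then v else 0 | j <- [0 .. length env - 1]])
revAD (Add a b) env =
  let (x, xt) = revAD a env
      (y, yt) = revAD b env
  in (x + y, \v -> zipWith (+) (xt v) (yt v))  -- fan-out becomes fan-in (+)
revAD (Mul a b) env =
  let (x, xt) = revAD a env
      (y, yt) = revAD b env
  in (x * y, \v -> zipWith (+) (xt (y * v)) (yt (x * v)))
```

For t = x0 * x1 + x0 in the environment [2, 3], the primal is 8 and applying the transposed derivative to the cotangent 1 yields the gradient [4, 2]. This naive sketch has no let rule and so cannot share subcomputations; CHAD's let rule exists precisely to provide that sharing.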

Insight 2.

In functional define-then-run reverse AD, we need to have a commutative monoid structure on types of cotangents to mirror the comonoid structure coming from tuples: copying fan-out in the original program gets translated into fan-in in the transposed derivative, for accumulating incoming cotangents. This leads to linear types of cotangents.

Furthermore, observe that CHAD pairs up primal and (co)tangent values and shares common subcomputations, as desired. We see that what CHAD achieves is a compositional efficient reverse mode AD algorithm that computes the (transposed) derivatives of a composite program in terms of the (transposed) derivatives \( {D \mathsf {op}}^{t} \) of the basic building blocks \( \mathsf {op} \). Finally, it does so in a way that satisfies desiderata (1)–(5).

For example, Figure 1(c) and (d) display the code that forward and reverse mode CHAD, respectively, generate for the source programs of Figure 1(a) and (b). This is the code that is actually generated by the CHAD code transformations in our Haskell implementation followed by some very basic simplifications that do not affect time complexity and whose only purpose here is to aid legibility. For more information about how exactly this code relates to the output one gets when applying the forward and reverse AD macros in this article to the source programs, see Appendix B.

2.4 Intermezzo: The Categorical Structure of CHAD

While this definition of CHAD on a first-order language follows straightforwardly from the mathematics of derivatives, it is not immediately clear how it should be extended to source languages with more expressive features such as higher-order functions. Indeed, we do not typically consider derivatives of higher-order functions in calculus. In fact, it is not even clear what a tangent or cotangent to a function type should be, or, for that matter, what a primal associated with a value of function type is. To solve this mystery, we employ some category theory.

Observe that the first-order source language we consider can be viewed as a category Syn with products (see Section 3): Its objects are types τ, σ, ρ and its morphisms \( { t}\in \mathbf {Syn}({ \tau },{ \sigma }) \) are programs \( { x}:{ \tau }\vdash { t}:{ \sigma } \) modulo standard βη-program equivalence (identities are given by variables and composition is done through let-bindings). This category is freely generated by the objects \( \mathbf {real}^n \) and morphisms \( \mathsf {op} \) in the sense that any consistent assignment of objects \( F(\mathbf {real}^n) \) and morphisms \( F(\mathsf {op}) \) in a category with products \( \mathcal {C} \) extends to a unique product preserving functor \( F:\mathbf {Syn}\rightarrow \mathcal {C} \).

Suppose that we are given a categorical model \( \mathcal {L}:\mathcal {C}^{op}\rightarrow \mathbf {Cat} \) of linear logic (a so-called locally indexed category—see, for example, Section 9.3.4 in Reference [29]), where we think of the objects and morphisms of \( \mathcal {C} \) as the semantics of Cartesian types and their programs and of the objects and morphisms of \( \mathcal {L} \) as the semantics of linear types and their programs. We observe that we can define categories \( \Sigma _{\mathcal {C}}\mathcal {L} \) and \( \Sigma _{\mathcal {C}}\mathcal {L}^{op} \) (their so-called Grothendieck constructions, or Σ-types, see Section 6) with objects that are pairs \( (A_1, A_2) \) with \( A_1 \) an object of \( \mathcal {C} \) and \( A_2 \) an object of \( \mathcal {L} \) and homsets \( \begin{align*} \Sigma _{\mathcal {C}}\mathcal {L}((A_1,A_2),(B_1,B_2))&\stackrel{\mathrm{def}}{=}\mathcal {C}(A_1,B_1)\times \mathcal {L}(A_1)(A_2, B_2)\cong \mathcal {C}(A_1,B_1\times (A_2\multimap B_2))\\ \Sigma _{\mathcal {C}}\mathcal {L}^{op}((A_1,A_2),(B_1,B_2))&\stackrel{\mathrm{def}}{=}\mathcal {C}(A_1,B_1)\times \mathcal {L}(A_1)(B_2, A_2)\cong \mathcal {C}(A_1,B_1\times (B_2\multimap A_2)). \end{align*} \) We prove that these categories have finite products, provided that some conditions are satisfied: namely that \( \mathcal {C} \) has finite products and \( \mathcal {L} \) has indexed finite biproducts (or, equivalently, has indexed finite products and is enriched over commutative monoids). Indeed, then \( \prod _{i\in I}(A_1^i, A_2^i) = (\prod _{i\in I}A_1^i, \prod _{i\in I}A_2^i) \). In other words, it is sufficient if our model of linear logic is biadditive. In particular, the categorical model of linear logic that we can build from the syntax of our target language for CHAD, \( \mathbf {LSyn}:\mathbf {CSyn}^{op}\rightarrow \mathbf {Cat} \), satisfies our conditions (in fact, it is the initial model that does so), so \( \Sigma _{\mathbf {CSyn}}\mathbf {LSyn} \) and \( \Sigma _{\mathbf {CSyn}}\mathbf {LSyn}^{op} \) have finite products. By the universal property of the source language Syn, we obtain a canonical definition of CHAD.
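The role of the biproduct (commutative monoid) structure can be seen concretely in a Haskell sketch of the reverse-mode Grothendieck construction (our own rendering, with the base category as Haskell functions and linear maps as plain functions): composing morphisms needs nothing special, but pairing two morphisms into a product must merge two incoming cotangent contributions, which is exactly where the monoid structure is required.

```haskell
import Data.Monoid (Sum (..))

-- A morphism (A1, A2) -> (B1, B2) of the opposite Grothendieck construction,
-- i.e. a map A1 -> B1 x (B2 -o A2) (names ours).
newtype RevHom a da b db = RevHom (a -> (b, db -> da))

composeRev :: RevHom a da b db -> RevHom b db c dc -> RevHom a da c dc
composeRev (RevHom f) (RevHom g) = RevHom $ \x ->
  let (y, ft) = f x
      (z, gt) = g y
  in (z, ft . gt)  -- cotangents flow backwards

-- Pairing into the product (B1 x C1, B2 x C2): the two cotangent contributions
-- to da are combined with <>, i.e. the (bi)additive structure on linear types.
pairRev :: Monoid da
        => RevHom a da b db -> RevHom a da c dc -> RevHom a da (b, c) (db, dc)
pairRev (RevHom f) (RevHom g) = RevHom $ \x ->
  let (y, ft) = f x
      (z, gt) = g x
  in ((y, z), \(v, w) -> ft v <> gt w)
```

Without a `Monoid` constraint on the cotangent type there is no way to write `pairRev`, mirroring the requirement that the model of linear logic be biadditive for these categories to have finite products.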

Theorem B (CHAD from a Universal Property, Corollary 7.1).

Forward and reverse mode CHAD are the unique structure preserving functors \( \begin{align*} &\overrightarrow {\mathcal {D}}_{}(-):\mathbf {Syn}\rightarrow \Sigma _{{\mathbf {CSyn}}}{\mathbf {LSyn}}&&\overleftarrow {\mathcal {D}}_{}(-):\mathbf {Syn}\rightarrow \Sigma _{{\mathbf {CSyn}}}{\mathbf {LSyn}}^{op} \end{align*} \) from the syntactic category Syn of the source language to the (opposite) Grothendieck construction of the target language \( \mathbf {LSyn}:\mathbf {CSyn}^{op}\rightarrow \mathbf {Cat} \) that send primitive operations \( \mathsf {op} \) to their derivative \( D \mathsf {op} \) and transposed derivative \( {D \mathsf {op}}^{t} \), respectively.

The definitions following from this universal property reproduce the definitions of CHAD that we have given so far. Intuitively, the linear types represent commutative monoids, implementing the idea that (transposed) derivatives are linear functions in the sense that Df(x)(0) = 0 and Df(x)(v + v′) = Df(x)(v) + Df(x)(v′). We have seen that this commutative monoid structure is important when writing down the definitions of AD as a source-code transformation.
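As a concrete illustration of this linearity (our example, not part of the formal development), consider multiplication, f(x, y) = x · y:

```latex
Df(x,y)(v,w) \;=\; y\,v + x\,w,
\qquad\qquad
{Df(x,y)}^{t}(u) \;=\; (y\,u,\; x\,u).
```

The derivative is linear in the tangents (v, w): it sends (0, 0) to 0 and sums of tangents to sums of outputs, and likewise the transposed derivative is linear in the cotangent u, though neither is linear in the primal point (x, y). It is exactly this commutative monoid homomorphism structure that the linear types make explicit.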

Seeing that a higher-order language can be viewed as a freely generated Cartesian closed category Syn, it is tempting to find a suitable target language such that ΣCSynLSyn and ΣCSynLSynop are Cartesian closed. Then, we can define CHAD on this higher-order language via Theorem B.

Insight 3.

To understand how to perform CHAD on a source language with language feature X (e.g., higher-order functions), we need to understand the categorical semantics of language feature X (e.g., categorical exponentials) in categories of the form \( \Sigma _\mathcal {C}\mathcal {L} \) and \( \Sigma _\mathcal {C}\mathcal {L}^{op} \). Giving sufficient conditions on a model of linear logic \( \mathcal {L}:\mathcal {C}^{op}\rightarrow \mathbf {Cat} \) for such a semantics to exist yields a suitable target language for CHAD as the initial such model LSyn: CSynopCat , with the definition of the algorithm falling from the universal property of the source language.

2.5 Cartesian Closure of ΣC L and ΣC L op and CHAD of Higher-order Functions

Having had this insight, we identify conditions on a locally indexed category \( \mathcal {L}:\mathcal {C}^{op}\rightarrow \mathbf {Cat} \) that are enough to guarantee that \( \Sigma _\mathcal {C}\mathcal {L} \) and \( \Sigma _\mathcal {C}\mathcal {L}^{op} \) are Cartesian closed (see Section 6).

Theorem C (Cartesian Closure of \( \Sigma _\mathcal {C}\mathcal {L} \) and \( \Sigma _\mathcal {C}\mathcal {L}^{op} \), Theorems 6.1 and 6.2). Suppose that a locally indexed category \( \mathcal {L}:\mathcal {C}^{op}\rightarrow \mathbf {Cat} \) supports (we are intentionally a bit vague here for the sake of legibility)

  • linear !(−)⊗(−)-types (copowers);

  • linear (−)⇒(−)-types (powers);

  • Cartesian (−)⊸(−)-types (types of linear functions);

  • linear biproduct types (or equivalently, linear (additive) product types and enrichment of \( \mathcal {L} \) over commutative monoids);

  • Cartesian tuple and function types.

Then, \( \Sigma _\mathcal {C}\mathcal {L} \) and \( \Sigma _\mathcal {C}\mathcal {L}^{op} \) are Cartesian closed with respective exponentials: \( \begin{align*} (A_1,A_2)\Rightarrow {\Sigma }_{\mathcal {C}}{\mathcal {L}} (B_1, B_2) = (A_1\Rightarrow B_1\times (A_2\multimap B_2), A_1\Rightarrow B_2) \\ (A_1,A_2)\Rightarrow {\Sigma }_{\mathcal {C}}{\mathcal {L}^{op}} (B_1, B_2) = (A_1\Rightarrow B_1\times (B_2\multimap A_2), {!}A_1\otimes _{} B_2). \end{align*} \)

In particular, if we extend our target language with (linear) powers, (linear) copowers and (Cartesian) function types, then LSyn: CSynopCat satisfies the conditions of Theorem C, so we can extend Theorem B to our higher-order source language. Specifically, we find the following definitions of CHAD for primals and (co)tangents to function types: \( \begin{align*} &\overrightarrow {\mathcal {D}}_{}({ \tau }\rightarrow { \sigma })_1\stackrel{\mathrm{def}}{=}\overrightarrow {\mathcal {D}}_{}({ \tau })_1\rightarrow \overrightarrow {\mathcal {D}}_{}({ \sigma })_1\boldsymbol {\mathop {*}}(\overrightarrow {\mathcal {D}}_{}({ \tau })_2\multimap \overrightarrow {\mathcal {D}}_{}({ \sigma })_2) && \overrightarrow {\mathcal {D}}_{}({ \tau }\rightarrow { \sigma })_2\stackrel{\mathrm{def}}{=}\overrightarrow {\mathcal {D}}_{}({ \tau })_1\rightarrow \overrightarrow {\mathcal {D}}_{}({ \sigma })_2\\ &\overleftarrow {\mathcal {D}}_{}({ \tau }\rightarrow { \sigma })_1\stackrel{\mathrm{def}}{=}\overleftarrow {\mathcal {D}}_{}({ \tau })_1\rightarrow \overleftarrow {\mathcal {D}}_{}({ \sigma })_1\boldsymbol {\mathop {*}}(\overleftarrow {\mathcal {D}}_{}({ \sigma })_2\multimap \overleftarrow {\mathcal {D}}_{}({ \tau })_2) && \overleftarrow {\mathcal {D}}_{}({ \tau }\rightarrow { \sigma })_2\stackrel{\mathrm{def}}{=}{!}\overleftarrow {\mathcal {D}}_{}({ \tau })_1\otimes _{}\overleftarrow {\mathcal {D}}_{}({ \sigma })_2. \end{align*} \) Interestingly, we see that for higher-order programs, the primal transformations are no longer the identity. Indeed, the primals \( \overrightarrow {\mathcal {D}}_{}({ \tau }\rightarrow { \sigma })_1 \) and \( \overleftarrow {\mathcal {D}}_{}({ \tau }\rightarrow { \sigma })_1 \) of the function type τσ store not only the primal function itself, but also its derivative with respect to its argument.
The other half of a function’s derivative, namely the derivative with respect to the context variables over which it closes, is stored in the tangent space \( \overrightarrow {\mathcal {D}}_{}({ \tau }\rightarrow { \sigma })_2 \) and cotangent space \( \overleftarrow {\mathcal {D}}_{}({ \tau }\rightarrow { \sigma })_2 \) of the function type τσ.
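To make this concrete at base type, the following Haskell sketch (our own illustration; the names D1Fun, squareP, and gradAt are not taken from the CHAD development) spells out what a reverse-mode primal at type real → real looks like, with a plain Double -> Double standing in for the linear function type:

```haskell
-- The reverse-mode primal of a function f : real -> real pairs the
-- original result with the transposed derivative of f at the argument.
type D1Fun = Double -> (Double, Double -> Double)

-- The primal a CHAD-style transformation would associate with
-- \x -> x * x: the value x*x together with the linear map v |-> 2*x*v.
squareP :: D1Fun
squareP x = (x * x, \v -> 2 * x * v)

-- The gradient at a point is recovered by running the stored
-- transposed derivative on the unit cotangent.
gradAt :: D1Fun -> Double -> Double
gradAt f x = snd (f x) 1
```

For instance, gradAt squareP 3 runs the stored transposed derivative of squaring at 3 on the cotangent 1, yielding the expected gradient 6.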

Insight 4.

A forward (respectively, reverse) mode primal to a function type τσ keeps track of both the function and its derivative with respect to its argument (respectively, transposed derivative). For reverse AD, a cotangent at function type τσ (to be propagated back to the enclosing context of the function) keeps track of the incoming cotangents v of type \( \overleftarrow {\mathcal {D}}_{}({ \sigma })_2 \) for each primal x of type \( \overleftarrow {\mathcal {D}}_{}({ \tau })_1 \) on which we call the function. We store these pairs (x, v) in the type \( {!}\overleftarrow {\mathcal {D}}_{}({ \tau })_1\otimes _{}\overleftarrow {\mathcal {D}}_{}({ \sigma })_2 \) (which we will see is essentially a quotient of a list of pairs of type \( \overleftarrow {\mathcal {D}}_{}({ \tau })_1\boldsymbol {\mathop {*}}\overleftarrow {\mathcal {D}}_{}({ \sigma })_2 \)). Less surprisingly, for forward AD, a tangent at function type τσ (propagated forward from the enclosing context of the function) consists of a function sending each argument primal of type \( \overrightarrow {\mathcal {D}}_{}({ \tau })_1 \) to the outgoing tangent of type \( \overrightarrow {\mathcal {D}}_{}({ \sigma })_2 \).

On programs, we obtain the following extensions of our definitions for reverse AD: \( \begin{align*} & \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\lambda { x}.\,{ t}) &\hspace{-20.0pt}\phantom{.}&\stackrel{\mathrm{def}}{=}\; \mathbf {let}\,{ y}=\lambda { x}.\,\overleftarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})\,\mathbf {in}\, \langle \lambda { x}.\,\mathbf {let}\,\langle { z}, { z}^{\prime }\rangle ={ y}\,{ x}\,\mathbf {in}\, \langle { z}, \underline{\lambda } \mathsf {v}.\,\mathbf {snd}\,({ z}^{\prime }{\bullet } \mathsf {v})\rangle , \underline{\lambda } \mathsf {v}.\,\mathbf {case}\,\mathsf {v}\,\mathbf {of}\,{!}{ x}\otimes _{}\mathsf {v}\rightarrow \mathbf {fst}\,((\mathbf {snd}\,({ y}\,{ x})){\bullet } \mathsf {v}) \rangle \\ &\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}\,{ s})&\hspace{-12.0pt}\phantom{.}&\stackrel{\mathrm{def}}{=}\; \mathbf {let}\,\langle { x}, { x}^{\prime }_{\text{ctx}}\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\,\mathbf {in}\, \mathbf {let}\,\langle { y}, { y}^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})\,\mathbf {in}\, \mathbf {let}\,\langle { z}, { x}^{\prime }_{\text{arg}}\rangle ={ x}\,{ y}\,\mathbf {in}\, \langle { z}, \underline{\lambda } \mathsf {v}.\,{ x}^{\prime }_{\text{ctx}}{\bullet } (!{ y}\otimes \mathsf {v}) + { y}^{\prime }{\bullet } ({ x}^{\prime }_{\text{arg}}{\bullet } \mathsf {v})\rangle . \end{align*} \) With regards to \( \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\lambda { x}.\,{ t}) \): suppose that (λx. t): τσ. 
Note then that we have Γ, x: τt: σ and hence we get for t’s derivative that \( \overleftarrow {\mathcal {D}}_{}(\Gamma)_1, { x}:\overleftarrow {\mathcal {D}}_{}({ \tau })_1 \vdash \overleftarrow {\mathcal {D}}_{\overline{\Gamma },x}(t) : \overleftarrow {\mathcal {D}}_{}({ \sigma })_1 \boldsymbol {\mathop {*}} (\overleftarrow {\mathcal {D}}_{}({ \sigma })_2 \multimap \overleftarrow {\mathcal {D}}_{}(\Gamma)_2 \boldsymbol {\mathop {*}} \overleftarrow {\mathcal {D}}_{}({ \tau })_2) \). Calling the transposed derivative function for t (z′ in the primal, snd (yx) in the dual) hence gives us both halves of the transposed derivative (the derivative with respect to the function argument and the context variables, that is) of the function; we then select the appropriate ones using projections. Similarly, in \( \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}\,{ s}) \) we extract the transposed derivative \( { x}^{\prime }_{\text{ctx}} \) of t with respect to the context variables from the cotangent of t and obtain the transposed derivative \( { x}^{\prime }_{\text{arg}} \) of t with respect to its function arguments from t’s primal. We combine these two halves of the transposed derivative with s’s transposed derivative (which we get from its cotangent) to get the correct transposed derivative for the function application ts.

2.6 Proving CHAD Correct

With these definitions in place, we turn to the correctness of the source-code transformations. To phrase correctness, we first need to construct a suitable semantics with an uncontroversial notion of semantic differentiation (see Section 5). We choose to work with a semantics in terms of the category Set of sets and functions,4 noting that any function \( f:\mathbb {R}^n\rightarrow \mathbb {R}^m \) has a unique derivative as long as f is differentiable. We will only be interested in this semantic notion of derivative of first-order functions for the sake of correctness of AD, and we will not concern ourselves with semantic derivatives of higher-order functions. We interpret the required linear types in the category CMon of commutative monoids and homomorphisms.

By the universal properties of the syntax, we obtain canonical, structure-preserving (homomorphic) functors \( [\![ -]\!] :\mathbf {Syn}\rightarrow \mathbf {Set} \), \( [\![ -]\!] :{\mathbf {CSyn}}\rightarrow \mathbf {Set} \) and \( [\![ -]\!] :{\mathbf {LSyn}}\rightarrow \mathbf {CMon} \) once we fix interpretations \( \mathbb {R}^n \) of realn and well-typed (differentiable) interpretations \( [\![ \mathsf {op}]\!] \) for each operation \( \mathsf {op} \). These functors define a concrete denotational semantics for our source and target languages.

Having constructed the semantics, we can turn to the correctness proof (of Section 8). Because calculus does not provide an unambiguous notion of derivative at function spaces, we cannot prove that the AD transformations correctly implement mathematical derivatives by plain induction on the syntax. Instead, we use a logical relations argument over the semantics.

Insight 5.

Once we show that the (transposed) derivatives of primitive operations \( \mathsf {op} \) are correctly implemented, correctness of (transposed) derivatives of all other programs follows from a standard logical relations construction over the semantics that relates a curve to its primal and (co)tangent curve. By the chain rule for (transposed) derivatives, all CHAD transformed programs respect the logical relations. By basic calculus results, CHAD therefore has to compute the (transposed) derivative.

In Section 8, we present an elegant high-level formulation of this correctness argument, using categorical logical relations techniques (subsconing). To make this argument accessible to a wider audience of readers, we present here a low-level description of the logical relations argument. The reader may note that these arguments look significantly different from the usual definitions of logical relations. That difference is caused by the non-standard Cartesian closed structure of \( \Sigma _\mathcal {C}\mathcal {L} \) and \( \Sigma _\mathcal {C}\mathcal {L}^{op} \) and the proof is entirely standard when viewed from the higher level of abstraction that subsconing gives us.

We first sketch the correctness argument for forward mode CHAD. By induction on the structure of types, writing (f, f′) for the product pairing of f and f′, we construct a logical relation Pτ on types τ as \( \begin{align*} P_{{ \tau }}&\subseteq (\mathbb {R}^d\Rightarrow [\![ { \tau }]\!])\times (\mathbb {R}^d\Rightarrow ([\![ \overrightarrow {\mathcal {D}}_{}({ \tau })_1]\!] \times (\underline{\mathbb {R}}^d\multimap [\![ \overrightarrow {\mathcal {D}}_{}({ \tau })_2]\!])))\\ P_{\mathbf {real}^n} &\stackrel{\mathrm{def}}{=}\left\lbrace (f, g)\mid f\text { is {differentiable}{} and } g=\mathcal {T}_{}f\right\rbrace \\ P_{\mathbf {1}} &\stackrel{\mathrm{def}}{=}\left\lbrace (x\mapsto (),x\mapsto ((), r\mapsto ()))\right\rbrace \\ P_{{ \tau }\boldsymbol {\mathop {*}}{ \sigma }}&\stackrel{\mathrm{def}}{=}\left\lbrace (((f, f^{\prime }),((g, g^{\prime }),x\mapsto r\mapsto (h(x)(r), h^{\prime }(x)(r)))))\mid (f,(g,h))\in P_{{ \tau }}, (f^{\prime },(g^{\prime },h^{\prime }))\in P_{{ \sigma }}\right\rbrace \\ P_{{ \tau }\rightarrow { \sigma }}&\stackrel{\mathrm{def}}{=}\big \lbrace (f,(g,h))\mid \forall (f^{\prime },(g^{\prime },h^{\prime }))\in P_{{ \tau }}.(x\mapsto f(x)(f^{\prime }(x)), (x\mapsto \pi _1 (g(x)(g^{\prime }(x))),\\ &\phantom{big\lbrace (f,(g,h))\mid \forall (f^{\prime },(g^{\prime },h^{\prime }))\in P_{{ \tau }}.\quad } {x}\mapsto {r}\mapsto (\pi _2 (g(x)(g^{\prime }(x))))(h^{\prime }(x)(r)) + h(x)(r)(g^{\prime }(x))))\in P_{{ \sigma }}\big \rbrace . \end{align*} \) We extend the logical relation to typing contexts Γ = x1: τ1, …, xn: τn as \( P_\Gamma \stackrel{\mathrm{def}}{=}P_{{ \tau }_1\boldsymbol {\mathop {*}}\cdots \boldsymbol {\mathop {*}}{ \tau }_n} \). Then, we establish the following fundamental lemma, which says that all well-typed source language programs t respect the logical relation.

Lemma 2.1.

For any source language program Γt: σ and any \( f:\mathbb {R}^d\rightarrow [\![ \Gamma ]\!] \), \( g:\mathbb {R}^d\rightarrow [\![ \overrightarrow {\mathcal {D}}_{}(\Gamma)_1]\!] \), \( h:\mathbb {R}^d\rightarrow \underline{\mathbb {R}}^d\multimap [\![ \overrightarrow {\mathcal {D}}_{}(\Gamma)_2]\!] \) such that (f, (g, h)) ∈ PΓ, we have that \( (f;[\![ { t}]\!] , (g; [\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})]\!] ; \pi _1, x\mapsto r\mapsto \pi _2([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})]\!] (g(x)))(h(x)(r))))\in P_{{ \sigma }} \).

The proof goes via induction on the typing derivation of t. The main remaining step in the argument is to note that any tangent vector at \( [\![ { \tau }_1\boldsymbol {\mathop {*}}\cdots \boldsymbol {\mathop {*}}{ \tau }_n]\!] \cong \mathbb {R}^N \), for first-order τi, can be represented by a curve \( \mathbb {R}\rightarrow [\![ { \tau }_1\boldsymbol {\mathop {*}}\cdots \boldsymbol {\mathop {*}}{ \tau }_n]\!] \).
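Concretely, a tangent vector v at a point a of \( \mathbb {R}^N \) can be represented by the straight-line curve

```latex
\gamma : \mathbb{R} \rightarrow \mathbb{R}^N,
\qquad
\gamma(t) \;=\; a + t\,v,
```

which satisfies γ(0) = a and γ′(0) = v; instantiating the fundamental lemma at such curves then yields correctness of the computed derivative at every point a and tangent v.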

Similarly, for reverse mode CHAD, we define, by induction on the structure of types, a logical relation Pτ on types τ (and, as before, we also define \( P_{\Gamma }=P_{{ \tau }_1\boldsymbol {\mathop {*}}\cdots \boldsymbol {\mathop {*}}{ \tau }_n} \) for typing contexts Γ = x1: τ1, …, xn: τn): \( \begin{align*} P_{{ \tau }}&\subseteq (\mathbb {R}^d\rightarrow [\![ { \tau }]\!])\times (\mathbb {R}^d\rightarrow ([\![ \overleftarrow {\mathcal {D}}_{}({ \tau })_1]\!] \times ([\![ \overleftarrow {\mathcal {D}}_{}({ \tau })_2]\!] \multimap \underline{\mathbb {R}}^d)))\\ P_{\mathbf {real}^n} &\stackrel{\mathrm{def}}{=}\big \lbrace (f, g)\mid f\text { is {differentiable}{} and } g=\mathcal {T}^*_{}f \big \rbrace \\ P_{\mathbf {1}}&\stackrel{\mathrm{def}}{=}\left\lbrace (x\mapsto (),x\mapsto ((),v\mapsto 0))\right\rbrace \\ P_{{ \tau }\boldsymbol {\mathop {*}}{ \sigma }}&\stackrel{\mathrm{def}}{=}\left\lbrace (((f, f^{\prime }),((g, g^{\prime }),{x}\mapsto {v}\mapsto h(x)(\pi _1 v)+h^{\prime }(x)(\pi _2 v))))\mid (f,(g,h))\in P_{{ \tau }}, (f^{\prime },(g^{\prime },h^{\prime }))\in P_{{ \sigma }}\right\rbrace \\ P_{{ \tau }\rightarrow { \sigma }}&\stackrel{\mathrm{def}}{=}\big \lbrace (f,(g,h))\mid \forall (f^{\prime },(g^{\prime },h^{\prime }))\in P_{{ \tau }}. (x\mapsto f(x)(f^{\prime }(x)), (x\mapsto \pi _1 (g(x)(g^{\prime }(x))),\\ &\phantom{\stackrel{\mathrm{def}}{=}\big \lbrace (f,(g,h))\mid \forall (f^{\prime },(g^{\prime },h^{\prime }))\in P_{{ \tau }}.\quad } {x}\mapsto {v}\mapsto h({x})({!}g^{\prime }(x)\otimes _{}v) +h^{\prime }(x)((\pi _2(g(x)(g^{\prime }(x))))v))\in P_{{ \sigma }} \big \rbrace . \end{align*} \) Then, we establish the following fundamental lemma.

Lemma 2.2.

For any source language program Γt: σ and any \( f:\mathbb {R}^d\rightarrow [\![ \Gamma ]\!] \), \( g:\mathbb {R}^d\rightarrow [\![ \overleftarrow {\mathcal {D}}_{}(\Gamma)_1]\!] \), \( h:\mathbb {R}^d\rightarrow [\![ \overleftarrow {\mathcal {D}}_{}(\Gamma)_2]\!] \multimap \underline{\mathbb {R}}^d \) such that (f, (g, h)) ∈ PΓ, we have that \( (f;[\![ { t}]\!] , (g; [\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})]\!] ; \pi _1,{x}\mapsto {v}\mapsto h(x)(\pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})]\!] (g(x)))(v))))\in P_{{ \sigma }} \).

The proof goes via induction on the typing derivation of t. Correctness follows from the fundamental lemma by observing that any cotangent vector at \( [\![ { \tau }_1\boldsymbol {\mathop {*}}\cdots \boldsymbol {\mathop {*}}{ \tau }_n]\!] \cong \mathbb {R}^N \), for first-order τi, can be represented by a curve \( \mathbb {R}\rightarrow [\![ { \tau }_1\boldsymbol {\mathop {*}}\cdots \boldsymbol {\mathop {*}}{ \tau }_n]\!] \).

We obtain our main theorem, Theorem A, but now for our CHAD algorithms applied to a higher-order source language.

2.7 A Practical Implementation in Haskell

Next, we address the practicality of our method (in Section 9). The code transformations we employ are not too daunting to implement and they are well behaved in the sense that the derivative code they generate grows linearly in the size of the original source code. However, the implementation of the required linear types presents a challenge. Indeed, types like !(−)⊗(−) and (−)⊸(−) are absent from languages such as Haskell and OCaml. Luckily, in this instance, we can implement them using abstract data types by using a (basic) module system:

Insight 6.

Under the hood, !τσ can consist of a list of values of type \( { \tau }\boldsymbol {\mathop {*}}{ \sigma } \). Its API ensures that the list order and the difference between xs ++ [(t, s), (t, s′)] and xs ++ [(t, s + s′)] (or xs and xs ++ [(t, 0)]) cannot be observed: as such, it is a quotient type. Meanwhile, τσ can be implemented as a standard function type τσ with a limited API that enforces that we can only ever construct linear functions: as such, it is a subtype.
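The following Haskell sketch illustrates the idea (the type and function names, and the Num-based elimination, are our simplifications, not the API of the reference implementation): hiding the constructors gives the required quotient and subtype behaviour.

```haskell
-- !tau (x) sigma as an abstract type: internally a list of pairs, but
-- if the constructor is kept private, clients can only build
-- singletons, take sums, and eliminate with a function that is linear
-- in its second argument, so the list order and groupings are
-- unobservable.
newtype Copower t s = Copower [(t, s)]

singleton :: t -> s -> Copower t s
singleton t s = Copower [(t, s)]

zeroC :: Copower t s
zeroC = Copower []

plusC :: Copower t s -> Copower t s -> Copower t s
plusC (Copower xs) (Copower ys) = Copower (xs ++ ys)

-- Elimination: map a pointwise-linear function over the pairs and sum
-- the results; this is the only way to consume a copower.
foldCopower :: Num m => (t -> s -> m) -> Copower t s -> m
foldCopower f (Copower xs) = sum [f t s | (t, s) <- xs]

-- tau -o sigma as an abstract type: an ordinary function under the
-- hood, but the exported operations only ever construct monoid
-- homomorphisms.
newtype LFun a b = LFun (a -> b)

lapp :: LFun a b -> a -> b
lapp (LFun f) = f

lcomp :: LFun a b -> LFun b c -> LFun a c
lcomp (LFun f) (LFun g) = LFun (g . f)

-- Scaling by a constant is linear, so it is a safe primitive.
lscale :: Double -> LFun Double Double
lscale c = LFun (c *)
```

In a real module, only the operations (and not the constructors Copower and LFun) would be exported, so the invariants cannot be violated from outside.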

This idea leads to our reference implementation of CHAD in Haskell (available at https://github.com/VMatthijs/CHAD), which generates perfectly standard simply typed functional code that is given a bit of extra type-safety with the linear types, implemented as abstract types. To illustrate what our method does in practice, we consider two programs of our higher-order source language that we may want to differentiate, shown in Figure 2(a) and (b). The forward and reverse mode derivatives that our CHAD implementation generates for these programs are listed in Figure 2(c) and (d), again modulo minor simplifications that aid legibility but have no significant runtime implications.5

Fig. 2.

Fig. 2. Forward and reverse AD illustrated on higher-order functional array processing programs. The parts of the programs that involve AD on higher-order functions are marked in blue. Observe that in (c) (respectively, (d)), the primal value associated with the function f from program (a) (respectively, (b)) computes both the original function f as well as its derivative (respectively, transposed derivative) with respect to its argument z (respectively, x2i). In (c), the tangent f′ to f is produced by propagating forwards the tangent x′ to the context variable x that f captures, by using f’s derivative with respect to x. This lets us correctly propagate forwards the contributions to ys′ from both f’s dependence on its argument z and on its context variable x. Dually, in (d), the cotangent to f, which we construct from the cotangent ys′, is consumed by propagating it backwards to the cotangent x1′ to the context variable x1 that f captures, by using f’s transposed derivative with respect to x1. Meanwhile, the adjoint x2′ is constructed using the part of the primal of f that captures f’s transposed derivative with respect to x2i.

In Section 9, we also phrase the correctness proof of the AD transformations in elementary terms, such that it holds in the applied setting where we use abstract types to implement linear types. We show that our correctness results are meaningful, as they make use of a denotational semantics that is adequate with respect to the standard operational semantics. Furthermore, to stress the applicability of our method, we show in Section 10 that it extends to higher-order (primitive) operations, such as map.

Finally (in Section 11), we zoom out and reflect on how this method generalizes. The crux of CHAD is in the following steps:

  • view the source language as a freely generated category Syn with some appropriate structure \( \mathcal {S} \) (such as Cartesian closure, coproducts, (co)inductive types, iteration), generated from objects realn and morphisms \( \mathsf {op} \);

  • find a suitable target language LSyn (with linear types arising from the effect of commutative monoids) for the translation such that ΣCSynLSyn and ΣCSynLSynop are categories with the structure \( \mathcal {S} \); in our experience, this is possible for most common choices of \( \mathcal {S} \) corresponding to programming language constructs;

  • then, by the universal property of Syn, we obtain unique structure preserving (homomorphic) functors \( \overrightarrow {\mathcal {D}}_{}: \mathbf {Syn}\rightarrow \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}} \) and \( \overleftarrow {\mathcal {D}}_{}:\mathbf {Syn}\rightarrow \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}}^{op} \) defining forward and reverse mode AD transformations, as soon as we fix their action on \( \mathsf {op} \) (and realn) to implement the derivative of the operations;

  • the correctness of these AD methods follows by a standard categorical logical relations argument as the subscones \( \overrightarrow {\mathbf {SScone}}_{} \) and \( \overleftarrow {\mathbf {SScone}}_{} \) tend to also be categories with the structure \( \mathcal {S} \) for most choices of \( \mathcal {S} \).

Insight 7.

The definition and correctness proof of forward and reverse AD on expressive programming languages follow automatically, by viewing the algorithms as structure preserving functors \( \overrightarrow {\mathcal {D}}_{}: \mathbf {Syn}\rightarrow \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}} \) and \( \overleftarrow {\mathcal {D}}_{}:\mathbf {Syn}\rightarrow \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}}^{op} \).

We conclude by observing that, in this sense, CHAD is not specific to automatic differentiation at all. We can choose different generators than realn and \( \mathsf {op} \) for Syn and different mappings of these generators under \( \overrightarrow {\mathcal {D}}_{} \) and \( \overleftarrow {\mathcal {D}}_{} \). Doing so lets CHAD derive various other dynamic program analyses that accumulate data in a commutative monoid, together with their correctness proofs by logical relations (see Section 11.3).

Skip 3λ-CALCULUS AS A SOURCE LANGUAGE FOR AUTOMATIC DIFFERENTIATION Section

3 λ-CALCULUS AS A SOURCE LANGUAGE FOR AUTOMATIC DIFFERENTIATION

As a source language for our AD translations, we can begin with a standard, simply typed λ-calculus that has ground types realn of statically sized6 arrays of n real numbers, for all \( n\in \mathbb {N} \), and sets \( \mathsf {Op}_{n_1,\ldots ,n_k}^m \) of primitive operations \( \mathsf {op} \) for all \( k, m, n_1,\ldots , n_k\in \mathbb {N} \). These operations will be interpreted as differentiable7 functions \( (\mathbb {R}^{n_1}\times \cdots \times \mathbb {R}^{n_k})\rightarrow \mathbb {R}^m \). Examples to keep in mind for \( \mathsf {op} \) include

  • constants \( \underline{c}\in \mathsf {Op}_{}^n \) for each \( c\in \mathbb {R}^n \), for which we slightly abuse notation and write c () as c ;

  • elementwise addition and product \( (+),(*)\!\in \!\mathsf {Op}_{n,n}^n \) and matrix-vector product \( (\star)\!\in \!\mathsf {Op}_{n\cdot m, m}^n \);

  • operations for summing all the elements in an array: \( \mathrm{sum}\in \mathsf {Op}_{n}^1 \);

  • some non-linear functions like the sigmoid function \( \varsigma \in \mathsf {Op}_{1}^1 \).

We intentionally present operations in a schematic way, as primitive operations tend to form a collection that is added to in a by-need fashion, as an AD library develops. The precise operations needed will depend on the applications. In statistics and machine learning applications, \( \mathsf {Op} \) tends to include a mix of multi-dimensional linear algebra operations and mostly one-dimensional non-linear functions. A typical library for use in machine learning would work with multi-dimensional arrays (sometimes called “tensors”). We focus here on one-dimensional arrays as the issues of how precisely to represent the arrays are orthogonal to the concerns of our development.
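For reference, here is a minimal Haskell rendering of interpretations for a few of these schematic operations (a sketch of ours, with arrays simplified to lists and function names chosen for illustration):

```haskell
-- Elementwise addition and multiplication on real arrays,
-- i.e., (+), (*) in Op_{n,n}^n.
ewAdd, ewMul :: [Double] -> [Double] -> [Double]
ewAdd = zipWith (+)
ewMul = zipWith (*)

-- Matrix-vector product in Op_{n*m,m}^n, with the n*m matrix stored
-- row-major as a flat list of n rows of length m.
matVec :: Int -> [Double] -> [Double] -> [Double]
matVec m a x = [sum (zipWith (*) row x) | row <- rows a]
  where
    rows [] = []
    rows ys = take m ys : rows (drop m ys)

-- Summing all elements of an array: sum in Op_n^1.
sumOp :: [Double] -> Double
sumOp = sum

-- The sigmoid function, a typical non-linear primitive in Op_1^1.
sigmoid :: Double -> Double
sigmoid x = 1 / (1 + exp (negate x))
```
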

The types τ, σ, ρ and terms t, s, r of our AD source language are as follows: \( \begin{align*} \begin{array}{llll} { \tau }, { \sigma }, { \rho } & ::=& & \qquad \text{types} \\ &\mathrel {\vert }& \mathbf {real}^n & \qquad \text{real arrays}\\ &\mathrel {\vert }\quad \, & \mathbf {1}& \qquad \text{nullary product}\\ &&&\\ { t}, { s}, { r} & ::=& & \qquad \text{terms} \\ & & { x}& \qquad \text{identifier} \\ & \mathrel {\vert }& \mathbf {let}\,{ x}={ t}\,\mathbf {in}\,{ s} & \qquad \text{$\mathbf {let}$-bindings}\\ &\mathrel {\vert }& \mathsf {op}({ t}_1,\ldots ,{ t}_k) & \qquad \text{operations} \\ \end{array} \qquad \begin{array}{llll} &\mathrel {\vert }\quad \,& { \tau }_1\boldsymbol {\mathop {*}} { \tau }_2 & \qquad \text{binary product} \\ &\mathrel {\vert }& { \tau }\rightarrow { \sigma } & \qquad \text{function} \\ &&&\\ &&&\\ &\mathrel {\vert }& \langle \rangle \ \mathrel {\vert }\langle { t}, { s}\rangle & \qquad \text{product tuples}\\ & \mathrel {\vert }& \mathbf {fst}\,{{ t}}\ \mathrel {\vert }\mathbf {snd}\,{{ t}}& \qquad \text{product projections}\\ &\mathrel {\vert }& \lambda { x}.\,{ t}&\qquad \text{function abstraction}\\ &\mathrel {\vert }&{ t}\,{ s} & \qquad \text{function application} \end{array} \end{align*} \)

The typing rules are in Figure 3. We use the usual conventions for free and bound variables and write the capture-avoiding substitution of x with s in t as t[s/x]. We employ the usual syntactic sugar \( \lambda \langle { x}, { y}\rangle .\,{ t}\stackrel{\mathrm{def}}{=}\lambda { z}.\,{{ t}{}[^{\mathbf {fst}\,{ z}}\!/\!_{{ x}},^{\mathbf {snd}\,{ z}}\!/\!_{{ y}}]} \), and we write real for real1. As Figure 4 displays, we consider the standard βη-equational theory for our language, where equations hold on pairs of terms of the same type in the same context. We could consider further equations for our operations, but we do not as we will not need them.

Fig. 3.

Fig. 3. Typing rules for the AD source language.

Fig. 4.

Fig. 4. Standard βη-laws for products and functions. We write \( \stackrel{\# { x}_1,\ldots ,{ x}_n}{=} \) to indicate that the variables x1, …, xn need to be fresh in the left hand side. As usual, we only distinguish terms up to α-renaming of bound variables.

This standard λ-calculus is widely known to be equivalent to the free Cartesian closed category Syn generated by the objects realn and the morphisms \( \mathsf {op} \) (see Reference [27]).

  • Syn has types τ, σ, ρ as objects;

  • Syn has morphisms tSyn(τ, σ) that are in 1-1 correspondence with terms x: τt: σ up to βη-equivalence (which includes α-equivalence);

  • identities are represented by x: τx: τ;

  • composition of x: τt: σ and y: σs: ρ is represented by x: τlety = tins: ρ;

  • 1 and \( { \tau }\boldsymbol {\mathop {*}}{ \sigma } \) represent nullary and binary product, while τσ is the categorical exponential.

Syn has the following well-known universal property.

Proposition 3.1 (Universal Property of Syn).

For any Cartesian closed category \( (\mathcal {C},\mathbb {1},\times ,\Rightarrow) \), we obtain a unique Cartesian closed functor \( F:\mathbf {Syn}\rightarrow \mathcal {C} \), once we choose objects F(realn) of \( \mathcal {C} \) as well as, for each \( \mathsf {op}\in \mathsf {Op}^m_{n_1,\ldots ,n_k} \), make well-typed choices of \( \mathcal {C} \)-morphisms \( F(\mathsf {op}):(F(\mathbf {real}^{n_1})\times \cdots \times F(\mathbf {real}^{n_k}))\rightarrow F(\mathbf {real}^m). \)

Skip 4LINEAR λ-CALCULUS AS AN IDEALISED AD TARGET LANGUAGE Section

4 LINEAR λ-CALCULUS AS AN IDEALISED AD TARGET LANGUAGE

As a target language for our AD source code transformations, we consider a language that extends the language of Section 3 with limited linear types. We could opt to work with a full linear logic as in Reference [3] or Reference [6]. Instead, however, we will only include the bare minimum of linear type formers that we actually need to phrase the AD transformations. The resulting language is closely related to, but more minimal than, the Enriched Effect Calculus of Reference [14]. We limit our language in this way because we want to stress that the resulting code transformations can easily be implemented in existing functional languages such as Haskell or OCaml. As we discuss in Section 9, the idea will be to make use of a module system to implement the required linear types as abstract data types.

In our idealised target language, we consider linear types (aka computation types) τ , σ , ρ , in addition to the Cartesian types (value types) τ, σ, ρ that we have considered so far. We think of Cartesian types as denoting sets and linear types as denoting sets equipped with an algebraic structure. The Cartesian types will be used to represent sets of primals. The relevant algebraic structure on linear types, in this instance, turns out to be that of a commutative monoid, as this algebraic structure is needed to phrase automatic differentiation algorithms. Indeed, we will use the linear types to denote sets of (co)tangent vectors. These (co)tangents form a commutative monoid under addition.

Concretely, we extend the types and terms of our language as follows: \( \begin{align*} \begin{array}{llll} { \underline{\tau } }, { \underline{\sigma } }, { \underline{\rho } } & ::=& & \qquad \text{linear types}\\ &\mathrel {\vert }& \underline{\mathbf {real}}^n & \qquad \text{real array}\\ & \mathrel {\vert }& \underline{\mathbf {1}}& \qquad \text{unit type}\\ &&&\\ { \tau }, { \sigma }, { \rho } & ::=& & \qquad \text{Cartesian types} \\ &\mathrel {\vert }& \ldots & \qquad \text{as in Section~3}\\ &&&\\ { t}, { s}, { r} & ::=& & \qquad \text{terms} \\ &\mathrel {\vert }& \ldots & \qquad \text{as in Section~3} \\ & \mathrel {\vert }& \mathsf {v}& \qquad \text{linear identifier}\\ &\mathrel {\vert }& \mathbf {let}\,\mathsf {v}={ t}\,\mathbf {in}\,{ s} & \qquad \text{linear $\mathbf {let}$-binding}\\ \end{array} \qquad \begin{array}{llll} & \mathrel {\vert }& { \underline{\tau } }\boldsymbol {\mathop {*}} { \underline{\sigma } } & \qquad \text{binary product}\\ & \mathrel {\vert }& { \tau }\rightarrow { \underline{\sigma } } & \qquad \text{power}\\ &\mathrel {\vert }&{!}{ \tau }\otimes _{}{ \underline{\sigma } }& \qquad \text{copower}\\ &&&\\ &\mathrel {\vert }\quad \, & { \underline{\tau } }\multimap { \underline{\sigma } } & \qquad \text{linear function}\\ &&&\\ &&&\\ &\mathrel {\vert }& \mathsf {lop}({ t}_1,\ldots ,{ t}_k;{ s}) & \qquad \text{linear operation}\\ &\mathrel {\vert }\;\; & {!}{ t}\otimes _{}{ s}\,\mathrel {\vert }\mathbf {case}\,{ t}\,\mathbf {of}\,{!}{ y}\otimes _{}\mathsf {v}\rightarrow { s} & \qquad \text{copower intro/elim}\\ &\mathrel {\vert }& \underline{\lambda } \mathsf {v}.\,{{ t}}\,\mathrel {\vert }{ t}{\bullet } { s} & \qquad \text{abstraction/application}\\ &\mathrel {\vert }& \underline{0}\,\mathrel {\vert }{ t}+{ s} & \qquad \text{monoid structure.} \end{array} \end{align*} \) We work with linear operations \( \mathsf {lop}\in \mathsf {LOp}_{n_1,\ldots ,n_k; n^{\prime }_1,\ldots ,n^{\prime }_l}^{m_1,\ldots ,m_r} \), which are intended 
to represent differentiable functions \( \begin{equation*} (\mathbb {R}^{n_1}\times \cdots \times \mathbb {R}^{n_k}\times \mathbb {R}^{n^{\prime }_1}\times \cdots \times \mathbb {R}^{n^{\prime }_l})\rightarrow \mathbb {R}^{m_1}\times \cdots \times \mathbb {R}^{m_r} \end{equation*} \) that are linear (in the sense of respecting 0 and +) in the last l arguments but not in the first k. We write \( \begin{equation*} {\bf LDom} (\mathsf {lop})\stackrel{\mathrm{def}}{=}\mathbf {real}^{n^{\prime }_1}\boldsymbol {\mathop {*}}\cdots \boldsymbol {\mathop {*}} \mathbf {real}^{n^{\prime }_l} \qquad \text{and}\qquad {\bf CDom} (\mathsf {lop})\stackrel{\mathrm{def}}{=}\mathbf {real}^{m_1}\boldsymbol {\mathop {*}}\cdots \boldsymbol {\mathop {*}} \mathbf {real}^{m_r} \end{equation*} \) for \( \mathsf {lop}\in \mathsf {LOp}_{n_1,\ldots ,n_k; n^{\prime }_1,\ldots ,n^{\prime }_l}^{m_1,\ldots ,m_r} \). These operations can include dense and sparse matrix-vector multiplications, for example. Their purpose is to serve as primitives to implement derivatives \( D \mathsf {op}({ x}_1,\ldots ,{ x}_k;\mathsf {v}) \) and transposed derivatives \( {D \mathsf {op}}^{t}({ x}_1,\ldots ,{ x}_k;\mathsf {v}) \) of the operations \( \mathsf {op} \) from the source language as terms with free variables \( { x}_1,\ldots ,{ x}_k,\mathsf {v} \) that are linear in \( \mathsf {v} \). In fact, one can also opt to directly include primitive linear operations \( \begin{align*} &D \mathsf {op}\in \mathsf {LOp}_{n_1,\ldots ,n_k;n_1,\ldots ,n_k}^m &{D \mathsf {op}}^{t}\in \mathsf {LOp}_{n_1,\ldots ,n_k;m}^{n_1,\ldots ,n_k} \end{align*} \) in \( \mathsf {LOp} \) as the derivatives of each (Cartesian) operation \( \mathsf {op}\in \mathsf {Op} \).
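As a concrete illustration (ours, not part of the paper's formalism), the following Python sketch models a linear operation \( \mathsf {lop}(x;v) \) with one Cartesian argument x and one linear argument v, and spot-checks that it respects 0 and + in v but not in x:

```python
# Illustrative sketch: a "linear operation" lop(x; v), here elementwise
# multiplication by x**2. It is linear (respects 0 and +) in v, but not in x.

def lop(x, v):
    return [xi * xi * vi for xi, vi in zip(x, v)]

def vadd(u, w):
    return [a + b for a, b in zip(u, w)]

x = [1.0, 2.0, 3.0]
v = [0.5, -1.0, 2.0]
w = [1.5, 0.25, -0.5]

# Linearity in the linear argument: lop(x; v + w) = lop(x; v) + lop(x; w)
assert lop(x, vadd(v, w)) == vadd(lop(x, v), lop(x, w))
# Zero preservation in the linear argument: lop(x; 0) = 0
assert lop(x, [0.0, 0.0, 0.0]) == [0.0, 0.0, 0.0]
# No linearity in the Cartesian argument: lop(x + x; v) != lop(x; v) + lop(x; v)
assert lop(vadd(x, x), v) != vadd(lop(x, v), lop(x, v))
```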

In addition to the judgement \( \Gamma \vdash { t}: { \tau } \), which we encountered in Section 3, we now consider an additional judgement \( \Gamma ;\mathsf {v}:{ \underline{\tau } }\vdash { t}: { \underline{\sigma } } \). While we think of the former as denoting a function between sets, we think of the latter as a function from the set that Γ denotes to the set of monoid homomorphisms from the denotation of \( { \underline{\tau } } \) to that of \( { \underline{\sigma } } \).

Figures 3 and 5 display the typing rules of our language.


Fig. 5. Typing rules for the idealised AD target language with linear types, which we consider on top of the rules of Figure 3.

We consider the βη+-equational theory of Figures 4 and 6 for our language, where equations hold on pairs of terms of the same type in the same context. It includes βη-rules as well as commutative monoid and homomorphism laws.


Fig. 6. Equational rules for the idealised, linear AD language, which we use on top of the rules of Figure 4. In addition to standard βη-rules for !(−)⊗(−)- and ⊸-types, we add rules making (0, +) into a commutative monoid on the terms of each linear type as well as rules that say that terms of linear types are homomorphisms in their linear variable.


5 SEMANTICS OF THE SOURCE AND TARGET LANGUAGES

5.1 Preliminaries

5.1.1 Category Theory.

We assume familiarity with categories, functors, natural transformations, and their theory of (co)limits and adjunctions. We write:

  • unary, binary, and I-ary products as \( \mathbb {1} \), \( X_1\times X_2 \), and \( \prod _{i\in I}X_i \), writing \( \pi _i \) for the projections and \( () \), \( (x_1, x_2) \), and \( (x_i)_{i\in I} \) for the tupling maps;

  • unary, binary, and I-ary coproducts as \( \mathbb {0} \), \( X_1 + X_2 \), and \( \sum _{i\in I}X_i \), writing \( \iota _i \) for the injections and \( [] \), \( [x_1, x_2] \), and \( [x_i]_{i\in I} \) for the cotupling maps;

  • exponentials as \( Y^X \), writing \( \Lambda \) and \( \mathbf {ev} \) for currying and evaluation.

5.1.2 Commutative Monoids.

A monoid \( (|X|, 0_X, +_X) \) consists of a set |X| with an element \( 0_X\in |X| \) and a function \( (+_X):|X|\times |X|\rightarrow |X| \) such that \( 0_X +_X x = x = x +_X 0_X \) for any \( x\in |X| \) and \( x +_X (x^{\prime } +_X x^{\prime \prime }) = (x +_X x^{\prime }) +_X x^{\prime \prime } \) for any \( x, x^{\prime }, x^{\prime \prime }\in |X| \). A monoid \( (|X|, 0_X, +_X) \) is called commutative if \( x +_X x^{\prime } = x^{\prime } +_X x \) for all \( x, x^{\prime }\in |X| \). Given monoids X and Y, a function f: |X| → |Y| is called a homomorphism of monoids if \( f(0_X) = 0_Y \) and \( f(x +_X x^{\prime }) = f(x) +_Y f(x^{\prime }) \). We write CMon for the category of commutative monoids and their homomorphisms. We will frequently simply write 0 for \( 0_X \) and + for \( +_X \), if X is clear from context. We will sometimes write \( \sum _{i=1}^nx_i \) for \( (\cdots ((x_1 + x_2) + x_3)\cdots) + x_n \).
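The monoid laws and the homomorphism condition can be spot-checked concretely. The following Python sketch (representation and names are ours) encodes a commutative monoid as a zero element together with a binary operation, and tests the homomorphism condition on sample points:

```python
# A minimal sketch of a commutative monoid as (zero, plus), plus a
# sample-based homomorphism check.

def make_monoid(zero, plus):
    return {"zero": zero, "plus": plus}

# The commutative monoid (R, 0, +) of Example 5.1.
R = make_monoid(0.0, lambda a, b: a + b)

def is_hom(f, X, Y, samples):
    """Check f(0_X) = 0_Y and f(x +_X x') = f(x) +_Y f(x') on samples."""
    if f(X["zero"]) != Y["zero"]:
        return False
    return all(f(X["plus"](a, b)) == Y["plus"](f(a), f(b))
               for a in samples for b in samples)

double = lambda a: 2.0 * a   # a monoid homomorphism R -> R
square = lambda a: a * a     # not a homomorphism: (a + b)^2 != a^2 + b^2

assert is_hom(double, R, R, [0.0, 1.0, 2.5, -3.0])
assert not is_hom(square, R, R, [0.0, 1.0, 2.5, -3.0])
```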

Example 5.1.

The real numbers \( \underline{\mathbb {R}} \) form a commutative monoid with 0 and + equal to the number 0 and addition of numbers.

Example 5.2.

Given commutative monoids \( (X_i)_{i\in I} \), we can form the product monoid \( \prod _{i\in I}X_i \) with underlying set \( \prod _{i\in I}|X_i| \), \( 0=(0_{X_i})_{i\in I} \) and \( (x_i)_{i\in I}+(y_i)_{i\in I}\stackrel{\mathrm{def}}{=}(x_i+y_i)_{i\in I} \). Given a set I and a commutative monoid X, we can form the power monoid \( I\Rightarrow X\stackrel{\mathrm{def}}{=}\prod _{i\in I} X \) as the I-fold self-product monoid.

Example 5.2 gives the categorical product in CMon. We can, for example, construct a commutative monoid structure on any Euclidean space \( \underline{\mathbb {R}}^k \stackrel{\mathrm{def}}{=}\left\lbrace 0,\ldots , k-1\right\rbrace \Rightarrow \underline{\mathbb {R}} \) by combining the one on \( \underline{\mathbb {R}} \) with the power monoid structure.

Example 5.3.

Given commutative monoids \( (X_i)_{i\in I} \), we can form the coproduct monoid \( \sum _{i\in I}X_i \) with underlying set \( \left\lbrace (x_i)_{i\in I}\in \prod _{i\in I}{X_i}\mid \left\lbrace j\in I\mid x_j\ne 0_{X_j}\right\rbrace \text { is finite}\right\rbrace \), \( 0=(0_{X_i})_{i\in I} \) and \( (x_i)_{i\in I}+(y_i)_{i\in I}\stackrel{\mathrm{def}}{=}(x_i+y_i)_{i\in I} \). Given a set I and a commutative monoid X, we can form the copower monoid \( {!}I\otimes _{}X\stackrel{\mathrm{def}}{=}\sum _{i\in I} X \) as the I-fold self-coproduct monoid. We will often write \( {!}i\otimes _{}x\stackrel{\mathrm{def}}{=}(\text{if }j=i\text{ then }x\text{ else } 0_X)_{j\in I}\in {!}I\otimes _{}X \).

Example 5.3 gives the categorical coproduct in CMon.
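To make Examples 5.2 and 5.3 concrete, here is an illustrative Python sketch (representation ours): the product monoid as tuples under pointwise addition, and the copower \( {!}I\otimes _{}X \) as finitely supported maps I → X, stored sparsely as dictionaries:

```python
# Product monoid: pointwise addition of tuples.
def prod_plus(xs, ys):
    return tuple(x + y for x, y in zip(xs, ys))

# Copower !I (x) X: zero everywhere except at index i.
def copower_inject(i, x):
    return {i: x}

# Addition of finitely supported maps, keeping the support canonical.
def copower_plus(d1, d2):
    out = dict(d1)
    for i, x in d2.items():
        out[i] = out.get(i, 0.0) + x
        if out[i] == 0.0:
            del out[i]
    return out

# Power monoid structure on R^3 = {0,1,2} => R:
assert prod_plus((1.0, 2.0, 3.0), (0.5, 0.5, 0.5)) == (1.5, 2.5, 3.5)

# !i (x) x behaves like a one-hot insertion; sums accumulate sparsely:
assert copower_plus(copower_inject(0, 1.0), copower_inject(2, 3.0)) == {0: 1.0, 2: 3.0}
assert copower_plus(copower_inject(0, 1.0), copower_inject(0, 2.0)) == {0: 3.0}
```

The sparse dictionary representation is exactly what makes the finite-support condition on the coproduct computationally relevant.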

Example 5.4.

Given commutative monoids X and Y, we can form the commutative monoid \( X\multimap Y \) of homomorphisms from X to Y. We define \( |X\multimap Y|\stackrel{\mathrm{def}}{=}\mathbf {CMon}(X,Y) \), \( 0_{X\multimap Y}\stackrel{\mathrm{def}}{=}(x\mapsto 0_Y) \), and \( f+_{X\multimap Y} g\stackrel{\mathrm{def}}{=}(x\mapsto f(x)+_Y g(x)) \).

Example 5.4 gives the categorical internal hom in CMon. Commutative monoid homomorphisms \( {!}I\otimes _{}X\rightarrow Y \) are in 1-1-correspondence with functions \( I\rightarrow |X\multimap Y| \).

Finally, a category \( \mathcal {C} \) is called CMon-enriched if we have a commutative monoid structure on each homset \( \mathcal {C}(C,C^{\prime }) \) and function composition gives monoid homomorphisms \( \mathcal {C}(C,C^{\prime })\rightarrow \mathcal {C}(C^{\prime },C^{\prime \prime })\multimap \mathcal {C}(C,C^{\prime \prime }) \). In a category \( \mathcal {C} \) with finite products, these products are well-known to be biproducts (i.e., simultaneously products and coproducts) if and only if \( \mathcal {C} \) is CMon-enriched (for more details, see, for example, Reference [17]): define \( []\stackrel{\mathrm{def}}{=}0 \) and \( [f,g]\stackrel{\mathrm{def}}{=}\pi _1;f + \pi _2;g \) and, conversely, \( 0\stackrel{\mathrm{def}}{=}[] \) and \( f + g\stackrel{\mathrm{def}}{=}({\rm id}_{}, {\rm id}_{});[f,g] \).
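The biproduct identities can be checked on a small example. The sketch below (ours) forms the cotupling \( [f,g]=\pi _1;f+\pi _2;g \) of two monoid homomorphisms on \( \mathbb {R}\times \mathbb {R} \) and verifies the coproduct equations \( \iota _i;[f,g]=f,g \), which hold precisely because homomorphisms preserve 0:

```python
f = lambda x: 2.0 * x   # homomorphism R -> R
g = lambda y: 3.0 * y   # homomorphism R -> R

def cotuple(f, g):
    # [f, g](x, y) = f(x) + g(y), i.e. [f, g] = pi1;f + pi2;g.
    return lambda xy: f(xy[0]) + g(xy[1])

h = cotuple(f, g)
# iota_1 ; [f, g] = f: injecting (x, 0) recovers f, since g preserves 0.
assert h((1.5, 0.0)) == f(1.5)
# iota_2 ; [f, g] = g, symmetrically.
assert h((0.0, 2.0)) == g(2.0)
# And on a general point, [f, g] is the sum of the two components.
assert h((1.5, 2.0)) == f(1.5) + g(2.0)
```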

5.2 Abstract Denotational Semantics

By the universal property of Syn (Proposition 3.1), the language of Section 3 has a canonical interpretation in any Cartesian closed category \( (\mathcal {C},\mathbb {1},\times ,\Rightarrow) \), once we fix \( \mathcal {C} \)-objects \( [\![ \mathbf {real}^n]\!] \) to interpret \( \mathbf {real}^n \) and \( \mathcal {C} \)-morphisms \( [\![ \mathsf {op}]\!] \in \mathcal {C}([\![ \mathop {\rm Dom}\left(\mathsf {op}\right)]\!] ,[\![ \mathbf {real}^m]\!]) \) to interpret \( \mathsf {op}\in \mathsf {Op}_{n_1,\ldots ,n_k}^m \). That is, any Cartesian closed category with such a choice of objects and morphisms is a categorical model of the source language of Section 3. We interpret types τ and contexts Γ as \( \mathcal {C} \)-objects \( [\![ { \tau }]\!] \) and \( [\![ \Gamma ]\!] \): \( \begin{equation*} [\![ { x}_1:{ \tau }_1,\ldots ,{ x}_n:{ \tau }_n]\!] \stackrel{\mathrm{def}}{=}[\![ { \tau }_1]\!] \times \cdots \times [\![ { \tau }_n]\!] \quad [\![ \mathbf {1}]\!] \stackrel{\mathrm{def}}{=}\mathbb {1}\quad [\![ { \tau }\boldsymbol {\mathop {*}}{ \sigma }]\!] \stackrel{\mathrm{def}}{=}[\![ { \tau }]\!] \times [\![ { \sigma }]\!] \quad [\![ { \tau }\rightarrow { \sigma }]\!] \stackrel{\mathrm{def}}{=}[\![ { \tau }]\!] \Rightarrow [\![ { \sigma }]\!] . \end{equation*} \) We interpret terms \( \Gamma \vdash { t}: { \tau } \) as morphisms \( [\![ { t}]\!] \) in \( \mathcal {C}([\![ \Gamma ]\!] ,[\![ { \tau }]\!]) \): \( \begin{equation*} \begin{array}{lll} [\![ \mathsf {op}({ t}_1,\ldots ,{ t}_k)]\!] \stackrel{\mathrm{def}}{=}([\![ { t}_1]\!] ,\ldots ,[\![ { t}_k]\!]);[\![ \mathsf {op}]\!] \\ [\![ { x}_1:{ \tau }_1,\ldots ,{ x}_n:{ \tau }_n\vdash { x}_k:{ \tau }_k]\!] \stackrel{\mathrm{def}}{=}\pi _k\qquad \quad &[\![ \mathbf {let}\,{ x}={ t}\,\mathbf {in}\,{ s}]\!] \stackrel{\mathrm{def}}{=}({\rm id}_{}, [\![ { t}]\!]);[\![ { s}]\!] \\ [\![ \langle \rangle ]\!] \stackrel{\mathrm{def}}{=}()\qquad \quad & [\![ \langle { t}, { s}\rangle ]\!] \stackrel{\mathrm{def}}{=}([\![ { t}]\!]
, [\![ { s}]\!])\\ [\![ \mathbf {fst}\,{ t}]\!] \stackrel{\mathrm{def}}{=}[\![ { t}]\!] ;\pi _1\qquad \quad [\![ \mathbf {snd}\,{ t}]\!] \stackrel{\mathrm{def}}{=}[\![ { t}]\!] ;\pi _2 \qquad \quad & [\![ \lambda { x}.\,{ t}]\!] \stackrel{\mathrm{def}}{=}\Lambda ([\![ { t}]\!])\qquad \quad & [\![ { t}\,{ s}]\!] \stackrel{\mathrm{def}}{=}([\![ { t}]\!] , [\![ { s}]\!]);\mathbf {ev}^{}. \end{array} \end{equation*} \)

We discuss how to extend \( [\![ -]\!] \) to apply to the full target language of Section 4 by defining an appropriate notion of categorical model for the target language of Section 4.

Definition 5.5 (Categorical Model of the Target Language).

By a categorical model of the target language, we mean the following data:

  • A categorical model \( \mathcal {C} \) of the source language.

  • A locally indexed category (see, for example, Section 9.3.4 in Reference [29]) \( \mathcal {L}:\mathcal {C}^{op}\rightarrow \mathbf {Cat} \), i.e., a (strict) contravariant functor from \( \mathcal {C} \) to the category Cat of categories, such that \( \mathrm{ob}\,\mathcal {L}(C)=\mathrm{ob}\,\mathcal {L}(C^{\prime }) \) and \( \mathcal {L}(f)(L)=L \) for any object L of \( \mathrm{ob}\,\mathcal {L}(C) \) and any f: C′ → C in \( \mathcal {C} \).

  • \( \mathcal {L} \) is biadditive: each category \( \mathcal {L}(C) \) has (chosen) finite biproducts \( (\mathbb {1},\times) \) and \( \mathcal {L}(f) \) preserves them, for any f: C′ → C in \( \mathcal {C} \), in the sense that \( \mathcal {L}(f)(\mathbb {1})=\mathbb {1} \) and \( \mathcal {L}(f)(L\times L^{\prime })= \mathcal {L}(f)(L)\times \mathcal {L}(f)(L^{\prime }) \).

  • \( \mathcal {L} \) supports !(−)⊗(−)-types and ⇒-types: \( \mathcal {L}(\pi _1) \) has a left adjoint !C′⊗C − and a right adjoint functor C′⇒C − , for each product projection π1: C × C′ → C in \( \mathcal {C} \), satisfying a Beck-Chevalley condition: \( {!}C^{\prime }\otimes _{C}L={!}C^{\prime }\otimes _{C^{\prime \prime }}L \) and \( C^{\prime }\Rightarrow _C L=C^{\prime } \Rightarrow _{C^{\prime \prime }} L \) for any \( C,C^{\prime \prime }\in \mathrm{ob}\,\mathcal {C} \). We simply write !C′⊗L and C′⇒L. We write Φ and Ψ for the natural isomorphisms \( \mathcal {L}(C)({!}C^{\prime }\otimes _{}L, L^{\prime })\xrightarrow {\cong } \mathcal {L}(C\times C^{\prime })(L,L^{\prime }) \) and \( \mathcal {L}(C\times C^{\prime })(L,L^{\prime })\xrightarrow {\cong }\mathcal {L}(C)(L, C^{\prime }\Rightarrow L^{\prime }) \).

  • \( \mathcal {L} \) supports Cartesian ⊸-types: the functor \( \mathcal {C}^{op}\rightarrow \mathbf {Set} \); \( C\mapsto \mathcal {L}(C)(L,L^{\prime }) \) is representable for any objects L, L′ of \( \mathcal {L} \). That is, we have objects LL′ of \( \mathcal {C} \) with isomorphisms \( \underline{\Lambda }:\mathcal {L}(C)(L,L^{\prime }) \xrightarrow {\cong }\mathcal {C}(C,L\multimap L^{\prime }) \), natural in C.

  • \( \mathcal {L} \) interprets primitive types and operations: we have a choice \( [\![ \underline{\mathbf {real}}^n]\!] \in \mathrm{ob}\,\mathcal {L} \) to interpret \( \underline{\mathbf {real}}^n \) and, for each \( \mathsf {lop}\in \mathsf {LOp}_{n_1,\ldots ,n_k;n^{\prime }_1,\ldots , n^{\prime }_l}^{m_1,\ldots ,m_r} \), compatible \( \mathcal {L} \)-morphisms \( [\![ \mathsf {lop}]\!] \) in \( \mathcal {L}([\![ \mathbf {real}^{n_1}]\!] \times \cdots \times [\![ \mathbf {real}^{n_k}]\!])([\![ {\bf LDom} (\mathsf {lop})]\!] , [\![ {\bf CDom} (\mathsf {lop})]\!]) \).

In particular, any biadditive model of intuitionistic linear/non-linear logic [6, 17, 35] is such a categorical model, as long as we choose interpretations for primitive types and operations.

Next, we turn to the interpretation of our target language in such models, which gives an operational intuition of the different components of a categorical model. We can interpret linear types \( { \underline{\tau } } \) as objects \( [\![ { \underline{\tau } }]\!] \) of \( \mathcal {L} \): \( \begin{align*} & [\![ \underline{\mathbf {1}}]\!] \stackrel{\mathrm{def}}{=}\mathbb {1}\qquad [\![ { \underline{\tau } }\boldsymbol {\mathop {*}}{ \underline{\sigma } }]\!] \stackrel{\mathrm{def}}{=}[\![ { \underline{\tau } }]\!] \times [\![ { \underline{\sigma } }]\!] \qquad [\![ { \tau }\rightarrow { \underline{\sigma } }]\!] \stackrel{\mathrm{def}}{=}[\![ { \tau }]\!] \Rightarrow [\![ { \underline{\sigma } }]\!] \qquad [\![ {!}{ \tau }\otimes _{}{ \underline{\sigma } }]\!] \stackrel{\mathrm{def}}{=}{!}[\![ { \tau }]\!] \otimes _{}[\![ { \underline{\sigma } }]\!] . \end{align*} \) We can interpret \( { \underline{\tau } }\multimap { \underline{\sigma } } \) as the \( \mathcal {C} \)-object \( [\![ { \underline{\tau } }\multimap { \underline{\sigma } }]\!] \stackrel{\mathrm{def}}{=}[\![ { \underline{\tau } }]\!] \multimap [\![ { \underline{\sigma } }]\!] \). Finally, we can interpret terms \( \Gamma \vdash { t}: { \tau } \) as morphisms \( [\![ { t}]\!] \) in \( \mathcal {C}([\![ \Gamma ]\!] ,[\![ { \tau }]\!]) \) and terms \( \Gamma ;\mathsf {v}:{ \underline{\tau } }\vdash { t}: { \underline{\sigma } } \) as \( [\![ { t}]\!] \) in \( \mathcal {L}([\![ \Gamma ]\!])([\![ { \underline{\tau } }]\!] ,[\![ { \underline{\sigma } }]\!]) \): \( \begin{align*} &[\![ \mathsf {lop}({ t}_1,\ldots ,{ t}_k;{ s})]\!] \stackrel{\mathrm{def}}{=}[\![ { s}]\!] ;\mathcal {L}(([\![ { t}_1]\!] ,\ldots ,[\![ { t}_k]\!]))([\![ \mathsf {lop}]\!])\\ &[\![ \Gamma ;\mathsf {v}:{ \underline{\tau } }\vdash \mathsf {v} : { \underline{\tau } }]\!] \stackrel{\mathrm{def}}{=}{\rm id}_{[\![ { \underline{\tau } }]\!] } \qquad [\![ \mathbf {let}\,{ x}={ t}\,\mathbf {in}\,{ s}]\!]
\stackrel{\mathrm{def}}{=}\mathcal {L}(({\rm id}_{},[\![ { t}]\!]))([\![ { s}]\!])\qquad [\![ \mathbf {let}\,\mathsf {v}={ t}\,\mathbf {in}\,{ s}]\!] \stackrel{\mathrm{def}}{=}[\![ { t}]\!] ;[\![ { s}]\!] \\ & [\![ \langle \rangle ]\!] \stackrel{\mathrm{def}}{=}()\qquad [\![ \langle { t}, { s}\rangle ]\!] \stackrel{\mathrm{def}}{=}([\![ { t}]\!] , [\![ { s}]\!])\qquad [\![ \mathbf {fst}\,{ t}]\!] \stackrel{\mathrm{def}}{=}[\![ { t}]\!] ;\pi _1\qquad [\![ \mathbf {snd}\,{ t}]\!] \stackrel{\mathrm{def}}{=}[\![ { t}]\!] ;\pi _2 \\ &[\![ \lambda { x}.\,{ t}]\!] \stackrel{\mathrm{def}}{=}\Psi ([\![ { t}]\!]) \qquad [\![ { t}\,{ s}]\!] \stackrel{\mathrm{def}}{=}\mathcal {L}(({\rm id}_{}, [\![ { s}]\!]))(\Psi ^{-1}([\![ { t}]\!])) \\ &[\![ {!}{ t}\otimes _{}{ s}]\!] \stackrel{\mathrm{def}}{=}\mathcal {L}(({\rm id}_{}, [\![ { t}]\!]))(\Phi ({\rm id}_{})); ({!}[\![ { \sigma }]\!] \otimes _{}[\![ { s}]\!]) \qquad [\![ \mathbf {case}\,{ t}\,\mathbf {of}\,{!}{ y}\otimes _{}\mathsf {v}\rightarrow { s}]\!] \stackrel{\mathrm{def}}{=}[\![ { t}]\!] ;\Phi ^{-1}([\![ { s}]\!]) \\ & [\![ \underline{\lambda } \mathsf {v}.\,{ t}]\!] \stackrel{\mathrm{def}}{=}\underline{\Lambda }({[\![ { t}]\!] }) \qquad [\![ { t}{\bullet } { s}]\!] \stackrel{\mathrm{def}}{=}\underline{\Lambda }^{-1}([\![ { t}]\!]);[\![ { s}]\!] \qquad [\![ \underline{0}]\!] \stackrel{\mathrm{def}}{=}[]\qquad [\![ { t}+{ s}]\!] \stackrel{\mathrm{def}}{=}({\rm id}_{}, {\rm id}_{});[[\![ { t}]\!] ,[\![ { s}]\!] ]. \end{align*} \) Observe that we interpret 0 and + using the biproduct structure of \( \mathcal {L} \).

Proposition 5.6.

The interpretation \( [\![ -]\!] \) of the language of Section 4 in categorical models is both sound and complete with respect to the βη + -equational theory: \( { t}\!\stackrel{\beta \eta +}{=}\!{ s} \) iff \( [\![ { t}]\!] =[\![ { s}]\!] \) in each such model.

The proof is a minor variation of syntax-semantics correspondences developed in detail in chapters 3 and 5 of Reference [41], where we use the well-known result that finite products in a category are biproducts iff the category is enriched over commutative monoids [17]. Soundness follows by case analysis on the βη+-rules. Completeness follows by the construction of the syntactic model \( {\mathbf {LSyn}}:\mathbf {CSyn}^{op}\rightarrow \mathbf {Cat} \):

  • CSyn extends its full subcategory Syn with Cartesian ⊸-types;

  • Objects of LSyn(τ) are linear types σ of our target language.

  • Morphisms in LSyn(τ)(σ , ρ) are terms \( { x}:{ \tau };\mathsf {v}:{ \underline{\sigma } }\vdash { t}:{ \underline{\rho } } \) modulo (α)βη + -equivalence.

  • Identities in LSyn(τ) are represented by the terms \( { x}:{ \tau };\mathsf {v}:{ \underline{\sigma } }\vdash \mathsf {v}:{ \underline{\sigma } } \).

  • Composition of \( { x}:{ \tau };\mathsf {v}:{ \underline{\sigma } }_1\vdash { t}:{ \underline{\sigma } }_2 \) and \( { x}:{ \tau };\mathsf {v}:{ \underline{\sigma } }_2\vdash { s}:{ \underline{\sigma } }_3 \) in LSyn(τ) is represented by \( { x}:{ \tau };\mathsf {v}:{ \underline{\sigma } }_1\vdash \mathbf {let}\,\mathsf {v}={ t}\,\mathbf {in}\,{ s}:{ \underline{\sigma } }_3 \).

  • Change of base LSyn(t): LSyn(τ) → LSyn(τ′) along (x′: τ′⊢t: τ) ∈ CSyn(τ′, τ) is defined \( {\mathbf {LSyn}}({ t})({ x}:{ \tau };\mathsf {v}:{ \underline{\sigma } }\vdash { s}:{ \underline{\rho } })\stackrel{\mathrm{def}}{=}{ x}^{\prime }:{ \tau }^{\prime };\mathsf {v}:{ \underline{\sigma } }\vdash \mathbf {let}\,{ x}={ t}\,\mathbf {in}\,{ s}~:~{ \underline{\rho } } \).

  • All type formers are interpreted as one expects based on their notation, using introduction and elimination rules for the required structural isomorphisms.

5.3 Concrete Denotational Semantics

5.3.1 Sets and Commutative Monoids.

Throughout this article, we have a particularly simple instance of the abstract semantics of our languages in mind, as we intend to interpret \( \mathbf {real}^n \) as the usual Euclidean space \( \mathbb {R}^n \) (considered as a set) and to interpret each program \( { x}_1:\mathbf {real}^{n_1},\ldots ,{ x}_k:\mathbf {real}^{n_k}\vdash { t}:{\mathbf {real}^m} \) as a function \( \mathbb {R}^{n_1}\times \cdots \times \mathbb {R}^{n_k}\rightarrow \mathbb {R}^m \). Similarly, we intend to interpret \( \underline{\mathbf {real}}^n \) as the commutative monoid \( \underline{\mathbb {R}}^n \) and each program \( { x}_1:\mathbf {real}^{n_1},\ldots ,{ x}_k:\mathbf {real}^{n_k};{ y}:\underline{\mathbf {real}}^{m}\vdash { t}: \underline{\mathbf {real}}^r \) as a function \( \mathbb {R}^{n_1}\times \cdots \times \mathbb {R}^{n_k}\rightarrow \underline{\mathbb {R}}^m\multimap \underline{\mathbb {R}}^r \). That is, we will work with a concrete denotational semantics in terms of sets and commutative monoids.

Some readers will immediately recognize that the free-forgetful adjunction \( \mathbf {Set}\leftrightarrows \mathbf {CMon} \) gives a model of full intuitionistic linear logic [35]. In fact, seeing that CMon is CMon-enriched, the model is biadditive [17].

However, we do not need such a rich type system. For us, the following suffices. Define \( \mathbf {CMon}(X) \), for \( X\in \mathrm{ob}\,\mathbf {Set} \), to have the objects of CMon and homsets \( \mathbf {CMon}(X)(Y,Z)\stackrel{\mathrm{def}}{=}\mathbf {Set}(X,Y\multimap ~Z) \). Identities are defined as \( x\mapsto (y\mapsto y) \) and composition \( f;_{\mathbf {CMon}(X)}g \) is given by \( x\mapsto (f(x);_{\mathbf {CMon}}g(x)) \). Given \( f\in \mathbf {Set}(X, X^{\prime }) \), we define change-of-base \( \mathbf {CMon}(X^{\prime })\rightarrow \mathbf {CMon}(X) \) as \( \mathbf {CMon}(f)(g)\stackrel{\mathrm{def}}{=}f;_{\mathbf {Set}}g \). CMon(−) defines a locally indexed category. By taking \( \mathcal {C}=\mathbf {Set} \) and \( \mathcal {L}(-)=\mathbf {CMon}(-) \), we obtain a concrete instance of our abstract semantics. Indeed, we have natural isomorphisms \( \begin{align*} &\mathbf {CMon}(X)({!}X^{\prime }\otimes _{}Y, Z)\xrightarrow {\Phi } \mathbf {CMon}(X\times X^{\prime })(Y,Z) && \mathbf {CMon}(X\times X^{\prime })(Y, Z)\xrightarrow {\Psi } \mathbf {CMon}(X)(Y,X^{\prime }\Rightarrow Z)\\ &\Phi (f)(x,x^{\prime })(y)\stackrel{\mathrm{def}}{=}f(x)({!}x^{\prime }\otimes _{}y) && \Psi (f)(x)(y)(x^{\prime })\stackrel{\mathrm{def}}{=}f(x,x^{\prime })(y)\\ &\Phi ^{-1}(f)(x)\left(\sum _{i=1}^n({!}x^{\prime }_i\otimes _{}y_i)\right)\stackrel{\mathrm{def}}{=}\sum _{i=1}^n f(x,x^{\prime }_i)(y_i)&& \Psi ^{-1}(f)(x,x^{\prime })(y)\stackrel{\mathrm{def}}{=}f(x)(y)(x^{\prime }). \end{align*} \)
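The currying isomorphism Ψ and its inverse can be written out directly on concrete functions. The following Python sketch (ours; it ignores the linearity side conditions, which would have to be checked separately) implements the bijection between \( \mathbf {CMon}(X\times X^{\prime })(Y,Z) \) and \( \mathbf {CMon}(X)(Y,X^{\prime }\Rightarrow Z) \) and confirms the round-trip on sample inputs:

```python
# Psi(f)(x)(y)(x') = f(x, x')(y), and its inverse, as plain reshuffling
# of arguments.

def Psi(f):
    return lambda x: (lambda y: (lambda xp: f((x, xp))(y)))

def Psi_inv(g):
    return lambda x_xp: (lambda y: g(x_xp[0])(y)(x_xp[1]))

# f(x, x')(y) = x*y + x'*y is a homomorphism (linear) in y for every (x, x').
f = lambda x_xp: (lambda y: x_xp[0] * y + x_xp[1] * y)

# Round-trip check on a grid of sample points:
for x in [0.0, 2.0]:
    for xp in [1.0, 3.0]:
        for y in [0.5, -1.0]:
            assert Psi_inv(Psi(f))((x, xp))(y) == f((x, xp))(y)
```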

The prime motivating examples of morphisms in this category are derivatives. Recall that the derivative at x, Df(x), and transposed derivative at x, Dft(x), of a differentiable function \( f:\mathbb {R}^n\rightarrow \mathbb {R}^m \) are defined as the unique functions \( Df(x):\mathbb {R}^n\rightarrow \mathbb {R}^m \) and \( {Df}^{t}(x):\mathbb {R}^m\rightarrow \mathbb {R}^n \) satisfying \( \begin{equation*} Df(x)(v)=\mathrm{lim}_{\delta \rightarrow 0}\frac{f(x+\delta \cdot v)-f(x)}{\delta } \qquad {Df}^{t}(x)(w) \odot v=w \odot Df(x)(v) , \end{equation*} \) where we write vv′ for the inner product \( \sum _{i=1}^n (\pi _i v)\cdot (\pi _i v^{\prime }) \) of vectors \( v,v^{\prime }\in \mathbb {R}^n \). Now, for differentiable \( f:\mathbb {R}^n\rightarrow \mathbb {R}^m \), Df and Dft give maps in \( \mathbf {CMon}(\mathbb {R}^n)(\underline{\mathbb {R}}^n,\underline{\mathbb {R}}^m) \) and \( \mathbf {CMon}(\mathbb {R}^n)(\underline{\mathbb {R}}^m,\underline{\mathbb {R}}^n) \), respectively. Indeed, derivatives Df(x) of f at x are linear functions, as are transposed derivatives Dft(x). Both depend differentiably on x in case f is differentiable. Note that the derivatives are not merely linear in the sense of preserving 0 and + . They are also multiplicative in the sense that (Df)(x)(c · v) = c · (Df)(x)(v). We could have captured this property by working with vector spaces rather than commutative monoids. However, we will not need this property to phrase or establish correctness of AD. Therefore, we restrict our attention to the more straightforward structure of commutative monoids.
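The defining adjoint equation \( {Df}^{t}(x)(w)\odot v = w\odot Df(x)(v) \) can be checked numerically. The sketch below (ours) takes \( f(x)=(x_0\cdot x_1,\sin x_0) \), writes out its derivative and transposed derivative by hand from the Jacobian, and verifies the inner-product identity at a sample point:

```python
# f : R^2 -> R^2, f(x) = (x0*x1, sin(x0)); its Jacobian at x is
#   J(x) = [[x1, x0], [cos(x0), 0]].
import math

def Df(x, v):      # J(x) @ v, linear in v
    return [x[1] * v[0] + x[0] * v[1], math.cos(x[0]) * v[0]]

def Dft(x, w):     # J(x)^T @ w, linear in w
    return [x[1] * w[0] + math.cos(x[0]) * w[1], x[0] * w[0]]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

x, v, w = [0.7, 2.0], [1.0, -3.0], [0.5, 4.0]
# <Df^t(x)(w), v> = <w, Df(x)(v)>
assert abs(dot(Dft(x, w), v) - dot(w, Df(x, v))) < 1e-12
```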

Defining \( [\![ \underline{\mathbf {real}}^n]\!] \stackrel{\mathrm{def}}{=}\underline{\mathbb {R}}^n \) and interpreting each \( \mathsf {lop}\in \mathsf {LOp} \) as the (differentiable) function \( [\![ \mathsf {lop}]\!] :(\mathbb {R}^{n_1}\times \cdots \times \mathbb {R}^{n_k})\rightarrow (\underline{\mathbb {R}}^{n^{\prime }_1}\times \cdots \times \underline{\mathbb {R}}^{n^{\prime }_l})\multimap (\underline{\mathbb {R}}^{m_1}\times \cdots \times \underline{\mathbb {R}}^{m_r}) \) it is intended to represent, we obtain a canonical interpretation of our target language in CMon.

5.4 Operational Semantics

In this section, we describe an operational semantics for our source and target languages. We consider call-by-value evaluation, but similar results can be obtained for call-by-name evaluation. We present this semantics in big-step style. Finally, we show that our denotational semantics are adequate with respect to this operational semantics, showing that the denotational semantics are sound tools for reasoning about our programs.

We consider the following programs to be values, where we write \( \underline{c} \) for \( \mathsf {op}() \) and \( \underline{lc}(\mathsf {v}) \) for \( \mathsf {lop}(;\mathsf {v}) \): \( \begin{align*} \begin{array}{llll} v, w, u & ::=& & \qquad \text{values} \\ &\mathrel {\vert }& { x}\\ &\mathrel {\vert }& \underline{c}\\ &\mathrel {\vert }& \langle \rangle \\ &\mathrel {\vert }& \langle v, w\rangle \end{array} \qquad \qquad \qquad \begin{array}{llll} &\mathrel {\vert }&\mathsf {v}\\ &\mathrel {\vert }\quad \,& \lambda { x}.\,{{ t}}\\ &\mathrel {\vert }& \underline{\lambda } \mathsf {v}.\,{{ t}}\\ &\mathrel {\vert }& {!} v_1\otimes _{} w_1+ ({!} v_2\otimes _{} w_2+(\cdots {!} v_n\otimes _{} w_n)\cdots)\\ &\mathrel {\vert }& \underline{lc}(\mathsf {v}). \end{array} \end{align*} \) We then define the big-step reduction relation \( { t}\Downarrow v \), which says that a program t evaluates to the value v, in Figure 7. To be able to define this semantics, we assume that our languages contain, at least, nullary operations \( \mathsf {op}=\underline{c} \) for all constants \( c\in \mathbb {R}^n \) and nullary linear operations \( \underline{lc} \) for all linear maps (matrices) \( lc\in \underline{\mathbb {R}}^n\multimap \underline{\mathbb {R}}^m \). For all operations \( \mathsf {op} \) and linear operations \( \mathsf {lop} \), we assume that an intended semantics \( [\![ \mathsf {op}]\!] \) and \( [\![ \mathsf {lop}]\!] \) is specified as (functions on) vectors of reals. As a side-note, we observe that this operational semantics has the following basic properties:


Fig. 7. The big-step call-by-value operational semantics t⇓v for the source and target languages. In the first rule, we intend to indicate that v⇓v unless v is a linear function between tuples of real arrays in the sense that \( \Gamma ;\mathsf {v}:\underline{\mathbf {real}}^{n_1}\boldsymbol {\mathop {*}}\cdot \cdot \boldsymbol {\mathop {*}}\underline{\mathbf {real}}^{n_k} \vdash v:{\underline{\mathbf {real}}^{n_1}\boldsymbol {\mathop {*}}\cdot \cdot \boldsymbol {\mathop {*}}\underline{\mathbf {real}}^{n_k}} \) .

Lemma 5.7 (Subject Reduction, Termination, Determinism).

If \( \Gamma \vdash { t}: { \tau } \), then there is a unique value v such that \( { t}\Downarrow v \). Then, \( \Gamma \vdash v: { \tau } \). Similarly, if \( \Gamma ;\mathsf {v}:{ \underline{\tau } }\vdash { t}: { \underline{\sigma } } \), then there is a unique value v such that \( { t}\Downarrow v \). Then, \( \Gamma ;\mathsf {v}:{ \underline{\tau } }\vdash v: { \underline{\sigma } } \).

Subject reduction and termination are proved through a standard logical relations argument similar to the ones in Reference [39]. Determinism is observed by simply noting that all rules in the definition of ⇓ have conclusions \( { t}\Downarrow v \) with disjoint t.

In fact, seeing that every well-typed program t has a unique value v such that \( { t}\Downarrow v \), we write ⇓t for this v.

We assume that only first-order types are observable (i.e., have decidable equality on their values): \( \begin{align*} \begin{array}{llll} \phi , \psi & ::=& & \qquad \text{first-order Cartesian types} \\ &\mathrel {\vert }& \mathbf {real}^n\\ &\mathrel {\vert }& \mathbf {1}\\ &\mathrel {\vert }& \phi \boldsymbol {\mathop {*}}\psi \end{array} \qquad \qquad \qquad \begin{array}{llll} \underline{\phi }, \underline{\psi } & ::=& & \qquad \text{first-order linear types} \\ &\mathrel {\vert }& \underline{\mathbf {real}}^n\\ &\mathrel {\vert }& \underline{\mathbf {1}}\\ &\mathrel {\vert }& \underline{\phi }\boldsymbol {\mathop {*}}\underline{\psi }. \end{array} \end{align*} \)

We define program contexts \( C[\_] \) to be programs \( C[\_] \) that use the variable \( \_ \) exactly once. We call such program contexts of first-order type if they satisfy the typing judgement \( \_:{ \tau }\vdash C[\_]:\phi \) for first-order Cartesian type ϕ or \( \_:{ \tau };\mathsf {v}:\underline{\psi }\vdash C[\_]:\underline{\phi } \) for first-order linear types \( \underline{\psi } \) and \( \underline{\phi } \). We write C[t] for the capturing substitution of t for \( \_ \) in \( C[\_] \). This operational semantics and notion of observable types lead us to define observational equivalence (aka contextual equivalence) \( { t}\approx { s} \) of programs \( \cdot \vdash { t},{ s}: { \tau } \), where we say that \( { t}\approx { s} \) holds if ⇓C[t] = ⇓C[s] for all program contexts of first-order type. Similarly, we call two programs \( \cdot ;\mathsf {v}:{ \underline{\tau } }\vdash { t},{ s}:{ \underline{\sigma } } \) of linear type observationally equivalent (write also \( { t}\approx { s} \)) if \( \underline{\lambda } \mathsf {v}.\,{ t}\approx \underline{\lambda } \mathsf {v}.\,{ s} \).

Note that we consider values \( \cdot ;\mathsf {v}:\underline{\psi }\vdash v:\underline{\phi } \) for first-order linear types \( \underline{\psi } \) and \( \underline{\phi } \) to be observable, seeing that linear functions between finite-dimensional spaces are finite-dimensional objects that can be fully observed by evaluating them on a (finite) basis for their domain type \( \underline{\psi } \). Indeed, such values v are always of the form \( \underline{lc}(\mathsf {v}) \) for some \( lc:\underline{\mathbb {R}}^n\multimap \underline{\mathbb {R}}^m \), hence are effectively matrices.
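The observation that first-order linear values can be observed by evaluating on a basis corresponds to the familiar fact that a linear map is determined by its matrix. A small Python sketch (ours):

```python
def linearize(f, n):
    """Recover the matrix of a linear f : R^n -> R^m, column by column,
    by evaluating f on the standard basis of R^n."""
    basis = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
    cols = [f(e) for e in basis]
    m = len(cols[0])
    return [[cols[j][i] for j in range(n)] for i in range(m)]

# A linear function with matrix [[2, 3], [0, 1]]:
f = lambda v: [2.0 * v[0] + 3.0 * v[1], v[1]]
assert linearize(f, 2) == [[2.0, 3.0], [0.0, 1.0]]
```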

We first show two standard lemmas.

Lemma 5.8 (Compositionality of \( [\![ - ]\!] \)).

For any two terms \( \Gamma \vdash { t},{ s}: { \tau } \) and any type-compatible program context \( C[\_] \), we have that \( [\![ { t}]\!] =[\![ { s}]\!] \) implies \( [\![ C[{ t}]]\!] =[\![ C[{ s}]]\!] \).

This is proved by induction on the structure of terms.

Lemma 5.9 (Soundness of ⇓).

For any well-typed program t, we have that \( [\![ { t}]\!] =[\![ \Downarrow { t}]\!] \).

This is proved by induction on the definition of ⇓: note that every operational rule is also an equation in the semantics. Then, adequacy follows.

Theorem 5.10 (Adequacy).

In case \( [\![ { t}]\!] =[\![ { s}]\!] \), it follows that \( { t}\approx { s} \).

Proof.

Suppose that \( [\![ { t}]\!] =[\![ { s}]\!] \) and let \( C[\_] \) be a type-compatible program context of first-order type. Then, \( [\![ \Downarrow C[{ t}]]\!] =[\![ C[{ t}]]\!] =[\![ C[{ s}]]\!] =[\![ \Downarrow C[{ s}]]\!] \) by the previous two lemmas. Finally, as values of observable types are easily seen to be faithfully (injectively) interpreted in our denotational semantics, it follows that ⇓C[t] = ⇓C[s]. Therefore, \( { t}\approx { s} \).□

That is, the denotational semantics is a sound means for proving observational equivalences of the operational semantics.


6 PAIRING PRIMALS WITH (CO)TANGENTS, CATEGORICALLY

In this section, we show that any categorical model \( \mathcal {L}:\mathcal {C}^{op}\rightarrow \mathbf {Cat} \) of our target language gives rise to two Cartesian closed categories \( \Sigma _{\mathcal {C}}\mathcal {L} \) and \( \Sigma _{\mathcal {C}}\mathcal {L}^{op} \). We believe that these observations of Cartesian closure are novel. Surprisingly, they are highly relevant for obtaining a principled understanding of AD on a higher-order language: the former for forward AD and the latter for reverse AD. Applying these constructions to the syntactic category \( {\mathbf {LSyn}}:\mathbf {CSyn}^{op}\rightarrow \mathbf {Cat} \) of our target language, we produce a canonical definition of the AD macros, as the canonical interpretation of the λ-calculus in the Cartesian closed categories \( \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}} \) and \( \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}}^{op} \). In addition, when we apply this construction to the denotational semantics \( \mathbf {CMon}:\mathbf {Set}^{op}\rightarrow \mathbf {Cat} \) and invoke a categorical logical relations technique, known as subsconing, we find an elegant correctness proof of the source code transformations. The abstract construction delineated in this section is in many ways the theoretical crux of this article.

6.1 Grothendieck Constructions on Strictly Indexed Categories

Recall that for any strictly indexed category, i.e., a (strict) functor \( \mathcal {L}:\mathcal {C}^{op}\rightarrow \mathbf {Cat} \), we can consider its total category (or Grothendieck construction) \( \Sigma _\mathcal {C}\mathcal {L} \), which is a fibered category over \( \mathcal {C} \) (see Sections A1.1.7 and B1.3.1 in Reference [23]). We can view it as a Σ-type of categories, which generalizes the Cartesian product. Concretely, its objects are pairs (A1, A2) of objects A1 of \( \mathcal {C} \) and A2 of \( \mathcal {L}(A_1) \). Its morphisms (A1, A2) → (B1, B2) are pairs (f1, f2) of a morphism f1: A1B1 in \( \mathcal {C} \) and a morphism \( f_2:A_2\rightarrow \mathcal {L}(f_1)(B_2) \) in \( \mathcal {L}(A_1) \). Identities are \( {\rm id}_{(A_1,A_2)}\stackrel{\mathrm{def}}{=}({\rm id}_{A_1}, {\rm id}_{A_2}) \) and composition is \( (f_1,f_2);(g_1,g_2)\stackrel{\mathrm{def}}{=}(f_1;g_1, f_2; \mathcal {L}(f_1)(g_2)) \). Furthermore, given a strictly indexed category \( \mathcal {L}:\mathcal {C}^{op}\rightarrow \mathbf {Cat} \), we can consider its fibrewise dual category \( \mathcal {L}^{op}:\mathcal {C}^{op}\rightarrow \mathbf {Cat} \), which is defined as the composition \( \mathcal {C}^{op}\xrightarrow {\mathcal {L}}\mathbf {Cat}\xrightarrow {op}\mathbf {Cat} \). Thus, we can apply the same construction to \( \mathcal {L}^{op} \) to obtain a category \( \Sigma _{\mathcal {C}}\mathcal {L}^{op} \).

6.2 Structure of \( \Sigma _{\mathcal {C}}\mathcal {L} \) and \( \Sigma _{\mathcal {C}}\mathcal {L}^{op} \) for Locally Indexed Categories

Section 6.1 applies, in particular, to the locally indexed categories of Section 5. In this case, we will analyze the categorical structure of \( \Sigma _{\mathcal {C}}\mathcal {L} \) and \( \Sigma _{\mathcal {C}}\mathcal {L}^{op} \). For reference, we first give a concrete description.

\( \Sigma _{\mathcal {C}}\mathcal {L} \) is the following category:

  • objects are pairs (A1, A2) of objects A1 of \( \mathcal {C} \) and A2 of \( \mathcal {L} \);

  • morphisms (A1, A2) → (B1, B2) are pairs (f1, f2) with \( f_1:A_1\rightarrow B_1\in \mathcal {C} \) and \( f_2:A_2\rightarrow B_2\in \mathcal {L}(A_1) \);

  • composition of \( (A_1,A_2)\xrightarrow {(f_1,f_2)}(B_1,B_2) \) and \( (B_1,B_2)\xrightarrow {(g_1,g_2)}(C_1,C_2) \) is given by \( (f_1;g_1, f_2;\mathcal {L}(f_1)(g_2)) \) and identities \( {\rm id}_{(A_1,A_2)} \) are \( ({\rm id}_{A_1},{\rm id}_{A_2}) \).

\( \Sigma _{\mathcal {C}}\mathcal {L}^{op} \) is the following category:

  • objects are pairs (A1, A2) of objects A1 of \( \mathcal {C} \) and A2 of \( \mathcal {L} \);

  • morphisms (A1, A2) → (B1, B2) are pairs (f1, f2) with \( f_1:A_1\rightarrow B_1\in \mathcal {C} \) and \( f_2:B_2\rightarrow A_2\in \mathcal {L}(A_1) \);

  • composition of \( (A_1,A_2)\xrightarrow {(f_1,f_2)}(B_1,B_2) \) and \( (B_1,B_2)\xrightarrow {(g_1,g_2)}(C_1,C_2) \) is given by \( (f_1;g_1, \mathcal {L}(f_1)(g_2);f_2) \) and identities \( {\rm id}_{(A_1,A_2)} \) are \( ({\rm id}_{A_1},{\rm id}_{A_2}) \).

These categories are relevant to automatic differentiation for the following reason. Let us write CartSp for the category of Cartesian spaces \( \mathbb {R}^n \) and differentiable functions between them. Observe that for any categorical model \( \mathcal {L}:\mathcal {C}^{op}\rightarrow \mathbf {Cat} \) of the target language, \( \begin{align*} &\Sigma _\mathcal {C}\mathcal {L}((A_1,A_2),(B_1,B_2))=\mathcal {C}(A_1,B_1)\times \mathcal {L}(A_1)(A_2,B_2)\cong \mathcal {C}(A_1, B_1\times (A_2\multimap B_2)) \\ &\Sigma _\mathcal {C}\mathcal {L}^{op}((A_1,A_2),(B_1,B_2))=\mathcal {C}(A_1,B_1)\times \mathcal {L}(A_1)(B_2,A_2)\cong \mathcal {C}(A_1, B_1\times (B_2\multimap A_2)). \end{align*} \) Then, observing that the composition in these Σ-types of categories is precisely the chain rule, we see that the paired-up derivative \( \mathcal {T}_{} \) and transposed derivative \( \mathcal {T}^*_{} \) of Section 2.2 define functors \( \begin{equation*} \mathcal {T}_{}: \mathbf {CartSp}\rightarrow \Sigma _\mathbf {Set}\mathbf {CMon}\qquad \qquad \qquad \qquad \mathcal {T}^*_{}:\mathbf {CartSp}\rightarrow \Sigma _\mathbf {Set}\mathbf {CMon}^{op}. \end{equation*} \) As we will see in Section 7, we can implement (higher-order extensions of) these functors as code transformations \( \begin{equation*} \overrightarrow {\mathcal {D}}_{}: \mathbf {Syn}\rightarrow \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}}\qquad \qquad \qquad \qquad \overleftarrow {\mathcal {D}}_{}:\mathbf {Syn}\rightarrow \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}}^{op}. \end{equation*} \)
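To make this concrete, here is a small Python sketch (our own illustration, not part of the formal development) of morphisms in \( \Sigma _\mathbf {Set}\mathbf {CMon} \) and \( \Sigma _\mathbf {Set}\mathbf {CMon}^{op} \), with a morphism represented as a pair of a primal function and, at each point, a linear (co)tangent map on floats; composing such pairs is exactly the chain rule.

```python
import math

# A forward-mode morphism (f1, f2): f1 is the primal function and
# f2(x) is the linear map sending tangents at x to tangents at f1(x).
# Composition follows the Sigma_C L rule (f1;g1, f2;L(f1)(g2)).
def fwd_compose(f, g):
    f1, f2 = f
    g1, g2 = g
    return (lambda x: g1(f1(x)),
            lambda x: lambda dx: g2(f1(x))(f2(x)(dx)))

# A reverse-mode morphism (f1, f2): f2(x) maps cotangents at f1(x)
# back to cotangents at x. Composition follows the Sigma_C L^op rule
# (f1;g1, L(f1)(g2);f2): cotangents flow backwards.
def rev_compose(f, g):
    f1, f2 = f
    g1, g2 = g
    return (lambda x: g1(f1(x)),
            lambda x: lambda db: f2(x)(g2(f1(x))(db)))

# Paired-up derivatives T(f) = (f, x |-> Df(x)) of two scalar functions:
square = (lambda x: x * x, lambda x: lambda dx: 2.0 * x * dx)
sine = (math.sin, lambda x: lambda dx: math.cos(x) * dx)

h = fwd_compose(square, sine)  # computes sin(x^2) and its derivative
primal = h[0](2.0)             # sin(4.0)
tangent = h[1](2.0)(1.0)       # cos(4.0) * 4.0, by the chain rule
```

For scalar functions, the tangent and cotangent maps coincide, so `rev_compose` of the same pairs yields the same derivative value, only with the linear maps composed in the opposite order.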

As we will see, we can derive these code transformations by examining the categorical structure present in \( \Sigma _{\mathcal {C}}\mathcal {L} \) and \( \Sigma _{\mathcal {C}}\mathcal {L}^{op} \) for categorical models \( \mathcal {L}:\mathcal {C}^{op}\rightarrow \mathbf {Cat} \) of the target language in the sense of Section 5. We believe the existence of this categorical structure is a novel observation. We will make heavy use of it to define our AD algorithms and to prove them correct.

Theorem 6.1.

For a categorical model \( \mathcal {L}:\mathcal {C}^{op}\rightarrow \mathbf {Cat} \) of the target language, \( \Sigma _{\mathcal {C}}\mathcal {L} \) has:

  • terminal object \( \mathbb {1}=(\mathbb {1},\mathbb {1}) \) and binary products (A1, A2) × (B1, B2) = (A1 × B1, A2 × B2);

  • exponentials (A1, A2)⇒(B1, B2) = (A1⇒(B1 × (A2 ⊸ B2)), A1 ⇒ B2).

Proof.

We have (natural) bijections \( \begin{align*} &\Sigma _{\mathcal {C}}\mathcal {L}((A_1,A_2), (\mathbb {1},\mathbb {1})) = \mathcal {C}(A_1,\mathbb {1})\times \mathcal {L}(A_1)(A_2,\mathbb {1}) \cong \mathbb {1}\times \mathbb {1}\cong \mathbb {1}&&\lbrace \text{$\mathbb {1}$ terminal in $\mathcal {C}$ and $\mathcal {L}(A_1)$}\rbrace \\ &\\ &\Sigma _{\mathcal {C}}\mathcal {L}((A_1,A_2), (B_1\times C_1,B_2\times C_2)) =\mathcal {C}(A_1,B_1\times C_1)\times \mathcal {L}(A_1)(A_2, B_2\times C_2)\\ &\cong \mathcal {C}(A_1,B_1)\times \mathcal {C}(A_1,C_1)\times \mathcal {L}(A_1)(A_2, B_2)\times \mathcal {L}(A_1)(A_2, C_2)&&\lbrace \text{$\times $ product in $\mathcal {C}$ and $\mathcal {L}(A_1)$}\rbrace \\ &\cong \Sigma _{\mathcal {C}}\mathcal {L}((A_1,A_2),(B_1,B_2))\times \Sigma _{\mathcal {C}}\mathcal {L}((A_1,A_2), (C_1,C_2))\\ &\\ &\Sigma _{\mathcal {C}}\mathcal {L}((A_1,A_2)\times (B_1,B_2), (C_1,C_2)) = \Sigma _{\mathcal {C}}\mathcal {L}((A_1\times B_1,A_2\times B_2), (C_1,C_2))\\ &=\mathcal {C}(A_1\times B_1, C_1)\times \mathcal {L}(A_1\times B_1)(A_2\times B_2, C_2)\\ &\cong \mathcal {C}(A_1\times B_1, C_1)\times \mathcal {L}(A_1\times B_1)(A_2, C_2)\times \mathcal {L}(A_1\times B_1)(B_2,C_2)&&\lbrace \text{$\times $ coproduct in $\mathcal {L}(A_1\times B_1)$}\rbrace \\ &\cong \mathcal {C}(A_1\times B_1, C_1)\times \mathcal {L}(A_1)(A_2, B_1\Rightarrow C_2)\times \mathcal {L}(A_1\times B_1)(B_2,C_2)&&\lbrace \text{$\Rightarrow $-types in $\mathcal {L}$}\rbrace \\ &\cong \mathcal {C}(A_1\times B_1, C_1)\times \mathcal {L}(A_1)(A_2, B_1\Rightarrow C_2)\times \mathcal {C}(A_1\times B_1,B_2\multimap C_2)&&\lbrace \text{Cartesian $\multimap $-types}\rbrace \\ &\cong \mathcal {C}(A_1\times B_1, C_1\times (B_2\multimap C_2))\times \mathcal {L}(A_1)(A_2, B_1\Rightarrow C_2)&&\lbrace \text{$\times $ is product in $\mathcal {C}$}\rbrace \\ &\cong \mathcal {C}(A_1, B_1\Rightarrow (C_1\times (B_2\multimap C_2)))\times \mathcal {L}(A_1)(A_2, B_1\Rightarrow C_2)&&\lbrace \text{$\Rightarrow $ is exponential in $\mathcal {C}$}\rbrace \\ &= \Sigma _{\mathcal {C}}\mathcal {L}((A_1,A_2), (B_1\Rightarrow (C_1\times (B_2\multimap C_2)), B_1\Rightarrow C_2))\\ &= \Sigma _{\mathcal {C}}\mathcal {L}((A_1,A_2), (B_1,B_2)\Rightarrow (C_1,C_2)). \end{align*} \)

We observe that we need \( \mathcal {L} \) to have biproducts (equivalently, to be CMon-enriched) to show Cartesian closure. Furthermore, we need linear ⇒-types and Cartesian ⊸-types to construct exponentials. Codually, we also obtain the Cartesian closure of \( \Sigma _\mathcal {C}\mathcal {L}^{op} \). However, for concreteness, we write out the proof by hand.

Theorem 6.2.

For a categorical model \( \mathcal {L}:\mathcal {C}^{op}\rightarrow \mathbf {Cat} \) of the target language, \( \Sigma _{\mathcal {C}}\mathcal {L}^{op} \) has

  • terminal object \( \mathbb {1}=(\mathbb {1},\mathbb {1}) \) and binary products (A1, A2) × (B1, B2) = (A1 × B1, A2 × B2);

  • exponentials (A1, A2)⇒(B1, B2) = (A1⇒(B1 × (B2 ⊸ A2)), !A1 ⊗ B2).

Proof. We have (natural) bijections \( \begin{align*} &\Sigma _{\mathcal {C}}\mathcal {L}^{op}((A_1,A_2), (\mathbb {1},\mathbb {1})) = \mathcal {C}(A_1,\mathbb {1})\times \mathcal {L}(A_1)(\mathbb {1},A_2) \cong \mathbb {1}\times \mathbb {1}\cong \mathbb {1}&&\lbrace \text{$\mathbb {1}$ terminal in $\mathcal {C}$, initial in $\mathcal {L}(A_1)$}\rbrace \\ &\\ &\Sigma _{\mathcal {C}}\mathcal {L}^{op}((A_1,A_2), (B_1\times C_1,B_2\times C_2)) =\mathcal {C}(A_1,B_1\times C_1)\times \mathcal {L}(A_1)(B_2\times C_2,A_2)\\ &\cong \mathcal {C}(A_1,B_1)\times \mathcal {C}(A_1,C_1)\times \mathcal {L}(A_1)(B_2,A_2)\times \mathcal {L}(A_1)(C_2,A_2)&&\lbrace \text{$\times $ product in $\mathcal {C}$, coproduct in $\mathcal {L}(A_1)$}\rbrace \\ &= \Sigma _{\mathcal {C}}\mathcal {L}^{op}((A_1,A_2),(B_1,B_2))\times \Sigma _{\mathcal {C}}\mathcal {L}^{op}((A_1,A_2), (C_1,C_2))\\ &\\ &\Sigma _{\mathcal {C}}\mathcal {L}^{op}((A_1,A_2)\times (B_1,B_2), (C_1,C_2)) = \Sigma _{\mathcal {C}}\mathcal {L}^{op}((A_1\times B_1,A_2\times B_2), (C_1,C_2))\\ &=\mathcal {C}(A_1\times B_1, C_1)\times \mathcal {L}(A_1\times B_1)(C_2, A_2\times B_2)\\ &\cong \mathcal {C}(A_1\times B_1, C_1)\times \mathcal {L}(A_1\times B_1)(C_2, A_2)\times \mathcal {L}(A_1\times B_1)(C_2, B_2)&&\lbrace \text{$\times $ is product in $\mathcal {L}(A_1\times B_1)$}\rbrace \\ &\cong \mathcal {C}(A_1\times B_1, C_1)\times \mathcal {C}(A_1\times B_1,C_2\multimap B_2)\times \mathcal {L}(A_1\times B_1)(C_2, A_2)&&\lbrace \text{Cartesian $\multimap $-types}\rbrace \\ &\cong \mathcal {C}(A_1\times B_1, C_1\times (C_2\multimap B_2))\times \mathcal {L}(A_1\times B_1)(C_2, A_2)&&\lbrace \text{$\times $ is product in $\mathcal {C}$}\rbrace \\ &\cong \mathcal {C}(A_1,B_1\Rightarrow (C_1\times (C_2\multimap B_2)))\times \mathcal {L}(A_1\times B_1)(C_2, A_2)&&\lbrace \text{$\Rightarrow $ is exponential in $\mathcal {C}$}\rbrace \\ &\cong \mathcal {C}(A_1,B_1\Rightarrow (C_1\times (C_2\multimap B_2)))\times \mathcal {L}(A_1)({!}B_1\otimes _{}C_2, A_2)&&\lbrace \text{${!}(-)\otimes _{}(-)$-types}\rbrace \\ &= \Sigma _{\mathcal {C}}\mathcal {L}^{op}((A_1,A_2), (B_1\Rightarrow (C_1\times (C_2\multimap B_2)), {!}B_1\otimes _{}C_2))\\ &=\Sigma _{\mathcal {C}}\mathcal {L}^{op}((A_1,A_2), (B_1,B_2)\Rightarrow (C_1,C_2)).&&\Box \end{align*} \)

Observe that we need the biproduct structure of \( \mathcal {L} \) to construct finite products in \( \Sigma _{\mathcal {C}}\mathcal {L}^{op} \). Furthermore, we need Cartesian ⊸-types and !(−)⊗(−)-types, but not biproducts, to construct exponentials.

Interestingly, we observe that the exponentials in \( \Sigma _\mathcal {C}\mathcal {L} \) and \( \Sigma _\mathcal {C}\mathcal {L}^{op} \) are not fibered over \( \mathcal {C} \) (unlike their products, for example). Indeed, (A1, A2)⇒(B1, B2) has a first component that is not equal to A1 ⇒ B1. In the context of automatic differentiation, this has the consequence that the primals associated with values f of function type are not equal to f itself. Instead, as we will see, they include both a copy of f and a copy of its (transposed) derivative. These primals at higher-order types can be contrasted with the situation at first-order types, where values are equal to their associated primals, as a result of the finite products being fibered.

7 NOVEL AD ALGORITHMS AS SOURCE-CODE TRANSFORMATIONS

As \( \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}} \) and \( \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}}^{op} \) are both Cartesian closed categories by Theorems 6.1 and 6.2, the universal property of the source language (Proposition 3.1) gives us the following definition of forward and reverse mode CHAD as canonical homomorphic functors.

Corollary 7.1 (Canonical Definition of CHAD).

Once we fix compatible definitions \( \overrightarrow {\mathcal {D}}_{}(\mathbf {real}^n) \) and \( \overrightarrow {\mathcal {D}}_{}(\mathsf {op}) \) (resp. \( \overleftarrow {\mathcal {D}}_{}(\mathbf {real}^n) \) and \( \overleftarrow {\mathcal {D}}_{}(\mathsf {op}) \)), we obtain a unique structure-preserving functor \( \begin{equation*} \overrightarrow {\mathcal {D}}_{}(-):\mathbf {Syn}\rightarrow \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}}\qquad \qquad (\text{resp.\quad } \overleftarrow {\mathcal {D}}_{}(-):\mathbf {Syn}\rightarrow \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}}^{op}). \end{equation*} \)

In this section, we discuss

  • the interpretation of the above functors as a type-respecting code transformation;

  • how we choose the basic definitions \( \overrightarrow {\mathcal {D}}_{}(\mathbf {real}^n) \), \( \overleftarrow {\mathcal {D}}_{}(\mathbf {real}^n) \), \( \overrightarrow {\mathcal {D}}_{}(\mathsf {op}) \), and \( \overleftarrow {\mathcal {D}}_{}(\mathsf {op}) \);

  • what the induced AD definitions \( \overrightarrow {\mathcal {D}}_{}({ t}) \) and \( \overleftarrow {\mathcal {D}}_{}({ t}) \) are for arbitrary source language programs t;

  • some consequences of the sharing of subexpressions that we have employed when defining the code transformations.

7.1 Some Notation

In the rest of this section, we use the following syntactic sugar:

  • a notation for (linear) n-ary tuple types: \( \boldsymbol {(}{ \underline{\tau } }_1 \boldsymbol {\mathop {*}} \ldots \boldsymbol {\mathop {*}} { \underline{\tau } }_n\boldsymbol {)}\stackrel{\mathrm{def}}{=}\boldsymbol {(}\boldsymbol {(}\boldsymbol {(}{ \underline{\tau } }_1 \boldsymbol {\mathop {*}} { \underline{\tau } }_2\boldsymbol {)}\cdots \boldsymbol {\mathop {*}} { \underline{\tau } }_{n-1}\boldsymbol {)} \boldsymbol {\mathop {*}} { \underline{\tau } }_n\boldsymbol {)} \);

  • a notation for n-ary tuples: \( \langle { t}_1, \ldots , { t}_n\rangle \stackrel{\mathrm{def}}{=}\langle \langle \langle { t}_1, { t}_2\rangle \cdots , { t}_{n-1}\rangle , { t}_n\rangle \);

  • given \( \Gamma ;\mathsf {v}:{ \underline{\tau } }\vdash { t}:\boldsymbol {(}{ \underline{\sigma } }_1 \boldsymbol {\mathop {*}} \cdots \boldsymbol {\mathop {*}} { \underline{\sigma } }_n\boldsymbol {)} \), we write \( \Gamma ;\mathsf {v}:{ \underline{\tau } }\vdash \mathbf {proj}_{i}\,({ t}):{ \underline{\sigma } }_i \) for the obvious ith projection of t, which is constructed by repeatedly applying fst  and snd  to t;

  • given \( \Gamma ;\mathsf {v}:{ \underline{\tau } }\vdash { t}:{ \underline{\sigma } }_i \), we write the ith coprojection \( \Gamma ;\mathsf {v}:{ \underline{\tau } }\vdash \mathbf {coproj}_{i}\,({ t})\stackrel{\mathrm{def}}{=}\langle \underline{0},\ldots ,\underline{0},{ t},\underline{0},\ldots ,\underline{0}\rangle :\boldsymbol {(}{ \underline{\sigma } }_1 \boldsymbol {\mathop {*}} \cdots \boldsymbol {\mathop {*}} { \underline{\sigma } }_n\boldsymbol {)} \);

  • for a list x1, …, xn of distinct identifiers, we write \( \mathbf {idx}({ x}_i; { x}_1,\ldots , { x}_n)\,\stackrel{\mathrm{def}}{=}i \) for the index of the identifier xi in this list;

  • a let-binding for tuples: \( \mathbf {let}\,\langle { x}, { y}\rangle ={ t}\,\mathbf {in}\,{ s}\stackrel{\mathrm{def}}{=}\mathbf {let}\,{ z}={ t}\,\mathbf {in}\,\mathbf {let}\,{ x}=\mathbf {fst}\,{ z}\,\mathbf {in}\,\mathbf {let}\,{ y}=\mathbf {snd}\,{ z}\,\mathbf {in}\,{ s}, \) where z is a fresh variable.

Furthermore, all variables used in the source code transformations below are assumed to be freshly chosen.

7.2 \( \overrightarrow {\mathcal {D}}_{}(-) \) and \( \overleftarrow {\mathcal {D}}_{}(-) \) as Type-Respecting Code Transformations

Writing out the definitions of the categories \( \mathbf {Syn} \), \( \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}} \), and \( \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}}^{op} \) and of the functors \( \overrightarrow {\mathcal {D}}_{}(-) \) and \( \overleftarrow {\mathcal {D}}_{}(-) \) provides, for each type τ of the source language (Section 3), the following types in the target language (Section 4):

  • a Cartesian type \( \overrightarrow {\mathcal {D}}_{}({ \tau })_1 \) of forward mode primals;

  • a linear type \( \overrightarrow {\mathcal {D}}_{}({ \tau })_2 \) of forward mode tangents;

  • a Cartesian type \( \overleftarrow {\mathcal {D}}_{}({ \tau })_1 \) of reverse mode primals;

  • a linear type \( \overleftarrow {\mathcal {D}}_{}({ \tau })_2 \) of reverse mode cotangents.

We can extend the actions of \( \overrightarrow {\mathcal {D}}_{}(-) \) and \( \overleftarrow {\mathcal {D}}_{}(-) \) to typing contexts Γ = x1: τ1, …, xn: τn as \( \begin{align*} &\overrightarrow {\mathcal {D}}_{}(\Gamma)_1\stackrel{\mathrm{def}}{=}{ x}_1:\overrightarrow {\mathcal {D}}_{}({ \tau }_1)_1,\ldots ,{ x}_n:\overrightarrow {\mathcal {D}}_{}({ \tau }_n)_1\qquad \;&&\text{(a Cartesian typing context)}\\ &\overrightarrow {\mathcal {D}}_{}(\Gamma)_2\stackrel{\mathrm{def}}{=}\boldsymbol {(}\overrightarrow {\mathcal {D}}_{}({ \tau }_1)_2 \boldsymbol {\mathop {*}} \cdots \boldsymbol {\mathop {*}} \overrightarrow {\mathcal {D}}_{}({ \tau }_n)_2\boldsymbol {)}\qquad \;&&\text{(a linear type)}\\ &\overleftarrow {\mathcal {D}}_{}(\Gamma)_1\stackrel{\mathrm{def}}{=}{ x}_1:\overleftarrow {\mathcal {D}}_{}({ \tau }_1)_1,\ldots ,{ x}_n:\overleftarrow {\mathcal {D}}_{}({ \tau }_n)_1\qquad \;&&\text{(a Cartesian typing context)}\\ &\overleftarrow {\mathcal {D}}_{}(\Gamma)_2\stackrel{\mathrm{def}}{=}\boldsymbol {(}\overleftarrow {\mathcal {D}}_{}({ \tau }_1)_2 \boldsymbol {\mathop {*}} \cdots \boldsymbol {\mathop {*}} \overleftarrow {\mathcal {D}}_{}({ \tau }_n)_2\boldsymbol {)}\qquad \;&&\text{(a linear type)}. \end{align*} \) Similarly, \( \overrightarrow {\mathcal {D}}_{}(-) \) and \( \overleftarrow {\mathcal {D}}_{}(-) \) associate to each source language program Γ ⊢ t: τ the following programs in the target language (Section 4):

  • a forward mode primal computation \( \overrightarrow {\mathcal {D}}_{}(\Gamma)_1\vdash \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1:\overrightarrow {\mathcal {D}}_{}({ \tau })_1 \);

  • a forward mode tangent computation \( \overrightarrow {\mathcal {D}}_{}(\Gamma)_1;\mathsf {v}:\overrightarrow {\mathcal {D}}_{}(\Gamma)_2\vdash \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2:\overrightarrow {\mathcal {D}}_{}({ \tau })_2 \);

  • a reverse mode primal computation \( \overleftarrow {\mathcal {D}}_{}(\Gamma)_1\vdash \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1:\overleftarrow {\mathcal {D}}_{}({ \tau })_1 \);

  • a reverse mode cotangent computation \( \overleftarrow {\mathcal {D}}_{}(\Gamma)_1;\mathsf {v}:\overleftarrow {\mathcal {D}}_{}({ \tau })_2\vdash \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2:\overleftarrow {\mathcal {D}}_{}(\Gamma)_2 \).

Here, we write \( \overline{\Gamma } \) for the list of identifiers x1, …, xn that occur in the typing context Γ = x1: τ1, …, xn: τn. As we will see, we need to know these context identifiers to define the code transformation. Equivalently, we can pair up the primal and (co)tangent computations as

  • a combined forward mode primal and tangent computation \( \overrightarrow {\mathcal {D}}_{}(\Gamma)_1\vdash \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}):\overrightarrow {\mathcal {D}}_{}({ \tau })_1\boldsymbol {\mathop {*}}(\overrightarrow {\mathcal {D}}_{}(\Gamma)_2\multimap \overrightarrow {\mathcal {D}}_{}({ \tau })_2) \), where \( \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\!\stackrel{\beta \eta +}{=}\!\langle \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1, \underline{\lambda } \mathsf {v}.\,\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2\rangle \);

  • a combined reverse mode primal and cotangent computation \( \overleftarrow {\mathcal {D}}_{}(\Gamma)_1\vdash \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}):\overleftarrow {\mathcal {D}}_{}({ \tau })_1\boldsymbol {\mathop {*}}(\overleftarrow {\mathcal {D}}_{}({ \tau })_2\multimap \overleftarrow {\mathcal {D}}_{}(\Gamma)_2) \), where \( \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\!\stackrel{\beta \eta +}{=}\!\langle \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1, \underline{\lambda } \mathsf {v}.\,\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2\rangle \).

We prefer to work with these combined primal and (co)tangent code transformations as this allows us to share common subexpressions between the primal and (co)tangent computations using let-bindings. Indeed, note that the universal property of Syn only defines the code transformations \( \overrightarrow {\mathcal {D}}_{}(-) \) and \( \overleftarrow {\mathcal {D}}_{}(-) \) up to \( \!\stackrel{\beta \eta +}{=}\! \). In writing down the definitions of CHAD on programs, we make sure to choose sensible representatives of these βη + -equivalence classes: ones that share common subexpressions through let-bindings. While these let-bindings naturally do not affect correctness of the transformation, they let us avoid code explosion at compile-time and unnecessary recomputation at runtime.

Finally, we note that, due to their definition from a universal property, our code transformations automatically respect equational reasoning in the sense that \( \Gamma \vdash { t}\stackrel{\beta \eta }{=}{ s}:{ \tau } \) implies that \( \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\!\stackrel{\beta \eta +}{=}\!\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s}) \) and \( \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\!\stackrel{\beta \eta +}{=}\!\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s}) \).

7.3 The Basic Definitions: \( \overrightarrow {\mathcal {D}}_{}(\mathbf {real}^n) \), \( \overleftarrow {\mathcal {D}}_{}(\mathbf {real}^n) \), \( \overrightarrow {\mathcal {D}}_{}(\mathsf {op}) \), and \( \overleftarrow {\mathcal {D}}_{}(\mathsf {op}) \)

In Section 4, we have assumed that there are suitable terms (for example, linear operations) \( \begin{align*} &{ x}_1:\mathbf {real}^{n_1}, \ldots ,{ x}_k: \mathbf {real}^{n_k}\;\;;\;\; \mathsf {v}:\underline{\mathbf {real}}^{n_1}\boldsymbol {\mathop {*}} \cdots \boldsymbol {\mathop {*}} \underline{\mathbf {real}}^{n_k} &&\vdash D \mathsf {op}({ x}_1,\ldots ,{ x}_k;\mathsf {v}) &:\; &\underline{\mathbf {real}}^m\\ &{ x}_1:\mathbf {real}^{n_1}, \ldots ,{ x}_k: \mathbf {real}^{n_k}\;\;;\;\; \mathsf {v}: \underline{\mathbf {real}}^m &&\vdash {D \mathsf {op}}^{t}({ x}_1,\ldots ,{ x}_k;\mathsf {v})& :\;&\underline{\mathbf {real}}^{n_1}\boldsymbol {\mathop {*}} \cdots \boldsymbol {\mathop {*}} \underline{\mathbf {real}}^{n_k} \end{align*} \) to represent the forward and reverse mode derivatives of the primitive operations \( \mathsf {op}\in \mathsf {Op}_{n_1,\ldots ,n_k}^m \). Using these, we define \( \begin{align*} &\overrightarrow {\mathcal {D}}_{}(\mathbf {real}^n)_1 &&\stackrel{\mathrm{def}}{=}&& \mathbf {real}^n\\ &\overrightarrow {\mathcal {D}}_{}(\mathbf {real}^n)_2 && \stackrel{\mathrm{def}}{=}&& \underline{\mathbf {real}}^n\\ \\ &\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathsf {op}({ t}_1,\ldots ,{ t}_k)) && \stackrel{\mathrm{def}}{=}&& \mathbf {let}\,\langle { x}_1, { x}_1^{\prime }\rangle =\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}_1)\,\mathbf {in}\,\cdots \mathbf {let}\,\langle { x}_k, { x}_k^{\prime }\rangle =\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}_k)\,\mathbf {in}\,\\ && &&&\langle \mathsf {op}({ x}_1,\ldots ,{ x}_k), \underline{\lambda } \mathsf {v}.\,D \mathsf {op}({ x}_1,\ldots ,{ x}_k;\langle { x}_1^{\prime }{\bullet } \mathsf {v}, \ldots , { x}_k^{\prime }{\bullet } \mathsf {v}\rangle)\rangle \\ &\overleftarrow {\mathcal {D}}_{}(\mathbf {real}^n)_1 && \stackrel{\mathrm{def}}{=}&& \mathbf {real}^n\\ &\overleftarrow {\mathcal {D}}_{}(\mathbf {real}^n)_2 && \stackrel{\mathrm{def}}{=}&& \underline{\mathbf {real}}^n\\ \\ &\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathsf {op}({ t}_1,\ldots ,{ t}_k)) && \stackrel{\mathrm{def}}{=}&& \mathbf {let}\,\langle { x}_1, { x}_1^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}_1)\,\mathbf {in}\,\cdots \mathbf {let}\,\langle { x}_k, { x}_k^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}_k)\,\mathbf {in}\,\\ &&&&&\langle \mathsf {op}({ x}_1,\ldots ,{ x}_k), \underline{\lambda } \mathsf {v}.\,\mathbf {let}\,\mathsf {v}={D \mathsf {op}}^{t}({ x}_1,\ldots ,{ x}_k;\mathsf {v})\,\mathbf {in}\,{ x}_1^{\prime }{\bullet } (\mathbf {proj}_{1}\,{\mathsf {v}})+\cdots +{ x}_k^{\prime }{\bullet } (\mathbf {proj}_{k}\,{\mathsf {v}})\rangle . \end{align*} \) These basic definitions of CHAD for primitive operations implement the well-known multivariate chain rules for (transposed) derivatives of Section 2.2.

For the AD transformations to be correct, it is important that these derivatives of language primitives are implemented correctly in the sense that \( \begin{equation*} [\![ { x}_1,\ldots ,{ x}_k;\mathsf {v}\vdash D \mathsf {op}({ x}_1,\ldots ,{ x}_k;\mathsf {v})]\!] =D[\![ \mathsf {op}]\!] \qquad [\![ { x}_1,\ldots ,{ x}_k;\mathsf {v}\vdash {D \mathsf {op}}^{t}({ x}_1,\ldots ,{ x}_k;\mathsf {v})]\!] ={D[\![ \mathsf {op}]\!] }^{t}. \end{equation*} \) For example, for elementwise multiplication \( (*)\in \mathsf {Op}_{n,n}^n \), which we interpret as the usual elementwise product \( [\![ (*)]\!] \stackrel{\mathrm{def}}{=}(*): \mathbb {R}^n\times \mathbb {R}^n\rightarrow \mathbb {R}^n \), we need, by the product rule for differentiation, that \( \begin{align*} &[\![ D(*)({ x}_1,{ x}_2;\mathsf {v})]\!] ((a_1, a_2), (b_1, b_2))&&=&a_1 * b_2 + a_2 * b_1\\ &[\![ {D(*)}^{t}({ x}_1,{ x}_2;\mathsf {v})]\!] ((a_1, a_2),b)&&=&(a_2 * b, a_1 * b). \end{align*} \) By Proposition 3.1, the extension of the AD transformations \( \overrightarrow {\mathcal {D}}_{} \) and \( \overleftarrow {\mathcal {D}}_{} \) to the full source language is now canonically determined, as the unique Cartesian closed functors that extend these basic definitions.
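As an illustration, the following Python sketch (ours; the paper instead works in the target language) implements the two derivative terms for elementwise multiplication on lists standing in for \( \mathbb {R}^n \). They are related by the transposition law \( \langle b, D\mathsf {op}(a;v)\rangle = \langle {D\mathsf {op}}^{t}(a;b), v\rangle \), which makes a good sanity check.

```python
# Forward derivative of elementwise (*): given primals (a1, a2) and
# input tangents (b1, b2), return a1 * b2 + a2 * b1 (the product rule,
# applied componentwise).
def d_mul(a1, a2, b1, b2):
    return [x * db + y * da for x, y, da, db in zip(a1, a2, b1, b2)]

# Transposed (reverse) derivative: given primals (a1, a2) and an output
# cotangent b, return the pair of input cotangents (a2 * b, a1 * b).
def d_mul_t(a1, a2, b):
    return ([y * v for y, v in zip(a2, b)],
            [x * v for x, v in zip(a1, b)])
```

For example, `d_mul([1, 2], [3, 4], [5, 6], [7, 8])` yields `[22, 40]`, and `d_mul_t([1, 2], [3, 4], [10, 10])` yields `([30, 40], [10, 20])`.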

7.4 The Implied Forward Mode CHAD Definitions

We define the types of (forward mode) primals \( \overrightarrow {\mathcal {D}}_{}({ \tau })_1 \) and tangents \( \overrightarrow {\mathcal {D}}_{}({ \tau })_2 \) associated with a type τ as follows: \( \begin{align*} &\overrightarrow {\mathcal {D}}_{}(\mathbf {1})_1 &&\stackrel{\mathrm{def}}{=}&& \mathbf {1}&& \overrightarrow {\mathcal {D}}_{}(\mathbf {1})_2 &&\stackrel{\mathrm{def}}{=}&& \underline{\mathbf {1}}\\ &\overrightarrow {\mathcal {D}}_{}({ \tau }\boldsymbol {\mathop {*}}{ \sigma })_1 &&\stackrel{\mathrm{def}}{=}&& \overrightarrow {\mathcal {D}}_{}({ \tau })_1\boldsymbol {\mathop {*}}\overrightarrow {\mathcal {D}}_{}({ \sigma })_1&& \overrightarrow {\mathcal {D}}_{}({ \tau }\boldsymbol {\mathop {*}}{ \sigma })_2 &&\stackrel{\mathrm{def}}{=}&& \overrightarrow {\mathcal {D}}_{}({ \tau })_2\boldsymbol {\mathop {*}}\overrightarrow {\mathcal {D}}_{}({ \sigma })_2\\ &\overrightarrow {\mathcal {D}}_{}({ \tau }\rightarrow { \sigma })_1 &&\stackrel{\mathrm{def}}{=}&& \overrightarrow {\mathcal {D}}_{}({ \tau })_1\rightarrow (\overrightarrow {\mathcal {D}}_{}({ \sigma })_1\boldsymbol {\mathop {*}} (\overrightarrow {\mathcal {D}}_{}({ \tau })_2\multimap \overrightarrow {\mathcal {D}}_{}({ \sigma })_2))\qquad &&\overrightarrow {\mathcal {D}}_{}({ \tau }\rightarrow { \sigma })_2 &&\stackrel{\mathrm{def}}{=}&& \overrightarrow {\mathcal {D}}_{}({ \tau })_1\rightarrow \overrightarrow {\mathcal {D}}_{}({ \sigma })_2. \end{align*} \) Observe that the type of primals associated with a function type is not equal to the original type. This is a consequence of the non-fibered nature of the exponentials in the Σ-type of categories \( \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}} \) (Section 6).
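For intuition, here is a hedged Python rendering (ours, with scalar tangents for simplicity) of what the forward primal of a function value looks like: rather than the function itself, it returns at each argument both the result primal and a linear map on tangents, matching \( \overrightarrow {\mathcal {D}}_{}({ \tau }\rightarrow { \sigma })_1 \).

```python
# Source function: f = \x. x * x, of type real -> real.
# Its forward-mode primal has type
#   D(real)_1 -> (D(real)_1 * (D(real)_2 -o D(real)_2)),
# i.e., it pairs the result with a linear map on tangents.
f_primal = lambda x: (x * x, lambda dx: 2.0 * x * dx)

value, pushforward = f_primal(3.0)
# value is 9.0 and pushforward(1.0) is 6.0: the primal of f carries
# both a copy of f's behaviour and a copy of its derivative.
```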

For programs t, we define their efficient CHAD transformation \( \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}) \) as follows: \( \begin{align*} &\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ x}) &&\stackrel{\mathrm{def}}{=}&& \langle { x}, \underline{\lambda } \mathsf {v}.\,\mathbf {proj}_{\mathbf {idx}({ x}; \overline{\Gamma })\,}\,(\mathsf {v})\rangle \\ &\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {let}\,{ x}={ t}\,\mathbf {in}\,{ s}) &&\stackrel{\mathrm{def}}{=}&&\mathbf {let}\,\langle { x}, { x}^{\prime }\rangle =\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\,\mathbf {in}\, \mathbf {let}\,\langle { y}, { y}^{\prime }\rangle =\overrightarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ s})\,\mathbf {in}\,\langle { y}, \underline{\lambda } \mathsf {v}.\,{ y}^{\prime }{\bullet } \langle \mathsf {v}, { x}^{\prime }{\bullet } \mathsf {v}\rangle \rangle \\ &\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}(\langle \rangle) &&\stackrel{\mathrm{def}}{=}&&\langle \langle \rangle , \underline{\lambda } \mathsf {v}.\,\langle \rangle \rangle \\ &\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}(\langle { t}, { s}\rangle) &&\stackrel{\mathrm{def}}{=}&& \mathbf {let}\,\langle { x}, { x}^{\prime }\rangle =\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\,\mathbf {in}\, \mathbf {let}\,\langle { y}, { y}^{\prime }\rangle =\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})\,\mathbf {in}\,\langle \langle { x}, { y}\rangle , \underline{\lambda } \mathsf {v}.\,\langle { x}^{\prime }{\bullet } \mathsf {v}, { y}^{\prime }{\bullet } \mathsf {v}\rangle \rangle \\ & \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {fst}\,{ t}) &&\stackrel{\mathrm{def}}{=}&& \mathbf {let}\,\langle { x}, { x}^{\prime }\rangle =\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\,\mathbf {in}\,\langle \mathbf {fst}\,{ x}, \underline{\lambda } \mathsf {v}.\,\mathbf {fst}\,({ x}^{\prime }{\bullet } \mathsf {v})\rangle \\ & 
\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {snd}\,{ t}) &&\stackrel{\mathrm{def}}{=}&& \mathbf {let}\,\langle { x}, { x}^{\prime }\rangle =\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\,\mathbf {in}\,\langle \mathbf {snd}\,{ x}, \underline{\lambda } \mathsf {v}.\,\mathbf {snd}\,({ x}^{\prime }{\bullet } \mathsf {v})\rangle \\ & \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}(\lambda { x}.\,{ t})&&\stackrel{\mathrm{def}}{=}&& \mathbf {let}\,{ y}=\lambda { x}.\,\overrightarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})\,\mathbf {in}\,\langle \lambda { x}.\,\mathbf {let}\,\langle { z}, { z}^{\prime }\rangle ={ y}\,{ x}\,\mathbf {in}\,\langle { z}, \underline{\lambda } \mathsf {v}.\,{ z}^{\prime }{\bullet } \langle \underline{0}, \mathsf {v}\rangle \rangle , \underline{\lambda } \mathsf {v}.\,\lambda { x}.\,(\mathbf {snd}\,({ y}\,{ x})){\bullet } \langle \mathsf {v}, \underline{0}\rangle \rangle \\ & \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}\,{ s}) &&\stackrel{\mathrm{def}}{=}&& \mathbf {let}\,\langle { x}, { x}^{\prime }_{\text{ctx}}\rangle =\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\,\mathbf {in}\, \mathbf {let}\,\langle { y}, { y}^{\prime }\rangle =\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})\,\mathbf {in}\, \mathbf {let}\,\langle { z}, { x}^{\prime }_{\text{arg}}\rangle ={ x}\,{ y}\,\mathbf {in}\, \\ & && && \langle { z}, \underline{\lambda } \mathsf {v}.\,({ x}^{\prime }_{\text{ctx}}{\bullet } \mathsf {v})\,{ y}+ { x}^{\prime }_{\text{arg}}{\bullet } ({ y}^{\prime }{\bullet } \mathsf {v})\rangle . \end{align*} \) We explain and justify these transformations in the next subsection after discussing the transformations for reverse CHAD.
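As a sanity check, one can transcribe these rules into Python for the two-variable program \( { x}_1: \mathbf {real}, { x}_2: \mathbf {real}\vdash { x}_1 * { x}_2 \) (a sketch of ours, representing the context tangent type \( \overrightarrow {\mathcal {D}}_{}(\Gamma)_2 \) as a pair of floats):

```python
# Forward CHAD of  x1 : real, x2 : real |- x1 * x2 : real.
# The variable rule gives D(x_i) = <x_i, \v. proj_i v>, and the rule for
# op(t1, t2) plugs the projections into the forward derivative
# Dop(x1, x2; <x1'.v, x2'.v>) = x1 * (proj_2 v) + x2 * (proj_1 v).
def fwd_mul_prog(x1, x2):
    t1 = lambda v: v[0]   # tangent map of the variable x1 (proj_1)
    t2 = lambda v: v[1]   # tangent map of the variable x2 (proj_2)
    primal = x1 * x2
    tangent = lambda v: x1 * t2(v) + x2 * t1(v)
    return primal, tangent

p, t = fwd_mul_prog(3.0, 4.0)
# p is 12.0; t((1.0, 0.0)) is 4.0, the partial derivative in x1.
```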

7.5 The Implied Reverse Mode CHAD Definitions

We define the types of (reverse mode) primals \( \overleftarrow {\mathcal {D}}_{}({ \tau })_1 \) and cotangents \( \overleftarrow {\mathcal {D}}_{}({ \tau })_2 \) associated with a type τ as follows: \( \begin{align*} &\overleftarrow {\mathcal {D}}_{}(\mathbf {1})_1 &&\stackrel{\mathrm{def}}{=}&& \mathbf {1}&& \overleftarrow {\mathcal {D}}_{}(\mathbf {1})_2 &&\stackrel{\mathrm{def}}{=}&& \underline{\mathbf {1}}\\ &\overleftarrow {\mathcal {D}}_{}({ \tau }\boldsymbol {\mathop {*}}{ \sigma })_1 &&\stackrel{\mathrm{def}}{=}&& \overleftarrow {\mathcal {D}}_{}({ \tau })_1\boldsymbol {\mathop {*}}\overleftarrow {\mathcal {D}}_{}({ \sigma })_1 && \overleftarrow {\mathcal {D}}_{}({ \tau }\boldsymbol {\mathop {*}}{ \sigma })_2 &&\stackrel{\mathrm{def}}{=}&& \overleftarrow {\mathcal {D}}_{}({ \tau })_2\boldsymbol {\mathop {*}}\overleftarrow {\mathcal {D}}_{}({ \sigma })_2\\ &\overleftarrow {\mathcal {D}}_{}({ \tau }\rightarrow { \sigma })_1 &&\stackrel{\mathrm{def}}{=}&& \overleftarrow {\mathcal {D}}_{}({ \tau })_1\rightarrow (\overleftarrow {\mathcal {D}}_{}({ \sigma })_1\boldsymbol {\mathop {*}}(\overleftarrow {\mathcal {D}}_{}({ \sigma })_2\multimap \overleftarrow {\mathcal {D}}_{}({ \tau })_2))\qquad &&\overleftarrow {\mathcal {D}}_{}({ \tau }\rightarrow { \sigma })_2 &&\stackrel{\mathrm{def}}{=}&& {!}\overleftarrow {\mathcal {D}}_{}({ \tau })_1\otimes _{}\overleftarrow {\mathcal {D}}_{}({ \sigma })_2. \end{align*} \) Again, we associate a non-trivial type of primals to function types, as exponentials are not fibered in \( \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}}^{op} \) (Section 6).
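To give some operational intuition for the cotangent type \( {!}\overleftarrow {\mathcal {D}}_{}({ \tau })_1\otimes _{}\overleftarrow {\mathcal {D}}_{}({ \sigma })_2 \) of function types, one possible concrete representation (an assumption of this sketch, not a definition from this article) is a list of primal-cotangent pairs: the empty list and concatenation stand in for the commutative monoid structure, up to the reordering that the intended quotient identifies.

```python
# !a (x) b sketched as a list of (a, b) pairs: zero is the empty list,
# addition is concatenation, and !x (x) v is a singleton.
zero = []

def plus(u, w):
    return u + w

def tensor(x, v):      # !x (x) v
    return [(x, v)]

# Eliminating with  case v of !x (x) v -> body  extends linearly to sums:
# apply the body to each pair and add up the results in the target monoid.
def case_bang_tensor(v, body, add, zero_out):
    out = zero_out
    for (x, w) in v:
        out = add(out, body(x, w))
    return out
```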

For programs t, we define their efficient CHAD transformation \( \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}) \) as follows: \( \begin{align*} &\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ x}) &&\stackrel{\mathrm{def}}{=}&& \langle { x}, \underline{\lambda } \mathsf {v}.\, \mathbf {coproj}_{\mathbf {idx}({ x}; \overline{\Gamma })\,}\,(\mathsf {v})\rangle \\ & \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {let}\,{ x}={ t}\,\mathbf {in}\,{ s}) && \stackrel{\mathrm{def}}{=}&& \mathbf {let}\,\langle { x}, { x}^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\,\mathbf {in}\, \mathbf {let}\,\langle { y}, { y}^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ s})\,\mathbf {in}\, \langle { y}, \underline{\lambda } \mathsf {v}.\,\mathbf {let}\,\mathsf {v}={ y}^{\prime }{\bullet } \mathsf {v}\,\mathbf {in}\, \mathbf {fst}\,\mathsf {v}+{ x}^{\prime }{\bullet } (\mathbf {snd}\,\mathsf {v}) \rangle \\ & \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\langle \rangle) &&\stackrel{\mathrm{def}}{=}&& \langle \langle \rangle , \underline{\lambda } \mathsf {v}.\,\underline{0}\rangle \\ & \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\langle { t}, { s}\rangle) &&\stackrel{\mathrm{def}}{=}&& \mathbf {let}\,\langle { x}, { x}^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\,\mathbf {in}\, \mathbf {let}\,\langle { y}, { y}^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})\,\mathbf {in}\, \langle \langle { x}, { y}\rangle , \underline{\lambda } \mathsf {v}.\,{ x}^{\prime }{\bullet } (\mathbf {fst}\,\mathsf {v}) + { y}^{\prime }{\bullet } (\mathbf {snd}\,\mathsf {v})\rangle \\ & \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {fst}\,{ t}) &&\stackrel{\mathrm{def}}{=}&& \mathbf {let}\,\langle { x}, { x}^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\,\mathbf {in}\,\langle \mathbf {fst}\,{ x},
\underline{\lambda } \mathsf {v}.\,{ x}^{\prime }{\bullet } \langle \mathsf {v}, \underline{0}\rangle \rangle \\ & \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {snd}\,{ t}) &&\stackrel{\mathrm{def}}{=}&& \mathbf {let}\,\langle { x}, { x}^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\,\mathbf {in}\,\langle \mathbf {snd}\,{ x}, \underline{\lambda } \mathsf {v}.\,{ x}^{\prime }{\bullet } \langle \underline{0}, \mathsf {v}\rangle \rangle \\ & \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\lambda { x}.\,{ t}) &&\stackrel{\mathrm{def}}{=}&& \mathbf {let}\,{ y}=\lambda { x}.\,\overleftarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})\,\mathbf {in}\,\\ &&&&& \langle \lambda { x}.\,\mathbf {let}\,\langle { z}, { z}^{\prime }\rangle ={ y}\,{ x}\,\mathbf {in}\, \langle { z}, \underline{\lambda } \mathsf {v}.\,\mathbf {snd}\,({ z}^{\prime }{\bullet } \mathsf {v})\rangle , \underline{\lambda } \mathsf {v}.\,\mathbf {case}\,\mathsf {v}\,\mathbf {of}\,{!}{ x}\otimes _{}\mathsf {v}\rightarrow \mathbf {fst}\,((\mathbf {snd}\,({ y}\,{ x})){\bullet } \mathsf {v}) \rangle \\ & \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}\,{ s}) &&\stackrel{\mathrm{def}}{=}&& \mathbf {let}\,\langle { x}, { x}^{\prime }_{\text{ctx}}\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\,\mathbf {in}\, \mathbf {let}\,\langle { y}, { y}^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})\,\mathbf {in}\, \mathbf {let}\,\langle { z}, { x}^{\prime }_{\text{arg}}\rangle ={ x}\,{ y}\,\mathbf {in}\, \\ & && && \langle { z}, \underline{\lambda } \mathsf {v}.\,{ x}^{\prime }_{\text{ctx}}{\bullet } (!{ y}\otimes \mathsf {v}) + { y}^{\prime }{\bullet } ({ x}^{\prime }_{\text{arg}}{\bullet } \mathsf {v})\rangle . \end{align*} \) We now explain and justify the forward and reverse CHAD transformations. 
The transformations for variables, tuples and projections implement the well-known multivariate calculus facts about (transposed) derivatives of differentiable functions into and out of products of spaces. The transformations for let-bindings add to that the chain rules for \( \mathcal {T}_{}\ \) and \( \mathcal {T}^*_{} \) of Section 2.2. The transformations for λ-abstractions split the derivative of a closure λx. t into the derivative z′ with respect to the function argument x and the derivative snd (yx) with respect to the captured context variables; they store z′ together with the primal computation z of λx. t in the primal associated with the closure and they store snd (yx) in the (co)tangent associated with the closure. Conversely, the transformations for evaluation extract those two components of the (transposed) derivative \( { x}^{\prime }_{\text{ctx}} \) (w.r.t. context variables) and \( { x}^{\prime }_{\text{arg}} \) (w.r.t. function argument) from the (co)tangent and primal, respectively, and recombine them to correctly propagate (co)tangent contributions from both sources.
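The reverse-mode rules for tuples and projections can be mimicked executably with value/backpropagator pairs. The following Python sketch is illustrative only: the combinator names and the representation of cotangents as floats and nested pairs are our own choices, and `mul` stands in for a primitive operation together with its transposed derivative.

```python
# Value/backpropagator pairs in the style of the reverse-mode rules: a
# transformed program maps an input to (primal value, linear map taking
# a cotangent of the result back to a cotangent of the input).

def var():                   # the identity program x |-> x
    return lambda x: (x, lambda v: v)

def pair(t, s):              # <t, s>: cotangent contributions are summed
    def run(x):
        a, bp_t = t(x)
        b, bp_s = s(x)
        return (a, b), lambda v: add(bp_t(v[0]), bp_s(v[1]))
    return run

def fst(t):                  # fst t: pad the cotangent with a zero
    def run(x):
        (a, b), bp = t(x)
        return a, lambda v: bp((v, zero(b)))
    return run

def snd(t):                  # snd t: symmetric to fst
    def run(x):
        (a, b), bp = t(x)
        return b, lambda v: bp((zero(a), v))
    return run

def mul(t, s):               # a primitive op with its transposed derivative
    def run(x):
        a, bp_t = t(x)
        b, bp_s = s(x)
        return a * b, lambda v: add(bp_t(v * b), bp_s(v * a))
    return run

def zero(shape):             # zero cotangent, shaped like a primal value
    return (zero(shape[0]), zero(shape[1])) if isinstance(shape, tuple) else 0.0

def add(u, w):               # the commutative-monoid addition on cotangents
    return (add(u[0], w[0]), add(u[1], w[1])) if isinstance(u, tuple) else u + w

# gradient of (x, y) |-> x * y at (3, 4) is (4, 3)
p = mul(fst(var()), snd(var()))
y, bp = p((3.0, 4.0))
assert (y, bp(1.0)) == (12.0, (4.0, 3.0))
```

Note how `fst` and `snd` pad with `zero` and `pair` uses `add`, mirroring the roles of \( \underline{0} \) and + in the rules above.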

7.6 Sharing of Common Subexpressions

Through careful use of let-bindings, we have taken care to ensure that the CHAD code transformations we specified have the following good property: for every program former C[t1, …, tn] that takes n subprograms t1, …, tn (for example, function application t s takes the two subprograms t and s), we have that \( \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}(C[{ t}_1,\ldots ,{ t}_n]) \) uses \( \overrightarrow {\mathcal {D}}_{\overline{\Gamma }_i}({ t}_i) \) exactly once in its definition for each subprogram ti, for some list of identifiers \( \overline{\Gamma }_i \). Similarly, \( \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(C[{ t}_1,\ldots ,{ t}_n]) \) uses \( \overleftarrow {\mathcal {D}}_{\overline{\Gamma }_i}({ t}_i) \) exactly once in its definition for each subprogram ti, which demonstrates the following.

Corollary 7.2 (No Code Explosion).

The code sizes of the forward and reverse CHAD transformed programs \( \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}) \) and \( \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}) \) both grow linearly in the size of the original source program t.

This compile-time complexity property is crucial if we are to keep compilation times and executable sizes manageable when performing AD on large code-bases.
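The linear bound can be observed directly on a toy fragment of the transformation: each rule below is a fixed-size template that uses the transformed subterm exactly once (bound by a hypothetical `letpair` form). This is an illustrative Python sketch with constructor tags of our own, not the full definition from Section 7.

```python
# A CHAD-style source-to-source map on a toy AST. Because every rule
# mentions D(subterm) exactly once inside a constant-size template, the
# transformed program's size is linear in the source program's size.

def size(t):
    return 1 + sum(size(c) for c in t[1:] if isinstance(c, tuple))

def D(t):
    tag = t[0]
    if tag == "var":
        return ("pair", ("var",), ("lid",))
    if tag == "fst":
        # let <x, x'> = D(t1) in <fst x, \v. fst (x' . v)>
        return ("letpair", D(t[1]),
                ("pair", ("fst", ("var",)), ("lcomp", ("var",), ("lfst",))))
    if tag == "snd":
        return ("letpair", D(t[1]),
                ("pair", ("snd", ("var",)), ("lcomp", ("var",), ("lsnd",))))
    if tag == "pair":
        return ("letpair", D(t[1]), ("letpair", D(t[2]),
                ("pair", ("pair", ("var",), ("var",)),
                 ("lpair", ("var",), ("var",)))))
    raise ValueError(tag)

# each template adds at most 9 nodes, so size(D(t)) <= 9 * size(t)
t = ("var",)
for _ in range(10):
    t = ("pair", ("fst", t), t)
assert size(D(t)) <= 9 * size(t)
```

The constant 9 is specific to this toy fragment; the point is only that it is a constant, independent of the source program.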

Of course, our use of let-bindings has the additional benefit at runtime that repeated subcomputations are performed only once and their stored results are shared, rather than being recomputed every time their results are needed. We have taken care to avoid any unnecessary computation in this way, which we hope will benefit the performance of CHAD in practice. However, we leave a proper complexity and practical performance analysis to future work.


8 PROVING REVERSE AND FORWARD AD SEMANTICALLY CORRECT

In this section, we show that the CHAD code transformations described in Section 7 correctly compute mathematical derivatives (Theorem 8.3). The proof mainly consists of an (open) logical relations argument over the semantics in the Cartesian closed categories Set × ΣSetCMon and Set × ΣSetCMonop. The intuition behind the proof is as follows:

  • the logical relations relate differentiable functions \( \mathbb {R}^d\rightarrow [\![ { \tau }]\!] \) to associated primal and (co)tangent functions;

  • the semantics \( [\![ { t}]\!] \times [\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})]\!] \) and \( [\![ { t}]\!] \times [\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})]\!] \) of forward and reverse mode CHAD respect the logical relations;

  • therefore, by basic results in calculus, they must equal the derivative and transposed derivative of \( [\![ { t}]\!] \).

This logical relations proof can be phrased in elementary terms, but the resulting argument is technical and would be hard to discover. Instead, we prefer to phrase it in terms of a categorical subsconing construction, a more abstract and elegant perspective on logical relations. We discovered the proof by taking this categorical perspective, and, while we have verified the elementary argument (see Section 2.6), we would not otherwise have come up with it.

8.1 Preliminaries

8.1.1 Subsconing.

Logical relations arguments provide a powerful proof technique for demonstrating properties of typed programs. The arguments proceed by induction on the structure of types. Here, we briefly review the basics of categorical logical relations arguments, or subsconing constructions. We restrict to the level of generality that we need here, but we would like to point out that the theory applies much more generally.

Consider a Cartesian closed category \( (\mathcal {C},\mathbb {1},\times ,\Rightarrow) \). Suppose that we are given a functor \( F:\mathcal {C}\rightarrow \mathbf {Set} \) to the category Set of sets and functions that preserves finite products in the sense that \( F(\mathbb {1})\cong \mathbb {1} \) and F(C × C′)≅F(C) × F(C′). Then, we can form the subscone of F, or category of logical relations over F, which is Cartesian closed, with a faithful Cartesian closed functor π1 to \( \mathcal {C} \) that forgets about the predicates [24]:

  • objects are pairs (C, P) of an object C of \( \mathcal {C} \) and a predicate P ⊆ F(C);

  • morphisms (C, P) → (C′, P′) are \( \mathcal {C} \) morphisms f: C → C′ that respect the predicates in the sense that F(f)(P)⊆P′;

  • identities and composition are as in \( \mathcal {C} \);

  • \( (\mathbb {1}, F\mathbb {1}) \) is the terminal object, and binary products and exponentials are given by \( \begin{align*} (C,P)\times (C^{\prime },P^{\prime })&= (C\times C^{\prime }, \left\lbrace \alpha \in F(C\times C^{\prime })\mid F(\pi _1)(\alpha)\in P, F(\pi _2)(\alpha)\in P^{\prime }\right\rbrace)\\ (C,P)\Rightarrow (C^{\prime }, P^{\prime })&= (C\Rightarrow C^{\prime }, \lbrace F(\pi _1)(\gamma)\mid \gamma \in F((C\Rightarrow C^{\prime })\times C) \text { s.t. } F(\pi _2)(\gamma) \in P \text { implies } F(\mathbf {ev}^{})(\gamma)\in P^{\prime } \rbrace). \end{align*} \)

In typical applications, \( \mathcal {C} \) can be the syntactic category of a language (like Syn), the codomain of a denotational semantics \( [\![ -]\!] \) (like Set), or a product of the above, if we want to consider n-ary logical relations. Typically, F tends to be a hom-functor (which always preserves products), like \( \mathcal {C}(\mathbb {1},-) \) or \( \mathcal {C}(C_0,-) \), for some important object C0. When applied to the syntactic category Syn and F = Syn(1, −), the formulae for products and exponentials in the subscone clearly reproduce the usual recipes in traditional, syntactic logical relations arguments. As such, subsconing generalises standard logical relations methods.
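For finite sets and F the global-elements functor \( \mathcal {C}(\mathbb {1},-) \), the product and exponential predicates can be computed exhaustively. The following Python sketch does so; representing F(C) simply as C, predicates as subsets, and functions by their graphs are our own choices for the sake of illustration.

```python
from itertools import product

# The binary product and exponential of the subscone, computed
# concretely for finite sets with F the global-elements functor, so that
# F(C) is (isomorphic to) C and predicates are plain subsets.

def prod_obj(CP1, CP2):
    (C1, P1), (C2, P2) = CP1, CP2
    carrier = set(product(C1, C2))
    # alpha is in the product predicate iff both projections of alpha are
    pred = {(a, b) for (a, b) in carrier if a in P1 and b in P2}
    return carrier, pred

def exp_obj(CP1, CP2):
    (C1, P1), (C2, P2) = CP1, CP2
    elems = sorted(C1)
    # represent functions C1 -> C2 by their graphs (sets of pairs)
    funcs = {frozenset(zip(elems, vals))
             for vals in product(sorted(C2), repeat=len(elems))}
    # f is in the exponential predicate iff it maps P1 into P2
    pred = {f for f in funcs if all(b in P2 for (a, b) in f if a in P1)}
    return funcs, pred

A = ({0, 1}, {0})   # predicate selects 0
B = ({0, 1}, {1})   # predicate selects 1
_, p = prod_obj(A, B)
assert p == {(0, 1)}
fs, fp = exp_obj(A, B)
assert len(fs) == 4 and len(fp) == 2   # exactly the f with f(0) = 1
```

The exponential predicate computed here is the familiar logical-relations clause: a function is related iff it sends related arguments to related results.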

8.2 Subsconing for Correctness of AD

We apply the subsconing construction above to \( \begin{equation*} \begin{array}{lll} \mathcal {C}=\mathbf {Set}\times \Sigma _{\mathbf {Set}}\mathbf {CMon}& F=\mathbf {Set}\times \Sigma _{\mathbf {Set}}\mathbf {CMon}((\mathbb {R}^d,(\mathbb {R}^d,\underline{\mathbb {R}}^d)),-)&\hspace{-3.0pt}\text {(forward AD)}\\ \mathcal {C}=\mathbf {Set}\times \Sigma _{\mathbf {Set}}\mathbf {CMon}^{op}\qquad \qquad & F=\mathbf {Set}\times \Sigma _{\mathbf {Set}}\mathbf {CMon}^{op}((\mathbb {R}^d,(\mathbb {R}^d,\underline{\mathbb {R}}^d)),-)\qquad \qquad &\text {(reverse AD)}, \end{array} \end{equation*} \) where we note that Set, ΣSetCMon, and ΣSetCMonop are Cartesian closed (given the arguments of Sections 5 and 6) and that the product of Cartesian closed categories is again Cartesian closed. Let us write \( \overrightarrow {\mathbf {SScone}}_{} \) and \( \overleftarrow {\mathbf {SScone}}_{} \), respectively, for the resulting categories of logical relations.

Seeing that \( \overrightarrow {\mathbf {SScone}}_{} \) and \( \overleftarrow {\mathbf {SScone}}_{} \) are Cartesian closed, we obtain unique Cartesian closed functors \( (\!| -|\!) ^f:\mathbf {Syn}\rightarrow \overrightarrow {\mathbf {SScone}}_{} \) and \( (\!| -|\!) ^r:\mathbf {Syn}\rightarrow \overleftarrow {\mathbf {SScone}}_{} \), by the universal property of Syn (Section 3), once we fix an interpretation of realn and all operations \( \mathsf {op} \). We write \( P_{{ \tau }}^f \) and \( P_{{ \tau }}^r \), respectively, for the relations \( \pi _2(\!| { \tau }|\!) ^f \) and \( \pi _2(\!| { \tau }|\!) ^r \). Let us interpret \( \begin{align*} &(\!| \mathbf {real}^n|\!) ^f\stackrel{\mathrm{def}}{=}(((\mathbb {R}^n,(\mathbb {R}^n,\underline{\mathbb {R}}^n)), \left\lbrace (f,(g,h))\mid f\text { is {differentiable}{}}, f=g\text { and } h=Df \right\rbrace))\\ &(\!| \mathbf {real}^n|\!) ^r\stackrel{\mathrm{def}}{=}(((\mathbb {R}^n,(\mathbb {R}^n,\underline{\mathbb {R}}^n)),\lbrace (f,(g,h))\mid f\text { is {differentiable}{}}, f=g\text { and } h={Df}^{t} \rbrace))\\ &(\!| \mathsf {op}|\!) ^f\stackrel{\mathrm{def}}{=}([\![ \mathsf {op}]\!] , ([\![ \mathsf {op}]\!] , [\![ D \mathsf {op}]\!]))\qquad (\!| \mathsf {op}|\!) ^r\stackrel{\mathrm{def}}{=}([\![ \mathsf {op}]\!] , ([\![ \mathsf {op}]\!] , [\![ {D \mathsf {op}}^{t}]\!])), \end{align*} \) where we write Df for the semantic derivative of f and (−)t for the matrix transpose (see Section 5).

Lemma 8.1.

These definitions extend uniquely to define Cartesian closed functors \( \begin{equation*} (\!| -|\!) ^f:\mathbf {Syn}\rightarrow \overrightarrow {\mathbf {SScone}}_{}\qquad \text { and }\qquad (\!| -|\!) ^r:\mathbf {Syn}\rightarrow \overleftarrow {\mathbf {SScone}}_{}. \end{equation*} \)

Proof.

This follows from the universal property of Syn (Proposition 3.1) once we verify that \( ([\![ \mathsf {op}]\!] , ([\![ \mathsf {op}]\!] , [\![ D \mathsf {op}]\!])) \) and \( ([\![ \mathsf {op}]\!] , ([\![ \mathsf {op}]\!] , [\![ {D \mathsf {op}}^{t}]\!])) \) respect the logical relations Pf and Pr, respectively. This respecting of relations follows immediately from the chain rule for multivariate differentiation, as long as we have implemented our derivatives correctly for the basic operations \( \mathsf {op} \), in the sense that \( \begin{align*} &[\![ { x};{ y}\vdash D \mathsf {op}({ x};{ y})]\!] =D[\![ \mathsf {op}]\!] \;\;\quad \qquad \text {and}\quad \qquad \;\; [\![ { x};{ y}\vdash {D \mathsf {op}}^{t}({ x};{ y})]\!] ={D[\![ \mathsf {op}]\!] }^{t}. \end{align*} \)

Writing \( \mathbf {real}^{n_1,\ldots ,n_k}\!\stackrel{\mathrm{def}}{=}\! \mathbf {real}^{n_1}\boldsymbol {\mathop {*}}\cdots \boldsymbol {\mathop {*}}\mathbf {real}^{n_k} \) and \( \mathbb {R}^{n_1,\ldots ,n_k}\!\stackrel{\mathrm{def}}{=}\! \mathbb {R}^{n_1}\times \cdots \times \mathbb {R}^{n_k} \), we compute \( \begin{align*} &(\!| \mathbf {real}^{n_1,\ldots ,n_k}|\!) ^f\!=\! ((\mathbb {R}^{n_1,\ldots ,n_k},(\mathbb {R}^{n_1,\ldots ,n_k},\underline{\mathbb {R}}^{n_1,\ldots ,n_k})), \left\lbrace (f,(g,h))\mid f\text { is {differentiable}{}}, f=g, h=Df \right\rbrace)\\ &(\!| \mathbf {real}^{n_1,\ldots ,n_k}|\!) ^r\!=\! ((\mathbb {R}^{n_1,\ldots ,n_k},(\mathbb {R}^{n_1,\ldots ,n_k},\underline{\mathbb {R}}^{n_1,\ldots ,n_k})), \lbrace (f,(g,h))\mid f\text { is {differentiable}{}}, f=g, h={Df}^{t}\rbrace), \end{align*} \) since derivatives of tuple-valued functions are computed component-wise. (In fact, the corresponding facts hold more generally for any first-order type, as an iterated product of realn.) Suppose that \( (f,(g,h))\in P^f_{\mathbf {real}^{n_1,\ldots ,n_k}} \), i.e., f is differentiable, g = f and h = Df. Then, using the chain rule in the last step, we have \( \begin{align*} & (f,(g,h));([\![ \mathsf {op}]\!] ,([\![ \mathsf {op}]\!] ,[\![ D \mathsf {op}]\!])) \\ & = (f,(f,Df));([\![ \mathsf {op}]\!] ,([\![ { \mathsf {op}}]\!] ,[\![ { x};{ y}\vdash D \mathsf {op}(x;y)]\!]))\\ & = (f,(f,Df));([\![ \mathsf {op}]\!] ,([\![ \mathsf {op}]\!] ,D[\![ \mathsf {op}]\!])) \\ & = (f;[\![ \mathsf {op}]\!] ,(f;[\![ \mathsf {op}]\!] , x\mapsto r\mapsto D[\![ \mathsf {op}]\!] (f(x))(Df(x)(r))))\\ & = (f;[\![ \mathsf {op}]\!] ,(f;[\![ \mathsf {op}]\!] , D(f;[\![ \mathsf {op}]\!])))\in P_{\mathbf {real}^m}^f. \end{align*} \) Similarly, if \( (f, (g,h))\in P^r_{\mathbf {real}^{n_1,\ldots ,n_k}} \), then by the chain rule and linear algebra \( \begin{align*} & (f,(g,h));([\![ \mathsf {op}]\!] ,([\![ \mathsf {op}]\!] 
,[\![ {D \mathsf {op}}^{t}]\!])) \\ & = (f,(f,{Df}^{t}));([\![ \mathsf {op}]\!] ,([\![ { \mathsf {op}}]\!] ,[\![ { x};{ y}\vdash {D \mathsf {op}}^{t}(x;y)]\!])) \\ & = (f,(f,{Df}^{t}));([\![ \mathsf {op}]\!] ,([\![ \mathsf {op}]\!] ,{D[\![ \mathsf {op}]\!] }^{t})) \\ & = (f;[\![ \mathsf {op}]\!] ,(f;[\![ \mathsf {op}]\!] , x\mapsto v\mapsto {Df}^{t}(x)({D[\![ \mathsf {op}]\!] }^{t}(f(x))(v)))) \\ & = (f;[\![ \mathsf {op}]\!] ,(f;[\![ \mathsf {op}]\!] , x\mapsto v\mapsto {Df(x);D[\![ \mathsf {op}]\!] (f(x))}^{t} (v))) \\ & = (f;[\![ \mathsf {op}]\!] ,(f;[\![ \mathsf {op}]\!] , {D(f;[\![ \mathsf {op}]\!])}^{t}))\in P_{\mathbf {real}^m}^r. \end{align*} \) Consequently, we obtain our unique Cartesian closed functors \( (\!| -|\!) ^f \) and \( (\!| -|\!) ^r \).□
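The side condition of Lemma 8.1 — that the syntactic derivative of each primitive denotes the semantic derivative — is exactly what an implementer must check when adding an operation. Here is a numerical sketch of the displayed chain-rule computation, with hypothetical f and op of our own choosing.

```python
import math

# The displayed computation composes (f, (f, Df)) with
# ([[op]], ([[op]], D[[op]])) and appeals to the chain rule in the last
# step: the derivative of the composite f ; op must satisfy
# D(f ; op)(x)(r) = D op(f(x))(Df(x)(r)).

def f(x):        return (x * x, 3.0 * x)         # f : R -> R^2
def Df(x, r):    return (2.0 * x * r, 3.0 * r)   # its hand-written derivative
def op(a, b):    return a * b                    # a hypothetical primitive op : R^2 -> R
def Dop(a, b, va, vb):  return b * va + a * vb   # D op, as the lemma requires

x, r, h = 0.5, 1.0, 1e-6
composed = lambda s: op(*f(s))                   # f ; op, i.e. s |-> 3 s^3
numeric = (composed(x + h) - composed(x - h)) / (2 * h)
chain = Dop(*f(x), *Df(x, r))
assert abs(numeric - chain) < 1e-5               # chain rule holds numerically
```

If `Dop` were implemented incorrectly, this check would fail, and so would the lemma's hypothesis.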

Furthermore, observe that \( \Sigma _{[\![ -]\!] }[\![ -]\!] ({ t}_1,{ t}_2)\stackrel{\mathrm{def}}{=}([\![ { t}_1]\!] ,[\![ { t}_2]\!]) \) defines a Cartesian closed functor \( \Sigma _{[\![ -]\!] }[\![ -]\!] :\Sigma _{{\mathbf {CSyn}}}{\mathbf {LSyn}}\rightarrow \Sigma _{\mathbf {Set}}\mathbf {CMon} \). Similarly, we get a Cartesian closed functor \( \Sigma _{[\![ -]\!] }[\![ -]\!] ^{op}:\Sigma _{{\mathbf {CSyn}}}{\mathbf {LSyn}}^{op}\rightarrow \Sigma _{\mathbf {Set}}\mathbf {CMon}^{op} \). As a consequence, both squares below commute.

Indeed, going around the squares in both directions defines Cartesian closed functors that agree on their action on the generators \( \mathbf {real}^n \) and \( \mathsf {op} \) of the Cartesian closed category Syn.

Corollary 8.2.

For any source language (Section 3) program Γt: τ , \( ([\![ { t}]\!] , ([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1]\!] ,[\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!])) \) is a morphism in \( \overrightarrow {\mathbf {SScone}}_{} \) and therefore respects the logical relations Pf. Similarly, \( ([\![ { t}]\!] , ([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1]\!] ,[\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!])) \) is a morphism in \( \overleftarrow {\mathbf {SScone}}_{} \) and therefore respects the logical relations Pr.

Most of the work is now in place to show correctness of AD. We finish the proof below. To ease notation, we work with terms in a context consisting of a single variable. Doing so is not a restriction as our language has products, and the theorem holds for arbitrary terms between first-order types.

Theorem 8.3 (Correctness of AD).

For programs Γt: σ where σ and all types τi in Γ = x1: τ1, …, xn: τn are first-order types, \( [\![ { t}]\!] \) is differentiable and \( \begin{equation*} [\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1]\!] =[\![ { t}]\!] \qquad [\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!] =D[\![ { t}]\!] \qquad [\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1]\!] =[\![ { t}]\!] \qquad [\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!] ={D[\![ { t}]\!] }^{t}, \end{equation*} \) where we write D and (−)t for the usual calculus derivative and matrix transpose. Hence, \( \begin{equation*} [\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})]\!] =([\![ { t}]\!] ,D[\![ { t}]\!])\qquad \text{and}\qquad [\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})]\!] =([\![ { t}]\!] ,{D[\![ { t}]\!] }^{t}). \end{equation*} \)

Proof. Seeing that our language has tuples, we may assume without loss of generality that Γ = x: τ. We use the logical relations for general d to show differentiability of all programs. Next, the case of d = 1 suffices to show that CHAD computes correct derivatives.

First, we observe that \( [\![ { t}]\!] \) sends differentiable functions \( \mathbb {R}^d\rightarrow [\![ { \tau }]\!] \) to differentiable functions \( \mathbb {R}^d\rightarrow [\![ { \sigma }]\!] \), as t respects the logical relations. Observing that \( [\![ { \tau }]\!] \cong \mathbb {R}^N \) for some N, as τ is a first-order type, we can choose d = N. Then, \( P_{{ \tau }}^f \) contains (f, (g, h)) for a differentiable isomorphism f. It therefore follows that \( [\![ { t}]\!] \) is differentiable.

Second, we focus on the correctness of forward AD, \( \overrightarrow {\mathcal {D}}_{} \).

Let \( x\in [\![ \overrightarrow {\mathcal {D}}_{}({ \tau })_1]\!] =[\![ { \tau }]\!] \cong \mathbb {R}^N \) and \( v\in [\![ \overrightarrow {\mathcal {D}}_{}({ \tau })_2]\!] \cong \underline{\mathbb {R}}^N \) (for some N). Then, there is a differentiable curve \( \gamma :\mathbb {R}\rightarrow [\![ { \tau }]\!] \), such that γ(0) = x and Dγ(0)(1) = v. Clearly, \( (\gamma ,(\gamma , D\gamma))\in P_{{ \tau }}^f \) (for d = 1).

As \( ([\![ { t}]\!] , ([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1]\!] ,[\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!])) \) respects the logical relation Pf by Corollary 8.2, we have \( \begin{align*} &(\gamma ;[\![ { t}]\!] , (\gamma ;[\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1]\!] ,x\mapsto r\mapsto [\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!] (\gamma (x))(D\gamma (x)(r))))= (\gamma ,(\gamma ,D\gamma));([\![ { t}]\!] , ([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1]\!] ,[\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!])) \in P^f_{{ \sigma }}, \end{align*} \) where we use the definition of composition in Set × ΣSetCMon. Therefore, \( \begin{equation*} \gamma ;[\![ { t}]\!] =\gamma ;[\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1]\!] \end{equation*} \) and, by the chain rule, \( \begin{align*} x\mapsto r\mapsto D[\![ { t}]\!] (\gamma (x))(D\gamma (x)(r)) &=D(\gamma ;[\![ { t}]\!])= x\mapsto r\mapsto [\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!] (\gamma (x))(D\gamma (x)(r)). \end{align*} \) Evaluating the former at 0 gives \( [\![ { t}]\!] (x)=[\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1]\!] (x) \). Similarly, evaluating the latter at 0 and 1 gives \( D[\![ { t}]\!] (x)(v)= [\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!] (x)(v) \).
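The curve argument can be replayed numerically: for γ(s) = x + sv we have γ(0) = x and Dγ(0)(1) = v, and the derivative of γ;f at 0 is the directional derivative Df(x)(v). A sketch with a hypothetical f and hand-written Df standing in for \( [\![ { t}]\!] \) and \( [\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!] \):

```python
import math

# The d = 1 curve trick from the proof, checked with finite differences:
# the derivative along the straight-line curve through x with velocity v
# equals the directional derivative Df(x)(v).

def f(x, y):  return x * math.exp(y)                       # stand-in for [[t]]
def Df(x, y, vx, vy):                                      # its derivative
    return vx * math.exp(y) + x * math.exp(y) * vy

x0, y0, vx, vy = 1.5, 0.3, 0.2, -0.4

def along_curve(s):                                        # gamma ; f
    return f(x0 + s * vx, y0 + s * vy)

h = 1e-6
curve_deriv = (along_curve(h) - along_curve(-h)) / (2 * h)
assert abs(curve_deriv - Df(x0, y0, vx, vy)) < 1e-6
```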

Third, we turn to the correctness of reverse AD, \( \overleftarrow {\mathcal {D}}_{} \).

Let \( x\in [\![ \overleftarrow {\mathcal {D}}_{}({ \tau })_1]\!] =[\![ { \tau }]\!] \cong \mathbb {R}^N \) and \( v\in [\![ \overleftarrow {\mathcal {D}}_{}({ \tau })_2]\!] \cong \underline{\mathbb {R}}^N \) (for some N). Let \( \gamma _i:\mathbb {R}\rightarrow [\![ { \tau }]\!] \) be a differentiable curve such that γi(0) = x and Dγi(0)(1) = ei, where we write ei for the ith standard basis vector of \( [\![ \overleftarrow {\mathcal {D}}_{}({ \tau })_2]\!] \cong \underline{\mathbb {R}}^N \). Clearly, \( (\gamma _i,(\gamma _i, {D\gamma _i}^{t}))\in P_{{ \tau }}^r \) (for d = 1).

As \( ([\![ { t}]\!] , ([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1]\!] ,[\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!])) \) respects the logical relation Pr by Corollary 8.2, we have \( \begin{align*} &(\gamma _i;[\![ { t}]\!] , (\gamma _i;[\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1]\!] , x\mapsto w\mapsto {D\gamma _i(x)}^{t}([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!] (\gamma _i(x))(w))))= (\gamma _i,(\gamma _i,{D\gamma _i}^{t}));([\![ { t}]\!] , ([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1]\!] ,[\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!]))\in P^r_{{ \sigma }}, \end{align*} \) by using the definition of composition in Set × ΣSetCMonop. Consequently, \( \begin{equation*} \gamma _i;[\![ { t}]\!] =\gamma _i;[\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1]\!] \end{equation*} \) and, by the chain rule, \( \begin{align*} x\mapsto w\mapsto {D\gamma _i(x)}^{t}({D[\![ { t}]\!] (\gamma _i(x))}^{t}(w))& ={D(\gamma _i;[\![ { t}]\!])}^{t} = x\mapsto w\mapsto {D\gamma _i(x)}^{t}([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!] (\gamma _i(x))(w)). \end{align*} \) Evaluating the former at 0 gives \( [\![ { t}]\!] (x)=[\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_1]\!] (x) \). Similarly, evaluating the latter at 0 and v gives us \( e_i \odot {D[\![ { t}]\!] (x)}^{t}(v)= e_i \odot [\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!] (x)(v) \). As this equation holds for all basis vectors ei of \( [\![ \overleftarrow {\mathcal {D}}_{}({ \tau })]\!] \), we find that \( \begin{align*} {D[\![ { t}]\!] (x)}^{t}(v)&= \quad \sum _{i=1}^N (e_i \odot {D[\![ { t}]\!] (x)}^{t}(v))\cdot e_i =\sum _{i=1}^N (e_i \odot [\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!] (x)(v))\cdot e_i = [\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})_2]\!] (x)(v).\quad \Box \end{align*} \)
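The final reconstruction step can likewise be checked numerically: the inner product of the cotangent with each basis vector ei equals the derivative along the curve γi, and summing over the basis rebuilds the transposed derivative applied to v. A sketch with a hypothetical f standing in for \( [\![ { t}]\!] \):

```python
import math

# Reconstructing a cotangent from its inner products with basis vectors,
# as in the last display of the proof: with v = 1 and curves
# gamma_i(s) = x + s * e_i, the quantity e_i . Df(x)^t(v) is just
# D(gamma_i ; f)(0), i.e. the i-th partial derivative of f.

def f(x, y):
    return math.sin(x * y)

def grad_f(x, y):  # Df(x)^t applied to v = 1.0, written by hand
    return [y * math.cos(x * y), x * math.cos(x * y)]

def along_basis(x, e, h=1e-6):
    # finite-difference derivative of gamma_i ; f at 0
    xp = [a + h * b for a, b in zip(x, e)]
    xm = [a - h * b for a, b in zip(x, e)]
    return (f(*xp) - f(*xm)) / (2 * h)

x = [0.8, 1.1]
basis = [[1.0, 0.0], [0.0, 1.0]]
# summing (e_i . Df(x)^t(v)) * e_i over the basis rebuilds the gradient
reconstructed = [along_basis(x, e) for e in basis]
assert all(abs(a - b) < 1e-5 for a, b in zip(reconstructed, grad_f(*x)))
```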


9 PRACTICAL RELEVANCE AND IMPLEMENTATION IN FUNCTIONAL LANGUAGES

Most popular functional languages, such as Haskell and OCaml, do not natively support linear types. As such, the transformations described in this article may seem hard to implement. However, as we will argue in this section, we can easily implement the limited linear types used in phrasing the transformations as abstract data types by using merely a basic module system, such as that of Haskell. The key idea is that linear function types τ ⊸ σ can be represented as plain functions τ → σ and copowers !τ ⊗ σ can be represented as lists or arrays of pairs of type \( { \tau }\boldsymbol {\mathop {*}}{ \sigma } \).

To substantiate that claim, we provide a reference implementation of CHAD operating on strongly typed, deeply embedded DSLs in Haskell at https://github.com/VMatthijs/CHAD. This section explains how that implementation relates to the theoretical development in the rest of this article. It is rather short, because our implementation almost exactly follows the theoretical development in Sections 3, 4, 5, and 7.

9.1 Implementing Linear Functions and Copowers as Abstract Types in Functional Languages

Based on the denotational semantics, τ ⊸ σ-types should hold (representations of) functions f from τ to σ that are homomorphisms of the monoid structures on τ and σ. We will see that these types can be implemented using an abstract data type that holds certain basic linear functions (extensible as the library evolves) and is closed under the identity, composition, argument swapping, and currying. Again, based on the semantics, !τ ⊗ σ should contain (representations of) finite maps (associative arrays) \( \sum _{i=1}^n{!}{ t}_i\otimes _{}{ s}_i \) of pairs (ti, si), where ti is of type τ, and si is of type σ, and where we identify xs + !t⊗s + !t⊗s′ and xs + !t⊗(s + s′).

To implement this idea, we consider abstract types LFun(τ, σ) of linear functions and Copower(τ, σ) of copowers. Their programs are generated by the following grammar \( \begin{align*} \begin{array}{llll} { \tau }, { \sigma }, { \rho } & ::=& & \qquad \text{types}\\ & \mathrel {\vert }&\ldots & \qquad \text{as in Section~3}\\ &&&\\ { t}, { s}, { r} & ::=& & \qquad \text{terms}\\ & \mathrel {\vert }& \ldots & \qquad \text{as in Section~3}\\ & \mathrel {\vert }& \mathsf {lop}({ t}_1,\ldots ,{ t}_n) & \qquad \text{linear operations}\\ & \mathrel {\vert }& \underline{0}_{{ \tau }} & \qquad \text{zero}\\ & \mathrel {\vert }& { t}+ { s} & \qquad \text{plus}\\ & \mathrel {\vert }& {\mathbf {lid}}_{}& \qquad \text{linear identity}\\ & \mathrel {\vert }& { t};_{\ell }{ s} & \qquad \text{linear composition}\\ \end{array} \qquad \begin{array}{llll} &\mathrel {\vert }\quad \, & \mathbf {Copower}({ \tau },{ \sigma }) & \qquad \text{copower types}\\ & \mathrel {\vert }& \mathbf {LFun}({ \tau },{ \sigma }) & \qquad \text{linear function}\\ & & & \\ &\mathrel {\vert }& {\mathbf {lswap}}_{}\,{ t}& \qquad \text{swapping args}\\ &\mathrel {\vert }& {\mathbf {leval}}_{{ t}} & \qquad \text{linear evaluation}\\ &\mathrel {\vert }& \lbrace ({ t},-)\rbrace & \qquad \text{singletons}\\ &\mathrel {\vert }& {\mathbf {lcopowfold}}\,{}{ t}& \qquad \text{$\mathbf {Copower}$-elimination}\\ &\mathrel {\vert }& \mathbf {lfst}\,& \qquad \text{linear projection}\\ &\mathrel {\vert }& \mathbf {lsnd}\,&\qquad \text{linear projection}\\ &\mathrel {\vert }& \mathbf {lpair}({ t},{ s}) & \qquad \text{linear pairing,} \end{array} \end{align*} \) and their API can be typed according to the rules of Figure 8.

Fig. 8.

Fig. 8. Typing rules for the applied target language, to extend the source language.

We note that these abstract types give us precisely the functionality and type safety of the linear function and copower types of our target language of Section 4. Indeed, we can define a semantics- and type-preserving translation (−)T from that target language to our source language extended with these LFun(τ, σ) and Copower(τ, σ) types, for which \( ({!}{ \tau }\otimes _{}{ \underline{\sigma } })^T\stackrel{\mathrm{def}}{=}\mathbf {Copower}({ \tau }^T,{ \underline{\sigma } }^T)\), \( ({ \underline{\tau } }\multimap { \underline{\sigma } })^T\stackrel{\mathrm{def}}{=}\mathbf {LFun}({ \underline{\tau } }^T,{ \underline{\sigma } }^T) \), \( (\underline{\mathbf {real}}^n)^T\stackrel{\mathrm{def}}{=}\mathbf {real}^n \) and we extend (−)T structurally recursively, letting it preserve all other type formers. We then translate \( ({ x}_1:{ \tau },\ldots ,{ x}_n:{ \tau };{ y}:{ \underline{\sigma } }\vdash { t}:{{ \underline{\rho } }})^T\stackrel{\mathrm{def}}{=}{ x}_1:{ \tau }^T,\ldots ,{ x}_n:{ \tau }^T\vdash { t}^T:{({ \underline{\sigma } }\multimap { \underline{\rho } })^T} \) and \( ({ x}_1:{ \tau },\ldots ,{ x}_n:{ \tau }\vdash { t}:{{ \sigma }})^T\stackrel{\mathrm{def}}{=}{ x}_1:{ \tau }^T,\ldots ,{ x}_n:{ \tau }^T\vdash { t}^T : { \sigma }^T \). We believe an interested reader can fill in the details.

9.2 Implementing the API of LFun(τ, σ) and Copower(τ, σ) Types

We observe that we can implement this API of LFun(τ, σ) and Copower(τ, σ) types, as follows, in a language that extends the source language with types List(τ) of lists (or arrays) of elements of type τ. Indeed, we implement LFun(τ, σ) under the hood, for example, as τσ and Copower(τ, σ) as \( \mathbf {List}({ \tau }\boldsymbol {\mathop {*}}{ \sigma }) \). The idea is that LFun(τ, σ), which arose as a right adjoint in our linear language, is essentially a subtype of τσ. However, Copower(τ, σ), which arose as a left adjoint, is a quotient type of \( \mathbf {List}({ \tau }\boldsymbol {\mathop {*}}{ \sigma }) \). We achieve the desired subtyping and quotient typing by exposing only the API of Figure 8 and hiding the implementation. We can then implement this interface as follows.11 \( \begin{align*} &\mathsf {lop}\stackrel{\mathrm{def}}{=}\overline{\mathsf {lop}} \qquad \underline{0}_{\mathbf {1}} \stackrel{\mathrm{def}}{=}\langle \rangle \qquad { t}+_{\mathbf {1}} { s} \stackrel{\mathrm{def}}{=}\langle \rangle \qquad \underline{0}_{{ \tau }\boldsymbol {\mathop {*}}{ \sigma }} \stackrel{\mathrm{def}}{=}\langle \underline{0}_{{ \tau }}, \underline{0}_{{ \sigma }}\rangle \qquad { t}+_{{ \tau }\boldsymbol {\mathop {*}}{ \sigma }}{ s} \stackrel{\mathrm{def}}{=}\langle \mathbf {fst}\,{ t}+_{{ \tau }}\mathbf {fst}\,{ s}, \mathbf {snd}\,{ t}+_{{ \sigma }}\mathbf {snd}\,{ s}\rangle \\ &\underline{0}_{{ \tau }\rightarrow { \sigma }} \stackrel{\mathrm{def}}{=}\lambda \_.\,\underline{0}_{{ \sigma }}\qquad { t}+_{{ \tau }\rightarrow { \sigma }} { s} \stackrel{\mathrm{def}}{=}\lambda { x}.\,{ t}\,{ x}+_{{ \sigma }} { s}\,{ x}\qquad \underline{0}_{\mathbf {LFun}({ \tau },{ \sigma })} \stackrel{\mathrm{def}}{=}\lambda \_.\,\underline{0}_{{ \sigma }} \qquad { t}+_{\mathbf {LFun}({ \tau },{ \sigma })} { s} \stackrel{\mathrm{def}}{=}\lambda { x}.\,{ t}\,{ x}+_{{ \sigma }} { s}\,{ x}\\ &\underline{0}_{\mathbf {Copower}({ \tau },{ \sigma })}\stackrel{\mathrm{def}}{=}\mathbf {[\,]}\qquad 
{ t}+_{\mathbf {Copower}({ \tau },{ \sigma })} { s} \stackrel{\mathrm{def}}{=}\mathbf {fold}\,{ x}:: acc\,\mathbf {over}\,{ x}\,\mathbf {in}\,{ t}\,\mathbf {from}\,acc={ s} \\ &{\mathbf {lid}}_{}\stackrel{\mathrm{def}}{=}\lambda { x}.\,{ x}\qquad { t};_{\ell }{ s} \stackrel{\mathrm{def}}{=}\lambda { x}.\,{ s}\,({ t}\,{ x}) \qquad {\mathbf {lswap}}_{}\,{ t}\stackrel{\mathrm{def}}{=}\lambda { x}.\,\lambda { y}.\,{ t}\,{ y}\,{ x}\qquad {\mathbf {leval}}_{{ t}}\stackrel{\mathrm{def}}{=}\lambda { x}.\,{ x}\,{ t}\\ & \lbrace ({ t},-)\rbrace \stackrel{\mathrm{def}}{=}\lambda { x}.\,\langle { t}, { x}\rangle :: \mathbf {[\,]}\qquad {\mathbf {lcopowfold}}\,{}{ t}\stackrel{\mathrm{def}}{=}\lambda { z}.\,\mathbf {fold}\,{ t}\, (\mathbf {fst}\,{ x})\,(\mathbf {snd}\,{ x})+acc\,\mathbf {over}\,{ x}\,\mathbf {in}\,{ z}\,\mathbf {from}\,acc=\underline{0} \\ &\mathbf {lfst}\,\stackrel{\mathrm{def}}{=}\lambda { x}.\,{\mathbf {fst}\,{ x}}\qquad \mathbf {lsnd}\,\stackrel{\mathrm{def}}{=}\lambda { x}.\,{\mathbf {snd}\,{ x}} \qquad \mathbf {lpair}({ t},{ s})\stackrel{\mathrm{def}}{=}\lambda { x}.\,{\langle { t}\,{ x}, { s}\,{ x}\rangle } \end{align*} \) Here, we write \( \overline{\mathsf {lop}} \) for the function that \( \mathsf {lop} \) is intended to implement, \( \mathbf {[\,]} \) for the empty list, \( { t}::{ s} \) for the list consisting of s with t prepended on the front, and \( \mathbf {fold}\,{ t}\,\mathbf {over}\,{ x}\,\mathbf {in}\,{ s}\,\mathbf {from}\,acc=\mathit {init} \) for (right) folding an operation t over a list s, starting from init. The monoid structure \( \underline{0}_{{ \tau }} \), \( +_{{ \tau }} \) can be defined by induction on the structure of types, by using, for example, type classes in Haskell or OCaml’s module system or (in any functional language with algebraic data types) by using reified types. Furthermore, the implementer of the AD library can determine which linear operations \( \mathsf {lop} \) to include within the implementation of LFun.
We expect these linear operations to include various forms of dense and sparse matrix-vector multiplication as well as code for computing Jacobian-vector and Jacobian-adjoint products for the operations \( \mathsf {op} \) that avoids having to compute the full Jacobian. Another option is to simply include two linear operations \( D \mathsf {op} \) and \( {D \mathsf {op}}^{t} \) for computing the derivative and transposed derivative of each operation \( \mathsf {op} \).
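To make the implementation strategy above concrete, here is a minimal sketch transliterated into Python (the names `lid`, `lcomp`, `lpair`, `singleton`, `lcopowfold`, `zero_of`, and `plus` are our own; the paper's language is functional, and a real library would hide these representations behind abstract types as described):

```python
# LFun(tau, sigma) is represented as a plain function; Copower(tau, sigma) as
# a list of pairs. The monoid structure (0_tau, +_tau) is defined by induction
# on a reified type, as suggested in the text.

def lid():
    return lambda x: x

def lcomp(t, s):                       # t ;_l s, i.e. s after t
    return lambda x: s(t(x))

def lpair(t, s):
    return lambda x: (t(x), s(x))

def singleton(t):                      # {(t, -)}: a one-element copower
    return lambda x: [(t, x)]

def lcopowfold(t, zero):               # fold t over a copower, accumulating with +
    def run(z):
        acc = zero
        for (x, v) in z:
            acc = plus(acc, t(x)(v))
        return acc
    return run

# Monoid structure by induction on a reified type: 'real', ('pair', a, b),
# ('fun', b), and ('copower',) are hypothetical type tags.
def zero_of(ty):
    if ty == 'real':
        return 0.0
    if ty[0] == 'pair':
        return (zero_of(ty[1]), zero_of(ty[2]))
    if ty[0] == 'fun':
        return lambda _: zero_of(ty[1])
    if ty[0] == 'copower':
        return []

def plus(t, s):
    if isinstance(t, (int, float)):
        return t + s
    if isinstance(t, tuple):
        return tuple(plus(a, b) for a, b in zip(t, s))
    if isinstance(t, list):
        return t + s                   # copowers: list concatenation
    return lambda x: plus(t(x), s(x))  # functions: pointwise addition
```

For instance, folding the bilinear map (x, v) ↦ x·v over the copower [(2, 3), (4, 5)] yields 2·3 + 4·5 = 26, illustrating how `lcopowfold` quotients the list representation by the monoid laws.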

9.3 Maintaining Type Safety throughout the Compilation Pipe-Line in Our Reference Implementation

In a principled approach to building a define-then-run AD library, one would shield this implementation using the abstract data types Copower(τ, σ) and LFun(τ, σ) as we describe, both for reasons of type safety and because it conveys the intuition behind the algorithm and its correctness. By using such abstract data types in our Haskell implementation combined with GADTs and type families, we achieve a fully type-safe (well-scoped, well-typed De Bruijn) implementation of the source and target languages of Sections 3 and 4 with their semantics of Section 5 and statically type-checked code transformations of Section 7.

However, nothing stops library implementers from exposing the full implementation rather than working with abstract types. In fact, this seems to be the approach taken in Reference [43]. A downside of that “exposed” approach is that the transformations then no longer respect equational reasoning principles. In our reference implementation, we include a compiler from the (linearly typed) target language to a less type-safe “concrete” target language (implementing Section 9.2 as a compilation step): essentially the source language extended with list (or array) types. This demonstrates definitively that CHAD implements a notion of compile-time AD code transformation that takes in and spits out standard functional code without any custom semantics.

9.4 Compiling Away Copowers

As a final observation on this implementation, we would like to note that while the proposed implementation of copowers as lists is generally applicable, more efficient implementation strategies can often be achieved in practice. In fact, in unpublished follow-up work to this article led by Tom Smeding, we show that when we implement CHAD to operate on Accelerate [34], we can optimize away uses of copower types.


10 ADDING HIGHER-ORDER ARRAY PRIMITIVES

The aim of this article is to answer the foundational question of how to perform (reverse) AD at higher types. The problem of how to perform AD of evaluation and currying is highly challenging. For this reason, we have devoted this article to explaining a solution to that problem in detail, working with a toy language with ground types of black-box, sized arrays \( \mathbf {real}^n \) with some first-order operations \( \mathsf {op} \). However, many of the interesting applications only arise once we can use higher-order array primitives such as map and fold on \( \mathbf {real}^n \).

Our definitions and correctness proofs extend to this setting with standard array processing primitives including map, fold, filter, zipWith, permute (aka scatter), backpermute (aka gather), generate (or build), and array indexing. We plan to discuss these primitives as well as CHAD applied to dynamically sized arrays in detail in an applied follow-up paper, which will focus on an implementation of CHAD to operate on Accelerate [34].

To illustrate the idea behind such an extension, we briefly discuss the case for map here and leave the rest to future work. Suppose that we add operations \( \begin{equation*} \frac {\Gamma ,{ x}:\mathbf {real}\vdash { t}: \mathbf {real}\quad \Gamma \vdash { s} : \mathbf {real}^n}{\Gamma \vdash \mathbf {map}({ x}.{ t},{ s}) : \mathbf {real}^n} \end{equation*} \) to the source language, to “map” functions over the black-box arrays. Then, supposing that we add to the target language primitives \( \begin{align*} \frac {\Gamma ,{ x}:\mathbf {real};\mathsf {v}:{ \underline{\tau } }\boldsymbol {\mathop {*}}\underline{\mathbf {real}}\vdash { t}: \underline{\mathbf {real}}\quad \Gamma \vdash { s}: \mathbf {real}^n\quad \Gamma ;\mathsf {v}:{ \underline{\tau } }\vdash { r}:\underline{\mathbf {real}}^n } {\Gamma ;\mathsf {v}:{ \underline{\tau } }\vdash D\mathbf {map}({ x}.{ t},{ s},{ r}) :\underline{\mathbf {real}}^n}\\ \\ \frac { \Gamma ,{ x}:\mathbf {real}; \mathsf {v}:\underline{\mathbf {real}}\vdash { t}:{ \underline{\tau } }\boldsymbol {\mathop {*}} \underline{\mathbf {real}}\quad \Gamma \vdash { s}: \mathbf {real}^n \quad \Gamma ;\mathsf {v}:\underline{\mathbf {real}}^n\vdash { r} : { \underline{\tau } }} {\Gamma ;\mathsf {v}:\underline{\mathbf {real}}^n\vdash {D\mathbf {map}}^{t}({ x}.{ t},{ s},{ r}) : { \underline{\tau } },} \end{align*} \) we can define \( \begin{align*} &\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {map}({ x}.{ t},{ s})) &&\stackrel{\mathrm{def}}{=}&& \mathbf {let}\,{ y}=\lambda { x}.\,\overrightarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})\,\mathbf {in}\, \mathbf {let}\,\langle { z}, { z}^{\prime }\rangle =\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})\,\mathbf {in}\, & \hspace{60.0pt}\\ &&&&&\langle \mathbf {map}({ x}.\mathbf {fst}\,({ y}\,{ x}),{ z}), \underline{\lambda } \mathsf {v}.\,D\mathbf {map}({ x}.(\mathbf {snd}\,({ y}\,{ x})){\bullet } \mathsf {v},{ z},{ z}^{\prime }{\bullet } \mathsf {v})\rangle \\ 
&\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {map}({ x}.{ t},{ s})) &&\stackrel{\mathrm{def}}{=}&& \mathbf {let}\,{ y}=\lambda { x}.\,\overleftarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})\,\mathbf {in}\, \mathbf {let}\,\langle { z}, { z}^{\prime }\rangle =\overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})\,\mathbf {in}\,\\ &&&&&\langle \mathbf {map}({ x}.\mathbf {fst}\,({ y}\,{ x}),{ z}), \underline{\lambda } \mathsf {v}.\,{D\mathbf {map}}^{t}({ x}.(\mathbf {snd}\,({ y}\,{ x})){\bullet } \mathsf {v},{ z},{ z}^{\prime }{\bullet } \mathsf {v})\rangle . \end{align*} \)

In our practical API of Section 9.1, the required target language primitives correspond to \( \begin{align*} \frac {\Gamma ,{ x}:\mathbf {real}\vdash { t}: \mathbf {LFun}({ \underline{\tau } }\boldsymbol {\mathop {*}}\underline{\mathbf {real}},\underline{\mathbf {real}})\quad \Gamma \vdash { s}: \mathbf {real}^n\quad \Gamma \vdash { r}:\mathbf {LFun}({ \underline{\tau } },\underline{\mathbf {real}}^n) } {\Gamma \vdash D\mathbf {map}({ x}.{ t},{ s},{ r}) :\mathbf {LFun}({ \underline{\tau } },\underline{\mathbf {real}}^n)}\\ \frac { \Gamma ,{ x}:\mathbf {real}\vdash { t}:\mathbf {LFun}(\underline{\mathbf {real}},{ \underline{\tau } }\boldsymbol {\mathop {*}} \underline{\mathbf {real}}) \quad \Gamma \vdash { s}: \mathbf {real}^n \quad \Gamma \vdash { r} : \mathbf {LFun}(\underline{\mathbf {real}}^n,{ \underline{\tau } })} {\Gamma \vdash {D\mathbf {map}}^{t}({ x}.{ t},{ s},{ r}) : \mathbf {LFun}(\underline{\mathbf {real}}^n,{ \underline{\tau } }).} \end{align*} \) Extending Section 9.2, we can implement this API as \( \begin{align*} &D\mathbf {map}({ x}.{ t},{ s},{ r})\stackrel{\mathrm{def}}{=}\lambda { y}.\, \mathbf {zipWith}(({ x},{ x}^{\prime }).{ t}\,\langle { y}, { x}^{\prime }\rangle ,{ s},{ r}\,{ y})\\ &{D\mathbf {map}}^{t}({ x}.{ t},{ s},{ r})\stackrel{\mathrm{def}}{=}\lambda { y}.\,\mathbf {let}\,zs=\mathbf {zipWith}(({ x},{ x}^{\prime }).{ t}\,{ x}^{\prime },{ s},{ y})\,\mathbf {in}\, \mathbf {sum}(\mathbf {map}(w.\mathbf {fst}\,w,zs)) + { r}\,(\mathbf {map}(w.\mathbf {snd}\,w,zs)) , \end{align*} \) where \( \begin{align*} \frac {\Gamma ,{ x}:{ \tau }\vdash { t}: { \sigma }\quad \Gamma \vdash { s} : { \tau }^n}{\Gamma \vdash \mathbf {map}({ x}.{ t},{ s}) : { \sigma }^n} \qquad \frac {\Gamma ,{ x}:{ \tau },{ x}^{\prime }:{ \sigma }\vdash { t}:{ \rho }\quad \Gamma \vdash { s}:{ \tau }^n\quad \Gamma \vdash { r}:{ \sigma }^n} {\Gamma \vdash \mathbf {zipWith}(({ x},{ x}^{\prime }).{ t},{ s},{ r}):{{ \rho }}^n}\qquad \frac {\Gamma \vdash { t}: {{ \tau }}^n}{\Gamma 
\vdash \mathbf {sum}\,{ t}: { \tau }} \end{align*} \) are the usual functional programming idioms for mapping a unary function over an array, zipping two arrays with a binary operation, and taking the sum of the elements in an array. Note that we assume that we have types \( { \tau }^n \) for length-n arrays of elements of type τ, here, generalizing the arrays \( \mathbf {real}^n \) of elements of type real. We present a correctness proof for this implementation of the derivatives in Appendix A.
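The implementations of Dmap and Dmapᵗ above can be sketched in Python for concreteness (a hypothetical transliteration: linear functions become plain functions, arrays become lists, and the monoid on the cotangent type is passed in as `zero`/`plus`, here defaulting to floats):

```python
def dmap(t, s, r):
    # Dmap(x.t, s, r) = \y. zipWith((x, x'). t x <y, x'>, s, r y)
    return lambda y: [t(x)((y, x2)) for x, x2 in zip(s, r(y))]

def dmap_t(t, s, r, zero=0.0, plus=lambda a, b: a + b):
    # Dmap^t(x.t, s, r) = \y. let zs = zipWith((x, x'). t x x', s, y)
    #                         in sum(map(fst, zs)) + r(map(snd, zs))
    def run(y):
        zs = [t(x)(x2) for x, x2 in zip(s, y)]
        acc = zero
        for w in zs:
            acc = plus(acc, w[0])          # accumulate the environment cotangents
        return plus(acc, r([w[1] for w in zs]))
    return run
```

As a sanity check, for the program mapping x ↦ c·x over a constant array s, forward `dmap` seeded with dc = 1 returns s itself (the derivative in c), and reverse `dmap_t` applied to an all-ones cotangent returns the sum of s, as basic calculus predicts.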

Applications frequently require AD of higher-order primitives such as differential and algebraic equation solvers, e.g., for use in pharmacological modelling in Stan [40]. Currently, derivatives of such primitives are derived using the calculus of variations (and implemented with define-by-run AD) [7, 18]. Our proof method provides a more lightweight and formal method for calculating, and establishing the correctness of, derivatives for such higher-order primitives. Indeed, most formalizations of the calculus of variations use infinite-dimensional vector spaces and are technically involved [26].


11 SCOPE OF CHAD AND FUTURE WORK

11.1 Memory Use of CHAD’s Forward AD

CHAD formulates reverse and forward AD to be precisely each other’s categorical dual. The former first computes the primals in a forward pass and next the cotangents in a reverse pass. Dually, the latter first computes the primals in a forward pass and next the tangents in another forward pass. Seeing that the two forward passes in forward AD have identical control-flow, it can be advantageous to interleave them and simultaneously compute the primals and tangents. Such interleaving greatly reduces the memory consumption of the algorithm, as it means that primals no longer need to be stored for most of the algorithm. We present such an interleaved formulation of forward AD in Reference [21].

While it is much more memory efficient, it has the conceptual downside of no longer being the mirror image of reverse AD. Furthermore, these interleaved formulations of forward AD work by operating on dual numbers. That is, they use an array-of-structs representation, by contrast with the struct-of-arrays representation used to pair primals with tangents in CHAD. Therefore, an SoA-to-AoS optimization is typically needed to make interleaved implementations of forward AD efficient [38].
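The interleaved, dual-numbers formulation referenced above can be sketched in a few lines of Python (a hypothetical minimal example, not the formulation of Reference [21] itself): each value carries its tangent alongside its primal, so both travel through a single forward pass and no primal needs to be stored beyond its point of use.

```python
class Dual:
    """A dual number: primal and tangent travel together (array-of-structs)."""
    def __init__(self, primal, tangent):
        self.primal = primal
        self.tangent = tangent

    def __add__(self, other):
        return Dual(self.primal + other.primal, self.tangent + other.tangent)

    def __mul__(self, other):
        # product rule: d(x*y) = x*dy + dx*y
        return Dual(self.primal * other.primal,
                    self.primal * other.tangent + self.tangent * other.primal)

def diff(f, x):
    """Derivative of f at x: seed the tangent with 1.0 and read it off."""
    return f(Dual(x, 1.0)).tangent
```

For example, `diff(lambda x: x * x + x * x * x, 2.0)` evaluates d(x² + x³)/dx = 2x + 3x² at x = 2, i.e., 16, in one pass; each `Dual` is a dual-number struct, which is exactly the array-of-structs representation the text contrasts with CHAD's struct-of-arrays pairing.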

Finally, we note that such interleaving techniques do not apply to reverse AD, as we need to have completed the forward primal pass before we can start the reverse cotangent pass, due to the dependency structure of the algorithm.

11.2 Applying CHAD to Richer Source Languages

The core observations that let us use CHAD for AD on a higher-order language were the following:

(1)

there is a class of categories with structure \( \mathcal {S} \) (in this case, Cartesian closure) such that the source language Syn that we want to perform AD on can be seen as the freely generated \( \mathcal {S} \)-category on the operations \( \mathsf {op} \);

(2)

we identified structure \( \mathcal {T} \) that suffices for a CMon-enriched strictly indexed category \( \mathcal {L}:\mathcal {C}^{op}\rightarrow \mathbf {Cat} \) to ensure that \( \Sigma _\mathcal {C}\mathcal {L} \) and \( \Sigma _\mathcal {C}\mathcal {L}^{op} \) are \( \mathcal {S} \)-categories;

(3)

we gave a description \( \mathbf {LSyn}:\mathbf {CSyn}^{op}\rightarrow \mathbf {Cat} \) of the freely generated CMon-enriched strictly indexed category with structure \( \mathcal {T} \), on the Cartesian operations \( \mathsf {op} \) in CSyn and linear operations \( D \mathsf {op} \) and \( {D \mathsf {op}}^{t} \) in LSyn; we interpret this linear/non-linear language as the target language of our AD translations;

(4)

by the universal property of Syn, we now obtained unique \( \mathcal {S} \)-homomorphic AD functors \( \overrightarrow {\mathcal {D}}_{}:\mathbf {Syn}\rightarrow \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}} \) and \( \overleftarrow {\mathcal {D}}_{}:\mathbf {Syn}\rightarrow \Sigma _{\mathbf {CSyn}}{\mathbf {LSyn}}^{op} \) such that \( \overrightarrow {\mathcal {D}}_{}(\mathsf {op})=(\mathsf {op},D \mathsf {op}) \) and \( \overleftarrow {\mathcal {D}}_{}(\mathsf {op})=(\mathsf {op},{D \mathsf {op}}^{t}) \), whose correctness proof follows immediately because of the well-known theory of subsconing for \( \mathcal {S} \)-categories.

CHAD applies equally to source languages with other choices of \( \mathcal {S} \), provided that we can follow steps (1)–(4).

In particular, in Reference [30] it is shown how CHAD applies equally to languages with sum types and (co)inductive types (and tuple and function types). In that setting, the category LSyn is a genuine strictly indexed category over CSyn, to account for the fact that the (co)tangent space to a space of varying dimension depends on the chosen base point. That is, in its most principled formulation, the target language has (linear) dependent types. However, we can also work with a simply typed target language in this setting, at the cost of some extra type safety. In fact, our Haskell implementation already supports such a treatment of coproducts.

As discussed in Section 10, \( \mathcal {S} \) can also be chosen to include various operations for manipulating arrays, such as map, fold, filter, zipWith, permute (aka scatter), backpermute (aka gather), generate (aka build), and array indexing. We plan to describe this application of CHAD to array processing languages and we are implementing CHAD to operate on the Accelerate parallel array processing language.

In work in progress, we are applying CHAD to partial features such as real conditionals, iteration, recursion and recursive types. Our Haskell implementation of CHAD already supports real conditionals, iteration and recursion. The challenge in this setting is to understand the subtle interactions between the ωCPO-structure needed to model recursion and the commutative monoid structure that CHAD uses to accumulate (co)tangents.

11.3 CHAD for Other Dynamic Program Analyses

As noted in Reference [43], source-code transformation AD has a lot of similarities with other dynamic program analyses such as dynamic symbolic analysis and provenance analysis.

In fact, as the abstract perspective on CHAD given in Section 11.2 makes clear, CHAD is in no way tied to automatic differentiation. In many ways, it is much more general, and can best be seen as a framework for applying dynamic program analyses that accumulate data (either by going through the program forwards or backwards) in a commutative monoid to functional languages with expressive features. In fact, by varying the definitions of \( \overrightarrow {\mathcal {D}}_{}(\mathcal {R}) \) and \( \overleftarrow {\mathcal {D}}_{}(\mathcal {R}) \) for the ground types \( \mathcal {R} \) and the definitions of \( \overrightarrow {\mathcal {D}}_{}(\mathsf {op}) \) and \( \overleftarrow {\mathcal {D}}_{}(\mathsf {op}) \) for the primitive operations \( \mathsf {op} \) (we do not even need to use \( \overrightarrow {\mathcal {D}}_{}(\mathcal {R})_1=\mathcal {R} \), \( \overleftarrow {\mathcal {D}}_{}(\mathcal {R})_1=\mathcal {R} \), \( \overrightarrow {\mathcal {D}}_{}(\mathsf {op})_1= \mathsf {op} \) or \( \overleftarrow {\mathcal {D}}_{}(\mathsf {op})_1= \mathsf {op} \)!), we can completely change the nature of the analysis. In most cases, as long as a notion of correctness of the analysis can be phrased at the level of a denotational semantics, we conjecture that our subsconing techniques lead to straightforward correctness proofs of the analysis.
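As a speculative toy illustration of this point (our own example, with hypothetical names): keeping the dual-number-style pairing but replacing the commutative monoid of (co)tangents by sets of provenance tags under union yields a simple forward provenance analysis, where each value accumulates the set of inputs it depends on.

```python
class Tagged:
    """A value paired with its provenance: a set of input indices.
    The accumulated data forms a commutative monoid (sets under union)."""
    def __init__(self, value, tags):
        self.value = value
        self.tags = frozenset(tags)

    def __add__(self, other):
        return Tagged(self.value + other.value, self.tags | other.tags)

    def __mul__(self, other):
        return Tagged(self.value * other.value, self.tags | other.tags)

def provenance(f, *args):
    """Run f on tagged inputs; return the result and the inputs it depends on."""
    tagged = [Tagged(a, {i}) for i, a in enumerate(args)]
    out = f(*tagged)
    return out.value, set(out.tags)
```

Running `provenance(lambda x, y: x * x + y, 2.0, 3.0)` gives (7.0, {0, 1}), while `provenance(lambda x, y: x * x, 2.0, 3.0)` gives (4.0, {0}): the analysis accumulates, in the monoid, exactly which inputs each output depends on.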

To give one more example application of such an analysis, beyond AD, dynamic symbolic analysis and provenance analysis, one can note that for a source language Syn, generated from a base type \( \mathcal {R} \) that is a commutative semi-ring, we have a notion of algebraic (or formal) derivative of any polynomial \( x_1:\mathcal {R},\ldots ,x_n:\mathcal {R}\vdash \mathsf {op}(x_1,\ldots ,x_n):\mathcal {R} \) [28]. CHAD can be used to extend and compute this notion of derivative for arbitrary functional programs generated from the polynomials \( \mathsf {op} \) as basic operations. The particular case of such formal derivatives for the Boolean semi-ring \( \mathcal {R}=\mathbb {B} \) is used in Reference [45] to feed into a gradient descent algorithm to learn Boolean circuits. CHAD makes this method applicable to more general (higher-order) programs over (arrays of) Booleans.
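The formal-derivative idea over the Boolean semi-ring can be sketched as follows (a hedged illustration with names of our own choosing): values are (primal, tangent) pairs over the semi-ring \( \mathbb {B} \) with `or` as addition and `and` as multiplication, and the Leibniz rule d(x·y) = dx·y + x·dy computes the algebraic derivative of a circuit.

```python
def d_add(x, y):
    # x, y are (primal, tangent) pairs over the Boolean semi-ring
    return (x[0] or y[0], x[1] or y[1])

def d_mul(x, y):
    # Leibniz rule: d(x*y) = dx*y + x*dy, with + = or, * = and
    return (x[0] and y[0], (x[1] and y[0]) or (x[0] and y[1]))

def ddx(circuit, x, y):
    """Formal partial derivative of a two-input circuit in its first input:
    seed the first tangent with 1 (True) and the second with 0 (False)."""
    _, tangent = circuit((x, True), (y, False))
    return tangent
```

For instance, the derivative of `x and y` in x is y (so `ddx(d_mul, True, False)` is False), and the derivative of `x or y` in x is identically 1 (True), matching the algebraic derivative of the corresponding polynomials.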


12 RELATED WORK

This work is closely related to References [21] and [20], which introduced a similar semantic correctness proof for a dual-numbers version of forward mode AD and higher-order forward AD, using a subsconing construction. A major difference is that this article also phrases and proves correctness of reverse mode AD on a λ-calculus and relates reverse mode to forward mode AD. Using a syntactic logical relations proof instead, Reference [4] also proves correctness of forward mode AD. Again, it does not address reverse AD.

Reference [12] proposes a similar construction to that of Section 6, and relates it to the differential λ-calculus. That work develops sophisticated axiomatics for semantic reverse differentiation. However, it neither relates the semantics to a source-code transformation, nor discusses differentiation of higher-order functions. Our construction of differentiation with a (biadditive) linear target language might remind the reader of differential linear logic [15]. In differential linear logic, (forward) differentiation is a first-class operation in a (biadditive) linear language. By contrast, in our treatment, differentiation is a meta-operation.

Importantly, Reference [16] describes and implements what are essentially our source-code transformations, though they were restricted to first-order functions and scalars. After completing this work, we realized that Reference [43] describes an extension of the reverse mode transformation to higher-order functions in a similar manner as we propose in this article, but without the linear or abstract types. Though that paper did not derive the algorithm or show its correctness, it does discuss important practical considerations for its implementation and offers a dependently typed variant of the algorithm based on typed closure conversion, inspired by Reference [37].

Next, there are various lines of work relating to correctness of reverse mode AD that we consider less similar to our work. For example, Reference [31] defines and proves correct a formulation of reverse mode AD on a higher-order language that depends on a non-standard operational semantics, essentially a form of symbolic execution. Reference [2] does something similar for reverse mode AD on a first-order language extended with conditionals and iteration. Reference [8] defines a beautifully simple AD algorithm on a simply typed λ-calculus with linear negation (essentially, a more finely typed version of the continuation-based AD of Reference [21]) and proves it correct using operational techniques. Reference [33] extends this work to apply to recursion. Furthermore, they show with an impressive operational argument that this simple algorithm, surprisingly, corresponds to true reverse mode AD with the correct complexity under an operational semantics with a “linear factoring rule.” While this is a natural operational semantics for a linear λ-calculus, it is fundamentally different from normal call-by-value or call-by-name evaluation (under which the generated code has the wrong computational complexity). For this reason, we require a custom interpreter or compiler to use this reverse AD method in practice. Very recently, Reference [25] specified another purely functional reverse AD algorithm, which appears similar to, and which we conjecture to be equivalent to, an implementation of the techniques of Reference [33]. These formulations of reverse mode AD all depend on non-standard run-times and hence fall into the category of “define-by-run” formulations of reverse mode AD, for our purposes.
Meanwhile, we are concerned with “define-then-run” formulations: source-code transformations producing differentiated code at compile-time that can then be optimized during compilation with existing compiler tool-chains (such as the Accelerate [11], Futhark [19], and TensorFlow [1] frameworks for generating highly performant GPU code). While we can compile such define-by-run transformations together with their interpreter to achieve a source-code transformation (hence a sort of define-then-run transformation), the resulting code recursively traverses an AST, so does not obviously seem suitable for generating, via existing tool-chains, optimized machine code for the usual parallel hardware that we use as a target for AD, such as GPUs and TPUs.

Finally, there is a long history of work on reverse mode AD, though almost none of it applies the technique to higher-order functions. A notable exception is Reference [37], which gives an impressive source-code transformation implementation of reverse AD in Scheme. While very efficient, this implementation crucially uses mutation. Moreover, the transformation is complex and correctness is not considered. More recently, Reference [44] describes a much simpler implementation of a reverse AD code transformation, again very performant. However, the transformation is quite different from the one considered in this article as it relies on a combination of delimited continuations and mutable state. Correctness is not considered, perhaps because of the semantic complexities introduced by impurity.

Our work adds to the existing literature by presenting a novel, generally applicable method for compositional source-code transformation (forward and) reverse AD on expressive functional languages without a need for a non-standard runtime, by giving a method for compositional correctness proofs of such AD algorithms, and by observing that the CHAD method and its correctness proof are not limited to AD but apply generally to dynamic program analyses that accumulate data in a commutative monoid.

APPENDICES

A CHAD CORRECTNESS FOR HIGHER-ORDER OPERATIONS SUCH AS MAP

We extend the proofs of Lemmas 2.1 and 2.2 to apply to the map-constructs of Section 10.


A.1 The Semantics of map and Its Derivatives

First, we observe that \( \begin{align*} [\![ \mathbf {map}({ x}.{ t},{ s})]\!] : &[\![ \Gamma ]\!] \rightarrow \mathbb {R}^n\\ &\gamma \mapsto \left([\![ { t}]\!] (\gamma , \pi _1([\![ { s}]\!] (\gamma))),\ldots , [\![ { t}]\!] (\gamma , \pi _n([\![ { s}]\!] (\gamma)))\right). \end{align*} \) Similarly, \( \begin{align*} [\![ D\mathbf {map}({ x}.{ t},{ s},{ r})]\!] : &[\![ \Gamma ]\!] \rightarrow [\![ { \underline{\tau } }]\!] \multimap \underline{\mathbb {R}}^n\\ &\gamma \mapsto v\mapsto \left([\![ { t}]\!] (\gamma ,\pi _1([\![ { s}]\!] (\gamma)))(v,\pi _1([\![ { r}]\!] (\gamma)(v))),\ldots , [\![ { t}]\!] (\gamma ,\pi _n([\![ { s}]\!] (\gamma)))(v,\pi _n([\![ { r}]\!] (\gamma)(v)))\right)\\ [\![ {D\mathbf {map}}^{t}({ x}.{ t},{ s},{ r})]\!] : &[\![ \Gamma ]\!] \rightarrow \underline{\mathbb {R}}^n\multimap [\![ { \underline{\tau } }]\!] \\ &\gamma \mapsto v\mapsto \pi _1([\![ { t}]\!] (\gamma ,\pi _1([\![ { s}]\!] (\gamma)))(\pi _1(v)))+\cdots + \pi _1([\![ { t}]\!] (\gamma ,\pi _n([\![ { s}]\!] (\gamma)))(\pi _n(v)))+\\ &\hspace{40.0pt}[\![ { r}]\!] (\gamma)(\pi _2([\![ { t}]\!] (\gamma ,\pi _1([\![ { s}]\!] (\gamma)))(\pi _1(v))),\ldots , \pi _2([\![ { t}]\!] (\gamma ,\pi _n([\![ { s}]\!] (\gamma)))(\pi _n(v)))) \end{align*} \) This implies that \( \begin{align*} &\pi _1([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {map}({ x}.{ t},{ s}))]\!] (\gamma))=\left(\pi _1([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] (\gamma ,\pi _1(\pi _1([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (\gamma))))),\ldots , \pi _1([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] (\gamma ,\pi _n(\pi _1([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (\gamma)))))\right)\\ &\pi _2([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {map}({ x}.{ t},{ s}))]\!] (\gamma))(v)=\Big (\pi _2([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] 
(\gamma ,\pi _1(\pi _1([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (\gamma)))))(v,\pi _1(\pi _2([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (\gamma))(v))) ,\ldots ,\\ &\hspace{150.0pt}\pi _2([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] (\gamma ,\pi _n(\pi _1([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (\gamma)))))(v,\pi _n(\pi _2([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (\gamma))(v))) \Big)\\ &\pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {map}({ x}.{ t},{ s}))]\!] (\gamma))=\left(\pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] (\gamma ,\pi _1(\pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (\gamma))))),\ldots , \pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] (\gamma ,\pi _n(\pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (\gamma)))))\right)\\ &\pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {map}({ x}.{ t},{ s}))]\!] (\gamma))(v)= \pi _1(\pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] (\gamma ,\pi _1(\pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (\gamma)))))(\pi _1(v)))+\cdots \\ &\quad \ +\ \pi _1(\pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] (\gamma ,\pi _n(\pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (\gamma)))))(\pi _n(v)))\\ &\quad \ +\ \pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})(\gamma)]\!])(\pi _2(\pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] (\gamma ,\pi _1(\pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (\gamma)))))(\pi _1(v))),\ldots ,\\ &\hspace{185.0pt}\pi _2(\pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] 
(\gamma ,\pi _n(\pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (\gamma)))))(\pi _n(v))))\\ &\phantom{\pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {map}({ x}.{ t},{ s}))]\!] (\gamma))(v)}=\sum _{i=1}^n \pi _1(\pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] (\gamma ,\pi _i(\pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (\gamma)))))(\pi _i(v)))\\ &\quad \ +\ \pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})(\gamma)]\!])(0,\ldots ,0,\pi _2(\pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] (\gamma ,\pi _i(\pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (\gamma)))))(\pi _i(v))),0,\ldots ,0), \end{align*} \) where the last equation holds by linearity of \( \pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})(\gamma)]\!]) \).


A.2 Extending the Induction Proof of the Fundamental Lemma for Forward CHAD

First, we focus on extending the induction proof of the fundamental lemma for forward CHAD to apply to maps. Assume the induction hypothesis that t and s respect the logical relation. We show that map(x.t, s) does as well. (We assume that all terms are well-typed.) Suppose that \( (f, (g, h)) \in P_\Gamma \). We want to show that \( \begin{equation*} (f^{\prime },(g^{\prime },h^{\prime }))\!\stackrel{\mathrm{def}}{=}\!(f;[\![ \mathbf {map}({ x}.{ t},{ s})]\!] , (g; [\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {map}({ x}.{ t},{ s}))]\!] ; \pi _1, x\mapsto r\mapsto \pi _2([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {map}({ x}.{ t},{ s}))]\!] (g(x)))(h(x)(r))))\in P_{\mathbf {real}^n}. \end{equation*} \) Writing \( f^{\prime }= (f^{\prime }_1,\ldots , f^{\prime }_n) \), \( g^{\prime }=(g^{\prime }_1,\ldots ,g^{\prime }_n) \) and \( h^{\prime }(x)=(h^{\prime }_1(x),\ldots ,h^{\prime }_n(x)) \), note that, as derivatives are computed componentwise, it is equivalent to show that \( (f^{\prime }_i,(g^{\prime }_i, h^{\prime }_i))\in P_\mathbf {real} \) for i = 1, …, n. That is, we need to show that \( \begin{align*} &(x\mapsto [\![ { t}]\!] (f(x), \pi _i([\![ { s}]\!] (f(x)))), (x\mapsto \pi _1([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] (g(x), \pi _i(\pi _1([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x)))))),\\ &\qquad x\mapsto r\mapsto \pi _2([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] (g(x),\pi _i(\pi _1([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x))))))(h(x)(r),\pi _i(\pi _2([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x)))(h(x)(r))))))\in P_{\mathbf {real}}. \end{align*} \) As t respects the logical relation by our induction hypothesis, it is enough to show that \( \begin{align*} &(x\mapsto (f(x), \pi _i([\![ { s}]\!] 
(f(x)))), (x\mapsto (g(x), \pi _i(\pi _1([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x)))))\\ &\qquad x\mapsto r\mapsto (h(x)(r),\pi _i(\pi _2([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x)))(h(x)(r))))))\in P_{\Gamma ,{ x}:\mathbf {real}}. \end{align*} \) Seeing that \( (f,(g,h))\in P_\Gamma \) by assumption, it is enough, by definition of \( P_{\Gamma ,{ x}:\mathbf {real}} \), to show that \( \begin{align*} &(x\mapsto \pi _i([\![ { s}]\!] (f(x))), (x\mapsto \pi _i(\pi _1([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x)))),\\ &\qquad x\mapsto r\mapsto \pi _i(\pi _2([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x)))(h(x)(r)))))\in P_{\mathbf {real}}. \end{align*} \) By definition of \( P_{\mathbf {real}^n} \), it is enough to show that \( \begin{align*} &(x\mapsto [\![ { s}]\!] (f(x)), (x\mapsto \pi _1([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x))),\\ &\qquad x\mapsto r\mapsto \pi _2([\![ \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x)))(h(x)(r))))\in P_{\mathbf {real}^n}. \end{align*} \) Seeing that s respects the logical relation by our induction hypothesis, it is enough to show that \( \begin{align*} &(x\mapsto f(x), (x\mapsto g(x),\\ &\qquad x\mapsto r\mapsto h(x)(r)))\in P_{\Gamma }, \end{align*} \) which is true by assumption.

Skip A.3Extending the Induction Proof of the Fundamental Lemma for Reverse CHAD Section

A.3 Extending the Induction Proof of the Fundamental Lemma for Reverse CHAD

Next, we extend the fundamental lemma for reverse CHAD to apply to maps. Assume the induction hypothesis that t and s respect the logical relation; we show that map(x.t, s) does as well. (We assume throughout that all terms are well-typed.) Suppose that (f, (g, h)) ∈ PΓ. We want to show that \( \begin{equation*} (f^{\prime },(g^{\prime },h^{\prime }))\!\stackrel{\mathrm{def}}{=}\! (f;[\![ \mathbf {map}({ x}.{ t},{ s})]\!] , (g; [\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {map}({ x}.{ t},{ s}))]\!] ; \pi _1,{x}\mapsto {v}\mapsto h(x)(\pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {map}({ x}.{ t},{ s}))]\!] (g(x)))(v))))\in P_{\mathbf {real}^n}. \end{equation*} \) Observe that \( f^{\prime } \), \( g^{\prime } \) and \( h^{\prime } \) are of the form \( f^{\prime }=(f^{\prime }_1,\ldots ,f^{\prime }_n) \), \( g^{\prime }=(g^{\prime }_1,\ldots ,g^{\prime }_n) \) and \( h^{\prime }(x)(v)=h^{\prime }_1(x)(\pi _1(v))+\cdots +h^{\prime }_n(x)(\pi _n(v)) \); by basic multivariate calculus, it therefore suffices to show that \( (f^{\prime }_i,(g^{\prime }_i,h^{\prime }_i))\in P_{\mathbf {real}} \) for i = 1, …, n. That is, we need to show (by linearity of h(x)) that \( \begin{align*} & (x\mapsto [\![ { t}]\!] (f(x),\pi _i([\![ { s}]\!] (f(x)))), (x\mapsto \pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] (g(x),\pi _i(\pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x)))))),\\ &\qquad x\mapsto v\mapsto h(x)(\pi _1(\pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] (g(x),\pi _i(\pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x))))))(v))\\ &\qquad \;\;\;+\pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x)))(0,\ldots ,0,\pi _2(\pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma },{ x}}({ t})]\!] (g(x),\pi _i(\pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x))))))(v)),0,\ldots ,0))))\in P_{\mathbf {real}}. \end{align*} \) As t respects the logical relation by our induction hypothesis, it is enough to show that \( \begin{align*} & (x\mapsto (f(x),\pi _i([\![ { s}]\!] (f(x)))), (x\mapsto (g(x),\pi _i(\pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x))))),\\ &\qquad x\mapsto v\mapsto h(x)(\pi _1(v)+\pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x)))(0,\ldots ,0,\pi _2(v),0,\ldots ,0))))\in P_{\Gamma ,{ x}:\mathbf {real}}. \end{align*} \) Seeing that \( (f,(g,h))\in P_\Gamma \) by assumption, we merely need to check the following, by definition of PΓ, x: real: \( \begin{align*} & (x\mapsto \pi _i([\![ { s}]\!] (f(x))), (x\mapsto \pi _i(\pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x)))),\\ &\qquad x\mapsto v\mapsto h(x)(\pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x)))(0,\ldots ,0,v,0,\ldots ,0))))\in P_{\mathbf {real}}. \end{align*} \) By definition of \( P_{\mathbf {real}^n} \) and linearity of h(x), it is enough to show that \( \begin{align*} & (x\mapsto [\![ { s}]\!] (f(x)), (x\mapsto \pi _1([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x))),\\ &\qquad x\mapsto v\mapsto h(x)(\pi _2([\![ \overleftarrow {\mathcal {D}}_{\overline{\Gamma }}({ s})]\!] (g(x)))(v))))\in P_{\mathbf {real}^n}. \end{align*} \) Seeing that s respects the logical relation by our induction hypothesis, it is enough to show that \( \begin{align*} (f,(g,h))\in P_{\Gamma }, \end{align*} \) which holds by assumption.
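The decomposition invoked here ("by basic multivariate calculus") is the standard fact that the transpose of a linear map into \( \mathbb {R}^n \) splits as a sum over its components. In our own notation (k, Df′, and π are ours, not the article's):

```latex
% If Df'(x) : R^k -> R^n has components Df'_i(x) : R^k -> R, then its
% transpose applied to a cotangent v in R^n is the sum of the componentwise
% transposes -- exactly the stated shape of h'(x)(v).
\bigl(Df'(x)\bigr)^{\mathsf T} v
  \;=\; \sum_{i=1}^{n} \bigl(Df'_i(x)\bigr)^{\mathsf T}\, \pi_i(v),
\qquad\text{i.e.}\qquad
h'(x)(v) \;=\; h'_1(x)(\pi_1(v)) + \cdots + h'_n(x)(\pi_n(v)).
```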

B TERM SIMPLIFICATIONS IN THE IMPLEMENTATION

Our implementation14 of the AD macros described in Section 9 includes a number of simplification rules for the concrete target language whose only purpose is to make the produced code more readable (without changing its asymptotic runtime cost). The motivation for these rules is to generate legible code when applying the AD macros to example programs. In this appendix, we list these simplification rules explicitly and show the implementation’s output on the four example programs in Figures 1 and 2 under these simplification rules. We do this to illustrate that

(1)

the simplifications given here are evidently meaning-preserving, given the βη + rules in Figures 4 and 6, and are standard rules that any optimizing compiler would apply;

(2)

the thus simplified output of the AD macros from Section 7 is indeed equivalent to the differentiated programs in Figures 1 and 2.

The simplification rules in question are given below in Table 1. In the implementation, these are (at the time of writing) implemented in the simplifier for the concrete target language.15

Table 1.

Name             | Rule                                                                                  | Justification
lamApp           | (λx. e) a ⇝ let x = a in e                                                            | lambda subst., let subst.
letRotate        | let x = (let y = a in b) in e ⇝ let y = a in let x = b in e                           | let substitution
letPairSplit     | let x = ⟨a, b⟩ in e ⇝ let x1 = a in let x2 = b in e[⟨x1, x2⟩/x]                       | let substitution
letInline        | let x = a in e ⇝ e[a/x] (if a is cheap or used at most once in e)                     | let substitution
pairProj1        | fst ⟨a, b⟩ ⇝ a                                                                        | β pair
pairProj2        | snd ⟨a, b⟩ ⇝ b                                                                        | β pair
pairEta          | ⟨fst a, snd a⟩ ⇝ a                                                                    | η pair
letProj1         | fst (let x = a in e) ⇝ let x = a in fst e                                             | let substitution
letProj2         | snd (let x = a in e) ⇝ let x = a in snd e                                             | let substitution
plusZero1        | plus zero a ⇝ a                                                                       | equational rule
plusZero2        | plus a zero ⇝ a                                                                       | equational rule
plusPair         | plus ⟨a, b⟩ ⟨c, d⟩ ⇝ ⟨plus a c, plus b d⟩                                             | equational rule
plusLet1         | plus (let x = e in a) b ⇝ let x = e in plus a b                                       | let substitution
plusLet2         | plus a (let x = e in b) ⇝ let x = e in plus a b                                       | let substitution
algebra          | 0 * x, x * 0 ⇝ 0 (etc.)                                                               | basic algebra
letLamPairSplit  | let f = λx. ⟨a, b⟩ in e ⇝ let f1 = λx. a in let f2 = λx. b in e[λx. ⟨f1 x, f2 x⟩/f]   | η lambda, let subst.
mapPairSplit     | map (λx. ⟨b, c⟩) a ⇝ let a′ = a in ⟨map (λx. b) a′, map (λx. c) a′⟩                   | equational rule
mapZero          | map (λx. zero) a ⇝ zero                                                               | equational rule
sumZip           | sum (zip a b) ⇝ ⟨sum a, sum b⟩                                                        | equational rule
sumZero          | sum zero ⇝ zero                                                                       | equational rule
sumSingleton     | sum (map (λx. [x]) e) ⇝ e                                                             | equational rule
Table 1. The Simplification Rules to Aid Legibility That Are Available in the CHAD Implementation in Haskell on the Concrete Target Language

The last column in the table shows the justification for the simplification rule: “let substitution,” “β pair,” “η pair,” “lambda substitution,” and “η lambda” refer to the corresponding rules in Figure 4. The equational rules are either from Figure 6 in the case of t + 0 = t and its symmetric variant, from the type-based translation rules in Section 9.2 in the case of plus on pairs, or otherwise general laws that hold for zero (i.e., 0) and/or the array combinators in question.

Note that all of the rules preserve the time complexity of the program through careful sharing of values with let-bindings. A let-binding could only be argued to increase work if its value is used at most once in the body of the let, but in that case the “letInline” rule eliminates the let-binding anyway.
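As an illustration, a few of the rules in Table 1 can be phrased as a bottom-up rewrite on a toy expression type. This is a minimal sketch and not the implementation's actual AST or rule set; the names `Expr`, `uses`, `subst`, and `simp` are ours, and the "letInline" guard below checks only the "used at most once" half of the side condition:

```haskell
-- Toy expression language; names and representation are illustrative only.
data Expr
  = Var String
  | Lam String Expr
  | App Expr Expr
  | Let String Expr Expr
  | Pair Expr Expr
  | Fst Expr | Snd Expr
  | Zero | Plus Expr Expr
  deriving (Show, Eq)

-- Count free occurrences of a variable (for the "letInline" side condition).
uses :: String -> Expr -> Int
uses x e = case e of
  Var y     -> if x == y then 1 else 0
  Lam y b   -> if x == y then 0 else uses x b
  App f a   -> uses x f + uses x a
  Let y a b -> uses x a + (if x == y then 0 else uses x b)
  Pair a b  -> uses x a + uses x b
  Fst a     -> uses x a
  Snd a     -> uses x a
  Zero      -> 0
  Plus a b  -> uses x a + uses x b

-- Naive substitution; we assume all bound names are distinct, so no
-- capture-avoidance is needed in this sketch.
subst :: String -> Expr -> Expr -> Expr
subst x v e = case e of
  Var y     -> if x == y then v else Var y
  Lam y b   -> Lam y (if x == y then b else subst x v b)
  App f a   -> App (subst x v f) (subst x v a)
  Let y a b -> Let y (subst x v a) (if x == y then b else subst x v b)
  Pair a b  -> Pair (subst x v a) (subst x v b)
  Fst a     -> Fst (subst x v a)
  Snd a     -> Snd (subst x v a)
  Zero      -> Zero
  Plus a b  -> Plus (subst x v a) (subst x v b)

-- One bottom-up pass applying a handful of rules from Table 1.
simp :: Expr -> Expr
simp e = rule (descend e)
 where
  descend t = case t of
    Lam y b   -> Lam y (simp b)
    App f a   -> App (simp f) (simp a)
    Let y a b -> Let y (simp a) (simp b)
    Pair a b  -> Pair (simp a) (simp b)
    Fst a     -> Fst (simp a)
    Snd a     -> Snd (simp a)
    Plus a b  -> Plus (simp a) (simp b)
    _         -> t
  rule t = case t of
    App (Lam x b) a           -> simp (Let x a b)    -- lamApp
    Fst (Pair a _)            -> a                   -- pairProj1
    Snd (Pair _ b)            -> b                   -- pairProj2
    Plus Zero a               -> a                   -- plusZero1
    Plus a Zero               -> a                   -- plusZero2
    Let x a b | uses x b <= 1 -> simp (subst x a b)  -- letInline (used at most once)
    _                         -> t
```

For instance, `simp (App (Lam "x" (Plus (Var "x") Zero)) (Var "y"))` first turns the β-redex into a let, drops the `Plus ... Zero`, and then inlines the singly-used let, leaving `Var "y"`.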

Skip B.1First-order Example Programs Section

B.1 First-order Example Programs

The output of our implementation for the forward derivative of Figure 1(a) and the reverse derivative of Figure 1(b) can be found below in Figure 9.

Fig. 9.

Fig. 9. Output of our Haskell implementation when executed on the first-order example programs in Figure 1(a) and (b).

The simplification rules listed above have already been applied (otherwise the output would, indeed, be much less readable). The only change we made to the literal text output of the implementation is formatting and renaming of variables.

For both programs, we note that in the implementation, environments are encoded using snoc-lists: that is, the environment Γ = x1: real, x2: real, x3: real is represented as ((ϵ, x1: real), x2: real), x3: real. Hence, for this Γ, \( \overrightarrow {\mathcal {D}}_{}(\Gamma)_2 \) would be represented as \( ((\underline{\mathbf {1}}\boldsymbol {\mathop {*}} \underline{\mathbf {real}}) \boldsymbol {\mathop {*}} \underline{\mathbf {real}}) \boldsymbol {\mathop {*}} \underline{\mathbf {real}} \). This has an effect on the code in Figure 9, where in subfigure (a), the variable x′ has type \( \mathbf {1}\boldsymbol {\mathop {*}} \mathbf {real} \) instead of real, which it had in Figure 1(c). Furthermore, the output in Figure 9(b), which is the cotangent (i.e., adjoint) of the environment, has type \( (((\mathbf {1}\boldsymbol {\mathop {*}} \mathbf {real}) \boldsymbol {\mathop {*}} \mathbf {real}) \boldsymbol {\mathop {*}} \mathbf {real}) \boldsymbol {\mathop {*}} \mathbf {real} \), meaning that the “zero” term in the result has type 1 and is thus equal to ⟨⟩.
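For concreteness, the snoc-list encoding of environments corresponds to left-nested pairs. The following toy rendering (the type and function names are ours, not the implementation's; we use `()` for 1 and `Double` for real) shows the tangent environment for Γ = x1: real, x2: real, x3: real and how a single tangent is projected out:

```haskell
-- Tangent of the environment x1:real, x2:real, x3:real under the snoc-list
-- encoding (((eps, real), real), real), with eps rendered as ().
type TanEnv = ((((), Double), Double), Double)

-- Projecting the tangent of x2: peel one snoc cell, then take the payload.
tanX2 :: TanEnv -> Double
tanX2 (((_, _), d2), _) = d2

-- Example environment tangent: x1 -> 1.0, x2 -> 2.0, x3 -> 3.0.
example :: TanEnv
example = ((((), 1.0), 2.0), 3.0)
```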

We hope that it is evident to the reader that these outputs are equivalent to the programs given in Figure 1(c) and (d).

Skip B.2Second-order Example Program Figure 2(a) Section

B.2 Second-order Example Program Figure 2(a)

The implementation’s forward derivative of Figure 2(a) is shown below in Figure 10. This version contains a let-bound function “g” that does not occur in the code of Figure 2(c). However, inlining this function in the two places where it is used does not increase work: due to pair projection and the equational rules concerning “zero” and “plus,” only one half of the “plus” expression in “g” remains at each of the two invocation sites of “g.” A version with “g” manually inlined and simplified using the stated rules is shown in Figure 11. (Our automatic simplifier is currently not smart enough to prove that inlining “g” does not increase work, and hence keeps it let-bound.)

Fig. 10.

Fig. 10. Output of our Haskell implementation of the forward AD macro when executed on Figure 2(a). The implementation writes operations on scalar arrays (realn) with a “v” prefix.

Fig. 11.

Fig. 11. Manually simplified code from Figure 10, as described in the text.

First, we notice that the type of the variable x′ here is \( \mathbf {1}\boldsymbol {\mathop {*}} \underline{\mathbf {real}} \) instead of real because of the snoc-list representation of environments, as was discussed in the previous subsection. Because of this, the expression snd x′ in Figure 11 is equivalent to the expression x′ in Figure 2(c). Knowing this, let us work out how the implementation got this output code, and how it is equivalent to the code in Figure 2(c).

For the purposes of this explanation, the most important component of the source of Figure 2(a) is its first line: “let f = λz. x * z + 1 in....” Recall the rule for let-bindings from the forward AD macro of Section 7.4: \( \begin{align*} & \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}(\mathbf {let}\,x={ t}\,\mathbf {in}\,{ s}) \quad \stackrel{\mathrm{def}}{=}\quad \mathbf {let}\,\langle x, x^{\prime }\rangle =\overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t})\,\mathbf {in}\, \mathbf {let}\,\langle y, y^{\prime }\rangle =\overrightarrow {\mathcal {D}}_{\overline{\Gamma },x}({ s})\,\mathbf {in}\, \langle y, \underline{\lambda } \mathsf {v}.\,y^{\prime }{\bullet } \langle \mathsf {v}, x^{\prime }{\bullet } \mathsf {v}\rangle \rangle . & \end{align*} \) Hence, the binding of t is transformed to a binding of \( \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}({ t}) \), which is then used somehow in the body. Since t is the term “λz. x * z + 1” here, and since the macro rule for lambda abstraction is as follows: \( \begin{align*} & \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}(\lambda x.\, { t}) \quad \stackrel{\mathrm{def}}{=}\quad \mathbf {let}\,y=\lambda x.\,\overrightarrow {\mathcal {D}}_{\overline{\Gamma },x}({ t})\,\mathbf {in}\, \langle \lambda x.\, \mathbf {let}\,\langle z, z^{\prime }\rangle =y\,x\,\mathbf {in}\,\langle z, \underline{\lambda } \mathsf {v}.\,z^{\prime }{\bullet } \langle \underline{0}, \mathsf {v}\rangle \rangle , \underline{\lambda } \mathsf {v}.\,\lambda x.\,(\mathbf {snd}\,(y\,x)){\bullet } \langle \mathsf {v}, \underline{0}\rangle \rangle , & \end{align*} \) we get the following result for \( \overrightarrow {\mathcal {D}}_{\overline{\Gamma }}(\lambda z.\, x * z + 1) \) with Γ = ϵ, x: real: \( \begin{align*} \overrightarrow {\mathcal {D}}_{\epsilon ,x}(\lambda z.\, x * z + 1) = \; &\mathbf {let}\ y = \lambda z.\, \langle x * z + 1, \underline{\lambda } \langle \langle \langle \rangle ,x^{\prime }\rangle ,z^{\prime }\rangle .\, x * z^{\prime } + z * x^{\prime }\rangle \\ &\mathbf {in}\ \langle \lambda z.\, \mathbf {let}\ \langle u, u^{\prime }\rangle = y\ z\ \mathbf {in}\ \langle u, \underline{\lambda } \mathsf {v}.\,u^{\prime }{\bullet } \langle \underline{0}, \mathsf {v}\rangle \rangle , \underline{\lambda } \mathsf {v}.\,\lambda z.\, (\mathbf {snd}\,(y\ z)){\bullet } \langle \mathsf {v}, \underline{0}\rangle \rangle . \end{align*} \) This is the term that appears on the right-hand side of a let-binding in the forward AD transformed version of the code from Figure 2(a). Inlining of y and some further simplification yields \( \begin{align*} \overrightarrow {\mathcal {D}}_{\epsilon ,x}(\lambda z.\, x * z + 1) = \; &\langle \lambda z.\, \langle x * z + 1, \underline{\lambda } \mathsf {v}.\,x * \mathsf {v}\rangle , \underline{\lambda } \mathsf {v}.\,\lambda z.\, z * \mathbf {snd}\,\mathsf {v}\rangle , \end{align*} \) in which we recognize f and f′ from Figure 2(c).

The implementation proceeded differently in the simplification of \( \overrightarrow {\mathcal {D}}_{\epsilon ,x}(\lambda z.\, x * z + 1) \), namely by splitting the lambda bound to y using “letLamPairSplit.” The second of the resulting two lambda functions is λz. λ ⟨⟨⟨⟩, x′⟩, z′⟩. x*z′ + z*x′, or by desugaring pattern matching, \( \lambda z.\, \underline{\lambda } \mathsf {v}.\,x * \mathbf {snd}\,\mathsf {v}+ z * \mathbf {snd}\,(\mathbf {fst}\,\mathsf {v}) \). We recognize this expression as the right-hand side of the f′ binding in Figure 11.

After seeing this, we leave it to the reader to show that the code in Figures 11 and 2(c) is equivalent under forward and reverse let-substitution.

Skip B.3Second-order Example Program Figure 2(b) Section

B.3 Second-order Example Program Figure 2(b)

When the implementation performs reverse AD on the code in Figure 2(b) and simplifies the result using the simplification rules in Table 1, the result is the code that is shown below in Figure 12.

Fig. 12.

Fig. 12. Output of our Haskell implementation of the reverse AD macro when executed on Figure 2(b). The implementation writes operations on scalar arrays (realn) with a “v” prefix; list operations are written without a prefix. Contrast this to Figure 2, where all arrays are implemented as lists and hence there are no pure-array operations that would have been written using a “v” prefix.

First, note the evalOp EScalProd. Since scalar multiplication is implemented as an operation (\( \mathsf {op} \)), it does not take two separate arguments but rather a pair of the two scalars to multiply. Due to application of the “pairEta” simplification rule from Table 1, the argument to the multiplication was reduced from ⟨fst p, snd p⟩ to just “p,” preventing the pretty-printer from showing fst p * snd p; instead, this becomes a bare operation application to the argument “p.” In the code in Figure 2(d), this lambda is the argument to the “map” in the cotangents block, meaning that “p” stands for ⟨x2i, y′⟩.

In this example, a copower structure is created because of the usage of a function abstraction in the code to be differentiated using reverse AD. Here, this copower was interpreted using lists as described in Section 9.2. The “toList” function converts an array of scalars (i.e., a value of type realn) to a list of scalars. The code in Figure 2(d) goes further and interprets all arrays as lists, effectively removing the distinction between arrays originating from arrays in the source program and lists originating from copower values. Together with recognizing the snoc-list representation of environments, it becomes sufficient to inline some let-bindings in Figure 2(d) to arrive at code equivalent to the code shown here.
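The list interpretation of copowers can be sketched in a few lines. This is our own rendering, not the implementation's actual code (which, as footnote 12 notes, differs in detail): a copower value is a list of argument/cotangent pairs, its zero is the empty list, and plus is list concatenation (cf. footnote 11). The names `Copower`, `zeroC`, `plusC`, and `elimC` are hypothetical:

```haskell
-- Copower of an argument type a with a cotangent type b, interpreted as a
-- list of pairs accumulated at the function's call sites.
type Copower a b = [(a, b)]

-- The commutative-monoid structure: zero is the empty list, ...
zeroC :: Copower a b
zeroC = []

-- ... and plus is list concatenation (cf. footnote 11).
plusC :: Copower a b -> Copower a b -> Copower a b
plusC = (++)

-- Eliminating a copower: apply a map to each argument/cotangent pair and
-- sum the resulting contributions.
elimC :: Num c => (a -> b -> c) -> Copower a b -> c
elimC f = sum . map (uncurry f)
```

For example, `elimC (\x v -> x * v) (plusC [(2, 3)] [(4, 5)])` accumulates 2·3 + 4·5 = 26.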

ACKNOWLEDGMENTS

We are grateful to the anonymous reviewers who gave excellent comments on earlier versions of this article that prompted various much-needed rewrites.

Footnotes

  1. The program transformations are really called \( \overrightarrow {\mathcal {D}}_{}(-)_2 \) and \( \overleftarrow {\mathcal {D}}_{}(-)_2 \) here. In Section 2.2, we discuss that it is better to define our actual program transformations to have a slightly different type. The second half of those transformations (defined in Section 2.3) corresponds to these \( \overrightarrow {\mathcal {D}}_{}(-)_2 \) and \( \overleftarrow {\mathcal {D}}_{}(-)_2 \).

  2. For forward AD, we can also choose to implement instead \( \begin{align*} \mathcal {T}_{}^{\prime }f:\; &(\mathbb {R}^n\times \mathbb {R}^n)\rightarrow (\mathbb {R}^m\times \mathbb {R}^m)\\ &(x, v) \mapsto (f(x), Df(x)(v)), \end{align*} \) together with its chain rule as code transformations. This leads to a different style of forward AD based on a dual numbers representation. Reference [21] gives an analysis of this style of forward AD, similar to the treatment of reverse AD and (non-dual number) forward AD in this article. Although dual numbers forward AD is more efficient in its memory use and preferable in practical implementations, it does not have an obvious reverse mode variant. See Section 11.1 for more discussion.

  3. In the rest of the article, we also consider the unit type 1 a first-order type. These types are also called ground types.

  4. In Reference [42], we worked with a semantics in terms of diffeological spaces and differentiable functions, instead, to ensure that any first-order function is differentiable. This choice amounted to a separation between the proof that every first-order denotation is differentiable and that AD computes the correct derivative. To make the presentation of this article more accessible, we have chosen to simply work with sets and functions and to prove differentiability of every first-order denotation simultaneously in the proof that AD computes the correct derivative.

  5. For information on the exact simplifications performed, see Appendix B.

  6. Here, we work with statically sized arrays to simplify the theoretical development. However, in our implementation, we show that CHAD applies equally well to types of varying dimension such as dynamically sized arrays.

  7. Observe that this restriction does not exclude, for example, functions that are differentiable almost everywhere like ReLU in a meaningful way, as such functions can be approximated with differentiable functions. Given how coarse an approximation real numbers already are to the reality of floating point arithmetic, the distinction between everywhere differentiable and almost-everywhere differentiable is not meaningful in practice.

  8. A locally \( \mathcal {C} \)-indexed category \( \mathcal {L} \) can be equivalently defined as a category \( \mathcal {L} \) enriched over the presheaf category \( [\mathcal {C}^{op},\mathbf {Set}] \). We prefer to consider locally indexed categories as special cases of indexed categories, instead, as CHAD’s natural generalization to data types of varying dimension, such as unsized arrays or sum types, requires us to work with more general (non-locally) indexed categories [30].

  9. This condition says that the types !C′⊗CL and C′⇒CL do not depend on C. We need to add this condition to match the syntax of the target language, in which copowers and powers only depend on two argument types.

  10. In fact, we conjecture our target language to be pure in the sense that reductions are confluent.

  11. Note that the implementation of t + Copower(τ, σ)s is merely list concatenation written using a fold.

  12. To be exact, the to-concrete compilation step in the implementation does convert copowers and linear functions to lists and regular functions, but retains 0 and + primitives. We feel this brings a major increase in readability of the output. Actually implementing the inductive definitions for 0 and + is easy when reified types (singletons) are added to the implementation, which itself is an easy change if one modifies the LT type class.

  13. We prefer to work with this elementary formulation of maps rather than the usual higher-order formulation of \( \begin{equation*} \frac {\Gamma \vdash { t}: \mathbf {real}\rightarrow \mathbf {real}\quad \Gamma \vdash { s} : \mathbf {real}^n}{\Gamma \vdash \mathbf {map}({ t},{ s}) : \mathbf {real}^n}, \end{equation*} \) because it makes sense in the wider context of languages without function types as well and because it simplifies its CHAD correctness proof. Note that both are equivalent in the presence of function types: map(t, s) = map(x.tx, s) and map(x.t, s) = map(λx. t, s).

    Footnote
  14. As also mentioned in Section 9, the implementation is available at https://github.com/VMatthijs/CHAD.

  15. https://github.com/VMatthijs/CHAD/blob/eedd6b12f224ed28ef9ca8650718d901c2b5e6a3/src/Concrete/Simplify.hs.

REFERENCES

  [1] Abadi Martín, Barham Paul, Chen Jianmin, Chen Zhifeng, Davis Andy, Dean Jeffrey, Devin Matthieu, Ghemawat Sanjay, Irving Geoffrey, Isard Michael, et al. 2016. TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI’16). 265–283.
  [2] Abadi Martín and Plotkin Gordon D.. 2020. A simple differentiable programming language. In Proceedings of the ACM SIGPLAN Symposium on Principles of Programming Languages (POPL’20). ACM.
  [3] Barber Andrew and Plotkin Gordon. 1996. Dual Intuitionistic Linear Logic. University of Edinburgh, Department of Computer Science, Laboratory for Foundations of Computer Science.
  [4] Barthe Gilles, Crubillé Raphaëlle, Lago Ugo Dal, and Gavazzo Francesco. 2020. On the versatility of open logical relations - continuity, automatic differentiation, and a containment theorem. In Proceedings of the 29th European Symposium on Programming (ESOP’20), Held as Part of the European Joint Conferences on Theory and Practice of Software (ETAPS’20), Lecture Notes in Computer Science, Vol. 12075, Müller Peter (Ed.). Springer, 56–83.
  [5] Baydin Atilim Gunes and Pearlmutter Barak A.. 2018. Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1 (2018), 1–43.
  [6] Benton P. Nick. 1994. A mixed linear and non-linear logic: Proofs, terms and models. In Proceedings of the International Workshop on Computer Science Logic. Springer, 121–135.
  [7] Betancourt Michael, Margossian Charles C., and Leos-Barajas Vianey. 2020. The discrete adjoint method: Efficient derivatives for functions of discrete sequences. arXiv:2002.00326. Retrieved from https://arxiv.org/abs/2002.00326.
  [8] Brunel Alois, Mazza Damiano, and Pagani Michele. 2020. Backpropagation in the simply typed lambda-calculus with linear negation. In Proceedings of the ACM SIGPLAN Symposium on Principles of Programming Languages (POPL’20).
  [9] Carpenter Bob, Hoffman Matthew D., Brubaker Marcus, Lee Daniel, Li Peter, and Betancourt Michael. 2015. The Stan math library: Reverse-mode automatic differentiation in C++. arXiv:1509.07164. Retrieved from https://arxiv.org/abs/1509.07164.
  [10] Chakravarty Manuel M. T., Keller Gabriele, Lee Sean, McDonell Trevor L., and Grover Vinod. 2011. Accelerating Haskell array codes with multicore GPUs. In Proceedings of the 6th Workshop on Declarative Aspects of Multicore Programming. 3–14.
  [11] Chakravarty Manuel M. T., Keller Gabriele, Lee Sean, McDonell Trevor L., and Grover Vinod. 2011. Accelerating Haskell array codes with multicore GPUs. In Proceedings of the 6th Workshop on Declarative Aspects of Multicore Programming (DAMP’11). ACM, New York, NY, 3–14.
  [12] Cockett J. Robin B., Cruttwell Geoff S. H., Gallagher Jonathan, Lemay Jean-Simon Pacaud, MacAdam Benjamin, Plotkin Gordon D., and Pronk Dorette. 2020. Reverse derivative categories. In Proceedings of the Annual Conference on Computer Science Logic (CSL’20).
  [13] Curien P.-L.. 1986. Categorical combinators. Inf. Contr. 69, 1–3 (1986), 188–254.
  [14] Egger Jeff, Møgelberg Rasmus Ejlers, and Simpson Alex. 2009. Enriching an effect calculus with linear types. In Proceedings of the International Workshop on Computer Science Logic. Springer, 240–254.
  [15] Ehrhard Thomas. 2018. An introduction to differential linear logic: Proof-nets, models and antiderivatives. Math. Struct. Comput. Sci. 28, 7 (2018), 995–1060.
  [16] Elliott Conal. 2018. The simple essence of automatic differentiation. Proc. ACM Program. Lang. 2, ICFP (2018), 1–29.
  [17] Fiore Marcelo P.. 2007. Differential structure in models of multiplicative biadditive intuitionistic linear logic. In Proceedings of the International Conference on Typed Lambda Calculi and Applications. Springer, 163–177.
  [18] Hannemann-Tamas Ralf, Munoz Diego A., and Marquardt Wolfgang. 2015. Adjoint sensitivity analysis for nonsmooth differential-algebraic equation systems. SIAM J. Sci. Comput. 37, 5 (2015), A2380–A2402.
  [19] Henriksen Troels, Serup Niels G. W., Elsman Martin, Henglein Fritz, and Oancea Cosmin E.. 2017. Futhark: Purely functional GPU-programming with nested parallelism and in-place array updates. In Proceedings of the 38th ACM SIGPLAN Conference on Programming Language Design and Implementation. 556–571.
  [20] Huot Mathieu, Staton Sam, and Vákár Matthijs. 2022. Higher order automatic differentiation of higher order functions. Logical Methods in Computer Science 18, 1 (2022), 41:1–41:34.
  [21] Huot Mathieu, Staton Sam, and Vákár Matthijs. 2020. Correctness of automatic differentiation via diffeologies and categorical gluing. In Proceedings of the International Conference on Foundations of Software Science and Computation Structures (FoSSaCS’20).
  [22] Innes Michael. 2018. Don’t unroll adjoint: Differentiating SSA-form programs. arXiv:1810.07951. Retrieved from https://arxiv.org/abs/1810.07951.
  [23] Johnstone Peter T.. 2002. Sketches of an Elephant: A Topos Theory Compendium, Vol. 2. Oxford University Press.
  [24] Johnstone Peter T., Lack Stephen, and Sobocinski P.. 2007. Quasitoposes, quasiadhesive categories and Artin glueing. In Proceedings of the Conference on Algebra and Coalgebra in Computer Science (CALCO’07).
  [25] Krawiec Faustyna, Krishnaswami Neel, Peyton Jones Simon, Ellis Tom, Fitzgibbon Andrew, and Eisenberg R.. 2022. Provably correct, asymptotically efficient, higher-order reverse-mode automatic differentiation. Proc. ACM Program. Lang. 6, POPL (2022), 1–30.
  [26] Kriegl Andreas and Michor Peter W.. 1997. The Convenient Setting of Global Analysis, Vol. 53. American Mathematical Soc.
  [27] Lambek Joachim and Scott Philip J.. 1988. Introduction to Higher-order Categorical Logic, Vol. 7. Cambridge University Press.
  [28] Lang Serge. 2002. Algebra. Springer, New York, NY.
  [29] Levy Paul Blain. 2012. Call-by-push-value: A Functional/imperative Synthesis, Vol. 2. Springer Science & Business Media.
  [30] Lucatelli Nunes Fernando and Vákár Matthijs. 2021. CHAD for expressive total languages. arXiv:2110.00446. Retrieved from https://arxiv.org/abs/2110.00446.
  [31] Mak Carol and Ong Luke. 2020. A differential-form pullback programming language for higher-order reverse-mode automatic differentiation. arXiv:2002.08241. Retrieved from https://arxiv.org/abs/2002.08241.
  [32] Margossian Charles C.. 2019. A review of automatic differentiation and its efficient implementation. Data Min. Knowl. Discov. 9, 4 (2019), e1305.
  [33] Mazza Damiano and Pagani Michele. 2021. Automatic differentiation in PCF. Proc. ACM Program. Lang. 5, POPL (2021), 1–27.
  [34] McDonell Trevor L., Chakravarty Manuel M. T., Keller Gabriele, and Lippmeier Ben. 2013. Optimising purely functional GPU programs. In Proceedings of the 18th ACM SIGPLAN International Conference on Functional Programming (ICFP’13). ACM, New York, NY, 49–60.
  [35] Mellies Paul-André. 2009. Categorical semantics of linear logic. Panor. Synth. 27 (2009), 15–215.
  [36] Paszke Adam, Gross Sam, Chintala Soumith, Chanan Gregory, Yang Edward, DeVito Zachary, Lin Zeming, Desmaison Alban, Antiga Luca, and Lerer Adam. 2017. Automatic differentiation in PyTorch. In NIPS 2017 Workshop on Autodiff. https://openreview.net/forum?id=BJJsrmfCZ.
  [37] Pearlmutter Barak A. and Siskind Jeffrey Mark. 2008. Reverse-mode AD in a functional framework: Lambda the ultimate backpropagator. ACM Trans. Program. Lang. Syst. 30, 2 (2008), 7.
  [38] Shaikhha Amir, Fitzgibbon Andrew, Vytiniotis Dimitrios, and Peyton Jones Simon. 2019. Efficient differentiable programming in a functional array-processing language. Proc. ACM Program. Lang. 3, ICFP (2019), 97.
  [39] Skorstengaard Lau. 2019. An introduction to logical relations. arXiv:1907.11133. Retrieved from http://arxiv.org/abs/1907.11133.
  [40] Tsiros Periklis, Bois Frederic Y., Dokoumetzidis Aristides, Tsiliki Georgia, and Sarimveis Haralambos. 2019. Population pharmacokinetic reanalysis of a Diazepam PBPK model: A comparison of Stan and GNU MCSim. J. Pharmacokinet. Pharmacodynam. 46, 2 (2019), 173–192.
  [41] Vákár Matthijs. 2017. In search of effectful dependent types. arXiv:1706.07997. Retrieved from https://arxiv.org/abs/1706.07997.
  [42] Vákár Matthijs. 2021. Reverse AD at higher types: Pure, principled and denotationally correct. In Proceedings of the European Symposium on Programming (ESOP’21).
  [43] Vytiniotis Dimitrios, Belov Dan, Wei Richard, Plotkin Gordon, and Abadi Martin. 2019. The differentiable curry. In NeurIPS Workshop on Program Transformations.
  [44] Wang Fei, Wu Xilun, Essertel Gregory, Decker James, and Rompf Tiark. 2019. Demystifying differentiable programming: Shift/reset the penultimate backpropagator. Proc. ACM Program. Lang. 3, ICFP (2019).
  [45] Wilson Paul W. and Zanasi Fabio. 2020. Reverse derivative ascent: A categorical approach to learning boolean circuits. In Proceedings of the 3rd Annual International Applied Category Theory Conference (ACT’20), Spivak David I. and Vicary Jamie (Eds.). 247–260.


Published in

ACM Transactions on Programming Languages and Systems, Volume 44, Issue 3 (September 2022), 302 pages.
ISSN: 0164-0925 • EISSN: 1558-4593 • Issue DOI: 10.1145/3544000

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 March 2021
• Revised: 1 October 2021
• Accepted: 1 March 2022
• Published: 17 August 2022

          Qualifiers

          • research-article
          • Refereed

        PDF Format

        View or Download as a PDF file.

        PDF

        eReader

        View online with eReader.

        eReader

        HTML Format

        View this article in HTML Format .

        View HTML Format
        About Cookies On This Site

        We use cookies to ensure that we give you the best experience on our website.

        Learn more

        Got it!