Asynchronous Modal FRP

Over the past decade, a number of languages for functional reactive programming (FRP) have been proposed that use modal types to ensure properties such as causality, productivity, and absence of space leaks. So far, almost all of these languages have included a modal operator for delay on a global clock. For some applications, however, a global clock is unnatural and leads to leaky abstractions as well as inefficient implementations. While modal languages without a global clock have been proposed, no operational properties have yet been proved about them. This paper proposes Async RaTT, a new modal language for asynchronous FRP, equipped with an operational semantics mapping complete programs to machines that receive asynchronous input signals and produce output signals. The main novelty of Async RaTT is a new modality for asynchronous delay, which allows each output channel to be associated at runtime with the set of input channels it depends on, so that the machine only computes new output when necessary. We prove a series of operational properties, including causality, productivity, and absence of space leaks. We also show that, although the set of input channels associated with an output channel can change during execution, upper bounds on these sets can be determined statically by the type system.


INTRODUCTION
Reactive programs are programs that engage in a dialogue with their environment, receiving input and producing output, often without ever terminating. Examples include much of the most safety-critical software in use today, such as control software and servers, as well as GUIs. Most reactive software is written in imperative languages using a combination of complex features such as callbacks and shared memory, and is for this reason error-prone and hard to reason about.
The idea of functional reactive programming (FRP) [12] is to provide the programmer with the right abstractions for writing reactive programs in a functional style, allowing for short modular programs as well as modular reasoning about them. For such abstractions to be useful, it is important that they are designed to allow efficient low-level implementations to be automatically generated from programs.
The main abstraction of FRP is that of signals, which are time-dependent values. In the case of discrete time given by a global clock, a signal can be thought of as a stream of data. A reactive program is essentially just a function taking input signals to output signals. For this to be implementable, however, it needs to be causal: the current output may only depend on current and past input. Moreover, the low-level implementations generated from high-level programs should also be free of (implicit) space and time leaks. This means that reactive programs should neither store data indefinitely, eventually causing the program to run out of space, nor repeat computations in such a way that the execution of each step becomes increasingly slower.
These requirements have led to the development of modal FRP [2, 3, 16–21], a family of languages using modal types to ensure that all programs can be implemented efficiently. The most important modal type constructor is ◯, used to classify data available in the next time step on some global discrete clock. For example, the type of signals should satisfy the type isomorphism Sig A ≅ A × ◯(Sig A), stating that the current value of the signal is available now, but its future values are only available after the next time step. Using this encoding of signals, one can ensure that all reactive programs are causal. Many modal FRP languages also include a variant of the Nakano [22] guarded fixed point operator of type (◯A → A) → A. The type ensures that recursive calls are only performed in future steps, thus ensuring termination of each step of computation, a property called productivity. Often these languages also include a modality □ used to classify data that is stable, in the sense that it can be kept until the next time step without causing space leaks. Other modal constructors, such as ◇ (eventually), can be encoded, suggesting a Curry-Howard correspondence between linear temporal logic [25] and modal FRP [4, 9, 16, 18].
However, for many applications, the notion of a global clock associated with the modal operator ◯ may not be natural, and it can also lead to inefficient implementations. Consider, for example, a GUI which takes an input signal of user keystrokes, as well as other signals that are updated more frequently, such as the mouse pointer coordinates. The global clock would have to tick at least as fast as the updates to the fastest signal, and updates on the keystroke signal will only happen on very few ticks of the global clock. Perhaps the most natural way to model the keystroke signal is therefore as a signal of type Maybe(Char). In the modal FRP languages of Bahr et al. [3] and Krishnaswami [19], the processor for this signal has to wake up on each tick of the global clock, check for input, and often also transport some local state to the next time step by calling itself recursively. Perhaps more problematic, however, is that an important abstraction barrier is broken when a processor for an input signal is given access to the global clock. Instead, we would like to write the GUI as a collection of processors for asynchronous input signals that are only activated upon updates to the signals on which they depend.

Async RaTT
This paper presents Async RaTT, a modal FRP language in the RaTT family [2–4], designed for processing asynchronous input. A reactive program in Async RaTT reads signals from a set of input channels and in response sends signals to a set of output channels. In a GUI application, typical input channels would include the mouse position and keystroke events, while output channels could, for example, include the content or the colour of a text field.
For each output channel o, the reactive program keeps track of the set of input channels on which o depends (cf. Figure 1a). We refer to such a set of input channels as a clock. When the signal on an input channel κ is updated, only those output channels whose clock contains κ will be updated. For example, the keystroke input channel might be in the clock for the text field content, but not in the clock for the text field colour. Since the program can dynamically change its internal dataflow graph, the clock associated with an output channel may change during execution (cf. Figure 1b) and so is not known at compile time. For example, the text field might fall out of focus and thus no longer react to keystrokes. We refer to the arrival of new data on an input channel in the clock θ as a tick on the clock θ.
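To make the clock discipline concrete, here is a minimal Python sketch (our own illustration, not the paper's formal semantics) in which clocks are sets of channel names and a tick wakes only the output channels whose clock contains the ticking channel:

```python
# Hypothetical output channels and their current clocks (sets of input channels).
clocks = {
    "textContent": {"keyPressed", "toggle"},  # reacts to keystrokes and toggling
    "textColour": {"toggle"},                 # ignores keystrokes entirely
}

def affected_outputs(channel, clocks):
    """Output channels that must be recomputed when `channel` ticks."""
    return sorted(o for o, clock in clocks.items() if channel in clock)

print(affected_outputs("keyPressed", clocks))  # ['textContent']
```

A tick on keyPressed wakes only textContent; a program that rewires its dataflow graph would simply store a different clock in `clocks` for the next step.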
Async RaTT has a modal operator □ used to classify stable data, as well as two new modalities: ∃ for asynchronous delays and ∀ for delays on the global clock. A value of type ∃A is a pair consisting of a clock θ and a computation that can be executed to return data of type A on the first tick on θ. The type ∃A can therefore be thought of as an existential type. The clocks of output channels, as illustrated in Figure 1, are stored in the first component of this existential type. Our notion of signal is encoded as a recursive type satisfying Sig A ≅ A × ∃(Sig A). This means that the clock associated with the tail of a signal may change from one step to the next, allowing for dynamic updates of the clocks associated with output channels as in Figure 1.
Unlike the synchronous ◯, the asynchronous ∃ does not have an applicative action of type ∃(A → B) → ∃A → ∃B, because the delayed function and the delayed input may not arrive at the same time, and, to avoid space leaks, Async RaTT does not allow the first input to be stored until the second arrives. Instead, Async RaTT synchronises delayed data using an operator sync. Given two delayed computations associated with clocks θ₁ and θ₂, respectively, sync returns a delayed computation associated with the union clock θ₁ ⊔ θ₂. This delayed computation waits for an input on any input channel κ ∈ θ₁ ⊔ θ₂, and then evaluates those computations that can be evaluated, depending on whether κ ∈ θ₁, κ ∈ θ₂, or both. For example, if the input arrives on a channel κ ∈ θ₁ \ θ₂, only the first delayed computation is evaluated. The sync operator can be used to implement operators like switch : Sig A → ∃(Sig A) → Sig A, which dynamically update the dataflow graph of a program.
Note that sync can be read as a linear time axiom: given two clocks, either one ticks before the other, or they tick simultaneously. Async RaTT programs therefore depend on the run-time environment to schedule the order in which inputs are processed. What we mean by asynchrony is that output channels are updated asynchronously. This is reflected in the type system by ∃ not being an applicative functor, as explained above.
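The case analysis performed by sync (and by the select primitive introduced in section 2) can be illustrated with a small Python sketch of our own, where clocks are again sets of channel names:

```python
def select_case(clock1, clock2, channel):
    """Classify a tick on `channel` against the union clock clock1 ⊔ clock2."""
    assert channel in clock1 | clock2, "the tick must be on the union clock"
    if channel in clock1 and channel in clock2:
        return "Both"   # both delayed computations can be evaluated now
    elif channel in clock1:
        return "Left"   # only the first runs; the second stays delayed
    else:
        return "Right"  # only the second runs; the first stays delayed

print(select_case({"mouse"}, {"key"}, "mouse"))       # Left
print(select_case({"mouse", "key"}, {"key"}, "key"))  # Both
```

This mirrors the linear time reading: on any tick of the union clock, exactly one of the three cases applies.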
The modal type ∀A classifies computations that can be run at any time in the future, but not now. It is used in the guarded fixed point operator, which in Async RaTT has type □(∀A → A) → A. The input to the fixed point operator must be a stable function (as classified by □), because it will be used in unfoldings at any time in the future. The use of ∀ restricts fixed points to only unfold in the future, ensuring termination of each step of computation.

Operational Semantics and Results
We present an operational semantics mapping each complete Async RaTT program to a machine that transforms a sequence of inputs received on its input channels into a sequence of outputs on its output channels. The transformation is done in steps, processing one input at a time and producing new outputs on the affected output channels.
The operational semantics consists of two parts. The first is the evaluation semantics, which describes the evaluation of a term in each step. It takes a term and a store and returns a value and an updated store, in the context of the current values of the input signals. The store contains delayed computations, and the evaluation semantics may run previously stored delayed computations as well as store new ones to be evaluated at a later step. The reactive semantics, on the other hand, describes the machine which, at each step, locates the output signals to be updated and executes the corresponding delayed computations to produce output.
The transformation of input to output described by the operational semantics is causal by construction. We show that it is also deterministic and productive (in the sense that each step terminates and never gets stuck). We also show that the execution of an Async RaTT program is free of (implicit) space leaks. This is achieved following a technique originally due to Krishnaswami [19]: at the end of each step of execution, the machine deletes all delayed computations that in principle could have been run in the current step, regardless of whether they actually were run. All inputs are also deleted, either at the end of the step or when the next input from the same signal arrives, depending on the kind of the input signal. Our results show that this aggressive garbage collection strategy is safe. Of course, the programmer can still write programs that accumulate space, but such leaks will be explicit in the source program, not implicitly introduced by the implementation of the language. (See Krishnaswami [19] for a further discussion of implicit vs. explicit space leaks.) Finally, we show that an upper bound on the dynamic clocks associated with an output signal can be computed statically. More precisely, given an Async RaTT program consisting of a number of output signals in a given context Δ of input channels, if one of the output signals can be typed in a smaller context Δ′ ⊆ Δ, then that signal will never need to update on input arriving on channels in Δ \ Δ′. Note that this result holds despite the existence of operators like switch, which dynamically change the dataflow graph of a program.
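The aggressive garbage-collection strategy can be pictured with a small sketch of our own: after a tick on a channel, every delayed computation whose clock contains that channel is deleted, whether or not it was actually run in that step.

```python
def gc_step(store, channel):
    """store: location -> (clock, computation). Keep only the entries that
    the tick on `channel` could not have triggered; the rest are deleted."""
    return {loc: entry
            for loc, entry in store.items()
            if channel not in entry[0]}

store = {
    0: ({"key"}, "comp0"),
    1: ({"mouse"}, "comp1"),
    2: ({"key", "mouse"}, "comp2"),
}
print(sorted(gc_step(store, "key")))  # [1]: only the mouse-only entry survives
```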

Overview
The paper is organised as follows: Async RaTT is presented along with its typing rules in section 2, and section 3 illustrates the expressivity of Async RaTT by developing a small library of signal combinators, along with examples that use the library for GUI programming and for computing integrals and derivatives of signals. The operational semantics is defined in section 4, which also illustrates it with an example and presents the main results. Section 5 sketches the proofs of the main results, and in particular defines the Kripke logical relation used in the proofs. Finally, section 6 and section 7 discuss related work, conclusions and future work. In addition, Appendix A gives a detailed account of the proof of the fundamental property of the Kripke logical relation.

ASYNC RATT
This section gives an overview of Async RaTT, referring to Figures 2 and 3 for the full specification of its syntax and typing rules.
An Async RaTT program has access to a set of input channels, each of which receives updates asynchronously from the others. To account for this, typing judgements are relative to an input channel context Δ, or input context for short. An example of such a context is keyPressed :p Nat, mouseCoord :bp Nat × Nat, time :b Float. There are three classes of input channels, each corresponding to one of the subscripts p, b, and bp in the example above. Push-only input channels, indicated by p, are input channels whose updates are pushed through the program, possibly causing output channels to be updated. In the example context above, the programmer will want to react to user key presses immediately, and so updates to keyPressed should be pushed. On the other hand, we may wish to have access to a time input channel, which we can read from at any time, but we may not want the program to wake up whenever the time changes. Time is therefore treated as a buffered-only input channel, indicated by b, whose most recent value is buffered, but whose changes do not trigger the program to update any output channel. Finally, input channels may be both buffered and pushed, indicated by bp, which means that updates are pushed, but the latest value is also kept in a buffer so that it can always be read by the program. This is unlike push-only input channels, whose values are deleted for space efficiency reasons once an update push has been handled. For example, we might want to be informed when the mouse coordinates are updated, but also keep them around so that we can read the mouse coordinates when a key is pressed, even if the mouse has not moved. We refer to input channels that are either push-only or buffered-push (p or bp) as push channels, and similarly to input channels that are either buffered-only or buffered-push as buffered channels.
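The three channel kinds can be modelled with a small Python class of our own devising: push updates are consumed when handled, while buffered values remain readable at any later time.

```python
class Channel:
    """Toy model of input-channel kinds: p (push-only), b (buffered-only),
    bp (buffered-push)."""
    def __init__(self, kind):
        assert kind in ("p", "b", "bp")
        self.kind = kind
        self.buffer = None   # latest value, kept only for buffered channels
        self.pending = None  # value waiting to be pushed through the program

    def update(self, value):
        if "p" in self.kind:
            self.pending = value  # will wake dependent output channels
        if "b" in self.kind:
            self.buffer = value   # readable at any later time

    def consume_push(self):
        value, self.pending = self.pending, None  # push-only data is deleted
        return value

mouse = Channel("bp")
mouse.update((3, 4))
mouse.consume_push()
print(mouse.buffer)  # (3, 4): still readable after the push was handled
```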
All signals are assumed to have value types, i.e., any declaration κ : A in Δ must have a value type A. The grammar for value types is given in Figure 2.

Clocks and ∃
A clock θ is intuitively a set of push channels (p or bp) that the program may have to react to. For instance, ∅, {keyPressed} and {keyPressed, mouseCoord} are all examples of clocks for the example input context mentioned earlier. The type ∃A is a type of delayed computations on an existentially quantified clock. In other words, a value of type ∃A is a pair of a clock θ and a computation that will produce a value of type A once an update on one of the input channels in θ is received. We refer to such an update as a tick on the clock θ. For example, if the associated clock is {keyPressed, mouseCoord}, then the data of type A can be computed once keyPressed or mouseCoord receives new input.
Since ∃A is an existential type, one can obtain the clock cl(v) of any value v of this type. The values of type ∃A include variables and wait κ, where κ is a push channel. The latter acts as a reference to the next value pushed on κ, and so, intuitively, cl(wait κ) = {κ}. Clocks can also be combined using a union operator ⊔. We also include an element never, which is associated with the empty clock.
We use Fitch style [10], rather than the more traditional dual-context style [11], for programming with the modal type constructors of Async RaTT. In the case of ∃, this means that introduction and elimination rules use a special symbol ✓θ, referred to as a tick, in the context. One can think of ✓θ as representing a tick of the clock θ; it divides the judgement into variables (to the left of ✓θ) received before the tick, and everything else, which happens after the tick. For example, the elimination rule should be read as: if t has type ∃A now, then after a tick on the clock cl(t), adv(t) has type A. Similarly, the introduction rule for ∃ should be read as: if t has type A after a tick on clock θ, then delayθ(t) has type ∃A now. Note that there can be at most one tick in a context. This restriction is required for the proof of the productivity theorem (Theorem 4.1), and also appears in other languages in the RaTT family [3, 4]. However, Bahr [2] shows that this restriction can be lifted by a program transformation that transforms a program typable with multiple ticks into one with only one tick and where adv is only applied to variables.
Operationally, the term delayθ(t) creates a delayed computation which is stored in a heap until the input data necessary for evaluating it is available. It is therefore not considered a value; rather, delayθ(t) evaluates to a heap reference to the delayed computation. Although heap references are part of Async RaTT, and even considered values (Figure 2), programmers are not allowed to use them directly, and there are therefore no typing rules for them.
Two delayed values v₁ : ∃A₁ and v₂ : ∃A₂ can be synchronised using select, once a tick on the union clock cl(v₁) ⊔ cl(v₂) has been received. The type of select v₁ v₂ reflects the three possible cases for such a tick: it could be in one of the clocks but not the other, or it could be in both. For example, if the input is in cl(v₁) but not cl(v₂), then data of type A₁ × ∃A₂ can be computed. The sync operator shown in section 1.1 can be defined using select. The idea of using a term like select to distinguish between these cases is due to Graulund et al. [14], who only require two cases to be defined, resorting to non-deterministic choice in the case where the tick is in the intersection of the clocks. In Async RaTT, providing all three cases is crucial for the operational results of section 4.
Note that the rules for select and adv restrict the application of these constructions to values. One reason for this is that it simplifies the metatheory by preventing arbitrary terms from occurring in contexts through clocks. It also means that clock expressions are always values that do not need to be evaluated. For example, evaluating delay with the clock annotation cl(t) applied to adv(t), for a non-value t, would require evaluating t twice: once to evaluate the clock, and then again to evaluate the term itself. Elimination of ∃ for more general terms can be done using a combination of let-binding and adv.

Stable Types and Fixed Points
General values in Async RaTT can contain references to time-dependent data, such as delayed computations stored in the heap. One of the main purposes of the type system is to prevent such references from being dereferenced at times in the future when the delayed computation has been deleted from the heap. For this reason, arbitrary data should not be kept across time steps, and this is reflected in the variable introduction rule, which prevents general variables from being used across ticks.
For some types, however, values cannot contain such references. We refer to these as stable types, and the grammar for them is given in Figure 2. Stable types include all those of the form □A, which classify computations that produce values of type A without any access to delayed computations. The introduction rule for □ constructs a delayed computation box(t) that can be evaluated at any time in the future. This requires t to be typed in a stable context, and so the hypothesis of the typing rule removes all ticks and all variables not of stable type from the context. The □ modality has a counit □A → A and a comultiplication □A → □□A. Note that wait and read are stable, in the sense that ⊢Δ box(wait κ) : □(∃A) for any κ : A ∈ Δ of kind p or bp, and ⊢Δ box(read κ) : □A for any κ : A ∈ Δ of kind b or bp. Async RaTT is a terminating calculus in the sense that each step of computation terminates. It does, however, still allow recursive definitions through a fixed point operator whose type ensures that recursive calls are only made in later time steps. More precisely, the recursion variable x in fix x.t has type ∀A, which means that the recursive definition can be unfolded to produce a term of type A at any time in the future, but not now. This is ensured through the elimination rule for ∀, which allows it to be advanced using a tick on any clock typable in the current context. Since fixed points can be called recursively at any time in the future, they must be stable, and so t is required to be typable in a stable context. The types Fix α.A are guarded recursive types that unfold to A[∃(Fix α.A)/α] via the terms into and out. The most important of these types is Sig A, defined as Fix α.(A × α), which unfolds to A × ∃(Sig A). A signal consists of a current value and a delayed tail, which at some time in the future may return a new signal. Any push channel κ : A ∈ Δ of kind p or bp induces a stable signal box(fix x. delay (into (adv(wait κ), adv(x)))) : □(∃(Sig A)), where the delay is annotated with the clock cl(wait κ) and the recursion variable x has type ∀(∃(Sig A)). These signals, of course, operate on the fixed clock {κ}, but in general the clock associated with the tail of a signal may change from one step to the next, as we shall see in section 3.
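The unfolding Sig A ≅ A × ∃(Sig A) can be imitated in Python (a rough stand-in of our own: thunks in place of delayed computations, sets of channel names in place of clocks):

```python
def sig_from_channel(channel, current):
    """Signal tracking a push channel: a current value plus a delayed tail,
    represented as (clock, thunk); here the clock is always {channel}."""
    return (current, ({channel}, lambda new: sig_from_channel(channel, new)))

value, (clock, tail) = sig_from_channel("keyPressed", "a")
print(value, clock)    # a {'keyPressed'}
value2, _ = tail("b")  # advance the tail on a keyPressed tick
print(value2)          # b
```

Unlike this fixed-clock example, in general the clock stored in the tail may differ from step to step.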
Besides all these constructions, Async RaTT also has a number of standard constructions from functional programming: sum types, product types, natural numbers and function types. The typing rules for these are completely standard, with the exception that function types can only be constructed in contexts with no ticks. Similar restrictions are known from other calculi in the RaTT family [3, 4], and are necessary for the results of section 4. The aforementioned program transformation by Bahr [2] also removes this restriction. Note that function types are not stable, since time-dependent references can be stored in closures.

PROGRAMMING IN ASYNC RATT
In this section, we demonstrate the expressiveness of Async RaTT with a number of examples. To this end, we assume a surface language that extends Async RaTT with syntactic sugar for pattern matching, recursion, and top-level definitions. These can be easily elaborated into the Async RaTT core calculus, as described in section 3.5.

Simple Signal Combinators
We start by implementing a small set of simple combinators for manipulating signals, i.e., elements of the guarded recursive type Sig A, defined as Fix α.(A × α). For readability we use the shorthand a :: as for into(a, as), so that, given a : A and as : ∃(Sig A), we have a :: as : Sig A. We start with perhaps the simplest signal combinator: the map combinator takes a stable function and applies it pointwise to a given signal. The fact that f is of type □(A → B), rather than just A → B, is crucial: since A → B is not a stable type, f would otherwise not be in scope under the delay, where we need it for the recursive call. This also has an intuitive justification: the function f will be applied to values of the input signal arbitrarily far into the future, but a closure of type A → B may contain references to delayed computations that may have been garbage collected by then. The map combinator is stateless in the sense that the current value of the output signal only depends on the current value of the input signal. We can generalise this combinator to scan, which produces an output signal that may in addition depend on the previous value of the output signal, computed as acc′ = unbox f acc a. Every time the input signal updates, the output signal produces a new value based on the current value of the input signal and the previous value of the output signal. Since the previous value of the output signal is accessed, it must be of a stable type. We use the ⇒ notation to delineate such constraints from the type signature.
For example, we can use scan to produce the running sum of an input signal of numbers. Often we only have access to a delayed signal. For instance, for each push channel κ : A ∈ Δ of kind p or bp we have the signal sigAwait : ∃(Sig A), defined by sigAwait = delay (adv(wait κ) :: sigAwait). For example, we might have the push-only channels mouseClick :p 1 or keyPress :p KeyCode available. We can derive a version of scan for such signals: scanAwait. A simple use case of scanAwait is count, a combinator that counts the updates of a given delayed signal, e.g., the number of key presses. Finally, we have the simplest combinator of all, which produces a constant signal. In isolation this combinator may appear to be of little use; its utility becomes apparent once we also have the switching combinators introduced in the next section.
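Using finite lists of updates as a stand-in for signals (our own simplification, purely for illustration), the behaviour of map, scan and count can be sketched as:

```python
def sig_map(f, updates):
    """Pointwise application: stateless, one output update per input update."""
    return [f(a) for a in updates]

def sig_scan(f, acc, updates):
    """Fold over updates, emitting the accumulator after each one
    (acc' = unbox f acc a, in the paper's notation)."""
    out = []
    for a in updates:
        acc = f(acc, a)
        out.append(acc)
    return out

def count(updates):
    """Number of updates seen so far, e.g. the number of key presses."""
    return sig_scan(lambda n, _: n + 1, 0, updates)

print(sig_scan(lambda acc, a: acc + a, 0, [1, 2, 3]))  # running sums: [1, 3, 6]
print(count(["a", "b", "c"]))                          # [1, 2, 3]
```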

Concurrent Signal Combinators
The combinators we have looked at so far only consume a single signal, and thus have no need to account for the concurrent behaviour of two or more clocks. For example, we may have two input signals produced by two redundant sensors that independently provide a reading we are interested in. To combine these two signals, we can interleave them using the interleave combinator. In this and subsequent definitions, we use the shorthands Left, Right, and Both in the expected way. For example, Left (x, y) is short for in₁(in₁((x, y))), i.e., the case where the left clock ticked first. The interleave combinator uses select in order to wait until at least one of the input signals ticks, and then updates the output signal accordingly. In case both signals tick simultaneously, the provided merging function f is applied. For example, f could simply always use the value of the first signal, or take the average. Note that the produced signal combines the clocks of the input signals, i.e., it ticks whenever either of the input signals ticks.
We might also be interested in the values of both input signals simultaneously, in which case we would use zip. Similarly to interleave, the output signal produced by zip ticks whenever either of the input signals does. However, note that in the Left and Right cases, we copy the previously observed value from the signal that did not tick into the future. Hence, we need both types A and B to be stable.
Finally, we consider the switching of signals. We wish to produce a signal that initially behaves like a given input signal, but switches to a different signal as soon as some event occurs. This idea is implemented in the switch function: the event that represents the future change of the signal is represented as a delayed signal, and as soon as this delayed signal ticks, as in the Right and Both cases, it takes over. With the help of switch we can construct dynamic dataflow graphs, since we replace a given signal with an entirely new signal, which may depend on different input channels and intermediate signals than the original signal. We will demonstrate an example of this dynamic behaviour in the next section. In preparation, we devise a variant of switch in which the new signal depends on the last value of the previous signal: instead of a new signal, this combinator waits for a function that produces the new signal, and we feed this function the last value of the first signal.
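Representing signals as time-stamped update lists (again our own testing stand-in), the behaviour of interleave and zip can be sketched as follows; note how zip copies the previous value of the side that did not tick:

```python
def interleave(merge, xs, ys):
    """Tick whenever either input ticks; apply `merge` on simultaneous ticks."""
    dx, dy = dict(xs), dict(ys)
    out = []
    for t in sorted(dx.keys() | dy.keys()):
        if t in dx and t in dy:
            out.append((t, merge(dx[t], dy[t])))  # Both
        elif t in dx:
            out.append((t, dx[t]))                # Left
        else:
            out.append((t, dy[t]))                # Right
    return out

def zip_sig(xs, ys):
    """Tick on either input; the other side keeps its last observed value.
    (In the calculus both sides start with a value; we use None before the
    first tick.)"""
    dx, dy = dict(xs), dict(ys)
    out, last_x, last_y = [], None, None
    for t in sorted(dx.keys() | dy.keys()):
        last_x = dx.get(t, last_x)
        last_y = dy.get(t, last_y)
        out.append((t, (last_x, last_y)))
    return out

xs, ys = [(0, 1), (2, 3)], [(1, 10), (2, 20)]
print(interleave(lambda a, b: a + b, xs, ys))  # [(0, 1), (1, 10), (2, 23)]
print(zip_sig(xs, ys))  # [(0, (1, None)), (1, (1, 10)), (2, (3, 20))]
```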

A Simple GUI Example
To demonstrate how to use our signal combinators, we consider a very simple example of a GUI application. Our goal is to write a reactive program with two output channels that describe the contents of two text fields. To this end, the two output channels are given the type Sig Nat. The number displayed in each text field should be incremented each time the user clicks a button, which is available as an input channel up :p 1 ∈ Δ. However, there is only one 'up' button, and the user can change which text field is affected by the 'up' button using a 'toggle' button, which is available as an input channel toggle :p 1 ∈ Δ.
That means the contents of the first text field are initially described by the count combinator, but switch to the signal described by the const combinator when 'toggle' is pressed. The behaviour of the other text field is reversed: first const, then count. This continuous toggling between behaviours can be concisely described by the toggleSig combinator, where tick = unbox tog. The first argument provides the events that determine when to toggle between the two behaviours, which in turn are given as the next two arguments. In the implementation we use the notation t ; u as a shorthand for a let-binding of t whose result is not used in u. The toggleSig combinator uses switchf to start with the first of the two given signals, but then switches to the second as soon as the toggle tog ticks, using a recursive call that swaps the order of the two signal arguments.
The output channels describing the two text fields can now be implemented by providing the appropriate input signals to toggleSig. Note how the dataflow graph changes during the execution of the program, and how that change is reflected in the clocks associated with the output channels: the output channel for the first text field initially has the clock {up, toggle}, as it must both count the number of times the 'up' button is clicked and change its behaviour in reaction to the 'toggle' button being clicked. Once the 'toggle' button has been clicked, the clock of this output channel changes to {toggle}, as it now ignores the 'up' button. We will examine the run-time behaviour of this example in more detail in section 4.3.
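A direct simulation of the two text fields (our own sketch, abstracting away the combinators) shows the toggling behaviour and motivates the changing clocks:

```python
def run(events):
    """Replay a list of 'up'/'toggle' clicks; return the two field contents
    after each event. The inactive field would have the clock {toggle} only."""
    field1, field2, active = 0, 0, 1
    out = []
    for e in events:
        if e == "toggle":
            active = 2 if active == 1 else 1
        elif e == "up" and active == 1:
            field1 += 1
        elif e == "up":
            field2 += 1
        out.append((field1, field2))
    return out

print(run(["up", "up", "toggle", "up", "toggle", "up"]))
# [(1, 0), (2, 0), (2, 0), (2, 1), (2, 1), (3, 1)]
```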

Integral and Derivative
Buffered input channels can be used to represent input signals that change at discrete points in time, but whose current value can be accessed at any time. For example, given a buffered push channel κ :bp A ∈ Δ, we can construct a corresponding signal using sigAwait from section 3.1. To illustrate what we can do with such input signals, we assume that Async RaTT has a stable type Float together with the typical operations on floating-point numbers. Figure 4 gives the definition of two signal combinators that each take a floating-point-valued signal and produce the integral and the derivative of that signal, respectively. To this end, we assume a buffered push channel sample :bp Float ∈ Δ that produces a new floating-point number at some fixed interval (e.g., 10 times per second). This number is the number of seconds since the last update on the channel, e.g., 0.1 if sample ticks 10 times per second.
The integral combinator produces the integral of a given signal, starting from a given constant that is provided as the first argument. Its implementation uses a simple approximation that samples the value of the underlying signal each time the sample channel produces a value, and adds the area of the rectangle formed by the value of the signal and the time that has passed since the last sampling.
The first equation of the definition is an optimisation and could be omitted. It says that if the current value of the underlying signal is 0, we simply wait until the underlying signal is updated, since the value of the integral will not change until the underlying signal has a non-zero value. Hence, we do not have to sample every time the sample channel ticks.
Similarly to integral, we can implement a function derivative that, given a floating-point-valued signal, produces its derivative. Like the integral function, derivative samples the underlying signal every time sample ticks. To do so it uses the auxiliary function der, which takes two additional arguments: the current value of the derivative and the value of the underlying signal at the time of the most recent input on the sample channel. Similarly to integral, the first line of der performs an optimisation: if the computed value of the derivative is 0, the sampling pauses until the underlying signal is updated. As soon as it is, we pretend that sample ticked, to provide a timely update of the derivative.
These two combinators can be easily generalised from floating-point values to any vector space.This can then be used to describe complex behaviours in reaction to multidimensional sensor data.
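The sampling scheme underlying both combinators can be sketched numerically (our own stand-in: each entry is a pair of the elapsed time reported by the assumed sample channel and the signal's value at that moment):

```python
def integral(c, samples):
    """Rectangle-rule accumulation, starting from the constant c."""
    acc, out = c, []
    for dt, v in samples:
        acc += dt * v
        out.append(acc)
    return out

def derivative(samples):
    """Finite differences of the sampled values (0.0 before the second sample)."""
    out, prev = [], None
    for dt, v in samples:
        out.append(0.0 if prev is None else (v - prev) / dt)
        prev = v
    return out

samples = [(0.1, 1.0), (0.1, 1.0), (0.1, 2.0)]
print(integral(0.0, samples))   # [0.1, 0.2, 0.4]
print(derivative(samples))      # [0.0, 0.0, 10.0]
```

The optimisations of the paper's definitions (pausing while the value, respectively the derivative, is 0) are omitted here.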

Elaboration of Surface Syntax into Core Calculus
To illustrate how the surface language elaborates into the Async RaTT core calculus, reconsider the definition of map, which elaborates to the following term in plain Async RaTT: Recall that :: is a shorthand for into (·, ·). Pattern matching is translated into the corresponding elimination forms: out for recursive types, projections for product types, and case for sum types. The recursion syntax (map occurs in the body of its own definition) is translated to a fixed point fix, so that the recursive occurrence of map is replaced by adv. Hence, recursive calls must always occur in the scope of a delay, which is the case in the definition of map, as the recursive call appears in the scope of a delay. Moreover, we elide the subscript cl (xs) of delay, since it can be uniquely inferred from the fact that the term adv xs occurs in the scope of the delay.
In addition, we make use of top-level definitions like map and scan, which may be used in any context later on. For example, scan is used in the definition of scanAwait in the scope of a delay. We can think of these top-level definitions as being implicitly boxed when defined and unboxed when used later on. That is, these definitions are translated as follows to the core calculus:

OPERATIONAL SEMANTICS AND OPERATIONAL GUARANTEES
We describe the operational semantics of Async RaTT in two stages: We begin in section 4.1 with the evaluation semantics, which describes how Async RaTT terms are evaluated at a particular point in time. Among other things, the evaluation semantics describes the computation that must happen to make updates in reaction to the arrival of new input on a push channel. We then describe in section 4.2 the reactive semantics, which captures the dynamic behaviour of Async RaTT programs over time. The reactive semantics is a machine that waits for new input to arrive, and then computes new values for output channels that depend on the newly arrived input. For the latter, the reactive semantics invokes the evaluation semantics to perform the necessary updating computations. Finally, after demonstrating the operational semantics on an example in section 4.3, we conclude the discussion of the operational semantics in section 4.4 with a precise account of our main technical results about the properties of the operational semantics: productivity, causality, signal independence, and the absence of implicit space leaks. To prove the latter, the evaluation semantics uses a store in which both external inputs and delayed computations are stored. Delayed computations are garbage collected as soon as the data on which they depend has arrived. In this fashion, Async RaTT avoids implicit space leaks by construction, provided we can prove that the operational semantics never gets stuck.

Evaluation Semantics
Figure 5 defines the evaluation semantics as a deterministic big-step operational semantics. We write t; σ ⇓ v; σ′ to denote that, when given a term t, a store σ, and an input buffer ι, the machine computes a value v and a new store σ′. During the computation, the machine may defer computations into the future by storing unevaluated terms in the store to be retrieved and evaluated later. Conversely, the machine may also retrieve terms whose evaluation has been deferred at an earlier time and evaluate them now. In addition, the machine may read the new value of the most recently updated push channel from the store and read the current value of any buffered channel from the input buffer ι.
To facilitate the delay of computations, the syntax of the language features heap locations ℓ, which are not typable in the calculus but may be introduced by the machine during evaluation. A heap location represents a delayed computation that can be resumed once a particular clock has ticked, which indicates that the data the delayed computation is waiting for has arrived. To this end, each heap location ℓ is associated with a clock, denoted cl(ℓ). As soon as the clock cl(ℓ) ticks, the delayed computation represented by ℓ can be resumed by retrieving the unevaluated term stored at heap location ℓ and evaluating it. We write Loc for the set of all heap locations and assume that for each clock Θ, there are countably infinitely many locations ℓ with cl(ℓ) = Θ. A clock Θ is a finite set of push channels drawn from dom (Δ), and it ticks any time one of its channels κ ∈ Θ is updated. For example, assuming an input context Δ for a GUI, the clock {keyPressed, mouseCoord} ticks whenever the user presses a key or moves the mouse. Note that we are now more precise in distinguishing clock expressions, typically denoted θ, and clocks, typically denoted Θ. A closed clock expression θ evaluates to a clock |θ| as follows: Delayed computations reside in a heap η, which is simply a finite mapping from heap locations to terms. Of particular interest are heaps whose locations ℓ ∈ dom (η) each have a clock that contains a given input channel κ: It is safe to evaluate terms stored in such a heap as soon as a new value on the input channel κ has arrived. This intuition is reflected in the representation of stores σ, which can be in one of two forms: a single-heap store, consisting only of a later heap, or a two-heap store, which additionally contains a now heap and a component κ ↦ v. The later heap is used to store delayed computations for later, whereas the terms stored in the now heap are safe to be evaluated now. The κ ↦ v component of a two-heap store indicates that the input channel κ has been updated to the new value v. The machine can thus safely resume computations from the now heap, since the data that its delayed computations were waiting for has arrived.
Let's first consider the semantics for delay: To allocate fresh locations in the store, we assume a function alloc Θ (·), which, given a single-heap or two-heap store, produces a location ℓ ∉ dom (η) with cl(ℓ) = Θ, where η is the later heap of the store. The result is the same store with its later heap extended by the mapping ℓ ↦ t for the delayed term t. Conversely, adv retrieves a previously delayed computation. The typing discipline ensures that adv will only be evaluated in the context of a two-heap store with component κ ↦ v such that ℓ is in the domain of the now heap, and therefore also κ ∈ cl(ℓ). In addition, adv may be applied to wait κ, which simply looks up the new value v of the channel κ from the store.
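The interplay of delay, clocks, and adv can be pictured with a toy Python model of the store (a sketch under our own simplifications: clocks are sets of channel names, delayed terms are thunks, and the formal single-heap/two-heap distinction is collapsed into two dictionaries):

```python
# Toy model of the store used by the evaluation semantics (a sketch, not the
# paper's formal definition). Delaying allocates a fresh location in the later
# heap; advancing is only possible for locations in the now heap, i.e. after a
# channel in the location's clock has ticked.

class Store:
    def __init__(self):
        self.later = {}   # later heap: location -> (clock, thunk)
        self.now = {}     # now heap: safe to evaluate now
        self.fresh = 0

    def delay(self, clock, thunk):
        loc = self.fresh
        self.fresh += 1
        self.later[loc] = (frozenset(clock), thunk)
        return loc

    def tick(self, channel):
        """Input arrived on `channel`: move every location whose clock
        contains the channel into the now heap."""
        for loc, (clock, thunk) in list(self.later.items()):
            if channel in clock:
                self.now[loc] = (clock, thunk)
                del self.later[loc]

    def adv(self, loc):
        clock, thunk = self.now[loc]  # only now-heap locations may be advanced
        return thunk()
```

Calling tick moves exactly the locations whose clock contains the updated channel from the later heap to the now heap; only those may then be advanced.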
The select combinator allows us to interact with two delayed computations simultaneously. Its semantics checks for the three possible contingencies, namely which non-empty subset of the two delayed computations has been triggered. Each of the two argument values v₁ and v₂ is either a heap location or wait, and thus the machine can simply check whether the current input channel is in the clock associated with v₁, with v₂, or with both. Depending on the outcome, the machine advances the corresponding value(s).
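The three contingencies can be sketched as follows (again a simulation, not the formal rule: a delayed computation is modelled as a pair of a clock and a thunk):

```python
# The three contingencies of select (illustration, not the formal rule):
# a delayed computation is a (clock, thunk) pair, and `channel` is the
# push channel that has just ticked.

def select(channel, d1, d2):
    clock1, thunk1 = d1
    clock2, thunk2 = d2
    if channel in clock1 and channel in clock2:
        return ("both", thunk1(), thunk2())   # both fired: advance both
    elif channel in clock1:
        return ("left", thunk1(), d2)         # right stays delayed
    elif channel in clock2:
        return ("right", d1, thunk2())        # left stays delayed
    else:
        raise RuntimeError("select evaluated although neither clock ticked")
```

The last branch cannot occur for well-typed programs, since select is itself a delayed computation whose clock is the union of the two argument clocks.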
Finally, the fixed point combinator fix is evaluated with the help of the combinator dfix, which, similarly to heap locations, is not typable in the calculus but is introduced by the machine. Intuitively speaking, we can think of a value of the form dfix x.t as a shorthand for Λκ.(delay(fix x.t)). That is, dfix x.t is a thunk that, when given a clock θ, produces a delayed computation on θ, which in turn evaluates a fixed point once θ ticks. The action of the adv combinator for ∀ can thus also be interpreted as first providing the clock θ and then advancing the delayed computation delay(fix x.t), which means evaluating fix x.t.

Reactive Semantics
An Async RaTT program interacts with its environment by receiving input from a set of input channels and in return sends output to a set of output channels. The input context Δ describes the available input channels. In addition, we also have an output context Γ out, which only contains variables x : A, where A is a value type. We refer to the variables in Γ out as output channels. Taken together, we call the pair consisting of Δ and Γ out a reactive interface, written Δ ⇒ Γ out.
Given an output interface Γ out = x₁ : A₁, ..., xₙ : Aₙ, we define the type Prod (Γ out) as the product of the corresponding signal types, i.e., Prod (Γ out) = Sig A₁ × ⋯ × Sig Aₙ. The n-ary product type used here can be encoded using the binary product type and the unit type in the standard way. An Async RaTT term t is said to be a reactive program implementing the reactive interface Δ ⇒ Γ out, denoted t : Δ ⇒ Γ out. The operational semantics of a reactive program is described by the machine in Figure 6. The state of the machine can be of two different forms: Initially, the machine is in a state of the form t; ι, where t : Δ ⇒ Γ out is the reactive program and ι is the initial input buffer, which contains the initial values of all buffered input channels. Subsequently, the machine state is a triple N; σ; ι, where N is a sequence of the form x₁ ↦ ℓ₁, ..., xₙ ↦ ℓₙ that maps output channels xᵢ ∈ dom (Γ out) to heap locations. That is, N records for each output channel the location of the delayed computation that will produce the next value of the output channel as soon as it needs updating.
The machine can make three kinds of transitions: an initialisation transition, an input transition, and an output transition, where an output transition produces a sequence O of the form x₁ ↦ v₁, ..., xₘ ↦ vₘ that maps output channels to values. After the initialisation transition, which initialises the values of all output channels, the machine alternates between input transitions, each of which updates the value of an input channel and possibly the input buffer (if the new input is on a buffered channel), and output transitions, each of which provides new values for all those output channels triggered by the immediately preceding input transition. The initialisation transition evaluates the reactive program in the context of the initial input buffer and thereby produces a tuple v₁ :: ℓ₁, ..., vₙ :: ℓₙ whose components vᵢ :: ℓᵢ correspond to the output channels xᵢ : Aᵢ ∈ Γ out. Each vᵢ is the initial value of the output channel xᵢ, and each ℓᵢ points to a delayed computation in the heap that computes future values of xᵢ. An input transition receives an updated value v on the input channel κ and reacts by updating the input buffer (if it already had a value for κ) and transitioning the store to a two-heap store. This splits the heap η into a part η[κ ∈] whose clocks contain κ and a part η[κ ∉] whose clocks do not: in the subsequent output transition, the machine can read from η[κ ∈], i.e., exactly those heap locations from η that were waiting for input from κ, and access the new value v for κ. Finally, the output transition checks for each output channel x with associated heap location ℓ whether it should be advanced, because κ ∈ cl(ℓ), or should remain untouched, because κ ∉ cl(ℓ). Only in the former case is a new output value for x produced. In the end, the output transition performs the desired garbage collection that deletes both the now heap and the input value v for κ. This also means that the updates performed are not only possible (because the required data has arrived), but also necessary (because both the input data and the delayed computations that depend on it will be gone after this output transition of the machine).
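Abstracting from the formal details, the alternation of input and output transitions, including the garbage collection of spent delayed computations, can be simulated as follows (a sketch; the representation of outputs as (clock, step) pairs is our own):

```python
# Toy reactive loop (a simulation of the machine, not its formal definition):
# each output channel maps to a (clock, step) pair, where step returns the new
# output value together with the next delayed computation. On each input, only
# outputs whose clock contains the updated channel are recomputed; the spent
# computation is dropped, mirroring the garbage collection of the now heap.

def run(outputs, inputs):
    """outputs: dict name -> (clock, step); inputs: list of (channel, value)."""
    trace = []
    for chan, val in inputs:
        produced = {}
        for name, (clock, step) in list(outputs.items()):
            if chan in clock:             # output depends on this input
                new_val, nxt = step(val)  # advance the delayed computation
                produced[name] = new_val
                outputs[name] = nxt       # old computation is now garbage
        trace.append(produced)
    return trace

def counter(n):
    """Example output signal: counts occurrences of the "click" channel."""
    def step(_val):
        return n + 1, counter(n + 1)
    return (frozenset({"click"}), step)
```

Running the counter against three inputs updates it only on the two clicks; the unrelated input yields an empty output set, matching the discussion of empty output sets below Theorem 4.1.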

Example
To see the operational semantics in action, we revisit the simple GUI program from section 3.3 and run it on the machine. To this end, we first elaborate the definition of toggleSig into an explicit fixed point term of the core calculus as described in section 3.5: During execution, the machine turns fixed points like toggleSig into delayed fixed points that use dfix instead of fix. We write toggleSig′ for this delayed fixed point, i.e., toggleSig′ is obtained from toggleSig by replacing fix with dfix. We use the same notational convention for other fixed point definitions and write sigAwait′ and scan′ for the dfix versions of sigAwait and scan from section 3.1. We consider the program field1 : Δ ⇒ Γ out with Δ = up : p 1, toggle : p 1, where Γ out consists of a single output channel of type Nat and field1 = toggleSig 1 2 0 b with b = box (wait toggle). That is, this program describes the behaviour of the text field that is initially in focus and thus reacts to the 'up' button.
For better clarity of the transition steps of the machine, we write the machine's store as just the list of its heap locations, and write the contents of the locations along with their clocks separately underneath. The first step of the machine performs the initialisation that provides the initial value of the output signal: We can see that the next value for the output channel is provided by the delayed computation at location ℓ₁, and since cl(ℓ₁) = {toggle, up}, we know that it will produce a new value as soon as the user clicks either of the two buttons. If the user clicks the 'up' button, we see the following: Finally, note that since the input context Δ contains no buffered input channels, the input buffer remains empty during the entire run of the program.

Main Results
The operational semantics presented above allows us to precisely state the operational guarantees provided by Async RaTT, namely productivity, the absence of implicit space leaks, causality, and signal independence. We address each of them in turn.

Productivity.
Reactive programs t : Δ ⇒ Γ out are productive in the sense that if we feed t with a well-typed initial input buffer and an infinite sequence of well-typed inputs on its input channels, then it will produce an infinite sequence of well-typed outputs on its output channels. Before we can state the productivity property formally, we need to make precise what we mean by well-typed:
• An input value κ ↦ v is well-typed, written ⊢ κ ↦ v : Δ, if κ is an input channel carrying values of some type A according to Δ and ⊢ v : A.
• A set of output values O is well-typed, written ⊢ O : Γ out, if for all x ↦ v ∈ O, we have that x : A ∈ Γ out and ⊢ v : A.
We can now formally state the productivity property as follows: Theorem 4.1 (productivity). Given a reactive program t : Δ ⇒ Γ out, well-typed input values ⊢ κᵢ ↦ vᵢ : Δ for all i ∈ ℕ, and a well-typed initial input buffer ⊢ ι₀ : Δ, there is an infinite transition sequence. While a reactive program will always produce a set of output values Oᵢ₊₁ for each incoming input value κᵢ ↦ vᵢ, this set may be empty. This happens if none of the heap locations associated with output channels depends on the input κᵢ, i.e., if κᵢ ∉ cl(ℓ) for every heap location ℓ that the machine state maps an output channel to. As we will see in Proposition 4.4, this is necessarily the case for inputs κ : b A ∈ Δ that are buffered-only. Note that all output channels are initialised in the initialisation transition. An empty set of output values therefore only means that no output channels need to be updated.

Implicit Space Leaks.

The absence of implicit space leaks is a direct consequence of the productivity property (Theorem 4.1). More precisely, the operational semantics of Async RaTT is formulated in such a way that after each pair of input/output transitions for an input channel κ, all heap locations ℓ that depend on κ, i.e., those with κ ∈ cl(ℓ), are garbage collected and thus do not appear in the resulting store. That is, a delayed computation at location ℓ is only kept in memory until its clock cl(ℓ) ticks. By Theorem 4.1, this aggressive garbage collection strategy is safe: The machine never gets stuck attempting to dereference a garbage collected heap location.

Causality.
In the following, we refer to the transition sequences for a reactive program t obtained by Theorem 4.1 simply as well-typed transition sequences for t. A reactive program is causal if, for any of its well-typed transition sequences, each set of output values Oᵢ depends only on the initial input buffer ι₀ and the previously received input values κⱼ ↦ vⱼ with j < i. To see that this is always the case, we first note that the operational semantics is deterministic in the following sense: Lemma 4.2 (deterministic semantics).
Signal Independence.

From the definition of the reactive semantics we can see that the machine only updates an output channel x : A ∈ Γ out if it depends on the input value κ ↦ v that has just arrived, i.e., if κ is contained in the clock of the heap location associated with x. However, the type system allows us to give two useful static criteria for when this is guaranteed not to happen, so that the output signal x need not (and indeed cannot) be updated.
As alluded to earlier, values received on buffered-only channels will never produce an output: Proposition 4.4 (buffered signal independence). Suppose t : Δ ⇒ Γ out is a reactive program with a well-typed transition sequence. Then Oᵢ₊₁ is empty whenever the input channel κᵢ received in step i is buffered-only, i.e., κᵢ : b A ∈ Δ for some A.
Secondly, the input context used to type a given output signal implementation gives us an upper bound on the push channels that will trigger an update: Theorem 4.5 (push signal independence). Suppose (t, u) : Δ ⇒ Γ out is a reactive program with Γ out = Γ, x : A such that u : Δ′ ⇒ (x : A) is also a reactive program for some Δ′ ⊂ Δ, and consider a well-typed transition sequence for (t, u). Then x ↦ v ∈ Oᵢ₊₁ implies that κᵢ ∈ dom (Δ′). In other words, the output channel x is only updated when inputs in Δ′ are updated.

METATHEORY
In this section, we sketch the proofs of the operational properties presented in section 4.4, namely Theorem 4.1, Proposition 4.4, and Theorem 4.5. All three follow from a more general semantic soundness property. To prove this property, we first devise a semantic model of the Async RaTT calculus in the form of a Kripke logical relation. That is, the model consists of a family of sets ⟦A⟧(w) of closed terms that satisfy the soundness properties we are interested in. This family of sets is indexed by a world w and is defined by induction on the structure of the type A and the world w. The soundness proof is thus reduced to a proof that ⊢Δ t : A implies t ∈ ⟦A⟧(w), which is also known as the fundamental property of the logical relation.

Kripke Logical Relation
The worlds for our logical relation consist of two components: a natural number n and a store σ. The number n allows us to model guarded recursive types via step-indexing [1]. This is achieved by defining ⟦∃A⟧(n + 1, σ) in terms of ⟦A⟧(n, σ′) for some suitable σ′. Since a recursive type Fix α.A unfolds to A[∃(Fix α.A)/α], we can define ⟦Fix α.A⟧(n + 1, σ) in terms of ⟦A⟧(n + 1, σ) and ⟦Fix α.A⟧(n, σ′), which is well-founded since in the former we refer to the smaller type A and in the latter we refer to a smaller step index n.
A key aspect of the operational semantics of Async RaTT is that it stores delayed computations in a store. Hence, in order to capture the semantics of a term t, we have to account for the fact that t may contain heap locations that point into some suitable store σ. Intuitively speaking, the set ⟦A⟧(n, σ) contains those terms that, starting with the store σ, can be executed safely to produce a value of type A. Ultimately, the store index σ enables us to prove that the garbage collection performed by the reactive semantics is indeed sound.
What makes ⟦A⟧(n, σ) a Kripke logical relation is the fact that we have a preorder on worlds such that (n, σ) ≤ (n′, σ′) implies ⟦A⟧(n, σ) ⊆ ⟦A⟧(n′, σ′). We can think of (n′, σ′) as a future world reachable from (n, σ), i.e., it describes how the surrounding context changes as the machine performs computations. There are four different kinds of changes, which we address in turn below: Firstly, time may pass, which means that we have fewer time steps left, i.e., n > n′. Secondly, the machine may perform garbage collection on the store σ, as described by the following garbage collection function: Thirdly, the machine may store delayed computations in σ, which we account for by the order ⊑ on heaps and stores: That is, σ ⊑ σ′ iff σ′ is obtained from σ by storing additional terms.
Finally, the machine may receive an input value κ ↦ v, which is captured by the following order ⊑Δ on stores: That is, in addition to the allocations captured by ⊑, the order ⊑Δ may also introduce an input value κ ↦ v.
Taken together, we can define the Kripke preorder on worlds as follows: Note that this preorder does not include garbage collection, since the machine performs garbage collection only at certain points of the execution, namely at the end of an output transition.
Finally, before we can give the definition of the Kripke logical relation, we need to semantically capture the notion of input independence that is needed both for the operational semantics of select and for the signal independence properties (Proposition 4.4 and Theorem 4.5). In essence, we need that a heap location in the world (n + 1, σ) should still be present in the future world (n, σ′) in which we have received an input on a channel that the location's clock does not contain. We achieve this by making the logical relation for ∃ satisfy the following clock independence property, where [σ]Θ restricts σ to heap locations whose clocks are subclocks of Θ: The full definition of the Kripke logical relation is given in Figure 7. In addition to the aspects discussed above, it is parameterised by the input context Δ and distinguishes between the value relation V_Δ(A) and the term relation T_Δ(A). The two relations are defined by well-founded recursion on the lexicographic ordering of the tuple (n, |A|, i), where |A| is the size of A defined below, and i = 1 for the term relation and i = 0 for the value relation.
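The restriction [σ]Θ can be pictured concretely as follows (a Python sketch in which a heap maps locations to pairs of a clock and a term):

```python
# The restriction [heap]_Theta (sketch): clocks are modelled as frozensets of
# push channels, and a heap maps locations to (clock, term) pairs. Only the
# locations whose clock is a subclock of Theta survive the restriction.

def restrict(heap, theta):
    return {loc: term for loc, (clock, term) in heap.items()
            if clock <= theta}   # clock is a subset of Theta, i.e. a subclock
```

A location waiting only on channels in Θ is unaffected by an input on any channel outside Θ, which is exactly the intuition behind clock independence.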
Note that in the defining clause for ∃ in Figure 7, we use the shorthand σ(ℓ) for η(ℓ), where η is the later heap of σ. Our goal is to prove the fundamental property, i.e., that ⊢Δ t : A implies t ∈ T_Δ(A)(w) for every world w, by induction on the typing derivation. Therefore, we need to generalise the fundamental property to open terms as well. That means we also need a corresponding logical relation for contexts, which is given at the bottom of Figure 7. The interpretation of the tick in a context is quite technical, but it is essentially determined by the interpretation of ∃ due to the requirement of the tick being left adjoint [7].
Preservation under garbage collection, however, only holds for values and tick-free contexts: Moreover, the clock independence property holds for both the value relation and the context relation: Finally, we obtain the soundness of the language from the following fundamental property of the logical relation: The proof is a standard induction on the typing relation Γ ⊢Δ t : A that makes use of the aforementioned closure properties of the logical relations and is included in Appendix A.

Operational Properties
We close this section by showing how we can use the fundamental property to prove the operational properties presented in section 4.4. To this end, we will sketch the proofs of Theorem 4.1, Proposition 4.4, and Theorem 4.5.

Productivity.
In the following, we assume a fixed reactive interface Δ ⇒ Γ out, for which we define sets of well-formed machine states for the reactive semantics. The following lemma proves that the machine stays inside these sets of states and will only produce well-typed outputs. For the latter, we make use of the fact that v ∈ V_Δ(A)(n, σ) iff ⊢ v : A, for every value type A.

RELATED WORK
Functional reactive programming originates with Elliott and Hudak [12]. The use of modal types for FRP was first suggested by Krishnaswami and Benton [20], and the connection between linear temporal logic and FRP was discovered independently by Jeffrey [16] and Jeltsch [18]. Although some of these calculi have been implemented, they do not offer operational guarantees such as the absence of space leaks proved here. The first such operational guarantees were given by Krishnaswami et al. [21], who describe a modal FRP language using linear types and allocation resources to statically bound the memory used by a reactive program. The simpler, but less precise, idea of using an aggressive garbage collection technique for avoiding space leaks is due to Krishnaswami [19]. Krishnaswami's calculus used a dual context approach to programming with modal types. Bahr et al. [3] recast these results in a Fitch-style modal calculus, the first in the RaTT family. This was later implemented in Haskell with some minor modifications [2].
All the above calculi are based on a global notion of time, which in almost all cases is discrete. In particular, the modal operator for time steps in these calculi refers to the next time step on the global clock. One can of course also understand the step semantics of Async RaTT as operating on a global clock, but in our model each step is associated with an input coming from an input channel, and this allows us to define the delay modality ∃ as a delay on a set of input channels. From the model perspective, ∃A carries some similarities with a guarded recursive type of events built from sums and the synchronous delay modality. This encoding, however, suffers from the efficiency and abstraction problems mentioned in the introduction.
The only asynchronous modal FRP calculus that we are aware of is the widget calculus defined by Graulund et al. [14], which takes a modality ◇ as a type constructor primitive and endows it with a synchronisation primitive similar to select in Async RaTT. However, the programming primitives in the widget calculus are very different from the ones used here. For example, the widget calculus allows an element of ◇A to be decomposed into a time and an element of A at that time, and much programming with ◇ uses this decomposition. There is also no delay type constructor, so ∃ is not expressible: Unlike ∃A, an element of ◇A could give a value of type A already now. Graulund et al. provide a denotational semantics for the widget calculus, but no operational semantics, and no operational guarantees like the ones proved here.
Another approach to avoiding space leaks and non-causal reactive programs is to devise a carefully designed interface for manipulating signals, as in Yampa [23] or FRPNow! [24]. Rhine [5] is a recent refinement of Yampa that annotates signal functions with type-level clocks, which allows the construction of complex dataflow graphs that combine subsystems running at different clock speeds. The typing discipline fixes the clock of each subsystem statically at compile time, since the aim of Rhine is to provide efficient resampling between subsystems. By contrast, the type-level clocks of Async RaTT are existentially quantified, which allows Async RaTT programs to dynamically change the clock of a signal, e.g., by using the switch combinator from section 3.2.
Elliott [13] proposed a push-pull implementation of FRP, where signals (which in the tradition of classic FRP [12] are called behaviours) are updated at discrete time steps (push), but can also be sampled at any time between such updates (pull). We can represent such push-pull signals in Async RaTT using the type Sig (Time → A), i.e., at each tick of the clock we get a new function Time → A that describes the time-varying value of the signal until the next tick of the clock.
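The pull half of this representation can be sketched as follows (a simulation with our own representation of a push-pull signal as a list of (start time, time-function) segments):

```python
# Pulling a push-pull signal between ticks (sketch): the signal is a list of
# (start, f) segments in increasing order of start time; each push replaces
# the time-function, and pulling at time t evaluates the most recent one.

def sample(segments, t):
    current = None
    for start, f in segments:
        if start <= t:
            current = f          # most recent tick at or before t
        else:
            break
    return current(t)
```

Pushing corresponds to appending a new segment at the time of a tick, while pulling at any intermediate time just evaluates the current segment's function.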
Futures, first implemented in MultiLisp [15] and now commonly found in many programming languages under different names (promise, async/await, delay, etc.), provide a powerful abstraction to facilitate communication between concurrent computations. A value of type Future A is the promise to deliver a value of type A at some time in the future. For example, a function to read the contents of a file could immediately return a value of type Future Buffer instead of blocking the caller until the file has been read into a buffer. Async RaTT can provide a similar interface using the type modality ∃, either directly or by defining Future as a guarded recursive type Future A = A + ∃(Future A) to give Future a monadic interface. Since Async RaTT does not require the set of push-only channels to be finite, we could implement a function that takes a filename and returns a result of type Future Buffer simply as a family of channels readFile : p Buffer, indexed by filenames. The machine would monitor delayed computations for clocks containing these channels, initiate reading the corresponding files in parallel, and provide the value of type Buffer on the corresponding channel upon completion of the file reading procedure.
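The step-by-step unfolding of such a guarded Future can be sketched as follows (a simulation; the tagged-tuple representation and the channel names are hypothetical):

```python
# Unfolding a guarded Future (sketch): a future is either ("now", v) or
# ("later", clock, thunk), mirroring Future A = A + ∃(Future A). Each tick of
# a channel in the clock unfolds the future by one step.

def await_future(fut, ticks):
    for chan in ticks:
        if fut[0] == "now":
            break
        _, clock, thunk = fut
        if chan in clock:
            fut = thunk()        # the delayed future has arrived; unfold it
    return fut[1] if fut[0] == "now" else None
```

A future waiting on a hypothetical readFile channel resolves as soon as that channel ticks, while unrelated ticks leave it delayed.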
As mentioned earlier, Krishnaswami et al. [21] used a linear typing discipline to obtain static memory bounds. In addition to such memory bounds, synchronous (dataflow) languages such as Esterel [6], Lustre [8], and Lucid Synchrone [26] even provide bounds on runtime. Despite these strong guarantees, Lucid Synchrone affords a high-level, modular programming style with support for higher-order functions. However, to achieve such static guarantees, synchronous dataflow languages must necessarily enforce strict limits on dynamic behaviour, disallowing both time-varying values of arbitrary types (e.g., we cannot have a stream of streams) and dynamic switching (i.e., no functionality equivalent to the switch combinator). Both Lustre and Lucid Synchrone have a notion of a clock, which is simply a stream of Booleans that indicates at each tick of the global clock whether the local clock ticks as well.

CONCLUSION AND FUTURE WORK
This paper presented Async RaTT, the first modal language for asynchronous FRP with operational guarantees. We showed how the new modal type ∃ for asynchronous delay can be used to annotate the runtime system with dependencies from output channels to input channels, ensuring that outputs are only recomputed when necessary. The examples of the integral and the derivative even show how the programmer can actively influence the update rate of output channels.
The choice of Fitch-style modalities is a question of taste, and we believe that our results could be reproduced in a dual context language. Even though Fitch style uses non-standard operations on contexts, other languages in the RaTT family have been implemented as libraries in Haskell [2]. We therefore believe that Async RaTT, too, can be implemented in Haskell or other functional programming languages, giving programmers access to a combination of features from RaTT and the host programming language.
One aspect missing from Async RaTT is filtering of output channels. For example, it is not possible to write a filter function that only produces output when some condition on the input is met. The best way to model this is to use an output channel of type Maybe A, leaving it to the runtime system to only push values of type A to the consumers of the output channel. This way the filtering is external to the programming language. We see no way to meaningfully extend the runtime model of Async RaTT to internalise it.